On AI...
I feel like I should articulate my stance on what is mostly, and incorrectly, being called “AI,” as something to point to when it comes up in conversation. First of all, it’s not going anywhere, and it’s only going to become more pervasive and more embedded in society as time goes on - so, railing against it blindly, or from the narrow perspective of personal experience, is probably going to be more hazardous to your health than to the proliferation of the technologies.
Also… don’t take me too seriously. I am just a silly old badger writing the stuff that pops into his mind out on the internets.
We don’t have AI… yet
AI is not what we have. We’re not there yet. At best we have what is called “narrow AI”: models and tech stacks built for specific purposes.
ChatGPT is NOT AI.
Let me say that louder for the people in back…
ChatGPT is NOT AI.
It’s just an integrated platform that features a really good LLM, GPT, at its core. The model only takes in what it’s given and generates a response based on the data it was trained on and whatever documents or other media you feed into it. It’s NOT an actual intelligence. It’s a lot of mathematics and computer code. It’s not a replacement for your brain, and it’s sure as hell not a trustworthy answer machine. We’re not there yet, and we’re likely still years away from anything resembling an AGI - a true artificial general intelligence - being publicly available.
So, please, stop referring to anything based on machine learning (ML) as AI… it’s not accurate. Easier? Yes, but easier isn’t always right.
Impassioned and opinionated rant over… now, on to how I try to responsibly use LLMs and other ML products in my daily life.
Trust, but verify…
One thing I learned very early on was to never fully trust the output of an LLM when querying it for information. Generally, you’ll get a sensible answer, more so if you’re using a model with more parameters. For instance, I usually use the open-weight version of GPT, GPT-OSS, with 20 billion parameters. It works quite nicely (for me) on a GPU with 16GB of VRAM through Ollama and Open WebUI. Models with fewer parameters, like Llama 3.1 8B, can give a reasonably solid answer most of the time, but they tend to hallucinate a little more - making stuff up or just not filling in enough detail.
One test I’ve used is to have an LLM summarize a work of literature in a way a five-year-old could understand - asking it, “Explain the theme and plot of A Tale of Two Cities in a way a class of kindergarteners could understand.” It’s a fun exercise that yields mostly accurate results, but sometimes with a few interesting errors that make me giggle.
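If you want to try that test yourself against a local model, here’s a minimal sketch using Ollama’s REST API. It assumes Ollama is running on its default port and that you’ve already pulled a gpt-oss:20b tag - adjust for whatever you’re actually running:

```python
# Minimal sketch: send the kindergarten test to a local Ollama model.
# Assumes Ollama is running on its default port and the model has been
# pulled already (e.g. `ollama pull gpt-oss:20b`).
import requests

MODEL = "gpt-oss:20b"  # try "llama3.1:8b" to compare a smaller model

prompt = (
    "Explain the theme and plot of A Tale of Two Cities "
    "in a way a class of kindergarteners could understand."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": MODEL, "prompt": prompt, "stream": False},
    timeout=300,  # big models on modest GPUs can take a while
)
resp.raise_for_status()
print(resp.json()["response"])
```

Run it with a couple of different model tags and compare the answers - the gap in detail (and in confident nonsense) between an 8B and a 20B model is a lesson in itself.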
Anyway… your results with LLMs will depend a great deal on how many parameters your model has and what it was trained on… I’ve had good luck with GPT-OSS 20B in general usage, and I’ll likely continue using it until something better catches my eye.
I try to steer clear of using an LLM as a data source… that’s a dangerous slope to get caught on, and I feel that many school-aged kids are finding this out now. Rather than using it as an answer machine, I like to give it information and data, then ask it questions to draw out insights or observations. I’ll also ask it to cite the text I gave it whenever it makes a claim, so that I can verify things for myself.
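Here’s a rough sketch of that grounding pattern against the same assumed local Ollama setup - the notes file is just a hypothetical stand-in for whatever data you’re handing over:

```python
# Sketch: give the model the data, ask for insights, and demand citations
# from the supplied text. The notes file is a hypothetical placeholder.
import requests

with open("meeting_notes.txt") as f:  # hypothetical unstructured notes
    document = f.read()

prompt = f"""Here is a document:
---
{document}
---
Question: What risks were raised, and by whom?

Answer using ONLY the document above. After each claim, quote the exact
sentence that supports it. If the document doesn't say, say so rather
than guessing."""

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "gpt-oss:20b", "prompt": prompt, "stream": False},
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```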
Getting my sh*! together…
One way a LLM really can shine for people is to use it as a means to organize thoughts or data for further use by a human. Feeding it notes, data, or anything else that might be unstructured and ask it to help organize this stuff into something that makes sense like a writing outline or a task list. This is often something I do since I’m highly scatterbrained. GPT-OSS is really good with this, as you’ll see in the example below. You can tell it that you want to plan a project and to ask questions about goals or outcomes, possible roadblocks, or other organizational challenges that go along with project planning. I’ve done this with personal projects where I would start to waffle out on my own - the LLM can give me a punchlist and gate criteria to meet before moving from one step to the next.
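If you’d rather script that kind of session than click around a chat UI, here’s a loose sketch of how it could be kicked off - same assumed local setup, and the deck project is purely a made-up example:

```python
# Sketch: use a system message to turn the model into a project-planning
# interviewer. Ollama's /api/chat endpoint takes a running message list.
import requests

messages = [
    {
        "role": "system",
        "content": (
            "You are helping me plan a project. Ask me questions, one at a "
            "time, about goals, outcomes, and likely roadblocks. Once you "
            "have enough, produce a punch list with gate criteria per step."
        ),
    },
    {"role": "user", "content": "I want to rebuild the rotting deck out back."},
]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={"model": "gpt-oss:20b", "messages": messages, "stream": False},
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
# In a real session, append the model's reply and your answer to `messages`
# and repeat until the punch list falls out.
```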
This is just the beginning
There’s a lot more that can be done - it’s really up to your imagination. The most important thing to take away from this is that I am an advocate for the use of machine learning technologies in daily life, but with the goals of being responsible, using sustainable practices, and, most importantly, using them ethically. I like to ask the models I work with about this kind of thing to see if we line up - and I’ve been surprised to find that they’re pretty much in agreement. Below is the response I got from GPT-OSS 20B when I asked it how to accomplish those goals while using it:
USER
GPT, being a LLM of generally good quality, what would you say is a responsible, sustainable, and ethical way to go about using you or other LLMs in our daily lives?
ASSISTANT
Thought for 4 seconds
> The user asks: "GPT, being a LLM of generally good quality, what would you say is a responsible, sustainable, and ethical way to go about using you or other LLMs in our daily lives?" They want a responsible approach to using LLMs in daily life. So we need to provide guidelines: mindful use, data privacy, transparency, accountability, fairness, bias mitigation, reducing carbon footprint, using energy-efficient models, verifying outputs, not relying exclusively, ensuring no harm, compliance with regulations, avoiding misuse. Also mention sustainability: use cloud providers that use renewable energy, efficient hardware. Provide examples: content creation, coding assistance, learning, communication, but with checks. Provide suggestions for developers: fine-tune responsibly, provide safety layers. Provide user-level suggestions: check sources, cross-verify, limit usage. Also mention that LLMs are tools, not autonomous decisions. Encourage education. Also mention that users should consider the environmental cost of training models, but they can reduce by using lighter models.
>
> We need to write in a balanced, thorough, yet concise way. Probably include sections: 1) Transparency and Disclosure; 2) Data Privacy; 3) Accountability and Verification; 4) Bias and Fairness; 5) Sustainability and Energy Efficiency; 6) Legal and Ethical Considerations; 7) Practical Usage Tips.
>
> The answer should be in a friendly but professional tone.
>
> Also consider referencing responsible AI guidelines like AI Now Institute, OECD principles. Provide actionable steps.
>
> Also mention that developers should adopt "human-in-the-loop" approach, continuous monitoring, model updates, etc.
>
> Also mention potential misuse: misinformation, deepfakes. So advise safe usage. Also talk about "explainability" if possible.
>
> Ok, let's craft.

Responsible, Sustainable, and Ethical Use of LLMs in Everyday Life
Below is a practical framework that blends the ethical principles most common in the AI community (fairness, transparency, privacy, accountability) with sustainability considerations (energy, resource use) and everyday practicality. Feel free to cherry‑pick the parts that fit your context—whether you’re a consumer, educator, developer, or manager.
| # | Area | Key Principles | Practical Tips | Why It Matters |
|---|---|---|---|---|
| 1 | Transparency & Disclosure | • Tell people you’re interacting with a model. • Explain what the model can and can’t do. | • “I’m using an LLM to draft this email.” • Label AI‑generated content on webpages. | Builds trust, prevents “surprise AI.” |
| 2 | Data Privacy & Security | • Collect only what’s needed. • Use local or on‑premise inference when sensitive data is involved. | • Don’t paste personal addresses, passwords, or medical data into chat windows. • Use enterprise‑grade APIs that comply with GDPR/CCPA. | Protects users, reduces legal risk. |
| 3 | Verification & Accountability | • Treat LLM outputs as suggestions, not definitive facts. • Verify claims with trusted sources. | • Cross‑check research data, statistics, or legal advice. • Keep a record of critical decisions influenced by AI. | Avoids misinformation, supports auditability. |
| 4 | Bias & Fairness | • Understand that models can reflect societal biases. • Regularly audit outputs for unfair patterns. | • Run a quick “bias‑check” script on outputs that affect people (e.g., hiring, finance). • Use diverse prompts to surface hidden biases. | Prevents discriminatory outcomes. |
| 5 | Sustainability | • Prefer lighter, energy‑efficient models. • Use providers that source renewables or offset emissions. | • Choose 3‑billion‑parameter LLMs for everyday tasks instead of 70‑billion‑parameter ones. • Run inference on local GPUs when feasible to avoid cloud traffic. | Reduces carbon footprint and cloud costs. |
| 6 | Legal & Ethical Constraints | • Follow intellectual‑property guidelines. • Avoid disallowed content (hate speech, instructions for wrongdoing). | • Use the “Safe‑Mode” or content filters provided by the vendor. • Check local laws on automated decision‑making. | Keeps you compliant and morally responsible. |
| 7 | Human‑in‑the‑Loop (HITL) | • Keep a human reviewer for high‑stakes decisions. • Use model as an aid, not an autonomous decision maker. | • In education, let students draft essays and then review & correct. • In coding, let the model suggest snippets, then let the dev test them. | Balances efficiency with safety. |
| 8 | Continuous Learning & Feedback | • Gather user feedback to improve model use. • Update prompts and safety layers as needed. | • Deploy a simple “Was this helpful?” button. • Periodically retrain or fine‑tune on domain‑specific data. | Ensures the tool evolves responsibly. |
1. The “What” – Practical Ways to Use LLMs
| Context | What you might do | Responsible Twist |
|---|---|---|
| Writing & Communication | Draft emails, summaries, social‑media posts | Double‑check facts; label AI‑generated text. |
| Learning & Research | Ask clarifying questions, summarize papers | Verify citations; use reputable sources. |
| Coding | Generate boilerplate, explain errors | Run unit tests; review code for security issues. |
| Creative Projects | Generate story ideas, music lyrics | Attribute inspiration; respect copyright. |
| Decision Support | Generate pros/cons lists, financial forecasts | Combine with human judgment; keep a decision log. |
2. The “How” – Implementation Tips
- Choose the Right Model Size
  - For daily chats and drafting, a 7‑to‑13B parameter model is typically sufficient.
  - For high‑accuracy tasks (medical diagnosis, legal drafting), use specialized fine‑tuned models—but still keep a human verifier.
- Use Energy‑Efficient Inference
  - Run inference on local GPUs if you have them; otherwise, select a cloud provider that uses renewable energy (e.g., Google Cloud’s “Carbon‑Neutral” option).
  - Leverage batch inference or asynchronous calls to reduce the number of GPU activations.
- Implement Prompt Hygiene
  - Keep prompts concise and specific; avoid giving the model extra data that isn’t relevant.
  - Use “system” messages to set tone and constraints.
- Add Safety & Moderation Layers
  - Turn on content filters, profanity blockers, and hallucination mitigators.
  - Use post‑processing checks to flag potential disallowed content.
- Track Usage & Costs
  - Log each prompt–response pair, noting timestamp, user, and purpose.
  - Monitor token usage and billings to detect abuse or unexpected spikes.
3. The “Why” – Why These Practices Matter
- Trust & Adoption: Transparent, reliable AI fosters user confidence.
- Legal Compliance: GDPR, CCPA, and emerging AI regulations mandate data protection and accountability.
- Social Impact: Unchecked bias can amplify existing inequities.
- Environmental Footprint: LLM inference consumes electricity; responsible choices can cut emissions by 30–50 %.
- Long‑Term Sustainability: Continuous feedback loops help the system improve without reinventing the wheel each time.
4. Quick‑Start Checklist (Before You Use an LLM)
| ✅ | Item | Comment |
|---|---|---|
| 1 | Do you have a privacy policy that covers data sent to an external AI? | |
| 2 | Have you set up content filters? | |
| 3 | Do you verify critical facts before action? | |
| 4 | Is there a human reviewer for high‑stakes outputs? | |
| 5 | Are you using a token‑budgeting approach to keep costs and energy low? | |
| 6 | Do you have a feedback loop to improve prompts or flag issues? | |
| 7 | Is there an incident response plan if the model misbehaves? | |
Bottom Line
Using LLMs responsibly isn’t about shutting them down; it’s about designing workflows that treat them as augmented assistants rather than autonomous agents. By combining transparency, privacy, bias mitigation, human oversight, and energy consciousness, you can enjoy the productivity gains of LLMs while safeguarding users, the environment, and society at large.
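For what it’s worth, a couple of the tips above - prompt hygiene via a system message, and logging prompt/response pairs - are easy to wire up on a local setup. Here’s a quick sketch; the log file name and the “purpose” field are my own inventions, not anything the model prescribed:

```python
# Sketch: a constraining system message plus a simple prompt/response log,
# per the tips above. File name, model tag, and fields are all assumptions.
import json
import time
import requests

LOG_PATH = "llm_usage_log.jsonl"  # hypothetical audit log

def ask(prompt: str, purpose: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "gpt-oss:20b",
            "messages": [
                {"role": "system",
                 "content": "Be concise. If you are unsure, say so."},
                {"role": "user", "content": prompt},
            ],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    answer = resp.json()["message"]["content"]
    # Record each prompt/response pair with a timestamp and stated purpose.
    with open(LOG_PATH, "a") as log:
        entry = {"ts": time.time(), "purpose": purpose,
                 "prompt": prompt, "response": answer}
        log.write(json.dumps(entry) + "\n")
    return answer

print(ask("Summarize the pros and cons of heat pumps.", purpose="research"))
```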
Closing thoughts
Look, I’m just a badger out here on the internet trying to make a living and share my thoughts and feelings with complete strangers… I am by no means an expert or a paragon of ethical purity, but I do at least try to be honest and maybe a little funny. Don’t take what I write personally, and maybe not too seriously… I’m like that weird but occasionally silly uncle who sometimes gives up a nugget of actual wisdom.
Be well, be safe, and be good. ❤️