A friend texted me recently. She uses AI daily — as a writing tool, a sounding board, even “honestly my therapist, so bad.” But she’s anxious. She’s never used Claude. She’s never programmed with AI. And she’s worried she’s falling behind.
She’s also worried she’s not worried enough. A lot of people think AI will replace jobs, upend institutions, reshape society. But it still hallucinates basic facts. So which is it — existential threat or overhyped autocomplete?
She asked me because I jokingly mentioned that I was a “self-proclaimed AI nerd.” I told her I’d spare her the wall of text and write a blog post instead. So here it is.
You don’t need to get good at AI
This is the core thing I want to say: you don’t need to get good at AI.
You don’t need to memorize prompt engineering techniques. You don’t need to know which model released yesterday or which one benchmarks 2% higher on MMLU. You don’t need to feel guilty for using ChatGPT instead of Claude, or Claude instead of Gemini, or whatever comes next.
AI is not a skill tree you need to min-max. It’s infrastructure.
Think about it this way: do you consider yourself “good at the internet”? Probably not. You just use it. You Google things, you send emails, you watch videos. Some people are extremely good at the internet — SEO experts, network engineers, the person who can find anything on Reddit in 30 seconds. But most people don’t need that level of fluency to benefit from it.
AI is heading in the same direction. It’s becoming the water we swim in, not the scuba certification we need to earn.
What actually matters
If mastery isn’t the goal, what is?
Knowing what to ask. AI can generate code, essays, business plans, legal documents, meal plans, workout routines. But it can’t tell you which of those things you actually need. It can’t tell you if the code it wrote has a subtle security flaw, if the business plan ignores your actual market constraints, or if the legal document would hold up in court.
Knowing when it’s wrong. AI is confidently wrong on a regular basis. It cites papers that don’t exist. It generates code with subtle bugs. It gives medical advice that sounds plausible but isn’t. The critical skill isn’t knowing how to make AI generate something — it’s knowing whether that something is correct.
Having deep domain knowledge. This is the one that matters most. AI democratizes execution, but it doesn’t democratize judgment. A non-technical person can now spin up a web app in an afternoon. But they can’t tell if it’s secure, scalable, or maintainable. They don’t know what they don’t know.
Amazon is learning this the hard way. After laying off tens of thousands of corporate workers, they suffered a string of severe outages — including one that locked shoppers out of checkout for six hours. Internal documents initially flagged “GenAI-assisted changes” as a factor. The company now says it needs more humans in the loop, not fewer. AI can write code, but it takes experience to know if it’s writing the right code. That’s true for software engineering, and it’s true for everything else.
The democratization paradox
AI is genuinely transformative for one reason: it collapses the gap between having an idea and executing on it.
Before, if you wanted a website, you needed a developer or months of learning. Now you describe what you want and get a working prototype in minutes. If you wanted a marketing campaign, you needed a copywriter and a designer. Now you get drafts and visuals instantly.
This is good. More people can build things, test ideas, and solve problems without gatekeepers.
But it also creates a trap: the illusion of competence. When AI generates something that looks right, it’s easy to assume it is right. The people who will thrive aren’t the ones who generate the most AI output. They’re the ones who can evaluate it.
It’s like Wikipedia. When Wikipedia launched, people worried it would replace experts, that crowdsourced knowledge would be unreliable, that students would cite it blindly. Some of that happened. But Wikipedia also made information accessible to billions. Libraries still exist. Experts still matter. But the shape of expertise changed — now it’s less about memorizing facts and more about evaluating sources, synthesizing information, and applying judgment.
AI is doing the same thing to knowledge work.
The keeping-up anxiety isn’t serving you
There’s a whole economy built on making you feel like you’re behind on AI. New models drop weekly. Benchmarks get beaten monthly. Self-styled thought leaders tweet threads about the latest technique you “need to know.” Newsletter subject lines scream about how AI is about to replace some profession you care about.
Most of it is noise.
The underlying technology changes fast, but the use cases change slowly. The fundamentals of how to work with AI — how to prompt effectively, how to verify outputs, how to integrate it into workflows — have been stable for a while. The models get better, but the skills stay the same.
If you learned how to use AI productively six months ago, you’re probably fine. If you haven’t started yet, you’re also fine. The best time to start was six months ago. The second best time is now. But you don’t need to binge every announcement to catch up.
What to do instead
Stop tracking AI news and start applying it to problems you already understand.
If you’re a writer, use it to draft faster and edit better. If you’re a teacher, use it to generate practice problems and explanations. If you’re a manager, use it to analyze data and draft communications. If you’re a developer, use it to scaffold boilerplate and debug faster.
The value isn’t in knowing AI. It’s in knowing your work well enough to use AI on it.
My friend doesn’t need to learn how to program with AI. She needs to keep doing what she’s doing — using it as a writing tool, a thinking partner, a sanity check — and trust that she’s getting value from it. The anxiety comes from comparing herself to people who treat AI as an identity instead of a tool.
Don’t be that person. Be the person who uses the tool well.
The library and the encyclopedia
There’s an analogy I keep coming back to.
Before the internet, if you wanted to know something, you went to the library. You found the right encyclopedia volume. You hoped the information was current. It worked, but it was slow and unevenly distributed.
Then Wikipedia happened, and smartphones. Now you can access most of human knowledge from your pocket, instantly, for free (as long as you pay your phone bill).
Some people lamented this. They said it would make people lazy, that memorization was a virtue, that the library was sacred. And sure, there’s something lost when you don’t wander the stacks and stumble on unexpected books. Libraries are still beautiful, useful places.
But no one serious argues we should go back. The internet and Wikipedia didn’t make us dumber — they changed what being informed looks like. Even though we all carry computers more powerful than the ones that sent astronauts to the Moon in 1969, we still learn the fundamentals, like arithmetic.
Not using AI will soon feel like not using the internet. Not because you’re a Luddite, but because you’re choosing a slower, more limited way to work with information. You can still do it. But you’ll be making your own life harder than it needs to be, just like going to the library to check out an encyclopedia.
So what do I tell my friend?
I tell her she’s fine. I tell her that using AI as a writing tool and a thinking partner is exactly what most people should be doing. I tell her that the gap between her and the “AI nerds” is mostly performative — that a lot of us are just slightly more enthusiastic about trying new buttons, not fundamentally better at extracting value.
I tell her that the people who will benefit most from AI are the ones who know their domain deeply and use AI to amplify that knowledge. Not the ones who treat every model release like a sports draft.
I tell her that AI literacy is not a competition. It’s not a leaderboard. It’s just… literacy. You learn enough to use the tool. You keep using it. You get better over time without trying to “keep up.” It’s just like learning anything in school: you can make it through most of life knowing basic arithmetic, and you aren’t “behind” because you don’t understand derivatives. Learn as much (or as little) as interests you, and tune out the rest.
And then I send her this blog post, which is much longer than the wall of text I promised to spare her. Sorry-not-sorry!