Me, Myself, and the AI
The good, the bad, and the worrying
Happy New Year!
I’ve been avoiding writing about AI for the same reason everyone else has been yelling about it: the topic is everywhere. But after yet another day of breakfast, lunch, and dinner conversations dominated by it, I figured I should stop hiding and share my own take, focusing on my experience rather than the headlines.
(Note: When I say “AI”, I’m talking specifically about large language models and their interfaces: ChatGPT, Gemini, Claude, Grok, etc. Not the broader sense of AI that has been baked into everything from Snapchat face filters to cameras and phones.)
The Good
I use LLMs every single day. For work: brainstorming, drafting, challenging documents, prepping Q&A, summarizing meetings, extracting action items. For life: researching purchases, practicing languages, getting photography feedback, summarizing news, running financial simulations, even building websites and handy scripts.
It has become a magic wand for people who already know enough to steer it. It is productivity on steroids for competent, critical users who know what they want to achieve but want to achieve it faster. Though I should note: anyone can develop over-reliance or blind trust. Magic always comes with hidden costs.
A motivated, ambitious individual now has leverage that was unimaginable five years ago: anyone with an internet connection and a reasonably smart machine can teach themselves advanced topics, prototype ideas, or research opportunities at a level that simply wasn’t achievable before.
The Bad (and some might be pretty bad)
That said, the downsides are real, frightening, and growing.
First, the noise is exhausting. Everything and everyone is about AI right now: news, dinner parties, LinkedIn posts, Instagram photos, even toilet paper. Strong opinions are everywhere; evidence is far scarcer.
Second, we’ve flooded the world with polished-but-shallow content (slop). Anyone can now produce professional-looking websites, articles, images, videos, or “expert” takes. The signal-to-noise ratio has plummeted. The world is becoming so artificially polished that it’s depressing. I stopped opening LinkedIn because of the amount of junk displayed there. Yes, AI has helped more people create and introduced more diversity, but it has also introduced a flood of low-effort junk that is soulless and meaningless.
Third, over-trust and false confidence. LLMs speak with perfect grammar and zero hesitation, so people treat them as oracles. I’ve seen business plans written in a couple of hours without verifying sources, medical or political advice taken at face value, life decisions influenced by chatbots with no skin in the game. Daily, I spot obvious mistakes from AI that users miss entirely. The Dunning-Kruger effect got rocket fuel.
Fourth, the impact on learning—especially for kids. Students have always sought shortcuts, but AI makes them seamless and undetectable. The deeper problem is that education already over-optimized for task completion rather than deep understanding. Now we’re supercharging that flaw. Learning how to learn, how to reason, how to challenge outputs—these muscles atrophy when you offload the hard parts.
Long term, we risk raising a generation that can generate credentials but struggles to evaluate or originate ideas. What happens when the generation growing up with ChatGPT reaches adulthood without developing the foundational skills to validate AI outputs? When the baseline assumption is “the AI is probably right” because they never built the knowledge to challenge it? This isn’t just about students cheating on essays—it’s about the erosion of critical thinking at scale.
The Future (and I hope it is not so ugly)
I don’t expect world-eating AGI or a sudden singularity. Instead, we’ll see steady, incremental progress: more reliable domain-specific models, better agents, tighter workflow integrations. Useful applications will solve real shortages—healthcare triage, personalized tutoring—without replacing humans entirely.
The hype will eventually fade when diminishing returns kick in (and an AI bubble probably bursts), investors chase the next shiny thing, or a geopolitical crisis steals the spotlight.
We will see serious privacy issues, embedded bias, addiction problems, and a Big Brother moment when companies and governments start monitoring what you do and share.
We will definitely have more deepfakes and personalized propaganda—Cambridge Analytica-style manipulation, but now available to anyone with an API key and a motive. This will accelerate polarization until we find ways to validate outputs with the same speed we generate ideas.
I also see power concentrating in a handful of tech companies, which will raise geopolitical, safety, and antitrust questions. And of course there is the ethical question of whether using AI in weapons is acceptable at all.
What do you think? What have I missed?