I’ll admit it. The title’s facetious. I’m trying out new material. I’m really just getting at how oversaturated my timelines have become with hot takes on AI and cognition, rather than a claim to ultimate truth. Forgive me, I simply aim for the algorithm gods to bless me.
Just this past week, I’ve come across at least five major pieces sounding the alarm about AI’s supposed threat to our mental abilities. The common narrative paints the equation as simple: outsource your writing, researching, and problem-solving to a machine, and your cognitive muscles will atrophy. You’ll stop forming your own intuitions. You’ll lose the ability to decide for yourself. While concern for human agency is valid, the premise is too neat, too fatalistic.
Yes, there will be marginal cases where this pattern holds—just as with any skill, long-term disuse can lead to measurable decline. But that’s a statistical tendency, not an inevitability. It’s the exception, not the predetermined fate.
Enfeeblement is not inevitable. That’s what I’m attempting to get at here. The real question isn’t whether AI will make us stupid. It’s whether we’ll design and use it in ways that erode the core muscles of thought or in ways that deepen and multiply them. And that design choice must be deliberate. Without intentionality, the benefits will compound for those already positioned to leverage them, while the harms will deepen for those with fewer resources, less access, or more vulnerable starting points. As Neil Cybart of Above Avalon warns, there’s a fine line between augmentation and enfeeblement: passive, ambient tools can either sharpen cognition or quietly hollow it out. To see the stakes clearly, you have to understand what thought actually is, where it comes from, and how it can be strengthened… or weakened.
Thought is Relational, Not Solitary
Thought doesn’t begin in a vacuum. It grows out of context: personal history, cultural signals, environmental cues, as well as out of relationships with people, objects, and ideas. Martin Buber’s distinction between I–Thou and I–It is widely recognized in philosophy and theology: in I–It, the other is treated as an object, a resource, a unit of data; in I–Thou, the other is encountered as a presence, met with full attention and mutual recognition. I think one of the deeper issues in this debate—perhaps revealing more about the authors’ own orientations than they realize—is that they tend to engage with AI-generated information as an I–It rather than an I–Thou, treating it as a mere object or output rather than as a living presence to be engaged in dialogue with, wrestled over, and scrutinized.
Jenny Odell first introduced me to Buber in How to Do Nothing when I was in college. Translating his framework to the digital age, I’ve come to believe that how we relate to tools shapes the quality of the thinking they help produce. Treat AI as an It—a vending machine for ready‑made answers—and your thought risks becoming mechanical. Approach it as a Thou—a partner in inquiry—and it can ignite deeper, more situated understanding.
Kyle Harrison puts it another way, riffing on Jim Rohn: “You are the sum of what you think, hear, write, and say.” If we never wrestle with ideas in our own words, we give up the very act that forges understanding. The problem is that context is notoriously hard to articulate, let alone encode. Even if you explain your background and needs to an AI, can it capture the full nuance of your lived situation? And without that nuance, what exactly is the machine reasoning about? Supplying that context remains our job.
Attention is Currency
I found myself captivated by Andrew Huberman’s recent episode on attention restoration in nature. (I’ve been using Granola to create transcripts for me, because I’m sick of the unpredictable hunt for pod transcripts, and I’m simply too impatient to wait a week to get my hands on a legitimate one. As an aside, this is obvious whitespace. I think about this all the time. The puck might really be the solution here. More on digital gardening another time.)
Dr. Marc Berman’s research shows that soft-focus engagement with natural fractals—clouds, leaves, ocean waves—replenishes the cognitive resources we draw on for complex thought. This means that we literally need space to think: mental space, physical space, and time away from screens. Huberman keenly draws the connection to a study on connect-the-dots puzzles, which found that humans perform better at pattern recognition when the visual information is presented with more physical spacing on the page, reducing cognitive crowding and making it easier to discern relationships. The connective tissue here is clear: whether in a puzzle or in life, clutter—visual or mental—obstructs the patterns waiting to be seen.
Amusingly enough, this same theme came up in Cal Newport’s episode this week, where he warns incoming PhD students not to overcommit to activities that crowd out unstructured time. What he calls “graduate student overload syndrome” is just one version of a broader truth: the space you leave in your schedule is the space you leave for original thought. In both Huberman’s and Newport’s framing, the point is the same: without intentional emptiness, even the best tools can’t save you from thinking in loops. Are we afforded this space? Will we design for this space—and ensure that access to it isn’t a privilege reserved for the few?
History Repeats
In a recent Possible podcast episode “Alexis Ohanian on the Future of Online and Offline Communities,” Ohanian offers a tangible model for augmentation, not replacement. He tells the story of Monumental, a startup blending robotics and AI to carve stone. The robots block out the marble—the first 90% of the work—and humans handle the final chisels. This mirrors how Renaissance studios operated: Michelangelo himself may have carved only a fraction of a statue, but his hand and eye gave it its soul.
This isn’t just historical trivia; it’s a design principle: let AI handle the rote and the heavy so humans can do the work that demands taste, judgment, and emotional resonance. Used this way, AI doesn’t erase craft—it expands it, enabling forms that were once impossible.
What Enfeeblement Misses
The real danger isn’t in using AI—it’s in surrendering both the first mile and the last mile of thought: the initial framing of the question and the final act of judgment. Relinquish both, and thinking becomes fungible: interchangeable, stripped of context, and easily replaced. As Cybart warns, the “false promise of 98% accuracy” can lure us into complacency; in critical reasoning, that last 2% still requires human verification. And as Harrison argues in “Controlling Your Own Destiny,” outsourcing too much to external systems, whether other people or algorithms, makes our mental infrastructure fragile and our distinctiveness vanish.
I’m thinking about this less as a personal checklist and more as a principle of systemic design. One that individuals, AI builders, employers, and academic institutions ought to uphold. It starts with humans framing problems in their own terms, with AI widening the field of perspectives they might not have otherwise considered. It requires space for direct engagement with primary sources, so the texture and contradictions of reality can work on the mind. AI can challenge assumptions, but the drafting—the shaping of meaning—remains human. And even if AI polishes for clarity, the final judgment, the act that confers ownership and accountability, must stay with us. This is scaffolding without surrendering the architecture, but it only works if the scaffolding is available to all, not just a select few.
The Atomic World Wins
Ohanian believes that the better the digital world gets, the more we’ll crave what he calls the “atomic world”: campfires, live theater, championship games, the tactile presence of other people. The Great Recalibration. The Neo-Renaissance.
We’ve spent hundreds of thousands of years in physical environments, not in front of screens. AI can make those offline moments richer—augmenting sports, art, or in-person community—but it can’t replace them. He notes that no one wants to watch a robot perform a perfect tennis serve; we want to see a human attempt it, with all the emotion, grit, and imperfection that make it compelling. It’s the same reason we have 70+ noteworthy gladiator films. The stakes are human, the arc is ours, and we return to them again and again because we crave the tension and release of the Hero’s Journey. It’s also the very metastructure of our minds, the way we make sense of the world through story.
Another point we ought not forget: cognition isn’t just in the brain, it’s embodied. The texture of a conversation in a sunlit room, the smell of rain on grass, the weight of a paintbrush in your hand. These are data points no model can accurately reproduce. The more time AI frees for those moments, the stronger the human mind becomes. But this outcome is not guaranteed. Let’s once again make that clear. Without careful, inclusive design, AI could just as easily crowd out those moments for many. It is our moral imperative to be meticulous in our choices.
Closing Thoughts
The question isn’t just “Will AI make us stupid?”—it’s “Will we keep the final chisel in human hands?”
If we get that right, AI can extend rather than diminish our minds. It can grant us more time and broader reach, provided we design and govern it in ways that preserve agency for everyone, not just those already well-equipped to benefit. The reality is that not all communities, or all individuals, begin from the same starting point; some will be more vulnerable to displacement, disconnection, or cognitive atrophy if we abdicate our responsibility.
If GPT-5 tells us anything, it’s that we’re still far from AGI. We may already be in a shallow dip of enfeeblement, expecting miracles from systems we don’t fully understand how to integrate into our daily lives. Our collective literacy around AI (how it works, where it fails, and what it can’t yet do) is still nascent, and that gap breeds both overreliance and misplaced fear. It’s not the first time we’ve done this. In the 1950s, nuclear energy was hailed as a near-utopian breakthrough, promising safe, limitless power under the banner of “Atoms for Peace.” The public imagination raced ahead of the technology’s realities, and only later did society fully reckon with its risks, tradeoffs, and governance challenges. AI feels similar: in the heat of novelty, we mistake potential for inevitability. But as an AI-saturated world becomes ordinary, our understanding will deepen, and with it, our ability to use these tools with discernment. Even the glut of doom-laden op-eds is a good sign to me. It means we’re at least treading with the right caution, even if the answers are still forming.
The future of thought is something we have to actively build: to keep thought rooted in the particular, nourished by relationships, and unmistakably our own. As Cybart frames it, augmentation must be intentional or it risks becoming enfeeblement. And as Harrison reminds us, to think is still to write. Dependency shapes destiny. The promise is real, but so is the peril. Which future we get depends on the choices we make now—and that’s the part too many lose sight of while dwelling on the specter of what might go wrong.
The thing is, these circumstances closely resemble the emergence of social media.

The purpose of social media was to connect people on the one hand and maximize profit on the other.

The purpose of AI is to augment people on the one hand and maximize profit on the other.

Today’s incentives push decision makers toward the financial side, which leaves consumers with a choice: do I get sucked in and let those mental muscles erode, or do I use it intelligently?

Unfortunately, most people lack the self-awareness to even ask themselves that question, and it will fall to people like you to push for warning triggers and to keep steering people toward healthy use of AI.