Robots + Revolutions
I’m trying a different format this week. As always, share your thoughts with me.
AI + Robots are Here
We get it. We’ve passed the inflection point; AI + robots are here. Techno-optimists promise massive productivity gains; more productivity ultimately creates more jobs and frees humans to pursue “meaningful” work. But when? And at what cost?
Communist Wave + Catastrophe-Driven Innovation
I’ve been connecting nearly everything I’m encountering with Ray Dalio’s How Countries Go Broke. The Trump Administration’s mandate to delever + the imminence of AI and robotics displacing employment = an impending communist movement (as in the era of The Great Depression).
debt cycles + wealth gaps + political polarization → social unrest and major policy shifts
Could mass unemployment trigger a renewed push toward communism—or will new technologies create enough new opportunities in time to avert that?
WWII and Vietnam stand as testaments that crisis breeds ingenuity. But the AI Race is already here… what more could be triggered by mass labor disruption and national debt reconciliation? Again, my take: a communist movement is imminent.
CC: All-In's 2025 Predictions (the year of the robot), Morgan Housel on Huberman
Implications for Post-AI Labor Market
The zeitgeist seems fixated on consulting and research gigs becoming obsolete, but this misses the mark, imo.
Low-wage employment has not gotten sufficient coverage. Service industries are beginning to see partial automation (fast-food robots, AI-driven order systems). Manufacturing (even as we try to bring it onshore, re: Trump) and agriculture (even as we disrupt its labor demographics, re: Trump) face the highest risk.
I really hope we keep a human touch in the last mile: robots/AI handle back-end operations; humans provide the face-to-face interaction, emotional connection, and real-time problem-solving.
AI Will Make us More Human: Service, Hospitality, Teaching, and Caretaking
I get the argument: AI will force humans to be more human. These fields rely on empathy and human nuance. Automation can streamline logistics, but it may never fully replace the emotional resonance of human presence in roles emphasizing soft skills, relationship-building, and high-touch service.
And, as AI removes mundane tasks, people should theoretically have more time for creativity, family, community, and personal projects (maybe we call this The Human Flourishing Dividend?).
These two points are connected: more human = more community. We need a solution to the Loneliness Epidemic (side note: you know how Richard Dawkins told Jordan Peterson he’s drunk on symbols? I think I’m drunk on proper nouns and epithets).
This excites me as it better informs my overarching thesis on the emergence of neotribalism and communal living (CC: my article, The Great Recalibration).
From Individualism to Neotribalism
Okay let’s click into more human = more community. Hyper-individualistic societies may shift to co-living or communal models to share resources and childcare.
I’ve been listening to Dr. Becky Kennedy’s Good Inside podcast, and she’s repeatedly pointed out how the responsibility of child-rearing is becoming increasingly individualistic. We’ve lost trust in our communities, schools, and neighbors to help raise our kids, placing ever more pressure on nuclear caretakers.
Let’s connect this back to employment. I’ve also wondered whether current employment issues (chiefly underemployment and job hopping) stem from our highly individualistic, autonomous ways.
i.e., when the entry-level role requires you to have already held the job before you can get the job, it fundamentally means no one wants to train you. We’re losing the practice of mentorship.
On the other end of the aisle, job hopping (though currently at a record low) has people switching roles in search of meaning or better pay in a fast-changing market.
Implications for Post-AI Skilling
Circling back to my concern with manufacturing and agriculture as priority pain points, I want to think about what the future of black-, blue-, and pink-collar work, and thus skilling, will look like.
Trade schools might pivot toward robotics maintenance, AI-assisted healthcare, or skilled crafts that aren’t easily automated. Undergraduate programs may emphasize interdisciplinary and “soft” skills—critical thinking, creativity, emotional intelligence.
I have tangential thoughts about how schools ought to give kids deeper exposure to various fields rather than trying to immediately label them as left- or right-brained. So while promoting conscientiousness is hugely important to child outcomes, I hope we don’t just become warm and fuzzy teachers. I hope life post-AI creates more scientists.
I’m excited that the pace of AI evolution will require lifelong learning and continuous reskilling. Not just for the opportunity for, and cultural evolution toward, multiple careers, but for what continuous learning means for brain health and longevity.
Will Computational Epistemology get its moment?
This skill merits its own bulleted section. I like the analogy of AI as the new printing press. Just as the Gutenberg printing press disrupted access to information (might I add: and fueled The Protestant Reformation), AI has transformed how knowledge is generated and validated. If we treat AI outputs as truth without scrutiny, we risk misinformation, just as we do with the press.
I’m encouraged by models like o1 and o3 that walk you through their reasoning. Schools deserve the most flak for this, but again, I have faith we’ll move away from “learning what to think” and back to “learning how to think”.
The “Map vs. Territory” Problem: AI models are maps of reality, never fully capturing its complexity (“the territory”). Computational epistemology ensures we rigorously question and interpret AI outputs, demanding transparency, ethics, and accountability. I would like to see more conversations around computational epistemology as a skill, though I foresee this being complicated by our current cultural crossroads, where critiques of truth challenge longstanding moral frameworks. In such a climate, even the goal of “objective” AI outputs can feel fraught, underscoring the need for transparent, ethically grounded approaches to algorithmic decision-making.