If Lost, Please Return
My favorite notebook as a primer on effective altruism, techno-ethics, and the unscalable good
I’ve been thinking a lot about scale.
How some forms of good are loud, fast, optimized.
And others—like returning a lost notebook to a stranger—move quietly, slowly, untracked. There’s that founder advice: “Do the things that don’t scale.”
It hints at the small, deeply human gestures that keep systems humane. The ones that don’t fit in dashboards or OKRs. The AI-defensible, irreplaceable moves I can’t stop writing about: time, attention, curation, taste.
In venture, we talk about these gestures as soft assets. And we whisper that they yield the highest returns.
Still, we live in an age obsessed with optimization. With being effective. Not just in tech. But morally effective.
Effective Altruism and the Optimization of Virtue
Effective altruism begins with a compelling premise:
If you could save a life for $5, why wouldn’t you?
It’s a beautiful question. A moral lever.
What if we could quantify goodness, optimize for impact, and design our careers, donations, and decisions accordingly?
This logic gave us mosquito nets, pandemic modeling, AI safety research. It taught us that doing good should be measured—not just felt.
But somewhere along the way, the metric became the mission.
And the math swallowed the man.
Enter Sam Bankman-Fried: EA’s golden boy turned cautionary tale. He vowed to “earn to give,” to make billions only to give them away. But when the system collapsed—billions lost, ethics optional—it became clear: optimization had overtaken discernment.
It’s tempting to cast him as a villain.
Harder is facing what he reveals: what happens when we moralize scale. When virtue becomes calculus. When theory forgets to touch grass.
SBF wasn’t just a fraud. He was the endgame of unmoored abstraction.
A man who hoarded the fire—and called it generosity because he promised to sell the flame. He believed that ends justify means. That profit justifies harm if the yield is optimized for “later good.”
It’s not just ineffective morality.
It’s disembodied.
Build Fast and Break Things: Effective Accelerationism
A similar abstraction powers effective accelerationism: the belief that if we unleash tech, unchecked, and fast enough, it will save us. That we can code our way out of war, illness, aging, even death.
This is techno-optimism unbound. The belief that speed itself is salvation. That ethics can be patched in after deployment. That wisdom will catch up.
And while there’s merit in momentum—some systems need disruption, not drift—history cautions otherwise. Acceleration tends to extract before it restores. And the Earth remembers what we forget.
Think of the Green Revolution: crop yields soared, but so did soil degradation, groundwater loss, and farmer debt. It solved hunger in the short term and sowed fragility in the long run. Same arc, different actors.
My college roommate is currently reading Marx in the Anthropocene, where Kohei Saito writes that the rupture isn’t just economic, it’s ecological. The “metabolic rift” between capitalist production and planetary life. The soil stripped bare. The commons enclosed. The forest felled in the name of growth.
We speak of AI as if it’s immaterial. But as Kate Crawford reminds us, it runs on lithium mines and water-cooled servers. On rare earths pulled from the Congo. Intelligence has a footprint. Utopia has a supply chain.
We’re not just building systems. We’re externalizing their costs. So we must ask: Are we cultivating? Or strip mining? Are we accelerating toward restoration, or just rebranding extraction?
The Return of the Ethicist
In the background, something is stirring. A quiet uptick in moral inquiry.
Call it the rise of the ethicist: embedded at DeepMind, Open Philanthropy, the Carr Center, the Ethical Tech Project, the Vatican.
Recall my little hyperfixation on computational epistemology?
Not Luddites. Not censors. Stewards. People tasked with asking the questions the algorithms can’t: Should we build this? Who might it harm? What futures are we encoding?
Even Pope Francis has named AI ethics a spiritual priority—protecting not just the systems we build, but the souls they affect.
Even EA is evolving. Less hubris. More governance. More epistemic humility. A deeper awareness that how we do good matters as much as how much.
Unchecked scale, it turns out, is just colonialism with better math.
Iterated Play, Compounding Good
There’s a fallacy in the way we think about anonymity in the digital age. We assume our acts vanish. That no one sees. That the game resets every day.
But what if society isn’t a one-shot game? What if it’s iterated play—a network of remembered acts, slow reputations, unspoken norms?
Game theory suggests cooperation only survives when people expect to meet again. Trust becomes not just possible, but optimal. Kindness becomes strategy.
United Airlines recently updated its boarding system. Try to board before your group, and a loud, annoying beep sounds—a micro-enforcement of fairness (side note, I’m all for this. Make Social Shame Acceptable Again). The iterated game, codified.
Isn’t that what we’re doing—each day a move in an infinite game? Returning notebooks. Choosing not to lie. Saying thank you. Choosing not to cut corners when no one’s watching.
These smallest gestures become, as C.S. Lewis wrote, “strategic points.” Footholds for a moral life. Echoes of a world worth living in.
When Lost in the Sauce
The inside cover of my favorite notebook reads:
“In case of loss, please return to:
As a reward: $ altruism.”
The way the page is structured, it practically begs for something tongue-in-cheek, no? It started as a joke. A little wink at a stranger who might find it—who might read the line, pause, be enchanted by my effable wit, bouncing off the page. But over time, the line began to feel like more than a throwaway. It became a quiet test. A micro-wager on the shape of the world.
It’s not just about misplacing journals. It’s about when we ourselves get lost—especially in the abstractions of goodness. When theory becomes intoxicating. When the calculus feels cleaner than the actual, acted-out care.
It’s easy to get swept up in the logic of optimization—whether in EA, philanthropy, or frontier tech. To believe that the more effective, the more scaled, the more accelerated the impact, the more moral the action. But abstraction without anchoring becomes dangerous.
And so: If lost, please return.
Return to the basics. The unscalable, inefficient, in-person forms of good. The kind that don’t raise your metaphoric GiveWell score or increase your Twitter following. The kind that leave no trace but human memory.
Return to the thing itself. To the now homeless, love-labored journal, and its rightful owner, lost without it.
A small bet on the kindness of strangers. A reminder, maybe to myself, that the world works better when we expect each other to show up again.
“Good and evil both increase at compound interest. That is why the little decisions you and I make every day are of such infinite importance. The smallest good act today is the capture of a strategic point from which, a few months later, you may be able to go on to victories you never dreamed of. An apparently trivial indulgence in lust or anger today is the loss of a ridge or railway line or bridgehead from which the enemy may launch an attack otherwise impossible.”
— C.S. Lewis, Mere Christianity
I used to be gripped by the belief that there’s no such thing as true evil—just misunderstood good intentions. That everyone, deep down, thinks they’re doing what’s right. There’s an abstract tenderness in that worldview. But it is, instead, a deep, deep naivety.
Because the road to hell isn’t paved with malice—it’s paved with abstraction, with the belief that intention is enough. Hannah Arendt called this the “banality of evil.” M. Scott Peck saw evil as the refusal to acknowledge one’s own sin. Jordan Peterson warns of the resentment that disguises itself as justice.
SBF, in this frame, wasn’t just a villain. He was a mirror.
A dear philosopher friend of mine has been musing about an updated Golden Rule: not just “Do unto others as you’d have done to you,” but “Act toward others as if they will act as they ought to.”
If society is an iterated game—a network of overlapping encounters, remembered acts, and slow reputations—then cooperation becomes not just possible, but optimal. Trust becomes a strategy. Kindness compounds.
This is one of the great insights of game theory: cooperation only survives when we expect to see each other again.
Iterated play is what makes morality rational.
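The intuition can be made concrete with a toy version of the classic iterated prisoner’s dilemma—a sketch of my own, not anything from the literature verbatim, using conventional payoff values:

```python
# Toy iterated prisoner's dilemma: cooperation pays only when players meet again.
# Payoffs (conventional values): both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, the betrayed cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Run the game for `rounds` encounters; return cumulative scores."""
    score_a = score_b = 0
    moves_a, moves_b = [], []  # each strategy sees the *other's* past moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(moves_b), strategy_b(moves_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        moves_a.append(move_a)
        moves_b.append(move_b)
    return score_a, score_b

# One-shot game: defection wins outright (0 vs 5).
print(play(tit_for_tat, always_defect, 1))
# Iterated game: mutual cooperation compounds (300 vs 300 over 100 rounds),
# while the defector gets punished into the mutual-defection payoff.
print(play(tit_for_tat, tit_for_tat, 100))
print(play(tit_for_tat, always_defect, 100))
```

In a single encounter, defecting is the dominant move. Over a hundred remembered encounters, the cooperators compound their gains while the defector grinds along near the bottom—kindness literally becomes the winning strategy.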
In a world increasingly governed by algorithms and abstraction, it’s tempting to reduce everything to one move. To distill it all to theory. Optimize the outcome. Maximize the good. Exit the game.
That’s the logic of effective altruism at its most utilitarian, or accelerationism at its most blinding: do the most good, the fastest, and don’t look back.
But goodness doesn’t scale that way. It does, however, iterate. It accrues interest. Goodness, I’m learning, isn’t exponential—it’s compounding. Built one remembered act at a time.
A Different Kind of Profit
So here’s where I land:
Maybe the most radical act in an optimization-obsessed world is to be deeply, inefficiently good.
To tend the garden. To remember the “useless tree” in Zhuangzi—the one that thrives precisely because it’s never cut down. To resist the pressure to quantify our kindness.
Because real good doesn’t scale.
Not love. Not presence. Not the return of a stranger’s notebook on a rainy, inconvenient afternoon.
The kind of abundance I want to build lives in the relational, the remembered, the compounding. Not in exponential curves, but in the quiet rhythm of showing up again.
That’s what I meant, I think, when I wrote: Reward: altruism.
Not a payout. Not a point. Just a wager on the shape of the world. A belief that goodness, if remembered and repeated, becomes pattern. That the return may not show up on a ledger—but it always comes back.