The "Rational" Conclusion
The AI Doomers produce their first Molotov cocktail
A 20-year-old threw a Molotov cocktail at Sam Altman's house at 3:45 AM Friday, then walked three miles to OpenAI headquarters and threatened to burn it down. He has been booked on suspicion of attempted murder.
He was not a lone wolf. He was an active member of PauseAI with six community roles. His Discord handle was "Butlerian Jihadist." His Instagram was a feed of doomer content: capability curves captioned "if we do nothing very soon we will die," Venn diagrams placing us at the intersection of The Matrix, Terminator, and Idiocracy. Four months before the attack, he recommended Yudkowsky and Soares' If Anyone Builds It, Everyone Dies to his followers.
His name is Daniel Moreno-Gama.
He had his own Substack. In January he published “AI Existential Risk,” estimating the probability of AI-caused extinction as “nearly certain.” He called the technology “an active threat against anyone who is using it and especially towards the people building it.” He concluded: “We must deal with the threat first and ask questions later.” He wrote a poem imagining the children of AI developers dying, asking their parents why they did nothing. “May Hell be kind to such a vile creature,” he wrote about the builders.
PauseAI has already deleted his messages from their Discord.
For an investing newsletter, I know this is not what most of you are here for. The goal here is to explain where my worldview is coming from, so that the longer-term calls start to make more sense. My ideas behind the “New New Deal” are intended as a direct response to where this is going.
All I am doing here is running their model forward, and connecting the dots.
Here is the framework. It has three moving parts.
Start with certainty. Yudkowsky’s position is that if anyone builds sufficiently intelligent AI, every human being on earth dies. Not probably. Not maybe. Everyone. Your children. His daughter Nina, whom he invokes by name. He published this in TIME. He wrote it in a book called If Anyone Builds It, Everyone Dies. He said we should airstrike data centers, and that the risk of nuclear exchange is preferable to a training run completing.
Then the purity spiral, also known as escalation. Within this community, members compete to demonstrate commitment by raising the stakes. P(doom) numbers climb from 50% to 90% to 99.99999%. The Center for AI Safety's national spokesperson said on camera that the correct response is to "walk to the labs across the country and burn them down." PauseAI activated something called a "Warning Shot Protocol" declaring an AI model "a weapon of mass destruction." One of PauseAI's leaders said an Anthropic researcher "deserves whatever is coming to her." When someone flagged this rhetoric in PauseAI's Discord, the mods deleted the post.
The day before the attack, Nate Soares, Yudkowsky's co-author on the very book the kid recommended, tweeted that Altman was "doing terrible stuff."
Then cheap talk gets tested. Game theorists use the term for signaling that costs the speaker nothing; sooner or later it meets reality. When you make the stakes existential for the human race, you can justify any level of extremism if it lowers the hallowed p(doom). These aren't isolated incidents. They are a series of escalating and mutually reinforcing claims built around an eschatological philosophy that, taken to its conclusion, would accept killing 99% of the world to save the last 1%.
It was only a matter of time before someone took the framework at face value. The kid read the book. He joined the community. He wrote his own manifesto. In a memoir for his community college English class, he described himself as a consequentialist: “I give very little credence to intentions if the results do not match.” He chose “Butlerian Jihadist” as his name. On December 3rd he wrote in PauseAI’s Discord: “We are close to midnight it’s time to actually act.”
Then he acted.
They gave him a trolley problem. One life versus all of humanity. The kid pulled the lever.
There is a final irony that deserves attention. If the doomers truly hold their stated beliefs at their stated confidence levels, they should be more honest about what those beliefs imply. A few weeks before the attack, a journalist asked Yudkowsky: if AI is so dangerous, why aren't you attacking data centers? His answer, relayed by Soares: "If you saw a headline saying I'd done that, would you say, 'wow, AI has been stopped, we're safe'? If not, you already know it wouldn't be effective."
Notice what that answer is not. It is not “because violence is wrong.” It is “because it wouldn’t work yet.” The restraint is strategic, not moral. And the community knows it. The dark undercurrent is an unspoken agreement: the kid’s greatest sin was bad timing.
This is what I mean by intelligence not equaling power, and it is the deepest flaw in the entire doomer worldview.
Yudkowsky’s framework rests on a conflation: a sufficiently intelligent AI will necessarily acquire the power to destroy humanity, because intelligence converts automatically into capability. Most of his followers are not technical. They do not build AI systems or work on alignment engineering. They possess a particular kind of verbal intelligence that lets them construct elaborate arguments about risk, and they have convinced themselves this entitles them to a priestly authority over the technology. They can construct the argument. They cannot build the system.
This isn’t accidental. It’s baked into the foundational texts. Yudkowsky’s Harry Potter and the Methods of Rationality literally models a world where the person who reasons best deserves to override every institution around him. The Sequences build the liturgy: a small caste of correct thinkers, epistemically and morally superior, whose rationality entitles them to govern what the rest of humanity is allowed to build. It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction.
Yudkowsky can distance himself from the kid with the Molotov. But he cannot distance himself from the syllogism. If the builders are going to kill everyone, stopping the builders is self-defense. That is the central claim, stated plainly. The only question was always when someone would take it at face value.
They should stop acting surprised when their own logic shows up at 3:45 AM with a bottle full of gasoline.
Disclaimers
I am not advocating for or against any position on AI safety. I am observing that a framework built on certainty of extinction produces predictable consequences. The suspect is innocent until proven guilty.
These views do not represent those of any investors, clients or affiliates of Rose.