Sunday, April 12, 2026

The EA Edge

     I find the EA framework (scale, neglectedness, tractability) perhaps the single most powerful way to think about the world. It's too early to tell, but actually being able to discern well with these criteria is so incredibly useful that it probably has to lead to fame and fortune. I hope to avoid both (especially the former), but many EAs who are smart and diligent seem quite able to build an immense amount of one or both. Maybe it's just the case that backing AI is backing the winning horse, but it's hard to believe in market efficiency once you see the world play out so closely to some people's (Christiano's) predictions.

AI Safety and Violence

     Two days ago, someone threw a molotov cocktail at Sam Altman's home. This is a horrifying development in the state of the world, but not an entirely unexpected one. The United States has seen a string of assassinations and assassination attempts; the most noteworthy of late being the attack on President Trump during the election cycle and the murder of conservative commentator Charlie Kirk. To state it plainly, the shift from targeting political leaders (Trump, etc.) to targeting political activists (Kirk) and the rhetoric of those within the public square is immensely distressing. It was shocking to see people with liberal ideologies celebrating, or at the very least appearing ambivalent about, the attacks on Trump and Kirk, and to see clearly how rhetoric can nullify one's common sensibilities surrounding morality.

    Violence has always been a mainstay of the human condition, but it's weird to see it happen in real-time. The PauseAI movement, which has some pretty sensible arguments behind it, has fallen into the abyss. Twitter advocates single out Anthropic employees to be labeled evil for working for a company advancing the AI frontier. Personal ad-hominem attacks and escalatory rhetoric are the norm on social media, and have certainly been normalized across the pause/stop AI ecosystem. These so-called "advocates", imo, are probably creating the highest amount of negative EV energy I can imagine for the long-term future. It's baffling. Breaking standard social norms in order to score revenge points against AI safety sympathizers, especially when such attacks lack any standard of common human decency, only creates a more polarized and dangerous ecosystem. There will be more molotovs thrown, no doubt, as if there were not already enough pressure on those working on transformative AI.

    I have no general take on Sam Altman, other than that I wish him and his family a free and happy life full of fulfillment and peace. There are no cartoonish villains in my mental model of the AI ecosystem, only a collection of characters with widely different priorities and worldviews. I'm pretty worried about AI developers being lumped in with the "eat-the-rich" anti-elite politics of leftist ideology, since I don't think the comparison is at all adequate and there are some very positive benefits to AI progress (let's not pretend there's certainty on negative outcomes here). I think that pause/stop AI people who believe ASI is extremely dangerous, but can't convince the public to be anti-AI on the merits of those arguments, commit a cardinal sin of stupidity by trying to piggyback off anti-AI sentiment in other domains in order to build a public coalition against AI. This is how movements become poisoned: by utilitarian gaming, lack of focus (Occupy Wall Street), and violence (the burning of a random Wendy's, for what exactly?).

    I've spent countless nights sitting awake, concerned about the rapid advances being made towards machine superintelligence. I have tremendous respect for every leader in the AI space (Sam, Dario, Elon, etc.) who is likely dealing with the same nightmares. This is not a situation any person wants to be in, and the threat of personal assassination does not increase the clarity of one's focus. In fact, it disincentivizes future truth-telling about risks, and successful violence could permanently shatter the AI safety movement entirely.
