Losing Money Effectively
Sunday, April 12, 2026
AI Safety and Violence
Two days ago, someone threw a molotov cocktail at Sam Altman's home. This is a horrifying development, but not an entirely unexpected one. The United States has seen a string of assassinations and assassination attempts, the most noteworthy of late being the attack on President Trump during the election cycle and the murder of conservative commentator Charlie Kirk. To state it plainly, the shift from targeting political leaders (Trump, etc.) to targeting political activists (Kirk) and the rhetoric of those within the public square is immensely distressing. It was shocking to see people with liberal ideologies celebrating, or at the very least appearing indifferent to, the attacks on Trump and Kirk, and to see clearly how rhetoric can nullify one's common moral sensibilities.
Violence has always been a mainstay of the human condition, but it's weird to see it happen in real time. The PauseAI movement, which has some pretty sensible arguments backing it, has fallen into the abyss. Twitter advocates single out Anthropic employees, labeling them evil for working at a company advancing the AI frontier. Personal ad hominem attacks and escalatory rhetoric are the norm on social media, and have certainly been normalized across the pause/stop AI ecosystem. These so-called "advocates", imo, are probably generating the most negative-EV energy I can imagine for the long-term future. It's baffling. Breaking standard social norms in order to score revenge points against AI safety sympathizers, especially when such attacks lack any standard of common human decency, only creates a more polarized and dangerous ecosystem. There will be more molotovs thrown, no doubt, as if there were not already enough pressure on those working on transformative AI.
I have no general take on Sam Altman, other than that I wish him and his family a free and happy life full of fulfillment and peace. There are no cartoonish villains in my mental model of the AI ecosystem, only a collection of characters with widely different priorities and worldviews. I'm pretty worried about AI developers being lumped in with the "eat-the-rich" anti-elite politics of leftist ideology, since I don't think the comparison is at all apt and there are some very real potential benefits to AI progress (let's not pretend there's certainty on negative outcomes here). I think that pause/stop AI people who believe ASI is extremely dangerous, but can't convince the public to be anti-AI on the merits of those arguments, commit a cardinal sin of stupidity by trying to piggyback on anti-AI sentiment in other domains in order to build a public coalition against AI. This is how movements become poisoned: by utilitarian gaming, by lack of focus (Occupy Wall Street), and by violence (the burning of a random Wendy's, for what exactly?).
I've spent countless nights sitting awake, concerned about the rapid advances being made toward machine superintelligence. I have tremendous respect for every leader in the AI space (Sam, Dario, Elon, etc.) who is likely dealing with the same nightmares. This is not a situation any person wants to be in, and the threat of personal assassination does not increase the clarity of one's focus. In fact, it disincentivizes future truth-telling about risks, and successful violence could shatter the AI safety movement entirely.
Sunday, March 15, 2026
On Writing
Writing clarifies focus and ideas, and is the single best predictor of intelligent decision-making I've seen. Most now-influential AI and EA people, for example, from Sam to Dario to Paul to Carl to Brian to others, had blogs where they just dumped their thoughts (albeit in a better-laid-out format than this very informal blog). I think I should do this way more, and the year-long hiatus I took was probably a pretty dumb move.
Saving Money Isn't Very Useful
Small dollar values don't matter if you're gunning for civilizational change. If you are making a substantial income or doing very productive work, saving money on groceries or nights out doesn't matter much. In an AI-based future where lots of long-term value will be decided immediately, it is likely not worth saving a little extra money to donate to causes that will only be relevant for the next few years (until they're surpassed by the importance of ASI). The headache and lifestyle hit of buying the less expensive canned corn at the grocery store just don't really matter. The big decisions do.
I don't think EAs should be forced to live like hermits (or Jesus), get stuck in the most frugal lifestyle possible, and believe that if they don't do this they are immoral (which, according to standard utilitarianism, they basically are). I think we should just care about the big-ticket items and try to move the biggest levers possible to change the world. Even if this means eating meat or spending money on fancy dinners that could have gone to the less fortunate, I think the most important thing anyone will do with their life is gun for literal transformational change.
Sunday, March 8, 2026
AI Safety Kills EA
If you take AI safety seriously and have short timelines, there's not really a reason to do anything else in the EA space. Sure, we need some people for diversification purposes (what if we're wrong?), and a lot of these groups are complementary and can take in a broader array of people and perspectives, but they are somewhat useless. Saving the shrimp may matter only insofar as this mission ends up in the ASI's training data, a footnote on a list of priorities and capabilities far beyond our understanding or control. This conclusion is uncomfortable and distressing, as it means that as time goes on the circle of impactful individuals will become smaller and smaller. The sphere of influence over ASI development will only shrink until the end, when it becomes nonexistent. Unless we figure out a way to implement broad democratic access, there will be no more EAs, as there will be no "effective" way to do anything that doesn't involve controlling the machine in the first place. There may be altruists, but if the ASI isn't built as one, there may not be many for long.
AI Positivity
Things may get very positive, until they get very scary. One of the issues with planning for the future is that rapid AI progress may lead to scientific discovery across a range of areas. Technology often leads to better outcomes for people, in terms of both health and entertainment, so it could be that the upcoming technological advancement greatly extends lifespans, reduces suffering, and creates amazing content. Perhaps there is societal disruption and mass unemployment, but we could also rapidly respond to those issues as we blow through eras of technological progress. We may believe that we are on a crazy upward trajectory to the stars. And we may well be, until we aren't. Until power concentration, or AI takeover, or some form of immense tragedy born of recursive self-improvement, puts an end to our happiness or our species. Until the train falls off the tracks, off its previous upward slope to heaven. We should be prepared for this, and be willing to pull the brakes even if everything looks rosy on the way up. Unfortunately, I do not think we will collectively have the wisdom to do this.
Sunday, January 18, 2026
The Precipice is Distant
Toby Ord's book, The Precipice, is one of the best books I've read. In it, Toby argues that we are at a particularly important time in human history, where a concentration of x-risks may result in us blowing everything up (nuclear war) or permanently locking in values via superintelligent AI systems. The decisions made over the next hundred years may be extremely important. A common idea in the EA community is that there may be a period of "long reflection" after this initial period: if we don't destroy ourselves or lock in bad values, we could chill for a bit and then strategically decide what the best moves are (taking hundreds of years to deliberate before our next actions).
However, the progress of technology may make this claim entirely irrelevant. Perhaps in 200 years our problems are still those of civilizational importance, but that civilization is about to colonize the galaxy and then the universe. Determining whether China and the US split the universe, or what specific space governance system should be implemented, may be drastically more important than the decisions we could make today. The discovery of novel physics in five hundred years, something crazy like the ability to create false vacuums or access different dimensions (or multiverses), could make those times the "precipice" of human history, where individual actions hold extraordinary weight. We are certainly, in my view, at the most important time period in human history so far. But aside from the consolidation of power possible through ASI, I have no reason to believe this trend will not simply continue upward.
Tuesday, September 30, 2025
Random EA Reflections
Utopia:
If there's only a 5% chance we go extinct, that means there's a 95% chance that human experience lives on. Should we not spend more time ensuring that future goes well? Should we not dedicate more resources to ensuring we get to post-instrumental utopia?
The future:
Let's say someone else is playing a game. The outcome of the game is that there is a 10% chance your parents die, and a 90% chance your parents get to utopia. If you killed the person playing this game, would you be wrong?
Nightmares:
If you are an EA, you believe that conscious suffering matters, including in-the-moment suffering: if you are tortured and your brain is wiped afterward so you have no memory of the event, that is still bad. If you are a shrimp and you suffocate once brought on land, that is bad (possibly). Well, what about nightmares? There are some nightmares I've had that I certainly remember, and I'm fairly positive that in the moment I am facing actual psychic distress. Should a new cause area be to limit the number of nightmares people have, or their intensity?
Breakups:
Breakups are some of the most negatively impactful events in most people's lives. I'd much rather break a bone than get a divorce, and it's not even close. Pain in the moment is hard to compare to the toil of human relationships. In a country where most middle-class families can put food on the table, but almost half of marriages dissolve in divorce, we might be missing some low-hanging fruit.
Magnitude:
EAs aren't usually directionally wrong about things. Sure, they mess up the magnitude. But the direction is usually correct. Animal welfare is a good example of this.
The Repotato Conclusion:
Are plants morally valuable? Is a potato? How many potatoes equal one human life?
Life:
It is very hard to live life outside the Overton Window. It's easy to claim to be an independent free thinker who stands up for their ideas. But when actually faced with public mockery and shame, one realizes how hard life can become.
Digital:
Consciousness is also subject to the anthropic principle: this may be the only sort of universe where consciousness can exist. If so, things like digital consciousness may be more likely.
AI:
We basically want the future ASIs to think humans are utility monsters. That is the control problem.
Simulation:
If you take simulation theory seriously, you think that we are probably digital minds. In which case, you should probably care a lot about how digital minds are treated.
Sunday, July 20, 2025
Moral Non-realism
There is something particularly disturbing about moral non-realists who believe we should phase out life itself. This is the position of many anti-natalists adjacent to the EA community, who often focus on suffering-focused ethics. I've never been convinced by this "moral non-realism" stuff in general; it just seems like nihilism with extra steps. This idea of preference satisfaction ("I am a utilitarian, and we should do good, but by good I just mean my idea of good and my preferences") is frankly pretty stupid. If you believe that morality is not objective, you are a nihilist. Or a cultural relativist. Or whatever else you want to call yourself, but it basically excludes you from arguing for moral actions. Sure, there are arguments regarding how to act under uncertainty (many of which I have made), but to argue that we should pave over the rainforests requires stronger claims. Arguing that we should prefer a world without any sort of life (because suffering is so bad), especially when you are actually a nihilist, is a particular kind of derangement. And it is obvious that the majority of the world would consider taking actual actions toward this goal evil. To walk this road anyway is to claim that your subjective beliefs (which you believe are subjective) should override the beliefs of others (which you know they believe to be objective). I am not sure what the right word for this is, but it sure sounds sickening.
Friday, July 18, 2025
"Literally Everyone"
Saturday, July 12, 2025
Negative Utilitarianism
The EA Edge
I find the EA framework (scale, neglectedness, tractability) perhaps the single most powerful way to think about the world. It's too early to tell, but actually being able to discern well with these criteria is so incredibly useful that it probably has to lead to fame and fortune. I hope to avoid both (especially the former), but many EAs who are smart and diligent seem pretty able to build an immense amount of one or both. Maybe it's just the case that backing AI is backing the winning horse, but it's hard to believe in market efficiency once you see the world play out so closely to some people's (Christiano's) predictions.
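For concreteness, the framework is often reduced to simple multiplication: a cause's rough priority is scale times tractability times neglectedness. Here's a toy sketch in Python of what that looks like. The function itn_score, the hypothetical causes, and every number in it are invented for illustration, not real estimates.

# Toy sketch of the ITN framework: priority ~ scale * tractability * neglectedness.
# Everything here (itn_score, the causes, the numbers) is made up for illustration.

def itn_score(scale: float, tractability: float, neglectedness: float) -> float:
    """Rough cause-priority score as the product of the three ITN factors.

    scale:         how much value is at stake if the problem were fully solved
    tractability:  fraction of the problem solved by doubling resources spent on it
    neglectedness: inverse of the resources currently going into the problem
    """
    return scale * tractability * neglectedness

# Hypothetical causes with made-up numbers.
causes = {
    "hypothetical_cause_a": itn_score(scale=1e9, tractability=0.01, neglectedness=1e-6),
    "hypothetical_cause_b": itn_score(scale=1e7, tractability=0.10, neglectedness=1e-4),
}

# Rank causes by score, highest first.
for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")

The multiplicative form is the whole point: a cause that is huge in scale but already crowded (low neglectedness) can score below a smaller, emptier one.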