Sunday, March 8, 2026

AI Safety Kills EA

     If you take AI safety seriously and have short timelines, there's not really a reason to do anything in the EA space. Sure, we need some people for diversification purposes (what if we're wrong?), and many of these groups are complementary and can take in a broader array of people and perspectives, but they are somewhat useless. Saving the shrimp may matter only insofar as that mission ends up in the ASI's training data, a footnote on a list of priorities and capabilities far beyond our understanding or control. This conclusion is uncomfortable and distressing, as it means that as time goes on, the circle of impactful individuals will shrink. The sphere of influence over ASI development will only grow smaller until the end, when it becomes nonexistent. Unless we figure out a way to implement broad democratic access, there will be no more EAs, because there will be no "effective" way to do anything that doesn't involve controlling the machine in the first place. There may still be altruists, but if the ASI isn't built as one, there may not be many for long.

AI Positivity

     Things may get very positive, until they get very scary. One of the issues with planning for the future is that rapid AI progress may lead to scientific discovery across a range of areas. Technology often leads to better outcomes for people, in terms of both health and entertainment, so the coming wave of advancement could greatly extend lifespans, reduce suffering, and create amazing content. Perhaps there will be societal disruption and mass unemployment, but we could also rapidly respond to those problems as we blow through eras of technological progress. We may believe that we are on a crazy upward trajectory to the stars. And we may well be, until we aren't. Until power concentration, or AI takeover, or some immense tragedy born of recursive self-improvement puts an end to our happiness or our species. Until the train falls off the tracks, off its previous upward slope to heaven. We should be prepared for this, and be willing to pull the brakes even when everything looks rosy on the way up. Unfortunately, I do not think we will collectively have the wisdom to do so.