Sunday, March 8, 2026

AI Safety Kills EA

     If you take AI safety seriously and have short timelines, there's not really a reason to do anything in the EA space. Sure, we need some people for diversification purposes (what if we're wrong?), and a lot of these groups are complementary and can take in a broader array of people and perspectives, but they are somewhat useless. Saving the shrimp may matter only insofar as that mission ends up in the ASI's training data, a footnote on a list of priorities and capabilities far beyond our understanding or control. This conclusion is uncomfortable and distressing: it means that as time goes on, the circle of impactful individuals will grow smaller and smaller. The sphere of influence over ASI development will only shrink until the end, and eventually it will become nonexistent. Unless we figure out a way to implement broad democratic access, there will be no more EAs, because there will be no "effective" way to do anything that doesn't involve controlling the machine in the first place. There may still be altruists, but if the ASI isn't built as one, there may not be many for long.
