Friday, July 18, 2025

"Literally Everyone"

Imagine hearing the claim: "literally everyone is going to die if someone builds superintelligent AI." What is your response? Humans have an innate tendency to discount claims that are absolute, whereas claims that are directionally very similar but less absolute land with much greater impact. If someone says that "ASI is going to kill every human," they are making an extremely strong claim that is easily discounted psychologically. What if the AIs keep a few thousand people in a zoo? What if they take over and keep us as slaves? What if just the top .001% of humans become a permanent ruling class alongside the AIs, and everyone else starves? There are so many scenarios, easy for a layperson to imagine, in which ASI catastrophe doesn't result in "literally every human dies." In all of these scenarios, the person making the initial claim is WRONG. Maybe that doesn't matter much, since 99% of people dying is really bad too and that's the point, but the psychological defense is to assume this person is wildly overconfident and probably wrong about the entire claim in general. Plus, the scenarios in which "literally everybody dies," according to the pushers of this rhetoric, usually involve near-instant genocide via technology that hasn't even been invented yet (bioweapons or nanobots), which again invites the response: "well, what if there is a secret bunker somewhere and a few hundred people survive, and the AI ignores them because it owns 99.9999% of the world and might want them for research purposes later?" It doesn't matter how far-fetched the scenario is; it's extremely ineffective, rhetorically, to talk in such absolutes.

What actually matters to people? If you say "there is an extremely high likelihood that the vast majority of humans lose their agency, permanently," that is a strong statement people can work with. We already know how fragile our autonomy is: most of the things that affect us day to day are not the result of our personal decisions and actions. We have a boss, who has a boss. We get paid in a currency issued by our government. We've made almost none of our food, clothing, or furniture (note how I threw in "almost"). We could die at any moment if someone pushes the nuclear button. It's not hard to imagine being thrown out of the workforce because ASI doesn't need us, and it's easy to understand the implications of such a world. I think the rationalist/doomer community needs a reality check: while it's cool to move the Overton window, it makes sense to better understand human irrationality in order to craft a more impactful message.
"Literally Everyone"
Imagine hearing the claim: "literally everyone is going to die if someone builds superintelligent AI." What is your response? ...
-
Effective Altruism, as a philosophy, is very simple. Basically, the argument is that if you shouldn't do bad in the world, that me...
-
I visited Vietnam recently. A beautiful country, filled with phenomenal food, wonderful people, and a much different political system. W...