Sunday, July 20, 2025
Moral Non-realism
There is something particularly disturbing about moral non-realists who believe we should phase out life itself. This is the position of many anti-natalists adjacent to the EA community, who often focus on suffering-focused ethics. I've never been convinced by this "moral non-realism" stuff in general; it just seems like nihilism with extra steps. This idea of preference satisfaction ("I am a utilitarian, and we should do good, but by good I just mean my idea of good and my preferences") is frankly pretty stupid. If you believe that morality is not objective, you are a nihilist. Or a cultural relativist. Or whatever else you want to call yourself, but it basically excludes you from arguing for moral actions. Sure, there are arguments regarding how to act under uncertainty (many of which I have made), but to argue that we should pave over the rainforests requires stronger claims. Arguing that we should prefer a world without any sort of life (because suffering is so bad), especially when you are actually a nihilist, is a particular kind of derangement. And it is obvious that the majority of the world would consider taking actual actions toward this goal evil. To walk this road anyway is to claim that your subjective beliefs (which you believe are subjective) should override the beliefs of others (which you know they believe to be objective). I am not sure what the right word for this is, but it sure sounds sickening.
Friday, July 18, 2025
"Literally Everyone"
Imagine hearing the claim: "literally everyone is going to die if someone builds superintelligent AI." What is your response? Humans have an innate tendency to discount absolute claims, while claims that are directionally similar but less absolute land with much greater impact. If someone says that "ASI is going to kill every human," they are making an extremely strong claim that is easily discounted psychologically. What if the ASI keeps a few thousand people in a zoo? What if it takes over and keeps us as slaves? What if just the top .001% of humans become a permanent ruling class alongside the AIs, and everyone else starves? There are so many scenarios in which ASI catastrophe doesn't result in "literally every human dies," and they are easy for a layperson to imagine. In all of these scenarios, the person making the initial claim is WRONG. Maybe that doesn't matter much, and 99% of people dying is really bad too, and that's the point, but the psychological defense is to assume that this person is wildly overconfident and probably wrong about the entire claim. Plus, the only scenario in which "literally everybody dies," according to the pushers of this rhetoric, usually involves near-instant genocide via technology that hasn't even been invented yet (bioweapons or nanobots), which again invites the response: "well, what if there is a secret bunker somewhere where a few hundred people live, and the AI ignores them because it owns 99.9999% of the world and might want them for research purposes later?" It doesn't matter how far-fetched the scenario is; it's extremely rhetorically ineffective to talk in such absolutes.
What actually matters to people? If you say "there is an extremely high likelihood that the vast majority of humans lose their agency, permanently," that is a strong statement people can work with. We already know how fragile our autonomy is, as most of the things that affect us day to day are not a result of our personal decisions and actions. We have a boss, who has a boss. We get paid in a currency issued by our government. We've made almost none of our own food, clothing, or furniture (note how I threw in "almost"). We could die at any moment if someone pushes the nuclear button. It's not hard to imagine being thrown out of the workforce because ASI doesn't need us, and it's easy to understand the implications of such a world. I think the rationalist/doomer community needs a reality check: while it's cool to move the Overton window, it makes sense to better understand human irrationality in order to craft a more impactful message.
Saturday, July 12, 2025
Negative Utilitarianism
Quick housekeeping: so much of my productive time over the past year and a half was spent on my book (Mind Crime) that I have done essentially no blogging since. This will change! A lot of the book's material came from stream-of-consciousness blogging on this site (and my book- and AI-focused blog), which I should certainly pick back up now. Also, given AI timelines, I have felt a lot of pressure to figure out the best balance of the explore/exploit path. I'm now trying to do 50/50: 50% of my time reading a linear algebra book, reading AI research, novel mind stuff, etc., and 50% of my time thinking through actual strategy, writing, producing in some capacity, or setting up meetings with people.
Now, back to my point. During my book's revision process earlier this year, someone mentioned that my book (Mind Crime) was quite "suffering focused." This is true, although at the time I wasn't sure how much I had been influenced by "suffering-focused ethics," or what exactly that means. Essentially, the people who drive the s-risk discussion are actually negative utilitarians and naturalists, a fact I was previously unaware of. To them, the likes of Brian Tomasik, for example, the concern is not just creating a universe where suffering > well-being, meaning it would potentially be better if nothing existed, or a universe of maximal torment (hell), but a universe where suffering spreads across the cosmos at all. That, in a sense, means it would probably be better for life not to continue at all. They also frame their arguments in an attempt to be practical, which means not stating exactly what they believe, because it's better (utilitarian reasoning) to boil the frog slowly. The conclusions sound so absurd and unpopular, but the ideas may be correct, so there is use in not stating the full "logical conclusion" principles up front.
First off, I hated Better Never to Have Been as a book, as I found it very sloppy, unprofessional, and unconvincing. I am very open to nihilism as a theory, and to anti-natalism through a "malignantly useless" sort of lens, though I am much less sympathetic to anti-natalism from a utilitarian perspective. Let's talk about Brian Tomasik, who is one of the most important contributors to EA philosophy (earning to give, s-risk, etc.) and cofounder of the Center on Long-Term Risk. I've read a lot of his work recently; here is a likely-strawman version of his beliefs: being an animal is bad. Animals suffer a lot, and nature is a horrorshow (true). It would be better if animals and bugs did not exist, because of how much they suffer (unknown). Environmentalism is thus bad; a world made of concrete is better than a world of rainforests because of all the suffering that happens in the wild. Also, eating beef is potentially better than being vegan, because if there were no beef farms, that land would probably be forest, where more suffering would happen. Space exploration would be bad because terraforming could lead to more insects and bugs spreading and suffering on other planets (an s-risk).
Basically, I was under the impression that s-risk was about suffering so great it would be worse than life existing at all. To Brian, we are already in this scenario. No optimized ASI-driven torture is needed; it's already the case that we should wish to push the button that switches this universe to one containing only unconscious rocks. As I saw on Reddit, "that's the problem if you get really into utilitarian harm reduction. The best way to eliminate all suffering is to eliminate all life." Brian is very worried about zooplankton, so he feels guilty about washing his hands, and he tries hard not to harm or squish any bugs around the house. He states that "unbearable torture cannot be 'morally offset' by also bringing enough pleasure into the world." He basically leans into the "rebugnant conclusion," worries about RL algorithms (like those perhaps running simple NPCs in video games), and cares a lot about bugs. One thing I love about reading Brian's work is that he simply follows everything to its logical conclusion. If suffering is really bad, so bad that it's something like 10x worse than well-being, and there is some suffering so bad that not even infinite well-being can make up for it, this is where we end up. If you believe this theory, it seems far more important to wipe out as many animal habitats as possible than to avoid eating a relatively small number of animals by being vegan.
I might start a company that offers Brian-offsets, where I sell negative utilitarians offsets that commit to paving over one rainforest in exchange for cash. You can do bad things as a profit-seeking company, but as long as you are destroying the environment (or paying me to do so), you're actually contributing to the world. Again, I do not think the logic here is bad, or that Brian is unhinged. I think negative utilitarianism is just probably wrong, and thus it's not crazy that these takes seem so weird.