Now, back to my point. During my book's revision process earlier this year, someone mentioned that my book (Mind Crime) was quite "suffering focused." This is true, although at the time I wasn't sure how much I had been influenced by "suffering-focused ethics," or what exactly that term means. Essentially, the people who drive the s-risk discussion are largely negative utilitarians and naturalists, a fact I was previously unaware of. To them, Brian Tomasik for example, the concern is not just creating a universe where suffering outweighs well-being (meaning it would potentially be better if nothing existed at all), or a universe of maximal torment (hell), but a universe where suffering spreads at all. That, in a real sense, means it would probably be better for life not to continue. They also frame their arguments with an eye toward practicality, which means not stating exactly what they believe, because (by utilitarian reasoning) it's better to boil the frog slowly. Their conclusions sound absurd and unpopular, but the underlying ideas may be correct, so there is some use in not stating the full "logical conclusion" principles up front.
First off, I hated Better to Never Have Been as a book; I found it sloppy, unprofessional, and unconvincing. I am very open to nihilism as a theory, and to anti-natalism as a "malignantly useless" sort of lens, though I am much less sympathetic to anti-natalism from a utilitarian perspective. Let's talk about Brian Tomasik, one of the most important contributors to EA philosophy (earning to give, s-risk, etc.) and cofounder of the Center on Long Term Risk. I've read a lot of his work recently; here is a likely strawman version of his beliefs: Being an animal is bad. Animals suffer a lot, and nature is a horrorshow (true). It would be better if animals and bugs did not exist, because of how much they suffer (unknown). Environmentalism is thus bad: a world made of concrete is better than a world of rainforests, because of all the suffering that happens in the wild. Eating beef is potentially better than being vegan, because if there were no beef farms, that land would probably be forest, where more suffering would happen. Space exploration would be bad, because terraforming could lead to insects and bugs spreading and suffering on other planets (an s-risk).
Basically, I was under the impression that s-risk was about suffering so great that it would be worse than life existing at all. To Brian, we are already in that scenario. No optimized ASI-driven torture is needed; it's already the case that we should wish to push the button that switches this universe to one containing only unconscious rocks. As I saw on reddit, "that's the problem if you get really into utilitarian harm reduction. The best way to eliminate all suffering is to eliminate all life." Brian is so worried about zooplankton that he feels guilty about washing his hands, and he tries very hard not to harm or squish any bugs around the house. He states that "unbearable torture cannot be 'morally offset' by also bringing enough pleasure into the world." He basically leans into the "rebugnant conclusion" and worries about the suffering of RL algorithms (like those perhaps running simple NPCs in video games). One thing I love about reading Brian's work is that he simply follows everything to its logical conclusion. If suffering is really bad, so bad that it's something like 10x worse than well-being, and some suffering is so bad that not even infinite well-being can make up for it, this is where we end up. If you believe this theory, it seems far more important to wipe out as many animal habitats as possible than to avoid eating a relatively small number of animals by being vegan.
I might start a company that offers Brian-offsets, selling negative utilitarians a commitment to pave over one rainforest in exchange for cash. You can do bad things as a profit-seeking company, but as long as you're destroying the environment (or paying me to do so), you're actually contributing to the world. Again, I do not think the logic here is bad, or that Brian is unhinged. I just think negative utilitarianism is probably wrong, and thus it's not crazy that these takes seem so weird.