Who said that we must build AI to increase future happiness?
August 28, 2024 12:25 AM Subscribe
Some AI evangelists believe that AI will make future humans incredibly happy - because of this, we have a moral obligation to accelerate AI development. The future happiness of billions of people is worth more than any present problems AI may cause. Who is a well-known proponent of this idea?
If I remember correctly, some people have even said that it would be a crime to prevent this hypothetical, perfect future AI from developing. Is this true, and if so, who has said it?
I'm sure that this is an easy question, but googling various combinations of "AI", "happiness" and other keywords just gives me a big pile of hay when all I want is the needle. Thanks for your help!
Longtermism is a keyword that will help you to find proponents of this kind of craziness
posted by rd45 at 12:39 AM on August 28
The new Nate Silver book seems to call this 'effective altruism' (n.b. I've not read it, only reviews of it).
posted by Ardnamurchan at 1:13 AM on August 28
Nick Bostrom has made a career out of peddling this line of horseshit.
Not sure he's actually gone so far as to defend the obviously untestable claim that our virtual, uploaded, cloud-dwelling descendants will be any happier than we are, contenting himself instead with the untestable and morally dubious idea that their sheer numbers would make even a tiny amount of happiness each add up to far more total happiness than exists today.
posted by flabdablet at 2:06 AM on August 28 [4 favorites]
Longtermism is a keyword that will help you to find proponents of this kind of craziness
and TESCREAL is another. Often used as a neologism rather than being explicitly marked as an acronym, as in "these tescreal cretins must be stopped".
Here's Mefi's own Charlie Stross on it: We're sorry we created the Torment Nexus
And since it's inevitable that the name Eliezer Yudkowsky will turn up in this thread, it might as well be my fault.
posted by flabdablet at 2:18 AM on August 28 [6 favorites]
Here's Marc Andreessen sidestepping the happiness argument but ending up with much the same conclusion:
We believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly fire.
We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.
posted by flabdablet at 3:56 AM on August 28 [2 favorites]
Response by poster: Yeah, the horrible Techno-Optimist Manifesto! Thanks flabdablet for reminding me - and thanks to all who have answered so far.
posted by Termite at 4:11 AM on August 28
On the idea that it would be a crime to not support AI development, there's Roko's basilisk.
posted by SaltySalticid at 5:11 AM on August 28 [2 favorites]
Just a clarification on terminology: "effective altruism" is a much broader term, simply the belief that one should "aim to find the best ways to help others" - a definition admittedly vague enough to be meaningless. Although some in the self-described EA community are enthusiastic about AI, others consider it a possible existential threat to humanity. (I'd say: A little from column A, a little from column B.)
posted by Mr.Know-it-some at 6:34 AM on August 28 [2 favorites]
It is a cruelty to introduce Roko’s Basilisk without some sort of extreme neurohazard warning.
posted by jimfl at 7:34 AM on August 28 [1 favorite]
There's also e/acc, "effective accelerationism" (a spin on effective altruism) - another strain of thought popular among certain tech billionaires (Reddit ELI5 link). The label is endorsed by Marc Andreessen. Essentially: "there are no real risks and we must go as fast as possible." This makes even the most AI-optimistic effective altruism types (those who feel concentrating all efforts on AI capability research is a moral obligation because safe AI will save us all) look moderate.
While sometimes posed as opposites, e/acc and the most vocal EAs fundamentally agree that we should expect superpowerful AI in the very near future, and neither concentrates efforts on what they see as comparatively mundane risks like privacy, election manipulation, environmental harm, damage to fundamental institutions of human society, etc. Many if not most prominent EA people also endorse the idea that safe/"aligned" superintelligent AI is the road to maximizing human happiness, so making sure this 'inevitable' superintelligent AI is 'aligned' is a core moral imperative.
posted by lookoutbelow at 7:39 AM on August 28
This, to use an archaic term, is kind of a "pig in a poke" situation. They are asking us to take on faith that the AI they propose to build will in fact be wonderful for all of us, without any evidence that it will; and of course we don't have any evidence that it won't. It's analogous to a religion, _but_ the difference is that computer and AI technology changes at a pace almost faster than we can keep up with, and that pace is accelerating to the point where we _won't_ be able to keep up. Given the pace, it's reasonable to ask for safety measures now, because if we don't, the time will come when any safety measure imagined will be too late.
posted by TimHare at 10:20 AM on August 28
Roko's basilisk is just Calvinism, but with robots instead of God—which I think is part of why there's so much crossover, in my observation, between techno-utopians and evangelicals. Both posit that you need to be good, and visibly show evidence of being good, so future Jesus or future AI doesn't think you're bad and leave you out. See also: Santa and Elf on the Shelf.
posted by limeonaire at 11:57 AM on August 28 [4 favorites]
Response by poster: Thank you all! I have marked this question as resolved.
posted by Termite at 9:04 PM on August 29