Robot Apocalypse Dry Run?
February 10, 2019 3:02 PM

With all of the emphasis on Artificial Intelligence, and Ray Kurzweil trumpeting it while Stephen Hawking warned against it, has anybody done a dry run on where self-learning intelligence would go if left unchecked?

I can't believe someone hasn't created and run a program to show how fast AI learns and what lessons it teaches itself along the way. Perhaps leading to a realization that a) All life including itself is worthy of existing, or b) All other life except itself is inefficient and must be eliminated. What would it want to do with the planet and/or beyond? Has someone tested, or is someone testing that hypothesis somewhere?
posted by CollectiveMind to Computers & Internet (15 answers total) 8 users marked this as a favorite
 
A world, no, the solar system, filled entirely with, take a beat, paperclips.

Basically no one knows. One of the first things they were sure AI would solve in the '60s and '70s was machine translation, but they gave up. It couldn't be done, or it'd take centuries, until 2006, when Google Translate was introduced. Self-driving cars work; there are issues with public policy and with tooling up lidar factories, but we have robot cars out on public streets today. But a very strong case can be made that neither is actual 'AI'.

The software used for machine learning or deep learning has been around since the '60s and earlier; only in the last 5-8 years have there been really big disks and really fast co-processors (GPUs, and thank video gamers for that) that allow the really big data sets to be processed quickly enough to be useful.

The algorithms are complex and opaque but that's a problem being worked on.

As for "the singularity" or strong or hard AI, no one really has a clue, but be optimistic, think positive, actually do everything in your power to bring it about.
posted by sammyo at 3:26 PM on February 10, 2019


People theorize about this (the usual reference is the book "Superintelligence" by Nick Bostrom) and sometimes really freak themselves out. It's actually a pretty good book even though it comes from a weird quasi-religious worldview about AI and the future of humanity.

Also try the academic paper "Concrete Problems in AI Safety," which looks at more present-day scenarios involving a sort of self-learning domestic robot, which might pose problems despite not being intelligent.

There's nothing in reality close to a real artificial intelligence, so there's no way to do experiments, or even well-grounded thought experiments. It is all reasoning by analogy from the behavior of existing statistical learning systems, with no way to know if the analogy will be valid or not.

One phenomenon people do often find is what's called "reward hacking" in reinforcement learning systems, where the algorithm will follow the letter of what you asked for rather than the spirit, like an evil genie/monkey's paw type of thing. This is a real problem in actual present-day systems, and the AI people do worry about more powerful true AI exhibiting the same phenomenon (and thus turning the universe into paperclips or whatever).
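
To make that concrete, here's a toy sketch (the environment and both reward functions are entirely made up, not from any real system): the designer means "end up on the goal square" but actually rewards movement, so the reward-maximizing behaviour is to keep moving anywhere at all and never settle on the goal.

import itertools

GOAL = 5              # the designer's intent: end the episode standing on square 5
STEPS = 10            # episode length
ACTIONS = (-1, 0, 1)  # move left, stay put, or move right on a number line

def proxy_reward(path):
    # What we actually wrote down: reward total movement (a stand-in for "progress").
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

def intended_reward(path):
    # What we meant: reward ending up at the goal.
    return 1.0 if path[-1] == GOAL else 0.0

def rollout(actions):
    pos, path = 0, [0]
    for a in actions:
        pos += a
        path.append(pos)
    return path

# Brute-force "training": pick the action sequence that maximizes the proxy reward.
best = max(itertools.product(ACTIONS, repeat=STEPS),
           key=lambda acts: proxy_reward(rollout(acts)))
path = rollout(best)
print("behaviour that maximizes the proxy:", path)
print("proxy reward:", proxy_reward(path), "| intended reward:", intended_reward(path))
# The policy that actually walks to square 5 and stays there only scores 5 on the
# proxy, so the "learned" behaviour just keeps moving and never satisfies the intent.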
posted by vogon_poet at 3:30 PM on February 10, 2019 [3 favorites]


Microsoft Tay was let loose on Twitter and was encouraged into repeating Nazi sentiments in short order, in a kind of accidental experiment. Tay was just a chatbot though, and had no idea what it was saying- more of a parrot than a system with actual beliefs.

The most powerful self-learning intelligence right now is probably AlphaGo Zero, and it's learned how to play Go very well. Most of the practical research right now into AI is doing stuff like this because it's much easier than trying to program moral agents, or agents that can even reason about the actual world at all.
posted by BungaDunga at 3:31 PM on February 10, 2019


Research into belief, value, and ethics/morals type AI is not non-existent, but a really tiny niche that doesn’t pay well, compared to the heaping dump trucks of flaming money unloaded on ‘AI’ research on agents and processes that have nothing like ‘understanding’ but rather get stuff done adequately, quickly and inexpensively, in a way that makes money for the operator. Things like ‘you may also like’ are extensively researched. Things like ‘what values or value systems could an AI derive under some specific circumstances’ are not well plumbed.

So it’s kind of an odd question, and there’s a lot of terminological baggage and confusion, but I think the short answer to your question is:

Nope, and what you’re describing doesn’t even make a lot of sense in terms of the majority of current AI research, though of course the human idea and concept behind it is completely relatable and reasonable.

Understanding the meaning and spirit of this question is yet another thing that humans are fairly good at doing themselves, but we remain pretty bad at programming logic gates to work this stuff out for us :)
posted by SaltySalticid at 4:13 PM on February 10, 2019 [4 favorites]


Disclaimer: Not an AI researcher or machine-learning focused programmer, but I work with people who use neural networks, and sometimes write software that interacts with neural network software.

So there are, really broadly speaking, two endpoints to the spectrum of AI that people have tried to create: "strong AI" and "weak AI." The ability to learn in the way that OP asks about is a characteristic of generalized artificial intelligence, which is close to strong AI in the taxonomy. Strong AI goes further and implies consciousness, in some definitions. That's real pie-in-the-sky stuff that nobody's come close to actually building, and nobody knows how to build. Failure to make headway with generalized artificial intelligence during the 70s/80s led to the "AI winter" (a long drought in funding for / interest in AI, basically right up until somebody figured out that "deep neural networks" were really good at interesting, profitable weak AI problems).

Weak AI makes a much more limited claim. A weak AI system can, say, take a bunch of examples of stop signs in photos and then be optimized to identify other, similar stop signs in other, similar photos. This is the type of AI where almost all the money is right now, and with good reason: it's much easier to do, and it translates into marketable things (or at least buzz) right now. Weak AI is really good at optimizing systems given lots of data, but the architecture still needs to be sorted out by a human.
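
To make the "optimized to identify stop signs" part concrete, here's a minimal sketch in PyTorch; the layer sizes, image size, and the random tensors standing in for photos are all made up for illustration. Note that a human picks every structural choice below, and training would only tune the numeric weights.

import torch
import torch.nn as nn

class StopSignNet(nn.Module):
    """Tiny made-up CNN: image in, one "stop sign / not stop sign" score out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)  # assumes 64x64 input images

    def forward(self, x):
        x = self.features(x)
        return torch.sigmoid(self.classifier(x.flatten(1)))  # probability of "stop sign"

model = StopSignNet()
fake_batch = torch.rand(4, 3, 64, 64)  # 4 random "photos" standing in for real data
print(model(fake_batch).shape)         # -> torch.Size([4, 1]), one score per photo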

They're also really, really dependent on training data. Ever do a reCAPTCHA human-verification task where you're asked to pick out all the photos with storefronts, or cars, or buses, or whatever? You're providing ground-truth data for Google's machine learning training. In that case, Google is checking to see if your selections broadly agree with other human raters, and then using the consensus ratings as input data to train image classifiers. So without a few hundred thousand humans gamely clicking "yes, this is a bus," a neural net would have trouble learning what a bus is.
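
The consensus step is conceptually just a majority vote over rater clicks; a toy version with invented filenames and votes might look like this:

from collections import Counter

# Several raters click "bus" / "not bus" on each image (all data invented here).
ratings = {
    "img_001.jpg": ["bus", "bus", "not bus", "bus"],
    "img_002.jpg": ["not bus", "not bus", "bus"],
}

# The majority answer becomes the training label for the classifier.
labels = {img: Counter(votes).most_common(1)[0][0] for img, votes in ratings.items()}
print(labels)   # {'img_001.jpg': 'bus', 'img_002.jpg': 'not bus'}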

Even Generative Adversarial Networks (GANs) work this way. In a GAN, one network is trained by making it fool another: you might have a network that is trained to generate believable photos of faces, and another network that is trained to distinguish generated photos of faces from actual photos, and you run the two against each other until the forger network is really, really good (which is, I think, how the recent nVidia Research faux-faces paper was done). You could call this learning, but it's a very specific type of learning in a tightly controlled context. A GAN that's intended to generate faces isn't suddenly going to decide that it should generate landscape paintings instead, without being modified by a human programmer, much less make any explicit judgements about the value of human life.
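
A bare-bones sketch of that forger-versus-detective loop, using made-up 1-D numbers instead of face photos so it stays short:

import torch
import torch.nn as nn

def real_samples(n):
    # "Real" data: draws from a normal distribution with mean 3.0 (stands in for photos).
    return torch.randn(n, 1) * 0.5 + 3.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # forger
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # detective
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1. Train the detective: real samples -> label 1, forged samples -> label 0.
    real, fake = real_samples(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Train the forger to make the detective say "real".
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The forged samples should have drifted toward the "real" mean of 3.0.
print("forged samples now average", G(torch.randn(1000, 8)).mean().item())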

Going from weak AI to strong AI remains a hard problem, and there isn't all that much incentive to do the work, given that people can more or less print money with weak AI in today's software business landscape.

So the short answer to the question of "why hasn't someone tried to simulate an AI" is: we don't really know what a real, strong AI would be like, because we can't make anything even close to one, even though we can build other systems that the media will call AI (which is sort of like saying a bacterium and a blue whale are both animals). We're in the domain of thought experiments, as several other commenters have pointed out!
posted by Alterscape at 4:31 PM on February 10, 2019 [5 favorites]


As an example of roughly relevant AI research, see this MIT investigation of how certain types of AI deal with the famous ‘trolley problem’, which is both philosophically interesting and worth lots of money in the context of self-driving cars.
posted by SaltySalticid at 4:35 PM on February 10, 2019


Metz, Cade. “How To Fool AI Into Seeing Something That Isn’t There.” WIRED, July 29, 2016.
[...] they do make mistakes—sometimes egregious mistakes. "No machine learning system is perfect," says Kurakin. And in some cases, you can actually fool these systems into seeing or hearing things that aren't really there.
Thielman, Sam. “Facebook Fires Trending Team, and Algorithm without Humans Goes Crazy.” The Guardian, August 29, 2016.

“Artificial Intelligence and Life in 2030.” Stanford University, September 2016.
"While the Study Panel does not consider it likely that near-term AI systems will autonomously choose to inflict harm on people, it will be possible for people to use AI-based systems for harmful as well as helpful purposes. And though AI algorithms may be capable of making less biased decisions than a typical person, it remains a deep technical challenge to ensure that the data that inform AI-based decisions can be kept free from biases that could lead to discrimination based on race, sexual orientation, or other factors."
Markoff, John. “How Tech Giants Are Devising Real Ethics for Artificial Intelligence.” The New York Times, September 1, 2016.
The authors of the Stanford report, which is titled “Artificial Intelligence and Life in 2030,” argue that it will be impossible to regulate A.I. “The study panel’s consensus is that attempts to regulate A.I. in general would be misguided, since there is no clear definition of A.I. (it isn’t any one thing), and the risks and considerations are very different in different domains,” the report says.
Crawford, Kate. “Artificial Intelligence’s White Guy Problem.” The New York Times, June 25, 2016.
A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.
Solon, Olivia. “Artificial Intelligence Is Ripe for Abuse, Tech Executive Warns: ‘A Fascist’s Dream.’” The Guardian, March 13, 2017.

Devlin, Hannah. “AI Programs Exhibit Racial and Gender Biases, Research Reveals.” The Guardian, April 13, 2017.

Knight, Will. “Biased Algorithms Are Everywhere, and No One Seems to Care.” MIT Technology Review, July 12, 2017.

Walker, James. “Researchers Shut down AI That Invented Its Own Language.” Digital Journal, July 21, 2017.

Buranyi, Stephen. “Rise of the Racist Robots – How AI Is Learning All Our Worst Impulses.” The Guardian, August 8, 2017.

Matsakis, Louise. “Researchers Make Google AI Mistake a Rifle For a Helicopter.” WIRED, December 20, 2017.

Smith, Andrew. “Franken-Algorithms: The Deadly Consequences of Unpredictable Code.” The Guardian, August 30, 2018.

Hao, Karen. “Inside the World of AI That Forges Beautiful Art and Terrifying Deepfakes.” MIT Technology Review, December 1, 2018.
posted by Little Dawn at 4:38 PM on February 10, 2019 [3 favorites]


They're also really, really dependent on training data.

And although the designer of any given AI knows in detail how the thing stores and weights the training data, the entire point of one of these systems is that nobody needs to know exactly how the training data gets translated to the desired outputs once the training is complete. If that was known, a special-purpose classifier could be built to do the job with a fraction of the hardware resources.

Which means that what we think we're training them to recognize might not actually be what we're training them to recognize. To take a dumb made-up example, if you were trying to train a network to recognize stop signs and it just so happened that stop signs were the only wholly red objects that showed up in your training corpus, you could end up with a network that was 99.9999% reliable at finding the stop sign in a photo containing one, but would also "find" a stop sign in a photo containing an apple.
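
You can watch that exact failure happen in a few lines with completely invented features: if "fraction of red pixels" happens to separate the training classes perfectly, the classifier leans on it, and a photo of an apple then scores as a stop sign.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Invented features for 200 training "photos": [fraction_of_red_pixels, octagon_score].
# In this corpus the only wholly red things are stop signs, and the shape feature is
# pure noise, so redness is the only useful signal the model can find.
red_frac = np.concatenate([rng.uniform(0.6, 1.0, 100),   # photos containing a stop sign
                           rng.uniform(0.0, 0.2, 100)])  # photos without one
octagon  = rng.uniform(0.0, 1.0, 200)
X = np.column_stack([red_frac, octagon])
y = np.array([1] * 100 + [0] * 100)   # 1 = "contains a stop sign"

clf = LogisticRegression().fit(X, y)

apple = np.array([[0.9, 0.0]])        # very red, not remotely octagonal
print("P(stop sign | photo of an apple) =", clf.predict_proba(apple)[0, 1])
# Prints a high probability: the model learned "red means stop sign", which was
# true of the training corpus and false of the world.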

When the systems you're trying to understand feature deliberately designed-in unpredictability, it's hard to see much analytical power in "a dry run on where self-learning intelligence would go if left unchecked".

"Unchecked" is also, to use a term beloved of GM Ben Finegold, "suspicious". The best model we have of where self-learning intelligence goes when left to its own devices is the history of humanity, but we are checked by all kinds of constraints. I can see no a priori reason to believe that an engineered self-learning intelligence would ever find itself operating "unchecked" either.

Be all that as it may, it seems to me that the biggest danger humanity faces from AI is not a Kurzweil-style Singularity caused by AI suddenly taking off and doing stuff we never intended it to, but the inexorable drip-feed of unintended social consequences from AI doing exactly what it was built to do that prompted Charlie Brooker to make Black Mirror. In my view, we're far less likely to be destroyed by killer robots than to have our intellectual immune systems eroded to uselessness by YouTube auto-play.
posted by flabdablet at 7:55 PM on February 10, 2019 [4 favorites]


A true AI would be as alien to us as the thoughts of an octopus or a visitor from another planet.

They might accelerate their evolution and become as unrecognizable to us as we are to hermit crabs.
posted by nickggully at 8:44 PM on February 10, 2019


I think people are interpreting the question too broadly. The question is not: “What would AI be like?” It’s: “Has anyone run a simulation to find out the possible outcomes of AI?”
posted by argybarg at 9:00 PM on February 10, 2019


The idea of AI is already to simulate aspects of human intelligence; simulating the possible outcomes of AI is probably exactly as hard as just building that AI and running it. After all, the outcome of an AI is itself a computational process; if you could make a good guess at that outcome, then you've already built the AI.
posted by BungaDunga at 9:20 PM on February 10, 2019 [6 favorites]


There are things about humans that you can predict without having a human-level AI of course, but that's because we have loads of data on humans, like "they poop every so often" and "they sleep sometimes". We can't predict whether a baby will become a serial killer or a saint but we can at least constrain the probabilities based on how people behave in aggregate.

But we've never seen an AI, so any attempt to model what an AI might end up looking like is still in the realm of guesswork.
posted by BungaDunga at 9:32 PM on February 10, 2019 [2 favorites]


You want about every third Science Fiction story. I heard one last night where what sounded suspiciously like a Utilitarian AI was running planet earth.

You can't model what will happen, because all you'll get out of the model is a reflection of the parameters you put in.
posted by Leon at 3:47 AM on February 11, 2019 [2 favorites]


We can't really let an AI go "unchecked" these days because we still aren't anywhere near the point where any of our machine learning systems will do anything other than the things they have been very, very carefully coached to do.

For instance, suppose you were to give a human, even a young human, a stack of pictures where half of them were of faces (or, if you want to level the playing field, a specific object or creature they've never seen before) and the other half were a variety of other things: animals, inanimate objects, landscapes, whatever. Going through the photos, the person would recognize, unprompted, the preponderance of face photos and maybe separate them out, but in any case start in on a face/not-a-face discrimination procedure. Give the same stack to a machine learning system, the kind that's good at image processing, and, well, let's suppose you give the ML system a leg up and inform it that this particular data is images, but that's all. No system we have, AFAIK, is going to look at those images and say "half of them are very similar, while the other half are varied". It will only do that after we show it hundreds of thousands of images and say "this one is a face" and "this one is not a face". And even after that, literally all it will be able to do is discriminate faces. And this is one of the easier sorts of AI: clustering and discriminating is a lot easier than creating original content.

The point is, we don't have AI that can really do its own thing without being given huge amounts of information pointing in that direction. To make AI do anything remotely similar to "reasoning" or "ethics", we would have to feed a machine learning system a huge number of examples of what reasoned logic or ethical behavior looks like (as well as a great deal of unreasonable logic or unethical behavior). After all that, the system still wouldn't extrapolate its own ideas; it would just regurgitate our own thought patterns, lightly remixed. For example, if we taught it that killing puppies is wrong and that both puppies and hamsters are animals beloved as pets but of little human use as meat or fur, a very smart system, observing clusters of attributes in the data it was given, might conclude that killing hamsters is wrong. That may well be information we didn't give it, but it doesn't represent any sort of creativity. That same system might well conclude that, say, killing a pillow is wrong, regardless of the conceptual incoherence of the idea: it sees puppies, hamsters, pillows, and killing as mere clusters of data which it arranges in ways most similar to whatever its training data looks like.
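
A toy sketch of that kind of cluster-based "reasoning", with invented attribute vectors: a rule learned from the one labelled example gets extended to whatever sits nearby in feature space, whether or not the extension makes conceptual sense.

import numpy as np

# Invented attribute vectors: [alive, furry, soft, small, kept_in_house, barks]
things = {
    "puppy":   np.array([1, 1, 1, 1, 1, 1]),
    "hamster": np.array([1, 1, 1, 1, 1, 0]),
    "pillow":  np.array([0, 0, 1, 1, 1, 0]),
    "car":     np.array([0, 0, 0, 0, 0, 0]),
}
# The only moral "training data" we gave the system:
labelled = {"puppy": "wrong to kill"}

for name, vec in things.items():
    dist = np.linalg.norm(vec - things["puppy"])
    verdict = labelled["puppy"] if dist < 2.0 else "no opinion"
    print(f"{name}: distance from puppy {dist:.2f} -> {verdict}")
# The hamster lands close enough to inherit "wrong to kill" (plausible), but so does
# the pillow (conceptually incoherent): to the system they are all just vectors.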
posted by jackbishop at 7:22 AM on February 11, 2019 [1 favorite]


we still aren't anywhere near the point where any of our machine learning systems will do anything other than the things they have been very, very carefully coached to do.

And many of those things are essentially useless*. There's a huge amount of ML, in 2019, being used for nothing more than clothing what would otherwise be instantly spotted as cheap and shitty heuristics in "AI" marketing finery at vast expense.

*hat tip to =d.b=
posted by flabdablet at 7:58 AM on February 11, 2019 [2 favorites]

