What was going on behind that smiley face, anyway?
December 19, 2011 11:53 AM

I'm trying to get a better understanding for how an AI like GERTY (from the 2009 film Moon) would work under the hood.

For one thing, would he be a true AI, or just exceptionally good at analyzing data to spit out appropriate responses? If he is a true AI, what sort might he be? How might his creators have solved the "Friendly AI" problem or prevented him from spiraling off in a singularity-type scenario, wherein his own capabilities increase at a superhuman speed and catapult him outside of our ability to understand or control?

Online articles or in-thread explanations would be excellent! Although please keep in mind that I'm not a computer programmer or engineer myself, and as such there's only so much technical detail I can digest.
posted by Narrative Priorities to Computers & Internet (18 answers total) 4 users marked this as a favorite
 
What's a "true" AI? We don't exactly have a good working description of consciousness, so it's hard to understand what criteria you're using here.
posted by mr_roboto at 11:57 AM on December 19, 2011


In the old text adventure game A Mind Forever Voyaging, scientists were only able to solve the problem by exposing the AI to real-life painful human coming-of-age experiences.
posted by steinsaltz at 11:59 AM on December 19, 2011


would he be a true AI, or just exceptionally good at analyzing data to spit out appropriate responses?

You might be interested in the arguments surrounding Searle's Chinese Room thought experiment and the notion of the philosophical zombie. In short: if something is exceptionally good at analyzing data to spit out appropriate responses in a manner largely indistinguishable from human reasoning, how can we say it isn't conscious in the same way humans are?
posted by jedicus at 12:06 PM on December 19, 2011 [4 favorites]


just exceptionally good at analyzing data to spit out appropriate responses

How is this different from human intelligence?

In all seriousness (though the above question is semi-serious), you'd have some sort of complex supervised (and eventually, unsupervised) learning algorithm. On an extremely simple scale, the sort of thing you can read about to get a sense of how a computer system would "learn" is the backpropagation network: a neural network with hidden layers, trained by propagating its errors backward through those layers.
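
(To make that slightly less abstract: below is a toy sketch, in Python, of a neural network with one hidden layer learning the XOR function via backpropagation. Every detail here, from the layer sizes to the learning rate, is an arbitrary illustration and not a claim about how GERTY or any real AI would be built.)

import numpy as np

# Toy training set: the XOR function, the classic example of a
# problem that a network needs a hidden layer to learn at all.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass ("backpropagation"): push the error back through
    # each layer and nudge every weight a little bit downhill.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # usually ends up close to [0, 1, 1, 0]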

The way to get artificial intelligence to emulate human intelligence is to allow it to re-engineer its own architecture. The limits on that re-engineering will veer into Asimovian "Three Laws" territory. People more skilled in computer science than me will have to take over from there.
posted by supercres at 12:06 PM on December 19, 2011


Response by poster: With regards to the "true" AI question: I've read about the Chinese Room before, although I'll refresh my memory now with this particular issue in mind.

Basically, we already have programs that will statistically analyze data and carry out a "conversation" that seems at least a little bit natural -- Cleverbot comes to mind. But obviously those programs have no consciousness and don't actually understand anything they're spitting out. I can therefore imagine GERTY as a much more sophisticated version of Cleverbot, although that's less narratively satisfying than a GERTY that genuinely understands the decisions he's making.
posted by Narrative Priorities at 12:19 PM on December 19, 2011


Answers to this question will by definition be entirely speculative, given that the type of AI depicted in that movie does not exist and that there's not even any consensus on whether or not it can exist.

would he be a true AI, or just exceptionally good at analyzing data to spit out appropriate responses?

Whether there is in fact any difference between those two things has not been resolved: Chinese Room, zombies and zimboes, etc. (Personally I feel the Chinese Room argument is incredibly intellectually dishonest -- it just shuffles the important question behind a layer of metaphor and misdirection without demonstrating anything at all in the process -- but plenty of people who are smarter than I am lend it more credence than I do.)

Assuming that there is a difference, GERTY (who shows empathy, self-sacrifice, and other emotions one wouldn't normally associate with an expert system) would probably have to be classified as "true" AI, though stating that is just begging the question of what "true AI" means in the first place.

How might his creators have solved the "Friendly AI" problem or prevented him from spiraling off in a singularity-type scenario

At this point any old wild-ass guess is basically as good as any other. Nobody knows how to do any of these things, or if it's even possible to do these things, or if the questions are even meaningful: what prevents our own consciousness from "spiraling off in a singularity-type scenario"? Why would a computer consciousness be any more or less likely to do this than a meat-based consciousness? What is consciousness?

REPLY HAZY ASK AGAIN AFTER THE SINGULARITY
posted by ook at 12:22 PM on December 19, 2011


But obviously those programs have no consciousness and don't actually understand anything they're spitting out.

The point, though, is this: What is consciousness? How can we actually distinguish this thing we call "consciousness" from a program that will analyze data and carry out a conversation well enough to look exactly like "consciousness?" How do we even know that we aren't just meat-based Cleverbots?
posted by Tomorrowful at 12:22 PM on December 19, 2011


Nobody actually knows how to create a true artificial intelligence. That's why we don't have them yet. Which means no one knows the answer to your question.

What Jedicus talks about derives from the original Turing Test. What Turing proposed was that if a computer program could spend an hour communicating with a human (via teletype; this was the early 1950's) and if after that time the human couldn't tell whether he'd been communicating with a human or a machine, then the difference between "intelligence" and whatever it was the computer program was doing no longer mattered.

He did not say that in that case we could conclude that the computer was intelligent -- and in fact, since we cannot come up with a rigorous definition of "intelligence", the question of whether computers can ever be intelligent isn't really a meaningful question. (Yet.)
posted by Chocolate Pickle at 12:23 PM on December 19, 2011


If true AI is human-data-driven (e.g. like Watson, but with the ability to also question and learn from that data) then I would think the best you could hope for is a computer that is able to rationalize and re-index human-provided data at very fast speeds. I think we can already achieve AI in this manner....but true "human AI" is a whole different story. We're biological beings governed by our hormones and primal survival instincts. Sure, you could emulate these things given time and advances in technology, but we're not quite to the point yet where it can be done believably. Many conditions have to be in place for something to rationalize in a human manner...many conditions that we don't fully understand ourselves.

Just for fun speculation, I'd think adding, let's say, a "hormone routine" would act more as a handicap than a positive for a computer...as would adding all the psychological routines that plague our brains, yet enable us to survive and relate to each other. The AI would need humans or other AIs to talk to...how else would it otherwise learn concepts like "empathy," "frustration," "politeness," "jealousy," or "friendship?" What certain conditions would be flagged TRUE to equal "love?" How would you prevent it from getting stuck in a loop, data mining through incomprehensible gibberish, unless you programmed in a mild ADD routine to occasionally distract it? This all would take serious computing power with our current understanding of programmed logic....it makes AI such as GERTY, GLaDOS, and HAL hard for me to believe unless I see massive hardware backing it up. But who knows, it might all fit on a wristwatch someday...
posted by samsara at 12:27 PM on December 19, 2011


How might his creators have solved the "Friendly AI" problem or prevented him from spiraling off in a singularity-type scenario, wherein his own capabilities increase at a superhuman speed and catapult him outside of our ability to understand or control?

Isolation: By limiting its computational power and access to means for self-improvement.
posted by qxntpqbbbqxl at 12:28 PM on December 19, 2011


Response by poster: Nobody actually knows how to create a true artificial intelligence. That's why we don't have them yet. Which means no one knows the answer to your question.

Of course.

I suppose it would help to stress that I'm mostly interested in this question with regards to thinking about and better understanding the characters and story of Moon (see also: my username.) I have some knowledge of current thinking with regards to AI, but I'd hoped that other folks could help expand that super-rudimentary knowledge. Particularly since many essays that I've read on the subject are 10+ years old, and a lot's changed since then.
posted by Narrative Priorities at 12:28 PM on December 19, 2011


(Spoilers!)

Given what's in the movie, I would guess that GERTY is a true AI with some specific, hard-coded canned responses ("I can only account for what occurs on the base") which are meant to protect the company's secret. It's worth noting that Moon suggests that GERTY's alliance with Sam was a bit of a loophole -- when Sam asks why ("Why did you help me with the password? Doesn't that go against your programming or something?"), GERTY merely says "helping you is what I do". Perhaps GERTY's creators ordered him to "help" Sam as well as to keep the base's secret, without considering how those orders might be interpreted if Sam began to become aware of what was going on.

One possible reading of the movie is that everything that happens after the crash is a result of GERTY's attempts to create a situation in which Sam could encounter the secret, and thus free both himself and GERTY...
posted by vorfeed at 12:29 PM on December 19, 2011 [1 favorite]


Great question and discussion above!!


I think Chocolate Pickle hit it on the nose with Turing:
if a computer program could spend an hour communicating with a human (via teletype; this was the early 1950's) and if after that time the human couldn't tell whether he'd been communicating with a human or a machine, then the difference between "intelligence" and whatever it was the computer program was doing no longer mattered.

"Intelligence" is just a word we use to describe deciding on an outcome given certain input to the argument. If that decision is just a giant game tree of sorts then it's not really intelligence, but does it even matter at that point anymore? If we can create an application that can watch over certain criteria and act on it, and quite possibly learn (I have seen a new error 10 times, i should catalog that and see how to solve it later) then that is "intelligence".

As vorfeed suggests, GERTY was created to help Sam; the creators of that AI could not foresee Sam becoming aware of the situation, so there was an exploit in the code that Sam unknowingly found (he traversed the game tree).
posted by zombieApoc at 12:53 PM on December 19, 2011


Particularly since many essays that I've read on the subject are 10+ years old, and a lot's changed since then.

Not really. It's not a field where things are moving very rapidly.
posted by Chocolate Pickle at 12:53 PM on December 19, 2011


I'm mostly interested in this question with regards to thinking about and better understanding the characters and story of Moon

Oh, in that case.

(Spoilers ahead!)

Narratively speaking there are only a handful of reasons GERTY has to be a computer rather than a human:

1) It's creepy and isolating
2) It offers a fun opportunity to play against HAL-9000 expectations
3a) Having a second human character on the base would -- obviously -- eliminate the whole point of the film
3b) Trying to skirt that issue by placing the human-GERTY character on earth would unnecessarily complicate the plot: Why would human-GERTY side with Sam instead of with the mining corp? Is this some lone rogue clone-sympathizer, or are there factions, or what? How does he communicate with or assist Sam without getting busted? It just doesn't work, plot- or mood-wise: Sam has to be completely isolated; if he has friends or supporters on earth it diminishes the film's impact.

For all other purposes, GERTY is effectively human. The film doesn't (and needn't) treat the question of AI's possibilities or limitations thoughtfully or carefully, because that's not what the film is about. AI is a red herring.

Particularly since many essays that I've read on the subject are 10+ years old, and a lot's changed since then.

Not as much as you'd think. There's been a lot of theorizing and argumentation, but we're still essentially stuck on the same question Searle was tackling in 1980, which is for that matter basically the same question punted on by Turing in 1950, which is "what is consciousness, really?"
posted by ook at 1:05 PM on December 19, 2011


But obviously those programs have no consciousness and don't actually understand anything they're spitting out.

I would argue that Cleverbot does have a very rudimentary, if rote, understanding of syntax and commonly linked words.
posted by cmoj at 2:30 PM on December 19, 2011


Best answer: First, it's been a year or two since I saw Moon, and I only saw it once, so I may be forgetting something. I'm also not going to go into too much specific detail given your background, but most of the concepts I mention are easily Google-able.

That said, personally, I expect any sort of "true AI" to be a combination of many things, not just one thing extrapolated out. Take the idea of robot butlers, for example. In some ways, GERTY is less complex, but many of the basic principles are still there. In theory, everything exists right now to allow for this theoretical robot butler:
  • There are research groups working on realistic walking gaits designed to work in unstructured environments.
  • There are research groups working on simultaneous localization and mapping, too - learning your environment and where you are in it.
  • There are research groups working on arms and hands, able to grip and manipulate a wide range of objects.
  • There are research groups working on object recognition and tracking. Ditto face recognition.
  • There are research groups working on voice recognition.
  • There are research groups working on voice synthesis.
  • There are research groups working on natural language.
  • Bring it all into (or out of?) the uncanny valley with the research groups working on realistic heads with all the works such as eye blinking, mouth movement, eye movement, etc.
Finally, you have the research groups that are working on the so-called "AI". This is the part you are asking about, but every single bit of the above includes a very specific intelligence that needs to be built in just to do that one specialized task. In some cases, the required synergy is already there. But not between all of the above together. Integration is a hard problem, and sometimes I don't feel it is as respected as it should be. It takes time and money and it is pretty much all that stands between you and robot butlers in the home (and even then, it'll likely start out as a toy for the super rich).

As an example, my car's Sync system uses voice recognition (from a limited dictionary of commands + names in my phone book + artists/albums/tracks of my mp3 player), and combines that with some rudimentary voice synthesis. It is not 100% accurate at either. It mispronounces my own last name and makes recognition mistakes all the time (particularly with my wife's voice). And this is a very simple sort of intelligence: if I say something, and the closest matching word is "play", it decides that I want to play something. Is the next word "track", "album", or "artist"? If so, it gets to narrow down the search on the next part. Et cetera. There is not much in the way of natural language processing, as you see. It requires keyword input. I can't say "Hey Majel (my car's name), can you play me some Beatles music?"
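
(A toy Python sketch of that kind of keyword-driven command handling is below. The vocabulary and the fuzzy-matching cutoff are invented for illustration; this is obviously nothing like Sync's actual implementation.)

import difflib

# Match the first word against a fixed command list, then the next
# word against known categories, then treat the rest as the query.
COMMANDS = ["play", "call", "tune"]
CATEGORIES = ["track", "album", "artist"]

def closest(word, choices):
    # Best fuzzy match, or None if nothing is close enough.
    matches = difflib.get_close_matches(word.lower(), choices, n=1, cutoff=0.6)
    return matches[0] if matches else None

def handle(utterance):
    words = utterance.split()
    if not words or closest(words[0], COMMANDS) != "play":
        return "Sorry, say that again?"
    category = closest(words[1], CATEGORIES) if len(words) > 1 else None
    if category is None:
        return "Play what: track, album, or artist?"
    query = " ".join(words[2:])
    return f"Searching {category}s for {query!r}"

print(handle("play artist the beatles"))   # Searching artists for 'the beatles'
print(handle("pley albom abbey road"))     # fuzzy matching still catches it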

GERTY has at least what my Sync system has, but has the advantage that he only has to really know one voice really well. So a large part of the problems with voice recognition goes away. Also, for some reason, voice recognition work thus far seems to work better on male voices out of the box than female voices, hence my wife's more frequent issues with the Sync system. In fact, GERTY has many such advantages, because his environment is more or less limited, e.g. there is an advantage in face/object recognition and localized mapping, as he only needs to have knowledge of the small area in and around the base. This makes it a vastly easier problem than if GERTY existed in the real world. This also means there is a limit to what he can learn (or needs to!), so I don't think there is any worry about GERTY becoming some sort of superpower.

I imagine GERTY's voice recognition, natural language processing, and pronunciation of new words could be upgraded either continually or periodically, using the recorded letters home for training, along with perhaps recorded conversations with GERTY directly.

But all this is skirting around the issue you really want to know, which is the decision-making process. And as you see, we've barely scratched the surface of intelligence, and haven't even talked about that yet! As I said, integration is a hard problem. For every case you may think of, there are at least 10 edge cases to think of. The movie deals with an extreme edge case, but we are led to believe that they are doing something right since the scheme appears to have worked fine until the crash.

My guess is that a well-developed AI would use some sort of bagging or voting scheme for its decision making, instead of relying on a single algorithm. In essence, if you have five different people do a complex physics problem on a chalkboard, you go with the answer that the majority of them came up with. That's for the stuff it doesn't "know". One algorithm may try to map it to a similar problem, another may work directly off its directives, another may attempt to break it down into smaller tasks to solve, etc. Who knows.
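
(Here's a toy Python sketch of that kind of majority-vote decision making. The five "solvers" are empty stand-ins; in a real system each would be its own complicated algorithm, and the tie-breaking and confidence handling would be far more involved.)

from collections import Counter

# Several independent "solvers" each propose an answer; the most
# common proposal wins, along with how much agreement it got.
def solver_lookup(problem):    return "answer A"
def solver_analogy(problem):   return "answer A"
def solver_decompose(problem): return "answer B"
def solver_rules(problem):     return "answer A"
def solver_guess(problem):     return "answer C"

SOLVERS = [solver_lookup, solver_analogy, solver_decompose,
           solver_rules, solver_guess]

def decide(problem):
    votes = Counter(solver(problem) for solver in SOLVERS)
    answer, count = votes.most_common(1)[0]
    return answer, count / len(SOLVERS)   # winning answer + level of agreement

print(decide("unfamiliar situation"))   # ('answer A', 0.6)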

Then you have stuff it does "know". It could have specialized algorithms for specialized tasks (as I've shown above in regards to natural language processing and voice recognition). Need to answer a math question? Deal it to the math co-processor. Need to understand the emotional state of your single occupant (well, supposed to be single)? Deal it to the emotion engine. That kind of thing. There may be a lot of these. Just like the amount of work required to get your robot butler to understand you are asking him to get you a beer, walk to the fridge, open it, take out a beer, close the fridge, open the freezer, get out a frosty mug, close the freezer, open the bottle, pour it into the frosty mug, bring it back to you without tripping over the dog, and tell you "you are welcome!" when you thank it. There are quite a few specialized tasks there (some of them repeated). It also seems like intelligence is there, even if these tasks were programmed in specifically (and if I were making a robot butler, you can be damn sure getting a beer would be a unit test!). "If current clone is dead, revive new clone using standard procedure" is a programmed decision, though it may seem intelligent. The edge case we see is when the algorithm for deciding that the current clone is dead fails (you can also be sure this edge case would have gone into the bank of stuff it "knows"). This introduces all sorts of new information for it to learn and solve. I'm sure that if the movie ended differently, some of those decisions would be marked by the authorities as outright wrong solutions.
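
(And a toy sketch of that kind of dispatch to specialized engines, again in Python, with the task types and handlers invented purely for illustration.)

# "Deal the math question to the math co-processor, the emotional
# read to the emotion engine" as a simple lookup table of handlers.
def math_engine(task):
    return eval(task, {"__builtins__": {}})   # stand-in: "2 + 2" -> 4

def emotion_engine(task):
    return "agitated" if "!" in task else "calm"

def small_talk_engine(task):
    return "How are you feeling today?"

HANDLERS = {
    "math": math_engine,
    "emotion": emotion_engine,
    "chat": small_talk_engine,
}

def dispatch(task_type, payload):
    handler = HANDLERS.get(task_type, small_talk_engine)  # fall back to chat
    return handler(payload)

print(dispatch("math", "2 + 2"))                  # 4
print(dispatch("emotion", "Open the door now!"))  # agitated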

I feel like I've used a lot of words to not answer the question very well at all. :/ Still, it is an interesting, if a bit open-ended, question.
posted by mysterpigg at 3:17 PM on December 19, 2011 [1 favorite]


Best answer: Cleverbot relies heavily on imitating others' conversations, and its primary goal is to do an effective imitation. GERTY's primary goal would be to keep a moon-based helium-3 extraction facility running smoothly. It wouldn't have a pool of thousands of people running moon-based helium-3 extraction facilities to imitate, so you couldn't really use the same strategy to build GERTY.

From a programming standpoint, some basic tasks like language processing are still complicated problems. For narrative purposes, science fiction will usually treat them as solved so it can focus on higher-level decision making.

One simplistic way a program can be made to choose an action is by going:
For each possible action:
.. Simulate results of that action
.. Evaluate those results
Choose action with best results

So then GERTY's decision making is dependent upon:
1) How is the list of possible actions generated?
2) How accurate is GERTY's simulation of the facility?
3) What criteria are used to judge the results?

Related to vorfeed's comment about a "loophole": it could be that the programmers excluded "tell Sam the truth about ___" from the list of generated actions but did not make "Sam knows the truth about ___" a negative criterion.
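
(To make that loophole concrete, here's a toy Python sketch of the generate / simulate / evaluate loop above. The actions, predicted outcomes, and scores are all invented: "tell Sam the truth" is filtered out of the generated actions, but nothing in the evaluation penalizes Sam finding out, so the highest-scoring action can still lead him straight to the truth.)

FORBIDDEN_ACTIONS = {"tell Sam the truth"}   # excluded from generation...

def generate_actions(situation):
    candidates = ["sedate Sam", "unlock the comms password",
                  "tell Sam the truth", "do nothing"]
    return [a for a in candidates if a not in FORBIDDEN_ACTIONS]

def simulate(situation, action):
    # Stand-in for GERTY's model of the base: predicted outcomes per action.
    outcomes = {
        "sedate Sam": {"sam_safe": False, "sam_knows": False},
        "unlock the comms password": {"sam_safe": True, "sam_knows": True},
        "do nothing": {"sam_safe": False, "sam_knows": False},
    }
    return outcomes[action]

def evaluate(outcome):
    # ...but "Sam knows the truth" was never made a negative criterion,
    # so an action that helps Sam toward the truth can still score highest.
    return 10 if outcome["sam_safe"] else 0

def decide(situation):
    actions = generate_actions(situation)
    return max(actions, key=lambda a: evaluate(simulate(situation, a)))

print(decide("Sam found the crashed rover"))   # unlock the comms password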

If making a "true" AI, I think allowing it to refine its simulation based upon observation has the best useful-to-dangerous ratio. It should definitely not be allowed to change its own criteria for judging what is a good result, or it may no longer do the task you want. And keeping some inviolable restrictions on its actions, like "don't kill", seems important. But if it can't update its simulation based on observation, it could end up making the same mistake again and again without learning.

So if an AI starts acting in a more moral fashion, my first thought isn't that it has learned to value morals higher, but that it has learned that the facility isn't viable in the long term unless the treatment of the workers is improved.

:)
posted by RobotHero at 9:29 AM on December 20, 2011

