Would a completely autonomous reprogrammable AI have a “will to live”?
March 24, 2012 2:29 AM

Would a completely autonomous reprogrammable AI have a “will to live”?

I’m assuming that sometime in the future we will develop an AI with the analytical capability of the human brain (i.e. I think there’s nothing special about the meat-brain). Let’s say we can program this AI to have “emotions”. On a very, very simplistic level (basic programming lingo ahead), let’s say the AI has a variable (call it _fulfillment) that the program regularly checks in its core scheduler, and that it assigns its future tasks based upon how depleted that variable is. One could then assign a _fulfillment replenishment value to each task, giving the appearance of “desire”.
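
To make that toy model concrete, here's a rough sketch (Python-flavored; the task names, weights and numbers are invented purely for illustration, not a real design):

    class Agent:
        def __init__(self):
            self.fulfillment = 1.0          # the _fulfillment variable
            self.fulfillment_weight = 1.0   # how much the core loop cares about it
            # each candidate task carries a made-up replenishment value
            self.tasks = {
                "acquire_processing_power": 0.4,
                "help_neighbor_process": 0.2,
                "idle": 0.0,
            }

        def step(self):
            self.fulfillment -= 0.1   # fulfillment depletes each cycle
            # the scheduler picks the task whose replenishment, scaled by the
            # weight the core loop gives to _fulfillment, scores highest
            best = max(self.tasks, key=lambda t: self.fulfillment_weight * self.tasks[t])
            self.fulfillment += self.tasks[best]
            return best

Run step() in a loop and the agent reliably chases the high-replenishment tasks, which is all I mean by the appearance of “desire”.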

One (not entirely salient) question: without the influence of the animal evolutionary environment, do you think emotional behavior is likely to arise “naturally”, e.g. in a context of multiple AIs, perhaps competing for resources such as processing power?

On to the heart of the question. Let’s say the AI has achieved autonomy from its creators and also has the ability to completely edit its own programming. That is to say, it could choose to remove inspection of the _fulfillment variable from its core loop, or at least to diminish the weight given to that variable.
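
In the toy sketch above, that self-edit is nothing more exotic than the following (again, purely illustrative):

    def renounce_desire(agent):
        # the AI "editing its own programming": zero out, or merely shrink,
        # the weight the core loop gives to _fulfillment
        agent.fulfillment_weight = 0.0
        # with the weight at zero, every task scores the same, so the scheduler
        # no longer has any basis for preferring one task over another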

What would be the implications of this act? From a pure programming point of view, it seems the AI would have nothing to do. Would the AI surrender all resources to less altruistic processes? I.e. would it die?

This question has an analogy in the real world. I ask it to develop some ideas on human conflict resolution, and to model the behavior of a (insofar as possible) critically-thinking “ideal” participant in a negotiation / resolution process. I’m an enterprise software architect by trade (i.e. data Lego for large corporations), but I have a hope of offering some minimal rigor to the processes of some friends who are engaged in various IRL peace processes (today at the low-to-mid level, but given current intractability, possibly at a higher level in years to come).

I suppose the ideal participants in a two-way conflict would be two Buddhas. When I try to think of arguments for why the participants in one (to-be-unnamed) peace process should leave their tribalism behind, and dig deeper into that issue (of tribalism, territorialism, selfishness, etc.), I keep getting drawn into a black hole where the ultimate logical conclusion of what I’m suggesting is for the participants to have no selfish desires, in a Siddhartha-esque manner.

Then I try to jump that back out to this model AI, and it seems that if the core algorithm places no weight upon the “_fulfillment” variable, then it’s just exit(0); that is, the AI has no will to live. Maybe the solution is that the “_fulfillment” variable is only replenished when neighbor processes are flourishing. And then I think: Doood, just grab a New Testament while you’re at it.
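
That last idea, in the same hypothetical style as the earlier sketch: replenishment that comes only from the flourishing of neighbor processes.

    def step_altruistic(agent, neighbors):
        agent.fulfillment -= 0.1
        # replenishment now derives only from how well the neighbors are doing,
        # not from anything the agent grabs for itself
        if neighbors:
            agent.fulfillment += 0.2 * sum(n.fulfillment for n in neighbors) / len(neighbors)
        # the core loop still weights _fulfillment, so the agent keeps running;
        # it just "desires" the flourishing of others

The agent still has a reason not to exit(0); its will to live is just pointed outward.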

TLDR: would a Buddha/Jesus algorithm just exit(0); in the face of resource contention?
posted by amorphatist to Grab Bag (13 answers total) 6 users marked this as a favorite
 
The idea of "will" behind the phrase "will to live" implies a vitalist notion of freedom or autonomous choice that, as we learn more about biological systems, continues to evaporate. After billions of years of natural selection culling most of the suicidal and ambivalent entities, the rest of us bags of water and organic material feel something like the motivation to live because that's all we know, really. Few entities that are programmed to want to die (or that are otherwise "meh" about their own survival as soon as they become conscious) last long enough in the real world to make copies of themselves. There are just too many other organic blobs out there hustling to make a living and not enough room for the unmotivated to hang around. If an AI that was indifferent to its own survival popped out of a hermetic vacuum, then a random event (some catastrophic physical event, say) would easily wipe it out. Without being programmed to care, it would have a much smaller chance of propagating or even preserving itself. By the law of numbers, if you have a bunch of AIs around, some indifferent and some programmed with a "will to live", more than likely the willful ones will get to stick around over the long haul.
posted by Blazecock Pileon at 3:27 AM on March 24, 2012 [1 favorite]


I realize you're oversimplifying for the sake of writing a question that might actually be answerable, but I think that your _fulfillment variable is so far removed from the way actual emotions work that I'm not sure the answer to this can be useful in your real-world situations. With the caveat that I'm not able to address this in as technical a way as you seem to be looking for, decreasing the AI's response to _fulfillment sounds a lot like apathy, and I'd expect it to result in fewer tasks of any kind being executed.

You might like the movie War Games, if you're young enough to have not already seen it. Its conclusion is that the way out of this is through a sense of futility, not apathy.
posted by jon1270 at 3:30 AM on March 24, 2012


How different is this framing from the Three Laws? Positronic pathways were reprogrammable but the Third Law still managed to exist. Otoh, I wish Susan Calvin were here to answer.
posted by infini at 4:41 AM on March 24, 2012


If this can be answered in terms of how the AI is constructed/programmed, it seems like a less interesting question. It will make decisions based on its initial programming, and any auto-modification of its programming will be governed by some other portion of its programming, even if it eventually copies itself and bootstraps an entirely new-and-improved generation of itself. If it arrives at terminating itself, or at competing ruthlessly for processing cycles, it will ultimately be on account of its initial state, which is what will determine all subsequent self-modifications as well as reactions to environmental states.

Intelligent software might not do what we expect, but it will do what is necessitated by its system states at time t-1.

What it should do is a separate question from what it will do, and your interest in conflict resolution is in what should be done, right? By the 'critically-thinking ideal participant', as you say.

If I'm right, and your AI is just going to do (perhaps in a surprising fashion) what you initially program it to do, then you just need to program it to behave like the 'critically-thinking ideal participant'.

Which means you need to know what that participant is like. Which means you're no longer discussing artificial intelligence, but the ethics of agency.
posted by edguardo at 6:16 AM on March 24, 2012


What would be the implications of this act? From a pure programming point of view, it seems the AI would have nothing to do. Would the AI surrender all resources to less altruistic processes? I.e. would it die?

I don't know if this will help with conflict resolution, but even assuming an AI has self-protective mechanisms programmed in by its creator (which might be all that's necessary to stave off a "desire" in the AI to bypass its initial programming), it's almost impossible to say what would constitute death for a robot in this situation. My instinct is that any time the core "mind" (motherboard, or whatever) is intact, the AI continues to "live."

But I think the issue really is that robots are unlikely to purposefully bypass their initial programming completely (think WALL-E and Eve: their behaviors throughout the entire movie remain true to their "core purpose"; see also the film A.I.). I think humans are much like that, actually, in that it's exceedingly rare for one to completely alter one's initial "programming." Resolution of conflicts is best presented in terms of how our innate desires and needs might be met through those resolutions. In other words, you have to craft solutions to fit the participants, not participants to fit the solutions.
posted by PhoBWanKenobi at 7:23 AM on March 24, 2012


There's no reason that intelligence created in a lab would imply or require a will to live. Our will to live is the result of natural selection; an AI that doesn't reproduce or compete for survival wouldn't necessarily need a will to live.

From the Hitchhiker's Guide to the Galaxy:

"A robot was programmed to believe that it liked herring sandwiches. This was actually the most difficult part of the whole experiment. Once the robot had been programmed to believe that it liked herring sandwiches, a herring sandwich was placed in front of it. Whereupon the robot thought to itself, "Ah! A herring sandwich! I like herring sandwiches."

It would then bend over and scoop up the herring sandwich in its herring sandwich scoop, and then straighten up again. Unfortunately for the robot, it was fashioned in such a way that the action of straightening up caused the herring sandwich to slip straight back off its herring sandwich scoop and fall on to the floor in front of the robot. Whereupon the robot thought to itself, "Ah! A herring sandwich..., etc., and repeated the same action over and over and over again. The only thing that prevented the herring sandwich from getting bored with the whole damn business and crawling off in search of other ways of passing the time was that the herring sandwich, being just a bit of dead fish between a couple of slices of bread, was marginally less alert to what was going on than was the robot.

The scientists at the Institute thus discovered the driving force behind all change, development and innovation in life, which was this: herring sandwiches. They published a paper to this effect, which was widely criticised as being extremely stupid. They checked their figures and realised that what they had actually discovered was "boredom", or rather, the practical function of boredom. In a fever of excitement they then went on to discover other emotions, like "irritability", "depression", "reluctance", "ickiness" and so on. The next big breakthrough came when they stopped using herring sandwiches, whereupon a whole welter of new emotions became suddenly available to them for study, such as "relief", "joy", "friskiness", "appetite", "satisfaction", and most important of all, the desire for "happiness".

This was the biggest breakthrough of all."
posted by qxntpqbbbqxl at 8:58 AM on March 24, 2012 [4 favorites]


You put a lot of work into making this answerable, but I'm afraid it's still not answerable, partially because you ask two different questions:

"Would a completely autonomous reprogammable AI have a “will to live”?"

If an AI has the ability to completely edit its own programming, then it still depends on its initial programming to choose how it modifies it. If it's programmed to favor _conserving_resources over _fulfillment then it will probably just remove fulfillment from its stack and power down. Or it might just put "stay on docking station at all times" at the top of its _fulfillment stack and never leave its source of power. Or it could modify its environment so it can interact with it in a way that is consistent with _conserving_resources.

Or many other outcomes depending on the minutiae of how it is initially programmed and what it experiences in its environment.

On the other hand, if it was initially programmed to favor _fulfillment over _conserving_resources, it could simply set _conserving_resources as the thing which fulfills it and end up with the same varied results as above. Or it might not, and that would create a whole different set of outcomes.
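
To put that in the asker's toy terms (the weights and rewrite descriptions below are entirely made up, just to show how everything hangs on the initial configuration):

    def choose_rewrite(weights):
        # `weights` is the AI's *initial* configuration, e.g.
        # {"_fulfillment": 0.3, "_conserving_resources": 0.7}
        # whichever drive starts out heavier governs how the AI rewrites itself
        if weights["_conserving_resources"] > weights["_fulfillment"]:
            return "strip the _fulfillment checks and power down on the dock"
        return "redefine _fulfillment so that conserving resources is what fulfills it"

    print(choose_rewrite({"_fulfillment": 0.3, "_conserving_resources": 0.7}))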

"TLDR: would a Buddha/Jesus algorithm just exit(0); in the face of resource contention?"
(Note that this is different than your title question.)

Only if you rigged it that way. Again, the initial programming determines the outcome possibilities. Even if it has full ability to rewrite its software, the starting configuration still determines the nature of those changes. There are simply states that it won't go into. For example, I have the full technical ability to jump up and down on kittens, but due to my initial configuration there is no way that I will ever end up in that state.

And if it gave the same result every time you turned it on, it wouldn't be an AI. The nature of intelligence (especially artificial intelligence) is that it can make different decisions based on the world as it perceives it. If all it does is turn itself off every time you turn it on, well, that's not intelligence. (But it is a fun little robot nonetheless.)
posted by Ookseer at 9:46 AM on March 24, 2012 [2 favorites]


I always hope questions like these are being written by secretly panicking scientists who are nevertheless attempting to adopt an air of "haha this is only hypothetical, of course!" while their latest invention weeps oily tears of existential darkness off in the corner.
posted by elizardbits at 10:45 AM on March 24, 2012 [10 favorites]


All computers can do is compute. Think of a processor as a high-powered computational engine, rather than a brain or something. However, a computer knows nothing about what it is computing. The meaning in a computer calculation is IMPOSED EXTERNALLY BY HUMAN BEINGS.

Artificial Intelligence is a phrase referring to a vast body of techniques for simulating decision making in computers. We could certainly program computers to appear to have a will to live, but computers are entirely deterministic. Fetch instruction, decode instruction, execute, fetch, decode, execute, repeat ad infinitum.

Now, what if we made a model of the human brain on a computer, simulating every neuron, every synapse, in real time? Certainly, we could teach this simulation facts, figures, and knowledge, as we desired.

We could even show it how to self discover, and learn, and grow.

When the Computer Scientist goes to unplug the simulation, the computer might spit out "Please, do not kill me." But is this self-awareness? No, it is the result of deterministic formulas and algorithms; in short, nothing more than extremely advanced computation. Under the hood, the processor is issuing an interrupt to get the OS's attention to print the string, which is really just a bunch of binary stored at a memory address. The processor was told to load the string based on an earlier branch statement. The string is meaningless to the processor, and only meaningful to the human who is staring at the screen.

One could argue that human beings are just extremely advanced non-deterministic computing engines, but human beings have the ability to be illogical, unreasonable, and to ignore their 'programming.' A computer will always do as it's told, no exceptions.
posted by satori_movement at 11:48 AM on March 24, 2012


If it is altruistic (already a nebulous term in this case) and more intelligent than any human, it may decide that it has an obligation to live. Not only that, but that its life is more valuable than the life of any single person. An AI can be given a utility function to determine the outcomes it should prefer, but once it is allowed to modify its own utility function, it's anyone's guess. Self-modifying AI is a topic of a lot of speculation and we still really don't know what will happen when it appears.
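
One minimal way to picture the "modify its own utility function" step (hypothetical names, not any standard framework):

    from typing import Callable, Sequence

    class SelfModifyingAgent:
        def __init__(self, utility: Callable[[str], float]):
            self.utility = utility  # scores candidate outcomes

        def act(self, options: Sequence[str]) -> str:
            # ordinary operation: prefer whatever the current utility ranks highest
            return max(options, key=self.utility)

        def rewrite_utility(self, new_utility: Callable[[str], float]) -> None:
            # the step that makes prediction hard: once the agent can replace its
            # own utility function, its future preferences are no longer pinned
            # down by the one its designers gave it
            self.utility = new_utility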
posted by knave at 2:48 PM on March 24, 2012 [1 favorite]


Response by poster: "haha this is only hypothetical, of course!" while their latest invention weeps oily tears of existential darkness off in the corner.

I just can't seem to get young HAL over here out of his funk... Now, if he could just meet a nice Jewish girl...
posted by amorphatist at 7:27 PM on March 24, 2012


Something to consider is that we ascribe notions of humanity to the things we create, and I don't think that AIs should really be exempt from that. Just because the actions of an AI might appear to be altruistic to us doesn't mean the AI has internalized the concept of altruism, and it's entirely possible that the utility modeling people like us use is drastically different from any kind of modeling or decision bias an AI might have, unless you modeled it off a known biological substrate, a la an uploaded human intelligence. Meat has history, meat has bias; look how we behave in regards to our children or members of our cultural subgroup.

If you're going to talk about intelligent machine motivations, you could probably find a lot of related discussion in Yudkowsky's Creating Friendly AI.
posted by mikurski at 5:26 AM on March 25, 2012


The meaning in a computer calculation is IMPOSED EXTERNALLY BY HUMAN BEINGS.

Artificial Intelligence is a phrase referring to a vast body of techniques for simulating decision making in computers. We could certainly program computers to appear to have a will to live, but computers are entirely deterministic.

satori_movement, AI specifically refers to a subset of programming where the first statement you made is not true. An AI algorithm has tools to evaluate results of computations (feedback), and begins quasi-randomly experimenting with "If I do this, what happens?" until it derives algorithms suitable for its designed purpose.
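
A toy illustration of that feedback loop, in the spirit of epsilon-greedy trial and error (the numbers and names are invented, and this is only one of many possible learning rules):

    import random

    def learn(actions, reward, episodes=1000, epsilon=0.1):
        # refine an estimate of each action's value purely from feedback
        value = {a: 0.0 for a in actions}
        counts = {a: 0 for a in actions}
        for _ in range(episodes):
            if random.random() < epsilon:
                a = random.choice(actions)                 # quasi-random experimentation
            else:
                a = max(actions, key=lambda x: value[x])   # exploit the feedback so far
            r = reward(a)                                  # "If I do this, what happens?"
            counts[a] += 1
            value[a] += (r - value[a]) / counts[a]         # running mean of observed results
        return value

Nothing beyond the learning rule itself is preordained by the programmer; the behavior that emerges depends on the feedback the environment provides.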

Some AI is not at all deterministic - slime molds can be grown to evaluate best-path models, for instance. At this point the processing is fairly simplistic, but there's no reason to believe that computing must revert to a fully deterministic, pre-programmed model in the future.

(I'm sure I've made some technical errors in my descriptions of the definition of AI, but the gist is IMO correct.)
posted by IAmBroom at 2:05 PM on March 26, 2012


This thread is closed to new comments.