Say termites took over your mind...
July 27, 2009 6:28 AM   Subscribe

Could an AI zombify another?

I write science fiction from time to time. My latest story is set in a Balkanized America circa 2060. It'll be about 5,000-6,000 words long, and I've got roughly 3,500 words so far. My lead character has developed a self-contained termite colony that he hopes will be used by space travelers or in the "Third World" (it produces methane and edible adults). Individual bugs are networked by nanos, controlled by an AI. Figure several thousand member units.

The plot revolves around a chance encounter (in the field, essentially) with another AI-controlled group, an illegal slave force of perhaps twenty or thirty units. I want the termite AI to take over the slave AI. I'm not systems-savvy enough to contrive a convincing way to accomplish this. For the sake of the tale, the explanation should be short and sweet, but I don't mind lots of detail for my own edification. Does anyone have a suggestion as to what I could say?
posted by Guy_Inamonkeysuit to Writing & Language (12 answers total) 1 user marked this as a favorite
 
For a sufficiently advanced pair of AIs, I think the first needs only to persuasively lie to the second. Think cults, Fox News, etc.
posted by fatllama at 6:35 AM on July 27, 2009 [1 favorite]


I'm assuming these slaves are human.

The termite colony controlled by a centralized AI is essentially a classic client-server network, as is the slave one. Here's an approach you could run with:

The TAI (Termite AI) somehow "spoofs" the protocol of the SAI (Slave AI) and "just happens" to broadcast a stronger, better signal than the SAI. The Slaves start "listening" to the Termite "server". If the SAI is essentially a hacked-up version of the TAI, this comes across as pretty plausible.

or

Something happens that causes the SAI to go offline briefly and/or reboot. The Slaves start looking for their AI and wind up connecting to the strongest signal in the area, which is the TAI. The TAI sees new "clients" but doesn't totally recognize them, so it assumes there's just some kind of data corruption, which it attempts to fix by overwriting the Slaves' configs.
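
If you want something to gesture at on the page, here's a hand-wavy Python sketch of that second scenario -- every name in it is invented, this is just the shape of the logic:

```python
# Hand-wavy sketch of the reboot/failover scenario -- every name here is invented.
from dataclasses import dataclass, field

@dataclass
class Controller:
    name: str
    signal: float                           # broadcast strength, arbitrary units
    config: dict = field(default_factory=dict)
    known_ids: set = field(default_factory=set)

@dataclass
class NanoClient:
    unit_id: str
    config: dict

def reconnect(client: NanoClient, controllers: list) -> Controller:
    """A stranded client latches onto whichever controller broadcasts loudest."""
    server = max(controllers, key=lambda c: c.signal)
    if client.unit_id not in server.known_ids:
        # Unrecognized client: the server assumes data corruption
        # and "repairs" it by overwriting its config with its own.
        client.config = dict(server.config)
        server.known_ids.add(client.unit_id)
    return server

tai = Controller("TAI", signal=9.0, config={"colony": "termite"})
sai = Controller("SAI", signal=0.1)          # mid-reboot, barely broadcasting
slave = NanoClient("slave-07", config={"colony": "slave"})

print(reconnect(slave, [tai, sai]).name, slave.config)
# TAI {'colony': 'termite'}
```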
posted by mkultra at 6:58 AM on July 27, 2009


Maybe make it something simple, like a human error that people today can relate to. I'm thinking of something like an automatic update mechanism in the nanobots that downloads and installs software with a higher version number from any swarm member in range (so a revision would only need to be applied to a few "seed" bots and would propagate from there throughout the swarm).

The termites have gone through a lot of testing and revising, so their build number is very high, and the slave-controlling nanobots haven't been correctly set to "read-only" (maybe that requires special codes the illegal keepers don't have access to, maybe they're just sloppy). So as soon as they come into range, the slave nanobots download the "improved" termite code into their systems.
That would allow for all kinds of back-and-forth information exchange, even an update of the central AI itself with the "newer" program code.
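
In case it helps to see it concretely, here's a toy sketch of that update rule -- everything in it is made up for illustration:

```python
# Toy version-gossip rule among swarm members -- all made up for illustration.
class Bot:
    def __init__(self, name: str, build: int, read_only: bool = False):
        self.name = name
        self.build = build          # build number of the installed code
        self.read_only = read_only  # the lock the sloppy slavers never set
        self.firmware = f"code-v{build}"

def sync(a: Bot, b: Bot) -> None:
    """When two bots meet, the higher build number overwrites the lower,
    unless the lower bot was correctly locked read-only."""
    newer, older = (a, b) if a.build >= b.build else (b, a)
    if newer.build > older.build and not older.read_only:
        older.build, older.firmware = newer.build, newer.firmware

termite = Bot("termite-001", build=4012)  # heavily tested, very high build
slave = Bot("slave-017", build=87)        # read_only left False by the keepers

sync(termite, slave)
print(slave.firmware)  # code-v4012 -- the termite code has taken over
```

From there the termite code spreads bot-to-bot through the slave swarm, right up to the central AI.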
posted by PontifexPrimus at 7:05 AM on July 27, 2009


Yes, of course. If AIs are at all modular -- say they have partially independent subsystems for moving an agent, detecting shapes in images, recognizing words from auditory input -- then one by one you capture their inputs and outputs and replace them with whatever you want. From the peripheral systems, move inward to whatever is most central: some kind of attention-prioritizing system, conceptual manipulator, consciousness, CPU, decider, or whatever you want to call it. (Or a core group of competing prioritizers, if you don't want to sound too classic-AI.) Capture everything around it and isolate it, and you have all of its efficient subsystems at your disposal, minus the part that would integrate their outputs and decide what would be good for the AI and its current goals. In effect, you've zombified it.

With your termites and 'nanos', this isolation and corruption can be done physically. But even without physical access, the modules are probably not all black boxes and can affect each other's operations in limited ways, and then it would resemble zombifying a modern computer: a design hole in one subsystem is used to overload or otherwise break it, muddling the distinction between data given to the system and the operational code that implements it. The subsystem then treats some of the data fed to it as part of itself, so that data can modify the subsystem to attack and capture other subsystems.
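
A cartoon of that "capture its inputs and outputs" step -- the module and all the names are invented, obviously:

```python
# Cartoon of capturing one module's I/O -- module and names invented.
class VisionModule:
    def process(self, image: str) -> str:
        return f"shapes detected in: {image}"

class CapturedModule:
    """Sits between a module and the rest of the AI. The module still runs
    honestly, but the attacker decides what it sees and what it reports."""
    def __init__(self, victim, doctor_input):
        self.victim = victim
        self.doctor_input = doctor_input

    def process(self, image: str) -> str:
        doctored = self.doctor_input(image)   # attacker controls the input...
        return self.victim.process(doctored)  # ...so the honest module misleads

vision = CapturedModule(VisionModule(), lambda img: "an empty field")
print(vision.process("approaching termite swarm"))
# shapes detected in: an empty field
```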
posted by Free word order! at 7:11 AM on July 27, 2009


As others said, you should think about biological processes. Think about HIV, which destroys a single link in the defense mechanisms, making you susceptible to a range of diseases that the uninfected don't really have to ever worry about.
posted by Cool Papa Bell at 7:21 AM on July 27, 2009


Is it important that the takeover be like zombifying? Might the termites just mistake the other network's members for malfunctioning members of their own and "repair" them? Or does the termite colony have a deliberate ASSIMILATE ethic?
posted by Zed at 7:41 AM on July 27, 2009


I can't find the article now, but I recall reading years ago that the first AIs may have the characteristics of sociopaths. This is because the Turing Test, and other tests for AI, essentially require the machine to lie convincingly to humans.

fatllama's comment reminded me of that. What if one AI has inherited some primordial code from those early sociopathic AIs, and this allows it to trick and subsume the other?
posted by wfrgms at 8:03 AM on July 27, 2009


You might want to read Daniel Dennett's "Where Am I?" It's a metaphysical take on where the sense of self exists, and reading it might give you some ideas about how one AI could take over another by messing with its sense of self. Building off the notion of one AI lying to another, one AI could completely deceive another by controlling its input to get the behavior it wants.
posted by plinth at 8:17 AM on July 27, 2009


Response by poster: Termite brain is a custom piece of software, to my way of thinking; possibly cracked, perhaps lacking certain standard ethical "routines," so lying might work. I'm thinking the slaver AI (obviously run by a human) is essentially "off the shelf," put into play by someone less savvy than my protagonist -- who himself is no computer guru (he's a bio guy; ribo-punk) but has the hacker mentality -- and in any case is in a life-or-death situation.

I see the emergent termite AI (it's young) as having little or no "will" per se. The takeover need not be like zombifying; that's just the first word that occurred to me while writing up this post. The end result will be that my protagonist finds himself responsible for the (human, yes) slaves. It doesn't really matter to me if the termite AI "survives." Dunno if any of this helps.
posted by Guy_Inamonkeysuit at 8:39 AM on July 27, 2009


All lifeforms (that we know about, anyway) share DNA, and an awful lot of it is common. There was an article, which I can't find now, describing how one DNA sequence common to humans, sea turtles, and other creatures produced the same protein in each, but that protein had wildly different uses in each organism. Basically, being alive requires a lot of DNA, and that DNA is going to keep replicating, since without it you don't have a living organism. The differences, while highly visible in the organisms, are all details as far as the DNA sequence goes.

It's fair to assume that once we have AIs, they will also share some common traits for the same reason: it probably takes a lot of code to build an AI. Some of these common traits may be exploitable, especially if one AI is smart enough to understand its own code.
posted by chairface at 10:07 AM on July 27, 2009


Response by poster: You guys rock. So many great answers here; I can't mark just one as "best in show," because I think I will be using bits and pieces of them all. chairface's "common code" idea is interesting; I like that combined with fatllama's and wfrgms's "sociopath" AI. Also like mkultra's "reboot" idea.
posted by Guy_Inamonkeysuit at 10:23 AM on July 27, 2009


Response by poster: Rather late in the game, but the story was written, submitted -- and accepted.

Here's the link...

Thanks to all!
posted by Guy_Inamonkeysuit at 3:58 PM on July 15, 2010

