Fear of a Bot Planet
December 15, 2011 3:59 AM

Are there people out there taking a war against robots/AI seriously?

I'm looking for people (writers, military theorists, bloggers, essayists, etc.) who treat a human war against robots or artificial intelligence as a serious future possibility.

More specifically, I'm wondering if there is any literature out there discussing counter-technology, counter-tactics, strategy, and military theory against robots and artificial intelligence. The technology or counter-tactics could target real-world robots used in war, like aerial drones, or the NSA's machine-learning technology (like AQUAINT) that underpins some of its electronic surveillance (a proto-example of this might be "Jam Echelon Day" from the late '90s, which targeted Echelon keywords). Basically I'm after any literature, ranging from descriptions of present-day guerrilla warfare against Western robots to serious takes on Hollywood Terminator scenarios.
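(For concreteness, the Jam Echelon Day idea was to flood monitored channels with surveillance trigger words so that keyword filters drown in false positives. A minimal sketch of that tactic, with made-up trigger words purely for illustration, not anything from the actual campaign:)

```python
import random

# Hypothetical trigger words of the sort participants appended to
# ordinary email to overload keyword-based surveillance filters.
TRIGGER_WORDS = ["encryption", "surveillance", "classified", "payload", "intercept"]

def add_chaff(message, n=3, rng=random):
    """Append n random trigger words so keyword filters flag the message."""
    chaff = " ".join(rng.choice(TRIGGER_WORDS) for _ in range(n))
    return f"{message}\n[chaff: {chaff}]"

print(add_chaff("Lunch at noon?"))
```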

Works I'm aware of already that don't really fit the bill:

P.W. Singer's Wired for War.

Manuel de Landa's War in the Age of Intelligent Machines.

Hugo de Garis' Artilect War.

The first two are accurate histories, but they don't go much into counter-measures. They are excellent, but they tend to focus on the use of robots and AI by nation-states rather than exploring what humans could or should do against robots/AI. The third book, by de Garis, is rather historically speculative and doesn't have much about tactics either.

Thanks.
posted by ollyollyoxenfree to Law & Government (11 answers total) 16 users marked this as a favorite
 
Best answer: Well, there's How To Survive a Robot Uprising. It's a bit tongue-in-cheek, but the defense tactics take into account (and very accurately describe) the current state of robot/AI technology.
posted by olinerd at 4:42 AM on December 15, 2011


Best answer: The book you want is Robopocalypse by Daniel H. Wilson. It's fiction, but it's written in the style of World War Z, in that it recounts stories of survivors of the robot apocalypse.
posted by ThaBombShelterSmith at 5:34 AM on December 15, 2011 [2 favorites]


Best answer: This panel talk, titled "Will Spiritual Robots Replace Humanity by the Year 2100?" and featuring Douglas Hofstadter, Ray Kurzweil, Hans Moravec, Kevin Kelly, Ralph Merkle, Bill Joy, Frank Drake, John Holland, and John Koza, was pretty fun to listen to the last time I heard it.
posted by duckstab at 5:48 AM on December 15, 2011 [1 favorite]


Best answer: The concept of "Friendly AI," especially popular among futurists and singularitarians, takes the catastrophic potential of artificial intelligence very seriously. The big idea is to build AIs with a mitigated risk of starting trouble in the first place, so it's more of a Sun Tzu "winning without fighting is supreme" strategy than a Skynet robo-war resistance one. Here is a good introduction.
posted by ecmendenhall at 6:22 AM on December 15, 2011 [1 favorite]


Best answer: Are there people out there taking a war against robots/AI seriously?

I bet some people in Pakistan and Afghanistan are.
posted by empath at 6:55 AM on December 15, 2011 [4 favorites]


Best answer: Yes, it's quite common. Everything we use against human-developed viruses, cyber-warfare, etc. would apply. Against mechanized warfare from robots, we would behave as we currently do against unmanned drones, missiles, etc. To guard against something like NSA systems going rogue, we'd use the same measures we currently use to protect them from enemy control or a rogue employee.

Skynet-type entities are just super-intelligent users who know some good zero-day exploits. The scary part is that they're not human but do what humans do, which is why preventing malicious human control of systems stops AI too.
posted by michaelh at 7:10 AM on December 15, 2011 [1 favorite]


Best answer: Don't underestimate the fragility of the electrical power and data networks that would allow a rogue AI to operate. Humans could get by without power and internet longer than an AI could, in an extreme case.

(This assumes a sudden-awakening-style attack; if you give the hypothetical AI time to lay plans, like the one in Robopocalypse, it's less true.)
posted by Wretch729 at 7:21 AM on December 15, 2011


Best answer: You will find the work of Adam Harvey fascinating. He developed a form of "expressive interference" -- makeup and hairstyles which fuck up facial recognition software.
posted by fake at 8:54 AM on December 15, 2011 [6 favorites]


Best answer: michaelh nailed it. The line between an augmented human and a robot is blurry, so if we're talking radar, sensors, communication, cyberwar, etc., your question basically boils down to "Are there people out there taking a war against the US Army seriously?" Why yes, but it's all classified.

You got your attacks against electrical infrastructure, such as EMP or the BLU-114/B "soft bomb."

You got your good ol' electronic warfare, which is about dominating the EM spectrum.

You got your cyberwarfare, which is about dominating the connected infrastructure. The US has been taking this really seriously only since 2009, with USCYBERCOM. Stuxnet and Duqu are fascinating case studies. Israel's plan for an attack on Iran is interesting — it "includes electronic warfare against Iran’s electric grid, Internet, cellphone network, and emergency frequencies for firemen and police officers."

Also under cyberwarfare, there's the ongoing infection of the US drone fleet by viruses. Brass says it's "benign," but with the recent capture of an RQ-170 Sentinel by Iran and now the MQ-9 Reaper crash in the Seychelles, people are wondering.

You gotta have your space war: robots, just like us, depend on GPS signals and on satellites for sensing and communication. In 1985, the US shot down a satellite with a missile fired from an F-15, and China shot down a satellite in 2007.

Or you can go low-tech! Insurgents in Afghanistan have captured (surprisingly) unencrypted drone video using regular satellite dishes and off-the-shelf software.

And last but not least, there's the human side. One of the tenets of insurgency and asymmetrical warfare is that the only way to counter it is the massive slaughter of the civilian population. That won't work against an AI, but as long as there are human politicians at the end of the decision chain, media awareness is a powerful weapon. On Skynet's side, DARPA is currently looking into social-interaction sensors.

So in a sense, the robots are us. To fight them, you need to fight DARPA.
posted by Tom-B at 11:28 AM on December 15, 2011 [3 favorites]


Best answer: Yes. It's actually a very important concept/problem, and it goes by different names: "intellectual event horizon" or "technological singularity."
posted by yoyo_nyc at 11:55 AM on December 15, 2011


Best answer: It's not a "glamorous" form of robot, but landmines are autonomous killers, which is why they present very real problems.
posted by -harlequin- at 1:50 PM on December 15, 2011 [1 favorite]

