I'm looking for examples of weak AI, aside from Google Sets and Twenty Questions.
March 20, 2004 2:26 PM

Looking for examples of weak AI, that is, machines acting as if they were intelligent, but simply on the basis of a complex set of rules. Things like Google Sets or Twenty Questions come to mind, and to a lesser extent Google itself. Anything else I should look into?

Both Google Sets and Twenty Questions impress me as examples of abstract reasoning that's easy for humans, but hard for computers. The creepy banana-recognizing monkey from Wales doesn't seem quite so impressive, but I guess it qualifies.
posted by Jeff Howard to Computers & Internet (14 answers total)
Those first two examples seem more like very simple sets of rules, albeit with very large datasets. All AI is pretty much just a set of rules, because that's what software is. I'm not sure I understand your question.

However, Metacat might be an interesting example for you. It is software that attempts to form analogies in a way that mimics human thought. It, along with much of Douglas Hofstadter's work in similar areas, is presented in Fluid Concepts and Creative Analogies, which I found to be a very interesting read for a book presenting a bunch of academic work.
posted by whatnotever at 2:54 PM on March 20, 2004

I would recommend checking out Racter, if only for laughs. It is, ostensibly, a BASIC program that "spontaneously" generates grammatically correct text. It was released by Mindscape in 1984 and is currently abandonware. That last link could very well open a popup or two; caveat clicker.
posted by clockwork at 5:49 PM on March 20, 2004

sweet, sweet irony. sheesh.
posted by clockwork at 5:55 PM on March 20, 2004

Thanks whatnotever. The Metacat example looks interesting.

Sorry if I was vague. It may be harder to pin down what I'm looking for than I thought. Maybe it doesn't have to be a complex set of rules, just some system of rules that enables it to mimic human-like behavior convincingly to the viewer.

Here's an example of the distinction I think I'm trying to find: two different font-identification programs, What the Font versus Identifont. Identifont asks a series of questions about the characteristics of the font and tries to pare down the choices little by little. In the process, it exposes the system of rules that it uses. What the Font, on the other hand, just asks to look at a picture of the font and apparently knows instantly what it is, much like a human designer might.

I realize in retrospect that Identifont is actually pretty close to what the Twenty Questions thing does. But humans appear to play that game exactly the same way. I don't know, maybe Twenty Questions isn't the best example.

On preview: Thanks clockwork.
posted by Jeff Howard at 6:21 PM on March 20, 2004

Anything done on a computer operates on a complex set of rules; the question you have is whether a human could tease out its internal representation. That will be easier to do in a rule-based system, sure. You're maybe asking for the difference between machine learning (unsupervised, statistical, or Bayesian) and more general artificial intelligence (heuristics, rule-based systems). (ML is often considered a subset of AI, but I've found it easier to separate them.) The main difference is that the set of rules ML operates with is often opaque or hidden from the user as statistical basis functions (a neural network's or HMM's hidden layers, support vectors, eigenvectors, etc.), while rule-based AI systems are encoded directly by the experimenter and are as such human-readable. We tend to consider ML implementations 'black boxes' for this reason, much the same way we consider our brain's operations in complex perceptual tasks.
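To make that distinction concrete, here's a toy sketch (hypothetical data and feature names, not taken from any real system): the hand-written rules are readable at a glance, while the "learned" model's knowledge is just a table of numbers (class centroids) with no obvious interpretation.

```python
def rule_based(animal):
    # Every rule is encoded by hand and can be read directly.
    if animal["meows"]:
        return "cat"
    if animal["barks"]:
        return "dog"
    return "unknown"

def train_centroids(examples):
    # "Learning": average the feature vectors per label.
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    # Pick the nearest centroid; the learned numbers are the hidden "rules".
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)

examples = [([1.0, 0.0], "cat"), ([0.9, 0.1], "cat"),
            ([0.0, 1.0], "dog"), ([0.1, 0.9], "dog")]
centroids = train_centroids(examples)
print(rule_based({"meows": True, "barks": False}))  # cat
print(classify(centroids, [0.8, 0.2]))              # cat
```

Both give the same answers on easy cases, but only the first can explain itself.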

I wouldn't call rule-based systems 'weak,' as they quite often work, but you're right that they are limited by the scientist's bias. (This is a generalization problem: make a program that solves 20 questions about cats and it certainly doesn't know about dogs.) ML approaches have their own issues, such as intense overfitting (give it 10 calico cats and it won't be able to tell you what a Manx is) and registration (give it 10 standing cats and it won't be able to figure out the sleeping cat).

I am also pretty sure Google Sets is an ML system, not a rule-based one; I'm not sure what rules you've been able to figure out from its process. From a cursory glance, it looks like simple unsupervised clustering.
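For reference, the simplest form of unsupervised clustering looks something like this k-means sketch (made-up 1-D data; Google Sets' actual method isn't public): no rules are written down anywhere, the "knowledge" is just where the cluster centers end up.

```python
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        # Update step: move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 5.0]))  # centers near 1 and 9
```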
posted by neustile at 7:53 PM on March 20, 2004

Boids is the first thing that pops to mind. They simulate flock intelligence more than individual smarts, though.
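Boids gets its flocking behavior from just three local rules per bird: separation (steer away from nearby neighbors), alignment (match their average velocity), and cohesion (drift toward their center). A minimal 2-D sketch of one update step; the weights and the neighborhood handling here are simplifying assumptions, not Reynolds' exact formulation:

```python
def boid_step(boid, neighbors, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    (x, y), (vx, vy) = boid
    n = len(neighbors)
    if n:
        cx = sum(p[0][0] for p in neighbors) / n   # neighbors' center of mass
        cy = sum(p[0][1] for p in neighbors) / n
        ax = sum(p[1][0] for p in neighbors) / n   # neighbors' average velocity
        ay = sum(p[1][1] for p in neighbors) / n
        vx += w_coh * (cx - x) + w_ali * (ax - vx)  # cohesion + alignment
        vy += w_coh * (cy - y) + w_ali * (ay - vy)
        for (nx_, ny_), _ in neighbors:             # separation
            vx += w_sep * (x - nx_)
            vy += w_sep * (y - ny_)
    return (x + vx, y + vy), (vx, vy)
```

Run on every bird each frame, these three nudges are all it takes to produce convincing flocks.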
posted by callmejay at 8:03 PM on March 20, 2004

(Meanwhile I maintain that human and other animal brains might work "simply on the basis of a complex set of rules" and nobody has ever shown evidence to suggest otherwise.)
posted by callmejay at 8:06 PM on March 20, 2004

The black box analogy is pretty good. I think of Clarke's "any sufficiently advanced technology is indistinguishable from magic." That's what I'm looking for. Computer programs that seem unaccountably smarter than they should be.
posted by Jeff Howard at 8:43 PM on March 20, 2004

Jeff, a terrific starting place is Robot Wisdom (the pages behind the weblog; scroll down). Jorn Barger, before inventing the word "weblog", was an AI researcher. If you e-mail him intelligent questions I'm all but certain he'll respond -- though keep in mind he's emotionally invested in one or two philosophical taking-sides things. And, um, see also.

In any case the key words you'll want to search on for your question are neural net[work] and expert system, as well as knowledge representation and information architecture. AI used to be about intangibles (q.v. Turing test), but today great strides have been made in what might be called Practical AI by finding ways that computers can help us make decisions, without requiring (for example) complex language skills, things which turn out to be an entirely different order-of-magnitude problem.
posted by dhartung at 9:34 PM on March 20, 2004

The classic ones are ELIZA and ALICE.
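ELIZA's entire trick is an ordered list of pattern/template rules; nothing else is going on. A toy version in the same spirit (these rules are made-up examples, not Weizenbaum's original DOCTOR script):

```python
import re

RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(line):
    # Try each rule in order; echo the captured text back in the template.
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(m.group(1))
    return "Please go on."  # catch-all when nothing matches

print(respond("I am worried about my thesis"))
# How long have you been worried about my thesis?
```

The uncanny effect comes entirely from the user reading intent into the echoes.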
posted by calwatch at 10:16 PM on March 20, 2004

if you're interested in the code norvig's book on ai and lisp is worth reading.
posted by andrew cooke at 4:18 AM on March 21, 2004

Also check out Pfeifer's "Understanding Intelligence." It's a crash course in 'behavioral' AI with a focus on simple neural networks. It has a number of very simple algorithms that get 'smart' really fast.
posted by kaibutsu at 11:22 AM on March 21, 2004

yeah, i should have added that norvig focuses on "old" ai - planning, logic, etc (not neural networks, bayesian doodahs etc).
posted by andrew cooke at 3:06 PM on March 21, 2004

Along the same lines as 20 Questions, there's Guess the Dictator or Sitcom Character.
posted by yarmond at 10:11 PM on March 21, 2004
