How to test a chat bot?
February 6, 2009 8:08 PM

How would you know a decent chat bot if you found one? I'm wondering what kinds of tricks you would use to figure out that it's a bot, and how (un)sophisticated it is.

I recently decided to play around with Perl again, and managed to cobble together a simple IRC bot from a skeleton that I downloaded and somehow got to work. I might try to code up something rudimentary to amuse my daughter or something. I was looking at threads like this one and trying to get an idea of what would be impressive feats for a chat bot.

I have some ideas myself (there's a rough sketch of a couple of these after the list), such as:

- teaching the bot something then asking about it later
- asking things about its personal history, family, whatnot
- asking it simple math problems
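
For the first and last of those, I'm picturing something roughly like the Perl below. It's only a sketch: respond() and the patterns are made up for illustration, and none of it comes from the skeleton I downloaded.

#!/usr/bin/perl
# Toy handler: remember facts taught as "remember X is Y", answer
# "what is X?", and do trivial arithmetic. Not wired into IRC yet.
use strict;
use warnings;

my %memory;   # facts the bot has been taught this session

sub respond {
    my ($line) = @_;

    # "remember the dog is fluffy" -> store a fact
    if ($line =~ /^remember (.+?) is (.+)$/i) {
        $memory{lc $1} = $2;
        return "OK, $1 is $2.";
    }

    # "what is 6 * 7?" style arithmetic (checked before recall so the
    # recall pattern doesn't swallow it)
    if ($line =~ /(-?\d+)\s*([-+*\/])\s*(-?\d+)/) {
        my ($a, $op, $b) = ($1, $2, $3);
        return "I can't divide by zero." if $op eq '/' && $b == 0;
        my %ops = (
            '+' => sub { $_[0] + $_[1] },
            '-' => sub { $_[0] - $_[1] },
            '*' => sub { $_[0] * $_[1] },
            '/' => sub { $_[0] / $_[1] },
        );
        return "$a $op $b = " . $ops{$op}->($a, $b);
    }

    # "what is the dog?" -> recall a stored fact
    if ($line =~ /^what is (.+?)\??$/i) {
        my $key = lc $1;
        return exists $memory{$key}
            ? "You told me $1 is $memory{$key}."
            : "I don't know anything about $1.";
    }

    return "Hmm, tell me more.";
}

print respond("remember the dog is fluffy"), "\n";   # OK, the dog is fluffy.
print respond("what is the dog?"), "\n";             # You told me the dog is fluffy.
print respond("what is 6 * 7?"), "\n";               # 6 * 7 = 42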

This is just a simple little project I'm playing with that probably won't go anywhere, but I find it intriguing to think about how I would go about solving some of these problems, if it weren't for my crippling laziness.

So, how would you probe a bot? What would be the dead giveaways, and what would give you pause / impress you if you saw the bot could handle it?
posted by marble to Computers & Internet (12 answers total) 4 users marked this as a favorite
 
Toss puns at it. It's very unlikely to respond the way a human does. Most likely it will pick one meaning and respond to it seriously.

Write several sentences in a single paragraph and leave out all the punctuation.
posted by Chocolate Pickle at 8:16 PM on February 6, 2009


Also, deliberate misspellings. Or substitute homonyms occasionally.

Real humans are very good at picking up occasional errors and correcting them. Natural languages generally have a high degree of redundancy in service of error correction. (By many calculations, written English is about 2/3 redundant, for example.)

But taking advantage of that is extremely high level; no AI is likely to be able to do a good job of it.

For instance, if you were to use "roll" instead of "role" in a paragraph discussing an actor's latest job, a human reader would not be confused. A bot likely would be.
posted by Chocolate Pickle at 8:21 PM on February 6, 2009


If you can't figure it out, then whatever's on the other end of the line is at least as sentient as you are.

How's that for motivation?
posted by Aquaman at 8:26 PM on February 6, 2009


"Are you a pentium?" ... This is an actual question asked at the 2006(?) annual Loebner prize competition, a competition for chat bots. You can browse the conversations on both sides and see what the judges ask (many of the judges are lay people, i.e., not AI researchers, though I think some each year are "rockstar" judges). It's a great way to see how easy it is to trip up a chat bot.

A chatbot has no sense of conversational flow.

Syntax is hard. Parsing is hard. But "flow," man, that's the thing that's basically impossible to teach, describe, or codify (but easy for humans to learn, of course). Look through some of your IM or other chat sessions. How did you know when a statement was the end of a thought or story? Which times were you sure it wasn't? And how can you carry on two threads of a conversation at the same time (something that isn't really possible in spoken English, but is reasonably common in my IM conversations)?
posted by zpousman at 8:29 PM on February 6, 2009 [1 favorite]


You should track down, well, basically everything Douglas Hofstadter has ever written. Particularly Goedel, Escher, Bach and Metamagical Themas.
posted by Kid Charlemagne at 8:36 PM on February 6, 2009 [1 favorite]


I'm not sure "figure out" is exactly the right phrase. I do not spend any time puzzling over whether some text is from a human or not. Bots are immediately apparent, even when watching someone else interact with them. "Is a baker's dozen lucky or unlucky?" is a test, but it's one I would never need to apply.

Most bots that make deliberate mistakes do so very badly. Real mistakes are frequently transpositions, or missed keystrokes, or substitution of an i for an o, that sort of thing.
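
If you did want your bot to fake mistakes, something along these lines would be closer to the mark than mangling words at random. This is only a rough sketch; the little keyboard-neighbour table and the error rate are invented for illustration.

# Occasionally introduce a "human" typo: transpose two adjacent letters,
# drop a keystroke, or hit a neighbouring key instead of the right one.
use strict;
use warnings;

my %neighbour = (i => 'o', o => 'i', e => 'r', r => 'e', a => 's', s => 'a');

sub typo {
    my ($word) = @_;
    return $word if length($word) < 4 || rand() > 0.1;   # mistakes should be rare

    my @c    = split //, $word;
    my $pos  = int(rand(@c - 1));
    my $kind = int(rand(3));

    if ($kind == 0) {                                # transposition: "teh" for "the"
        @c[$pos, $pos + 1] = @c[$pos + 1, $pos];
    }
    elsif ($kind == 1) {                             # missed keystroke
        splice @c, $pos, 1;
    }
    elsif (exists $neighbour{ lc $c[$pos] }) {       # adjacent-key slip: i for o
        $c[$pos] = $neighbour{ lc $c[$pos] };
    }
    return join '', @c;
}

print join(' ', map { typo($_) } split ' ', "looking forward to their response"), "\n";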

All of that aside, I have seen one bot on IRC (how long ago, I am embarrassed to say) that most people did not really "get" was a bot, and it fooled people by a trick: it sought out and provoked arguments. You could eventually see it exhaust its repertoire if you watched long enough, but by that time most folks were in a frothing rage and no longer parsing text themselves; they merely responded along their own pre-programmed scripts.

It didn't fool people into thinking it was a person; it fooled people into thinking they were bots.
posted by adipocere at 8:47 PM on February 6, 2009 [7 favorites]


This is easy. Just keep asking the same question over and over again. A human will ignore you eventually.
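
If you wanted the bot to pass that particular test, one cheap counter is to remember what each person last said and stop answering exact repeats. A sketch (the function name and the threshold are just made up for illustration):

use strict;
use warnings;

my %last_seen;   # nick => the last line that nick sent
my %repeats;     # nick => how many identical lines in a row

sub should_ignore {
    my ($nick, $text) = @_;
    if (defined $last_seen{$nick} && $last_seen{$nick} eq $text) {
        $repeats{$nick}++;
    } else {
        $repeats{$nick} = 0;
    }
    $last_seen{$nick} = $text;
    return $repeats{$nick} >= 2;   # third identical line in a row: say nothing
}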
posted by kindall at 9:02 PM on February 6, 2009


Non sequiturs. Specifically of the shaggy-dog-story variety. Assume they know about half the story, and tell it in such a way as to be confusing unless the listener asks a couple of small questions.
posted by Lemurrhea at 9:13 PM on February 6, 2009


Maybe this is too obvious, but I'd say response time. A lot of chatbot programs that try to emulate humans give themselves away pretty easily when they consistently spit out an error-free sentence or two almost instantly. If you're aiming for realism, set a delay to make it look like there's a person on the other end considering the user's input and responding in turn, rather than a mere algorithm that fires off an answer as quickly as it can calculate one.
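
Something like this would do it, as a rough sketch. The reading and typing speeds are numbers I made up, and the send call at the end is a hypothetical POE::Component::IRC-style example rather than anything required.

use strict;
use warnings;
use Time::HiRes qw(sleep);   # fractional-second sleep

sub human_delay {
    my ($input, $reply) = @_;
    my $read_time = length($input) / 60;   # pretend ~60 chars/sec of reading
    my $type_time = length($reply) / 7;    # pretend ~7 chars/sec of typing
    my $jitter    = rand(2);               # people are not metronomes
    sleep($read_time + $type_time + $jitter);
}

# inside the bot's message handler, roughly:
# my $reply = respond($incoming_text);
# human_delay($incoming_text, $reply);
# $irc->yield(privmsg => $channel, $reply);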
posted by Rhaomi at 9:43 PM on February 6, 2009


Bots seem to be limited to a handful of conversational variables. Most people, when they speak to customer service, for example, say:

My computer isn't working. Please help.

If you said something like:

My computer hasn't worked for a week. I'm using Windows. I've rebooted and the problem hasn't corrected itself. I've gone into the control panel and played around with my system settings, to no avail. Please advise.

A human will respond with the next step and request specifics (did you see this option? was this box checked or not?) that help determine the logical series of steps to follow.

A bot will pick up three keywords (windows, advise, computer) and respond with a generic paragraph or with suggestions that were often already mentioned in the message (e.g., a suggestion to reboot).
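
A toy version of that keyword behaviour, just to show the failure mode (the triggers and canned replies here are invented for illustration, not a design to copy):

use strict;
use warnings;

my %canned = (
    reboot   => "Have you tried rebooting your computer?",
    windows  => "Please make sure Windows is fully up to date.",
    computer => "Could you describe the problem with your computer?",
);

sub keyword_reply {
    my ($text) = @_;
    for my $word (sort keys %canned) {
        return $canned{$word} if $text =~ /\b\Q$word\E/i;
    }
    return "I'm sorry, I don't understand. Could you rephrase?";
}

my $msg = "I've rebooted and the problem hasn't corrected itself.";
print keyword_reply($msg), "\n";   # suggests rebooting, which the user already ruled out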
posted by Grrlscout at 11:32 PM on February 6, 2009


Bots do not respond appropriately when you misrepresent common knowledge. ("I think it's great that Obama is America's first Latino president.")

Are there any competitions where humans attempt to fool people into thinking they're bots? That would be much more interesting.
posted by Number Used Once at 11:38 PM on February 6, 2009 [2 favorites]


There's the Loebner Prize, a rigorous application of the Turing Test that is held every year to see how AI is coming along. Bots try to pretend they're human, and humans try to pretend they're bots.

Very interesting results this year!
posted by Aquaman at 8:32 AM on February 7, 2009

