Is computer artificial intelligence a dead field?
February 19, 2005 2:23 AM
I've seen what evolutionary algorithms can do, but stuff like complex neural nets or what I would consider intelligent is always "10 or so years away." What are the hot topics in this field? Is it progressing? Does anyone with credibility take Star Trek's Data or Gibson's Wintermute type predictions seriously?
This post was deleted for the following reason: Poster's Request -- frimble
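For readers who haven't seen one, the "evolutionary algorithms" the question mentions are surprisingly small programs. Here is a toy sketch in Python (purely illustrative; the target string, population size, and mutation rate are all made-up parameters) that evolves a random bit string toward an all-ones target:

```python
import random

TARGET = [1] * 20                      # toy goal: evolve an all-ones bit string
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 100, 0.02

def fitness(genome):
    # Count how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)} of {len(TARGET)}")
```

Real applications replace the trivial fitness function with something expensive to evaluate, like a circuit layout or an antenna design; the loop itself barely changes.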
The public sector funders (DARPA and NSF, mainly) basically doubt that there's any practical hope of creating true AI unless and until the discrete problems that erdferkel mentions, like parsing basic auditory and visual information, can be solved.
Private funders just don't see any money in cognition engines. Humans think perfectly well and are cheap, particularly if located in India or China. Makes much more sense to pair up a cheap human with a cheap but fast task-based computer and a cheap fat fiber optic pipe to wherever you need to go.
posted by MattD at 5:03 AM on February 19, 2005
Basically what erdferkel and MattD said. The short answer, if you are asking about whether there is still substantial research going on to actually create an "artificial intelligence", is no. An alternative answer, if you are asking about the rather broad variety of research devoted to search algorithms, statistical modelling, and other basic research problems that have typically been (mis)labelled as "artificial intelligence", is yes. There is plenty of research going on in the areas of speech recognition, data mining, information retrieval, image processing, and computational linguistics (e.g. machine translation) that essentially seeks to make use of the vast quantities of visual, audio, and text data available from a huge number of sources, the internet being the most obvious one. Google (and its many ongoing research projects) is fairly representative of modern "artificial intelligence". But don't let the baggage of that term get in the way of your understanding of what Google represents. Google isn't going to become self-aware, run on a "positronic brain", lament its lack of emotional awareness, or do any of that other "Star Trek" hooey anytime in the foreseeable future.
posted by casu marzu at 6:20 AM on February 19, 2005
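To make casu marzu's "statistical modelling" concrete: a great deal of it reduces to counting words and multiplying probabilities. Here is a toy naive Bayes text classifier, the workhorse behind early spam filters (the four-document corpus is invented for illustration; real systems just use far more data):

```python
from collections import Counter, defaultdict
import math

# Tiny labelled corpus (made up for illustration).
docs = [("buy cheap pills now", "spam"),
        ("cheap pills cheap", "spam"),
        ("meeting notes attached", "ham"),
        ("lunch meeting today", "ham")]

word_counts = defaultdict(Counter)   # per-class word frequencies
class_counts = Counter()
for text, label in docs:
    class_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    vocab = len(set(w for c in word_counts.values() for w in c))
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        # log prior + sum of log likelihoods, with add-one smoothing
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("cheap meeting pills"))  # -> 'spam'
```

Nothing in there understands anything; it is exactly the kind of useful-but-unintelligent machinery the term "artificial intelligence" now mostly labels.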
A professor in my department when I was an undergrad, Hubert Dreyfus, argued that the whole field of AI based on computers was stillborn because some of its basic assumptions were false.
posted by shoos at 7:05 AM on February 19, 2005
This Hubert Dreyfus?
I think that you could make a philosophical argument that the concept of "artificial intelligence" is stillborn (in fact, I would agree with you). However, the field of artificial intelligence is very much alive and kicking. The name of the field isn't very accurate; it's more of a historical accident.
posted by casu marzu at 7:18 AM on February 19, 2005
This Usenet post is, in my opinion, a pretty good summary of modern AI (though I didn't write it or know the guy who did).
posted by jacobm at 7:19 AM on February 19, 2005
The main problem with traditional artificial intelligence was that it separated mind from body and imagined a brain cut off from the senses as capable of thought. Most current theories consider the brain the result and organization of sensory input, and the body as basically the data-gathering extension of the brain. In that sense, the brain-in-a-vat style of AI has nowhere to go. Artificial intelligence now has to be concerned with artificial life.
posted by mdn at 8:17 AM on February 19, 2005
Although they're probably wrong, you might be interested in Cyc, whose team hasn't given up on the old-fashioned way.
Most of the early AI pushers didn't realize how freaking complicated human thought is. In my opinion (I work with machine learning stuff sometimes, but am no expert) we are decades and decades away from true AI, which would require immensely more powerful software and possibly seriously complicated analog computing. (See analog robots for an idea of how analog computing might work.)
posted by callmejay at 11:03 AM on February 19, 2005
There's a lot of philosophical meat in this discussion, not least the question "what is artificial intelligence, anyway?" I think a lot of the growth of the field has depended on answering that question correctly. Most people have elected against attempting to create human-like intelligence in the past few decades, but that's not to say that breakthroughs won't come that will suddenly make it much more feasible.
Remember that people tried to develop flying machines for hundreds of years before finally succeeding. If you take the analogy further, note that most early aviation buffs attempted to emulate bird flight with ornithopters, and it was only later that the much more workable fixed-wing airfoil was developed. What most people think of as AI is actually more akin to ornithopter-creation. An airfoil-like solution is waiting in the wings somewhere for someone who thinks outside the box. And I believe this is what a lot of current research is attempting to uncover.
Check out the Common Sense Computing group at the MIT Media Lab. They're tackling the common sense aspect of intelligence, which may be the crucial part. I'd suggest reading John McCarthy's paper, as well.
posted by breath at 11:37 AM on February 19, 2005
Implementing human-style intelligence, deliberately, will require figuring out Vernon Mountcastle's cortical algorithm.
posted by Gyan at 12:28 PM on February 19, 2005
I think there's an excellent chance we'll be able to build something complicated enough that we have to call it sentient. I think advances in neuroscience and connectionism will eventually merge, so that we can basically combine neural circuits in random ways until one works. However, at that point, we won't have any clue how the artificial intelligent thing actually WORKS. Old school AI is all based on the idea that we can perfectly decompose the human brain into little bits, as if it were some sort of really complicated steam engine. I think that idea is mostly dead. I think we need 30-40 years of advances in the various areas, and then some real convergence can happen. Unless, of course, we all get distracted by military contracts (which is unfortunately likely).
I'm just an undergrad in the area, so I'm probably a bit too optimistic.
posted by JZig at 12:36 PM on February 19, 2005
As others have said, the field "AI" is not dead - there is a tremendous amount of research/funding going on under that heading (google "machine learning", for instance). The goals of the field are not to produce something like a human brain, though. The goals are typically to solve hard, specific problems that involve complex decision making: searching "intelligently" through large decision spaces, massively distributing problems over many autonomous agents, figuring out how to have autonomous agents act with "flexibility", improving techniques for answering questions posed by humans given a large body of knowledge, and so on.
Almost no one is doing what the public thought of as AI back in the 70s and 80s. Even the methodologies popular then (expert systems, large-scale symbolic reasoning systems) are not popular now. The only place I can think of is the MIT Media Lab, where they do things such as trying to make a robot face act "human" or convey emotion, that kind of thing. From my experience working in an AI lab as an undergrad, most real AI researchers believe that that class of research is basically a dead end, at least in the short term; showy but pointless.
posted by advil at 1:21 PM on February 19, 2005
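A concrete instance of advil's "searching 'intelligently' through large decision spaces" is A* search, which uses a heuristic to avoid exploring the whole space. A minimal grid-world sketch (the grid and unit step costs are made up; real planners differ mainly in scale):

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path length on a grid of 0 (free) / 1 (wall) cells."""
    def h(p):  # Manhattan-distance heuristic: admissible, so the result is optimal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]
    best_cost = {start: 0}
    while frontier:
        _, cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                new_cost = cost + 1
                if new_cost < best_cost.get((nr, nc), float("inf")):
                    best_cost[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost + h((nr, nc)), new_cost, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # -> 8, routed around the wall
```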
I should hope not, since that's what I'm studying.
Look, PCs and other computers have nowhere near the processing power of the human brain, but in 10 to 20 years they will. My guess is that once that happens the 'statistical models' stuff will work as well as a human.
You just can't expect a computer to effectively emulate something 100 or 1,000 times as powerful as itself. But we're making great strides with statistical systems, so eventually it should work. I think, anyway.
posted by delmoi at 1:29 PM on February 19, 2005
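delmoi's 10-to-20-year figure is easy to sanity-check as back-of-the-envelope arithmetic. Assuming, hypothetically, that the brain has about a 1,000x edge over a 2005 PC and that Moore's law keeps doubling capacity every 18 months:

```python
import math

# Hypothetical, back-of-the-envelope numbers: both the brain estimate and
# the doubling period are contested assumptions, not measurements.
gap = 1000            # brain assumed ~1000x a 2005-era PC
doubling_months = 18  # classic Moore's-law rate

years = math.log2(gap) * doubling_months / 12
print(f"~{years:.0f} years to close a {gap}x gap")  # -> ~15 years
```

Both numbers are assumptions rather than measurements, which is exactly why such projections keep sliding.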
Don't assume Hubert Dreyfus is anywhere near right in his predictions. He's somewhat softened his position (in 1970 he used to say that computers would never be able to play chess well), and it's unclear to me whether his position is now that AI is bankrupt in principle, or that intelligent machines are genuinely possible but just really hard to build, with us nowhere close to the right solution. Dreyfus is kind of a dirty word in the AI community.
Anyway, as most people have pointed out, there are two strands of AI research. One is made up of engineers who are just interested in solving practical problems, like finding the shortest distance between two points on a map. They'll borrow features from human reasoning willy-nilly and implement them in their programs if they think it'll help them out, but they don't especially care if their programs diverge radically from the way that human intelligence actually works. A lot of people working in this manner (mostly computer scientists and engineers) do feel disappointed with AI - the simple neural nets we can build, for instance, just can't do very much very reliably, and research has kinda petered off in favor of hard computation.
On the other hand, there are the cognitive scientist A.I. researchers who are trying to model the brain in software. This field is by no means dead: a lot of psychology departments have at least someone on staff who is involved in computational psychology. The MIT Media Lab, cited up above, is a great example of a place where research of this type still goes on. The sticky mess for the computational psychologists is figuring out how a connectionist network can be modelled to look like a symbolic processor. Pretty much all the Good Old-Fashioned A.I. from the 70s and 80s was symbolic in nature, and now that it looks like the brain is connectionist, we have a tough transition ahead. Especially since so many of the computer scientists don't really care about neural nets.
In the beginning, both the computer scientists and the psychologists saw their research as being roughly equivalent. Once we got over the initial hurdles and realized how massively complex the brain is, the two were forced to diverge. It's been a rocky divorce, as each field wants to extract as much information as it can from the other.
posted by painquale at 3:29 PM on February 19, 2005
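painquale's point that "the simple neural nets we can build... just can't do very much" has a canonical demonstration: a single-layer perceptron cannot learn XOR, the limitation Minsky and Papert made famous. A short sketch shows it failing (the learning rate and epoch count are arbitrary; no setting fixes the problem, because XOR is not linearly separable):

```python
# Perceptron learning on XOR: the rule never converges, since no line
# separates the positive examples from the negative ones.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w1 = w2 = b = 0.0
lr = 0.1

for epoch in range(1000):
    errors = 0
    for (x1, x2), target in data:
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = target - out
        if err:
            errors += 1
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    if errors == 0:
        break

print("epochs run:", epoch + 1, "remaining errors:", errors)  # never reaches 0
```

Adding a hidden layer fixes XOR; scaling that idea up to anything brain-like is the part that stayed hard.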
Is AI a dead field? I dunno, seems to be working quite well for Amazon.
Maybe we don't have Robby the Robot yet, but the fields that AI created (machine learning, natural language processing, pattern recognition to name just a few) are still alive and well, and actually creating results that are used in real products.
And then there's the bottom-up approach, ALife. Discover magazine just had a cover article on digital evolution.
Cool stuff - and it's being worked on at MSU, where I'm currently studying computational linguistics and NLP. NLP hasn't had quite as much mainstream success as machine learning, but to me that makes it all the more interesting :)
So yeah, it's still alive. Just maybe not quite as arrogant as it was in the 50s.
posted by formless at 4:10 PM on February 19, 2005
Well, the American Association for Artificial Intelligence still exists.
On the other hand, Wired Magazine had an interview with Marvin Minsky in August 2003, which was titled Why A.I. Is Brain-Dead.
callmejay mentioned CYC, but I think it's worth more than just a link to their website. This is a project that has spent an estimated $80 million over the past 20 years (the first ten years funded by a consortium, the second ten as Cycorp) to build a database of commonsense reasoning and logic. [For example, "It is raining outside" has implications for raincoat-wearing and getting wet.] And there are a lot of distributed projects that are trying to do the same thing.
posted by WestCoaster at 4:39 PM on February 19, 2005
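WestCoaster's raincoat example is easy to caricature in code. Here is a toy forward-chaining inference loop over four hand-written rules (the rules and fact names are invented; Cyc's actual engine and its millions of assertions are far more elaborate):

```python
# Each rule: if all premises are known facts, conclude the consequent.
rules = [({"it_is_raining", "person_outside"}, "person_getting_wet"),
         ({"person_getting_wet"}, "person_should_wear_raincoat"),
         ({"it_is_raining"}, "ground_is_wet"),
         ({"ground_is_wet"}, "driving_is_slippery")]

facts = {"it_is_raining", "person_outside"}

# Forward chaining: keep applying rules until no new facts appear.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

The loop is trivial. The $80 million went into writing and curating the knowledge, which is the whole bet of the project.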
I study AI (among other things).
AI has acquired a massive amount of dead weight over the years - research that may have looked hopeful and exciting but went nowhere and now only serves to distract. A lot of people who should have known better made very assured statements and predictions which turned out to be meaningless. It is also a huge field, encompassing vastly different disciplines, approaches, and goals.
AI is by no means dead. Like painquale said, there are two aspects to modern AI research. One is to solve practical "weak AI" problems with any of the approaches that AI has been interested in over its history. That field is huge, diverse, and producing tiny but useful results all the time - in fact, you probably don't realize just how much weak AI is behind the technology you use every day. Most of the approaches being used are simply statistical learning methods.
The other aspect is the pursuit of strong AI. This field is also huge, but most of the interesting research is focused on analyzing the cognitive aspects of intelligence and trying to build up links (weak now, but getting stronger all the time) between computational and statistical formalisms on one side and cognitive, neural, and sensory ones on the other. You won't hear any predictions like "strong AI is X years away" from self-respecting people now for obvious reasons, but my opinion is that there is a lot of progress being made that puts us closer to it.
By the way, Dreyfus (whom I asked about his views last spring only to listen to him water down his assertions in a very mellow way) said that human-like intelligence is impossible without a framework of human-like cognition, which, even if true, rules out very nearly nothing that modern AI is concerned with (but does rule out a lot of things AI was concerned with 30 years ago).
posted by azazello at 5:31 PM on February 19, 2005
If I may tack on a question to the thread: do AI researchers still consider the Turing test to be a worthwhile way of gauging machine intelligence? (if they ever did)
posted by Hildago at 10:21 PM on February 19, 2005
Hildago: Not really. There already are a bunch of AI bots that can fool most normal people into thinking that they're speaking to a real person. Those programs are nothing but bags of tricks that don't even remotely approach real intelligence. Most A.I. researchers think that it might be possible to make a very complicated bag of tricks that would fool most people... but it would still be an unintelligent bag of tricks. Not many computer programmers are setting out to crack the test, because doing so probably wouldn't illuminate our concept of intelligence at all.
The problem is with the Turing Test's dependence on 'fooling' the tester. How smart is the tester allowed to be? Normal people can get fooled by simple little programs like Eliza quite easily. It still might be the case that full-blown intelligence is required to have an intricate conversation about international affairs and the taste of wine and anything else one could imagine, and a smart interviewer could try to talk about these things to probe weaknesses that simple A.I. bots normally have. The Turing Test was never meant to be a definitional test (if you can pass it, you're intelligent; if not, you're not), but rather a kind of diagnostic test (if something can pass it, then there's a pretty darn good chance that it's intelligent). Even a lot of A.I. researchers miss this fact.
So, it still might be a pretty good diagnostic test. It's reasonably possible that there's no bag-of-tricks shortcut that will lead to a perfectly conversational robot: you might need to give it knowledge about the world, knowledge about itself and its standing in the world, beliefs and desires, etc., at which point it should probably be considered intelligent.
But I'm just imparting my own interpretation here. Truth is: I'm pretty sure that most A.I. researchers don't give a damn about the Turing Test.
posted by painquale at 12:13 AM on February 20, 2005
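For anyone who has never poked at Eliza: the entire "bag of tricks" is a list of pattern-matching rules with canned replies. A stripped-down sketch (these particular patterns are invented, in the spirit of Weizenbaum's 1966 original):

```python
import random
import re

# Pattern -> canned replies; \1 echoes part of the user's input back.
RULES = [
    (r"i need (.*)", [r"Why do you need \1?", r"Would it really help you to get \1?"]),
    (r"i am (.*)", [r"How long have you been \1?", r"Why do you think you are \1?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "I see.", "How does that make you feel?"]),
]

def respond(text):
    text = text.lower().strip(".!? ")
    for pattern, replies in RULES:
        m = re.match(pattern, text)
        if m:  # the final catch-all pattern guarantees a match
            return m.expand(random.choice(replies))

print(respond("I am worried about my mother"))
# e.g. "How long have you been worried about my mother?"
```

Note that it happily echoes "my mother" back without swapping the pronoun, and people were still fooled; that is exactly the point about how smart the tester is allowed to be.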
When I was looking for a subject to study at PhD level around two years ago I started off looking at AI research groups and ended up choosing Computational Neuroscience instead. 18 months into my PhD I still feel that that was the right choice. My expectation is that we're only a decade or so away from distilling the ideas that will be needed to do artificial intelligence the right way. Clearly that will leave a lot of engineering to do, but I expect the progress to be shockingly fast once the right principles are clearly established.
Why am I so optimistic? Mainly because there seems to be an increasing (though still relatively small) number of people actually looking for principles of organization and operation in the brain as opposed to just trying to catalogue things. We're starting to realize how important it is to view the brain as a system that has to build and maintain itself. We understand that different firing patterns within the brain are as important as the structures that carry them and are starting to understand that there are a limited number of ways that these patterns and their basins of attraction can be developed and maintained. All of this stuff is supported by ongoing improvements in imaging (parts of) the brain in vivo and in vitro.
I would recommend anyone interested in a neuroscience-inspired approach to AI have a look at Steve Grand's latest book, Growing Up With Lucy (disclaimer: he is a friend).
posted by teleskiving at 5:46 AM on February 20, 2005
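teleskiving's "basins of attraction" have a standard computational toy: the Hopfield network, in which stored patterns become attractors and a corrupted input settles back into the nearest one. A small sketch using the textbook Hebbian construction (the two stored patterns are arbitrary; this is not from Grand's book):

```python
import numpy as np

# Two orthogonal 8-bit patterns (+1/-1), stored with the Hebbian outer-product rule.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)            # no self-connections

def recall(state, sweeps=10):
    state = state.copy()
    for _ in range(sweeps):       # asynchronous threshold updates
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = np.array([1, -1, 1, -1, 1, -1, -1, 1])   # first pattern, two bits flipped
print(recall(noisy))              # settles back to [1, -1, 1, -1, 1, -1, 1, -1]
```

Flip a couple of bits and the dynamics pull the state back; that is a basin of attraction in a few lines of linear algebra.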
Azazello, I'm interested in your statement from Dreyfus, "human-like intelligence is impossible without a framework of human-like cognition." Is he ruling out the Physical Symbol System hypothesis? I guess I'm wondering what he means by "human-like cognition".
The PSS says that a system that manipulates symbols has the necessary and sufficient means for intelligence. It's related to the symbolic approach to AI, as opposed to the connectionist approach. The connectionist approach more closely models the neural architecture of the brain, i.e. neural networks.
Hildago: A good argument against the Turing Test is Searle's Chinese Room Argument. I personally think language use is still a good indicator of intelligence, so the Turing test is relevant. Like painquale said, a good diagnostic.
This is somewhat related to the PSS. I don't think a system necessarily needs to be modeled on the neural system to be intelligent. To me the human brain is just one implementation of an intelligent device.
posted by formless at 9:08 AM on February 20, 2005
I dropped the Steve Grand book you recommended in my Amazon Wish List, teleskiving. Thanks for the recommendation!
posted by painquale at 9:30 AM on February 20, 2005
I will respectfully disagree with painquale regarding the Turing test.
First, I think that anybody who uses IM would discover the ruse of any existing AI chatbot in a 5-minute unrestricted Turing test. Now, I know that IM users tend to skew more intelligent, richer, and more nuanced in their conversations than the bell curve of all human beings alive today. But the point holds -- nobody would be tricked by Eliza or any of her descendants.
Second, I think that the Turing Test is a very hard test to pass. And passing it will be a sea change in how people view their computers. Language processing, when unfettered, has all of the complications of the real world. It takes real intelligence to navigate that world. Combinatorial explosion? Check. Multiple meanings of the same words? Check ("we saw her duck"). Hard disambiguations of plain speech? Check.
If a computer passes the Turing test, I'll be sufficiently impressed / surprised. And I'll be willing to say that that program is intelligent. If you could befriend, and I mean really befriend, a computer, then it'd be intelligent. For example, if, as the Metafilter lore goes, Quonsar really is a chatbot, then A.I. has been achieved.
I don't know what A.I. researchers have to say about it, but Mr. GOFAI himself, John Haugeland, seems to think so. I'd bet that many A.I. researchers would accept the Turing Test bar (full natural language processing) as a necessary condition for "intelligence," though I'd guess that it might not be a "sufficient" condition, especially for those ballet-robot-designing AI researchers.
posted by zpousman at 10:06 AM on February 22, 2005
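zpousman's "combinatorial explosion? Check" can be quantified: even before word senses like "we saw her duck" enter the picture, the number of possible binary parse trees over a sentence grows with the Catalan numbers. A tiny demonstration:

```python
import math

def catalan(n):
    # Number of distinct binary bracketings of a sequence of n+1 items.
    return math.comb(2 * n, n) // (n + 1)

for words in (5, 10, 20, 40):
    print(words, "words ->", catalan(words - 1), "possible binary parse trees")
```

Forty words admit on the order of 10^21 bracketings, which is why brute-force search was never going to carry natural language processing.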
Yeah, I might be representing the A.I. community wrongly by saying that most have given up on the Turing Test. Thanks, zpousman.
posted by painquale at 10:45 AM on February 22, 2005
Since the mid-nineties, artificial intelligence has lost much of the overblown self-confidence it had in the thirty years before that. The "10 years away" cliché didn't work forever...
Wintermute, HAL and Data are still decades away, if they should ever be possible; but there have been great results in fields like image (or voice) understanding, data mining or bioinformatics. Of course, that's next to nothing compared to "real intelligence", and sadly almost all of the working approaches are more or less applied statistics, not super-cool intelligent algorithms; but there's a really, really long way to go to solve some really, really hard problems.
Hot topics, as far as I can tell, are recognition and understanding of voice or speech, data mining, and the semantic web; the latter either hyped or despised, depending on whom you ask.
posted by erdferkel at 3:46 AM on February 19, 2005
This thread is closed to new comments.