I, Emote.
February 24, 2017 1:01 AM
Is it scientifically possible to create artificial sentience?
Hi there! I just read a few IoT articles and am now wondering about how consciousness works from a chemical perspective. Would it be possible to create a conscious computer through artificial or digital means? Is it possible to create robots with emotions? If so, how?
A lot depends on what we accept as emotion or consciousness. I liked Jules a lot. I can imagine if I spent enough time with him, I'd feel like he was a person. There's another video of him expressing apprehension about leaving. Must do some googling and find out what happened to him.
There are reasonable grounds to believe that there's a whole sliding scale of consciousness, and even fairly rudimentary AIs might just about get onto the lowest of the lower slopes already. Arguably.
posted by rd45 at 1:25 AM on February 24, 2017
You might be interested in the Chinese Room thought experiment.
Theoretically, we could sooner or later build a thing that can process as much, as fast, and as elegantly as the greatest human minds. So at what point would a programmed machine be considered conscious?
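To make the intuition concrete, here's a toy sketch: a program that gives passable answers by pure symbol lookup, which is exactly the kind of system Searle argues understands nothing. The rulebook is entirely made up:

```python
# Toy sketch of the Chinese Room: plausible replies produced by pure
# symbol lookup. Nothing in here understands what the symbols mean.
# The rulebook below is a made-up stand-in for Searle's book of rules.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会思考吗？": "当然会。",      # "Can you think?" -> "Of course."
}

def room(symbols_in: str) -> str:
    # Match the incoming symbols, hand back the prescribed output symbols.
    return RULEBOOK.get(symbols_in, "请再说一遍。")  # "Say that again?"

print(room("你会思考吗？"))  # looks like a thoughtful answer; it isn't
```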
posted by stellathon at 1:40 AM on February 24, 2017
The concept of "consciousness" is a really complicated one, and we don't know how consciousness works from a neurophysiological perspective. Some scientists think that's not even a well-formed question. So, we're pretty far away from answering the question of what kinds of technology we'd need to replicate it.
I just read this article about consciousness yesterday, which you might find interesting. It's a book review, which is actually helpful even if you haven't read the book because it presents some disagreements within the field.
posted by Kutsuwamushi at 3:40 AM on February 24, 2017
You might enjoy reading the book "I Am a Strange Loop" by Douglas Hofstadter. Building AIs may be the only way to test out some of these theories about what consciousness is and what processes give rise to it.
posted by rikschell at 4:52 AM on February 24, 2017
This is an open question in philosophy of mind and probably depends on your definitions of consciousness and artificial intelligence. We don't have good definitions for either. Consciousness and Its Place in Nature is a pretty accessible paper by David Chalmers that's worth reading on this subject.
posted by Prunesquallor at 5:44 AM on February 24, 2017
With Prunesquallor: You may get some interesting ideas and opinions here. But right now, among the smartest people who think about this stuff full-time, there's no agreed-upon answer to this.
As an indication of how hard and undecided this question is: In philosophy, the question of how and whether physical systems can be conscious is actually called the Hard Problem of Consciousness. As further indication of how undecided it is, philosophers also disagree over whether it actually should be called that.
So: Not a question to which ask.mefi is going to provide a definitive answer.
posted by ManInSuit at 5:52 AM on February 24, 2017
Like most if not all future scientific facts, it's just not possible to know the answer at this point in time. There really is no fully agreed definition and analysis of "sentience"; it's often like the pornography definition: "you know it if you have it".
But it's pretty clear that some form of the Turing test will be smashed in the not-too-distant future. If the outside observer cannot determine whether the test candidate is machine or human, how will the observer refute a claim of self-awareness?
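The setup itself is simple. A minimal sketch, with both respondents as hypothetical stand-ins:

```python
# Minimal sketch of the imitation game: a judge questions two unlabeled
# respondents and must say which one is the machine. Both respondent
# functions here are hypothetical stand-ins, not real systems.
import random

def hidden_human(prompt: str) -> str:
    return input(f"(human, type a reply to {prompt!r}) > ")

def candidate_machine(prompt: str) -> str:
    return "That's a hard question; let me think about it."  # stub chatbot

def imitation_game(questions):
    a, b = random.sample([hidden_human, candidate_machine], 2)  # hide roles
    for q in questions:
        print("A:", a(q))
        print("B:", b(q))
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    print("correct" if (a is candidate_machine) == (guess == "A") else "wrong")

imitation_game(["Do you ever dream?", "What did you have for breakfast?"])
```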
If you had asked any tech person in early 2006 whether we would ever have machine translation of languages, they would have just shaken their heads and said "maybe one day"; then Google Translate was released. To some degree that was down to the volume of CPU and storage available to throw at the problem. While Moore's Law, the classic metric, seems to be slowing, the power and availability of GPUs and the cheapness of storage are accelerating; any phone is essentially a supercomputer. So if it's possible to build a sentient machine at all, it seems inevitable that the hardware to run a self-aware computer will be available in our lifetimes.
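To put rough numbers on the hardware point, assuming (purely for illustration) a steady doubling period:

```python
# Back-of-envelope compounding: if effective compute per dollar doubles
# every `period` years, the multiplier over `years` is 2 ** (years / period).
def compute_multiplier(years: float, period: float = 2.0) -> float:
    return 2 ** (years / period)

print(f"{compute_multiplier(40):,.0f}x over 40 years at a 2-year doubling")
print(f"{compute_multiplier(40, 3):,.0f}x if doubling slows to every 3 years")
# ~1,048,576x vs ~10,321x -- slower, but still astronomical growth
```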
posted by sammyo at 6:02 AM on February 24, 2017
Came to mention the "hard problem." ManInSuit's link mentions David Chalmers right up front, and if you're interested a lot of Chalmers' interviews and presentations from the last 30-ish years are on YouTube. I love these older ones in which he's interviewed by Jeffrey Mishlove. Start there and work your way to the present, as Chalmers' appearance morphs more deeply into heavy metal before transitioning into a short-haired established professor.
posted by late afternoon dreaming hotel at 11:44 AM on February 24, 2017
I don't really think so. Computers are stupid. You literally have to tell them what to do, programming a response to each and every single thing. Artificial sentience can be defined in different ways. Is it a computer's ability to pass the Turing test, where a person has a text-only conversation through a screen with the computer and, because it seems human in the way it responds, can't tell that it is a computer? The computer would need to know the laws of conversation in order to hold one that is natural and realistically human-seeming.
I took a few CS courses in college, and what I do remember is that you have to program everything, down to the simplest method. For an AI to seem human, it would need to collect and perceive stimuli (data) and respond to that data in a way that is ultimately intuitive. How do you program intuition? How would the computer know to make connections between different perceived stimuli? That ability to make connections will always have to be programmed. Maybe we'll reach a point where this is possible, but I'm imagining a future where computers can watch humans, know what to look for, and replicate the behavior in a convincing way. It would ultimately be mimicry, not non-organic sentience.
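To be concrete about how "making connections" actually gets programmed: in machine learning you write the update rule, and the connection strengths between stimuli fall out of the data. A toy sketch, with made-up training data:

```python
# Toy perceptron: the programmer writes only the learning rule; the
# connection strengths (weights) between stimuli emerge from examples.
# The training data below is made up for illustration.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred  # -1, 0, or 1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn AND purely from examples; no AND rule was ever written down.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
```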
posted by hellomina at 4:59 AM on February 28, 2017
This thread is closed to new comments.