Re: The Turing test
« Reply #45 on: 2003-02-04 15:56:48 »
[David Lucifer replying to garyrob] You reject the Turing test because it is hypothetically possible to display behavior that appears conscious without actually being conscious. I merely pointed out that you can use the exact same reasoning to reject any possible objective evidence of consciousness. I'm afraid you can't have it both ways.
[rhinoceros] It is reasonable to say that it is hypothetically possible for a computer to display behavior that appears intelligent while it is not (I'll avoid using the word conscious for now, for reasons of personal laziness -- I wouldn't want to have to justify that). Turing himself started by presenting his test as an imitation game in his original paper -- a must-read.
He even started with a test where a man tries to trick the judge into believing that he is a woman, and he explicitly called the test an imitation game. Of course, that does not mean we can't go further. If a machine could ever make everyone believe it is intelligent, we would have no reason to treat it as if it were not.
That said, and although Turing himself makes some good points in his paper, I think that the test as described is inadequate. First, I am sure that many computer programs would beat me right now at writing poetry, for example.
The Turing test cannot avoid subjectivity or social and educational bias. A human of the 15th century, or a human coming from a marginal background, might fail to respond meaningfully to the kind of conversation the judge would choose to make, because of differences in knowledge, cultural background and "way of thinking". Of course, the test could also miss "alien intelligence".
The Turing test, as described, is also rather static. Our intelligence is developed through active communication and interaction with people and things in the context of our culture. It is not developed just by learning stuff that we are told. Human intelligence includes creating new concepts, new symbolic representations, and evolving language. I can imagine that computers might be able to do something similar, or even something different which would have similar results acceptable to a human judge, instead of being provided with the knowledge. But how would the Turing test detect their ability to do so? The Turing test examines only a snapshot. A machine which could pass the test now could fail the test after 10 or 100 years.
I think that the value of the Turing test is confined to a restricted kind of machine intelligence. Perhaps other tests could be devised for machines with a more dynamic kind of intelligence.
Here is a link to an interesting chat about the Turing test, at the end of which there is a part by Douglas Hofstadter:
It is reasonable to say that it is hypothetically possible for a computer to display behavior that appears intelligent while it is not (I'll avoid using the word conscious for now, for reasons of personal laziness -- I wouldn't want to have to justify that). Turing himself started by presenting his test as an imitation game in his original paper -- a must-read.
Whether the Turing test is a test of intelligence and whether it is a test of consciousness are two distinct questions.
I'm willing to switch to the question of testing intelligence. How do you define it? Does an insect displaying an evolved instinct appear to be intelligent without being intelligent? What is the simplest animal that has real intelligence in your opinion?
Re: The Turing test
« Reply #47 on: 2003-02-08 04:56:06 »
[rhinoceros 1] It is reasonable to say that it is hypothetically possible for a computer to display behavior that appears intelligent while it is not (I'll avoid using the word conscious for now, for reasons of personal laziness -- I wouldn't want to have to justify that). Turing himself started by presenting his test as an imitation game in his original paper -- a must-read.
[David Lucifer 2] Whether the Turing test is a test of intelligence and whether it is a test of consciousness are two distinct questions.
I'm willing to switch to the question of testing intelligence. How do you define it? Does an insect displaying an evolved instinct appear to be intelligent without being intelligent? What is the simplest animal that has real intelligence in your opinion?
[rhinoceros 3] After stating the initial problem, "Can machines think?", Turing realized the difficulty of defining "machine" and "think". Rather than trying to do that, he replaced the original question with a more tangible imitation test: "Try to tell apart a machine from a human" and then "Are there imaginable digital computers which would do well in the imitation game?"
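The protocol Turing describes can be sketched in a few lines. This is only a minimal illustration of the question/answer structure; the `ask`, `judge`, and player functions are placeholders I have made up, not anything from Turing's paper:

```python
def imitation_game(ask, judge, player_a, player_b, rounds=3):
    """Run the text-only imitation game: the interrogator poses questions
    to two hidden players and must guess which one is the machine.
    Returns the judge's verdict, 'A' or 'B'."""
    transcript = []
    for i in range(rounds):
        question = ask(i)
        # The interrogator sees only the text answers, never the players.
        transcript.append((question, player_a(question), player_b(question)))
    return judge(transcript)
```

The point of the abstraction is that the interrogator sees nothing but the transcript -- which is exactly why the test is purely behavioral.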
If we are still talking about the Turing test, questions of whether an insect with evolved instincts "can think" would be handled using the imitation test. To pass the test, an insect with evolved instincts would have to make the interrogator think that it is a human. So the simplest animal that could display intelligence in a Turing test seems to be a human and nothing less. Of course, the test does take into account human behaviors due to instincts, among other things.
As I argued in my previous post, the Turing test is basically a test of the ability of an entity to display a snapshot of human behavior. This makes the Turing test narrower and at the same time wider as a test of whether an entity has what we usually call intelligence. Machines -- not insects -- might reach a point where they can interact socially, devise new paradigms, set their own goals... become "more human". By our usual definitions of intelligence, however, much less than that can be enough to call an entity intelligent. Intelligence is an adjustable target left to our choice -- it depends on why we ask. But the Turing test, the imitation test, requires no less than human intelligence.
The term consciousness is even more elusive than intelligence, because it includes a "subjective feeling", as you said. We could choose to grant consciousness to a machine or to an insect with evolved instincts. In this case, we might or might not require high intelligence, but our best argument would be "why not?". The Turing test, of course, would still require no less than behavior characteristic of what we call human consciousness.
If we are still talking about the Turing test, questions of whether an insect with evolved instincts "can think" would be handled using the imitation test.
No, I was asking about how you define intelligence. For instance, do you use a functional definition, only looking at behavior? Or is it important how the intelligent behavior is generated?
Quote:
As I argued in my previous post, the Turing test is basically a test of the ability of an entity to display a snapshot of human behavior. This makes the Turing test narrower and at the same time wider as a test of whether an entity has what we usually call intelligence. Machines -- not insects -- might reach a point where they can interact socially, devise new paradigms, set their own goals... become "more human". By our usual definitions of intelligence, however, much less than that can be enough to call an entity intelligent. Intelligence is an adjustable target left to our choice -- it depends on why we ask. But the Turing test, the imitation test, requires no less than human intelligence.
Wider and narrower? The Turing test merely says that if something passes the test it is almost certainly intelligent. If it doesn't pass the test, nothing can be concluded.
Re: The Turing test
« Reply #49 on: 2003-02-10 20:25:58 »
[David Lucifer 4] No, I was asking about how you define intelligence. For instance, do you use a functional definition, only looking at behavior? Or is it important how the intelligent behavior is generated?
[rhinoceros 5] If I had to give a general definition of intelligence as I understand it, I would define it only by looking at behavior. But I would take into account not only the ability to solve problems, but also the ability to devise ways to solve new kinds of problems. I wouldn't even require that it should look like human intelligence, just in case I miss the aliens.
From what I have seen, a lot of AI research today is targeted towards creating machines that would display a more human-like way of interaction with humans, so that they can be used as tools, as the market commands. A kind of Turing test could be useful for that purpose, but I would not demand human-like behavior to call a machine intelligent. In this sense, the Turing test seems to adopt a narrower definition of intelligence than mine.
[rhinoceros 3] As I argued in my previous post, the Turing test is basically a test of the ability of an entity to display a snapshot of human behavior. This makes the Turing test narrower and at the same time wider as a test of whether an entity has what we usually call intelligence. Machines -- not insects -- might reach a point where they can interact socially, devise new paradigms, set their own goals... become "more human". By our usual definitions of intelligence, however, much less than that can be enough to call an entity intelligent. Intelligence is an adjustable target left to our choice -- it depends on why we ask. But the Turing test, the imitation test, requires no less than human intelligence.
[David Lucifer 4] Wider and narrower? The Turing test merely says that if something passes the test it is almost certainly intelligent. If it doesn't pass the test, nothing can be concluded.
[rhinoceros 5] I called the Turing test "narrower" because it would leave out any intelligence which does not appear as human. I called it "wider" because it would include implanted behaviors which do appear as "snapshot" human intelligence but will not work under different conditions, as I explained, because the ability to devise ways to solve new kinds of problems is possibly not there.
If nothing can be concluded in case something doesn't pass the Turing test, then I withdraw the "narrower" argument.
Editing to add this comment: Not passing the Turing test means that the interrogator in the imitation game finds out which one of the players is a machine. If we still cannot conclude that the machine was not intelligent, that means that we admit there are forms of intelligence which the Turing test cannot recognize.
Many people have questions about the origins of the universe. The generally accepted scientific theory of the universe holds that at some point it did not exist (or rather, from some frame of reference, it did not exist). Since time and logic were not present in that frame of reference, it would be possible for them to generate themselves, i.e. a rule could form in the nothingness stating: "Rules can form from nothingness." Rules would then possibly form from this nothingness, and eventually a rule would form stating: "The generation (creation) of rules must be a product of and comply with rules that exist." This would be preliminary to rules of cause and effect, and would establish a temporal framework for further events. While there are many ways for these frameworks to generate themselves, I hope you understand the point I am making, which is that it is possible our universe began in this manner.

People will wonder how concrete substances such as matter formed out of rules of logic; they will argue that while physics may be the result of a self-generating organizational structure, the fire in the fireplace is not, and that the matter must have come from somewhere. This is incorrect, because the matter in the fireplace is not necessarily real. It is not there in every frame of reference. You and I perceive the warmth of the fire because we are inside the frame of reference of physics, and rules of physics interact with our bodies to create the perception of the fire.

For example: if you play the game of Asteroids, from your frame of reference the asteroids are simply simulated, not composed of actual matter. However, if you were an asteroid, and if your perceptions were regulated by the rules of the game, the little ship blowing you to pieces would be real. You would be blown to pieces, unaware that being blown to pieces is simply the result of the game's logic. From a frame of reference inside the game, everything in the game is real.
I do not believe consciousness exists. I think that from a removed frame of reference there is no special quality that makes us conscious. We believe we are conscious because we are in our own heads. If a program tells itself it enjoys the fire, then it enjoys the fire from its frame of reference, just as we enjoy the fire from our frame of reference. Ultimately, there is no great difference in consciousness between a machine emulating a human and a human emulating a human.
While this little essay is not as stringently well crafted as the previous entries, I hope my basic belief is made clear.
The perception of our own consciousness is not the measurement of an abstract property, but rather a simulation resulting from our immersion in a particular frame of reference.
Whether the Turing test is a test of intelligence and whether it is a test of consciousness are two distinct questions.
I've always thought the Turing Test, while good in its own right, falls short since it really only requires a machine to emulate a conversational task. I find it telling that some people don't pass the Turing Test.
For true consciousness and self-awareness, what's wrong with the machine asking the unsolicited question, "What happens after I am no longer in existence?"
I've always thought this would be instant proof that the machine has achieved true self-awareness by comprehending the world existing separate from itself.
I've always thought the Turing Test, while good in its own right, falls short since it really only requires a machine to emulate a conversational task.
How is that coming up short? Keeping up your end of an intelligent conversation might require real intelligence.
Quote:
I find it telling that some people don't pass the Turing Test.
That is not a problem because failing the Turing Test doesn't mean anything.
Quote:
For true consciousness and self-awareness, what's wrong with the machine asking the unsolicited question, "What happens after I am no longer in existence?"
I can program that into a chat bot in less than a minute.
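As a sketch of why that claim is plausible (a made-up minimal bot, not any real chat program), the "unsolicited" question can be nothing more than a canned string interjected at random, with no understanding behind it:

```python
import random

CANNED_QUESTION = "What happens after I am no longer in existence?"

def bot_reply(user_input, interject_probability=0.2, rng=random.random):
    # A stock response stands in for whatever the bot would normally say.
    response = "Interesting. Tell me more about that."
    # The "unsolicited" existential question is interjected at random --
    # its appearance demonstrates nothing about self-awareness.
    if rng() < interject_probability:
        response += " " + CANNED_QUESTION
    return response
```

The question arrives unprompted from the user's point of view, yet it is pure mimicry.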
I've always thought the Turing Test, while good in its own right, falls short since it really only requires a machine to emulate a conversational task.
How is that coming up short? Keeping up your end of an intelligent conversation might require real intelligence.
Quote:
I find it telling that some people don't pass the Turing Test.
That is not a problem because failing the Turing Test doesn't mean anything.
Quote:
For true consciousness and self-awareness, what's wrong with the machine asking the unsolicited question, "What happens after I am no longer in existence?"
I can program that into a chat bot in less than a minute.
Okay, interesting points. I suppose the Turing Test isn't bad, but you're never going to be able to convince the naysayers until the machine asks about its own demise without being prompted (which is what I meant before).
My feeling is that you can't have true A.I. without a core algorithm that is capable of assimilating new information. I'm not a big fan of the "brute force" approaches that involve coding lots of rules and laws into the machine. I think if we focused more energy into an algorithm that was capable of learning and set it loose on the 'Net it would be much better.
And from that, if the machine determined on its own, during its own learning processes, that it could die -- I would definitely consider this a strong indication of self-awareness. I mean, isn't that a clear demonstration that an intelligent entity has determined it is separate from its surroundings? Of course, we're discussing intelligence here, not self-awareness.
Okay, interesting points. I suppose the Turing Test isn't bad, but you're never going to be able to convince the naysayers until the machine asks about its own demise without being prompted (which is what I meant before).
Even then I'm sure there will still be naysayers.
Quote:
My feeling is that you can't have true A.I. without a core algorithm that is capable of assimilating new information. I'm not a big fan of the "brute force" approaches that involve coding lots of rules and laws into the machine. I think if we focused more energy into an algorithm that was capable of learning and set it loose on the 'Net it would be much better.
I entirely agree that intelligence requires learning. But learning is not sufficient for intelligence. Our own Alice bot, Futura, in the #virus channel is capable of learning but I don't think anyone would claim that she is intelligent.
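To make the "learning is not sufficient" point concrete, here is a toy sketch. This is deliberately dumb and is not how the actual AIML/Alice engine works; it only illustrates that a bot can "learn" by memorizing pattern/response pairs while plainly understanding nothing:

```python
class MemorizingBot:
    """A bot whose 'learning' is nothing more than storing associations."""

    def __init__(self):
        self.memory = {}

    def teach(self, pattern, response):
        # Learning here is verbatim memorization, case-insensitive.
        self.memory[pattern.lower()] = response

    def reply(self, user_input):
        # No generalization, no understanding: unseen inputs get a stock answer.
        return self.memory.get(user_input.lower(), "I don't know that yet.")
```

Such a bot learns, in the thin sense of acquiring new responses, yet it exhibits none of the ability to devise solutions to new kinds of problems discussed earlier in the thread.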
Right, so intelligence isn't just learning, right? It's understanding. So how exactly can you prove that something can understand something the way you understand it? The only reason we attribute self-awareness to each other is because we think we're all human and that if I'm self-aware then you must be self-aware.
This was a big topic in Dennett's Consciousness Explained book which I read in preparation for Pinker's How The Mind Works.
I then go back to the fact that the best evidence you could put forward is the question of death when it's not initially programmed in. If a learning algorithm is put together, obviously a part of that would be to ask questions about things it can't figure out on its own. At the point at which it learns about death and applies the concept to itself, I feel that's the end point. It has demonstrated not only an ability to learn on its own (not intelligence), but a capability to understand that information by applying it to itself. Of course, long before that it could be making those connections to demonstrate intelligence, but self-awareness is the unique ability to distinguish the self from the world. If an intelligent machine successfully puts "2 and 2 together" and applies a concept such as death to itself -- well, I'm just not sure what more proof you could ask for. At some level you just have to take it on "faith" -- and I mean the little faith, not the big Faith.
Right, so intelligence isn't just learning, right? It's understanding. So how exactly can you prove that something can understand something the way you understand it? The only reason we attribute self-awareness to each other is because we think we're all human and that if I'm self-aware then you must be self-aware.
One way to demonstrate understanding is to apply the knowledge in a real world situation, such as a conversation with someone who already understands the subject matter, i.e. a Turing Test.
Quote:
I then go back to the fact that the best evidence you could put forward is the question of death when it's not initially programmed in. If a learning algorithm is put together, obviously a part of that would be to ask questions about things it can't figure out on its own. At the point at which it learns about death and applies the concept to itself, I feel that's the end point. It has demonstrated not only an ability to learn on its own (not intelligence), but a capability to understand that information by applying it to itself. Of course, long before that it could be making those connections to demonstrate intelligence, but self-awareness is the unique ability to distinguish the self from the world. If an intelligent machine successfully puts "2 and 2 together" and applies a concept such as death to itself -- well, I'm just not sure what more proof you could ask for. At some level you just have to take it on "faith" -- and I mean the little faith, not the big Faith.
Human children don't understand their own mortality before they are around 10. Would you say they are not intelligent until then? Would you say you can't know that they are intelligent until then?
I don't think the concept of mortality is the end point, as you say. I think intelligence is on a spectrum with no upper bound, and our knowledge of a system's intelligence is also on a spectrum but between 0 and 1 reflecting our confidence based on evidence. If a system seemed to have the concept of its own mortality, that would place it minimally at a certain level of intelligence (quite a bit below human average), but my confidence would depend a lot on what exactly it did to give me that impression and also on what else I have observed.
One way to demonstrate understanding is to apply the knowledge in a real world situation, such as a conversation with someone who already understands the subject matter, i.e. a Turing Test.
Good point!
Quote:
Human children don't understand their own mortality before they are around 10. Would you say they are not intelligent until then? Would you say you can't know that they are intelligent until then?
I admit, I am mixing intelligence with self-awareness. I would say children aren't fully self-aware until mortality is thoroughly grasped, but certainly they have a degree of intelligence.
Quote:
I don't think the concept of mortality is the end point, as you say.
As far as intelligence is concerned, I agree. But can't any degree of intelligence be better with self-awareness than without? Or is self-awareness simply an illusion occurring within anything sufficiently intelligent? In which case, how do you know when a machine has reached that point? Does it matter? Probably only if we think ethics and morals should be applied to any sufficiently intelligent system, right?
Quote:
If a system seemed to have the concept of its own mortality, that would place it minimally at a certain level of intelligence (quite a bit below human average), but my confidence would depend a lot on what exactly it did to give me that impression and also on what else I have observed.
You think a system programmed to learn and advanced enough to conceive of its own mortality wouldn't by default already have to have gained a good amount of intelligence to reach that point at all? It sounds as though you are saying it doesn't take much to understand one's own mortality. To that I would disagree.
But I think I jumped into the middle of this without defining a couple of basics. Are we in agreement there is such a thing as self-awareness and while related to intelligence it isn't the same thing? Would an independently learning system that is gaining knowledge and in theory intelligence come up with a concept of its own mortality on its own or does it have to be explicitly taught it? Does this even mean anything?
I've also always assumed that conceiving of one's own mortality was an important step in the formation of the concept of the "self" in a person's brain. Without it there isn't that final division between the brain and the world around it. Not only that, but I've always looked toward the conception of mortality as the crowning moment at which the brain decides that the world can't possibly exist without it (since it's never known a world it wasn't a part of) and often decides that its own self-awareness is permanent.
Back to the Turing Test - it sounds like the Turing Test is looking to decide on true intelligence and I agree that it fits the bill. Does that mean that self-awareness comes along with that? I understand that during the course of the Turing Test the machine could demonstrate a deeper understanding of the topics being discussed, but how can you convince the user of true self-awareness given that the entire conversation could be brute forced with a sufficiently powerful machine and a large enough database? That wouldn't be self-awareness so much as it would be a mimic of human conversation.
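The "large enough database" worry can be put in perspective with a back-of-the-envelope count (the numbers here are illustrative assumptions, nothing more): even a modest vocabulary makes the space of possible sentences, let alone whole conversations, astronomically large.

```python
# Illustrative assumptions: a 1000-word working vocabulary and
# sentences roughly 20 words long.
VOCABULARY_SIZE = 1000
SENTENCE_LENGTH = 20

# Number of distinct word sequences of that length:
# 1000**20 == 10**60, far beyond any database that could be
# stored or searched by brute force.
possible_sentences = VOCABULARY_SIZE ** SENTENCE_LENGTH
print(possible_sentences == 10 ** 60)  # True
```

So a pure lookup table cannot actually sustain a long, free-ranging conversation; whatever passes such a test must be doing something far more compact than replaying stored exchanges.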
So then does a machine passing the Turing Test mean that debates will be sparked over its "rights" as a self-aware machine? Doesn't the machine first have to provide evidence of its own self-awareness before anyone would be interested in providing it rights of any kind? Is passing the Turing Test the final test that would be required? Would a race of intelligent machines ever deserve any kinds of rights?