Oh I see. People are exactly the same, physically, as fish. For instance, there is no difference in the number of neurons such that fish might be below, and a person above, the threshold of complexity needed to create consciousness as an emergent phenomenon (if it is one). Right. Gotcha.
No, I was suggesting that fish and humans are built on the exact same physical substrate of matter and energy.
I think, as you suggest above, that the reason humans are more conscious than fish is a matter only of organization, not materials. So the argument that computers cannot be conscious because they are not made of the right stuff is untenable.
There are others who think that reasoning is utterly absurd. Totally, completely missing what it is to be conscious. I am among them. To me, such reasoning is analogous to a colorblind person saying that because he can't tell the difference between blue and red, there is in fact no difference. It would just be silly to make such a claim based on his limited ability to sense the state of the thing being discussed.
Your analogy is not very similar to my argument. An analogy that would apply is the person saying that because no one can possibly ever tell the difference between the colors "blu-1" and "blu-2", there is no difference.
It doesn't seem so absurd now, does it?
Believe it or not, I understand your argument. I've read Chalmers and Dennett. I've read about philosophical zombies and the Hard Problem of consciousness. Yes, I can imagine someone else acting like they were conscious when really there was no one home. The ability to imagine the situation does not make it possible. I guess my real point is that if there is no theoretical way to tell the difference, the question is entirely academic, and as useful as arguing that a person or a group of people isn't really conscious.
So if a computer can pass the Turing test at least as often as an intelligent human among other intelligent humans, that is good enough for me. I think we have all previously agreed that it is possible.
« Last Edit: 2003-01-07 19:42:13 by David Lucifer »
No, I was suggesting that fish and humans are built on the exact same physical substrate of matter and energy.
I think, as you suggest above, that the reason humans are more conscious than fish is a matter only of organization, not materials. So the argument that computers cannot be conscious because they are not made of the right stuff is untenable.
Does that make more sense?
It makes more sense.
I personally would not say that computers cannot be conscious because they are not made of the right stuff.
I'd say we don't know either way. I'd say it is VERY POSSIBLE that computers made of current computer-making materials can't be conscious because they are not made of the right stuff.
It is premature to claim anything in either direction because we just don't know enough.
Believe it or not, I understand your argument. I've read Chalmers and Dennett. I've read about philosophical zombies and the Hard Problem of consciousness. Yes, I can imagine someone else acting like they were conscious when really there was no one home. The ability to imagine the situation does not make it possible.
Right. The ability to imagine the situation does not make it possible. Just as the inability to sense whether the situation has occurred does not make it impossible.
What does make it seem possible to create an insensate but humanlike object, to me, is that my long experience in designing software and reading about it leads me to see paths to work toward accomplishing it.
I see no reason whatsoever to think it isn't possible. I think it is very possible and will happen within 100 years at the latest.
I am not alone in that view.
And you are not alone in yours.
But what really fascinates me is the following:
Quote:
I guess my real point is that if there is no theoretical way to tell the difference, the question is entirely academic, and as useful as arguing that a person or a group of people isn't really conscious.
I really cannot grasp how someone could feel that way, but I know there are people who do. At the very least, I would feel a great mystery: is the entity I am talking to conscious or not? Am I wasting my empathy on something that is ultimately no different from a rock?
How you could feel it would make no difference... I can't understand that.
And this will be a practical, not merely abstract, issue in the next 100 years or 200 years. Spielberg's recent movie about the robot kid pointed to the problem pretty well, I thought. Many people thought the robots were mere machines which could be tortured with no remorse. Others felt empathy for them (I know I did in the course of the movie). The moral questions in the movie will occur in real life, I believe.
Overnight I had the thought that the following may be better than the dog example as a way of getting at our difference:
Suppose you were interacting with an entity that behaved, as far as you can tell, like a sensitive, caring person. In every way, he acted like a good friend of yours.
But suppose also that this entity actually was not conscious. It is just a sophisticated version of Eliza. There is nobody home. Suppose that, due to some process of scientific reasoning, you KNEW that to be true. You had no doubt about it.
Would that make a difference to you in interacting with that entity? Why or why not?
What does make it seem possible to create an insensate but humanlike object, to me, is that my long experience in designing software and reading about it leads me to see paths to work toward accomplishing it.
I'm curious about what artificial intelligence programming experience you have.
Quote:
I really cannot grasp how someone could feel that way, but I know there are people who do. At the very least, I would feel a great mystery: is the entity I am talking to conscious or not? Am I wasting my empathy on something that is ultimately no different from a rock?
How you could feel it would make no difference... I can't understand that.
And I can't understand how someone could even consider not treating something as conscious when they cannot tell the difference between it and other entities that they do consider to be conscious. Wouldn't they have to be a sociopath?
« Last Edit: 2003-01-08 11:47:07 by David Lucifer »
Suppose you were interacting with an entity that behaved, as far as you can tell, like a sensitive, caring person. In every way, he acted like a good friend of yours.
But suppose also that this entity actually was not conscious. It is just a sophisticated version of Eliza. There is nobody home. Suppose that, due to some process of scientific reasoning, you KNEW that to be true. You had no doubt about it.
Would that make a difference to you in interacting with that entity? Why or why not?
I think the situation you describe is impossible so I can't say. In this example I would have to assume that the argument started from faulty premises.
I should also point out that no possible process of scientific reasoning would lead me to say that any claim is 100% true. There is always room for doubt.
You asked what AI programming I've done. I am personally responsible for certain mathematical approaches to "reading" an email and "deciding" whether it is spam or not. They are used in several open-source spam filters and are being incorporated into a commercial one right now. (My article on this mathematics, which involves both Bayesian and parametric statistics, will be published in Linux Journal in March.) I have also created collaborative filtering software (movie recommendations, for instance) and have patents in that area. I also did work for the phone company in determining from limited data whether two separate corporate accounts should be thought of as really the same company.
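For readers unfamiliar with this style of filtering: the general idea is the naive Bayes family of text classifiers. The sketch below is not the method from the Linux Journal article, just a generic illustration of word-probability spam scoring; all counts and words are invented.

```python
from math import log

# Toy per-word training counts (hypothetical numbers, not real data):
# how many spam / non-spam ("ham") messages contained each word.
spam_counts = {"free": 40, "offer": 30, "meeting": 2}
ham_counts = {"free": 5, "offer": 4, "meeting": 50}
n_spam, n_ham = 100, 100  # training messages per class

def word_spam_prob(word):
    """Smoothed estimate of P(spam | message contains word)."""
    s = (spam_counts.get(word, 0) + 1) / (n_spam + 2)  # add-one smoothing
    h = (ham_counts.get(word, 0) + 1) / (n_ham + 2)
    return s / (s + h)  # assumes equal priors for spam and ham

def classify(words, prior_spam=0.5):
    """Combine per-word evidence as a sum of log odds (naive Bayes)."""
    log_odds = log(prior_spam / (1 - prior_spam))
    for w in words:
        p = word_spam_prob(w)
        log_odds += log(p / (1 - p))
    return log_odds > 0  # True means "looks like spam"
```

With these toy counts, `classify(["free", "offer"])` flags the message while `classify(["meeting"])` does not; real filters add careful tokenization, corpus-scale counts, and thresholds tuned against false positives.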
I would loosely call those tasks AI because they use machines to do tasks that used to require human intelligence, although some people might want to call them something else.
I've studied neural nets, genetic programming, etc.
So I wouldn't be so pompous as to call myself an "AI Expert", but I am certainly above-average in sophistication with regard to such matters.
Suppose you were interacting with an entity that behaved, as far as you can tell, like a sensitive, caring person. In every way, he acted like a good friend of yours.
But suppose also that this entity actually was not concious. It is just a sophisticated version of Eliza. There is nobody home. Suppose that, due to some process of scientific reasoning, you KNEW that to be true. You had no doubt about it.
Would that make a difference to you in interacting with that entity? Why or why not?
I think the situation you describe is impossible so I can't say. In this example I would have to assume that the argument started from faulty premises.
OK, that's one of the things I was trying to get at in my question -- in not answering it, you answered my real question.
One thing it confirms again is that you don't equate SEEMING conscious with BEING conscious. You can see that the two things are distinct issues, even though you think that in practical terms, one can't occur without the other. (Assuming I understand you correctly) I DO NOT THINK EVERYONE FEELS THAT WAY. I have observed that there seem to be people who would not acknowledge that there is something called consciousness that is separate from conscious-like behavior. That is, they wouldn't say it was impossible for a conscious-like entity to exist that actually wasn't conscious; they would say it was a meaningless question.
(Probably, very few of those have done any kind of meditation.)
Of the people who think the two things are logically distinct, there are some who assume that if something seems conscious then it certainly (or almost certainly) must actually be. The conundrum about Eliza being able to fool people for 30 seconds, and the question of what time interval of being able to fool somebody must actually entail consciousness, does not faze them. They simply believe that if they can't tell the difference after some lengthy-enough amount of time, then the thing must be conscious. You appear, to me, to be in that group. Many of these people are very intelligent and well-informed.
Then there is another group, which includes me and, for example, Ray Kurzweil, who think that the assumption mentioned in the above paragraph is without rational basis. They assume that consciousness and computational performance great enough to mimic a conscious being are distinct issues. Many of these people are very intelligent and well-informed.
There does not seem to be any easy way for a person of one group to convince a person of another that he is right. Ultimately, it seems to be based on intuition drawn from each person's personal experiences with software and machines, and the ways we have thought about life as a human.
The conundrum can, even theoretically, only be answered for sure if/when we have a fairly perfect understanding of what consciousness is.
All this is only relevant to the CoV to the extent that it changes core doctrine or axioms. That will probably only need to happen if/when we get to the point of actually having conscious-seeming machines.
Of the people who think the two things are logically distinct, there are some who assume that if something seems conscious then it certainly (or almost certainly) must actually be. The conundrum about Eliza being able to fool people for 30 seconds, and the question of what time interval of being able to fool somebody must actually entail consciousness, does not faze them. They simply believe that if they can't tell the difference after some lengthy-enough amount of time, then the thing must be conscious. You appear, to me, to be in that group. Many of these people are very intelligent and well-informed.
More precisely, as the evidence mounts supporting the hypothesis that the entity is conscious, the confidence in the hypothesis increases. I don't see any "conundrum".
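The "confidence mounts with evidence" view can be put in explicitly Bayesian terms. The likelihood numbers below are invented purely for illustration; the point is only the shape of the curve, including the fact that confidence approaches but never reaches certainty.

```python
def update(prior, p_evidence_if_conscious, p_evidence_if_not):
    """One step of Bayes' rule: revise P(conscious) after new evidence."""
    num = prior * p_evidence_if_conscious
    return num / (num + (1 - prior) * p_evidence_if_not)

belief = 0.5  # start agnostic about the entity
for _ in range(5):  # five pieces of conscious-seeming behaviour, each
    belief = update(belief, 0.9, 0.3)  # assumed 3x likelier if conscious
# belief climbs toward 1 but never reaches it: confidence mounts,
# yet certainty is never attained.
```

A single observation moves the belief from 0.5 to 0.75; five observations push it above 0.99, which matches the earlier point that there is always room for doubt even as evidence accumulates.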
Quote:
Then there is another group, which includes me and, for example, Ray Kurzweil, who think that the assumption mentioned in the above paragraph is without rational basis. They assume that consciousness and computational performance great enough to mimic a conscious being are distinct issues. Many of these people are very intelligent and well-informed.
I couldn't find anything on KurzweilAI.net to support this claim. Do you have a reference?
Now that I've clarified my position do you still think my assumptions are not rational?
More precisely, as the evidence mounts supporting the hypothesis that the entity is conscious, the confidence in the hypothesis increases. I don't see any "conundrum".
Quote:
Fair enough.
Quote:
I couldn't find anything on KurzweilAI.net to support this claim. Do you have a reference?
Hmmm... I can't spend a lot more time on this, but just quickly I found this: http://www.kurzweilai.net/articles/art0374.html?printable=1 (search for the word "soul"). I'm sure you'll note that he doesn't claim that there is a "soul" as a separate issue from passing the Turing test; he says one can hold that point of view.
But if you read my own postings above, you'll see that that is my position too. My point is not that I am asserting that one can create a machine that passes the Turing test but is not conscious, only that one can't presently say it can't be done, because we just don't know enough.
Quote:
Quote:
Now that I've clarified my position do you still think my assumptions are not rational?
I think they are exactly as rational as someone 200 years ago looking at the failed attempts of people to fly with bird-like wings, and therefore asserting that mankind will never fly. I think it comes out of precisely the same kind of thinking. 200 years ago, it may or may not have been true that mankind would never learn to fly; it was too soon to tell. Same with "soulless" (in Kurzweil's terminology) machines that pass the Turing test.
It would be different if you were asserting that it was meaningless to say that a machine that passed the Turing test was not conscious. But we have clarified that that is not what you are asserting. You merely assert that it is impossible to do in the real world. At this point, I don't see how this is ultimately grounded in anything other than the fact that today we can't prove we know exactly how to do it (although we have plenty of clues, I'd guess at least akin to the clues we had 200 years ago about how to enable humans to fly).
Note that I'm not claiming that it MATTERS what I think; you obviously have as much right to your thoughts as I have to mine. Ultimately, as I point out in the other thread we're engaged in, I think it's pretty much humanly impossible to tell what's rational and what isn't on controversial matters unless they've been totally reduced to symbolic logical notation. (Which is not always even theoretically possible and is usually impossible for all practical intents and purposes.) But in your last post you asked me what I personally thought about how rational you were being on this issue, and so I answered honestly.
I think they are exactly as rational as someone 200 years ago looking at the failed attempts of people to fly with bird-like wings, and therefore asserting that mankind will never fly. I think it comes out of precisely the same kind of thinking. 200 years ago, it may or may not have been true that mankind would never learn to fly; it was too soon to tell. Same with "soulless" (in Kurzweil's terminology) machines that pass the Turing test.
If you think that is a good analogy, then you don't yet understand my position.
If we were having this argument 200 years ago about flying machines, we would both say it is quite possible that machines may one day seem to fly. I'm claiming additionally that if they appear to fly, then chances are that they really are flying. You are saying that we currently lack the knowledge to make that claim. You say maybe there is something about birds that allows them to fly, while machines doing the same thing aren't really flying.
We both agree that seeming to fly and really flying are different conceptually. We already have paper gliders that fool some naive people, for a short period of time, into thinking they are flying machines, but we both know and agree that they aren't really flying. Nevertheless, we both can see how future technology can lead to machines that appear to fly. Our only difference is whether a machine that appears to fly can be said to really fly. I can't understand why you would say maybe it doesn't, while you claim that we just don't know for sure.
Does that make sense?
p.s. If you think my example is farfetched, you might be interested to know that it really happened. After airplanes were invented, some people that previously said that machines will never fly claimed that the airplanes were not *really* flying. I kid you not.
You say maybe there is something about birds that allows them to fly, while machines doing the same thing aren't really flying.
This is extremely difficult to communicate about obviously.
We're talking about consciousness. It's an inner state which may have no outward representations at all -- consider the outward manifestations, for instance, of a Zen meditator vs. someone who is literally brain dead but has been put in the Zen "sitting" position.
In contrast, passing the Turing test is PURELY behavioral. You posit that consciousness is necessary to make it happen, but I see no reason whatsoever to make that assumption.
One thing is an inner state; the other is a behaviour. They are different.
When you say "Our only difference is whether a machine that appears to fly can be said to really fly," that is a different case entirely: the question of whether something can answer questions in ways similar to a human, which is purely behavioural, is totally different from the question of whether that thing is conscious, which doesn't necessarily have anything to do with behaviour. Whereas seeming to fly always has a lot to do with actually flying.
The question of whether a non-conscious entity can mimic behaviours that allow it to pass the Turing test is something that I see as purely an engineering problem. The machine would have to be near people over a long period of time, pretending to be a person, so that it had a lot of human-related data to work with. Still, it's an engineering problem.
My analogy to airplanes was simply that there were people 200 years ago that thought that the engineering couldn't be done, simply because it hadn't been done yet. I think you are making the exact same error.
My statement above again involves my understanding that you and I agree that consciousness and seeming to be conscious are not the same thing.
Unlike Kurzweil, I don't think it will be done in 50 years.
Quote:
p.s. If you think my example is farfetched, you might be interested to know that it really happened. After airplanes were invented, some people that previously said that machines will never fly claimed that the airplanes were not *really* flying. I kid you not.
That is really great; I'd love to see a reference on that!
My statement above again involves my understanding that you and I agree that consciousness and seeming to be conscious are not the same thing.
Do you agree that if AIs seem to be conscious, they will be assumed to be conscious, because there is no possible way to know if they are (other than how they behave)?
It's true today that there is no way of determining consciousness other than behaviour. There is no reason to think that statement will always be true, other than that old favorite, "it can't be done today, therefore it can never be done." Just because we understand virtually nothing about consciousness now doesn't mean we will never know anything about it beyond what can be communicated through physical behaviour, any more than the fact that we knew nothing about electricity a few hundred years ago, other than how it manifested itself in lightning bolts, meant that we would never know anything about it beyond how it manifested itself in lightning bolts.