It's true today that there is no way of determining consciousness other than behaviour.
So if that doesn't change by the time we have machines that can pass the Turing test, then you agree that we will categorize them as conscious, treat them as conscious, and for all intents and purposes they will be conscious.
Quote:
There is no reason to think that statement will always be true other than that old favorite "it can't be done today therefore it can never be done."
I will give you one very good reason. Consciousness is necessarily subjective which means it is not amenable to scientific inquiry. Have you read anything by David Chalmers on the subject? I would recommend Facing Up to the Problem of Consciousness.
Here's an excerpt:
Quote:
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.
I assume that it is this aspect, the experience of consciousness, that we are talking about here.
So if that doesn't change by the time we have machines that can pass the Turing test, then you agree that we will categorize them as conscious, treat them as conscious, and for all intents and purposes they will be conscious.
I think we're in sync here. I can imagine a possible alternative: if we find that there are people who treat them badly just because they are machines, as happened in Spielberg's film A.I., and we find that we can't stop people from doing so, we may have to outlaw the creation of such machines until we know whether they are really conscious or not.
Quote:
I will give you one very good reason. Consciousness is necessarily subjective which means it is not amenable to scientific inquiry. Have you read anything by David Chalmers on the subject? I would recommend Facing Up to the Problem of Consciousness.
I haven't read anything by Chalmers yet, and hope to find the time at some point, but in the quote you give below he is merely stating something very poetically that I have tried to state a number of times in this thread. It is the key point about the difference between consciousness and turing-test-passing behavior. When I talk to people about it, I often use a visual example, as he does, such as experiencing a fireplace (as opposed to "redness" in the text below, although I have used the color example as well). He's saying the exact same thing, but better, because he's a professional writer and I'm not.
Quote:
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.
I think you are dead wrong in the jump you are making. Just because consciousness is experience, and we know virtually nothing about it, and therefore, today, have no reliable way to determine whether a Turing-test-passing machine is conscious, does not mean that that will always be the case.
[the following is an edited version of my initial post, which I changed because I thought I could address it better].
Let's see if I understand your point of view.
* You seem to agree that consciousness and passing the Turing test are different matters. In other words, you seem to agree with me that the Turing test isn't valid as a determinant of what is and isn't conscious.
* You also seem to be saying that since consciousness is subjective experience, it will be forever out of reach of our understanding; therefore we will never have a better way than the Turing test for knowing whether something is conscious.
* You therefore draw the conclusion that if something passes the Turing test we have a moral imperative to treat it as conscious. You conclude that it doesn't matter, in a sense, whether it is or isn't conscious, since there is no way for us to know for sure. For all practical intents and purposes, the Turing test remains the standard.
Let me know if I am misinterpreting you.
I think you are making the following error.
There was once a time when we could look up at the sky and, realizing that we didn't have the means to go up there or do anything more than gaze at it with our naked eyes, conclude that the nature of the motions of the objects we viewed would be forever unknowable. We could posit this or that about it, but we couldn't PROVE that the sun was not rolled along by a giant equivalent of a dung beetle (as the ancient Egyptians thought), and therefore we should act as if it were. Even in principle there was no way to prove it, because there was no way to gather any information that would help us decide; it was completely out of our reach. And since we didn't want to anger the giant dung-beetle-like thing, to hedge our bets we had to worship it, etc.
One error in that reasoning was that although, according to all the known principles and science of the time, there was absolutely no way of imagining getting a better understanding of the sky, that turned out not to be the case. It was a mere assumption, ultimately based on our old favorite "what can't be done now can never be done."
You say, similarly, that we can never know whether a Turing-test-passing object is really conscious, that that information is simply out of our reach due to its having to do with subjective experience, and therefore we should hedge our bets and assume that it is, so that we don't cause suffering to a conscious entity.
But in reality, just because the sky was out of our reach many years ago, and an understanding of consciousness is out of our reach now, does not mean that we will never be able to understand, measure, etc. It is a mere groundless assumption.
You might make the point, "well, even if we do think we have an understanding at this point, we can STILL never really KNOW, because it is subjective etc."
I saw a book recently, written sometime in the last 50 years, that attempted to reconcile our basic knowledge of astronomy with the religious assumption that the heavens revolve around the Earth. It is possible to do. That is, it is possible, if one desires, to construct a logical system under which the heavens revolve around the Earth, even knowing what we know.
But it just isn't REASONABLE, knowing what we know, to think that. (I hope you will agree.) While it MIGHT be the case, we still make the choice to risk people's lives and send them into space based on the assumption that we DO really understand what is happening, and what is happening is not that everything revolves around the earth.
Similarly, while we may never be able to absolutely prove that we know so much about consciousness that we can say with absolute certainty that a particular object that passes the Turing test isn't conscious, just as you, right now, don't know enough to be sure you are not a brain in a vat being stimulated to imagine you are sitting in a chair reading something on a computer screen, it is certainly conceivable that we may have enough understanding of consciousness that we can say that such a possibility is so unreasonable as to not deserve to be acted upon.
That is, you think it's completely impossible because subjective experience is not a class of thing we can know far more about than we do. I am saying that that's exactly logically equivalent to someone saying long ago that the nature of the objects in the sky was not knowable because the sky is not a class of thing that we could know far more about than we did then. In one case, the supposed reason is that it is subjective experience. In the other, it's that it's too high. But the two assumptions are logically equivalent as far as I can see.
I doubt you could make a really concrete case to the contrary; I think any argument you make to the contrary would ultimately be mere intuition.
But I could be wrong -- feel free to try if you want.
I think you are dead wrong in the jump you are making. Just because consciousness is experience, and we know virtually nothing about it, and therefore, today, have no reliable way to determine whether a Turing-test-passing machine is conscious, does not mean that that will always be the case.
Chalmers explains why, and as you note he is a professional writer so is likely to be more convincing than my paraphrasings. But I will try.
Quote:
Let's see if I understand your point of view.
* You seem to agree that consciousness and passing the Turing test are different matters. In other words, you seem to agree with me that the Turing test isn't valid as a determinant of what is and isn't conscious.
No, I said it is reasonable to assume the Turing test is a valid determinant.
Quote:
* You also seem to be saying that since consciousness is subjective experience, it will be forever out of reach of our understanding; therefore we will never have a better way than the Turing test for knowing whether something is conscious.
I didn't say it will be forever out of reach of our understanding. I even suggested a way we might come to understand consciousness, as an emergent property of massive computation.
Quote:
* You therefore draw the conclusion that if something passes the Turing test we have a moral imperative to treat it as conscious. You conclude that it doesn't matter, in a sense, whether it is or isn't conscious, since there is no way for us to know for sure. For all practical intents and purposes, the Turing test remains the standard.
That is the best test we have today. But my real point is that no matter what test anyone devises in the future, it will be essentially the same as the Turing test. Things we already assume are conscious will pass the test. Things we assume are not conscious will not pass the test. But if an artificial intelligence passes the test, it won't tell us whether it is "really" conscious or not, because someone (like you) can always say, "Sure, it appears to be conscious, but there is more to consciousness than appearances."
Quote:
That is, you think it's completely impossible because subjective experience is not a class of thing we can know far more about than we do. I am saying that that's exactly logically equivalent to someone saying long ago that the nature of the objects in the sky was not knowable because the sky is not a class of thing that we could know far more about than we did then. In one case, the supposed reason is that it is subjective experience. In the other, it's that it's too high. But the two assumptions are logically equivalent as far as I can see.
The two cases are not equivalent because you are conflating physical impossibility with logical impossibility. If detecting consciousness were just physically impossible, like travelling faster than light, of course I would be foolish to say that we will never be able to do it. However, I say that it is logically impossible, so unless you can come up with an error in my logic I think it is perfectly reasonable to say that we will never be able to do it.
OK, you say it isn't necessarily forever out of our understanding, for instance it may be an emergent phenomenon.
Suppose that we come to understand that it is an emergent phenomenon based on large numbers of information-processing nodes operating in parallel within a volume of not more than 2 cubic feet.
From what you said in your reply, you seem to be admitting that the above is something that could happen.
Then, suppose we have something that acts like a human but uses a single extremely fast processor. By that understanding of consciousness, an understanding you state we may have at some point, the thing that acts like a human using a single processor wouldn't be conscious.
How do you resolve that?
"The two cases are not equivalent because you are conflating physical impossibility with logical impossibility. If detecting consciousness were just physically impossible, like travelling faster than light, of course I would be foolish to say that we will never be able to do it. However, I say that it is logically impossible, so unless you can come up with an error in my logic I think it is perfectly reasonable to say that we will never be able to do it."
I understand what you are saying above and am glad I asked the question that prompted your statement above because it helps me understand your point better.
Answering VERY quickly, I don't agree that it is logically impossible to detect consciousness.
If one were to assume, as you say you do, that we may understand consciousness in the future, we may understand it to have certain signs. For instance, a very compact body of a lot of massive particles NECESSARILY has the associated "sign" of gravity. There may be associated signs of consciousness. There is no reason whatsoever to assume that there won't be, if one believes we may be able to understand consciousness.
But even without detection of signs, our understanding may enable us to say: we understand the conditions under which consciousness arises, such as the emergent phenomenon mentioned above; and something that doesn't have those conditions therefore isn't conscious, whether or not it passes the Turing test.
Then, suppose we have something that acts like a human but uses a single extremely fast processor. By that understanding of consciousness, an understanding you state we may have at some point, the thing that acts like a human using a single processor wouldn't be conscious.
How do you resolve that?
Well either the single-processor isn't really conscious or our understanding of consciousness is lacking. How do you decide which is the case?
Quote:
Answering VERY quickly, I don't agree that it is logically impossible to detect consciousness.
If one were to assume, as you say you do, that we may understand consciousness in the future, we may understand it to have certain signs. For instance, a very compact body of a lot of massive particles NECESSARILY has the associated "sign" of gravity. There may be associated signs of consciousness. There is no reason whatsoever to assume that there won't be, if one believes we may be able to understand consciousness.
You have already stated quite clearly that any possible signs of consciousness don't tell you whether or not it is conscious.
Quote:
But even without detection of signs, our understanding may enable us to say: we understand the conditions under which consciousness arises, such as the emergent phenomenon mentioned above; and something that doesn't have those conditions therefore isn't conscious, whether or not it passes the Turing test.
Whether or not the entity in question displays the conditions under which consciousness arises is just more signs and behavior that you have already discarded as inadmissible evidence. I can still imagine a machine that fulfills all possible criteria of this kind and yet is not conscious, in the same way you can imagine something passing the Turing test without being conscious. The problem is you can't detect consciousness because the very word "detect" means to observe objective phenomena. You can certainly correlate objective phenomena with things you believe to be conscious (which is my position), but you can't use that to determine if it is "really" conscious (which is your position).
« Last Edit: 2003-01-25 10:17:29 by David Lucifer »
Re:The Turing test
« Reply #35 on: 2003-01-25 12:51:10 »
Pardon my intrusion in this dialog, but I'm finding this exchange to be very interesting. Could it be that a single indicator (like a Turing test) wouldn't be enough to indicate the presence of consciousness, but that a greater number of indicators (say 3 or 4) would tip the probability balance towards the general acceptance of a system's consciousness? A measurement of probability would seem to be the only way to judge a system's possession of any subjective and non-apparent attributes.
For example, how do we determine if someone is "crazy"? I'm no expert on the matter :-D but I'd assume that psychologists look for a number of elements of behavior, and if enough of them are present in an individual, then that individual is deemed to be crazy.
Pardon my intrusion in this dialog, but I'm finding this exchange to be very interesting. Could it be that a single indicator (like a Turing test) wouldn't be enough to indicate the presence of consciousness, but that a greater number of indicators (say 3 or 4) would tip the probability balance towards the general acceptance of a system's consciousness?
Multiple indicators can't hurt, but a single indicator is sufficient for most people. Something like the Turing test is all I use on IRC for example, and my confidence in the belief that the participants (excluding the bots) are conscious is up around the 99% range. Extra indicators wouldn't help much.
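The "tip the probability balance" idea above can be sketched as naive Bayesian updating: each indicator, assumed independent, multiplies the odds that the system is conscious by a likelihood ratio. All the numbers below (the prior and the likelihood ratios) are invented purely for illustration, not measured from anything:

```python
# Hypothetical sketch of combining independent consciousness indicators
# via Bayesian (naive-Bayes) odds updating. All numbers are invented.

def update_odds(prior_odds, likelihood_ratio):
    """One Bayesian update: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def probability(odds):
    """Convert odds to a probability."""
    return odds / (1.0 + odds)

# Prior: before any test, suppose 50/50 (odds of 1) that the system is conscious.
odds = 1.0

# Likelihood ratios P(indicator present | conscious) / P(present | not conscious),
# treated as independent. The Turing test is given the most weight.
indicators = {
    "turing_test": 99.0,        # strong indicator on its own
    "self_report": 3.0,         # weaker corroborating indicator
    "brain_like_activity": 3.0, # another weak corroborating indicator
}

for name, ratio in indicators.items():
    odds = update_odds(odds, ratio)
    print(f"after {name}: P(conscious) = {probability(odds):.4f}")
```

Under these made-up numbers the sketch also illustrates the reply above: the Turing test alone already pushes the probability to about 99%, and each extra indicator moves it by only a fraction of a percent.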
You have already stated quite clearly that any possible signs of consciousness don't tell you whether or not it is conscious.
I can't recall exactly what I said that you are interpreting that way, but I certainly do not believe what you are saying I have "stated quite clearly."
If I did say something to that effect, I meant it in the way that I also say I can't really know whether or not I am a brain in a vat being prodded to think I am reading a message from you.
That is, my belief is that it is impossible to know anything of consequence as an absolute fact, but for practical purposes, that doesn't matter. What I am interested in is acquiring knowledge that reaches a practical point of certainty.
Quote:
Whether or not the entity in question displays the conditions under which consciousness arises is just more signs and behavior that you have already discarded as inadmissible evidence.
Hopefully we can set aside the thesis that "Gary has discarded signs as inadmissible evidence," a thesis explicitly, directly, and obviously contradicted in the message you're responding to here, and contradicted again in the present message.
Frankly, it's a waste of time to respond to a statement of X by saying "you have already admitted not-X" because obviously the person believes X if he is saying it.
So let's not waste our time, OK?
Quote:
The problem is you can't detect consciousness because the very word "detect" means to observe objective phenomena. You can certainly correlate objective phenomena with things you believe to be conscious (which is my position), but you can't use that to determine if it is "really" conscious (which is your position).
OK, correct me if I'm wrong, but you seem to admit above that there may be objective things that are correlated with consciousness, other than the Turing test. You don't say so explicitly, but you seem to imply it.
I assume you will count certain brain activity picked up by various medical sensors that exist today as another such objective phenomenon.
If I am misunderstanding you in the above, let me know.
If I am understanding you correctly in the above, let's go on to another step:
Do you admit that there MAY be objective phenomena that always go along with consciousness, in the same way that a curvature of space always goes along with the presence of a massive object?
Or, a lesser bar: that there may be some objective phenomena that always accompany consciousness, which we can measure to the degree of certainty that we can measure the presence or absence of radio waves?
Then, suppose we have something that acts like a human but uses a single extremely fast processor. By that understanding of consciousness, an understanding you state we may have at some point, the thing that acts like a human using a single processor wouldn't be conscious.
How do you resolve that?
Well either the single-processor isn't really conscious or our understanding of consciousness is lacking. How do you decide which is the case?
When I ask you whether it's possible to "understand" it, I mean is it possible to understand it in the sense that is usually meant by the term "understand".
If you say you have a "hypothesis" about something, you are making it clear that you aren't sure. If you say you understand something, particularly if a scientist says he "understands" rather than "hypothesizes," he is expressing a degree of confidence.
So I'll ask you: when you say it may be possible to "understand" consciousness, are you really only saying "it's possible to make hypotheses about it," or are you saying what is normally meant by "understand," i.e. that there is a substantial degree of confidence that the hypothesis is correct?
Note that people make life-and-death decisions every day based on their "understandings." Making a decision about whether a turing-test-passing entity is not conscious, and thereby subjecting it, say, to careless destruction, would not be different from other life-and-death decisions made all the time.
For instance, I understand that my Macintosh is not conscious, so I throw it away without concern if I can't sell it for enough to make it worth my while to do so.
Do you or do you not think it may be possible to understand consciousness the way scientists normally mean when they say they understand something (as opposed to saying they are making a hypothesis based on some evidence)?
Do you or do you not think it is POSSIBLE that we may one day understand the necessary (or necessary and sufficient) conditions for the arising of consciousness, in the sense that is usually meant by the word "understand"?
I can't recall exactly what I said that you are interpreting that way, but I certainly do not believe what you are saying I have "stated quite clearly."
You said it when you rejected the Turing test as a test for consciousness. Your reason was that something could pass the test and still not be conscious. I took that to mean that you don't count objective phenomena as a test for consciousness. Have you changed your mind?
Quote:
If I did say something to that effect, I meant it in the way that I also say I can't really know whether or not I am a brain in a vat being prodded to think I am reading a message from you.
Interesting that you should bring that up. Do you think it is possible in the future to devise a test to discover whether or not you are a brain in a vat? I don't think it is possible, because any possible test results could be just more signals sent to the brain in the vat.
Quote:
That is, my belief is that it is impossible to know anything of consequence as an absolute fact, but for practical purposes, that doesn't matter. What I am interested in is acquiring knowledge that reaches a practical point of certainty.
That is what I have been arguing all along. Even though we can't know for certain whether something is conscious because all we have is access to objective phenomena, we can be reasonably sure if it passes the test.
Quote:
Hopefully we can set aside the thesis that "Gary has discarded signs as inadmissible evidence," a thesis explicitly, directly, and obviously contradicted in the message you're responding to here, and contradicted again in the present message.
Frankly, it's a waste of time to respond to a statement of X by saying "you have already admitted not-X" because obviously the person believes X if he is saying it.
So let's not waste our time, OK?
Well given that you have claimed both X and not-X at different times I wasn't sure what you really thought. Either an objective test for consciousness is sufficient or it is not.
Quote:
OK, correct me if I'm wrong, but you seem to admit above that there may be objective things that are correlated with consciousness, other than the Turing test. You don't say so explicitly, but you seem to imply it.
I have no doubt that there are objective phenomena correlated with consciousness. We already have that, and that's how I can tell when someone else is conscious. In fact, I claim that is the only way to know. But you have pointed out that even if something passes every test, it is still possible to imagine that it isn't really conscious.
Do you or do you not think it may be possible to understand consciousness the way scientists normally mean when they say they understand something (as opposed to saying they are making a hypothesis based on some evidence)?
I do not think it is possible as long as we are talking about science as the study of objective phenomena and consciousness as the subjective experience.
The following is quoted from The Puzzle of Conscious Experience by David Chalmers which appeared in Scientific American in December 1995:
The Hard Problem
Researchers use the word "consciousness" in many different ways. To clarify the issues, we first have to separate the problems that are often clustered together under the name. For this purpose, I find it useful to distinguish between the "easy problems" and the "hard problem" of consciousness. The easy problems are by no means trivial - they are actually as challenging as most in psychology and biology - but it is with the hard problem that the central mystery lies.
The easy problems of consciousness include the following: How can a human subject discriminate sensory stimuli and react to them appropriately? How does the brain integrate information from many different sources and use this information to control behavior? How is it that subjects can verbalize their internal states? Although all these questions are associated with consciousness, they all concern the objective mechanisms of the cognitive system. Consequently, we have every reason to expect that continued work in cognitive psychology and neuroscience will answer them.
The hard problem, in contrast, is the question of how physical processes in the brain give rise to subjective experience. This puzzle involves the inner aspect of thought and perception: the way things feel for the subject. When we see, for example, we experience visual sensations, such as that of vivid blue. Or think of the ineffable sound of a distant oboe, the agony of an intense pain, the sparkle of happiness or the meditative quality of a moment lost in thought. All are part of what I am calling consciousness. It is these phenomena that pose the real mystery of the mind.
[Figure caption: An isolated neuroscientist in a black-and-white room knows everything about how the brain processes colors but does not know what it is like to see them. This scenario suggests that knowledge of the brain does not yield complete knowledge of conscious experience.]
To illustrate the distinction, consider a thought experiment devised by the Australian philosopher Frank Jackson. Suppose that Mary, a neuroscientist in the 23rd century, is the world's leading expert on the brain processes responsible for color vision. But Mary has lived her whole life in a black-and-white room and has never seen any other colors. She knows everything there is to know about physical processes in the brain - its biology, structure and function. This understanding enables her to grasp everything there is to know about the easy problems: how the brain discriminates stimuli, integrates information and produces verbal reports. From her knowledge of color vision, she knows the way color names correspond with wavelengths on the light spectrum. But there is still something crucial about color vision that Mary does not know: what it is like to experience a color such as red. It follows that there are facts about conscious experience that cannot be deduced from physical facts about the functioning of the brain.
Indeed, nobody knows why these physical processes are accompanied by conscious experience at all. Why is it that when our brains process light of a certain wavelength, we have an experience of deep purple? Why do we have any experience at all? Could not an unconscious automaton have performed the same tasks just as well? These are questions that we would like a theory of consciousness to answer.
I am not denying that consciousness arises from the brain. We know, for example, that the subjective experience of vision is closely linked to processes in the visual cortex. It is the link itself that perplexes, however. Remarkably, subjective experience seems to emerge from a physical process. But we have no idea how or why this is.
« Last Edit: 2003-02-03 19:19:45 by David Lucifer »
Do you or do you not think it is POSSIBLE that we may one day understand the necessary (or necessary and sufficient) conditions for the arising of consciousness, in the sense that is usually meant by the word "understand"?
I think it is possible, and even likely, but if something passes all the tests (meets all the conditions) someone could still claim that it isn't "really" conscious and there would be no way to prove them wrong because the subjective experience cannot be detected (by definition).
To summarize, if you now concede that objective tests are good enough to reasonably ascertain whether something is conscious, then your previous criticisms of the Turing test are no longer valid.
« Last Edit: 2003-02-03 19:26:22 by David Lucifer »
I can't recall exactly what I said that you are interpreting that way, but I certainly do not believe what you are saying I have "stated quite clearly."
You said it when you rejected the Turing test as a test for consciousness. Your reason was that something could pass the test and still not be conscious. I took that to mean that you don't count objective phenomena as a test for consciousness. Have you changed your mind?
That simply isn't logical.
To reject one objective phenomenon as a test while accepting the possibility of others is not equivalent to changing one's mind, or to asserting that both X and not-X are true.
If I must be more explicit, I reject the objective phenomenon of being "white" as a test for whether an item is a snowflake. Such an item could be a piece of paper, or a sheep. That whiteness test for being a snowflake would, in my view, be about as valid as the Turing test is as a test for consciousness.
But there are other objective phenomena that I would accept as a test for being a snowflake, and I think it's quite possible that equally good tests may come into existence in the future that would be a test for consciousness. In this thread, I have repeatedly given examples of objective phenomena that I hold as valid tests of various qualities; the sole purpose of those examples was to say that by analogy, such valid tests might emerge for consciousness. But a belief that it is possible that such tests will one day emerge does not contradict the assertion that there is no such test today.
If I have to argue on such basic and obvious points, there isn't time left over to make arguments about things that are more difficult and are of actual interest.
With regard to the Chalmers extract: I remember well when that article appeared in Scientific American, and I strongly applauded it, since I agree with everything it says. I think that if you were making a real effort to understand what I've said, you would know that. (I didn't, however, recall that the author was the Chalmers you mentioned earlier.)
Chalmers discusses Jackson's example: "But there is still something crucial about color vision that Mary does not know: what it is like to experience a color such as red. It follows that there are facts about conscious experience that cannot be deduced from physical facts about the functioning of the brain." I fully agree with him that this cannot be arrived at through the kind of intellectual understanding he describes.
Being able to understand certain facts about consciousness such as the minimal requirements for its coming into being does not imply being able to fully grok redness if one has never experienced color. Those are different types of "facts". There is no reason why it wouldn't be possible to grok one and not the other.
Chalmers also says: "I am not denying that consciousness arises from the brain. We know, for example, that the subjective experience of vision is closely linked to processes in the visual cortex. It is the link itself that perplexes, however. Remarkably, subjective experience seems to emerge from a physical process. But we have no idea how or why this is."
This is a major part of what I have been saying, except that I have been pointing out that just because TODAY we have no idea how or why this is, does not mean we NEVER will. Chalmers is not saying, in the extract you quote, that we never will understand the connection or in principle cannot; he is saying that we don't understand it, pure and simple.
Finally, in your last message you say "To summarize, if you now concede that objective tests are good enough to reasonably ascertain whether something is conscious, then your previous criticisms of the Turing test are no longer valid". I hope I don't need to restate the total illogic of that assertion, which I hope I have made clear above. But just in case I do, I will, for the last time. There MAY ONE DAY be objective tests that are good enough. That does not mean that every conceivable objective test is good enough; in particular it does not mean that the Turing test is good enough. I don't think it is.
Why? Because I think that creating a non-conscious entity that passes the Turing test is quite possibly a pure engineering problem, and one that there is plenty of motivation to solve. Wouldn't it be great to fill Al Qaeda with totally expendable non-conscious entities that fool Bin Laden into thinking they are his followers but are in fact spies? The motivation is there, and it may be an engineering problem, albeit an extremely difficult one considering where we are technologically TODAY. Where there is motivation to solve an engineering problem, there is a good chance it will be solved. And if it is, the Turing test will obviously not be enough. Pure and simple. However, a test based on an understanding of the necessary and sufficient physical conditions for consciousness, which involves seeing whether those conditions are met, would still be conceivable.
I'm going to have to bow out of this discussion. It's requiring me to spend too much time responding to "arguments" involving what I personally regard as too little rational substance, such as the assertion that "if you now concede that objective tests are good enough to reasonably ascertain whether something is conscious, then your previous criticisms of the Turing test are no longer valid".
Much of this discussion has been a pleasure for which I thank you, really. But I think it is losing its purpose. Let's move on.
Quote:
To reject one objective phenomenon as a test while accepting the possibility of others is not equivalent to changing one's mind, or to asserting that both X and not-X are true.
I'm probably not the only one who thought you were being inconsistent. You reject the Turing test because it is hypothetically possible to display behavior that appears conscious without actually being conscious. I merely pointed out that you can use the exact same reasoning to reject any possible objective evidence of consciousness. I'm afraid you can't have it both ways.