
Could a philosophical zombie verify that it is a philosophical zombie?


A philosophical zombie is an entity that is externally, behaviorally indistinguishable from some conscious entity, but lacks inner conscious experience, a.k.a. qualia.

See articles “Zombies” and “Consciousness - Objection 4: Zombies”.

A common idea in thought experiments involving a p-zombie, similar to the Chinese room argument, is that it would be impossible to externally verify the difference between the two entities, even though they are internally different. Purportedly, if you asked a p-zombie whether it was conscious, it would say yes, because all its cognitive processing abilities are sufficiently in place to produce, by computation alone, an answer identical to the one a conscious being would give. (Something like that can already sometimes be seen in certain conversations with an AI like ChatGPT.)

If you asked a p-zombie whether it can “see” blue, it would say “yes”, and this answer would be false.

Consider a different kind of p-zombie. It is capable of giving any answer a human would give, because it fully understands the logic of human cognition. It has never seen blue, but it has flawless understanding of all the facts relating to blueness. This p-zombie can effectively lie in response to any question and be indistinguishable from someone conscious. However, this p-zombie is aware that it is lying. It knows that it cannot actually see blue.

Is this possible? I wonder whether something conscious requires some kind of baseline “qualic” experience in order to even perceive which qualia it cannot perceive.

It is said that some species have more types of “cones” in their eyes, allowing them to see colors humans cannot. It boggles the mind to try to imagine a new color. Somehow, we cannot.

I wonder whether it is logically impossible for something with no access at all to the sphere of qualic being to even pose itself the question of whether it does or does not perceive. (If I remember correctly, Daniel Dennett has argued that the very idea of a p-zombie is incoherent, while David Chalmers argues it is at least conceivable.)


1 answer


The first example you gave reminds me of a black-box test. Meaning: you have two black boxes, each containing some electrical circuitry. Two wires are provided on the front of each box, across which you measure 1.5 volts. You have all the electrical equipment there is to tell whether there is a difference between them.

In all these philosophical cases there is a bit of a fallacy (imho), because they assume that the scenario persists forever exactly as it was set up.

Imagine the above example, but now we know that in one box an AA battery provides the voltage, while the other box has a direct connection to the power grid. Meaning: if you measure long enough, you will start to see differences. The battery runs down; the other does not.
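To make the "measure long enough" point concrete, here is a toy sketch in Python. The discharge curve is made up purely for illustration, not real battery physics:

```python
import math

def battery_voltage(hours: float) -> float:
    """Toy AA battery: starts at 1.5 V and slowly decays (made-up curve)."""
    return 1.5 * math.exp(-hours / 500.0)

def grid_voltage(hours: float) -> float:
    """Toy grid-backed supply: holds 1.5 V indefinitely."""
    return 1.5

# A snapshot at t = 0 cannot tell the boxes apart; a long-running
# measurement can.
for hours in (0, 1, 100, 1000):
    b, g = battery_voltage(hours), grid_voltage(hours)
    print(f"t={hours:>4} h  battery={b:.3f} V  grid={g:.3f} V  "
          f"distinguishable={abs(b - g) > 0.01}")
```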

The same goes for the Chinese room example. If the conditions change, there would be signs of inconclusive behavior (I would argue). For example, you give inputs very fast and detect a slowdown in the responses. But in fact there is much speculation involved at this point, because it is only a thought experiment and you can ask all kinds of "worldbuilding questions", like: "Does the computer have access to the internet?", "Can it also answer in other languages?", etc.

The following thought about debunking a p-zombie is prone to speculation: to determine whether the p-zombie understands the concept of blue, you just need to give it two options or pictures, neither of which shows blue, and ask "Which one is the blue one?" or "Point at the blue one." If the zombie is wired to lie about "knowing" what blue is, it can only guess, or compare images. In the latter case, show it pictures of different reds and tell it "this is blue", over and over again. Then run the same test with two pictures that are neither blue nor red, and then again with one red picture, and afterwards with one blue one. At some point the results would contradict each other. You could argue that a brainwashed human would also react like this.
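Purely as illustration, here is the essence of that repeated test, stripped down to a forced choice between one blue and one red picture (the "subjects" are toy stand-ins I made up):

```python
import random

def seeing_subject(options):
    """Sees color; always points at the blue picture when one is present."""
    return "blue" if "blue" in options else random.choice(options)

def guessing_zombie(options):
    """Has no access to color qualia; can only guess."""
    return random.choice(options)

def consistency(subject, trials=50):
    """Fraction of trials in which the subject picks the blue picture."""
    hits = sum(subject(random.sample(["blue", "red"], 2)) == "blue"
               for _ in range(trials))
    return hits / trials

random.seed(0)
print("seeing subject: ", consistency(seeing_subject))   # 1.0, every time
print("guessing zombie:", consistency(guessing_zombie))  # ~0.5, inconsistent
```

Over enough trials the guesser's answers contradict each other, which is exactly the contradiction the test above is designed to surface.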

Therefore, a different approach: prompt the zombie to act like a dog until you say stop. It will follow your order, because it thinks that this is the new task at hand. But you never tell it to stop. Any human being would at some point start to suffer and to rebel, because their soul tells them that something is wrong here. But the logic on which the zombie is based does not see a logical contradiction in the given task; the zombie should be fine forever.

Another aspect: is the p-zombie really deliberately lying, or is it convinced that the answers it gives are true facts?

Suppose you had a child, four years old or so, and that toddler tells you, "I am a pilot!" You would smile and play along. Although you know the claim is not true, you know the intention of the sentence. You can also ask about facts: "Do you also have a plane?" ("Yes!"), "Which color is it?" ("Blue!"), "Where did you land last?" ("On my bed!"), and you would get answers that are "plausible" to some extent. Would you now infer that every little one is in fact a p-zombie of sorts? (It is "lying" but not aware of it, though it could be made aware by confronting it; it is convinced of its perceived truth.)

And then there is perception:

Imagine a blind person and ask them about "blue". They would probably not associate the same thing as you do, but they know a concept of blue in their world, perhaps associating it with the tactile feel of their favorite pullover, which they were told is of "blue" color. Show them pictures and they will say, "I don't know which is which." Hand them different pullovers and they will consistently identify theirs (or the same sensation) as being "blue". Are they a p-zombie?

I think there is a huge portion of bias involved when it comes to defining what perceiving is, what meaning is, or what qualia are. We like to have a definition that distinguishes us from "non-humans". We do not like the idea that what we call "cognition/consciousness" can also be observed in different places, working differently yet producing the same, or at least consistent, results. (It reminds me of the uncanny valley theory.)

If an AI/p-zombie tells you it can "see blue" or is "self-aware", then the deciding factor is whether you are inclined to believe it and play along, or not. You, the observer, make the AI either self-aware or a p-zombie, and in both cases you would be "right", because you find or make up reasons to support your idea ("in your perceived world").

How should the AI disprove your claim?

So in the end, I think it is a draw. You cannot tell a p-zombie apart from a human that easily, but on the other hand a p-zombie or AI cannot prove to you that its claims about perception or awareness are true either. Maybe there really is some kind of barrier we cannot permeate: a kind of world limit, where two worlds touch and can be rubbed against one another, but truly understanding what is inside the other is not possible.
