Post History

Incubator Q&A: Could a philosophical zombie verify that it is a philosophical zombie?


posted 2mo ago by Antares · edited 2mo ago by Antares

Answer
#2: Post edited by Antares · 2024-09-01T14:13:46Z (about 2 months ago)
The first example you gave reminds me of a black-box test. Meaning: you have two black boxes, and inside each is some electrical circuitry. Two wires are provided on the front of each box, which supply 1.5 volts. You have all the electrical equipment there is to tell whether there is a difference between them.

In all these philosophical cases there is a bit of a fallacy (imho), because they assume that the scenarios persist forever exactly as they were set up.

Imagine the above example, but now we know that in one box an AA battery provides the voltage, while in the other there is a direct connection to some power plant. Meaning: if you measure long enough, you will start to see differences. The battery runs low; the other does not.

The same goes for the Chinese Room example. If the conditions change, there would be signs of inconsistent behavior (I would argue). For example, you give inputs very fast and detect a slowdown in the responses. But in fact there is much speculation involved at this point, because it is only a thought experiment and you can ask all kinds of "worldbuilding questions" like "Does the computer have access to the internet?", "Can it also answer in other languages?", etc.

The following thought about debunking a p-zombie is prone to speculation:

To determine whether the p-zombie understands the _concept of blue_, you actually just need to give it two options or pictures: neither of them shows blue, and you ask "Which one is the blue one?" or "Point at the blue one". If the zombie is wired to lie about the fact that it "knows" what blue is, it can only guess - or compare images. In the latter case, show it pictures of different reds and tell it "this is blue", over and over again. Then run the same test again with pictures that are neither blue nor red. And then again with one red picture, and afterwards one blue one. At some point the results would be contradictory. You can argue that a brainwashed human would also react like this.

Therefore, a different approach: prompt the zombie to act like a dog until you say stop. It will follow your order, because it thinks that this is the new task at hand. But you never tell it to stop. Any human being would at some point start to suffer and to rebel, because his soul tells him something is wrong here. But the logic on which the zombie is based sees no logical contradiction in the given task. The zombie should be fine forever.

Another aspect: is the p-zombie really deliberately lying, or is it _convinced that the answers it gives are true facts_?

If you had a child, four years old or so, and that toddler tells you "I am a pilot!", you would smile and play along. Although you know the claim is not true, you know the intention of the sentence. You can also ask about facts: "Do you also have a plane?" ("Yes!"), "Which color is it?" ("Blue!"), "Where did you land last?" ("On my bed!") and you would get answers that are "plausible" to some extent. Would you now infer that every little one is in fact a p-zombie of sorts? (It is "lying" but is not aware of it, though it could be made aware of it by confronting it; it is convinced of its perceived truth.)

And then there is perception:

Imagine a blind person and ask him about "blue". He would probably not associate the same thing as you do, but he knows a concept of blue in his world, maybe associating it with the tactile feel of his favorite pullover, which he has been told is "blue". Show him pictures and he says "I don't know which is which". Hand him different pullovers and he will consistently identify his (or the same sensation) as being "blue". Is he a p-zombie?

I think a huge portion of bias is involved when it comes to defining what _perceiving_ is, what _meaning_ is, or what _qualia_ are. We like to have a definition that distinguishes us from "non-humans". We do not like the idea that what we call "cognition/consciousness" can also be observed in other places, working differently yet producing the same or at least consistent results (it reminds me of the Uncanny Valley theory).

If an AI/p-zombie tells you it can "see blue" or is "self-aware", then the deciding factor is whether you are inclined to believe it and play along - or not. You, the observer, make the AI either self-aware or a p-zombie, and in both cases you would be "right", because you find or make up reasons to support your idea ("in your perceived world").

How should the AI disprove your claim?

So in the end, I think it is a draw? You cannot tell a p-zombie apart from a human that easily, but on the other hand a p-zombie or AI cannot prove to you that its claims about perception or awareness are true either. Maybe there really is some kind of barrier we cannot pass through: a kind of world limit, where two worlds touch and can be rubbed against one another, but truly understanding what is _inside_ the other is not possible.
#1: Initial revision by Antares · 2024-09-01T14:01:16Z (about 2 months ago)
Maybe my answer lost a bit of focus in the course of writing it. Sorry in advance. I hope you can draw something from it.

The first example you gave reminds me of a black-box test. Meaning: you have two black boxes, and inside each is some electrical circuitry. Two wires are provided on the front of each box, which supply 1.5 volts. You have all the electrical equipment there is to tell whether there is a difference between them.

In all these philosophical cases there is a bit of a fallacy (imho), because they assume that the scenarios persist forever exactly as they were set up.

Imagine the above example, but now we know that in one box an AA battery provides the voltage, while in the other there is a direct connection to some power plant. Meaning: if you measure long enough, you will start to see differences. The battery runs low; the other does not.

The same goes for the Chinese Room example. If the conditions change, there would be signs of inconsistent behavior. For example, you give inputs very fast and detect a slowdown in the responses. But in fact there is much speculation involved at this point, because it is only a thought experiment and you can ask all kinds of "worldbuilding questions" like "Does the computer have access to the internet?", "Can it also answer in other languages?", etc.

Also, this thought about debunking a p-zombie is prone to speculation:
To determine whether the p-zombie understands the _concept of blue_, you actually just need to give it two options or pictures: neither of them shows blue, and you ask "Which one is the blue one?" or "Point at the blue one". If the zombie is wired to lie about the fact that it "knows" what blue is, it can only guess - or compare images. In the latter case, show it pictures of different reds and tell it "this is blue", over and over again. Then run the same test again with pictures that are neither blue nor red. And then again with one red picture, and afterwards one blue one. At some point the results would be contradictory. You can argue that a brainwashed human would also react like this.

Therefore, a different approach: prompt the zombie to act like a dog until you say stop. It will follow your order, because it thinks that this is the new task at hand. But you never tell it to stop. Any human being would at some point start to suffer and to rebel, because his soul tells him something is wrong here. But the logic on which the zombie is based sees no logical contradiction in the given task. The zombie should be fine forever.

The thing is: is the p-zombie really deliberately lying, or is it _convinced that the answers it gives are true facts_?

If you had a child, four years old or so, and that toddler tells you "I am a pilot!", you would smile and play along. Although you know the claim is not true, you know the intention of the sentence. You can also ask about facts: "Do you also have a plane?" ("Yes!"), "Which color is it?" ("Blue!"), "Where did you land last?" ("On my bed!") and you would get answers that are "plausible" to some extent. Would you now infer that every little one is in fact a p-zombie of sorts?

And then there is perception:

Imagine a blind person and ask him about "blue". He would probably not associate the same thing as you do, but he knows a concept of blue in his world, maybe associating it with the tactile feel of his favorite pullover, which he has been told is "blue". Show him pictures and he says "I don't know which is which". Hand him different pullovers and he will consistently identify his (or the same sensation) as being "blue". Is he a p-zombie?

I think a huge portion of bias is involved when it comes to defining what _perceiving_ is, what _meaning_ is, or what _qualia_ are. We like to have a definition that distinguishes us from "non-humans". We do not like the idea that what we call "cognition" can also be observed in other places, working differently yet producing the same or at least consistent results.

If an AI tells you it can "see blue" or is "self-aware", then the deciding factor is whether you are inclined to believe it and play along - or not. You, the observer, make the AI either self-aware or a p-zombie, and in both cases you would be "right", because you find or make up reasons to support your idea.

How should the AI disprove your claim?

So in the end, I think it is a draw? You cannot tell a p-zombie apart from a human that easily, but on the other hand a p-zombie or AI cannot prove to you that its claims are true. Maybe there really is some kind of barrier we cannot pass through: a kind of world limit, where two worlds touch and can be rubbed against one another, but truly understanding what is _inside_ the other is not possible.