For some reason, Mary’s room has been garnering attention lately. This TED Ed video on it was shared on Aeon’s site this week.
The wording of the actual thought experiment is important, so quoting Frank Jackson’s words (via the Wikipedia article on the knowledge argument):
Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like “red”, “blue”, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence “The sky is blue”. … What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?
If we take the phrase “all the physical information there is to obtain” literally, then Mary knows not only the facts relayed above, but every effect the stimulus will have on a human nervous system, including her own, every affective reaction, every association that will be triggered, both conscious and unconscious, every memory, every physiological reaction, as well as all their downstream effects. This might include Mary simulating the effects on her own nervous system without photons of the relevant wavelength ever striking her retina.
In other words, if physicalism is true, then Mary’s complete knowledge of all the physical facts will include knowledge of what it is like to experience color. When she does have the actual experience for the first time, there should be no surprises. If there are, then she didn’t really have all the physical facts.
On the other hand, if some form of dualism is true, then the physical facts are not all the facts, and she probably does learn something with the experience. But the thought experiment, like most philosophical thought experiments, doesn’t demonstrate that one way or another. It only flushes out our intuitions about the situation.
Of course, many might say it’s implausible for Mary to have such thorough, pretty much omniscient, knowledge of the experience of color prior to the experience. And they’d be right. But if we’re going to take the premise seriously, that’s what it entails.
Unless of course I’ve missed something?
125 thoughts on “The problem with Mary’s room”
I don’t think you need to be a dualist to understand that there are different kinds of “knowledge”. If you set up a mechanism to produce a specific output in response to a specific input, there’s a kind of knowledge there. The mechanism “knows” what to do. This would be Dennett’s competence without comprehension.
Certainly there are different types of knowledge, but the only type the thought experiment discusses is the physical kind. (Which to a physicalist includes every other type.)
We could say she only had the kind obtainable via symbolic communication (language, mathematics, etc). (I’ve seen Philip Goff make this stipulation.) In that case, there would be new things to learn, but then she wouldn’t have “all the physical information”, so we’ve modified the premise.
You could focus on the phrase “there is to obtain” and say that some of the physical information is unobtainable, even in principle, except by having the experience. If so, what would be some examples? And would a robot have the same limitation? If not, then what’s the difference?
I just tweeted an example. I’ll repeat it here:
Imagine Mary is in her room with a bundle of fiber-optic cables coming in through a hole in the wall. Suppose there is another room with a camera. In a third room there is a computer that gets input from the camera, and when something red is in the room, the computer lights up exactly one of those fibers going into Mary’s room, with white light. So Mary knows everything about physics, and light, and computers, and she knows that the fibers are hooked up, but she doesn’t know which fiber in her bundle is hooked up to the computer. They all look the same. Then, something red is put into the room, and one of the fibers lights up. Mary can now label that fiber with tape, and from now on she knows which one is the “red” one. When that one lights up, she can report “I see red”.
Now replace Mary, who comprehends all this, with a robot that can do everything Mary does in the room, but doesn’t comprehend any of it.
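For what it’s worth, the robot version can be sketched in a few lines of Python. This is purely a made-up illustration (the class and names are hypothetical, not from the thought experiment literature): the robot labels whichever fiber first lights up and thereafter reports “red,” with no comprehension of color anywhere in the code.

```python
class FiberRobot:
    """A robot in Mary's place: competence without comprehension."""

    def __init__(self, num_fibers):
        self.num_fibers = num_fibers
        self.labels = {}  # fiber index -> label, filled in purely by observation

    def observe(self, lit_fibers):
        """Take the set of currently lit fiber indices and emit a report."""
        # First time an unlabeled fiber lights up, "tape a label" on it.
        for i in lit_fibers:
            if i not in self.labels:
                self.labels[i] = "red"
        # The report is driven entirely by the learned label,
        # not by any experience of redness.
        if any(self.labels.get(i) == "red" for i in lit_fibers):
            return "I see red"
        return "I see nothing"


robot = FiberRobot(num_fibers=100)
print(robot.observe(set()))   # nothing lit yet
print(robot.observe({42}))    # fiber 42 lights up and gets labeled
print(robot.observe({42}))    # from now on, reports red for fiber 42
```

The point of the sketch is that the input-output mapping is identical to Mary’s behavior in the room, which is why the question of what (if anything) she has over the robot is where the real dispute lies.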
Does that help any?
I can see how the cables correspond to axon firing patterns, which is what the brain works with to distinguish colors. But I wouldn’t think you’d say that that simple pattern, by itself, is the experience of red.
It seems like it takes all the other things I mentioned, such as all the associated affects, many of which are triggered due to unconscious associations with the axon firing patterns triggered when the L-cones on the retina are stimulated.
Or am I still missing something?
Don’t think I said anything about experience. But you’re right, the light going on in one fiber is not the experience, it’s just the first step. Doing something in response, like recording “red at time x” would be the second step, the experience being both steps. Of course, it would only be an experience for a system which can refer to it, usually via the recording. It would be unconscious relative to the guy in the next room with his own bundle of fibers.
Thanks. But I’m having trouble connecting the dots with information that is unobtainable in principle.
I truly like this thought experiment. And I admit I’m a dilettante in this area. But if I had been one of Frank Jackson’s grad students I would have suggested an interesting change. Assume Mary is stone deaf yet knows everything there is to know about the physics of acoustics, music theory and the physiology and neurological science of hearing. She then gains her hearing and hears Beethoven’s choral finale to the 9th Symphony—the Ode To Joy. If you say she understands and appreciates nothing new about the world at that point, then I say your soul is dead!
But more to the point, why does it have to be a strict Either/Or? Physicalism or Dualism? My consciousness is most likely caused by and a manifestation of my physical being. But causal reducibility does not “require” ontological reducibility. Does it? The question is just posed that way. Does it have to be so? Can’t consciousness be a biological phenomenon that can’t be reduced?
Even in the revised scenario, if Mary understands the full effects of hearing the symphony on her nervous system (which admittedly is becoming an increasingly strained proposition), including all the triggered affects and physiological reactions, then no, I can’t see how she’d learn anything new. Remember, she has “all the physical information”. If we’re going to entertain this thought experiment, then we have to take its actual premises seriously.
On either/or, there are a variety of positions out there. There are what Chalmers calls Type-A materialists, where it’s all reducible physics. There are also Type-B materialists, who see it as all physics, but with an irreducible identity correspondence between consciousness and certain physical systems, one that can’t be understood a priori, only a posteriori. Then you get into things like property dualism and panpsychism, which are Chalmers’ waters. Beyond that it gets into substance dualism and then idealism.
I’m in the Type-A camp, because I think that’s where logic and evidence point, but there are a lot of people in the other camps.
I can never tell if I’m type A or B. Take altruism as the example. Can you reduce altruism to physics? You sorta can (I think I can), but I’m not sure you can go the other way. Can altruism be understood a priori from physics?
I think it matters whether you see the relation as unknowable from a practical standpoint vs unknowable in principle. For the former, what is impractical today may be practical later with new discoveries and technologies. If that sounds right, then you’re more likely in the type-A camp. On the other hand, if you think no amount of knowledge will ever bridge that gap, but it’s all still physical, then you’re more in the type-B camp.
Haven’t been following this whole conversation but your statement about ‘unknowable from some particular standpoint and unknowable in principle’ may be on to something.
The “essence” of color shall always evade us! It is so pure and lofty that it has yet to be soiled by any kind of tangled and compromised realization in our world of unavoidable relationships. We always encounter “color” in a particular setting. I yearn for “color” in itself, in its principled form!
Chalmers’ position is revised Platonism. And neither we nor Mary ever were, are, or will be in her supposed position. What did Mary eat all her life? White meat chicken, mashed potatoes, grey gravy and Russian black bread? Our functional lives have always suggested to us a direction or ‘things’ beyond themselves and always will. We are in the middle — as agents between the past and the future, or physics and the fact that we “take it”/“seems” to be more. That is a good thing. So, I think I agree with you and Dennett. Knowing the physical facts will lead us to all the other seemingly less physical facts about both color and symphonies.
If you pull out all its connections (functions) all you have left of anything is an abstraction, not any real thing (and that includes “consciousness”) that could be experienced on its own or investigated in a form other than all ‘its’ physical relations.
I think that is what I mean. I mean, that is what I think. I guess. Please help.
Thanks Greg. Excellent points.
Yeah, the details of the thought experiment (like most philosophical thought experiments) don’t bear scrutiny. We’d really have to postulate something like Mary being forced to wear goggles her entire life that only render everything in gray scale. Even then, assuming she has a healthy retina, she can close her eyes and probably see flashes of color spots. Eliminating it entirely would require us to actually alter her nervous system.
What’s interesting about even that is that our perception of white in well-lit conditions actually comes from equal stimulation of all of the color cones at once. We perceive color when those get out of balance.
There’s also been speculation from neuroscientists that it would take time for Mary to learn what the stimulus means, that is, for the sensation of redness to have anything like what it means for everyone else, and that missing critical period development with no color stimulation would probably mean she’d never perceive it in the way we do.
There’s also the study James Cross shared on color-blind monkeys that had some of their cones altered to be stimulated by different wavelengths, indicating there is some possibility of gaining the ability later in life, although their ability to discriminate red wasn’t as good as that of the females of their species, who grow up with it.
“If Mary understands the full effects of hearing the symphony on her nervous system” then she does not learn anything new. Respectfully, that is not an argument, it’s an assertion. This is much more than “an increasingly strained proposition.” It is a flat negation of the assumptions of the thought experiment. Not to be rude, but in Jackson’s version she simply does not see color, in mine she cannot hear. Something new is introduced. I submit that you cannot deny that. This is what makes Jackson’s thought experiment so very interesting. Type-A materialism, which you claim to adhere to, seems to force one into some sort of fundamentalism about the material world.
As I noted earlier, this is not my area. But I think I was describing a version of what could be called emergent materialism, if you really need labels. I like to shun labels because the labels themselves tend to influence the debate. But, to use a very crude example: as a young man I owned a small British sports car. I could not EXPERIENCE the thrill of speeding down country lanes unless I got in, started it up and went for a drive. However, later on, if I took apart my roadster piece by piece and showed it to some cave man, I could NOT demonstrate to him that experience—that experience emerges from a certain complex functioning of my roadster. Moreover it was uniquely my experience.
We humans chop up the world in many ways. That does not mean that our chopping up and “labeling” is actually the discovery of something different. My roadster is the same roadster whether parked or zooming down a country lane. But, as I tried to say previously, the experience is causally reducible but not ontologically reducible. I think the difficulty may be because Mary experiences the color or hears the symphony as a subjective or personal experience. The ontology is personal to her. It cannot be reduced to or described in objective terms independent of her experience.
So, within the description of the thought experiment, Mary either has all the physical information or she doesn’t. If she does, and the physics fix all the facts, then she learns nothing new with the actual experience. If she does learn something new, then she didn’t have all the facts. To say she had all the physical facts but insist she still learned something new is simply to beg the question in favor of dualism.
Now, you can say it’s implausible that Mary would have that degree of understanding of her own nervous system. But that’s the premise of the thought experiment. If it isn’t, if in fact she only has a subset of the physical information, then she well could learn something new, but that has no implications for physicalism.
If you see any non sequiturs in the above, please do point them out.
I can understand your concern with labeling, although you don’t seem to mind using them when it suits your purpose. (Such as flinging “fundamentalism” at me.) But communicating about different positions is difficult without at least some labels.
I disagree that the ontology of Mary’s personal experience can’t be reduced. Certainly, within Mary’s subjective perspective, there are serious limits to how much it can be reduced from within that perspective. That’s because she doesn’t have introspective access to most of the functionality that produces the experience. But objectively, there’s no reason in principle it can’t be reduced.
It’s like the fact that my computer’s operating system has limited ability to monitor the state of many of the system’s components. There just isn’t wiring for it. But if we give the system all its design diagrams, it can use them to reconcile with its internal models of its own operations. However, it will ultimately only ever have correlations between its internal models and that external diagram. The same is true for humans.
I apologize about the fundamentalism reference, I agree that it also has an unfair negative connotation. I seek your forgiveness.
So, “Certainly, within Mary’s subjective perspective, there are serious limits to how much it can be reduced from within that perspective.” Yes, there are limits. And I would agree with you that they are “serious limits”—like it cannot be achieved. My point exactly.
But then you rebut that observation with something that takes it all away again: “That’s because she doesn’t have introspective access to most of the functionality that produces the experience.” To me that sounds like Mary can’t see color and can’t hear sounds because she has no access to that very subjective experience she would have if she did subjectively experience the color or the sound. That would be a neat trick. Mary experiences the color or hears the symphony as a subjective or personal experience. The ontology is personal to her. I respectfully submit that it cannot be reduced to objective terms independent of her experience. No amount of bold assertive insistence can convince me that one can know a subjective experience through objective terms independent of that experience.
No worries. We’re good. Although I’m grateful for the olive branch!
The thing to consider about experience is that the various qualities are constructions of your brain. You have no access to the details of that construction. Consider vision. When you look around, you perceive a rich field of vision with color everywhere. But that’s not what your retina receives. It has a small area in the center with high acuity and color, but as you move further out, it becomes less colorful and the resolution drops. That, and there’s a blind spot where the optic nerve exits the retina. That we don’t perceive things that way is due to all the work our nervous system does before we’re conscious of it.
So for Mary to see color and hear sounds, a lot of work has to go into it before her perception of the color or the sounds. She doesn’t have access to that work, and so from within her subjective experience, it’s irreducible.
But as I described with the computer example above, there’s no reason to consider it objectively irreducible. It should be practical to map every detail of her experience to objective activity in the brain, and then reduce it as needed from there.
I understand this is very counter-intuitive. But science often is.
Actually a form of this experiment has been done and Mary would learn something.
Researchers were able to modify some of the green light cones in the eyes of male squirrel monkeys so they would be sensitive to red. The monkeys which couldn’t previously distinguish red dots in an image could distinguish them after the modification to the eyes.
Mary would learn to distinguish colors.
I remember our conversation about this study a while back. It’s indeed very interesting.
But it’s not the same scenario as Mary’s room. The monkeys don’t have Mary’s purported knowledge, including a complete understanding of all the physical dynamics in their own nervous system.
Still Mary would learn to distinguish colors with real world consequences. Difference between red flashing light or yellow at an intersection. Difference between a green banana and a yellow banana. There are evolutionary consequences to qualia.
I agree completely. But none of that pertains to what the thought experiment is supposed to be showing.
“This might include Mary simulating the effects on her own nervous system without photons of the relevant wavelength ever striking her retina.” – That would certainly do the trick, wouldn’t it Mike! I’ve never heard that suggestion in any response to this thought experiment – not even Dennett’s response. I also think it’s the most compelling response I’ve ever heard. Although those sympathetic to Jackson’s thought experiment might disallow this (I’m not sure on what grounds), I think you’re right on. Brilliant.
As I argue below I think that is an implicit violation of the conditions of the experiment.
If we include experiential information in all “physical information”, then Mary cannot possibly be in a room without red. The thought experiment would be a paradox like the statement: “This statement is a lie.”
Thank you James. I think I see your point. You’re disallowing any information that involves the experience of color, regardless of how that experience is induced. Is that correct?
‘If we include experiential information in all “physical information”’ – I think that’s where I must disagree with you. Everything Mary learns (through books, black and white video lectures, etc) is learned via experiential information processed by her sensory system. In the case Mike proposes, Mary is obtaining information about the sensation of the color red in the absence of the stimulant her sensory system is designed to sense; that is, she sees red in the absence of anything red in her environment or in her sensory system.
I guess we disagree, but it seems to me this method of obtaining information meets all the conditions imposed by Jackson’s thought experiment, although I’m perfectly willing to be disabused of any error in my logic.
This is a thought experiment, not a legal contract where we are looking for loopholes to violate the terms of the contract.
BTW, if you want to turn this into a legal contract, then let’s add the condition to the experiment that Mary is prohibited from taking any action to generate the experience of red.
We even have examples of the experiment you describe, e.g. the “Jennifer Aniston neuron” experiment, or countless accounts of patients reporting sensations upon direct electrical stimulation of some particular neuron or brain region.
Thanks Jim! Truth be told, I think Dennett (or another philosopher in the same camp) did use this as a point somewhere, although I can’t remember where.
I have seen people disallow this as valid information, but if they do that, it makes the thought experiment moot. Mary either has “all the physical information” or she doesn’t. If she doesn’t, then learning something new with the experience has no metaphysical implications.
Now wait just a darned second! If Mary makes this simulation, and it’s got the right properties (for example, it’s sufficiently detailed) to create a conscious experience – that’s still not MARY’S conscious experience. It belongs to a new being, Simu-Mary.
On the other hand, if Mary isn’t merely simulating the effects on her nervous system, but *inducing* the effects on her nervous system, then James Cross’s objection applies.
Back to the Wikipedia article, I think the thought experiment does show this.
First, if Mary does learn something new, it shows that qualia (the subjective, qualitative properties of experiences, conceived as wholly independent of behavior and disposition) exist. If Mary gains something after she leaves the room—if she acquires knowledge of a particular thing that she did not possess before—then that knowledge, Jackson argues, is knowledge of the quale of seeing red. Therefore, it must be conceded that qualia are real properties, since there is a difference between a person who has access to a particular quale and one who does not.
However, the second part is not necessarily true.
Jackson argues that if Mary does learn something new upon experiencing color, then physicalism is false.
Not necessarily true because qualia could be physical, even though they do not appear to be. We do not know if they are physical.
I am not sure whether this would point to a contradiction in the original conditions but I don’t think it would since, as written, it is said Mary has access to all the “physical information” about colors, which I take to mean factual or abstract knowledge not experiential. If taken to include experiential knowledge, then the whole thought experiment is paradoxical and meaningless.
I would argue the distinction between mental and physical to be incorrect.
“This might include Mary simulating the effects on her own nervous system without photons of the relevant wavelength ever striking her retina.”
I would regard this to be an implicit violation of the thought experiment, and it shouldn’t be considered.
I agree that the thought experiment doesn’t demonstrate that physicalism is false. (Jackson himself later agreed that it doesn’t.)
If we limit Mary’s information to just symbolic communication, then she no longer has all the physical information. A lot of people do say her having experiential knowledge violates the thought experiment, but under physicalism, that’s what the premises entail. If we alter those premises to add the stipulation that she can’t successfully imagine such experiences from her detailed knowledge, then the thought experiment still doesn’t demonstrate non-physicalism.
Either way, the thought experiment has no implications for physicalism.
When I posted about this last year, I referred to it as “one of the more horrific examples of virtual personal enslavement in the service of philosophy.” Keeping poor Mary locked up in a colorless room for so many years is much worse than Schrödinger killing cats! 😮
I honestly don’t see, even under type-A materialism, how it’s possible for Mary’s brain to experience the mental states of seeing red without actually seeing it no matter how much she understands about what that experience would be like.
“This might include Mary simulating the effects on her own nervous system without photons of the relevant wavelength ever striking her retina.”
How, specifically, do you propose she does that? If you mean by working it out on a computer or by analysis, that won’t produce in her brain the necessary mental states. The only way for Mary to ever have *experience* knowledge is to have the brain states involved.
“When she does have the actual experience for the first time, there should be no surprises. If there are, then she didn’t really have all the physical facts.”
I think the error is in the assertion that it’s possible for Mary to have all the “physical” facts through analysis alone. That is simply not the case, and it has nothing to do with dualism.
The bottom line is it’s a cruel experiment that deliberately denies Mary the physical facts of personal experience.
Damn. Copied the wrong link there. The link to the post I mentioned is:
I actually have sort of a personal experience of Mary’s room, and I bet most of us have something similar (in my post I mentioned sex as something many have read a lot about before actually experiencing it).
I’d been interested for a long time in skydiving, but didn’t actually do it until I was in my 40s. I’d read about it, watched movies and videos, and knew a great deal about it intellectually. But the actual experience… well, it’s famously beyond words.
Seeing the Northern Lights up close for the first time, or Grand Canyon, might also compare. Another one for me was seeing Saturn through a telescope for the first time. Photons from the sun went to Saturn, bounced off it, and returned to my eye… it was profound and breath-taking.
Mary could, through her knowledge of eyes and brains, devise a device to electrically stimulate her brain or optic nerves to generate red. (Actually, most people will see red spots if they rub their closed eyelids.)
But it is a thought experiment and the point is you can’t experience something by reading about it or learning about it, which is what you are saying, I think. So, if you violate the conditions by actually generating the experience of red, it really isn’t different from having a friend slip a red rose into the door, which will make Mary’s stay in the room just slightly better but not much.
That we can see color by rubbing our eyes is just one more thing that makes this a philosophy phantasy. It’s really not a possible scenario. (In my post, exactly as you suggest, someone sent Mary a red rose, and it accidentally slipped through the massive security apparatus that shields poor Mary from ever seeing color.)
I agree that magically creating the necessary brain states of experience is tantamount to having the experience. As you say, my point is that having the experience — real or simulated — is the only way to have those brain states.
Definitely it would be a monstrous experiment to actually conduct, guaranteed not to be approved by your local IRB committee. And as we’ve all pointed out, it’s really not possible in any practical sense. But you’re not supposed to bring up those kinds of issues in discussions about philosophical thought experiments.
But that cuts both ways. If we’re not allowed to bring up those practicalities, then we’re also not allowed to ask how Mary could possibly acquire enough knowledge about the effects of red on her nervous system to have full knowledge of the experience without the experience. All we can say, is that under physicalism, if she really has all the physical information, then she has that information. If she doesn’t, then she doesn’t have all the physical information and the premise of the thought experiment is violated.
[wanted to point out that the experiment is easily possible with the use of sodium lights, but still monstrous, obv. Also, still subject to the eye-rubbing work around]
[also, I put this here for the sake of those who, like myself apparently, don’t read all of the other comments before hitting “Post Comment”]
The banter about the experiment itself is just that, banter, no more. The point is there is a difference between objective knowledge and subjective experience. One can *know* everything about something, but actually experiencing it is a different class of knowledge (knowledge of as opposed to knowledge about).
“All we can say, is that under physicalism, if she really has all the physical information, then she has that information.”
You seem to be implying that suggests she is also able to obtain knowledge of.
How is that possible without actually having the brain states of seeing red?
That’s the key here. The mental states involving knowledge about are different from the mental states involving knowledge of. How do you suggest Mary is able to have the brain states involving the latter other than through experience?
It has nothing to do with dualism or physicalism. The inability of the system to have the *experience* system states without the experience is true regardless of exactly how the system works.
I’ve noted a few places in this thread that she couldn’t have the experience without putting her nervous system in that state. (And a lot of you have said that if she did so without actually having the photons hit her eyes, that would be cheating. I think that’s wrong, but for the sake of argument, we can rule it out.)
But I’ll ask you the same question I asked Matti, assuming complete knowledge of the nervous system (which is the premise) and the associated technology, what aspect of the experience would be inaccessible to her from an objective standpoint?
“…assuming complete knowledge of the nervous system (which is the premise) and the associated technology, what aspect of the experience would be inaccessible to her from an objective standpoint?”
Do you not believe there is a difference between knowledge about and knowledge of? What about my example of skydiving? Do you hold that sufficient knowledge about what it’s like would mean that actually jumping out of a plane would not result in new personal information?
I’ve asked before exactly what you mean in saying Mary has all the knowledge. What “associated technology” does she have? How are you claiming Mary experiences the brain states associated with seeing red? If she only knows about them, then she does not have the knowledge of them.
The answer to your question is that the brain states of actually seeing red are inaccessible to her without either actually seeing red or personally stimulating her mind as if she were seeing (or once did see) red. I don’t care if that’s called cheating or not — the point is, without actually having those brain states, Mary does not, and can not, “have all the knowledge.”
If you’re going to stipulate Mary has some way of altering her brain such that she either sees red or remembers seeing red, that’s fine — Mary has knowledge of seeing red. Because she’s effectively seen it. I’m fine with that.
There’s no magic here. Back to the skydiving example. Imagine knowing everything possible about skydiving. Imagine a perfect brain scan before and after actually jumping. Those would be different scans — the brain would be different. The only thing is, as far as we know the only way to accomplish that brain change is through actual experience. Demystify it by calling it training of the neural net — we only know one way to do that: with input.
I replied below.
“If you mean by working it out on a computer or by analysis, that won’t produce in her brain the necessary mental states.”
Huh, I should have read all the comments and then just said “what Wyrd Smythe said”.
I’ve long been of the opinion that more people should adopt that way of thinking,… 😉
Mike, thanks for understanding my unkind exuberance. However, I think you are making my argument. Except, of course, for your final destination which includes the statement that the “science” is sometimes counter-intuitive. Yes, sometimes it is. But, Mike, I don’t think there is any science to your position—it’s just one of several possible analyses of the situation.
Jim Gregoric claims that “Mary is obtaining information about the sensation of the color red in the absence of the stimulant her sensory system is designed to sense; that is, she sees red in the absence of anything red in her environment or in her sensory system.” That seems to be the only argument to make. Although it is incomprehensible. And, as you say, counter-intuitive. Unless you have a neat argument (which I haven’t heard) that one can jump from abstract knowledge to experience, which I doubt, it sounds like you have to stick an electrode into Mary’s brain to give her the sensation of red. And that would be an artificial way to give her that same experience. Not what the thought experiment was all about. As James Cross succinctly states: “If taken to include experiential knowledge, then the whole thought experiment is paradoxical and meaningless.” You are dead-on James Cross.
As I and several other people in this thread have noted, actually conducting this experiment in real life is probably impossible. There are just too many ways Mary could stumble over the sensation of red, even while in the room. We’re not supposed to concern ourselves with those difficulties when thinking about philosophical thought experiments.
But as I’ve also noted, that cuts both ways. You’re pointing out practical difficulties, such as sticking an electrode in her head, with acquiring the relevant knowledge. But the thought experiment doesn’t get into those details. It merely states she has all the physical information.
The same holds for her having enough symbolic knowledge to know the experience. If she understood every effect the stimulus would have on her nervous system bar none, then she’d know what the experience would be. She couldn’t have the experience, but she could know everything about it. It’s impossible to imagine a human having that degree of knowledge (and going the simulation route would be easier), but again, we’re introducing practicality into the philosophical thought experiment. You can’t introduce it for the aspects you dislike but ignore it for the ones you like, at least not while staying consistent.
As it stands, I actually do think the whole thought experiment is meaningless. That’s the problem with it. It gets people excited over nothing.
[because I can’t help myself]
1. First, the experiment is doable, physically, if not ethically. I think I pointed this out in a different post, but if you make all the light sources in the room come from sodium lights, everything will be shades of orange, including blood from accidental cuts, etc.
2. That said, there is still a way to experience red in the room without red photons and without resort to electrodes in the brain. Close your eyes and push on them with the palms of your hands, holding the pressure for a while. You should cycle thru several colors, including red, blue, green, yellow, orange. Mary wouldn’t know which is which until she leaves the room, but then she would be able to say “oh, red is that one.”
3. I don’t think the whole experiment is meaningless. I think it points directly at what people mean when they talk about qualia. In reference to my optic fiber version of the experiment above, qualia comes from the ability to distinguish one thing from another, one fiber from another. In describing the distinction made, one can only refer to the difference between one fiber and the next, and the only difference between them is what causes the one to turn on and not the other, i.e., the pattern recognized.
[just now saw that point 2 was made above by others. Ah well.]
I’m not sure on the sodium light thing. It would definitely put the room in a yellowish-orange hue. But everything she saw in the room, including her own skin, would have noticeable differentiations. Her perception of color would be skewed in that scenario, but I’m not sure it would be absent.
As I noted before, I think the experience of a color is far more than just the different firing patterns of sensory input. But the main point is that if Mary understood all of that, she’d have full knowledge of what to expect.
Even if we imagine the experiment as plausible, the idea that Mary would have that level of knowledge, at least with current science and technology, isn’t plausible.
“If she understood every effect the stimulus would have on her nervous system bar none, then she’d know what the experience would be.”
She might be in a position to anticipate what having the experience would be like — perhaps by reference to something similar she has experienced — but she would still only know about the experience, not of it.
Again, it’s all down to the necessary brain states. Only experience (or some form of magic) can produce them.
I’m linking to the response I just made to your comment above, since it pertains to the same topic.
I was not (heaven forbid!) in any way suggesting that in order to get the science we perform the “Mary” experiment. I was merely pointing out that your analysis of the situation, like mine, is not science. It is our attempts to work through the thought experiment. Thus, your digression that sometimes science is non-intuitive neither adds to nor subtracts from what’s been said so far.
But, if we are making arguments, then I agree with you that your position is non-intuitive and I think my position is an acceptable alternative analysis that does not require the suspension of intuition. In short, material causal reducibility does not require ontological reducibility. That is, Mary experiences the color as a subjective or personal experience. The ontology is personal to her. Her experience is not unreal or an illusion (as some try to argue) because it cannot be reduced to objective terms independent of her experience. Her experience is causally reducible, i.e., explainable in physical terms, but not ontologically reducible. What’s so odd about that?
As I also said, we get tripped up because we are trying to move from subjective experience (which I don’t doubt is real) to an objective explanation independent of that experience. I think that’s not possible. I simply cannot understand or agree with your analysis that: “If she understood every effect the stimulus would have on her nervous system bar none, then she’d know what the experience would be.” That means her abstract objective knowledge, in some unexplainable manner, gives her the subjective experience of red. She moves from objective terms independent of subjective experience (her abstract knowledge) to having the subjective experience. Unless you are sneaking the subjective experience in back door I can’t see how that’s possible. I suggest more steps are needed to get to your conclusion.
I only made the point about the plausibility of the overall experiment to point out that talking about the plausibility of her having that knowledge was in the same category.
Assuming technological issues aren’t a limitation, what about Mary’s experience do you think is inaccessible from an objective perspective? I’ll grant she can’t have the experience without actually putting her nervous system in that state. But what knowledge in the experience would be inaccessible to her from an objective standpoint?
Remember, her knowledge of the nervous system is complete. She understands all the processing in V3 / V4, where the discrimination between colors takes place and how, where they converge on the concept of a particular color allowing for self report of that color, and all the cascades of associations that are triggered from that discrimination, both conscious and unconscious, including the ones that in turn trigger particular affects. She also is aware that her particular experience of color will be an unusual one, since it won’t yet have all the unconscious associations we learn throughout our lives, although she may have innate ones. For example, it will probably still be vivid to her.
For somebody with that knowledge, and assuming she has the technology to go along with it, what part of subjective experience would be inaccessible to her?
You asked, “But what knowledge in the experience would be inaccessible to her from an objective standpoint?” Quite simply, the unique subjective experience of seeing the color red.
I’ve contributed five entries to this topic. I fear I may become hopelessly redundant. I hoped, for example, that my clumsy analogy to a sports car would be helpful. Perhaps it was not. The vehicle is causally reducible to its physical parts. One can certainly share the engineering knowledge of how the vehicle functions. Yet one cannot, by explaining each separate part, give someone else the knowledge of what it was like to experience a drive down a country road. The experience is uniquely personal.
Mary’s new experience of color is somewhat like that—personal and subjective. Yet I think few doubt that such experiences are real—subjective but ontologically real. If we accept that such experiences are indeed part of reality, then it may be (I think it is the case) that the knowledge of such subjective experiences cannot be acquired by greater and greater amounts of abstract objective knowledge. More and more knowledge of the physical parts, their function and their interplay, will not give Mary those experiences.
The problem, as I said, is the very problem of subjective knowledge and objective knowledge—you cannot increase the “quantity” of objective knowledge to acquire a different “quality” of knowledge. It’s apples and oranges. One cannot buy more and more apples and expect that somehow they turn into oranges.
I assume that part of the difficulty with this particular analysis is that it may threaten or contradict other accepted and important concepts about reality. And I certainly understand that. And, indeed, it should be examined from that standpoint. But I don’t think it’s really much of a threat. This is not an idealist analysis. And it’s not a dualist analysis. If it was either of those I wouldn’t advocate for it. As I’ve said, Mary’s experiential knowledge of color is causally reducible to physical explanations. I am merely arguing that Mary’s unique subjective experiential knowledge is irreducible. I advocate no conclusions beyond that.
I think the unique subjective experience of seeing red is a physical process. If it is a physical process, then every detail of that process is, in principle, knowable. Are there practical difficulties? Absolutely. But as I’ve noted to others, if we’re going to bring in practical difficulties, most of the philosophical thought experiment enterprise falls apart.
On the sports car analogy, if you know every detail of the workings of the car, and you understand the details of the roads it’s going to be driven on, the gas that’s going to be used, etc, then you’ll be able to deduce how the steering wheel will feel, how rough or smooth the ride will be, how much acceleration will be experienced when it’s punched, etc.
Of course, to get into your personal experience of driving the car, we’d need Mary’s knowledge of your nervous system.
As I noted to Wyrd, I think the only reason we’re tempted to think there’s some in principle uncrossable divide between objective knowledge and subjective knowledge is residual dualistic intuitions. Everyone keeps saying this isn’t about dualism, but if we remove dualism from the picture, truly remove it, then the issue should disappear.
I do agree that Mary’s subjective experience is irreducible, but only from the subjective perspective. From the objective perspective, it is reducible.
Of course, that doesn’t mean objectively studying the system allows us to have that system’s experience, but assuming there’s knowledge only obtainable from having the experience is inherently dualistic.
“I do agree that Mary’s subjective experience is “irreducible”, but only from the subjective perspective. From the objective perspective, it is reducible.” Mike, I added the strategic quote marks to your quote! So, maybe we agree after all. I don’t disagree with anything there!
But, sorry Mike, I respectfully do disagree that my position is dualistic. And I know of working philosophers who think so as well. Actually, I think you and others are the ones still haunted by Descartes’ unfortunate mind-body dichotomy. Time to get over it! Early on in this discussion remember that I talked about humans cutting up knowledge into convenient parts and sticking labels on the parts. Sometimes that categorization process tends to handicap our thinking. I think that’s what’s going on here. Human physiology has numerous functions: cardio-pulmonary, digestion, immune responses, procreation, and cognition being a few. All of them are caused by, and thus reducible to, objective physical facts. Descartes gave us a mind-body dichotomy. But cognition is a normal function of the body, like our immune responses, or our heart beating. We don’t seem to have a heart-body dichotomy or a digestion-body dichotomy do we?
It’s only because nature gave us certain tools of cognition which enable us to grasp onto and know our environment that also, perhaps necessarily, result in subjective experience (which you agree is “irreducible”) that René Descartes was able to fool himself and us into that unfortunate false dichotomy which we call dualism. I’m emphatically not a dualist. All our cognitive powers are causally reducible to objective physical facts. That I think is beyond question. Our consciousness is not separate from our physiology. It’s a state that our brain is able to achieve which happens to produce an irreducible subjective experience. That’s not dualism any more than digesting my breakfast is dualism.
As a concrete analogy, knowledge about something is Searle’s Giant File Room. It’s how we typically write computer programs — they embody knowledge about. It’s the original attempt at AI — “expert systems” preprogrammed with knowledge about stuff.
Knowledge of is what we do with neural nets — give them inputs (experience) so they can recognize similar experiences (as being similar).
Those two are entirely different kinds of systems. It’s easy to reverse engineer — to deconstruct — the former, but notoriously hard to unpack the holistic nature of the latter. Experience itself is holistic in nature — which is exactly why it was so hard to unpack to create those expert systems.
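The contrast between the two kinds of systems can be made concrete with a toy sketch (all names here are invented for illustration, not from any real AI library): a preprogrammed fact table that can answer questions *about* red, versus a system whose internal state only comes into being by actually receiving inputs.

```python
# Toy contrast: "knowledge about" (a turn-key expert system built from
# explicit facts) vs "knowledge of" (state produced only by experience).

# Expert system: preprogrammed facts about color. It can answer any
# question in its table without ever having "seen" anything.
expert_facts = {
    "red_wavelength_nm": (620, 750),
    "retinal_cone": "L-cone",
    "v4_role": "color discrimination",
}

def expert_answer(question):
    """Answers preprogrammed questions; knows *about* red."""
    return expert_facts.get(question, "unknown")

# Stand-in for a neural net: its recognition state is empty until it is
# actually trained on inputs, i.e. until it has the experience.
class Experiencer:
    def __init__(self):
        self.seen = set()  # empty before any input

    def observe(self, stimulus):
        self.seen.add(stimulus)  # state change requires actual input

    def recognizes(self, stimulus):
        return stimulus in self.seen

mary = Experiencer()
assert expert_answer("v4_role") == "color discrimination"  # knowledge about
assert not mary.recognizes("red")  # no knowledge of, pre-experience
mary.observe("red")
assert mary.recognizes("red")  # knowledge of, gained only via input
```

The design point is the asymmetry: the fact table can be constructed from scratch, but the `seen` state (standing in for trained weights) exists only after input has been presented.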
I’m not sure what you mean by the distinction between knowledge about vs knowledge of. I’m guessing you mean knowledge obtained from how you feel from direct sensory impression vs knowledge you obtained through a thorough analysis of your own system. But if your knowledge of the latter is complete, then there should be no difference.
On Mary’s knowledge, for it to be relevant in the way the thought experiment implies, she has to have knowledge of all the physical facts involved. The associated technology is whatever she needs to acquire that knowledge. Again, if we’re going to bring in practicality, the whole thought experiment collapses under its own weight.
Why would the brain states of seeing red be inaccessible to her? If she understands the workings of every part of the brain, and all the activity throughout, then no state should be inaccessible.
Yes, the brain changes with experience. But if you understand how every component works and how they interact, then you should be able to predict what changes it will undergo on any particular set of stimuli, and how it will affect future brain states. You should be able to map those states to any self report the system may make. In other words, you should be able to know what the system is thinking and feeling.
I think the only reason we’re tempted to conclude otherwise is the vestiges of dualism. Consider whether we’d be tempted to have this conversation about a robot that could navigate its environment, have preferences about the state of affairs, and self report some subset of its processing.
On the neural net angle, yes there are practical difficulties. But again, practicality kills the whole thought experiment.
“I’m not sure what you mean by the distinction between knowledge about vs knowledge of.”
I’ll try to clarify. I think the skydiving example is a good one. Imagine you (like Mary) studied extensively, but the topic is skydiving, not color. But you’ve never actually jumped. As a result, you know everything there is to know about skydiving — all the procedures, actions, physics involved, everything. You could answer any question anyone could ask. You have knowledge about skydiving.
But you don’t know what it’s subjectively like. That requires knowledge of skydiving. Knowledge of something is gained only through subjective experience.
You might know that skydiving is scary and produces all the associated physical feelings. You might know about the various experiences one is likely to have based on descriptions from others. But until you’re actually in a plane wearing a rig and they open the door and you jump out of it, you have no idea what it’s really like. No amount of study or description can convey the vivid subjective experience.
“But if your knowledge of the latter is complete, then there should be no difference.”
I think that’s just plain false. Subjective experience is a type of knowledge that cannot be obtained through learning.
“The associated technology is whatever she needs to acquire that knowledge.”
You haven’t answered the question I asked in my first comment: “How, specifically, do you propose she does that?”
Later I asked: “How are you claiming Mary experiences the brain states associated with seeing red?”
This is important. Are you, in fact, claiming the statement “Mary possesses all the physical facts,” entails the physical fact of her brain having seen red? If not, how is she in possession of subjective knowledge?
“Why would the brain states of seeing red be inaccessible to her?”
Because, by definition of the experiment, she has never seen red and is not allowed to see red.
How would those brain states be available to her? Are you suggesting one can just intuit subjective experience? That Mary, because of all her facts, can just imagine red?
“But if you understand how every component works and how they interact, then you should be able to predict what changes it will undergo…”
Well, sure, but that’s not the same as having them.
“In other words, you should be able to know what the system is thinking and feeling.”
I agree, but that’s not the point. The point is you can’t share that system’s subjective experience.
“I think the only reason we’re tempted to conclude otherwise are the vestiges of dualism.”
No. Absolutely not. This has nothing to do with dualism. The whole point of my neural net analogy was to avoid any such notion.
“Consider whether we’d be tempted to have this conversation about a robot that could navigate its environment, have preferences about the state of affairs, and self report some subset of its processing.”
How are you suggesting that would change anything?
I’m not sure exactly what you’re suggesting, so I’ll punt: Imagine a robot with a malfunction that prevents it from seeing color — it sees only B&W. Its memory includes all physical facts about color. It can answer any question (it’s Searle’s room when it comes to color).
It also knows all the physical facts about temperature (and many other things about the world). However its temperature sensors work properly. Its memory includes real-time readings experienced by its temperature sensors — it has direct knowledge of temperature.
But it does not have similar readings from its malfunctioning color receptors. Like Mary, it has no knowledge of color.
When its color receptors are fixed, does it experience a new form of data? Of course it does.
“On the neural net angle, yes there are practical difficulties.”
What practical difficulties do you mean? (If you mean about unpacking them, yes, true, and people are working on that, but that aspect was an aside. The point is it’s not possible to create a trained neural net from scratch. They have to be trained. In contrast, expert systems are always built from scratch. They’re “turn key” systems.)
On skydiving, for it to be the same as the scenario in the thought experiment, Mary would have to be an expert in skydiving, but also an expert in its effects on people’s nervous systems. Together with her expertise on her own nervous system, including her dispositions, phobias, and other psychological factors in comparison to other cases, she should be able to put together a complete profile on what the experience would be for her.
On the robot, it shouldn’t change anything. That’s my point. But we can more easily imagine a robot obtaining external information, and constructing in its system the same image that would be built from sensory input. (We could also imagine it retrieving external images from other sources, but I’m sure that will also be seen as cheating.)
Indeed, if there’s a version of this experiment that actually could be done in practice, it would be a robo-Mary scenario, where the central protagonist is a robot with its color detectors turned off. Such a robot could have complete access to all its engineering diagrams. It should be able to investigate every effect of those color detectors being triggered throughout its system. Maybe it does so by running a copy of itself and simulating the results to see all the effects, including any neural network training.
Now, at no time prior to the color discrimination sensors being turned on, did its own internal models enter the state it would have entered. But there’s no reason it couldn’t completely predict all the effects it undergoes when they are turned on. It might observe some variances from the simulation when it actually is able to do the discrimination, but they should fall within known constraints of uncertainty.
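The robo-Mary move described above can be sketched in a few lines (a minimal toy, with invented class and field names, not an engineering proposal): the robot predicts its own post-activation state by running the activation on a copy of itself, so its own state never enters the predicted state until the sensors are actually turned on.

```python
import copy

class RoboMary:
    """Toy robot whose color channel starts disabled."""
    def __init__(self):
        self.color_enabled = False
        self.state = {"saw_red": False}

    def enable_color_and_look(self, stimulus):
        """Turn on the color sensors and process a stimulus."""
        self.color_enabled = True
        if stimulus == "red":
            self.state["saw_red"] = True
        return dict(self.state)

    def predict_post_activation_state(self, stimulus):
        """Predict the after-state by simulating a copy of itself,
        without altering this instance's own state."""
        sim = copy.deepcopy(self)
        return sim.enable_color_and_look(stimulus)

robo = RoboMary()
predicted = robo.predict_post_activation_state("red")
assert predicted == {"saw_red": True}    # complete prediction...
assert robo.state == {"saw_red": False}  # ...without entering the state
actual = robo.enable_color_and_look("red")
assert actual == predicted               # no surprises on activation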
The only reason human Mary wouldn’t be able to do what robo-Mary does are limitations in current science and technology. But again, if we apply practicality to the human version, we have to throw the whole experiment away anyway.
“Now, at no time prior to the color discrimination sensors being turned on, did its own internal models enter the state it would have entered. But there’s no reason it couldn’t completely predict all the effects it undergoes when they are turned on.”
I think we’re talking about two different aspects of this. I’m talking about the first sentence; I think you’re focusing on the second one.
In your view of the robot, you don’t say whether or if the robot places its own mind in the predicted states. From the language I think no? It matters which.
If your statement is that prediction alone is sufficient for experience — that some putative perfect prediction would result in literally no new information being presented to the mind, I think that’s flat out false. There is a before-mind and an after-mind, and the only difference is the after-mind has subjective experience of color.
No one is disputing the ability to predict. It’s that prediction doesn’t equal experience. Ever. It’s about a mental system that either has, or has not, experienced certain states.
If your statement is that the robot is able to place its mind in an after state — such that it effectively sees, or has seen, color — then I agree with the caveat that it’s the same as ending the experiment by fixing the robot’s sensors.
But all along I’m focused on the difference between the before-mind and the after-mind. If you want to include the technology that alters Mary’s before-mind to her after-mind, I’m fine with that (with the above caveat).
I’ll come back to the question I asked early on in this discussion. What information does the experience provide that a complete understanding of the experience doesn’t? What does such a system understand, either about the world or about itself, that it didn’t understand before the experience?
“What information does the experience provide that a complete understanding of the experience doesn’t?”
Same answer I’ve given all along: knowledge of having the subjective experience.
You haven’t been explicit about what you think “complete understanding” means, despite my asking repeatedly. Are you making the claim that “complete understanding” entails subjective experience? Or are you depending on magic technology that can alter Mary’s mind?
And knowledge of having the experience entails, what? What specifically does she learn that a thorough knowledge of all the sensory processing, affective reactions, and associated memories triggered, doesn’t provide?
The answer to your question was in the snippet you quoted in your previous response. I’ve said multiple times in this thread that it can happen either with or without simulating the experience in her own system. For discussion, let’s assume she doesn’t invoke the experience in herself prior to getting out of the room. (Or in the case of robo-Mary, prior to having the color sensors turned on.)
“And knowledge of having the experience entails, what?”
The memories of actually having that experience. She cannot create those memories without actually having that experience.
Unless you claim mind-altering technology. Which it appears you don’t:
“I’ve said multiple times in this thread that it can happen either with or without simulating the experience in her own system.”
Yes, and I’ve addressed both possibilities.
“For discussion, let’s assume she doesn’t invoke the experience in herself prior to getting out of the room.”
So you’re saying that Mary, because of all her facts, can just intuit the subjective experience of red?
I’ve been pretty clear all along I don’t think that’s true. Mary cannot create memories of an experience by knowing all about how having that experience would alter her memories. Those are different types of knowledge.
Well, apparently for you, “the experience”, is a meaningful answer. For me, it isn’t.
I don’t think Mary has to intuit anything because there’s no hidden knowledge. (See above.)
I fear we’re at an impasse and will just have to leave it there.
“Well, apparently for you, ‘the experience’, is a meaningful answer. For me, it isn’t.”
Yet you cannot explain how Mary is able to have memories of experiencing color.
Ah, a little bit of drilling down! Color is a galaxy of associations between sensory discrimination, affects, and memories, both at a conscious and unconscious level. To meet the premise of the thought experiment, Mary would understand all of that. Everything the color experience conveyed, she would already know. (She would also know that, due to her deprivation, her own associations would be sparse compared to a regular person’s.)
So you agree that, in the richness and vividness of the actual experience, Mary does gain new information.
Suppose we have two contemporaneous versions of Mary the Color Scientist. The only difference between them is that Mary1 has never had the experience of seeing a red apple, and Mary2 has had that experience.
Mary1, having never had the experience of seeing a red apple, could nevertheless answer *any* question concerning “what it is like” to have the experience of seeing a red apple, just as well as Mary2, who *has* seen a red apple. By tracing through the series of events involved in the sensory reception, sensory processing, and reporting of the experience, say with the help of a Vast (Dennett’s term for an impossibly large quantity) lookup table, Mary1 could describe the experience of Mary2 with exactly the same vividness and completeness. To the objection that it is impossible that both versions of Mary could exist, one could say that Mary1 could say with certainty exactly what she *would* say if she *did* see a red apple.
Consider someone who is expert in the construction and operation of mechanical clocks, or the cognitive system of a nematode, a frog, a bat, or even a human being. In each of these examples of agents on the spectrum of cognition, *every* response of the agent to any given sensory input could (in principle and in accordance with the conditions set by Jackson) be expressed with complete accuracy. True, Mary1 would still lack the experience of seeing a red apple (and so her cognitive state would differ from that of Mary2 in this important respect), but she could say with complete accuracy exactly what Mary2 (or even herself, if she were to analyze the cognitive processing involved in having the experience) would report. Mary1’s description of what Mary2 would report would be correct, word for word, including descriptions of every facial muscle contraction involved in the body language used to describe the experience. If Mary1 could perform the analysis in the same time that Mary2 would take to report her experience (and she – Mary1 – was a good actor) she could sound just as persuasive as Mary2 when questioned about her “experience” of seeing the red apple.
Would the cognitive states of Mary1 and Mary2 differ? Absolutely. Would Mary2 be able to report anything about the experience of seeing a red apple that Mary1 could not also report? Absolutely not.
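The Mary1/Mary2 claim — different cognitive states, identical reports — can be rendered as a minimal sketch (the report string and the toy one-entry table are invented here; in the thought experiment the table would be Vast and cover every stimulus):

```python
# Mary2 reports from having had the experience; Mary1 reports from a
# lookup table derived, per the premise, from complete physical knowledge.

def mary2_report(stimulus):
    # Report produced by actually having the experience.
    return f"Seeing the {stimulus} is vivid and thrilling."

# Toy stand-in for Mary1's Vast lookup table (one entry here).
vast_table = {"red apple": "Seeing the red apple is vivid and thrilling."}

def mary1_report(stimulus):
    # Report produced without the experience, by table lookup.
    return vast_table[stimulus]

# Identical word-for-word reports, yet only Mary2's system entered the state.
assert mary1_report("red apple") == mary2_report("red apple")
```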
Well said Jim! I wish I’d read your comment before my previous reply!
As I noted above, I think the only reason we’re tempted to conclude there’s some uncrossable gap here is the legacy of Cartesian dualism. Many people let go of that notion intellectually, but struggle to do so intuitively. It’s pretty much the same struggle Dennett identified with the Cartesian theater, the notion that there’s a private show going on that no one else can see. Still vestiges of dualistic thinking.
The thing to consider is whether we’d be tempted to have this conversation if Mary was a sophisticated AI system, a robot. We might note practical difficulties in working out its states, but not ones that would exist in principle.
“Would the cognitive states of Mary1 and Mary2 differ? Absolutely. Would Mary2 be able to report anything about the experience of seeing a red apple that Mary1 could not also report? Absolutely not.”
I don’t think the problem is whether both Marys are able to accurately describe the objective facts about color processing in the brain. I think the thought experiment was getting at something else. As you appear to state, Mary #1 does not have the experiential knowledge that Mary #2 has acquired. Mary #2 knows something Mary #1 does not. That, I think, was Jackson’s point.
Suppose Mary1 and Mary2 are (also!) expert watchmakers; they can disassemble and reassemble a Rolex blind-folded, with the speed and sure-handed efficiency of a Marine working with an M16. They know all about the watch, its parts, the purpose of each part, how they interact, etc. No question about a problem or normal operation of the watch is beyond their ken. However, there is one particular repair experience with the watch that Mary2 has had that Mary1 has never had. Mary2 was once asked by a customer to replace the bent minute hand of his watch. Mary1 has replaced minute hands before, but never a bent one.
This of course is a trivial difference between the experiences of Mary1 and Mary2. It is certainly not as profound as the experience of seeing the color red for the first time. It is significant only insofar as it is the one and only difference between their entire libraries of experience and, incredibly, is the only experience needed to account for the difference between their brain states (which I think all respondents to this blog post, including Mike, would agree *are* different).
Does this mean that Mary2 has learned something that Mary1 has not? Well, I suppose the answer must be ‘Yes’ if “learned something” means that Mary2 has had some experience – however trivial – that Mary1 has not had. Furthermore, the answer must be ‘Yes’ because the amount of Shannon information needed to encode Mary2’s complete library of experience is (ever so slightly) greater than that of Mary1. But does this mean that Mary2 can explain the repair of a watch with a bent minute hand and Mary1 cannot?
I presume you see where I’m going with this; if Mary1 is still just as capable as Mary2 at explaining *everything* there is to know about Rolex watches, including this trivial difference in repair experiences, what’s to stop us from scaling up the complexity of the difference to the level of a color experience, given that Mary1 (and Mary2) know *everything* (Jackson’s word!) about the generation of a color experience? Or, as Mike likes to say, am I missing something?
Thanks Jim. Nice piece of analysis.
With regard to Jackson’s original thought experiment, the point is that while Mary will feel an involuntary thrill the first time she encounters the red apple, and may even find herself uttering the words she is fully capable of anticipating she would say (“So *that’s* what red looks like!”), by the conditions of the thought experiment as laid out by Jackson, she would nevertheless be unsurprised by this thrill response (involuntary though it is) as she is omniscient with respect to the operation of her sensory systems and the signal processing that goes on in her cognitive system. To answer Jackson’s question, she experiences a thrilling (involuntary and private) sensation but she is fully prepared to anticipate that response and has no need to posit the existence of some previously unknown dimension, subatomic particle, quantum state or otherwise modify her understanding of physics; she literally learns nothing new.
I agree, but with one caveat. I don’t think her experience would necessarily be private. It certainly would be today, but that’s a limitation of our current scientific knowledge and technological abilities. In principle, there’s nothing stopping us from someday having a mind reading device. For Mary to have the kind of knowledge required, she would almost certainly have that kind of capability available to her.
Thanks Mike. I understand your point and agree that in the future (or at least in principle) we will have the capability to know a great deal more about what’s going on in someone’s mind (“So, Jim, I see you’re thinking about Jennifer Aniston again.”)
Thanks Jim. It’s admittedly a scary thought.
What’s an even scarier thought is we may reach the point where a mind can be altered to think and experience what someone else wants it to think and experience.
Couple of quick points / questions, Mike.
1. What Wyrd Smythe said.
2. You once told me people frequently make the mistake of assuming that if all the details of neurological functioning were known, that an experience should be produced. I don’t remember how you said it, but I think I’ve got the meaning right. Here you seem to be saying that if one understood everything, that would indeed include the experience. What is the difference in these two thoughts? Is it that what can be learned externally, by observing a live organism, will always be incomplete, and that in this arbitrary case Mary is presumed to have knowledge that is complete?
3. I agree with Wyrd and others that the brain states associated with actually seeing red are necessary to have the experience. My question is, is it even possible to predict this state in advance? Because no brain ever only experiences the color red; it does myriad other things simultaneously, and has a state in any given moment that is probably based upon its unique history. So Mary’s first experience of red will be right on the heels of other thoughts, in the midst of previously ongoing processes, and her experience will be uniquely related to all the other things going on, right? So Mary would have to know the complete state of her body and brain to even have a chance at predicting this, right? This raises interesting questions I think:
3a. Does seeing the color red produce the same brain states in all individuals? In any two individuals even? I’m guessing no, but I’m no expert in this area. Human brains have similarities but are all unique at the end of the day, right? And if Mary has never actually seen red before, then something that has never happened before in her brain will need to happen. Is the response of the brain to a novel stimulus precisely predictable at the level of individual neurons and such?
3b. It seems that brain states might be like fingerprints. No two are ever alike. Will we ever have the same brain states twice? It seems like we won’t. This is just an interesting question I stumbled into thinking about this. And it seems like no two people will ever have identical brain states either. If this is actually the case then either we all feel subtly different about the same objective things, or maybe there are common patterns that accommodate differences at lower levels in the brain, so that two different brain states can yield the same experience. What are your thoughts on this?
1. I’ve responded to Wyrd’s comments.
2. I’m not sure what I may have said before. I’ll admit I’ve confused this distinction in the past (both in my own mind and in communicating about it), between having the experience and knowledge from the experience. No matter how much we study a system from the third person point of view, we’ll never have its experience.
But that’s a different statement than saying we’ll never know what it is experiencing and all its effects. Not being able to have the experience is simply a hard truth, but no more profound than the fact my laptop can never have the same state as my iPhone. Knowledge of the details of the experience comes from the fact that these are physical processes, ones with no special physics that would prevent full knowledge of the information being processed.
3. There’s no doubt it would be profoundly difficult. But if we’re going to bring in practical issues, then the whole thought experiment can be dismissed. We can’t ignore the practical difficulties of the experiment that challenge the conclusions we want and then cite other practical difficulties to dismiss the aspects we’d rather not deal with.
3a. It’s definitely unique in every individual. But if Mary has all the facts, then she has all the facts about those variances, as well as how it will affect her nervous system in particular. If she doesn’t have that, then she doesn’t have all the physical facts.
3b. Everyone’s brain state is, of course, unique. Still, we’re all humans needing to control human bodies. We’re all able to communicate by referring to experiences we have in common. In other words, there are enormous overlaps in those states. Again, if we think about this in terms of technology, my laptop is different from yours, even if we use the exact same model, because I’ve loaded different software, exposed it to different inputs, environmental pressures, and many other things. Nervous systems are much more complex and biology far messier, but that doesn’t change the principle.
Overall, as I noted in the post, it’s far easier if Mary just simulates the experience on her nervous system, but a lot of people cried foul at that. I think it’s still valid knowledge. But if we require that she do it the much harder way, it’s still possible in principle.
I’m probably starting to sound like a broken record in this thread, but I think the only reason we’re tempted to conclude otherwise is unexorcised remnants of intuitive dualism, even among people who have intellectually rejected dualism.
“Overall, as I noted in the post, it’s far easier if Mary just simulates the experience on her nervous system, but a lot of people cried foul at that.”
I think the reasoning there is it effectively allows Mary to see (or have seen) red. It’s creating the subjective experience, which is the same as letting Mary out of the room.
Accomplishing such a simulation basically ends the experiment by giving Mary the subjective experience.
Do you not draw a distinction between objective knowledge and subjective experience — that consciousness has an outside and inside aspect? The difference between the two is really all this experiment highlights.
Between objective knowledge and subjective knowledge, there is a difference in perspective, one of third person vs first person. Every perspective has its blind spots, but blind spots can be compensated for.
Sometimes yes, but sometimes no.
I guess my thoughts are reasonably aligned with people like Matti and Wyrd over here, though let me explain.
First it’s interesting to me that Frank Jackson developed a popular thought experiment to get people thinking about this stuff (as his occupation entails), but then decided “Hey, I don’t want people to continue treating me like I’m some kind of squishy substance dualist!” So apparently he’s getting his cake (which is fame), and trying to eat it as well (or be respected as a true naturalist). Sounds about right to me. Chalmers seems to have felt this heat as well, and so has rebranded his substance dualism into “property dualism”. Call it what you like Dave, but the title changes nothing.
Then there are people who propose modern consciousness theories under classifications like “computationalism” and “functionalism”. I consider “informationism” to be a more appropriate name given that they believe qualia exist when certain information becomes processed into other information regardless of substrate-based instantiation. If true then this would be unlike anything known to science, or apparently a second kind of stuff. Daniel Dennett would be a prime example of such a substance dualist, and his skills in the art of rhetoric make him a tricky one at that! Perhaps some day my “thumb pain” thought experiment will help straighten out who’s who in this regard, as well as push such science to more often explore positions which seem causal.
A central issue here is that philosophers and scientists seem to be trying to discover whether or not qualia exist as a second kind of stuff. Not only has this been ineffective in the past, but it shouldn’t become effective in the future. Instead I’d have scientists and philosophers adopt my single principle of metaphysics. Here it’s understood that to the extent that causality fails (as in the premise of supernatural qualia), nothing exists to discover. So they’d directly choose whether they’d like to be naturalists, or would rather like to be in a “causal plus” category. Two separate varieties of science should thus emerge to facilitate each position. Without substance dualists around in one of them, as well as the threat of getting pushed into the dualist variety of science given certain proposals, I suspect that things would work better than they do in today’s mixed club of science.
Regarding this thought experiment itself, if we define “understand” (which is to say, something to potentially “learn”) to exist as any given qualic experience, then Mary would by definition learn something new by finally gaining the qualia of color, or any other that she didn’t previously have. So here we have a position which is true by definition, and so should be less than enlightening. But is it useful to define “understand” in such a way? Not generally, I think.
I like to consider my understandings not as expansions of more and more data, but quite the opposite. Here they’re instead effective ideas which seem to work, and so have been reduced into concise graspable rules. For example you might tell a student that force equals mass times acceleration, though that statement itself shouldn’t provide such an understanding. To potentially “understand” as I like to define the term, the student should need to solve associated problems from this premise, or gain an effective reduction of what F=MA both does and doesn’t mean.
Anyway if that’s an effective definition for “understand”, then a given qualic experience should be something to experience rather than to understand. Either way I don’t consider this thought experiment to render qualia beyond the realm of causal dynamics given my metaphysics itself.
My own position is that qualia should exist mechanistically, possibly by means of the electromagnetic radiation associated with neuron firing. And even if this were to become experimentally verified and accepted in science, would “the hard problem of qualia” end? Nope. It should never make sense to us why whatever produces it, does ultimately produce it. It’s the same with gravitational, electromagnetic, strong, and weak interactions.
Attacking Jackson for the temerity of changing his mind, and all these accusations of substance dualism, Eric, seem like defensive reactions.
I will say this for Chalmers, he, at least, is honest and self aware about his own dualism. I don’t agree with it, but his version is compatible with mainstream neuroscience.
In any case, I think the hard problem only arises due to dualistic intuitions.
Oh come on Mike, I didn’t say anything against Jackson. What I said was that he was doing his job, but also must not have liked being perceived as a substance dualist. I wouldn’t like being perceived that way either, and even if I did enjoy any associated fame. Should I ever be able to affect the likes of Daniel Dennett with my own thought experiment, I’m pretty sure that they wouldn’t like that either.
So apparently Jackson has denied the point of the very thought experiment which gave him fame, and given that he didn’t want to be a dualist. And if generally accepted, I’m saying that he could simply decide not to be a dualist by means of my single principle of metaphysics. Here his thought experiment itself would remain kosher as such and so he’d be off the hook. Conversely my understanding is that Chalmers instead simply devised a “naturalistic dualism” contradiction and so kept his belief in two kinds of stuff. That’s different, or condoned dualism. And wow, is it true that his dualistic premise is supported by mainstream neuroscience? That was actually my own point, though in an unacknowledged way associated with ideas such as global workspace theory. How do you perceive modern neuroscience to support the dualism of Chalmers?
Regardless, should I ever become famous for using my thought experiment to display to the world that a quite standard position in cognitive science today does also happen to be supernatural, you’ll get no retraction from me. I’ve been a very strong naturalist since about the age of 12.
Regarding potential “hard problems”, don’t you consider there to be a “hard problem of gravity”? Well that’s all I’m saying regarding qualia. We humans can’t be expected to understand everything!
You and I have agreed to disagree on the position that qualia will result when certain information is processed into other information without specific mechanistic instantiation. Thus I’d hope for you to not think I’m attacking you personally when I generally propose this position to be dualistic, or at least if I’m not directing my assertions at you specifically. I don’t feel threatened when you state the converse. I’d simply hope for others to weigh in on my thought experiment itself. Does it make sense that qualia probably does exist as information processing alone? Surely you don’t mind if others here weigh in on this idea?
I’m very sorry for my commentary here Mike. You do so much for me, and then I come over and act like an asshole. Regardless of how it’s created, I’m experiencing the qualia of regret…
Oops. Crossed posts.
No worries Eric. We’re good. I’m grateful for the follow up.
I didn’t say Chalmers’ property dualism is supported by neuroscience, only that he reconciles with it, which is to say, it’s metaphysical add-ons that don’t contradict neuroscience. (I covered this in my post on Chalmers last year.)
On pre-deciding that you’ll never change your mind, that seems pretty dogmatic. I like to think there’s not a view I hold that can’t be changed by sufficient evidence or logic.
As we’ve discussed before, I think the hard problem of gravity was solved by Einstein.
On labeling computationalism and functionalism as substance dualism, that’s fine if you want to do it. Frankly, it seems like incoherent polemics, but if that’s what you want to say, knock yourself out.
I thought you took yesterday off. But I see you were still slaving over your keyboard. You didn’t offer a cogent rebuttal to my final remarks (September 6, at 10:59 am) where I noted your admission that “I do agree that Mary’s subjective experience is irreducible.” Something I’ve said in several previous posts. I was hoping, perhaps naïvely, that I hit upon one point where our minds may agree. I was especially hopeful after reading Wyrd’s last comment (September 7, at 12:50): “So you agree that, in the richness and vividness of the actual experience, Mary does gain new information.” Which, so far, has not been contradicted. But, then in your recent dialog with Mr. Gregoric it all seemed to evaporate. And back down the rabbit hole I go.
As I admitted at the outset, I’m a dilettante on this issue. Haven’t read much of the literature like you. However, after looking through the many comments, it appears to me that the thought experiment incorporates a fallacy of ambiguity. I think that’s what James Cross was actually referring to when he described one analysis as searching for contract loopholes. Many comments (mine included) apparently have substantially different assumptions about Mary’s knowledge. Not sure whether “irreducible” experience is in or out of that slippery concept. But, as I said initially, this is an interesting thought experiment. Thanks Mike.
Sorry Matti. I didn’t realize there was still an issue left to discuss with us.
As I’ve said numerous times in this thread, Mary’s experience is subjectively irreducible. But that doesn’t mean it’s objectively irreducible. It’s a matter of the blind spots within different perspectives. There are things inaccessible from a first person perspective that are accessible from a third person one. And the thing to realize about third person, it isn’t just one perspective, but a host of various perspectives we can take, which can compensate for the limitations of others.
To get an idea of what I mean by subjective but not objective irreducibility, consider something like a surgeon performing an operation. The surgeon is focused on the overall goal of the operation. But that surgeon likely has assistants, which may include anesthesiologists, nurses, and maybe even other surgeons specializing in areas the chief surgeon doesn’t know. The chief surgeon delegates to these people the work they’re trained to do. The chief receives the benefit of their work, without knowing many of the details. For him, the result of each assistant’s efforts could be said to be irreducible.
Of course, someone tracking all the details of the operations from outside could very well reduce any result by drilling down into the details of each assistant’s area. Not being embedded in the operation process itself, they could, in principle, understand every aspect of the operation.
We could look at other analogies, such as the fact that the software you’re using to read this depends on low level operating system and device driver software to handle many details it’s not designed to handle. For the web browser, the resolution of a hostname the user enters into an IP address is essentially an irreducible thing. It makes a call and gets a result (or an error code or something). But again, for someone studying the whole system, they can choose to drill down and learn how those results are computed.
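That call-and-result opacity is easy to show in code. A minimal Python sketch (the `resolve` wrapper is mine, not any browser’s actual API): from the caller’s perspective, a hostname goes in and an address comes out, with no visibility into the recursive lookups, caching, and retries behind the result.

```python
import socket

def resolve(hostname):
    """Return an IPv4 address for hostname, or None on failure.

    To the caller this step is effectively irreducible: name in,
    address out. The machinery behind it is invisible from here,
    though perfectly inspectable by someone studying the resolver.
    """
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

print(resolve("localhost"))  # typically 127.0.0.1
```

The same asymmetry holds for the brain analogy that follows: what is an opaque call from inside the system is an open book to an observer who can drill into the subsystem.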
In the case of the brain, there are many things the overall operation of the system depends on various subsystems for. For each of those things, the global system can’t drill down into the preparation of that result. Which makes it irreducible. But not to someone studying the operation of that brain from the outside, particularly not to someone who has a full understanding of every aspect of its operations.
I agree that there appear to be a variety of interpretations in this thread on what kind of knowledge Mary has. My interpretation is based on what Jackson claimed the thought experiment demonstrated. Is it possible for any human being to have the detailed level of knowledge Mary is really purported to have? Maybe in principle. In practice? I doubt it, at least not without technologies far in advance of anything we currently have. But, again, if we’re going to require practicality for this thought experiment, it doesn’t even get off the ground. Well, except possibly for the robo-Mary scenario, but even that requires technology we don’t have yet.
You said, and pardon my pedantic manner here:
“I do agree that Mary’s subjective experience is irreducible, but only from the subjective perspective. From the objective perspective, it is reducible.”
I apparently wrongly assumed that meant the same as my many comments that Mary’s subjective experience is causally reducible but not ontologically reducible. As I tried to explain, Mary’s conscious experience is caused by and a manifestation of her physiology, her physical being. But, as I said, causal reducibility does not logically require ontological reducibility. Thus, the subjective experience is irreducible yet causally explainable by objective physical processes. But now apparently an irreducible subjective experience is in fact reducible to an “objective perspective.” Whoa!
I understand “subjective” to mean a first-person ontology—something that only exists as experienced. Which is contrasted by “objective” which has a third-person ontology—something that exists independently of experiences.
Well I don’t have any way to sort out your long explanation, so I’ll simply stand by my claim that we are mired in a fallacy of ambiguity. Actually perhaps two such fallacies, one about Mary’s knowledge and one about what subjective and objective mean. I’m going to go lie down now.
Ambiguity is always a problem. I’ve tried to be as clear as I can. But I don’t subscribe to the idea that consciousness is ontologically irreducible. I think the evidence from brain injury cases precludes it. People can lose aspects of their experience. But it seems to be an area where we just disagree.
Hope you had a nice nap.
“Is it possible for any human being to have the detailed level of knowledge Mary is really purported to have? Maybe in principle. In practice? I doubt it, at least not without technologies far in advance of anything we currently have.”
What technologies do you have in mind? I believe we’ve agreed that actually simulating the experience or memory of seeing color counts as physically seeing color, so doesn’t count towards helping Mary objectively understand color?
So what other technologies might apply in helping Mary gain objective knowledge? Jackson specified she is educated over a B&W TV — what technology would make that better?
“Well, except possibly for the robo-Mary scenario, but even that requires technology we don’t have yet.”
What about an ANN trained strictly on B&W pictures? If then fed the same pictures in color, would it be experiencing new input? (Assume the inputs were always capable of discriminating color.)
On technologies, I don’t know. Whatever is necessary to meet Jackson’s premise of Mary having all the physical information. I could speculate, but it would be pure speculation, and the resulting debate wouldn’t resolve anything. If such technology is impossible, then so is having all the physical information.
On the ANN example, it would be a new experience. Whether it would convey new knowledge would depend on how thoroughly the ANN understood its own operations.
(It’s worth noting that color discrimination isn’t just in the inputs. It takes a lot of processing in advanced layers for it to happen.)
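One concrete way to see why B&W inputs matter here: a grayscale projection collapses distinct colors to identical values, so a system trained only on such inputs never even receives the differences it would need to discriminate. A toy sketch (the luminance weights are the standard ITU-R BT.601 ones; the two color triples are mine, chosen to collide):

```python
import numpy as np

# ITU-R BT.601 luminance weights, a common RGB-to-grayscale projection.
WEIGHTS = np.array([0.299, 0.587, 0.114])

def to_gray(rgb):
    """Collapse an RGB triple (components in 0..1) to one gray level."""
    return float(np.dot(np.asarray(rgb, dtype=float), WEIGHTS))

# Two plainly different colors: a dim red and a dimmer green.
reddish = [0.587, 0.0, 0.0]
greenish = [0.0, 0.299, 0.0]

# Both collapse to the same gray level (0.299 * 0.587 either way), so
# no downstream processing of the grayscale signal can tell them apart.
print(to_gray(reddish), to_gray(greenish))
```

This is only the input side, of course; as noted above, the discrimination itself still requires processing in later layers.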
“Whatever is necessary to meet Jackon’s premise of Mary having all the physical information.”
Jackson is pretty clear it entails facts that can be learned from books or video (B&W, of course). I believe we’ve agreed mind-altering technology isn’t in play. So what possibly remains?
What sort of knowledge do you think Mary can’t pick up from a book?
“On the ANN example, it would be a new experience. Whether it would convey new knowledge would depend on how thoroughly the ANN understood its own operations.”
To me “new experience” and “new knowledge” are nearly the same thing. The keyword being “new”.
“(It’s worth noting that color discrimination isn’t just in the inputs. It takes a lot of processing in advanced layers for it to happen.)”
Indeed! And don’t you think much different processing goes on in those layers with color inputs than with B&W ones? Processing that had never occurred before?
It seems to me that substance dualism remains a metaphysical question rather than something to prove one way or the other. I’m a monist because I like that something exists to potentially understand here. It’s convenient. To the extent that this isn’t the case, it seems to me that science itself becomes obsolete! So I presume that qualia is produced by means of causal stuff given my metaphysics, and possibly the EM waves associated with neuron firing.
Or am I missing something?
I think you’re missing the fact that third person knowledge is also acquired through qualia. It’s qualia from a range of different perspectives, but qualia nonetheless.
Third person qualia, Mike? You mean if science were to establish an understanding of how to produce qualia, as well as produce it experimentally so that something experiences it, then its associated observations would provide “third person qualia”? Or if not, then what do you mean by this term?
If that is what you mean however, I don’t see how this challenges my assessment. The point is that we can define “learning” to happen whenever we experience new qualia, in which case she will by definition learn when she sees color. And I guess she wouldn’t have known all there is to know about color either. Or we could define it differently than experienced qualia and so she might not learn anything new with color experiences. As I’ve said, I prefer to define learning in terms of reductions, but the whole thing merely depends upon the definition of “learn”. Substance dualism would be another matter. Is it causally produced?
Furthermore consider my point that scientists needn’t continue trying to figure out whether or not qualia exists as another kind of stuff. I consider this a waste of time. Instead I’d have them realize that naturalism brings the possibility for understanding, whereas supernaturalism does not. So we should simply decide which of these we’d like to be and so continue from that perspective.
What I mean is something far more basic. Right now, reading this reply about third person information, you’re having qualia.
This is something we sometimes forget. Objective information is subjective information, just from a particular perspective. We only experience objective information through our subjective experience, that is, through qualia. We only experience objective information about ourselves through the mirror of systems that take in that objective information and allow us to experience it subjectively.
All of which is to say, if the requirement for learning is that qualia be in the mix, it always is.
That said, it’s worth noting that there is nonconscious learning, which will involve no qualia.
Your response strikes me as “preaching to the preacher”! My dual computers model is all about the brain existing as a non-conscious machine which creates the subjective medium by which existence itself may be perceived.
Anyway yes, in an epistemic sense we do often speak of third person information, even if in an ontological capacity that can only exist through the medium of first person qualia. Reading about thumb pain will bring qualia, even if not the variety associated with a whacked thumb itself.
My objection to the Mary’s room thought experiment does remain — it’s all a matter of defining the term “learn”. Either the qualia of experiencing color is required to know all there is to know about color, in which case Mary could not have had such an understanding before her release as claimed in the thought experiment, or such qualia isn’t required in which case she could (hypothetically) know all there is to know about color and thus seeing it would merely be something phenomenal rather than mandate reductive learning about color. Right?
Then furthermore I claim that trying to demonstrate that qualia is natural or supernatural is a misguided quest. Instead I believe that we must decide for ourselves whether or not we’re naturalists, which is to say, believe that causality doesn’t fail in the end. Then we can assess various models for qualia’s creation to be natural or supernatural given what they imply, though perceptions of qualia itself will depend upon a given person’s metaphysics.
Regarding “non-conscious learning”, yes that would need to involve a machine which experiences no qualia. This could be one that’s humanly built, or given my dual computers model, be a full human brain. Here there is no agency, though an agent does tend to be created as well, or a teleological dynamic which feeds the brain with effective agency.
The question is, assuming Mary has a full understanding of the discrimination process we refer to as “color”, including all its affective effects and triggered associations, including the short term and long term effects it might have on memory and behavior, what knowledge does she acquire by actually engaging her color discrimination circuitry? What does she know about either herself or the world that she didn’t know before? And where was that information prior to it being revealed to her, or what was it constructed from?
I was actually trying to agree with you there, and from the premise that “learning / knowledge” could be defined to not require a given bit of qualia. I’ve thought some more about this however and it may be that you won’t agree with the following.
This time I’ll begin rather than end with the assertion that anyone can decide whether they believe that all things function by means of ontological causal dynamics. This includes Jackson deciding that he’s a naturalist as well.
Next, what do we mean when we talk of “knowing” or “learning about” a given quale? We generally mean “experiencing it”. If someone had never experienced color before, then when they do we’d say that he/she had thus learned something new.
So in that sense does Mary, who unlike a regular person has a perfect physical knowledge of her own physiology for the production of the qualia of color that she’s never experienced herself, learn anything when she does experience it? Yes, I think this would be useful to say. We wouldn’t say that she knew fear, hatred, or the taste of lemons without associated qualia, so this should be the same regardless of her perfect mastery of human anatomy and such. Furthermore from the position that qualia exist by means of causal dynamics (and I guess you could say the right non-conscious information processing if you like), that’s how she’d gain such an education.
Here she knows first hand how colors make her feel, or something which was formerly non-existent for her.
That “information” didn’t exist yet because it hadn’t yet been produced. You could say it was constructed by means of processed information, while I’d say mechanical dynamics of some kind.
Mike: “I like to think there’s not a view I hold that can’t be changed by sufficient evidence or logic.”
Matti: “I was especially hopeful after reading Wyrd’s last comment (September 7, at 12:50): ‘So you agree that, in the richness and vividness of the actual experience, Mary does gain new information.’ Which, so far, has not been contradicted.”
Interesting dissonance between those two…
Wyrd, you seemed to simply contradict my statement. Was there a point in particular you wanted me to address?
Just remarking that, in the eight or so years I’ve known you, I can’t recall anyone’s argument ever really changing your view. (It certainly seems *I* haven’t moved your needle any in all that time.) Here you seem to hold your original view despite having essentially agreed with Jackson’s point that Mary gains new information (which is what I think Matti was getting at).
I’ve discussed topics on which I’ve changed my mind many times on this blog, regarding theories of consciousness, the evolution of religion, and other areas.
But you keep bringing up this thing about you not being able to convince me to your positions in particular. So, when have I ever changed your mind to one of my positions? There are plenty of things we do agree on, but no two intelligent people are going to agree on everything.
I don’t know where you see me agreeing with Jackson’s point. I took a guess below but apparently that wasn’t it.
I was referring to your debates on this blog and others. And, yes, my opinion has evolved due to our conversations. I’ve gone from rejection of computationalism to skepticism, for one. I continue to explore MWI (rather than discount it), for another. And there are points about SF history I changed after our conversations. For that matter, my giving The Expanse a second chance was in large part due to your interest in it.
You agree with Jackson when you replied to me that, “(She would also know that, due to her deprivation, her own associations would be sparse compared to a regular person’s.)”
That her own associations are, at best, “sparse” compared to someone who is familiar with seeing color is what Jackson is getting at. Mary can have perfect knowledge, even to the point of some intuition about seeing color (I’ll concede that point without agreeing with it), but she still is not in possession of full knowledge of seeing color.
Imagine Mary’s sister, Nora, who knows everything Mary knows, but also has seen color all her life. Doesn’t Nora know something more than Mary?
Sort of a tangential question that following all the discussion here made me wonder. Do you think brain states, if we ever had all the information about a brain’s state at time zero, are precisely predictable going forward in time? This sounds like Newton’s concept of the universe in a way. Or at least what we’d call the Newtonian view. That view proved a fiction in a sense because even if the world wasn’t quantum, there were still limitations on what we could actually predict (like the three body problem, only imagine it expanded to an entire galaxy or something).
So I’m curious if you think the evolution of brain states from one time to the next is predictable like Newtonian dynamics? Or is it a chaotic system like the weather? Or is there even the possibility for short-lived quantum events to have far-reaching effects?
Part of the reason I ask is that it may not be the case that we can predict exactly what our experience will be to anything, which makes the thought experiment with Mary even trickier to analyze. But just more generally, I’m curious if you think the sort of processing the brain does is wholly described by current conceptual schemes or if it might exhibit new dynamics we’re unfamiliar with. What kind of system is it exactly?
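The sensitivity at issue can be sketched with a toy chaotic system. This is purely illustrative (the brain is vastly more complex than a one-dimensional map); it just shows why an arbitrarily small uncertainty in the initial state ruins long-range prediction in a chaotic system:

```python
# Sensitive dependence on initial conditions: two logistic-map
# trajectories that start almost identically eventually diverge
# completely, even though the dynamics are fully deterministic.

def logistic(x, r=4.0):
    """One step of the logistic map in its chaotic regime (r = 4)."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    """Iterate the map, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)  # perturb by one part in ten billion

# After a few steps the trajectories still agree to many decimal
# places; after a few dozen steps they are effectively uncorrelated,
# so long-range prediction from imperfect knowledge has failed.
print("step 5 difference: ", abs(a[5] - b[5]))
print("step 50 difference:", abs(a[50] - b[50]))
```

Whether the brain works like this at the scales that matter is exactly the open question being asked; the sketch only shows what "chaotic like the weather" would imply.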
Nothing I’ve seen in mainstream neuroscience implies we’re going to need new physics to understand the brain, or that an accounting in terms of quantum physics will be necessary. Given the environment, the estimates I’ve seen for how fast quantum effects collapse in the nervous system are orders of magnitude smaller than the time it takes for the most basic molecular operations of proteins. Of course, until we have a full accounting, who knows what might be found? But it’s not where the data is currently pointing. Even Christof Koch, who is prone to panpsychism, doesn’t go there.
(Of course, if you dig, you will find papers by someone claiming just about anything. I’m making my judgment based on what the most experienced people in the field choose to include in their textbooks and general neuroscience books.)
In terms of predicting the brain, that’s obviously a difficult question. If it hinges on the action of every molecule, then complex system dynamics say we’ll never be able to accurately predict what the brain does. But there are a couple of things I think that ameliorate that difficulty.
First, neural circuits have a lot of repetition and redundancy to minimize the stochastic effects.
Second, it’s worth noting that the brain itself can’t predict its own actions due to these limitations. It also can’t reliably repeat many of its operations. Despite this, it somehow manages to reliably meet the demands of managing the body and responding effectively to the environment.
This makes sense if you think about it from an evolutionary perspective. An animal that reacts to the smell of food or the sight of a predator with rampant indeterminism probably isn’t going to pass on its genes. And a brain that doesn’t effectively manage homeostatic parameters, is probably headed for the same result.
In other words, as we scale up, things may become more predictable. Of course, the brain minimizes its stochasticity, but it doesn’t eliminate it. Evolution seems to have settled on allowing a certain amount of noise contamination in return for efficiency. The actual balance between these factors varies from circuit to circuit, depending on its individual reliability or performance requirements.
And as many like to point out, there may be cases where the stochasticity even serves a purpose, such as when an animal is exploring. Although careful observation of many animals seems to show that their exploratory behavior tends to follow set patterns. But it may enable novel behavior in borderline or highly uncertain situations, providing an opportunity for the animal to learn.
Does that mean that the actual experience of Mary seeing color for the first time may have variances from what she is able to predict? Sure, but as I’ve noted before, given her knowledge, she would understand the range of the possible variances that might occur. I suppose someone could insist she won’t know exactly which way it would go for her first experience, but we’re really scraping the bottom of the barrel here in trying to identify “knowledge” she would learn. It seems pretty far from the conclusion Jackson originally took away from the thought experiment. We could also say she’d “learn” something every time she looked at the processing in an individual brain from a third person perspective, even if its processing is completely within the ranges she’s come to expect.
As I noted to Matti, a lot of people seem to be getting hung up on the difficulty of Mary acquiring the information as posited by the thought experiment. I agree, it is difficult. Perhaps never possible in practice. (Although given its ignominious history, I’m very careful with the “impossible” label.) But if so, she doesn’t have all the physical facts, and the thought experiment is moot.
Understood. I don’t place a lot of stock in where the data point right now in neuroscience, at least in terms of it offering a definitive or final answer to the question of how the brain and consciousness work, because it seems very early in our understanding of this exploration. It will probably take a while before we exhaust the questions we can ask with what we know today, and then we really don’t know where the answers will take us. One thing leads to another and we have to follow the path, of course, but I wouldn’t hazard a guess as to where it is going, or how close or far we are from actually arriving.
I didn’t explicitly bring up quantum effects. I was curious about your thoughts on such an astonishingly complex system as the brain. It seems at least as complicated as a planetary weather system, in a way. I really don’t know one way or the other on the quantum effects, but I think they’re in there somewhere because they just can’t be excluded from the scale of proteins and biochemistry. It doesn’t have to be a quantum computer qubit thing. It could be tunneling effects in certain interfaces, electron energy states or reservoirs that are only permitted by quantum mechanics, etc. So while I’m sure there are biochemical effects, I’m also quite certain we’ll find that electromagnetism, certain quantum effects, certain EM field effects, etc., all fit too, or are worked around in a very clever way if not used.
I’m imagining a symphony where all these possible architectures work together I guess.
A final effort: Given Mary’s perfect objective expertise on color, per your standard of being able to intuit the experience, if given two pieces of paper identical in every way, except that one is red and one is blue, would she be able to tell which was which visually? Does her perfect knowledge allow her to distinguish colors?
Note that the color on the papers is carefully prepared such that luminance and saturation are identical. The only observable difference — to anyone — is the chroma value.
You mean the first time Mary sees colored paper? Yes, she’d be able to tell, because she’d understand the effects on her nervous system each would generate. (She’d understand this far better than we currently do.) For instance, her nervous system, being a primate one, would innately notice the red color more than the blue one. She’d be expecting it. (Assuming her nervous system, due to its deprivation, could even make a discrimination initially. She’d also know whether and to what extent to expect that.)
She would be more dependent on innate effects at first rather than learned ones. (Maybe that’s what you were referring to above?) But she’d also predict which associations would form as she experienced color in more situations. She’d know in advance about them.
The distinction we need to be careful of here is between knowledge and ability. If she knew all the physical facts, she’d have the knowledge. But she’d probably have to develop the discrimination ability. She’d know what to expect at each point in that development.
“For instance, her nervous system, being a primate one, would innately notice the red color more than the blue one.”
What do you mean by “notice” and how are you sure that’s a guaranteed effect? (For instance, I seem to very much prefer the cooler end of the color spectrum.)
Wouldn’t Mary be guessing? Specifically, isn’t it true she wouldn’t know?
Consider why red lights and stop signs are the color they are. Why are error messages red? Because red is innately vivid to us. It calls primate attention to ripe fruit. It’s a reflexive reaction in our nervous system. Maybe it isn’t true for you. Maybe you’re neuro-atypical. Or maybe you’ve had life experiences that altered that association for you. But for most people and primates in general, red stands out.
And Mary, having “all the physical information” would know the effects on people in general, and she would know her own nervous system in particular at that point in time well enough to know what effects it would have. She would know it to a far greater extent than our current scientific understanding. Again, if she doesn’t, then she doesn’t have all the physical information.
Red also has the longest wavelength, so it can be seen from a distance better than colors with shorter wavelengths. Yellow is the next easily distinguishable color with a long wavelength, so it’s come to mean caution. Red also stands out well against most natural backgrounds (trees, water, sky, ground). There may be some deep-rooted associations with fire and danger.
The thing about ripe fruit is that red should mean “go” then. (In fact, I read about an experiment where researchers wearing red, green, or blue clothes fed monkeys, and the monkeys showed some aversion to the ones wearing red. So monkeys may have the same aversion association.)
Regardless, Mary still has never associated the brain states of seeing color with her perfect knowledge of its effects. Mary might guess (maybe even fairly accurately), but she can’t know until her brain actually has the brain states of seeing color. Only then can she connect experience (i.e. those particular brain states) with her perfect knowledge (other brain states).
(The problem with dualism is it makes one think of brain ectoplasm. The only dualism here is between subjective experience and objective knowledge. Consciousness from the inside rather than the outside. I think you’ve acknowledged this type of duality in the past?)
I’ve talked about epistemic dualism before. But as I noted to someone else on this thread, in the past I haven’t always been clear about the distinction between knowing the content of an experience and having the experience, not just in communication but in my own mind. Peter Martin actually made me realize the fuzziness in my thinking there. (A case where someone did change my view, although not immediately since it spurred additional reading.)
I can’t parse that reply, so I’ll jump back to the original point.
But let’s give Mary orange and yellow, or cyan and blue, or whatever to avoid red. It could be beige and pink, so long as the only difference is the chroma value.
Isn’t it true that Mary would only be guessing at which is which? And, in guessing, would be capable of being wrong? In contrast, people who’ve experienced color know, don’t guess, and would never make a mistake.
Mary, once the colors are experienced and she’s learned which are which, would also know.
I’m fatigued with all this sub-thread hopping, so I’m replying to everything down here.
On those changes, moving from rejection to skepticism seems pretty nuanced, and your exploration of MWI doesn’t appear to be moving the needle. I’ll take the sci-fi stuff though. I’d also point out that you did sharpen my understanding of special relativity, and I’m sure I’ve picked up (despite myself) some mathy stuff over the years.
Of course experience conveys information, under normal circumstances. But Mary knows what innate associations she has, and knows the common learned associations regular people have that she doesn’t have, and what they mean. In the case of her sister, she knows all the associations Nora has, and what they mean. As Mary experiences color, she’ll begin to form those associations, but she’ll already know what information they contain. So on the actual question of whether Mary gets new knowledge, she doesn’t. (She does get new pathways to that information.)
Jackson, like most authors of these types of thought experiments, didn’t take his own premise seriously enough. If he had, he might have realized the issues in 1982 instead of in 2008. Any limitation on the physical knowledge means Mary doesn’t have all the physical knowledge, and the original takeaway is nullified. (I know you don’t agree with that takeaway. Just pointing out what’s needed for it.)
On “new experience” and “new knowledge”, I find definitional arguments unproductive. It’s a new stimulus but conveys no new knowledge if the system already knows all the effects of that stimulus ahead of time.
On all the other colors, we can keep looping on this forever. If Mary has all the physical knowledge, then she knows the effects of all those colors on her nervous system. No, I don’t know what all of them would be. She has knowledge I don’t possess. (I, unfortunately, don’t have all the physical knowledge.)
“Mary knows what innate associations she has, and knows the common learned associations regular people have that she doesn’t have, and what they mean.”
Agree. Added emphasis mine — after she experiences color she does have those associations.
“In the case of her sister, she knows all the associations Nora has, and what they mean.”
Agree. The difference is that Nora has those associations; Mary does not.
“So on the actual question of whether Mary gets new knowledge, she doesn’t. (She does get new pathways to that information.)”
Disagree. Those new associations include subjective phenomena, and I don’t agree that even perfect knowledge entails actual subjective experience. It seems for you it all but does entail subjective experience? You feel it’s an easy mental jump from objective knowledge to subjective experience?
I suppose this is the bottom line, then. We disagree on what objective knowledge entails.
“It’s a new stimulus but conveys no new knowledge if the system already knows all the effects of that stimulus ahead of time.”
I think that’s a contradiction in terms. If it’s new to the system, it can’t be known ahead of time.
“If Mary has all the physical knowledge, then she knows the effects of all those colors on her nervous system.”
This, I guess, is where your technology comes in. Let’s grant that Mary simulates her nervous system perfectly from a scan and knows exactly how her brain will react when she sees color.
When she actually sees color, how does she know what her brain is doing? Let’s grant more technology; she has a real-time scanner that can compare her brain state to the anticipated brain states, so this machine tells Mary she’s seeing red. (This is really no different from someone just telling her that she’s seeing red. Or using a color meter.)
But how would Mary, on her own, be able to know the specific brain states she’s having? To be able to say, this thing I’m experiencing (her brain states from the inside) maps precisely to this other thing I know about (her brain states from the outside)?
I was going to start a new thread, but I won’t make you jump. New thought:
Mary’s experiences before involve only access consciousness. Her experiences after involve both access and phenomenal consciousness. Again, something is added.
On entailing subjective experience, I’ve noted many times that a third person perspective can’t have the first person experience. But the overall question is about knowledge, and since we’re talking about a physical system, there’s no reason in principle that someone with all the physical knowledge can’t have all the knowledge that normally comes from subjective experience.
As far as I can see, to conclude otherwise is to postulate that there’s hidden information somewhere. If so, where is it? It implies that a total knowledge of the physical system doesn’t include it. I don’t see how we go there without either, a) postulating some kind of strong magical emergence, a sort of generative dualism, or b) outright dualism of some kind or another.
On how Mary would be able to know her own brain states, she obviously under normal circumstances wouldn’t. But she has all the information and would know what those brain states will be, as well as what mental effects it will cause that she’ll be able to introspect. Again, if she’s wrong about that, then she doesn’t have all the physical information.
On access vs phenomenal consciousness, I think phenomenal consciousness is access consciousness, from the perspective of inside the system. I don’t see anything in phenomenality that can’t be accounted for in terms of access. The idea of an independent phenomenal consciousness is something people keep trying to find, but I think they’re looking for a contradiction. It posits a sort of anoetic consciousness, a consciousness that is unreportable and unknowable. A better term for the described state is “unconscious”, or maybe “preconscious.”
“I’ve noted many times that a third person perspective can’t have the first person experience.”
Indeed, which, to me, agrees with my point. Then you continue:
“But the overall question is about knowledge, and since we’re talking about a physical system, there’s no reason in principle that someone with all the physical knowledge can’t have all the knowledge that normally comes from subjective experience.”
Both Jackson and I disagree (from the end of “What Mary Doesn’t Know”):
“But, as we emphasized earlier, the knowledge argument claims that Mary would not know what the relevant experience is like. What she could imagine is another matter.” [emphasis mine]
Until actually seeing color, any amount of knowledge about what that would be like involves imagining or intuiting or guessing or whatever you might call it. It cannot involve certain knowledge until Mary physically encounters the facts of that knowledge (i.e. the actual experience of color).
“As far as I can see, to conclude otherwise is to postulate that there’s hidden information somewhere. If so, where is it?”
In Mary’s future. It’s not “hidden” in any mysterious sense anymore than a TV show we haven’t seen yet is “hidden” from us. We might know all about TV shows and be very capable of guessing what that TV show is like, but we can’t know that one until we watch it.
[I deleted paragraphs here; an analogy that “perfect knowledge” of a TV show involves knowing all the production data (script, actors, set, costumes, etc) in contrast to the rich and vivid experience of watching the show, but the fact of new system states isn’t really the point of contention. It’s that those new system states don’t represent “knowledge” in your view, so nevermind.]
“But she has all the information and would know what those brain states will be, as well as what mental effects it will cause that she’ll be able to introspect.”
And that is the core of the disagreement. I don’t think prediction, even highly accurate prediction, is the same thing as actual experience. You don’t either. You agree Mary experiences new brain states — but if I follow, you don’t see those new brain states giving Mary any new information. That seems a contradiction to me.
“I think phenomenal consciousness is access consciousness, from the perspective of inside the system.”
We’re pretty close together on that one, and I think we’ve also agreed the PC/AC distinction is a handy way of viewing two modes of the same system.
My point is that Mary only has the AC mode, whereas Nora has both AC and PC modes. Mary, upon her release, gains the PC mode.
I doubt that moves your needle on the claim that perfect knowledge would anticipate all aspects of the PC mode (and I still think that’s a contradiction).
All that’s been said here, by myself and others, none of it rises to your standard of “sufficient evidence or logic”? None of it has been at all persuasive?
The issue is whether knowledge of the experience, as distinct from the knowledge it normally provides about oneself or the world, is a meaningful concept. Pretty sure we won’t solve it here.
Wyrd, cycling over the same arguments repeatedly in marathon fashion doesn’t provide any more evidence or logic than they did on their first iteration. Have you been swayed by my arguments? Or those of Jim Greggoric? What about those of Daniel Dennett, Patricia Churchland, Keith Frankish, or Peter Carruthers?
Do me a favor and let go of this thing about how superior your arguments are and how unreasonable I am for not accepting them. Everyone thinks that about their own arguments. Airing your version of it isn’t persuasive.
You’re over-stating what I offered; it’s not about superiority, mine or others.
“The issue is whether knowledge of the experience, as distinct from the knowledge it normally provides about oneself or the world, is a meaningful concept.”
It seems a case of before and after data, and it’s hard for me to imagine a scenario where that isn’t meaningful, so this is definitely the sticking point.
From what appear to be your last comments of September 8 at 8:55 pm, and within that very cogent analysis you say: “I think that’s a contradiction in terms.” That, my friend, sums it up perfectly.
I’m still dazed over the proposition, now clear to me, that an admittedly “irreducibly subjective” experience can apparently become “reducible to an objective perspective.” That seems to be the very heart and soul of an oxymoron. I tried to argue by analogy at one point that you cannot buy more and more apples (objective knowledge) and then expect your pile of produce to suddenly turn into oranges (subjective experience).
But maybe we should take a step back and see what might be going on here. Frank Jackson devised this delicious thought experiment as a way to demonstrate that physicalism was erroneous. He was wrong and he admitted it. It does not prove that. As John Searle and others have clearly pointed out, conscious experiences certainly can be (and most likely are) causally reduced to a complex of objective physical processes—physiological processes that, in time, can be understood. But the subjective experience itself, highlighted by Jackson’s thought experiment, still presented a residual problem. However, some philosophers wisely pointed out, and this is important, that the fact that consciousness is caused by physical processes does not logically require that the first-person subjective experience is likewise reducible. In fact, that insight—that the subjective experience is causally reducible (which is what undoes Jackson’s argument) but not ontologically reducible—also avoids substance dualism and, as John Searle argues, property dualism as well. Although I think Mike will disagree with the latter if not both. And I think John Searle is right on this. Subjective experience is not a separate thing, it’s a process—a state the brain can be in.
However, and I’m sorry for the long digression, perhaps what is going on here is that having irreducible subjective experiences still out there bothers some physicalists. And I suspect that is because they cannot accept that this analysis really kills off for good the ghost of René Descartes and his dreadful dualism. I think it does. As I said previously, our consciousness is not separate from our physiology. It’s a state that our brain is able to achieve which happens to produce an irreducible subjective experience. That’s not dualism any more than digesting my breakfast is dualism.
Thanks, Matti, obviously I quite agree this has nothing to do with dualism (other than the Yin-Yang of objective versus subjective) and certainly isn’t about disproving physicalism — that’s why I used that neural net example. No magic sauce there.
“Subjective experience is not a separate thing, it’s a process—a state the brain can be in.”
Agreed. It’s a process we don’t understand right now, but (assuming physicalism) necessarily a process we can (probably) eventually figure out. I say “probably” because, as Jackson points out at the end of his “Epiphenomenal Qualia” paper, we might be in the position of the sea slugs, “And there might also exist super beings which stand to us as we stand to the sea slugs. We cannot adopt the perspective of these super beings, because we are not them, but the possibility of such a perspective is, I think, an antidote to excessive optimism.”
Jackson was referring to “excessive optimism” regarding our ability to figure out “epiphenomenal qualia” since that paper also suggested subjective experience was epiphenomenal — Jackson changed his mind about that, too, but I think the reasoning and caution about excessive optimism is broadly apt. It’s possible (as Michael suggests above) that brains, like weather and for the same reasons, are too physically complex for us to ever truly understand. We’ll get very good at understanding them, but never perfect.
I agree the problem can be framed entirely in terms of system states. New experience necessarily involves new system states. Which I know Mike agrees with; I think it’s that he doesn’t see these new system states as new knowledge.