The function of color

In the history of discussions about consciousness, there have always been ideas that some aspects of human experience are irreducible to physics. Colors have long had a special place in these discussions. During the scientific revolution, colors lost their status as objective properties in the world, with people like Galileo relegating them to secondary qualities dependent on the observer, similar to taste, odor, or heat.

It didn’t take long for people like John Locke to begin wondering: does this mean that my red might be different from yours? Maybe when I see a red apple, my experience is what you have when you see green things, and your experience of the red of the apple is my green. We never notice the difference because we both look at the apple and call it “red”. In other words, we could have a difference in conscious perception of color that leads to no difference in behavior.

This has long been seen as a challenge for functionalism, the idea that mental states are defined by their causal roles. If such a scenario is possible, the argument goes, then it seems to indicate that there’s something to color beyond the functionality, and possibly beyond physics.

Is there any necessity to red things looking red, or green things green? What exactly is red anyway? Or yellow, green, or blue? In short, what, if anything, is the function of color?

I think when pondering this, it helps to back up and think about the causal chain that leads to color perception, and then the downstream causal effects that result.

On the upstream side, the surfaces of objects absorb some wavelengths of light while reflecting others. When that reflected light enters our eyes, it excites photoreceptor cells, the cones, that are sensitive to those wavelengths. The pattern of excitations results in patterns of nerve spikes being transmitted to the brain, which interprets that pattern as a particular color. Colors commonly map to specific wavelengths.

Spectrum mapping particular hues to particular wavelengths of light, with blues around 440 nm, greens around 520, and reds around 620.
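
To make that mapping concrete, here’s a toy sketch in Python (the band edges are rough conventions I’m assuming; as the rest of the post argues, real color perception depends on far more than wavelength):

```python
# Toy lookup based on the spectrum figure above (blues around 440 nm,
# greens around 520, reds around 620). Band edges are rough assumptions.
def hue_name(wavelength_nm):
    """Very rough mapping from a wavelength in nm to a common hue name."""
    bands = [(380, "violet"), (435, "blue"), (500, "green"),
             (565, "yellow"), (590, "orange"), (610, "red"), (750, None)]
    for (lo, name), (hi, _) in zip(bands, bands[1:]):
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible range"

print(hue_name(440), hue_name(520), hue_name(620))  # blue green red
```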

At this point, it might be tempting to wonder if Galileo was wrong. Couldn’t we regard the reflective properties of surfaces as objective colors in the world? The problem is that this doesn’t map the same for all observers. For example, most humans are trichromats, with three types of cone cells, enabling the most common color mappings. But some people are color blind, able to perceive fewer colors; they are often dichromats, having only two types of cones. And while primates are typically trichromats, most mammals are dichromats. Many other animals are tetrachromats, having four types of cones. Some have even more.

(Interestingly, a bull in a bullfight doesn’t charge at the matador’s cape because it’s red, but because the matador is moving it around. The red color seems to be for the audience’s benefit. Similarly, a lot of the artificial coloring in pet food is more for the benefit of pet owners than the pets themselves.)

And that’s before getting into other things that can affect color perception, such as adjacent colors, assumptions about lighting conditions, and a whole host of other factors. The infamous dress incident a few years ago highlights many of these issues.

All of which seems to indicate that color is a reaction of the observer to stimuli. The question is, a reaction in service of what? And that leads us to the downstream effects. Here things are less certain and more controversial. What we can say is that color seems to enhance visual discrimination, adding to the capabilities provided by shape distinctions and brightness levels.

On top of that, it seems like specific colors have higher saliency than others, indicating that they help in prioritizing attention. Assuming equal brightness levels and the absence of other shape triggers, red jumps out at us more than other colors. Yellow does too, but to a lesser extent. They seem to say, “Look here”. It’s not an accident that most stoplights, error signals, and other high priority messages tend to be red. It’s also not an accident that construction workers often wear yellow. By contrast, green seems more soothing, and blue even more so.

All of this makes sense when we consider the evolutionary background of primates, where high calorie fruits tend to be reddish and yellow, with green often being more backgroundish, and blue usually indicating the sky, so very backgroundish.

Individual colors also trigger particular emotions. As a recent study indicates, some of these are learned, but many seem to be innate. And individual colors have a wide range of other associations.

So what does all that mean for color inversion scenarios? As Alex Byrne indicates in his SEP article on inverted qualia, switching around colors without affecting behavior runs into a number of complications. There are relationships between colors, brightness, and saturation levels that get in the way. So simply switching individual colors at random won’t work. We might try rotating the standard color wheel by 180 degrees, which preserves many of these relationships.

Color wheel showing the relationship between the various colors
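
To make the rotation concrete, here’s a minimal sketch using Python’s standard colorsys module (the function name and sample colors are my own illustration, not anything from Byrne’s article):

```python
# Rotate a color's hue halfway around the wheel, preserving the
# lightness and saturation relationships mentioned above.
import colorsys

def rotate_hue_180(r, g, b):
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)

print(rotate_hue_180(1.0, 0.0, 0.0))  # red    -> cyan (0.0, 1.0, 1.0)
print(rotate_hue_180(1.0, 1.0, 0.0))  # yellow -> blue (0.0, 0.0, 1.0)
```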

However, as Byrne points out, this changes how distinct the colors are for common foreground objects, at least in normal human vision, which seems to mess with one of the functions identified above: enhancing visual discrimination. And that’s before bringing salience and emotional reactions into the discussion. Once they’re included, it’s hard to imagine an inversion not involving a change in behavior.

Of course, someone could insist that we just need to rewire all the associations so that we have the same reactions to shades of blue and violet that we previously had to reds and yellows. But here’s where the conversation gets metaphysical. Can we do that without effectively turning those colors into their inverts? If we give a shade of blue all the reactions we have to red, haven’t we effectively made it red? (In comparison, what if we changed the effects of a bitter taste to match those of sweetness? Would it still make sense to call it “bitter”?)

To say the colors don’t change here is to assert that there remains an intrinsic nature to the color independent of all its causal effects. I understand why this stance feels intuitive. But at this point it seems like we’re really talking about something ineffable and epiphenomenal. And when pondering the nature of a color, we have to remember that we’ve already identified distinctiveness, saliency, and other associational triggering as functional attributes. Maybe something is indescribable and epiphenomenal because it isn’t there.

Or maybe I’m missing something. What do you think? Are there aspects of color not captured in this discussion? If they’re ineffable, is there any way for us to establish their actual existence?

119 thoughts on “The function of color”

  1. I’d say colours are part of the ontology we use to describe what we sense so that we can have conversations about what we collectively see going on in the world and can compare our reactions to those things.

    If you see something that might be considered blue, perhaps your neuron 32 fires, whereas my neurons 25 and 27 fire in my visual cortex. Those are utterly different things happening in different brains, even if some anatomical structure may be similar. We find something happens for us mentally in a particular situation and agree the label blue for it, learnt as a child, so we can communicate with others about it. That’s a particular developmental stage for children, before which they don’t distinguish, knowingly, by colour.

    Your reaction to it, based on your experience with blue, is likely to overlap with mine because we find ourselves in the same physical world, and with similar bodies, but also to differ from mine based on previous traumas or delights associated with blue things.

    Ontologies are to some extent arbitrary, partly determined by the world they describe and partly by the purpose to which they are put. I have in mind here work we did on ontologies for collaborating intelligent agents in AI systems a while back.

    I was reading recently of the odd status of purple. Being between red (which is at one end of the colour spectrum) and blue (near the other), it does not actually appear anywhere on a continuous colour spectrum based on the wavelength of light, but rather is detected when we sense roughly equal quantities of red and blue light.

    1. I’m onboard with a lot of this. The only thing I’d note is I think there’s more of an innateness in the mix. So it’s not just that the relations fix the concept, but at its core there is an instinctual component. Our reactions to reds and yellows seem too universal to be entirely learned. And monkeys who are put in red rooms reportedly don’t like it, becoming agitated, but seem much calmer in a green or blue room.

      Which isn’t to say there isn’t a major learned aspect to it all. The concept of each color is almost certainly refined and sharpened as we experience the world. That and we each have unique ratios of S, M, and L cones, which means that the internal patterns won’t be the same. And our ability to discriminate colors changes as we age, so even within the same individual, there will be variances over time.

      Violet, indigo, and purple are interesting hues. I do wonder what’s going on in visual processing when we see them. It seems like we should be getting signals from both blue-yellow opponency cells tilting to blue, and red-green ones tilting to red. But I think the R-G opponency cells are supposed to be a special case of the B-Y ones tilting yellow. So not sure what happens there.
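
      To make the puzzle explicit, here’s one common textbook formulation of the opponent channels as a small Python sketch (the weights and sample cone values are simplifying assumptions; real opponency is messier):

```python
# One standard simplification of cone-opponent coding.
def opponent_channels(S, M, L):
    red_green = L - M              # positive tilts red, negative green
    blue_yellow = S - (L + M) / 2  # positive tilts blue, negative yellow
    luminance = L + M              # achromatic channel
    return red_green, blue_yellow, luminance

# A violet-ish stimulus: strong S plus some L yields both a blue tilt
# and a red tilt, which is roughly the puzzle in the comment above.
print(opponent_channels(S=0.9, M=0.1, L=0.3))  # (0.2, 0.7, 0.4)
```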

  2. I would have thought Edwin Land’s experiment back in the 1970s gave a clear indication as to what it was that philosophers did (and still do) get wrong about colour. His Retinex theory of colour vision tackles the issue of colour constancy. See https://en.wikipedia.org/wiki/Color_constancy#Retinex_theory

    There is also a good explanation of Land’s seminal experiment in https://sciencedemonstrations.fas.harvard.edu/presentations/lands-retinex-theory-experiment

    The point is that colours we perceive are not independent of each other, but form a structure, which is why scrambling them at random makes no sense. What matters is not what a perceived colour “is like” in some abstract, absolute sense, but where it fits in this overall structure of colour perception. Thus it makes no difference how your experience of red-as-such would feel for me. The point is that it will be similarly placed in the overall structure of perceived colours for both of us.

    Which is also how colour constancy works: a shift in lighting, and even in the colour of lighting, affects the whole structure, leaving the colours actually perceived largely unaffected.

    All this seems to me to fit very nicely into the functionalist view of minds. 🙂

    1. Thanks Mike. I wasn’t familiar with retinex theory. It seems like there’s a predictive aspect going on there, which fits with a lot of current theories of cognition.

      I do think color constancy is an important phenomenon. It’s another reason we can’t just take light wavelength as the answer: the brain makes adjustments based on what it perceives the lighting environment to be. It means that color isn’t just straight wavelength, but an automatic judgment made by early visual circuitry. Color is more about what the object might mean for us, than the precise stimuli we receive.
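
      For anyone curious what that kind of adjustment looks like computationally, here’s a minimal sketch of a “gray world” normalization, a crude cousin of Land’s retinex rather than his actual algorithm (the function name and array shapes are my own assumptions):

```python
# Discount the illuminant by scaling each channel so the scene
# average comes out neutral gray.
import numpy as np

def gray_world(image):
    """image: H x W x 3 array of linear RGB values in [0, 1]."""
    means = image.reshape(-1, 3).mean(axis=0)  # per-channel scene average
    gray = means.mean()                        # target neutral level
    return np.clip(image * (gray / means), 0.0, 1.0)

# Under a reddish illuminant every pixel's red channel is inflated;
# dividing by the scene averages cancels the per-channel cast (up to
# overall brightness), so relative surface colors stay stable even
# though the raw stimulus changed.
```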

      1. > Color is more about what the object might mean for us, than the precise stimuli we receive.

        More? I doubt it. In some part, probably. The structure of colour perception is basically given to us by evolution and fine tuned in early infancy. Land demonstrated that it depends on perception of colour contrasts as well as on light wavelength stimulating our retinas.

        So basically, seeing red is seeing a colour which is not blue, green, orange, yellow, purple etc etc… Which may sound trite, but isn’t because the structure readjusts itself in line with our automatic, built-in expectation of colours generally staying the same regardless of light conditions — e.g. if a cloud suddenly cuts off direct sunlight.

        1. Right, when I say what it means for us, I mean it in terms of evolutionary affordances, not what it means for us today in our day to day lives. Sorry, could have been more clear on that.

          I agree that distinction is the biggest part of it, but also salience. The two combined, I think, put a necessity into that overall structure that prevents it from being completely arbitrary. (If it were arbitrary, I would think we’d see pathologies where it gets out of whack from one part of the brain to another. I haven’t come across anything like that in case studies.)

  3. I think I’m in agreement with you and the above comments, so I’ll just put it into my perspective: unitrackers!

    We give names to units of pattern recognition. When I refer to my “red”, I’m referring to my pattern recognition unit for “red” and the things that happen as a result of that recognition (so, the function). There is no extra property that can be compared to another recognition unit, say “green”, or to someone else’s recognition unit for ”red”. And as you suggest, if we change the function of the red unit to do the same things as the green unit, it becomes “green”. This is demonstrated when you start wearing goggles that invert the up/down of the image you see. What used to be ”down” simply becomes “up”.

    *

    1. Good points. In truth, I think there are unitrackers for each subarea of the visual field, a set of which is concerned with what color is in that area. And then there are other unitrackers concerned with more abstract notions of that color, and yet others that use it to recognize an object, like a ripe banana.

      I like your point that there is nothing to compare beyond the conclusion of “here is red” by the unitracker. So there’s no inherent redness we need to account for, or admit we can never account for.

  4. Galileo was indeed wrong; colors are, first and foremost, properties of reflective surfaces. And those objective colors do often have a function: to attract pollinators, for one example. Or to attract fruit eating animals who will disperse seeds and often provide a helpful dose of fertilizer, for another.

    Sure, bees see more color properties than we do, and color blind people see fewer; I’m not sure why this is a “problem” for objective color. Totally blind people see no light at all, yet this doesn’t show that objects lack reflectivity as an objective feature.

    But let’s talk about subjective color. You pose the question whether we could give a shade of blue all the reactions we currently have toward red. I think that is conceivable, yet I do NOT think “there remains an intrinsic nature to the color independent of all its causal effects”. Rather, what is conceivable (but probably not humanly possible) is that the subjective colors associated with objective colors could change over a person’s lifetime. The person would still be able to remember the old sensation, barring amnesia, but the new sensation now associated with stop lights would become the alarming, attention grabbing one.

    And before you say, “Aha! The person remembers that this shade USED to be associated with calm blue skies; therefore, functionalism!” — Let me point out that comparisons of purely historical interest are not the sort of input -> mental states -> output relations that Good Old Fashioned Functionalism relied on. If you dumb down the definition of functionalism so that it just means physicalism — well I can’t stop you, it’s a free country, but you’re not doing your conversation partners any favors.

    1. Certainly flowers and fruits use color to signal to their partner animals that they’re ready to be pollinated or consumed so their seeds are spread. But this is an arrangement that evolved in tandem. It’s like a code one species is sending to another, and they evolved it together. What matters is how the target species perceives it, not any objective standard.

      On objective colors, ultimately I think it depends on what we mean by “color”. You could define it in such a way that it’s objective, but as you noted, we then need to make a distinction between the objective color and the subjective one. This can get confusing, because humans frequently talk about “objective” colors, but what they typically mean are colors perceived by the vast majority of humans in ideal lighting conditions.

      On functionalism, yeah, we’ve debated that definition before. I sometimes think the name “functionalism” is unfortunate, since it implies teleological, or at least teleonomic, processing. But most philosophers take it to mean something broader, including all causal relations. A better name might have been “processism”, “causalism”, or something along those lines. But for historical reasons, we seem stuck with “functionalism”.

      I don’t think it’s equivalent to physicalism. Identity theorists are physicalists but aren’t functionalists even in the broader sense. I’m sure they see it all as causal, but they don’t hold that there’s a way for us to account for conscious sensations in a causal manner.

      All that said, I’m not seeing a version of functionalism where being able to remember past color experiences wouldn’t count. And as I noted in the post, I don’t think there would be any actual variance to remember.

      Even if there were, it seems like there are separate issues here related to how the brain works, with a vast overlap in the circuits used for perceiving red and remembering red. It’s not clear to me that someone could remember how red used to be except in terms of how it looks now.

      1. I could point out why traditional functionalists would not want comparisons of purely historical interest to figure into the definition of a sensation, but that would be a long journey for a small payoff. Instead, I want to hear more about why identity theorists don’t count as functionalists. Identity theorists are committed to the fact that if you cause certain patterns of activity in certain neural networks, you get pain. Does that not “account for conscious sensations in a causal manner”?

        Consult my thought experiment with the hot and cold water baths, for a closely related case where the memory of what lukewarm water felt like in the past is very much in contrast with what it feels like now: https://noghostnomachine.wordpress.com/2021/11/03/hard-fact-not-hard-problem/ We’ve all been there and done that (even if not as extensively and skillfully as the protagonist of my story), so clearly it is possible.

        1. As far as I know, no identity theorist denies that it’s causal processing. The difference is they’re typically not committed to providing a full account of that causal processing. This may be because they don’t think it’s currently possible, or because they doubt it will ever be possible. They’re willing to accept brute identity relationships as explanatory primitives. A functionalist isn’t.

          To be fair, identity theorists usually accuse functionalists of either dismissing or redefining the problem so they can aspire to that full causal account.

          Chalmers describes it in an interesting fashion. He sees functionalists as aiming for a full a priori account, while identity theorists are willing to settle for at least some a posteriori derived correlations. That doesn’t mean functionalists don’t use a posteriori knowledge; they just aren’t willing to settle for it. So another way of describing these may be a priori vs a posteriori physicalism.

          1. > Chalmers describes it in an interesting fashion. He sees functionalists as aiming for a full a priori
            > account, while identity theorists are willing to settle for at least some a posteriori derived correlations.

            Which leaves out people like me. As a physicalist, I have been persuaded by Davidson (yes, him again) that we have no reason to assume that functionalist ambitions can be fulfilled even in principle, and that we may have to settle for correlations and (not to be sneered at!) heuristics. This is actually not that controversial. Take QM, where we can provide full explanations only for pretty simple systems and just wave our hands and assert that such explanations in principle scale up, even though we do not have (and quite possibly never will have) the means to solve the relevant systems of equations for realistically complex scenarios. (E.g. for the inner structure of a proton!)

          2. Could be. A lot depends on our attitude toward understanding, and possibly reconstructing, the problem. If we insist that only the problem as traditionally understood, with all the classic assumptions, be answered, then I can see the case for saying it will never be solved with a complete causal account. But to me it feels a bit like concluding the mystery of the celestial spheres is unsolvable.

            With QM, I do think we have a logical a priori account. Most people just hate it, and would rather make any assumption they can to avoid it. It still might turn out to be wrong, but it is there. Not that anyone is going to predict the stock market with it or anything.

          3. I am intrigued by this divergence. So I’d like to dig a bit deeper.

            Essentially all I am asserting about the mind/brain relationship is P: identity does not entail reducibility. It seems to me that you disagree. I can think of only three possible reasons.

            1. You think P is wrong and identity does entail reducibility
            2. You agree with P in general, but think that in the specific case of mind/brain reducibility is possible
            3. You do not think that functionalism requires reducibility and hence P is not relevant

            Which of these would you put your name to? Or have I missed some other possibility?

          4. For 1 and 2, it seems like it depends on the particular identity relation. There are many identities we might only be able to establish empirically, but can’t explain how or why they’re the same thing. The classic toy example is saying pain is c-fiber firing. If that were true, we’d still be stuck explaining why c-fiber firing is painful. These are the type of identity relations I’m not satisfied with.

            What do we need for such an explanation? We need to be able to map the causal role of one side of the identity with the causal role of the other side. Once we can do that, I think our ability to reduce one side of that relation becomes a reduction of the other side.

            (Of course, this requires both sides to have causal roles, that one of them isn’t an epiphenomenon. But it’s always seemed to me that epiphenomena we can talk about aren’t epiphenomena, at least not in the broad philosophical sense.)

            Hope that helps rather than just muddy the waters more.

          5. Sorry for the late response…

            Yes, I should have made clear that I was talking about token identity between mental and physical states. For a physicalist it is a given that any *particular* mental state is de facto identical with the specific physical state at the time. On that level, mental to physical reduction should not be problematic, and if that is all you would require of reduction, we have no disagreement.

            But reduction is generally taken to mean the ability to describe a “natural kind” of the level being reduced solely in terms of its substrate, which is a much taller order. Specifically, non-physicalists complain that there appears to be no way to construct mental states out of physical states of the CNS, e.g. to fully account for the general experience of pain in purely neurological terms. Reduction of this kind is not entailed by the token-level reduction. But physicalism does not actually entail this being possible, and as you yourself have noted somewhere, there is no evolutionary pressure for aligning mental natural kinds with physical ones.

            A useful analogue is our in principle inability to characterise the notion of Game of Life’s spaceships in the language of GoL’s rules. There is no type identity available, despite any specific spaceship clearly arising out of GoL rules.
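
            For readers who haven’t played with GoL, here’s a minimal sketch of the point (the step function and glider coordinates are just one standard way to write it):

```python
# The rules below mention only single cells and neighbor counts;
# "spaceship" is a pattern-level kind with no term in the rules.
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(glider)  # the same shape, shifted diagonally by one cell
```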

          6. @Mike A:

            Hope you don’t mind me (briefly) butting in! One thing I’d like to point out is that philosophy of mind has moved a long way since Davidson. When Chalmers speaks of functionalism entailing a priori relations between functional and mental states, he’s not talking about bridge laws that actually translate functional truths to phenomenal truths. That is an antiquated notion of reduction that the original physicalist non-reductionists (like Davidson and Searle) were trying to challenge.

            Rather, Chalmers just means that all functional truths should logically account for all the mental phenomena, if functionalism is correct. We can cash this out by appealing to the notion of supervenience. If A reduces to B, then all the truths about A should supervene on the truths about B. The complaint of the phenomenal realists is just that mental phenomena don’t seem to (logically) supervene on physical phenomena. They do physically supervene of course, but there doesn’t appear to be an a priori relation here, a result of the fact that we can imagine invert and zombie cases where we hold physical phenomena fixed, but not mental phenomena (so the argument goes).

            Thus, non-reductionism has largely fallen by the wayside because it’s meant to fix something that no non-physicalist really believes anymore. If the reason the phenomenal realists thought that mental-physical reduction was impossible was a result of the belief that reduction entails the possibility of a translation between mental and physical truths, then non-reductionism would have validity. But that’s not true (anymore), although it certainly was true of many philosophers way back in the 70’s and 80’s. Now we know better though.

            Incidentally, one of the reasons for the abandonment of the bridge laws approach to reductionism has to do with the greater reliance on mechanistic, as opposed to deductive-nomological, explanations. Once we think in terms of mechanistic explanations, we can easily see why reductionism doesn’t entail psychological-neural translation. It’s because psychological explanations happen at a separate level of explanation, one which invokes different and more abstract mechanisms compared to those of neuroscience. But while the mechanisms are different (and therefore not easily or not possibly translatable), they are still meant to account for the same phenomena.

            Anyways, my point is that I think you’re interpreting Chalmers (and functionalists) uncharitably here. He’s not saying that functionalism has to entail the kind of reduction that you have in mind. No one really believes that. If you think that functional (causal) truths can fully account for all mental phenomena, even if we can’t describe the relation using natural kind terms, then you’re a functionalist according to Chalmers (and most functionalists).

          7. Alex,

            A lot of people seem to misinterpret Davidson and think that his Anomalous Monism was intended as a solution to the mind/body problem. But that’s not how I read his Mental Events paper. As he clearly articulates in its opening, his purpose was simply to argue that an “anomalously” monist view of the mind/body issue was also feasible. And that is all that the paper actually establishes.

            For myself I find it more useful to traverse the logic of the paper in reverse — this makes it clearer that physicalism does not entail reduction of minds to physically based sciences in the sense of necessitating some law-like relationship between the two. On that, I make no further claim, and I do not believe Davidson was trying to make any deeper claim either.

            However, in practice I do find that functionalists tend to dislike the notion of non-reductive physicalism. Whether or not Chalmers does is neither here nor there.

          8. @Mike A: I agree that Davidson is only making the claim that ontological identity doesn’t require a law-like relation between the mental and the physical. My point was that I think you mischaracterized Chalmers when you earlier wrote that his description of functionalism leaves out people like you (and that it therefore entails a false dichotomy between physicalism and non-physicalism). When Chalmers talks about an a priori derivability of the mental from the physical, he doesn’t have in mind a law-like relation, just that supervenience relation I was earlier talking about.

            I think the criticisms of the non-reductive physicalists were heard loud and clear a long time ago, and as a result, very few reductive physicalists think there needs to be a law-like relation or translatability of statements from one kind to another. My point is that I think you’re being uncharitable in your interpretation of functionalists, as well as of Chalmers’ characterization of functionalists. You make it sound like they still hold to an antiquated view from 40 years back, but that’s no longer the case. In fact, I think you would be categorized as a functionalist according to their terminology.

          9. Alex, I was responding to what our host said: “Chalmers describes it in an interesting fashion. He sees functionalists as aiming for a full a priori account, while identity theorists are willing to settle for at least some a posteriori derived correlations.”

            Now, I do not aim for a *full* a priori account because (a) I am a physicalist, (b) I agree with Davidson that non-reductive physicalism is philosophically coherent and (c) I see no reason to expect natural kinds of mentalist discourse to be sufficiently alignable with those of physicalist discourse to permit full reduction. OTOH, I do not expect having to settle just for some a posteriori derived correlations either. Thus that summary does appear to exclude people like me. I offer no opinion as to whether it does Chalmers justice or not.

            Should I be classified as a functionalist? I don’t actually care. I just note that in my experience self-proclaimed functionalists do not favour the stance of non-reductive physicalism. They hope for more, while I struggle to find any justification for that hope.

          10. Mike A,

            I understand. What I’ve been trying to say throughout this entire conversation is that the belief that one can have full a priori derivability from the physical facts doesn’t contradict a-c.
            🙂

            Moreover, Chalmers and most modern functionalists don’t think it contradicts either. Nor do they think that mentalist and physicalist discourse have to be neatly aligned. That’s simply not what they mean by a priori derivability.

            Basically, everybody is already in agreement with you, and you’re not being excluded in the manner you think you are. If you’re interested, I can get into the details of why. But that’s been my main point that I’ve been trying to get (unsuccessfully) across from the beginning.

          11. Alex, you don’t seem to be paying attention. I was responding to Mike’s specific formulation and I gave my reasons. In this context what Chalmers and others think is simply not relevant.

          12. Hmm, not sure what you feel I’m missing? Anyways, I think Mike’s formulation hit it on the head, and I don’t think it potentially excludes your position in any way (for all the reasons I already mentioned). But now we are going in circles.

          13. Mike,
            If I may, I’m wondering if my wording may have caused a misunderstanding. I was distinguishing between
            1. a full a priori account
            2. a partial a priori account with some a posteriori identities

            Does that still sound like it excludes your position? If so, what’s in between?

          14. @Mike A: I apologize if I previously came off too strongly.

            @Mike (the other one!):

            Yeah I realize that I also probably didn’t word my objection to this distinction well either, since the exact distinction is ambiguous (not disagreeing with your definition to be clear). In any case, I think the main confusion here is really about the meaning of “a priori derivability”. To me that just invokes the notion that reasoning from the known physical facts can theoretically encapsulate all the truths about mental phenomena. Whereas “some a posteriori” to me means that we need to understand a bit more about the nature of the physical world before we can make a judgement call that an ontological reduction is, in principle, possible.

            But that is totally orthogonal to the issue of mental-physical terminological translatability. I mean, just because we can’t translate language about general relativity (e.g. curvature of space) into the language of the Ptolemaic geo-centric model (e.g. epicycles) does not mean that the former can’t fully explain the same phenomena that the latter was meant to explain (the movement of the celestial bodies). In other words, full a priori derivability does not entail inter-translatability, or bridge laws between different theories or conceptions of natural kinds.

          15. Alex,
            I’m not sure I agree on the inter-translatability. In the case of the Ptolemaic model, it had some major misconceptions, like the crystalline spheres and firmament, that make complete translation impossible.

            I do think it’s reasonable not to expect clean simple mappings. It’s kind of like trying to map the display of this comment to your device’s physical state. We know such a mapping is possible in principle, but in practice it’s extremely difficult, even for a completely engineered system. We lean heavily on several intermediate layers of abstraction to understand that relationship. I wouldn’t be surprised if that’s a pale imitation of what we’ll have to do with the brain. And of course, part of that will involve having to reassess some of our assumptions about mental states.

          16. I reckon it goes beyond mere practicalities, Mike. As I should know (having worked for 30 years as a systems programmer) there is no general necessary connection between software and hardware states of a modern computer. The connection is contingent, heavily dependent on past machine activity and these days is further scrambled by deliberate randomisation of memory mapping for security reasons. Why should we expect anything else for brains/minds, particularly since all brains are individually unique, just sharing a general structural plan?

          17. I still see it as practicalities. (I actually have a programming background myself, ranging from assembly to web programming, albeit increasingly in the past.) The main thing is we can follow the functional abstraction layers, from the web browser’s model-view-controller structures, to the OS UI framework and memory management, to the process-specific memory locations, to the privileged kernel-level ones, to transistor and LED states. If we took a memory dump while the comment was being displayed, maybe while displaying it in a virtual version of the device, we could take our time dissecting it.

            But we definitely do need all those intervening layers. Trying to jump from the comment display directly to the hardware state would be hopeless. If we didn’t know the device was engineered, we might be tempted to think there was no path at all, and start talking about what a hard problem it was.

          18. > But we definitely do need all those intervening layers. Trying to jump from the comment
            > display directly to the hardware state would be hopeless.

            Well, quite. Now, our brains are by orders of magnitude more complex than our most powerful computers. It’s a dead cert that evolution did not provide us with a neat scaffold to climb from neural wetware to higher cognitive functions. We would have to construct such a scaffold ourselves, but I see no reason to think that the actual, highly flexible architecture evolution came up with is sufficiently disentanglable for any such artificial abstraction to be neatly alignable with it.

            Re my question about what functionalism is aiming to provide an a priori account for… It wasn’t intended as any sort of attack on functionalism. I would be as satisfied as you say you would be by accounting for the “easy problems” (plus affects). I was mulling over your disagreement with my point that even highly successful theories such as QM do not provide us with a full a priori account. (NB: I take it that we both use QM as a short-hand for the Standard Model.) This despite our current inability to verify that QCD accounts for the properties of protons (we think it does because we have no reason to believe otherwise — though, of course, many physicists rather hope it does not :-)). And QM tells us that vacuum has energy but gives no handle on calculating this energy — apparently reasonable guesses lead to heroically incorrect values. I am sure you are aware of these facts, which made me wonder more generally about what it is that a theory (any theory) is supposed to give an account of. Which morphed into a somewhat awkward question as to what the hoped-for a priori accounting of minds is supposed to cover, and how we would know that it does.

            I agree that I put it badly. The thought was occurring to me as I typed and was not at that point properly formulated. I should have left it to stew for at least 24 hours, as is my general habit, but I didn’t. 😦

          19. I thought your question about the target of what functionalism is trying to explain was totally valid. It’s a common criticism of functionalism that it moves the goalposts. But I should have asked what target you think is in principle unachievable. I assumed something like traditional qualia, but really shouldn’t have, given what I know about your views.

            For QM, I actually wasn’t thinking about the Standard Model overall. From what I understand, it definitely has its ugly spots. (My knowledge of QFT and the other formulations is too scant to say I understand them.) My take is to look back at the history of science and all the other times people have declared that we’d reached the end of knowable things. But I’ll concede the point that we don’t have a full a priori account there. (Although Sean Carroll likes to say we do now fully understand the physics of the everyday world.)

            No worries on wording. It’s all friendly and informal here. I have no trouble with people rephrasing something later. I prefer for these discussions not to feel like legal filings.

          20. Mike,

            Not sure I understand your objection. I was pointing out that I don’t think an a priori entailment of all the mental truths from all the physical facts must entail inter-translatability. Are you saying that you still think inter-translatability is possible, or are you saying that it is in fact necessary?

            If the former, then we are in agreement, but if the latter my concern is basically what you just said. Certain theories have terminological disputes which are so vast that their differences may be completely irreconcilable. Secondly, there is an issue of vagueness: our abstract natural kind terms may be too vague to give them exact necessary and sufficient conditions at the microphysical level.

          21. Alex,
            I think we’re probably saying the same thing with different words. Certainly many mental concepts are vague, using one phrase to refer to a wide range of physical processes. This is where the eliminativists often say we should just dump the psychological concept. I definitely think we need to be open to that, or reconstructing the concept. But, assuming it still does work in everyday discourse, it seems important to account for the complex relationship.

          22. The reason I keep pointing out that I was responding to Mike, not to Chalmers, is quite simple: a paraphrase, accurate or not, is still a paraphrase. And indeed, it turns out that Mike’s version was ambiguous, which neither of us spotted. Ironically, the disambiguated version makes me an identarian rather than a functionalist, despite your vehement assertions to the contrary. Not that I particularly care either way.

            I hear what you say about the shift from nomological considerations to mechanics in some circles (by no means universally!), with non-reductivity being effectively internalised. However, just like one cannot have a language without *some* general terms, nomological considerations of translation can be set aside but are not thereby eliminated. Some level of abstraction is inevitable for any general theory.

            So if “a priori derivability” means simply “the notion that reasoning from the known physical facts can theoretically encapsulate all the truths about mental phenomena”, there is this not so little problem: how exactly does one establish that a particular theory can actually do so without abstracting away from mechanics to more generic, nomological-style considerations?

            BTW, as it happens, translating Ptolemaic epicycles into the geometric language of GR is perfectly feasible. The circular orbits and epicycles are the first two terms of the Fourier decomposition of Keplerian orbits, which imply Newtonian gravity. And Newtonian gravity (and dynamics generally) can be and has been geometrized.
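
            As a quick numeric illustration of that Fourier claim, here is a sketch (the eccentricity and grid size are arbitrary assumptions): solve Kepler’s equation, Fourier-transform the orbit, and keep the first two rotating terms, i.e. a deferent plus one epicycle.

```python
import numpy as np

e = 0.2                                        # eccentricity (arbitrary)
M = np.linspace(0, 2 * np.pi, 2048, endpoint=False)  # mean anomaly
E = M.copy()
for _ in range(50):                            # fixed-point solve of E - e*sin(E) = M
    E = M + e * np.sin(E)
z = np.cos(E) - e + 1j * np.sqrt(1 - e**2) * np.sin(E)  # ellipse, focus at origin

c = np.fft.fft(z) / z.size                     # Fourier coefficients in mean anomaly
deferent = c[1] * np.exp(1j * M)               # uniform circular motion
epicycle = c[2] * np.exp(2j * M)               # one epicycle riding on it
print(np.abs(z - (c[0] + deferent + epicycle)).max())  # small at modest e
```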

          23. Mike A,

            Thanks for pointing out the dis-analogy with the Ptolemaic epicycles. I had no idea that kind of mathematical translatability was possible; learned something new today. It seems to me that whether you should be technically classified as a functionalist or identarian according to the criteria in question will depend on whether you think a full a priori entailment should entail the kind of inter-translatability that you have in mind. We seem to be in agreement that the inter-translatability may well be impossible, so the relevant question is just whether the entailment is true.

            Assuming we concur on the above, let me now address your points. I think the answer to how we can avoid describing the physical mechanism in the language of mental natural kinds, while also knowing that derivability is still in principle possible, can be found if we distinguish between theoretical and observational terms. Sure, every observational term is technically theory-laden, so this is really a matter of degree, but the point is that the degree matters.

            Take modern astrophysical theories of star formation and contrast them with the theory of the pinhole firmament. Obviously, our most theoretical terms from both sides will be completely untranslatable. But references to the stars themselves are still universal to both theories (even if the understanding of what a star is is radically different). This partial translatability is all that is needed to achieve full a priori derivability in my view.

            In other words, all we need to establish is that the two different schemes or theories are in the business of explaining the same phenomena. In this case, we can call the shared understanding of the explanandum to be about those pinpricks of light in the night sky. Similarly, even if mental theoretical terms (e.g. pain) have no clear equivalent in the physical mechanistic description of the brain, a full a priori derivability is still possible provided that we agree that the different schemas are trying to explain the same thing (mental phenomena).

            In this case, it seems like both the mental and physical explanations are in the business of explaining the same phenomena, namely the phenomena that we can access from the data of our experiences, or that we can reference by ostension etc… And this remains true even if the categorizations are radically different.

            I agree that degree does matter, so perhaps it is conceivable that we may arrive at the conclusion that there is no shared translatability of any kind to be made sense of, in which case we would have to abandon an a priori derivability. But if the bare conception of a star as being a “pinprick of light in the night sky” is enough for a shared understanding between the firmament theory and modern stellar formation theories (we can still recognize that they are meant to explain the same phenomena), then I see no reason why we can’t do the same with mental-physical reduction.

          24. Sorry, I was away from my keyboard for a while (and I cannot abide virtual keyboard on mobile devices — call me old-fashioned, by all means!).

            Re Ptolemaic system… More generally, this is a hobby-horse of mine. 🙂 Philosophers need to be careful in separating the instrumental machinery of a theory from the interpretation of that machinery bundled with that theory. Classic example: Caloric. The interpretation got discarded, but its formal equations were simply taken over by thermodynamics. There is generally a continuity of formal theoretical apparatus, even if interpretations get discarded. Yet it is often said that Einstein “overthrew” Newton, while he was actually at pains to demonstrate that Special Relativity simplifies to Newtonian dynamics at low speeds.

            Hence I agree with your examples from other sciences. But we are not talking about translatability between some old and new views, but between views from different levels of reduction and that’s a whole new ball game. The example I like to cite (and have done in this blog already) is Conway’s Game of Life (GoL). GoL’s rules are trivial and there is no room for any mysterious “extra” to be involved. And yet… As a matter of principle, the apparently obvious GoL natural kind of a spaceship provably cannot be expressed in terms of GoL’s rules. There is no translatability, because GoL is Turing-complete — i.e. it can implement a Turing machine. And guess what? Human minds are (obviously!) also Turing-complete! 🙂

            I am pretty sure that we shall never have a theory of mind that would allow us to decide whether a given entity (be it a human, a computer, a Martian, an interstellar cloud (black or otherwise :-))…) is a “person” or not. I use the term person in order to avoid a linguistic problem: “intelligence” carries too much baggage, “sentience” by definition includes feelings (thereby excluding Mr Spock), and other languages (at least Slavonic ones) use “reason”, though not in the narrower sense it has in English.

            As I understand it, on this I agree with Mike in drawing a parallel with our (already very considerable) understanding of biology, which does not give us a handle on defining “life”, except in narrowly parochial and clearly incomplete terms of terrestrial DNA-based entities.

            But I also go further. I really see no prospect for any kind of understanding of specifically human minds which could give us the ability to deduce a general mental state or activity purely from knowledge of the neural states and activity of the CNS at the same time. That is, crudely speaking: you can observe C-fibres firing or not, but forget about presence or absence of pain being 100% entailed by such observations. Yes, there will be heuristics which will read mental states/processes off neural states/processes with varying degrees of certainty, depending on circumstances and individuals, but such heuristic mappings will of necessity be post hoc, placing limits on a priori explanations.

            Similarly, relating a particular experience (e.g. *this* stab in my toe) to a general set of possible neural states/processes (as opposed to the exact current one) is, I reckon, also likely to be out of reach.

            To put it in more general terms, mappings between reasonably delineable general states/processes (a.k.a. natural kinds) will necessarily be one-to-many in both directions. These limitations seem to me to be a matter of principle, for reasons given by Quine and Davidson (“irreducible indeterminacy of translation/interpretation”) and by Wittgenstein and Davidson (language as both an enabler of and a constraint on any kind of communication).

            If you can square all this with your notion of “a full a priori accounting”, then we have no argument. But in my, rather extensive, experience of arguing these matters, that would be unusual.

          25. Mike,

            Ah, OK. I see that your formulation was rather ambiguous — you meant it one way and I read it the other way, hence my original grumble. So Chalmers’ classification makes me an identity theorist, despite Alex’s insistence that I should be classed as a functionalist. I can, therefore, gleefully rescind my agreement with you on this particular point! 🙂 Even so, the distinction would seem to be more along the lines of optimism v. pessimism, rather than anything more substantial.

            BTW, turning to the matter of “full a priori accounting”, given our apparent disagreement over whether or not QM offers such, it occurs to me that it would be useful to clarify what exactly it is that a functionalist is supposed to want a full a priori account of. “Mind” is really too vague a term. Is it the phenomenology of behaviour (in the widest meaning of the term)? Or private experience? Or the conditions under which a system is judged to be conscious? Or something else? I suspect one’s thinking may be strongly influenced by the answer to that.

          26. Mike,
            Yeah, I suspected our agreement might be illusory. 🙂

            Your second paragraph gets to a common criticism of functionalism, that it redefines the goal to exclude the problematic stuff, like fundamental qualia. I think it focuses on meaningful goals: attention, memory, perception, object discrimination, reportability, confidence assessment, affects, etc. Essentially Chalmers’ “easy” problems. (Although, tellingly, he omits affects from that list.) Once those are accounted for, I’ll be satisfied. At least until there is evidence that something else remains.

          27. @Mike A:

            I never said there wasn’t a gap between the full a priori and partial a posteriori accounts. To repeat myself, my core claim is that “the belief that one can have full a priori derivability from the physical facts doesn’t contradict a-c.”

            You can have full a priori derivability while simultaneously denying that there needs to be any strict translatability between mental and physical languages, or bridge laws between mental and physical theories.

            So when Mike was giving his formulation of Chalmers’ account, he correctly noted that Chalmers labels functionalism as being in the business of giving a full a priori account of all mental phenomena from the physical facts, and you incorrectly inferred that this leaves out the position that there might not be intertranslatability between mental and physical discourse. Because one does not entail the other, or so I assert.

            Incidentally, not sure why you keep insisting that Chalmers’ opinion is irrelevant, given that Mike was pretty explicitly giving an (accurate) characterization of Chalmers’ definition, and you were pretty explicitly objecting to that characterization. But I digress.

            Happy to get into details of why a full a priori account doesn’t exclude your position, if you wish to discuss that.

          28. No worries on response time. These always proceed based on our time and interest.

            I’ve noted many times that the mind didn’t evolve to understand itself, but I generally mean that in terms of introspection and resulting intuitions. But to me, that just means that a successful theory of mental states probably isn’t going to be intuitive. Given how unintuitive many scientific theories are (QM, GR, etc), that doesn’t seem surprising.

            I think that’s the problem for many non-physicalists. They demand too much of a scientific theory. They seem to want it to feel right. It might feel right to someone who spends a lot of time studying the details, but probably won’t for most people. It doesn’t help that there will almost certainly never be just one theory. Similar to how we don’t have a theory of life, just a whole bunch of theories about different aspects of it, I think we’ll likely never have one theory to end all investigation. At best we’ll have an umbrella theory, similar to natural selection, that at least provides a framework for the rest.

            One of these days I need to take a deep dive on the Game of Life.

          29. We certainly don’t want to depend on just a priori analysis from our philosophical armchair. But we also don’t want to settle for only a posteriori knowledge. Our theories have to account for empirical data, and be constantly tested against them, but until we have a causal account, I don’t think we understand what’s happening.

          30. Would the following be a correct criterion for “a causal account” vs other accounts? Take the causal network involving color perception, e.g. X and Y cause A, A causes B and C, C causes D and Y, … L and M cause Z. Suppose variables A thru M are the variables we think of as paradigmatic color-perception things, while other variables X-Z are external stimuli or actions. (Variant: or other, not-obviously-color-involving, mental states.) Diagram this whole network. Then: an account is “a causal account” iff the *structure* of this diagram is all that matters for color experience, according to the account?

            Or – a more inclusive rule – an account is “a causal account” even if it specifies that the identity of stimulus/response variables X, Y, Z matter? I.e., you cannot swap out X, Y, Z, for different stimuli/responses U, V, W and still say that the being experiences the same colors.
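
            A minimal sketch of the structure-only reading in code (the edges follow the example above, minus the elided “… L and M cause Z” part; networkx and the label choices are assumptions for illustration):

```python
# Structure-only criterion: two accounts agree iff the causal
# diagrams are isomorphic once variable labels are thrown away.
import networkx as nx

color_net = nx.DiGraph([("X", "A"), ("Y", "A"), ("A", "B"),
                        ("A", "C"), ("C", "D"), ("C", "Y")])

# Same wiring under different labels (different stimuli/responses):
relabeled = nx.relabel_nodes(color_net, {n: n.lower() for n in color_net})

print(nx.is_isomorphic(color_net, relabeled))  # True: structure matches
```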

          31. I’d say the second is less complete. The crucial question is why that identity matters, how it changes the causal mechanisms.

            For example, in IIT and Recurrent Processing Theory, the recurrent processing structure is argued to have an identity relationship with consciousness. Even if identical outcomes can be produced in an “unfolded” network, the argument goes, such a network would not be conscious. It would in fact be a type of behavioral zombie, which is an essentially unfalsifiable assertion. The question is why recurrent processing would have that identity relationship.

            Or in another case, some biologists say that invertebrates aren’t conscious because they lack a mammalian cortex, essentially arguing an identity relationship between the cortex (or at least some portions of it) and consciousness. But again, it omits what it is about the cortex that makes it crucial.

            I don’t mean to imply that identity theories can’t be useful. We may find including identity relationships we don’t understand yet is the only way to move forward. But I don’t think we should ever be satisfied with them. They should only be viewed as placeholders until a more complete account can be found.

          32. OK, now I want to know where you stand on the recurrent processing versus unfolded network. Clearly, a recurrent causal structure has a different diagram than an unfolded one. So on any of my suggested versions of what counts as “a causal theory”, the difference between recurrent vs unfolded is a candidate for an important difference. A theory can declare that it matters, and still be a causal theory on a criterion I suggested.

            But now I’m getting the feeling that you want a theory that doesn’t care about that difference? And, at the same time, the theory doesn’t care which environmental variables the system hooks up to? What does that leave?

            Like

            On recurrent vs unfolded, if it makes a difference, I want a theory that explains why it makes a difference. Simply declaring that it’s necessary without that explanation might be a necessary temporary placeholder, but we shouldn’t be satisfied with it. (It’s actually not that hard to see possible answers, such as recurrence (which is essentially looping) allowing a lot more processing with less substrate. An unfolded network seems faster, but at the cost of a lot more neurons. That tradeoff may be why the cerebellum is mostly feedforward while the cortex is recurrent.)
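            (As a minimal sketch of that tradeoff, assuming a toy linear-algebra network rather than anything biological: a recurrent step reuses one weight matrix over and over, while the unfolded equivalent needs a separate copy of it per step.)

            ```python
            import numpy as np

            rng = np.random.default_rng(0)
            W = rng.standard_normal((4, 4)) * 0.1  # recurrent weights
            x = rng.standard_normal(4)             # initial state
            T = 5                                  # number of processing steps

            # Recurrent: loop T times over the same substrate.
            h = x.copy()
            for _ in range(T):
                h = np.tanh(W @ h)

            # Unfolded: T distinct feedforward layers, each a frozen copy of W.
            layers = [W.copy() for _ in range(T)]  # T times the weights/"neurons"
            g = x.copy()
            for Wt in layers:
                g = np.tanh(Wt @ g)

            assert np.allclose(h, g)  # identical outcome, very different resource costs
            ```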

            Not sure where the question about environmental variables is coming from. Obviously if the system interacts with them in a meaningful manner, the theory needs to deal with those interactions.

            Like

            34. OK, the recurrent network remarks help a lot. On environmental variables, I should have used an example. Standard humans are trichromats, with cones peaking at wavelengths around 445, 535, and 575 nm, with partly overlapping sensitivity ranges. We also have rods, which are far more sensitive in low light, with a broader range covering roughly the wavelengths of the two shorter-wavelength cones.
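            (For concreteness, a toy model of those overlapping sensitivities, assuming Gaussian curves; real cone spectra are broader and asymmetric, so treat the numbers as purely illustrative.)

            ```python
            import numpy as np

            # Assumed Gaussian cone model; peaks follow the figures above.
            PEAKS = {"S": 445.0, "M": 535.0, "L": 575.0}  # nm
            WIDTH = 40.0  # nm, an assumed standard deviation for all three

            def cone_responses(wavelength_nm):
                """Relative excitation of each cone type for a pure wavelength."""
                return {name: float(np.exp(-0.5 * ((wavelength_nm - peak) / WIDTH) ** 2))
                        for name, peak in PEAKS.items()}

            # A single wavelength produces a *pattern* across overlapping receptors;
            # the brain interprets the pattern, not the wavelength itself.
            print(cone_responses(520))  # ~520 nm: M strongest, L moderate, S weak
            ```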

            Suppose Andromedans have three audio receptors with highly specific frequency selectivity, centered at 445, 535, and 575 Hz. They also have rod-like audio receptors… They live in an environment with a very strong sound-source for roughly half of each day, and weaker sources for the other half. They navigate primarily by listening to the reflections of sound off of the surfaces in their environment. The frequency-dependence of sound reflectivity tells the Andromedans important facts: for example, fruits typically become ready to eat when the 575 Hz range shows the strongest reflection. Etc., etc., I think you can see where I’m going and fill in the rest of the details. We can also specify that within their brains, sound signals are processed isomorphically to how color signals are processed in ours.

            Would the Andromedans have “the same” experiences in some important sense? Or can we appeal to their different environmental interactions to deny this?

            For another example, suppose future posthumans develop and use the ability to run detailed, atom for atom, computer simulations of humans and their environments. Simulated humans are responding, not to reflected light, but to 1s and 0s of simulated light.

            Like

            35. Ah, I see what you’re getting at now. Auditory colors? I don’t know. Sounds are vibrations of a gas (air); those vibrations are much larger and slower than electromagnetic oscillations, and the dynamics of how they propagate in the environment seem pretty different. So it seems hard to imagine the Andromedan’s experience could be exactly like ours.
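            (Back-of-envelope numbers, assuming sound at 343 m/s in air, and taking your 445 Hz receptor against our ~445 nm cone.)

            ```python
            sound_v, light_v = 343.0, 3.0e8  # m/s (assumed speeds)
            f_sound = 445.0                  # Hz, the Andromedan receptor peak
            wl_light = 445e-9                # m, the human S-cone peak

            wl_sound = sound_v / f_sound     # ~0.77 m
            f_light = light_v / wl_light     # ~6.7e14 Hz

            print(wl_sound / wl_light)       # ~1.7e6: the waves are a million times larger
            print(f_light / f_sound)         # ~1.5e12: light oscillates a trillion times faster
            ```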

            There’s actually an alien in one of the sci-fi books I reviewed (the name of which I can’t reveal without making this a spoiler) that evolved in a thick atmosphere where no sunlight gets in. So they never evolved eyesight. But they did evolve hearing to such an extent that it allows them to map their environment. At one point the alien asks a human, “Do you hear light?” After thinking about it a bit, the human confirms that they can “hear” light. But the alien’s abilities allow it to perceive what’s going on in other rooms and around obstacles that block a human’s eyesight, so it’s not portrayed as identical to ours.

            Maybe another scenario would be if the Andromedan evolved around a red dwarf, whose peak spectral emission would be at a different wavelength. To us on the alien’s planet, things might all look very dim (with most of the light in the infrared range), but the alien’s color perceptions would be calibrated for that environment and their evolutionary needs. Their version of red might be calibrated to whatever their high-affordance food is on that world, their green toward the immediate background, their blue to the wavelength of their sky, etc. So their experience might be similar to ours, but only relative to our respective environments. On our world, everything might look like varying shades of bright violet to them, or it might all be in their ultraviolet.
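            (The red dwarf point can be made quantitative with Wien’s displacement law, treating stars as ideal blackbodies with approximate temperatures.)

            ```python
            WIEN_B = 2.898e-3  # m·K, Wien's displacement constant

            def peak_nm(temp_kelvin):
                """Peak emission wavelength of a blackbody at the given temperature."""
                return WIEN_B / temp_kelvin * 1e9

            print(peak_nm(5778))  # Sun: ~501 nm, mid-visible
            print(peak_nm(3000))  # red dwarf: ~966 nm, well into the infrared
            ```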

            I don’t think there’s any reason in principle a simulated human couldn’t have simulated color experiences based on simulated light. Although if it went down to the level of atoms, it seems like it would have to be a pretty slow simulation.

            Like

  5. You’ve prolly seen this: “Miracle fruit contains a glycoprotein called Miraculin. (Really, that’s what it’s called). Miraculin binds to your tongue’s sweet receptors. The Miraculin found in mberry modifies the shape of your taste buds, confusing their recognition of sweet and sour.”

    Is that chemistry, bio-sensory or some mystical, quintessentially personal sensation?

    I like licorice. The original kind. That flavor of glycyrrhizin (similar but not identical to anise and fennel). It’s very distinctive and controversial, a love it or hate it kind of thing. No doubt some portion of humans have a biological reaction to it. Like that cilantro = soap experience. Others, I suspect, are socially conditioned to enjoy or detest the flavor. When I taste it, I’m reminded of fond memories, of a soothing, quiescent time from my past (non-distinct, but palpable). But that’s just me, my stored experiences. All based on training.
    You?
    M3GAN? AI-nGie?

    Liked by 1 person

    1. I hadn’t heard of Miraculin before. So I wonder if the taste is a mix of sweet and sour (which sounds kinda tasty), or something completely different. Since it’s screwing with the taste buds themselves, sounds chemical. Of course it has biological effects (which are special applications of chemistry and physics after all).

      It’s been a long time since I had licorice. I think I remember it being okay when I had it. But it seems like the last time it was presented to me as an option, I chose something else. (We’re talking late childhood here.) But I think your point about all the associations kicking in is a crucial one. And as you indicate, it’s not something we can necessarily bring into our consciousness, except in the haziest of senses.

      I haven’t seen M3GAN. I don’t have the relevant streaming service, and I’m not into horror, although I might watch it if I ever get incidental access to it. What is AI-nGie?

      Liked by 1 person

  6. Great post Mike. One thing I wanted to say is that “it’s hard to imagine an inversion not involving a change in behavior” ≠ “To say the colors don’t change here is to assert that there remains an intrinsic nature to the color independent of all its causal effects.”

    I feel like this is an unfortunate equivocation. One can hold that there is some additional internal aspect to color which isn’t being described or captured by “relationships between colors, brightness, and saturation levels” without that internal aspect being intrinsic or acausal. Take a simple 3-layer neural network, with the input, hidden, and output layers labelled [A, B, C] respectively. It’s sensible to simultaneously believe that 1) inverting the weights of C while leaving A & B intact will modify the overall output behavior & 2) There is more to the functionality of the network than just the parameters of C.
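    (A minimal runnable version of that example, with hypothetical random weights, in case it helps: negating C changes the overt output even though A and B, most of the network, are untouched.)

    ```python
    import numpy as np

    # Hypothetical 3-layer network [A, B, C]: input -> hidden -> output.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((8, 3))  # input weights (untouched by the inversion)
    B = rng.standard_normal((8, 8))  # hidden weights (untouched by the inversion)
    C = rng.standard_normal((2, 8))  # output weights (the layer we invert)

    def forward(x, C_weights):
        h1 = np.tanh(A @ x)
        h2 = np.tanh(B @ h1)
        return C_weights @ h2        # overt behavior is read off here

    x = rng.standard_normal(3)
    print(forward(x, C))   # original behavior
    print(forward(x, -C))  # inverted output layer: behavior changes, yet the
                           # internal structure carried by A and B is intact
    ```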

    By equivocating between the two statements, you unintentionally make it seem like anyone who holds the intuition that some aspect of color remains immutable in invert cases has to be a non-physicalist. But the neural network example shows that’s not the case. Perhaps there’s some internal structure to phenomenal color states that we are semi-aware of, and which we implicitly hold fixed in invert cases (where we only modify the external relations).

    Just some food for thought.

    Liked by 2 people

    1. Thanks Alex.

      I actually had intended to put a disclaimer in the post somewhere that we don’t know all the causal effects yet, and so there certainly is additional functionality to account for. My goal here isn’t a full accounting, just a weakening of the intuition that no accounting is conceivable.

      On the other hand, that statement was actually referencing the first sentence in the immediately preceding paragraph: “Of course, someone could insist that we just need to rewire all the associations so that we have the same reactions to shades of blue and violet that we previously had to reds and yellows.” Granted that language is very hand wavy, but it seems pretty comprehensive. I think it might cover what you described.

      But I wonder how robust the scenario could be anyway. It would require that there be remaining aspects in the hidden layer that propagate enough for us to have some awareness of them, but not enough to have any effect on behavior. It’s hard for me to see that without the possibility of it “leaking out” into behavior. At the least, it seems like it would be a fragile situation.

      Liked by 1 person

      1. One possibility is that we are aware of some experiential color characteristic, but not the structure that determines that color characteristic. Kind of like how we can’t directly observe the atomic structure of water, but the atomic structure still determines the macroscopic properties of water that we do see. In that case, we might easily form the impression that there’s more to the experience of color than its relational properties, even if that’s not true.

        Also, my point wasn’t so much geared to the phenomenal realist, but more to the person with the above intuitions. It’s only if you form the (mistaken in my opinion) impression that this color characteristic is intrinsic, that you then run into the difficulty of the characteristic having to play no dispositional role in behavior. But if we concede that the characteristic can be formed by a structural property, then there is no reason to think that the intuition that “color goes over and above its relational properties” means we also have to accept a behaviorally inert property of color. We should actually expect such an intuition if we weren’t aware of all the relational features of color (but still aware of their effect).

        It’s also understandable why someone might form the impression of color intrinsicality in the above scenario. If you think you’ve accounted for all the relational properties of color experience, but there are actually additional relations beyond your awareness, then it will seem like there has to be something more to color than its structure.

        So I’m not technically disagreeing with your argument against invert cases, but just pointing out that people can be physically correct in their intuition that the colors won’t change in certain invert cases, because invert cases don’t actually modify all the relational color properties.

        Liked by 1 person

        1. It does seem like the intuition of color intrinsicality is hard to shake. I think the reason is that we don’t have access or insight into all the things that make up our experience of color, just the result. So even if we did have a full accounting in terms of structure and relations, it would never feel like it. It’s hard to remember everything that’s been explained when pondering what seem like simple experiences.

          Of course, this isn’t an issue with just color perception. It applies to all conscious experience.

          Liked by 1 person

          1. I very much agree. I’m actually exploring this phenomenal-structural divide in my paper on a solution to the hard problem, wherein I argue that much, if not all, of the hard problem can actually be attributed to differences in how people conceive of structure, as opposed to differences in their phenomenal conceptions. If you start with an ‘abstract’ conception of structure, then phenomenal experience, no matter how deflated, just won’t be captured by your structural conception.

            One thing that I think might be helpful for those who have the intuition of a stark structural-phenomenal dichotomy is to start with a top-down approach, rather than a bottom-up approach.

            Instead of beginning by exhaustively characterizing the structure of color experiences, it might be better to start with a phenomenal color experience, and then begin decomposing it further and further, until we can intuitively grasp that there’s nothing left but the structure.

            Some might still hold that invert and zombie cases are conceivable no matter how much structure we add (if they start with an abstract conception of structure). To remedy this, in my paper I put forward an infinite decomposition thought experiment. If we imagined that colors were composed of proto-colors, and proto-colors composed of other things, ad infinitum, then there won’t be any phenomenal intrinsic component left over. By definition, in the infinite decomposition scenario, every phenomenal state will have some structure.

            It doesn’t matter whether infinite decomposition is actually physically real, for it might not be possible to have “structure all the way down” depending on our physical theory of the universe. What matters is that it’s conceivable. Once we admit this, we have to admit that phenomenal consciousness is possibly structural, and that in turn means that any abstract conception of structure isn’t necessarily the right one.

            Liked by 1 person

          2. That sounds like an interesting strategy Alex. I haven’t done any reading in phenomenology, but I wonder if there’s anything in that literature that might help.

            Another thing is that it also isn’t always about reduction. At some point it becomes about looking at the same reality from a different perspective. This gets to that bit vs transistor example I’ve used before. No matter how hard you analyze a bit within software, no further reduction is possible. It requires stepping outside the software paradigm, to the hardware one, in order to continue.

            Of course, that requires reaching a point where the entity in one perspective non-controversially maps to its equivalent in the other perspective. Admittedly not an easy point to reach, and probably impossible in a manner that will satisfy absolutely everyone.

            Anyway, this paper is sounding more and more interesting!

            Like

  7. I’m gripped by the introduction of “bitter” and “sweet.” This is a part of the argument that could use some unpacking.

    Taking the metaphor of a colour wheel we simply spin around, let’s say that “blue” signifies “cold and unpleasant” and orange signifies “warm and comfy.” This gives us, for the sake of argument, a direct correspondence with “bitter” and “sweet.”

    Spin the wheel, and now what I think of as “blue” because it’s cold and unpleasant is experienced by someone else as what I’d call “red.” For them, this phenomenal experience means “cold and unpleasant.” It happens to appear differently to them, what I’d call red, but they don’t call it red. They call it “blue,” just as I do, and neither of us is the wiser.
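    (For anyone who wants the wheel-spin literally, a toy rendering, assuming a standard HSV hue rotation; it illustrates the inversion, not a model of perception.)

    ```python
    import colorsys

    def spin_wheel(rgb, degrees=180):
        """Rotate a color's hue around the wheel by the given angle."""
        h, s, v = colorsys.rgb_to_hsv(*rgb)
        return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

    orange = (1.0, 0.5, 0.0)   # "warm and comfy"
    print(spin_wheel(orange))  # ~(0.0, 0.5, 1.0): a cool blue, "cold and unpleasant"
    ```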

    Just the same happens with “bitter” and “sweet.” What I call “bitter” because its taste is off-putting, they also call “bitter” because its taste is off-putting. But if they have spun the wheel, to them, the phenomenon is actually what I’d call “sweet.”

    It may seem incomprehensible to me that the quale I call “sweet” could possibly be experienced by someone else as repulsive. But unless the quale itself is somehow inherently repulsive, it’s an idea we can entertain.

    Moving the argument back to colour, we’d have to ask whether the quale I call “blue” speaks inherently of “cold and unpleasant.” The answer could be in the negative. Perhaps someone experiences a nice warm fire as what I’d call “blue,” finds it comfy and pleasant, and uses the word “orange” for it, because that’s the name we use for whatever quale they associate with “warm and comfy.” The situation is easier to conceive for colour than taste, but there’s no real difference. We’ve simply allowed the more visceral significance of “bitter” and “sweet” to muddy our reasoning.

    But now we have raised the question of whether a quale itself might be inherently repulsive or attractive. A warm and comfy thing might always evoke the quale I recognize as “orange,” and anyone who experiences it would have the very same quale — almost as if it were a “property” of the warm and comfy thing.

    It’s an amusing speculation, but since we have no way to compare our qualia, quite pointless. We are dealing with the “beetle in a box” problem pointed out by Wittgenstein. But Wittgenstein did an odd thing: he privileged the public over the private. We all agree to call something a “beetle,” even though I have no idea what you are really looking at in your matchbox. The public use of “beetle” is all that can count; of our private experiences, we cannot speak. And yet it is “public,” not “private,” that is truly the mystery. What is a “public” experience? Who has one? From the perspective of the individual who enjoys experience, there is no such thing as “public;” it is no more than a helpful abstraction, invoked to make sense of things beyond our own bodily integrity.

    Functionally, the public “beetle” amounts to a grey blur. Only the private “beetle” is real. We tend to assume, for purposes of sanity, that the public “beetle” is much like it; and since qualia cannot be compared, there is no reason to assume otherwise.

    So “red” is what it is for me, and that must be the end of reasonable discussion. We can say with assurance that the function of the word “red” is the same for everyone who speaks English. The function of the quale I call “red” is directly inherited from that. When we talk about “the function of colour,” we aren’t talking about qualia at all, but about behaviours associated with wavelengths. The question of qualia is entirely untouched. But presumably we do have them, or why would we struggle to talk about them?

    Liked by 1 person

    1. I think I’m with you on a lot of the reasoning you go through here, at least until the end. The question is, if the concept we’re discussing can’t be touched by causal changes, and can’t itself have any causal effects in the world, then what reason do we have for keeping it?

      I suspect you’ll say because we all experience it. But then, what exactly do we experience? The functional processing, or some acausal add-on separate from that functionality? If we remove the experience (as in blindsight), it seems to have functional consequences. If we disrupt the functionality, it seems to have phenomenal consequences. Which implies to me that either we need to revise our understanding of qualia, or set the concept aside.

      In the end though, if someone insists that acausal qualities exist, I don’t see any way to establish that they do or don’t. I can only observe they seem redundant to a causal account.

      Like

      1. The function of “bitterness” is to keep us from being poisoned. Its function is not to make us like or dislike something; that’s a secondary consideration.

        We might encounter someone unable to sense bitterness, who manages to poison themselves, and suggest that surely here, the bitterness (or its absence) played a causal role. But since they can’t experience bitterness, we must also venture that perhaps they are physically defective in a functional way that fails to keep them from being poisoned. The epiphenomenon of unpleasantness needn’t enter into it.

        Alternatively, we might encounter someone who is able to sense bitterness, but is drawn to it, as we might say perversely, and who decides to put two cups of apple seeds in a blender and gulp down the resulting mush, perhaps delighting in the shivers this causes, and so poisons themselves. Here the bitterness looks causal. But again, if we are to stick to the program, their taste for bitterness is the sign of a physical malfunction whereby they failed to avoid poison.

        In the latter case we still have the option to say that, in a properly functioning system, bitterness must necessarily be unpleasant. Now, at last, we can claim that the bitterness is causal. Moreover it remains comfortably identical with the non-phenomenal causality of avoiding death from poisoning (setting aside other issues of identity). But now it is exposed as a wheel that does no work, as Wittgenstein might say.

        This is where we are stuck. Functionally, we have no use for bitterness that is not better served by a more “physical” explanation. Shall we say then that there is nothing to explain? But the proposal to explain bitterness, and colour and so on, was our starting point. So one way or another the program is a failure; either qualia exist and it has failed to explain their function, or qualia don’t exist — and it has failed to explain their function.

        Personally I find the first failure more comfortable. As you suspected, I would appeal to the “bitterness-to-be-explained-functionally” as the fly in the ointment of this particular causal approach. What is particular about it, decisively, is that it confines its search for “causes” in such a way as to render bitterness and whatnot useless from the outset. It has merely taken the long route back to its initial assumptions.

        An alternative initial assumption, for some sort of functionalism, might be that the purpose of bitterness is to help us avoid being poisoned, and that a concomitant physical apparatus has been constructed in support of the required quale. This is to switch the cart and the horse around; I’ll let you decide which way. But for what it’s worth, Whitehead put a sense of better and worse at the heart of his metaphysics.

        Like

        1. I fear I’m missing some logical steps here. At what point do bitterness and the functionality become disassociated? It seems like all the examples but one demonstrate the functionality. And the one that doesn’t, you note, could be a malfunction or pathology situation.

          On the case where the person responded non-adaptively to the bitterness, this gets into what we mean by “bitter”. We might refer to the stimuli from the chemical reaction of the substance interacting with the taste buds. But I think we all acknowledge that isn’t where the actual taste is. That brings us into the territory of all the reactions to those stimuli. If those reactions are off, then the experience might not be of bitterness, even when it should be.

          Of course, someone could actually experience bitterness, but then for some other reason outside of the gustatory system, eat or drink the substance anyway. That could be for a rational reason (it’s actually medicine), or due to some perverse psychological issue of enjoying the taste of bitterness.

          In any case, under normal circumstances, the bitterness seems to affect our behavior. Even if we override it, we do so with some effort (as in forcing ourselves to take medicine).

          It seems like the experience of bitterness has multiple functions. One is to steer us away from a potentially toxic substance; another is to enable us to learn which substances are like that and avoid them in the future. Of course, in modern life, it won’t always be right, like in the case of medicine, and we have to override our normal aversive reaction.

          So I’m not seeing how bitterness is epiphenomenal. Am I overlooking something?

          Like

          1. It’s more likely I’m missing something, being new here and not yet ready to toss around the terms of art. I might also be mixing up your position with those of other commenters. There’s a lot here for me to absorb.

            If I understand you correctly, the causal role of qualia can present a puzzle for functionalism, “the idea that mental states are about their causal role.” This can happen if the qualia turn out to be separable from the mental states. (I’m wary of “mental state,” but for present purposes it will get us by.) You and I might experience different qualia for approximately the same mental state, say the mental state produced by interaction with an apple. The functional requirements for interacting with the apple are satisfied by the mental state. If there were a tight relationship between the state and the qualia, then the qualia would effectively satisfy the same functional requirements, and to this extent, their role might seem less mysterious (though there would be more to talk about there). But if qualia vary from one person to the next for the same mental state, then it seems as if the mental state is doing all the work in every case, and the qualia might lose some functional significance.

            Is this what you’re getting at with this post? If so, most of what I’ve said so far is a restatement of the problem. We have a couple of people who poisoned themselves because their qualia were at variance with a presumed normal mental state that functions to avoid such disasters.

            Functionalists might like to be able to ascribe this unfortunate causality to the variant, one would say erroneous, qualia. But — and maybe this is where I wasn’t clear — functionalists, or some functionalists anyway, are also committed to ascribing the causality to a mental state. Thus, for the qualia to fail functionally, they are best understood as bound to a mental state that has failed functionally. If the mental state were normal, the poisoning would not have occurred, whatever the qualia may have tried to say about it. It is the organism and its mental state that has chosen to eat poison; any qualia present are causally “along for the ride,” although in some way that we hoped would have been helpful.

            When you say, “all the examples but one demonstrate the functionality,” I don’t understand. The person who has no sense of bitterness has not demonstrated the requisite functionality (avoiding poison); nor has the person who finds bitterness inviting. They have failed to demonstrate it at two levels: that of the quale, and that of the mental state (their brains being presumably broken in this respect). The problem with their quale is presumed to be tightly coupled to a problem with their brains. The only example that does demonstrate the functionality is the last one, where a properly functioning mental state is tightly bound to a properly functioning quale.

            The point of all this is that a random quale has no explanatory power. It might seem as if there’s a way open to say, “This person likes bitterness, but their mental state is normal for survival, and so they avoid it,” but put so baldly, it’s obviously problematic. We can hardly say that the person who cannot taste bitterness, or the person who enjoys it, has properly functioning mental states, if this causes them to poison themselves.

            It’s by way of this thought experiment, to answer your question, that the bitterness and the functionality of the mental state become disassociated. Functionalism refers to the mental state definitively for the causality, and invokes qualia if it must, with crossed fingers, hoping that they partake in the same causality, but (if this post means anything) unable to be certain. Into this crack slips the question: do the qualia even have any causal power, or are they just an extra wheel?

            In your first response you asked essentially the same question: if what we’re discussing has no causal effects, why keep it? You do seem to want to keep it, but a physicalist brand of functionalism seems to weigh against the need for it. The mental state does all the work, and the qualia are made effectively epiphenomenal, or “along for the ride.”

            Does this help, or am I still on the wrong track in my response to your post?

            Like

          2. I can definitely understand having trouble keeping everyone’s positions here straight. Myself, I’m a functionalist, and the post was largely about why I don’t see color as an obstacle to functionalism.

            I mostly stayed away from the term “qualia” in the post because people often mean different things with it. Many scientists use it simply to refer to properties of introspected content. There’s nothing necessarily non-functional about that. However, philosophers typically present qualia as something separate from functionality and behavior, something irreducible, indescribable, unanalyzable, scientifically inaccessible, yet infallibly known.

            Obviously if qualia in the latter sense exist, the post’s premise fails. The question is what it is about these entities that makes them necessary. By definition they’re causally redundant. Why shouldn’t we regard them as ontologically redundant as well? The argument is that they’re part of our experience, but that seems to assume that experience isn’t functional. That’s what the inverted qualia thought experiments are supposed to establish, that there can be a difference between experience and function. Which is why I think the problems with those scenarios are important.

            So to your final question, I’m not really looking to keep epiphenomenal qualia. I think they’re a mistaken concept. But I’m interested in any evidence or reasoning that might show their necessity.

            Like

          3. You say, “By definition they are causally redundant,” and this is my point, or the main part of it. The other part is that “ontologically redundant” needs to be unpacked.

            First, causal redundancy. We have before us two candidates for causality: mental states (or whatever you think best describes the medium of causality), and what the philosophers call “qualia,” but most people would call “my private experience.” Out of the gate, you’ve chosen mental states as the home of causality. Where else can this end? Of course qualia will prove to be of no use to the causal model.

            Yet people keep insisting on “my private experience” (MPE). From your standpoint, this is a philosophical confusion to be cleared up. We have to figure out why we think we have MPE, why we mistake it as causal, and depending on our conclusion, explain the mistake.

            The alternative is to reconsider. We’ve arrived here because of the way we allotted causality in the premisses. What if we were to suggest that the qualia, or MPE, were the loci of causality instead?

            I’ll drop that mic for the moment. Let’s move on to “ontological redundancy.” I subscribe to the postmodern notion of ontology adopted by the taxonomists who came up with “Web Ontology Language,” or OWL, where “cup measure” can exist in an ontology of cooking. So from my perspective, you’re saying that in your proposed ontology, the one you associate with causality, MPE is redundant.

            There is an older definition of ontology that requires what exists to be more stable. It was in quest of this stability that Descartes said, “I think, therefore I am.” The one thing we do know exists is MPE. I hesitate to call it ontologically redundant.

            Still, I’ve not given up on the postmodern definition. Can we imagine an ontology of MPE with causality? Where would this lead?

            Like

          4. I would say if MPE is causal, then it isn’t something in addition to or in lieu of the functionality, it is the functionality, or at least part of it. It would imply that MPE makes a difference in the body and world, which seems to make it tractable for science. So it seems to lead us back to functionalism.

            I suppose we could still keep it away from science with interactionist dualism. In that case, it’s causal, but in some separate realm from the physical. But that leaves us figuring out how they interact in the classic mind-body problem, particularly with what’s widely acknowledged these days as the causal closure of the physical.

            A way around this is to insist that MPE’s causal effects only affect MPE, that the experiential causal chain runs parallel to the physical one, a notion that is often called psychophysical parallelism. It requires some principle to keep them in harmony. Some theists are comfortable with this, because they just invoke God. But for anyone not comfortable with that explanation, it seems like a problematic concept.

            If MPE is causal, it seems like the most parsimonious explanation is that experience and functionality are two sides of the same coin. But I would say that as a functionalist. 🙂

            Like

          5. What’s missing from this account is the possibility that MPE (I hope I haven’t started something with that acronym) is compatible with functionalism, but a functionalism from which physics emerges. If we approach functionalism as a physical thing, and try to explain how “my experience” (ME) makes a difference in the world, we are indeed contemplating Cartesianism. Our problems derive from the decision, at the foundation of modern science, to locate causality in “objects,” leaving us to wonder how “subjects” can affect them. If we find ourselves still pondering this question, it’s a sign we haven’t really escaped Cartesian dualism.

            But if we approach functionalism from the inescapable ontology of ME — a better footing, in my opinion — and, having accepted that “ME” exists, look for a way to understand it as causal, we are in a better position. We can now argue that what appears to us as the physical world, is simply the manifestation of causal interactions between MEs.

            This is different from psychoparallelism, because the latter assumes “subjects” and “objects.” The idea I’m presenting — from Alfred North Whitehead, mostly — explains objects in terms of subjects.

            You could do this and still be a functionalist, and avoid Cartesian dualism (or rather, extricate yourself from its lingering clutches), and not have to contemplate psychoparallelism or monads, and you would not need to resort in the end to dual-aspect monism (which actually has a lot to recommend it). And you would find opportunities for keeping free will and morality. But you might have to say some weird things about rocks; and yes, you might end up scaring people with the G-word.

            Like

          6. The issue with dual aspect monism is it matters what is being claimed to have the dual aspects. A lot of dual aspect theorists look to situate it in matter, or information. That fits if you’re working with a concept of fundamental experience. But it requires adding something new to the ontology of standard physics.

            I don’t know that most dual aspect theorists would be enthusiastic about situating it at the level of functionality, of causal processes, since such processes are complex, in other words, reducible, implying that the other aspect would be as well. So you could call functionalism dual aspect, but I’m not sure typical dual aspect monists would recognize their view.

            I’ve heard that Whitehead was a panpsychist, although the view you describe seems like it crosses over into idealism.

            I have no issue giving up libertarian free will or moral realism. On free will, I’m a compatibilist, accepting that the mind is a system that obeys the laws of physics, but yet social responsibility remains a coherent and useful concept. And to me giving up moral realism seems to mean accepting we have to actually convince people to live by the rules we want to advocate. Overall, my views tend to lean Epicurean anyway.

            I’ve actually already said weird things about rocks.

            Are rocks conscious?


            Admittedly it’s not nearly as weird as what others have said.

            Liked by 1 person

          7. To call experience and functionality (meaning, I think, physically-based functionality) “two sides of the same coin” is to make the same move as dual-aspect monism. Certainly, what is being claimed to have two aspects arises as the next issue. “Information” I could understand, but I hope no one has been so short-sighted as to suggest “matter.” For traditional dual-aspect monism, and I daresay for a variant that invokes physically-based functionality, “matter” is already one of the aspects.

            “Adding something new to the ontology of standard physics” as the go-to solution amounts to standing loyally by standard physics, and either trying to fix it, epicycle-style if necessary, or trying to ignore and minimize its problems, as Kuhn suggested. A lot of people want to save standard physics, and for good reasons; and I don’t mind, as long as it’s a choice. But for those firmly embedded in the paradigm, the sheer impossibility of comprehending an alternative presents an obstacle. I think this is where we are in this discussion. Your answers keep invoking the standard paradigm as definitive, and building from it, as if you hadn’t even heard what I’m saying.

            Liked by 1 person

          8. I promise I’m not trying to frustrate you. I can only tell you the reasons why I hold the views I do, and the obstacles that prevent me from accepting the current alternatives. I like hearing the same from others, even when we’re in totally different places. It’s rare anyone gets convinced during the conversation, but that’s okay. Changing our minds, if it happens, is almost always a long game.

            Anyway, enjoyed the conversation. Hope we have many more.

            Liked by 1 person

          9. Hi AJ,
            After skimming your conversation with Mike I have a suspicion where things went off. You were addressing a “My Private Experience” concept as causal. (“The alternative is to reconsider. We’ve arrived here because of the way we allotted causality in the premisses. What if we were to suggest that the qualia, or MPE, were the loci of causality instead?”)

            There seems to be an issue with positing the existence of inherent privacy in a perfectly causal world however. If it’s causal then couldn’t those causal dynamics theoretically be detected and reproduced by means of those same causal dynamics somewhere else? Thus wouldn’t privacy at least theoretically be lost?

            My thought is that you might be able to restate your point without the inclusion of causal privacy. Or if that was an essential component, then I guess you might explain why a causal world would permit the existence of intrinsic privacy, such that those dynamics couldn’t even theoretically be detected and reproduced elsewhere. My apologies however if I’ve misinterpreted your discussion altogether!

            Like

          10. Hi, Eric. (I’m Jim.)

            I don’t think this is where the conversation with Mike went off. That happened because his replies, thoughtful and informative as they were, continued to take for granted and explore an assumption I was trying to question, and the implications proper to that assumption.

            When I brought up “my private experience” as what an ordinary person might say instead of “qualia,” I knew the word “private” might bring trouble. Later I decided to amend it to “my experience,” which happily produced a more powerful acronym.

            But the problematics of privacy remain legitimate, and I appreciate your question. To my way of thinking, the concept “public” is equally problematic, as I mentioned in my remarks about the beetle in the box, but we needn’t get into that.

            Take for an example your finger poking my arm. This is causal in various ways. You feel pressure at your fingertip, and I feel pressure in my arm. The fact that only the two of us partake of this causality does not diminish it in any way.

            It also causes visual impressions in others. Each witness to the act has their own private impression. We can also interpret this causally, as an interaction between them and a “something,” perhaps turning to ideas like Manzotti’s to explain the “something.” We may or may not attribute a reciprocal experience to that “something,” but if we did, it would be a private experience for the something, or a collection of private experiences resulting from its interactions with each witness.

            Nothing about this commits us to everyone in the room having the same causal experiences — as if everyone had to feel it in a fingertip, and an arm, and see the interaction from everyone else’s angles, for it to make sense causally.

            But you could have something else in mind when you ask whether the causal dynamics might be reproduced somewhere else. We’re accustomed to the idea that we can reproduce the causal interaction between two billiard balls using any other two billiard balls. But how could my private experience of being poked be reproduced, and still remain private? Here we find ourselves on the set of Star Trek, talking about malfunctioning matter transporters and asking whether there can be two me’s (or two ME’s) in different places. If we both went into the transporter, and instead of dissolving us at the entrance it simply created duplicates in another room, and then in one room you poked me, is it conceivable that owing to the duplication, the other me would feel the poke? It seems unlikely, and therefore we would say the causal character of the poke has more to do with the fingertip and the arm than with the ME.

            But of course this assumes that we have reproduced ME, which on the standard account is technically possible by assembling the requisite molecules in the other room. I believe this possibility raises various conundrums for the standard account, although I can’t call the literature to mind at the moment (Bernard Williams, perhaps). On an alternative account that looks to qualia, or MPE, or ME, as the locus of causality, these conundrums don’t come up, because for this account no two billiard balls are interchangeable. Each has its own private locus of causality.

            Does this address your questions?

            Liked by 1 person

          11. It’s nice to meet you Jim. I’ve gone through the conversation again, though I should admit that abstract reasoning tends to challenge me. Fortunately most around here seem to grasp my handicap and thus speak with me in ways that I find more accessible. And indeed, I’ve also found your response to me relatively accessible. I only grasp bits and pieces of your conversation with Mike however. I merely suggested on a whim that the breakdown there might be because there are causal issues with “intrinsic privacy”, though yes I now see that you did attempt to correct that yourself.

            Though I do agree with the general theme of Mike’s post (and might later bring up a model of my own which suggests that colors shouldn’t function quite the same switched around), I’m also a general critic of “functionalism”, “computationalism”, and “illusionism”. It sounds like you’re also at least somewhat critical. Thus perhaps we could add to each other’s positions? See if the following makes sense to you.

            Functionalism essentially states that something will exist as something else to the extent that it functions like that something else. Its origins should reside with the Turing test — if we can’t distinguish the output of a computer from the output of a conscious human, then the computer must be displaying consciousness as well. It’s a simple way to imply that consciousness must exist as information processing alone, given the common presumption that some day we’ll create an information processor that can output whatever a human does. Regardless of that specific unverified claim, observe that if taken strictly, functionalism cannot possibly be false. Here it’s tautological.

            Computationalism is misleadingly named, since the name merely implies that the brain functions like a computer. Instead it posits that consciousness exists by means of the proper information processing alone. I consider this non-causal because information should only exist as such by means of what it informs. So just as the information that animates your computer screen wouldn’t be informational in that sense given a broken connection, the premise of causality means that there needs to be something that your brain information animates to exist as your consciousness itself. And what might that informee effectively exist as? I’m not sure of a reasonable option other than neuron-produced electromagnetic radiation.

            Finally, illusionism seems to be a way to dismiss as magical any consciousness notions other than the computationalist account. Nevertheless, as just mentioned, I dismiss that account as magical as well. Whether in the form of a book, genetic code, or anything else, I can’t think of a single effective example of “information” without something that it effectively informs. Thus brain information simply converted into more such “information” cannot in itself create ME. ME will only exist as something which brain information appropriately animates.

            Theoretically when I poke you, neurons inform my brain of this. Causality then mandates that it create information that informs certain brain dynamics (like the right kind of electromagnetic radiation) that exist as such an experiencer, or My Experience of poking you. It’s the same for you being poked, as well as someone watching, and so on. Furthermore if there is a perfect duplicate of you in a different room that doesn’t get poked, then its brain shouldn’t be sent the right signals to inform the physics that exists as an experiencer of getting poked. To me that all seems pretty clear.

            Like

          12. Thanks for your thoughts on functionalism, computationalism, and illusionism. Based on what I’ve absorbed so far, and prior experience with similar ideas, I would put my understanding of these positions as follows:

            Functionalism identifies things of interest by the function they exhibit, which is to say, the causal effect they have. If there is no causal effect, there is nothing to talk about. If there is a causal effect, then there is something to talk about. If two things have the same causal effect, then they are functionally the same thing, which means there is really only one thing to talk about. A functionalist position on the Turing test would be that, if the computer’s responses are functionally equivalent to those of an awareness, then the computer is functionally aware.

            This is close to pragmatism, but unlike pragmatism, functionalism takes a stronger approach to the ontology of what is worth talking about. It is, as I see it, a general ontological position, which may be applied to issues of mind and consciousness among others. In a given case, it may either provide a coherent account, or raise as many questions as it answers.

            In the case of mind and consciousness, where two things appear to have the same causal effect — on the one hand what Mike has called “mental states,” and on the other, what we think of as the associated inner experience — a functionalist would argue that, since they are functionally the same thing, there must be only one thing.

            The problem here is that, if there is anything one could say about mental states that one can’t say about inner experiences, or vice-versa, this would suggest they aren’t the same thing. But for functionalism, “Anything one can say about” amounts to “A valid function of.” This allows some play. For example, we might ask whether one can say of a mental state, as one might of an inner experience, that it is “redness.” Functionalism looks to the function of “redness” for an answer, and concludes that since it serves the same function in each case (whatever that function might be), the things must be the same.

            But can one really say that a mental state is “redness”? For functionalism, this is required, but others might feel uncomfortable using the word that way. Mental states could be electrical field states, or static nodal relationships, but not “redness.” Likewise inner experience feels like “redness,” but it doesn’t “feel like” an electrical field state or a nodal relationship.

            Computationalism is specifically about issues of consciousness, rather than ontology in general. As I understand, it holds that consciousness emerges from complexity, typically of a richly looped, feedbacked, or self-referential character. (I’m sure there’s a better word for this, but it escapes me just now.)

            For a medium to support such complexity, we could look to electrical fields or neural networks, for example. Some would suggest “information” as the medium, but I have a problem understanding what “information” could be without something to inform.

            The medium provides the causality. But there is a further problem, because information can be independent of a medium. For example, two CDs can contain the same music, or a CD and a vinyl disc could contain the same music. If consciousness is information, then we ought to be able to reproduce it in two places, or even in two different media, and still have the same consciousness. This has to do with your position that the duplicate’s brain is different and therefore the experience is not shared. That’s reasonable, but if information can be reproduced in two places and still be the same information, and the information is the consciousness, then we are logically obliged to contemplate the unreasonable: ME, in two heads at once. To avoid it, we have to rethink the idea that consciousness is the same as information.
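            (A trivial illustration of that medium-independence, assuming the “information as bit pattern” reading: the same byte sequence written to two different files, stand-ins for the two media, is on that reading the same information in two places.)

            ```python
            import hashlib
            import os
            import tempfile

            music = b"the same recording, as a byte pattern"  # stand-in content

            digests = []
            for _ in range(2):               # two distinct "media"
                fd, path = tempfile.mkstemp()
                with os.fdopen(fd, "wb") as f:
                    f.write(music)
                with open(path, "rb") as f:
                    digests.append(hashlib.sha256(f.read()).hexdigest())
                os.remove(path)

            print(digests[0] == digests[1])  # True: medium differs, pattern identical
            ```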

            Illusionism takes the position that we can’t talk usefully about inner experience, so we might as well ignore it and talk about something else. With regard to the current discussion about colours, there is perhaps something to be said for this rather obstinate position. We can talk forever about whether your experience of blue is the same as mine, but there is no way we could ever decide the question, so — apart from the opportunities it presents for abstract reasoning — there’s no point in even entertaining it. It’s meaningless, and all the terms associated with it, about inner lives and so on, are equally meaningless. Calling them “illusory,” to me, seems a bridge too far, and among illusionists there is a certain amount of waffling about this.

            I hope this helps us compare notes.

            Liked by 1 person

          13. That’s definitely helpful Jim. It sounds like our concerns are quite consistent. I do have some answers as well however, some of which you might appreciate.

            Regarding functionalism one additional point that I’d have you consider is that it’s definitionally impossible for this to be false, or at least when construed as broadly as “Something is the same as something else to the extent that it functions the same”. If anyone would like to present a scenario where this conception of functionalism could conceivably be false, then I suspect that they’ll have added something else to the definition for that theorized falseness to potentially exist. And it’s fine if someone does want to define functionalism in such a way. In that case however I’ll hold them to that specified tighter idea rather than let them potentially slip over to the “true by definition” version above for validation when the going gets tough. And yes I do suspect that those who have objected to functionalism but haven’t quite realized that it’s tautological, will smile once they do realize that it’s actually not some grand a posteriori position which requires assessment as advertised. Instead the position doesn’t tell us much at all (or at least when nominally construed).

            On the essentials of computationalism being “richly looped”, “feedbacked”, “self referential” or whatever, one concept that I like to use for the position in general is “algorithmic”. This is to say “rules rather than reasoning”.

            On information not existing without something informed, yes exactly. But here’s where a common definition seems to conflict with a technical definition. In our daily lives it’s practical for us to consider information to exist inherently on a CD for example, though apparently not when we get technical. A CD sitting on a table will not be informational in the sense that we generally mean, given that it won’t be providing audio content in itself — it won’t be informing an audio player or whatever. So instead consider calling a lone CD “potentially informational”. Thus “information” could be pared down to what actually informs the sort of machine that instantiates the content that we’re referring to at a given point in time. Here a book would be potential information until it’s actually being read, for example. Furthermore if we mean that the CD is informational to the table which it rests upon given that its mass affects it, then yes that works too. In that case the table would be no less an instantiation medium of the CD’s information than an audio player would be in the other case. So here information is defined such that it only exists in reference to what’s currently being informed in the specific sense that’s being referred to. Otherwise we might better consider this “potential information”.

            Note that this perspective suggests that brain consciousness cannot causally exist in the absence of some sort of consciousness mechanism that brain information animates as such. That’s the pattern for all else that exists informationally — potential information becomes actual information by means of mechanical instantiation. So here the question becomes, what sort of mechanism might be animated by brain information to exist as consciousness itself?

            This is where I’d have you at least observe that one candidate would be Johnjoe McFadden’s proposal that consciousness exists by means of electromagnetic radiation associated with certain synchronous neuron firing. All other consciousness proposals on the market today that I know of are impossible to test experimentally, and this is specifically because they do not propose a known element of the brain to exist as consciousness itself. Furthermore the only reasonable neural correlate of consciousness found so far does happen to be the firing synchrony which his theory depends upon. (Individual firing should tend to get canceled out by other such firing, though synchrony should tend to create more unique energy levels for field effects that may constitute consciousness itself.)

            Regarding illusionism, yes I’d say that this obstinance goes a bridge too far. Why not discuss the illusion of consciousness regarding John Searle’s Chinese room? I consider my own thumb pain thought experiment to be an even more devastating display of the ridiculousness of their favored (and of course unfalsifiable) conception of consciousness. I hope that you’ll give me your thoughts on my scenario at some point. In any case, without the establishment of certain physics which exists as consciousness, yes it’s pointless to speak of whether one person’s conception of blue matches the conception of another person. This awaits an empirically verified proposal.

            Actually now that I think of it, there is one exception to this rule today. Have you heard of the two apparently still surviving Canadian girls who share a thalamus? Apparently each of them is able to partly experience what the other does. It’s a devastating case against all supernatural conceptions of consciousness. It’s the single one I know of today where subjective privacy is indeed lost. https://www.mamamia.com.au/krista-and-tatiana-hogan/

            Like

          14. I would say that functionalism is tautologous to the extent that it sets up a certain ontology, effectively defining “same,” and then insists on the primacy of that ontology over others, where “same,” being applied to a different set of entities, might have different results.

            Concerning information, we can draw a distinction between syntax and semantics. The syntax of the information on a CD differs from the syntax of the same information on a vinyl disc, but the semantics of the information is the same across both. (This is a handy example of how “same” works under different ontologies.)

            I’m not sure there is any way for me to verify, empirically, that you are conscious—and by extension, that a given computer program is conscious, or a cat, or a rock. This makes discussing the matter an exercise in frustration. I tend to give the benefit of the doubt.

            I’m not familiar with your thumb pain thought-experiment. What would be the best place to read about it?

            Liked by 1 person

            Syntax and semantics are ideas that I should keep in mind, given how central these terms have been to the often Turing-test-based discussion of consciousness in academia so far. This is to say that it’s largely been about the prospect of creating consciousness by means of computers able to effectively speak natural languages. While I think academia needs to move beyond this simplistic and yet ridiculously advanced potential task, I should also prepare myself for questions based upon that paradigm.

            Given that two different mediums of syntax (like vinyl and compact disc) may be used to represent essentially the same audio content (or even semantic content), my response is this. I don’t think we should say that syntax ever exists independently of syntactical interpretation. Lacking that, I’d instead call it “potential syntax”, as in the case just above regarding “potential information”. Vinyl and CD might be used to produce the same essential audio content, which is fine, though each should only be considered to provide syntax by means of an audio player through which such audio would emerge. Or if we mean that they provide syntax to a table that they rest upon given their mass, then the existence of that… and so on.

            In a more standard case there might be a formation of rocks on Mars which seems to display the letters of the word “Hello” when viewed from the proper distance. Shall we consider this an inherent demonstration of syntax beyond human symbolism, or rather syntax only in reference to what interprets it, such as a sufficiently educated human? The second, of course. Syntax shouldn’t be said to exist independently, but rather only in reference to what’s causally affected, which would be a human interpreter in this case.

            I agree that there’s no way to establish non-personal consciousness today, though system-based causality implies that determining this isn’t fundamentally impossible. Given the proper sort of physics, theoretically consciousness could be detected with the right machine detector, and potentially even content like the phenomenal experience of blue.

            I’ve just done a search and found that I first proposed my thumb pain thought experiment on September 15, 2019. https://selfawarepatterns.com/2019/09/08/layers-of-consciousness-september-2019-edition/#comment-34329
            It’s interesting to me that I hadn’t yet learned of the work of Johnjoe McFadden, who remains the only person I know of with a sensible answer regarding the brain physics that certain information might animate to exist as an experiencer of existence.

            I’ll now provide a modern version since I must have made some improvements since then. My goal would be for the non-causal nature of computationalism to finally become understood in general, and so to eventually succeed where John Searle tried but failed:

            When your thumb gets whacked it’s of course presumed that neural information about the event is conveyed to your brain. That’s where things get tricky however. How does thumb pain then result?

            Computationalism holds that the brain accepts such information, though it merely needs to convert it into the right new information for such a feeling to result. What this means is that if certain markings on paper (correlated with what your thumb sends your brain) were scanned into a sufficient computer, and if this computer were to print out new sheets of paper with markings on them (now correlated with your brain’s response), then something here should experience what you do when your thumb gets whacked.

            Does that seem like a reasonable thing to believe? An experiencer of thumb pain by means of the right markings on paper properly converted into more markings on paper? Surely not, and specifically because in a causal world information should only be considered to exist as such to the extent that it informs the right kind of stuff. Just as processed brain information should go on to inform something that would itself exist as an experiencer of thumb pain by means of that information, those output sheets of paper should need to be fed into another machine so that their markings might animate the sort of physics which experiences what you do when your thumb gets whacked.

            There are several ways that I consider this thought experiment to be superior to Searle’s Chinese room, though I’m quite curious about your initial impressions.

            Like

          16. Searle’s Chinese room and your thumb-pain example both present computationalism in terms of a highly simplified sequential algorithm. Computationalists may insist on sufficient complexity and self-referentiality to allow an emergent consciousness, and therefore reject both out of hand. I am not defending computationalism, just pointing out that for purposes of addressing it effectively, the analogies may be wanting.

            That said, your example has the benefit of removing an apparent agent who passes things along, which can mislead people into becoming exercised over the “ghost in the machine.”

            Within the simplified terms of the algorithm, do I find it plausible that the output marks could be conscious? I find that exactly as plausible as any other explanation of consciousness in terms of a transformation of non-conscious conditions or states into other conditions or states. It may look more obviously ludicrous, but it’s the same model. When you suggest that a more plausible model needs to invoke “the right kind of stuff,” you leave open what would differentiate marks on paper from electrical fields. It seems to me that both can carry information, so there’s no difference there. I assume you are willing to grant them, for the sake of the thought-experiment, equivalent levels of complexity or sophistication, so there’s no difference there.

            Marks on paper are a static medium, whereas electrical fields are not. This may account for a difference in persuasiveness, but as far as I know, a distinction between static and dynamic conditions or states doesn’t figure in your account. If you were to appeal to dynamics, we would probably be going in the direction of self-referentiality, loops, and emergence.

            Turning to syntax, semantics, and information, I think we’re still working out between us how to use these terms. In a bid to clarify things, you’ve proposed “potential” forms of syntax and semantics. I hope we can come to an understanding without this complication. My own view is shaped by what I know of Claude Shannon’s ideas about information.

            Shannon worked exclusively with syntax, and expressly left semantics out of his thinking. As you’ve noted, it’s conventional to think of these things as dialectically related somehow, but Shannon was an unconventional thinker. His idea that noise contains a lot of information, and signal contains less information, is something most people find counter-intuitive.
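
            (A quick way to see this counter-intuitive point is to compute empirical entropy, Shannon’s measure of information per symbol. The sketch below is my own illustration; it uses a first-order, symbol-frequency estimate, which if anything understates how predictable the repetitive “signal” is.)

```python
import math
import random
from collections import Counter

def entropy_per_symbol(s):
    """First-order empirical Shannon entropy of a string, in bits/symbol."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# A patterned "signal": few symbols, fully predictable.
signal = "abc" * 10

# "Noise": symbols drawn uniformly at random from a 10-letter alphabet.
random.seed(0)
noise = "".join(random.choice("abcdefghij") for _ in range(30))

print(entropy_per_symbol(signal))  # log2(3) ~ 1.58 bits/symbol
print(entropy_per_symbol(noise))   # near log2(10) ~ 3.32 bits/symbol
```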

            Working from this unconventional base, we can understand the moon rocks shaped to say “Hello” as an instance of syntax, but also some moon rocks next to them, organized in no particular way, as also an instance of “syntax.” On this definition, syntax is simply the arrangement of whatever. It may or may not carry meaning. Shannon was content to work on the syntax of transmitted messages with no thought for their semantic content. He didn’t care whether they even contained semantic content, as long as the correct syntax, the raw arrangement, the formation of the elements, was reproduced at the receiving end.

            This rather unconventional view helps us bypass the qualification of “potential” syntax. All formation is syntax; some formations of syntax have semantic significance.

            By extension, any arrangement at all constitutes “information.” The arrangement of microscopic pits on a CD is information. It doesn’t have to be meaningful information, and this means it doesn’t need transforming, say through a CD reader and an amplifier and a pair of speakers, to become information. Again, this lets us bypass the complication of “potential” information.

            Instead, we can talk about information that we find meaningful, and information that we find meaningless. That is, we can introduce an overlay of semantics. I would say there is no semantics without a subject for whom “meaning” makes sense. But so many of these WordPress conversations turn into the promotion of this or that blogger’s own views and opinions, rather than a helpful exploration of their fellow bloggers’ concerns, so I’ll stop there.

            Liked by 2 people

            It’s good to get your feedback on this, Jim. Often people say nothing at all. Regarding your main concern, I must concede that my argument doesn’t survive having my stated conception of information replaced with Shannon information. Nevertheless, one might question whether that’s a legitimate way to challenge a given argument. By that standard I could challenge one of your ideas by means of a definition that you explicitly do not intend me to use. (Still, I think the act of redefining the terms of others to challenge their ideas is often done obliviously. Unfortunately science remains quite primitive regarding effective metaphysical, epistemological, and axiological principles from which to work. I’d like to help remedy that problem as well.)

            I’ve mentioned a CD harboring potential information until it goes on to animate an audio player, while in another sense being informational as its mass causally affects a table. I’ve also mentioned rocks on Mars that seem to spell “Hello”, which merely exist as potential information until seen. In all cases here, information will only exist in a specified intended sense, though technically any cause could be framed as informational in terms of a resulting effect.

            If you are able to go with my intended definition to better evaluate what I propose, then there’s the question you raised about the difference between my proposal and what computationalists propose. Consider your computer screen. Your computer sends it potential information which then becomes actual information as the pixels thus become animated. The model here is that to be causal, potential brain information will need to animate the right type of causal dynamics in order for consciousness to result.

            Conversely, the computationalist account seems to stop a step short of full causality. Here the brain converts incoming information into output potential information, though without it actually animating anything that experiences its existence. I guess that’s why it seems so ridiculous to me for an experiencer of thumb pain to be created by means of sheets of paper with the right markings on them, properly processed into more marked sheets of paper. (Under computationalism it isn’t quite that the new marks on paper would feel thumb pain, I think, but rather that the informational conversion would create an undefined experiencer of some sort.) But if that second set of marked paper were to inform the sort of dynamics which the brain presumably informs to create consciousness (possibly EM field based), then I could imagine it possible for something to causally experience “thumb pain” here.

            Like

          18. It sounds to me like you’re suggesting that the medium of what we could call “Shannon information” determines its causal ability. For example, the information on a CD cannot cause photons to be emitted (although of course it can reflect them), but the same information, translated to a computer screen, can cause the selective emission of photons. (Now we’re considering a CD that contains images or computer programs, rather than music.)

            When you distinguish information from “potential information,” I think you have in mind a specific type of causal ability. From my perspective, the “potential information” on a CD can cause things, such as an iridescence on the surface, but it can’t cause other things, such as the display of images or text characters on a screen, until it is appropriately transformed. The type of thing that Shannon information can cause determines, in your terms, whether it can properly be called “information” or merely “potential information.”

            To be called “information” in your terms, I suspect, the Shannon information must have a semantic aspect. Moon rocks that seem to spell “Hello” from a certain angle acquire this semantic aspect when seen from that angle; but perhaps more significantly, they acquire a semantic aspect when seen by an observer who can appreciate semantics.

            Similarly, the information that comes to our brain through the senses does not acquire a semantic aspect until it is instantiated in what we could vaguely call, for present purposes, “brain stuff.” You suggest electromagnetic fields as a good candidate for this “brain stuff.”

            Within this idea of information lurks the idea of being seen. There’s a danger of introducing a homunculus, which in these situations is usually avoided by careful wording of subtle concepts (functionalism comes to mind). But even if we avoid the infinite regression of the homunculus, the aspect of semantics seems to require a subject of some kind to appreciate it.

            As a panpsychist, I have no problem with the presence of subjects. I just assume they’re there, more or less as I assume you are there as a subject, although I have no way to establish it empirically. For other positions, the problem is how subjects arise, and I don’t see how your flavour of computationalism helps. It seems to require a subject to distinguish true information from potential information, where true information is instantiated in a medium that supports the effect of “semantics” appreciated by said subject. But what is special about the medium, that it should give rise to a subject?

            Liked by 1 person

          19. I think you’re roughly grasping my position Jim, though in some regards not yet in a simple enough way for true mastery. Furthermore you’ve addressed my endorsement of McFadden’s “consciousness as electromagnetic information” theory. I’ll see if I can help simplify things and then apply my argument to his theory so you might better grasp and assess the perspective in general.

            As I see it, ultimately any cause may be referred to as “informational” regarding its effects. So before its effects we might refer to this epistemically as only “potential” information. I guess “pre-information” would be reasonable as well. But using the term this broadly may cause grumbles from Wittgenstein supporters. In truth however I’ve only done this for hypothetical demonstration purposes. My actual point regarding brain function concerns complex code which goes on to operate various things. Thus an “ordinary language” conception of information.

            If it’s useful to say that information only exists as such by means of what’s informed, then this suggests that consciousness by means of non-instantiated information, and so potential information, transgresses the premise of causality. Thus computationalism should have some funky implications, as displayed by my thought experiment for one. Surely whacked thumb information animates the brain in all sorts of ways. If it results in an experiencer of such pain however, then the brain must be informing something that exists as that experiencer. Just as your computer causally animates your screen, the brain should be causally animating something which experiences what the brain causes it to.

            Conversely, a prominent and scarcely challenged perspective regarding consciousness holds that information can exist independently of anything being animated by it. Here the brain accepts whacked thumb information and then processes it into non-instantiated information that itself somehow exists as such an experiencer. Thus if the right markings on paper were properly converted into another set of markings on paper, supposedly something here would feel what you do when your thumb gets whacked. Though this may be a convenient supposition, my argument suggests that there’s an incomplete chain of causality here. For causality to be preserved, brain information should need to inform something which exists as such an experiencer.

            As you implied earlier, even though EM fields are dynamic like consciousness, and unlike marks on paper, I agree that this alone shouldn’t make consciousness by means of such a medium seem sensible to me. This is indeed a hard (which is not to say supernatural) problem. But if the brain sometimes creates such an experiencer by means of processed information, and if that information can only exist by means of certain brain dynamics which it animates, here we’re able to put on our Sherlock Holmes caps and reason out what those dynamics might be, given general understandings of the brain. This would seem to be a reasoned rather than faith-based way to go.

            I’d never given this much thought, though I was offended when I realized the heart of what computationalists propose. As mentioned earlier, I first published my thought experiment in September 2019. I now see that it was only three months later, in December, that the following post from James Cross got me interested in EM field consciousness. https://broadspeculations.com/2019/12/01/em-fields-and-consciousness/

            Now something began to make causal sense to me. If there is a second plausible element of the brain which is animated by brain information to exist as consciousness, I haven’t come across it yet.

            The theory is essentially that when neurons fire in synchrony, they amplify their EM radiation to get into a consciousness zone. Thus your eyes transmit what will become information to your brain, and the output of that processing then causes synchronous neuron firing to create an EM field which itself exists as the experiencer of what you see. Theoretically “you” exist as such a field.

            One reason that some people suggest that the theory must be false is that the radiation of power lines and such does not alter consciousness. In one of his papers, however, McFadden goes through the associated physics and thoroughly demonstrates why such fields should not affect consciousness. He also proposes several ways to test his theory, some of which have already been validated somewhat, though none seem like a true potential “smoking gun” that might get academia out of its current rut here. Consider the following proposal, however.

            Given that scientists can detect neuron firing through EEG machines and such (presumably because the radiation indicates neuron firing), they should already know quite a lot about the field energy levels associated with synchronous firing. So if scientists could figure out how to create similar electromagnetic disturbances inside someone’s brain (since from outside the brain they should be too weak), and did so in enough ways experimentally, shouldn’t this eventually disturb someone’s otherwise standard phenomenal experiences for report? It seems to me that with enough such experimentation, McFadden’s theory should become known as either true or false. Can you imagine the upheaval in academia if his theory were truly validated?

            On panpsychism, I’d like to discuss this with you some time as well, here or at your blog. For the moment however I’m wondering what you make of this?

            Like

            If something exists only in relation to its causal effect, then talking about “it” before it has an effect is not talking about anything at all. Is this what you meant by Wittgensteinian grumbles? Anyway, I don’t think the distinction adds to the main discussion, so I won’t get wrapped up in it. The only reason you might do so would be to set it aside as a confused concept of “information” for your purposes.

            More than once, you mention something that “exists as an experiencer” and is causally informed by the brain. I think you intend here to emphasize the causal, and therefore embodied, nature of the information. The information causally “creates such an experiencer” via the brain; but at the same time, the brain information “need[s] to inform something which exists as such an experiencer.” This language leaves me wondering whether the experiencer exists prior to the causal information, and is affected by it, or whether it exists as a result of the causal information, and is its effect.

            Besides the electromagnetic field theory, I’ve recently become aware of another theory of brain mechanism and consciousness that seems testable. In fact the authors take the trouble, at the end of their paper, to make “Testable predictions” (Section 5.7). This is the theory of Roger Penrose and Stuart Hameroff called “Orch OR.” The paper is entitled “Consciousness in the universe: A review of the ‘Orch OR’ theory,” and is available as a PDF through http://www.sciencedirect.com. As one expects with Penrose, it’s a formidable read, complete with extremely complicated equations. Its thrust is that the brain contains structures called microtubules, which the authors believe may employ quantum effects. Strikingly (for me), they specifically relate their theory to the ideas of Alfred North Whitehead. You can find an approachable discussion here.

            The EM theory seems testable, but given the amount of EM that surrounds us every second of our lives (everything from light waves, to radio transmissions across a vast range of frequencies, to cell phones next to our heads, to house wiring, to the EMI that comes from TVs, computers, and just about all other electrical appliances, subject to FCC regulation), I’m surprised that we haven’t noticed any effects on the brain already. (Although maybe it explains the mass insanity that increasingly characterizes our civilisation!) If you’re looking in this direction, you should also keep in mind the problem of EMI cross-talk in circuit boards, integrated circuits, or anywhere two unshielded conductors are in proximity; its uncontrolled existence can cause havoc, but perhaps if it were artfully employed rather than strenuously avoided, we could use it to induce consciousness in a microchip.

            I have to say also that, if you are looking for a testable hypothesis for influences on consciousness, and by implication sources of consciousness, you could do worse than consider the effects of psychoactive drugs. We know for a fact that they affect consciousness, and therefore it’s reasonable to suppose that consciousness has something to do with whatever brain structures they influence. This direction not only has the benefit of being testable, but also of having demonstrable results to work with.

            Liked by 1 person

            On Wittgenstein, you referenced him in passing at least a couple of times. I thought I’d try doing the same for a bit of fun. I might have gotten this wrong, however, or at least failed to sufficiently clarify my reference. Didn’t he posit that words are ultimately communal rather than private, and that therefore you shouldn’t go around defining terms however you like? It’s a position which I’m actually not pleased with. I believe a theorist must be granted the freedom to at least explicitly define their terms however they like in order to potentially get their ideas across. I call this my first principle of epistemology. It seems to me that most people wouldn’t say that they “inform” a door when they open it, for example, or indeed that any effect would be informed by its cause, even though that’s what I said. Thus the worry about troubling Wittgensteinians here. But I also admitted that this was merely a hypothetical extension of the term, since in general I meant for it to remain “ordinary”, as in the case of something transmitted to something else for operational purposes. Regardless, your intuition is correct that the point is tangential.

            On me mentioning something existing as the experiencer that is causally informed by the brain, you’re right that it would thus be embodied. In the case of marked paper converted to more marked paper creating a disembodied phenomenal experiencer, nothing should causally exist this way given that nothing would be informed. Conversely in the case of brain information animating an EM field, the experiencer would exist in that specific capacity.

            Here you wondered whether “the experiencer exists prior to the causal information, and is affected by it, or whether it exists as a result of the causal information, and is its effect”. It’s the latter. Theoretically the experiencer here is an instantaneous entity that exists or not based upon the causality itself. And if so, each time you’re freshly created by means of that causality, why aren’t you effectively “new to the world”? I’d say because the existing brain provides you, as a newly created being, with a degree of memory of the past and anticipation of the future.

            It’s interesting that you bring up the Penrose and Hameroff “Orch OR” theory as another potentially testable consciousness proposal. I recall reading that this was the very material which inspired McFadden’s consciousness proposal in the late 90s. Essentially he said to himself, “Even though this highly exotic proposal shouldn’t be correct given how hot and such the brain happens to be, neuron-produced electromagnetic radiation might do the trick!” Furthermore, McFadden ought to be reasonably versed in the engineering potential of quantum mechanics in a biological capacity, since he and physicist Jim Al-Khalili founded a now successful program at Surrey University. It’s lately been producing doctoral graduates in the new field of quantum biology.

            On it being strange that the many EM fields associated with modern life tend not to affect electromagnetic consciousness (should McFadden’s theory be true), I agree. But then observe that this is all well-known physics which may thus be calculated. Here’s an excerpt from one of his 2002 papers on the matter:

            “Prediction 6. The high conductivity of the cerebral fluid and fluid within the brain ventricles creates an effective ‘Faraday cage’ that insulates the brain from most natural exogenous electric fields. A constant external electric field will thereby induce almost no field at all in the brain (Adair, 1991). Alternating currents from technological devices (power lines, mobile phones, etc.) will generate an alternating induced field, but its magnitude will be very weak. For example, a 60 Hz electrical field of 1000 V/m (typical of a powerline) will generate a tissue field of only 40 μV/m inside the head (Adair, 1991), clearly much weaker than either the endogenous em field or the field caused by thermal noise in cell membranes. Magnetic fields do penetrate tissue much more readily than electric fields but most naturally encountered magnetic fields, and also those experienced during nuclear magnetic resonance (NMR) scanning, are static (changing only the direction of moving charges) and are thereby unlikely to have physiological effects. Changing magnetic fields will penetrate the skull and induce electric currents in the brain. However, there is abundant evidence (from, e.g., TMS studies as outlined above) that these do modify brain activity. Indeed, repetitive TMS is subject to strict safety guidelines to prevent inducing seizures in normal subjects (Hallett, 2000) through field effects.”

            Click to access MCFSFA.pdf
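
            (To put the quoted numbers in perspective, the implied shielding factor is enormous. Here’s a back-of-envelope restatement, using only the figures in the excerpt:)

```python
# Back-of-envelope restatement of the figures quoted above (Adair, 1991).
external_field_v_per_m = 1000.0  # typical 60 Hz power-line field, V/m
induced_field_v_per_m = 40e-6    # induced field inside the head, V/m

attenuation = external_field_v_per_m / induced_field_v_per_m
print(f"{attenuation:.1e}")  # 2.5e+07: roughly a 25-million-fold reduction
```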

            On EMI cross-talk for circuit boards, yes, shielding is required for our non-conscious machines to help ensure that one part doesn’t mess up another. And it’s the same in a non-conscious brain as well, I think. Theoretically, when all brains were non-conscious, organisms emerged with extraneous and useless conscious EM fields, often called epiphenomenal. But given the inherent challenges of non-conscious function alone, evolution must by chance have given such experiencers opportunities to decide certain things (via less shielded areas), and those decisions must have done somewhat better than otherwise, thus evolving consciousness into what it’s become today.

            If the physics of EM consciousness were worked out, theoretically the easy part would be to create such a field that experiences something we might refer to as “pain”. Creating an EM field that experiences something more like vision should be more challenging. But imagine having this field reason out what to do given the circumstances, with the effect of a field-based decision going on to affect the computer by which the field exists, causing what’s decided to occur through its connected output mechanisms. No problem for evolution obviously, though I’m not optimistic that our machines will ever get too far in this regard.

            The thing about psychoactive drugs is that they essentially just alter or degrade consciousness. We expect neurons to fire differently under them, and so EM fields should exist differently, whether or not they exist as consciousness. When such drugs actually end consciousness, however, that’s where we should expect much less synchronous neuron firing. And that’s exactly what’s observed. For one demonstration, this is noted as “Prediction 4” in the paper I linked to above.

            Like

          22. Wittgenstein argued that “meaning is use”: that the meaning of a word is something we come to understand as we learn to use it.

            Learning how to use words is learning to play a “language-game.” This implies that language is communal. But there are many kinds of language-games, suited to different communities and their purposes: religious, moral, artistic, to name a few. Your ability to use a word in a certain way is constrained by the language-game you are playing. If you want to use it another way, you must agree to the rules of a different language-game.

            Most philosophy, he believed, is an exercise in linguistic confusion arising from what Gilbert Ryle came to call “category mistakes.” This occurs when the usage in one language-game is inappropriately applied to another. Ryle gave an example of a new student who says, at the end of an orientation, “You have shown me all the buildings, but you haven’t yet shown me the university. Where is the university?”

            If I may offer a more provocative example: Imagine an atheist standing up in a devotional service and demanding impatiently to be “shown God.” Everyone is there to have an experience of God, but of course the kind of experience she’s looking for is something different. She has made a category mistake.

            We tend to think of words, even words like “God,” as names for objects in the world. Wittgenstein noted that this overlooks many nuances of language. Its elements also encompass non-referential behaviour, such as exclamations, gestures, sighs. What “God” means to a devotional community has more to do with the way they live their lives than with objects and attributes. Wondering about God as an object is a symptom of mixing up different language-games. Wittgenstein wanted to cure us of such symptoms. He wanted to cure us of philosophy.

            His ideas might apply to the present discussions about colour as follows. Mostly when we talk about colour, it is about, say, what colour to paint the barn. This is the language-game of phenomenalism, where experience is taken for granted, and colours are part of it. We can also talk about “colour” in terms of electromagnetic waves and their effects on the structures of the eye or the brain. But here, like the atheist, we are looking for something different. A discussion of what colour to paint the barn that focussed on wavelengths and brain effects would be as beside the point and unproductive as an aesthetic discussion would be for science. Yet while painters tend not to dabble in physics (with practical exceptions for filling their palettes), physicists seem more than willing to “explain” things like aesthetics and morality in the terms of their own language-game. It’s the same with phenomenalism. We fixate on phenomenal discussion with an intent to discover the “objects” to which it supposedly refers, because objects and properties are what we like to talk about. We import our uses inappropriately, to language-games where they have no application, and then become confused and begin to say all sorts of strange things.

            On the functionalist theory of consciousness, we are making a little progress. We seem to be transforming information from one medium to another, looking for the medium that might serve for “consciousness.” Marks on paper are not sophisticated enough. The workings of our sensory apparatus won’t do. Nor will the mechanics of brain chemistry. Not until we posit electromagnetic fields in the brain do we find a medium adequate to make consciousness out of information. I believe this is your view.

            Thanks for explaining that the ideas of Penrose and Hameroff inspired McFadden. I haven’t researched Orch OR, but it feels a little arbitrary; as with everyone else in this game, they have settled on a medium they feel is suitable for consciousness, and stuck with it. As far as I’m concerned, when it comes to selecting such a medium, one is as good or bad as the next. None of them offers any particular reason to cross the bridge between physicalism and phenomenology (“the Rubicon,” as Mark Solms called it in his own appeal to Markov blankets and the mid-brain). But if we do insist on collapsing physicalism and phenomenology, through new transcendental concepts involving objects, relationships, and information as seems to be the 21st-century fashion, one point of collapse is as good as another. Marks on paper at least have the benefit of simplicity, as does Manzotti’s Mind-Object Identity theory.

            Liked by 1 person

            That was a wonderful reduction of Wittgenstein’s position, Jim. It reminds me that he and I essentially observe the same problem here, though our solutions go different ways. Or maybe in truth he’d have endorsed my potential cure as well? This is to say, perhaps he’d appreciate the convention of assessing the ideas of others on the basis of their explicit definitions, and even their implicit ones if they seem reasonable enough to infer. I believe that effective theory requires effective definitions, and thus we shouldn’t always rely upon what’s standard. Stating such definitions explicitly should be mandated, however, at least occasionally.

            Regarding color in terms of EM waves, the eye, the brain, and such, I’m not sure it’s a category error to seek an explanation of the dynamics by which it arises. I’ve presented my falsifiable thoughts on the matter. If empirically demonstrated quite well, a new age of science might emerge in which its perpetually softest forms begin hardening up.

            It’s not quite that I consider the proper marks on paper to be too unsophisticated to exist as consciousness. Instead it’s that a clear truncation of causality is proposed here. Remember that those final marks on paper would merely be correlated with the brain’s response to a whacked thumb. I’ll grant you as much fidelity here as you like, but do we have reason to believe that correlated marks alone should get the job done? And specifically when the resulting theorized information informs nothing? Seems like a causal void to me. And yet the widely believed theory of computationalism holds that something here must feel what you do when your thumb gets whacked. I’d like interested people to realize that this is what their position posits, so that more informed choices might be made in the future.

            Conversely, card-carrying computationalists would rather that people not be able to reduce their position down to basic implications, since that should hurt their case. Thus my problem with people being shuffled into such a belief by charismatic figures like Daniel Dennett. There are many ways in which I consider my thought experiment to be more to the point and parsimonious than Searle’s Chinese room, so I’d like people to give it a try.

            My presentation of McFadden’s theory was mainly to demonstrate what’s practically required for a theory to not inherently violate the premise of causality. I can understand why many would try to wash their hands of debate regarding how nonconscious dynamics convert to conscious dynamics. The more that one learns here the more crazy things seem. So maybe consciousness is elemental? But if a truly falsifiable theory were to become quite empirically grounded, shouldn’t panpsychism and all other unfalsifiable notions (with computationalism merely a start) be abandoned?

            Perhaps that can be a precursor for a full discussion between us on the matter sometime. But you also seem to be enticing me toward the post where Manzotti has been fielding questions about his MOI theory. He surely won’t be around for long. My recent job has been taking an extra two hours from my day, however, though I did listen to most of the discussion on the way home with the Speechify app. Sounds fun! I guess I’d try to reduce his position back to practical implications, which as usual is the very thing that might make it look silly.

            Like

          24. I think we agree that definitions need to be flexible. But this also means that no one definition is special. Each way of using a word has its context, and within that context, its uses. The context is at least as important as the definition.

            The “empirical demonstration” so important to your view suggests a context for your definitions. Is it the right context for discussions about phenomenal experience? Is its mechanical notion of “cause and effect” appropriate for a context where the cause may be beauty, and the effect admiration; where the cause may be injustice, and the effect indignation?

            You have explained your views, and I’ve tried to respond. At this point we’re beginning to repeat ourselves. I reiterate that marks on paper will do as well as any other medium for the general project of embodying “consciousness” in “matter.” Moving the question to electromagnetic fields doesn’t change anything. The context is frankly Cartesian, despite all protests to the contrary. Unfortunately its proponents, who imagine they have overcome their latent Cartesianism by fiat, are confused into projecting it onto anyone who brings the point up. I prefer an alternative tradition that takes process as its starting point: a tradition that has not enjoyed its due over the past four centuries or so, but now seems ripe for development.

            I don’t mean to push anyone towards Manzotti’s position, with its potentially misleading emphasis on “objects.” Many of his conversations here were—how shall I put it?—unsatisfactory, and I doubt he will be back for more. If he’s still reading, I’d go so far as to apologize for the tone of some of the conversations, to say nothing of their content.

            Liked by 1 person

          25. How would an empirically verified notion of consciousness, complete with mechanical cause/effect explanations, be able to deal with the cause of “beauty” creating the effect of “admiration”, or maybe the cause of “injustice” with the effect of “indignation”? That’s a great question, Jim. I’ll give it a try.

            I theorize the brain essentially as a non-conscious computer. While the ones that we build force algorithmic operations by means of standard electricity, the brain forces algorithmic operations by means of electrochemical neural dynamics. I suspect that you’re familiar with particulars of how neurons function this way.

            Furthermore, I theorize the creation of a phenomenal computer that the brain produces in the form of neuron-generated EM fields. So how might consciousness, as an electromagnetic field created by the proper synchronous neuron firing, effectively function? Not through electricity or neural dynamics as before, I think, but rather by means of a desire to feel good rather than bad. I theorize three forms of input here (sentience, senses, and memory), one form of processor (thought), and one form of output (decision). You may find them in the following diagram under the heading of “consciousness/qualia”. There are arrows from the inputs which feed the thought processor, as well as an arrow from there which feeds decision output. Theoretically this is all in the form of a complex EM field. A decision, however, is theorized to feed back to the brain to potentially cause what’s decided to occur. Essentially, if you decide to wiggle your toe for example, the EM field causes the brain to neurally get it done.
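
            (If it helps to see those boxes and arrows spelled out, here is a purely illustrative restatement in code. The names mirror the diagram; the placeholder logic is mine and carries no theoretical weight.)

```python
from dataclasses import dataclass

# Purely illustrative restatement of the described flow: three inputs
# (sentience, senses, memory) feed a thought processor, whose decision
# output feeds back to the brain. Placeholder types and logic are mine.

@dataclass
class ConsciousInputs:
    sentience: str  # valence input: feeling good vs. bad
    senses: str     # e.g. a visual scene
    memory: str     # recalled past experience

def thought(inputs: ConsciousInputs) -> str:
    """The processor: interprets inputs and returns a decision (placeholder)."""
    return f"act on '{inputs.senses}', guided by '{inputs.sentience}' and '{inputs.memory}'"

def brain_feedback(decision: str) -> None:
    """A decision feeds back to the brain, which carries it out neurally."""
    print("brain executes:", decision)

# E.g., deciding to wiggle a toe:
brain_feedback(thought(ConsciousInputs("feels good", "toe sensation", "wiggling worked before")))
```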

            So now to your question. The eye accepts light information that provides an image of a scene in the form of some element of a complex EM field that itself exists as an experiencer of existence. Let’s say that the image is of a painting that affects the EM field experiencer with the positive sense of beauty. Furthermore with a thought processor which realizes that someone created this, a sense of admiration for the creator might naturally occur as it’s being witnessed. So now with input of beauty as well as admiration, the thought processor might decide to do something such as speak the words “Wow, I admire the creator of this painting”. Here the EM field should affect the brain to cause the proper muscles to function so that these words are said.

            The wonderful thing about this proposal versus virtually any other on the market I think, is that it’s possible to test empirically. And indeed, the neuron firing synchrony which the model depends upon is known to be highly correlated with consciousness. But I’d like scientists to go further. I’d like them to somehow induce the sort of EM radiation known to occur by means of synchronous neuron firing, right inside someone’s head. As scientists are inducing all sorts of typical EM parameters to see if they can affect that person’s consciousness, the person would be instructed to tell them if they sense anything at all phenomenally strange like unexpected colors or feelings. If they were to find something then this should be explored further to see if reproducibility could be achieved. But if scientists were not able to tamper with someone’s consciousness this way then at some point it should be concluded that McFadden’s theory must be wrong.

            Like

          26. We’ve wandered far from our host’s topic and into expositions of our own theories, so perhaps we should move these discussions to our respective blogs.

            You spoke of “a complex EM field that itself exists as an experiencer of existence.” This is not incompatible with the idea of the EM field as the physical expression of active occasions. I’m not sure it’s necessary to assume things are built from the other end, where EM fields are dependent on brain structures, and thus basic chemistry and electricity and other physical science. We could also understand the physical world as what happens when occasions work out their relationships. This has the added advantage of building in their perspectives as foundational, so that consciousness as we know it becomes a natural prospect. From the other direction it becomes a problem, indeed the hard problem.

            Liked by 1 person

            Theoretically, evolution merely took advantage of the physics-based possibility for consciousness by means of brain dynamics. Conversely, if we were to build something conscious, it would surely be by means of our own style of computer. To do so we’d essentially reverse engineer the brain.

            I don’t do much posting at my blog site, or even visit very often. I generally like adding content to the sites of others through my commentary. But since you post often, your blog may be a good spot for me. Perhaps my contributions over there will generally be appreciated. We’ll see.

            Like

            My blog is mostly about the aftermath of modernity. Comments are always welcome, especially if you don’t try to convert me to some other theory. It’s the bloggers’ bane.

            (This replaces an earlier response sent to the wrong thread. I’ll figure this out, I promise. . .)

            Like

  8. I think you have covered the most important aspects of colour. Colour exists to make life beautiful. Imagine going to a hospital painted grey. You just think you are going to die. But add some variety of colours and the space has life.
    Maybe I should say colour exists to create excitement, or to increase it.

    Liked by 1 person

    1. I think it’s definitely true that colors have a valence, although the relationships are complex. It seems like any color can produce positive feelings in some situations and negative ones in others. In many situations, the valences are so slight that we take little account of them, but they still seem to have subtle effects on our state of mind.

      Like

  9. How do you explain synesthesia and, for example, savants seeing numbers having a color?

    Why are sounds so different from color for most people?

    My guess is that senses, and even thoughts, work in something like frequency bands. For example, color works in a particular range for most people, one that stays distinct from the range for sounds. However, in some individuals these bands spread or overlap, so color can become attached to abstractions like numbers or sounds. This sort of spreading or wandering of the bands could also be produced by psychedelics over-inhibiting or over-exciting to produce subtle changes in the ranges. The variations in human color perception would translate at some point to differences in capabilities to generate the entire range. Those variations could arise from either differences in input (for example, cones in the eye) or differences in the backend processing.

    I am using this term “frequency band” very loosely and I have no idea actually how neural patterns get translated to perception and thought. But a complete theory would need to account for synesthesia and such.

    Liked by 1 person

  10. Some of the criticism of functionalism in the social sciences applies to the functional approach to mind.

    The logic is circular. Functionalism is a tautology. Mental states exist because they are functional. They are functional because they exist. No predictive power to explain how it works or why it works one way and not another.

    Liked by 1 person

    1. I haven’t done any serious reading on synesthesia, so I’m not able to even speculate intelligently about it. Anything I might say would come from a hasty wikipedia skim.

      On functionalism, it’s worth noting that it’s more a philosophical outlook than a scientific theory, although it will affect which theories you think are promising. I also think it sets a standard for what we should hold out for before deciding we understand something.

      A functionalist would find equating pain to c-fiber firings, for instance, as utterly unsatisfying. What about c-fibers and their firings makes them pain? What causes it, and what does it cause in the chain leading us to respond as we do when in pain, such as reporting it?

      Liked by 1 person

      1. The exceptions and the abnormal can be used to test theories. If the theory doesn’t explain the exceptions, it likely doesn’t explain the normal.

        A functionalist might have a difficult time explaining the “function” of colored numbers.

        Liked by 1 person

        1. We can learn a lot from abnormal conditions. I know I did in reading about various neurological case studies.

          But I’m not seeing their problem for functionalism. Functionalism doesn’t require that everything work adaptively, or that there not be spandrels. It only requires that they have causal roles.

          Liked by 1 person

          1. “Functionalism doesn’t require that everything work adaptively,”

            Except it can’t explain what doesn’t, so at best it would be incomplete.

            What would be the criteria for something to be functional or not? Is seeing a yellow banana functional for recognizing something to eat, but seeing a yellow nine not functional for anything? Do we conclude that the yellow banana is functional, hence explainable, but the yellow nine unexplainable because it is not functional, except perhaps as a mistake or side effect? But we could later discover the yellow nine actually triggers some exotic circuit in a savant’s brain to enable lightning calculations. Would the yellow nine then be explainable because it is now functional?

            It seems the definition of “functional” is fluid, adapting to whatever our state of knowledge happens to be. If we can name a function or functions (so many to choose from in most cases), it is explained. If we can’t, it is a spandrel. Later knowledge or opinion can change a spandrel into a function or vice versa. The theory can accommodate any evidence.

            Liked by 1 person

          2. Again, the causal roles in functionalism aren’t required to be teleological, or even teleonomic.

            I’ll grant that the name does imply that, which I’ve often noted is unfortunate. But then many names in philosophy and science are misleading, often existing for historical reasons. Astronomers refer to any element heavier than helium as a “metal”, even though no one in chemistry does.

            Like

  11. I feel like it’s worth mentioning the Sapir-Whorf hypothesis. Language can affect our perception of color, too. There are several languages in the world that use the same word to mean green and blue, and people who speak those languages have a hard time picking out the one blue object among a selection of green things. It’s sort of like if turquoise were one of the basic “Crayola colors” in English, and green and blue were just seen as different shades of turquoise.

    Liked by 1 person

    1. Sounds like I might need to do some reading on Sapir-Whorf and colors. I knew that language affected a person’s ability to categorize a color, but not that it affected their ability to discriminate between colors when shown different ones side by side. I’m wondering if all the green things were the same shade of green, or varying shades. If the latter, I could see the difficulty, but it would still seem like a categorization issue. Still, pretty interesting.

      Liked by 1 person

      1. As I understand the experiment, people were presented with several different shades of what English speakers would call green and one shade of blue, and they couldn’t say which one was not like the others better than random chance. I think this experiment involved native Japanese speakers. I also remember reading about a similar experiment with purple, which I think involved several different indigenous groups in Africa.

        I don’t remember all the details, but these experiments seemed to be well thought out. It’s definitely a categorization issue more than anything else.

        Liked by 1 person

          1. I hadn’t thought about that before, but I vaguely remember reading that Isaac Newton labeled blue and indigo differently than we would today. His blue is our cyan and his indigo is our blue. It’s a small difference, but there still might be some Sapir-Whorfian stuff there.

            Liked by 1 person

          2. I’ve read that too. Although based on what I’ve seen, his blue seemed more like light blue to me, and indigo more dark blue. In contemporary usage, the boundary between indigo and violet seems pretty blurry to me, seemingly more about brightness than hue. (Maybe similar to the distinction between brown and orange.)

            Liked by 1 person

  12. My blog is mostly about the aftermath of modernity. Comments are always welcome, especially if you don’t try to convert me to some other theory. It’s the bloggers’ bane.

    Like

    1. Black on black 😉

      It is interesting that the phrase “objective color” refers to what humans with typical vision perceive in normal lighting. In other words, it’s more intersubjective than objective. The objective part is the wavelength of light reflected (when it’s reflected).

      Liked by 1 person

      1. Yes, “objective” with its double usage is problematic; the two meanings get confused. Saying “objective reality” and saying “to be objective” are different. The former is replacing “material” while the latter is about not being subjective. These are different ideas.

        Liked by 1 person

        1. Maybe another way to say it is with scope: independent of any one particular mind vs independent of all minds. We can say that the color of a material under ideal lighting conditions for most humans is independent of any one particular mind. But the color of the material is not independent of all minds, although the wavelength of the light is (at least in theory).

          Liked by 1 person

          1. Colour seems to entail teleology. Wavelength does not. By categorizing it as “colour” we are trapped within universals. With “wavelength” we are talking about particulars. I don’t know how to better put this.

            Liked by 1 person

          2. I agree that it does imply teleology. It’s all about evolutionary needs. We see the color of ripe fruit because it was a high calorie food for our ancestors. And of course that fruit reflects the right wavelengths because, in part, it’s in its evolutionary interest to be eaten by species that will spread its seeds. (A similar symbiosis exists between flowers and their insect pollinators.)

            Like

  13. Running from Galileo through Descartes and Locke is the assertion that sense qualities only exist in the mind or soul of perceivers and are not really out in the world. Berkeley also accepts mind dependence, and therefore draws the conclusion that since all we know about the world is sense qualities, the whole world must be mind dependent.

    But there is no strong argument for believing in this mind dependence. Galileo needed only say that the color of a falling object is irrelevant to its place in physics, not that it has no color. In distinguishing between primary and secondary qualities, Locke needed only say that certain spatial configurations and movements of matter had the power to create sensations, not that the sensations have to exist “in us”.

    If certain configurations of matter cause certain sensations, then objects are really colored, as naive common sense realists claim. However, the view is also compatible with indirect realism, for the brain is continuous with the matter of the world, and so as the world may be colored, so may be a visual field within the brain. Sense qualities would then be created through psycho-physical laws that existed before the evolution of brains, laws that biological evolution did not invent but employed.

    I speculate as to what these psychophysical laws may look like by ignoring (for now) the separate problem of subjective awareness and considering qualities as purely physical.
    see https://philpapers.org/rec/SLESTU

    Liked by 1 person

    1. This is a conclusion a lot of people reach. However, consider the following image.

      If you have “normal” vision, that is, the vision most common for humans, you’ll perceive a “74” among the dots. But if you have red-green color blindness, you’ll likely only see a “21”. Most mammals are red-green color blind. If you have a rare form of tetrachromacy, you may perceive additional colors here, and patterns, that the rest of us can’t.

      But what’s the objective fact of the matter on the color of the dots? You might be tempted to reach for what normal humans perceive, but that’s an accident of evolution, of ripe fruit mattering to our primate ancestors in a way it didn’t for the ancestors of dogs, cats, and pigs.

      Myself, I wouldn’t say colors are mind dependent, at least not completely. But they are a reaction of a mind to particular stimuli (often in particular circumstances). Another mind may react differently. What is objective is what causes the stimuli: the material on an object absorbing certain wavelengths of electromagnetic radiation while reflecting others. If you look at the wavelengths of light for certain colors, you won’t find anything in the wavelengths themselves that mandates any particular color.
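
      (A toy calculation can make this concrete. Below, Gaussian curves stand in very roughly for cone sensitivities; the peak wavelengths are approximate textbook values and the widths are arbitrary. The point is only that a “color” is the observer’s response pattern, and different receptor sets yield different patterns from the same light.)

```python
import math

# Toy Gaussian stand-ins for cone sensitivity curves (illustrative only).
PEAKS_NM = {"S": 440.0, "M": 535.0, "L": 565.0}  # approximate human cone peaks

def cone_response(wavelength_nm, peak_nm, width_nm=40.0):
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def response_pattern(wavelength_nm, cone_types):
    return {c: round(cone_response(wavelength_nm, PEAKS_NM[c]), 3) for c in cone_types}

for wl in (550.0, 580.0):  # two wavelengths many English speakers would call green and yellow
    print(wl, "trichromat:", response_pattern(wl, "SML"),
          "| dichromat:", response_pattern(wl, "SM"))
```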

      Like
