Causal completeness

It seems like theories that are causally complete are better than ones with gaps.

In thinking about this, I’m reminded of a Psyche article I shared a few years ago on fostering an open mind. One of their pieces of advice resonates with an outlook I’ve had for some time.

If embarking on a full-on explanation sounds impractical or overly arduous, there is a shorthand version you can try. In 2016, psychologists at Washington and Lee University in Virginia showed that the same beneficial effect could be achieved simply by spending a few seconds reflecting on your ability to explain a given issue to a true expert ‘in a step-by-step, causally connected manner, with no gaps in your story’.

https://psyche.co/guides/how-to-cultivate-shoshin-or-a-beginners-mind

This advice about giving a “step-by-step, causally connected” account is aimed at ensuring that we have a good understanding of a topic, but I think it’s also good advice for assessing scientific or philosophical theories themselves. Does the theory offer the kind of explanation this advice suggests we look for in our understanding of it? Or does it have gaps that proponents often try to ignore or gloss over?

This is one of the reasons I prefer functional theories of consciousness to the alternatives. I can see the causal account all the way through with them, at least at a summary level, something I struggle to see in many non-functional theories. Of course, this assumes the target of our explanation is itself functionality. For me, it is, mainly because I’m not sure what the non-functional versions are even supposed to mean.

So combinations of global workspace, predictive coding, attention schema, and other higher order theories provide an explanation of how we can model and talk about ourselves, the world, and the relationships between them, all in service of our innate and learned goals and reactions. The causal accounts for this aren’t simple, and still have an enormous number of “black boxes” to fill in, but they’re coherent and scientifically tractable. 

These theories don’t explain consciousness in a stronger sense of something intrinsic, ineffable, and inaccessible to third party observation, as others claim to do. But using the causal completeness criterion, it’s not clear to me what those theories are even trying to explain, much less how they go about the explanation. Many of the theories in this category seem more like recipes; add all the ingredients and out pops fundamental qualia (whatever that means), with no explanation of what happens in between.

This is also why the Everettian understanding of quantum mechanics has been growing on me. Aside from superdeterminism, it seems like the only causally complete account available. I don’t know whether it’s true, but when assessing various quantum paradoxes, it’s the story that seems to require the fewest logical dodges. All the other stories require accepting dubious assumptions, like giving up some degree of objective reality, accepting non-local dynamics, or allowing retro-causation. Yes, it means accepting ontological consequences, the other “worlds”, but they’re currently untestable and so seem irrelevant.

Applying the causal completeness criterion does have complexities and nuances. A causally complete account doesn’t necessarily mean one that explains every component down to fundamental reality. Isaac Newton didn’t know what gravity was, nor did Charles Darwin know how mutations and inheritance worked. But their theories were still complete in the sense I’m talking about, because those factors could effectively be “black boxed” for later explanation, something all the theories above have to some degree.

It’s also worth noting that with microphysical theories, causality, in the sense of cause preceding effect, ceases to apply. At that level, we really have to talk about interactional completeness. But the principle, I think, still holds.

The case of quantum physics in particular highlights another nuance. Sometimes a fully causal account just isn’t available yet. That’s my interpretation of early understandings of quantum theory. I think scientists were right to conclude they had an effective operational theory. But Albert Einstein was also right that it shouldn’t have been considered a complete answer, although his impulse to add additional postulates may have been looking in the wrong direction.

And this criterion has to be used in conjunction with others, such as predictive success, reconcilability with other theories, parsimony, and overall coherence.

Of course, in the end, nature is bizarre, and doesn’t care about our little criteria. But as a heuristic, this one seems useful, at least for now.

What do you think? Am I missing anything with this view? Or using it wrong anywhere? Are there alternative criteria that should be considered?

70 thoughts on “Causal completeness”

  1. Right up my alley. In fact, I often look at the world this way. However, it becomes a problem when I take on solving a problem. If a project entails numerous complex interrogations, explorations and resolutions, as soon as I’ve performed enough of the examination to have a causally complete view of the entire solution — I lose interest. “Ah, ha! So that’s how all of this comes together to solve the problem.” Never mind that there remain “gray” boxes that still require deep understanding — as long as I can judge the box’s internals to be tractable — I’m done.

    A man of many talents, but master of none.

    I’m in the middle of just such a situation. Building this Arduino-based system that entails stepper motors, gyrometers, batteries, solar cells and power controllers — and all the code to glue it together — has me entertained. But I have to see all the gray boxes fully exposed and integrated. Ugh!

    1. I know what you mean. Once I have a causal understanding, even if just a high level one, the clock starts ticking on my interest. I can spend years describing it to other people, to see what objections come up. Sometimes researching the objections can keep me interested, particularly if they complicate or change the understanding. But if I get to the point where the same objections keep getting repeated, I start losing interest. At least until someone can come up with a new objection or wrinkle.

  2. It’s definitely a talent to be able to simplify a complicated theory without losing any important steps or info, but certainly worthwhile. I tend to be skeptical of theories that no one seems capable of explaining clearly, or terms that seem to have differing definitions.

    Funny you mention functionalism, as that’s something I can’t make much sense of, though I admit I’m not trying too hard to educate myself. I did just finish Chalmers’s “Reality+” and now I feel I understand it even less than before. It seemed to me if he wanted to show why Cartesian global skepticism isn’t justified, he could have done it without bringing in functionalism (or what he calls “structuralism”, which I understand is not exactly the same, but perhaps functionalism is a type of structuralism?), and in far fewer words.

    1. It’s been a while since I read Reality+, so I can’t recall how Chalmers approached functionalism. Within the philosophy of mind, it’s usually presented as the view that what defines a mental state is its causal role in relation to sensory stimuli, motor outputs, and other mental states. Chalmers, if I recall correctly, takes that principle and extends it to all of reality. So what makes solidity is nothing intrinsic, but the property of resisting penetration. You can sum it up as: what gives something its properties is what it does.

      I thought it was interesting that Chalmers actually did that with functionalism, since it’s not a view he agrees with when it comes to consciousness. And I think he even makes a point of singling out consciousness as an exception in the book.

      When we get to physics, I’m not sure there’s any real difference between functionalism and structuralism (specifically structural realism). They’re basically the same concept. So in terms of physics, you can probably view them as synonymous. It’s just that “functionalism” is the name used in the philosophy of mind, while “structural realism” is the one used in philosophy of science. (“Structuralism” by itself doesn’t get used that much, maybe to avoid confusion with the term used in other fields.)

      (I will acknowledge that saying I like causal role theories because they’re causally complete, seems almost tautological.)

      1. Oh you’re right…I had forgotten that Chalmers had made an exception for consciousness. Thank you for that reminder! I was confused about where he stood on that, given what I thought he’d said previously.

        And thanks for the clarification on structuralism and functionalism. I thought they sounded like the same thing, but I wasn’t sure. Chalmers tended to talk about structuralism rather than functionalism, but I wasn’t sure why. Now I wonder if it has something to do with making that exception for consciousness, to avoid confusion.

        1. It’s possible that Chalmers makes a distinction somewhere. I think I remember him saying something about mathematical structure being separate from physics, with causality being the difference. But I think he overlooks the fact that causes are themselves relations, just along the time dimension rather than the spatial ones. Which leaves open the mystery of what “breathes fire” into the relations that are physical. But I can’t remember if any of that pertained to functionalism.

          I’m not sure what to make of his consciousness exception. I don’t recall him defending it much, if at all.

          1. I finally got off my lazy butt to get the book. Here’s how he defines structuralism:

            “The thesis that scientific theories are equivalent to structural theories, cast in terms of mathematics plus connections to observation. Epistemic structural realism says that science tells us only the structure of reality (though there may be more to reality than this). Ontic structural realism says that reality itself is entirely structural.”

            I think he cites Max Tegmark as an ontic structural realist.

            Oddly, he doesn’t give a definition of functionalism in the back of the book, but I found the part where he talks about it in “Reconstructing the Manifest Image”:

            “Functionalism can be seen as a version of structuralism, where the emphasis is put more squarely on causal roles and causal powers. When we move from primitivism to functionalism, we reconceive solidity so that it can exist in the scientific image.”

            As for the consciousness exception, he talks about this toward the end of his book just before the chapter called “Kantian humility”:

            “Conscious experiences may have structure, but they seem to go beyond structure. So, could the basic reality underlying structure involve a fundamental sort of consciousness?”

            This goes beyond his famous paper. There, he thought consciousness was on the same level as space and time. I was amazed to see this in his book.

            He goes on to say: “…the structure of physics could be realized in a single cosmic mind, as in the idealist view—or by interactions among many tiny minds at the bottom level, as in a panpsychist view (which holds that everything is conscious)”. Then he says these views which place consciousness as fundamental have the benefit of not being reductionist or dualist, though panpsychism has the “combination problem”.

            This is where I got confused, because earlier in the book he looks very dimly on Berkeley (and gives Berkeley a very unfair portrayal, I think), but here he sounds open to an idealist view. Not sure where he stands!

          2. Thanks for looking those up! I’m doubly lazy because I have it in ebook format and could have checked rather than going off memory.

            Max Tegmark would definitely be an ontic structural realist, but he goes far beyond it. I wouldn’t equate OSR with his mathematical universe hypothesis. OSR sees reality as structural, but not all conceivable structures as real.

            On functionalism vs structural realism, thanks! He drew less of a distinction than I remembered. That description fits my understanding.

            I think I would take the paragraph just before that as his definition of functionalism.

            Functionalism got started as a view in the philosophy of mind: The mind is as the mind does. But it’s applicable across any number of domains. For example, we’re all functionalists about teachers: To be a teacher is to play the role of teaching students. Teaching is as teaching does. We’re all functionalists about poisons: To be a poison is to play the role of making people sick. Poison is as poison does.

            Chalmers, David J.. Reality+: Virtual Worlds and the Problems of Philosophy (pp. 428-429). W. W. Norton & Company. Kindle Edition.

            My reading of Chalmers on consciousness is he’s committed to it being fundamental, but in a property dualist context, not an idealist one. He has flirted with panpsychism over the years, but I can’t recall him ever signing on to it explicitly.

            Just skimmed the section on Berkeley. I can see where you’re coming from. It’s definitely not sympathetic. I’m a little curious what he might have gotten wrong with it.

          3. Ugh, on the functionalism definition, I should have included even the paragraph before that. For anyone else who’s interested, here’s the whole sequence:

            We have, in effect, moved to functionalism about solidity. Functionalism in philosophy is a view that understands a phenomenon in terms of the role that it plays. Here, the key role for solidity is resisting penetration. If an object plays this role, it’s solid. To use a common functionalist slogan, solidity is as solidity does.

            Functionalism got started as a view in the philosophy of mind: The mind is as the mind does. But it’s applicable across any number of domains. For example, we’re all functionalists about teachers: To be a teacher is to play the role of teaching students. Teaching is as teaching does. We’re all functionalists about poisons: To be a poison is to play the role of making people sick. Poison is as poison does.

            Functionalism can be seen as a version of structuralism, where the emphasis is put more squarely on causal roles and causal powers.

            Chalmers, David J.. Reality+: Virtual Worlds and the Problems of Philosophy (pp. 428-429). W. W. Norton & Company. Kindle Edition.

          4. “OSR sees reality as structural, but not all conceivable structures as real.”

            Whew. That would be pretty bizarre if they did. Well, more bizarre than usual. 🙂

            I meant to include that Chalmers definition, but I guess I just read it instead! I think that’s a helpful explanation, the comparison to poison and teachers especially. In those cases it’s clearly the function that makes those things what they are. I’m just not sure that’s true of everything.

            Maybe those paragraphs at the end of the book were meant more as a segue into what he calls Kantian humility. He seems to identify as a Kantian (though I doubt he would call space and time a priori structures of the mind).

            Yeah, the Berkeley thing is so strange. Obviously Chalmers has read him, but he seems to treat B as a solipsist. It’s a bit slippery the way he handles this, and I can’t help but think he did it deliberately. I’m planning on writing about this, hopefully soon, so I’ll have my thoughts more organized once I put up that post.

          5. This is the second time Kantian philosophy has come up today in conversation. I wonder what the relation is between Kantian humility and neo-Kantianism. Neo-Kantianism was equated with epistemic structural realism in the other conversation, and I see Chalmers in the book cites someone equating Kantian humility with ESR. Kant’s views often get mentioned in the SEP article on structural realism. I can see the convergences, but I’m wondering about the different names. (You never know with philosophical positions.)

            Myself, I was an epistemic structural realist when I first found structural realism, but have gradually drifted over to the ontic side. The main reason is I’m not sure that saying intrinsic properties can exist but we can never know anything about them does any real work. But it does cause trouble. Too many people in the ESR camp decide they can say things about those intrinsic properties anyway.

            I’ll be keeping an eye out for that post!

          6. I’m not sure what Neo-Kantianism really is. I just found this article:

            https://plato.stanford.edu/entries/neo-kantianism/

            Seems all over the place. It’s interesting that Neo-Kantians rejected Kant’s noumena. That article points out some prominent students of Neo-Kantians, and one of them is Heidegger, who also rejected noumena.

            In any case, I think maybe Chalmers was talking about the original Kant. If that’s the case, I understand that he wants to allow for the existence of Kantian noumena (I don’t know if he calls it that in the book…I’d look but the book is across the room and I’m comfy cozy on the couch). Kant thought we had to assume the existence—the BARE existence, nothing more—of an unknowable mysterious realm of “things in themselves”, a world completely apart from us and our knowledge of it, a realm about which NOTHING can be said. I emphasize “nothing” because I don’t think people realize this is what Kant meant. Kant understood that there’s no way to know the world as it is in itself, yet he understood noumena as a necessary limiting principle. Something like a yin to phenomena’s yang.

            Here is where I think idealism comes into focus. Someone like Berkeley might say, “What’s the difference between an utterly unknowable something and an utterly unknowable nothing? Nothing!” (Okay, he didn’t actually say that, but he could have! The main thrust of his philosophy is not what Chalmers points out—that our reality is supported by God—but that Matter (B liked to capitalize it) IS unknowable-nothing. Of course, he came before Kant, but I think Kant was addressing B in retaining noumena.)

            So as for Kantian humility, I think it depends on how you view the issue of noumena. I think that’s the part of Kant that’s really at issue there: Kantian noumena. If you like the concept of a mysterious unknowable realm of things in themselves, and if you think science somehow progresses towards it (how you could know about this progress without being able to know or say anything about noumena is a very real question), then you might call your belief in noumena “Kantian humility”. If not, you might call it introducing mystery where none is needed. I think Kant was trying to save the science of his day…but it turned out Berkeley was scientifically right on some of it, like when he criticizes absolute space, time and motion.

            Chalmers does seem to equate ESR with Kantianism, but not Neo-Kantianism. I think he’s specifically talking about Kant’s noumena, and not the many other insights Kant had. (Personally, I’m not on board with Kantian noumena, but I do have a great appreciation for him. He thoroughly won me over on the apriority of space and time. When I hear science talk of “spacetime” I feel like chortling. I just can’t take it seriously because on that front, Kant was just so on the money.)

            You say, “I’m not sure saying intrinsic properties can exist but we can never know anything about them, does any real work.” What you’re saying sounds like a criticism of Kantian noumena.

            I’m wondering if this “neo” Kantianism in the context of ESR is really just Kantianism in regard to noumena, minus what he said about causality, categories, etc.?

          7. I just did a search for “noumena” in Chalmers’s book. The only place it comes up is in an endnote (p. 500) as part of the title of a cited article. But if we equate noumena with the intrinsic properties of ESR, then I think Chalmers is fairly friendly toward it. It’s where proponents of fundamental consciousness often locate it, particularly panpsychists.

            Interesting that neo-Kantians reject noumena. If classic Kant is ESR, then it seems like a better fit for the neo-Kantians is OSR. Although that’s equating all structure and relations with phenomena. I’m not sure that works. It seems like there are lots of structures and relations we infer rather than ever directly observe (although that can get into what we mean by “observe”, since observation seems itself to just be low level inference), and it doesn’t seem hard to conceptualize structures and relations that are unknowable. But maybe my understanding of “phenomena” is too narrow.

            An interesting thing about Kant’s noumena is their existence or non-existence itself seems unknowable. That means they’re not anything we could ever take into account in any scientific theory. So I guess OSR would indeed be a criticism of that notion.

            I’m not familiar with Kant’s take on space and time. I do know a lot of theoretical physicists are working on theories that one or both may be emergent from some deeper reality. Time is often hypothesized to be emergent from entropy, and some people talk about space being emergent from quantum entanglement. In both cases though, I could see an argument for things being the other way around. General relativity does get a lot of predictive mileage with the spacetime concept, but the theory also breaks down in well known places, like the center of a black hole.

          8. Yeah, Chalmers sticks with “things in themselves” rather than introducing the term “noumena”, which is understandable. I consider the two to be pretty much the same thing, though various philosophers, especially those before Kant, talk about noumena in different ways and not all of them have the unknowable Kantian version in mind.

            “it seems like a better fit for the neo-Kantians is OSR”

            Assuming the neo-Kantians really aren’t keen on noumena and are also structuralists, that sounds about right. I’m taking it for granted that neo-Kantians rejected noumena since I don’t really know what they thought, or if they were really all unified in that regard.

            “An interesting thing about Kant’s noumena is their existence or non-existence itself seems unknowable. That means they’re not anything we could ever take into account in any scientific theory.”

            Exactly. And philosophy can’t get any further than science in investigating things in themselves.

            “So I guess OSR would indeed be a criticism of that notion.”

            I don’t know much about structuralism, so all I can say is, that makes sense to me. I think of Kantian noumena as sort of like an empty placeholder. Kant thought that placeholder was necessary, but it sounds like OSR-ists don’t.

            “…it doesn’t seem hard to conceptualize structure and relations that are unknowable.”

            Maybe not, but I think the problem comes in when you want to be sure of what the structure and relations are relating to, but can’t.

            “But maybe my understanding of “phenomena” is too narrow.”

            If we’re talking about the Kantian noumena/phenomena divide, phenomena would be, well, everything we can experience and talk about. Whatever entity science discovers or describes is not really “a thing in itself” but something we might still call objective that lies within phenomenal experience. But Kant was fairly paradoxical here, in my view. All this talk of noumena doesn’t get across that he seemed very keen on saving some form of scientific realism, which explains why he talks about “representations”.

            Kant’s take on space and time:
            Space is the a priori form of outer intuition (experience).
            Time is the a priori form of inner intuition (experience).
            Neither space nor time is a real object in the world. They’re essential features of the mind that make experience possible.

            Teasing out what this means for scientific investigation of space and time is complicated and, I think, up for interpretation.

          9. I’m learning something in this conversation. For a long time, I’ve thought of noumena as just objective reality, and Kant’s philosophy as an observation on the limits of what we can know about it. But if he meant the “things in themselves” notion, something not only beyond observation, but beyond anything we can even discover or reason about, then yeah, it’s starting to feel like something utterly redundant from a parsimony view.

            That neo-Kantianism SEP article you referenced, interestingly enough, doesn’t use “noumena” either. But this snippet, I think, confirmed your take that they rejected it.

            In fact, there were some core Kantian ideas that the Neo-Kantians self-consciously rejected. They rejected the doctrine of the thing-in-itself as incoherent and unnecessary (Windelband 1910: 323), or radically reinterpreted talk of things-in-themselves as talk of a postulated final and complete theory of the world (Cohen 1885: 503ff.)

            On the placeholder and OSR. I think the issue is we can’t say anything about them, not even if they exist. So they can serve no role in our ontological accounts. At least, that’s my reasoning. I used to be willing to do the same thing Kant did, and keep them as a placeholder. But too many people take the placeholder and speculate about it anyway, something I’d rather stay away from.

            I’ve always understood Kant’s transcendental idealism as idealism in the sense that our reality is constructed by our minds, but the constructions are molded and constrained by an outside world. But that take on space and time makes him sound a lot closer to an outright idealist. Or maybe he’s saying the objective reality is radically different from our construction. I think Chalmers’ argument about universal functionalism is interesting here, that anything that plays the roles of space and time is real space and real time.

          10. Don’t feel bad about making the assumption that noumena is the same thing as objective reality. I’m not sure Kant was clear on this, but it has been a while since I’ve read him. Anyway, it’s hard not to think of objective reality as “things in themselves”, but if you reflect on it, “objectivity” doesn’t necessarily mean “that which exists in some unknowable realm, accessible by no mind”. To see this requires thinking about what we mean when we talk about something as being ‘objectively real’, because it really can’t be whatever is left after we subtract all minds-subjects whatsoever from the equation (though the fact that we can come up with the notion of such an impossible realm is fascinating). All of this is Berkeley’s main point, his argument against Matter.

            “But too many people take the placeholder and speculate about it anyway, something I’d rather stay away from.”

            Haha…yeah, I hear you.

            “I’ve always understood Kant’s transcendental idealism as idealism in the sense that our reality is constructed by our minds, but the constructions are molded and constrained by an outside world.”

            I wouldn’t say “constructed” by our minds. Maybe filtered? And another important point is that Kant wasn’t just pulling a priori structures of our minds from a hat. He was searching for the necessary structures that make experience possible. You’d have to read his arguments for why space and time are a priori “forms” for experience. They’re deceptively simple, but the more I thought about them, the more I found them convincing. It’s best to try not to think about them as things that exist “only in the mind”, but instead to think of them aside from that dichotomy as being fundamental to experiencing anything at all.

          11. I haven’t read Berkeley, but I’ve always had a couple of blockers, at least with what people write about his arguments.

            The first is that reality often doesn’t seem to meet our expectations. I always think of things like the 2004 tsunami that killed over 200,000 people in South Asia. Who would have ordered that? Or the floods that happened where I live in 2016? Or just the underside of my desk being lower than I thought and the resulting pain in my head. It also seems like investigations into nature force things on us that astound us (heliocentrism, size of the universe, evolution, relativity, quantum mechanics, etc). I realize idealists have answers to these questions. But it just seems simpler to accept there’s a reality outside of us that impinges on our experience.

            The other blocker is I can’t see a break between the logic used to deny external reality and denying other minds, which slides us into solipsism. The arguments against solipsism seem similar to the one I made above, which of course I’m using against idealism.

            But I haven’t studied idealism in detail, and fully realize I can’t be the first with these concerns.

            Sounds like I might need to dig up Kant’s arguments about space and time. Thanks!

          12. Before I reply to your points on B, I just came across this video explaining Kantian idealism as “epistemological idealism…or not really idealism”. I thought it was a pretty good video:

            I’d say when it comes to Berkeley, don’t trust anything secondhand. Really. Not even professional philosophers. Don’t trust me! (Just kidding. Kind of. I’m working on a post about him right now, so I don’t want to dismiss myself before I publish it, but yeah. Be skeptical when it comes to secondhand interpretations of B). Berkeley was misunderstood even in his own day, and that’s not because he’s opaque. It’s really baffling to me how so many people get him so very wrong, especially when his book is one of the easiest classical philosophy books to read. The hardest thing about reading him is letting go of your normal understanding of what the word “idea” means. He redefines it to mean something extremely broad, like the word “thing”. Once you realize that, the rest of the book is fairly easy, especially for a book written a long time ago.

            Also keep in mind, Berkeley was a bishop and yeah, he gives some arguments for the existence of god, but none of these are especially interesting or novel. The god stuff alone would never have made him worthy of reading centuries later. Everyone fixates on that in a way I find really strange. His main contribution to philosophy is his criticism of Matter in the absolute sense. Picking on him for the god stuff is kind of like picking on Descartes for the god stuff, which hardly anyone does since it comes across as sophomoric (and it is), so why don’t they treat B the same way?

            “But it just seems simpler to accept there’s a reality outside of us that impinges on our experience.”

            Based on the examples you gave, Berkeley doesn’t deny there’s a ‘reality outside of us’ in the sense you’re talking about. He’s not a solipsist, though many try to make him out to be one. He points out that what we think of as ‘reality outside of us’ isn’t really ‘outside of us’ in the extreme sense. Again and again he points out that the Matter he’s criticizing is not the reality of normal people, and doing away with Matter would not change anything about life as we experience it. He considers Matter a nonsensical “abstract idea” which we never actually experience in any way whatsoever.

            You could say what he’s criticizing is Kantian noumena.

            “The other blocker is I can’t see a break between the logic used to deny external reality and denying other minds, which slides us into solipsism.”

            The logic used to deny external reality comes from Cartesian skepticism, which relies on a belief in matter and mind as two distinct substances. When Berkeley calls into question the absolute existence of matter, he’s denying the existence of two distinct substances. There’s only one thing now: mind, and still, common sense comes out of this unscathed. There’s no reason to deny the existence of the world, a la Descartes, and certainly no reason to question the existence of other minds—especially since reality depends on them. (On that note, Berkeley calls “mind” “Spirit”, but he doesn’t mean ghosts or anything weird like that.)

            Anyway, I hope you like that video on Kant. I found it helpful myself.

          13. Thanks for the video! I’d heard that distinction before: ontic vs epistemic idealism, but had totally forgotten about it. And I used to be pretty good with remembering to ask whether a philosophical position was ontic or epistemic. Many views that seem dubious ontologically are more plausible in an epistemic form. Epistemic idealism is a good way to describe Kant’s version, although like the presenter said, the word “idealism” is probably misleading in epistemic cases.

            Part of the problem is philosophers aren’t always clear which camp they’re in, with some seemingly intentional in the ambiguity. (The fact that the presenter had to say “probably” for the list of philosophers in the epistemic camp fits.) At this point I probably come across as a broken record ranting about ambiguous language, so I’ll stop there.

            On Berkeley, I totally get what you’re saying about the secondhand accounts. And I forgot about his role for God. My blockers really apply more to garden variety idealism (to the extent there is a garden variety). I’ll wait for your post to comment further, particularly since what I say may be very different after reading it.

  3. To quote a famous philosopher, “The philosophers have only interpreted the world, in various ways. The point, however, is to change it” (Karl Marx).

    Causal (and at the micro level, interactional) completeness is great for understanding and interpretation. But philosophy of mind touches on issues we deeply care about – pain and joy, to name two examples. The point is to change the world to have less pain and more joy (among other things).

    If some radically alien intelligence has a goal to get rid of certain states – it tries not to see prime numbers of trees in its surroundings, say – I don’t immediately share that goal. I’ll rate that state as neutral. That could change if the radically alien being and I could become friends (because friendship carries another basket of values). But until then, they’re on their own. Mammals and birds, by contrast, are already on the same page with me; their aversive states, I feel for.

    1. It seems like understanding the world reveals a wider set of options for changing it. And trying to change it without that understanding increases the chances of failure. At least, that’s the story I tell myself to justify learning about these topics. 🙂

      Interestingly, if you put a monkey in a red room, they will get anxious, maybe even eventually freak out. On the other hand, if you put another animal that doesn’t have trichromatic vision, or vision not tuned to be alerted to the color of ripe fruit, they might be fine. And something that smells incredibly gross to us might be a dung beetle’s lunch. I suspect an alien’s affordances would be different in ways that make these other examples look minute.

    2. My last comment doesn’t really get to the point. Here’s the rest.

      Our concepts, including “pain” and “joy”, come from a history of interactions between language-users and their environment, which fixed their meanings. The evolutionary ancestral environment selected for concepts which sufficed for our survival, regardless of their (non-)utility for general scientific explanations of all life in the universe.

      Our goals are our goals, even if they might seem narrowly focused and peculiar when situated in the larger universe. We can, of course, re-evaluate our goals, but we have to use our goals/values to do it. So it’s not clear how far we could ever budge from our original values, unless we make wild mistakes in our reasoning and fail to notice it.

      I was in the middle of writing this when Mike replied to my last comment, but I’m hoping this will serve as enough of an answer, at least to provoke a new and different critique of my views.

      1. Paul,
        I think I’m on board with everything you say here. Do you see it as contesting the points I made in the post, or immediately above? Or is it just meant as a supplement?

        Just trying to make sure we’re having the same conversation here.

        1. Yes, potentially, depending on what you mean by “functionalist” theories of consciousness. I think the peculiarity of human mind-related goals (avoid pain, seek joy) blocks Good Old Fashioned Functionalism. But your idea of “functional” seems to be more generic, where a wider variety of activity counts as “functional”. On the other hand you seem to want to reject out of hand any requirement for a particular kind of substrate. E.g., organic vs silicon-based. Even though it’s certainly possible to lay out functional differences (that a physicist would understand) between carbon and silicon.

          So, everything depends on where you draw the “functionality” line and why. If you’ve laid that out carefully somewhere, I either missed it or just didn’t grok it.

          1. We’ve talked about the “good old fashioned functionalism” thing before. I think last time I asked what it referred to, you cited some of the history sections in the SEP Functionalism article, but I don’t see any fundamental break there. There are a lot of variations of functionalism, but I think they all share a common core, that the mind is as the mind does.

            On substrate, I don’t reject “out of hand” that the substrate makes a difference, or even that it might provide something irreplaceable. But I need reasons or evidence for it, neither of which ever seems to be provided. The closest I ever get is something like, “The only systems we’re currently aware of are biological.” But that statement once applied to any system capable of doing calculations, playing chess, recognizing faces, or many other things. So I can’t see it as compelling. Totally open to learning about any other reasons.

            You asked me once if structural realism and functionalism were the same. I think they are, just arising from different fields. Let me know if that doesn’t answer what you’re asking.

          2. Earlier, about Good Old Fashioned Functionalism (GOFF), I said that it goes “one level down” from behavior and stops. GOFF advocates believe that nature and the sciences come in layers: physics, chemistry, neurology, psychology. And that each is highly independent of the others, so that you don’t need much neurology to do good psychology, and likewise, that psychology is substrate-independent.

            This idea, that sciences at “different levels” are highly independent, is controversial, and you might be one of the people controverting it. I have a new, broader criterion for being a relatively traditional functionalist: the Unfolding Argument. This is a sufficient condition, but maybe not a necessary one. Anyone who accepts the Unfolding Argument is a traditional functionalist. More generally, anyone who uses the criterion of “producing the same output”, without regard to internal causal structure, is a traditional functionalist.

            On the other hand, if all you’re really advocating is structural realism in the philosophy of science sense, I don’t think that merits the name “functionalism” as used in philosophy of mind. Structural realism is compatible with the Type Identity Theory of phenomenal states, which is a standard opposing view to Functionalism.

          3. On the relationship between functionalism and structural realism in contemporary philosophy, you might find this quote that Tina and I were discussing interesting.

            We have, in effect, moved to functionalism about solidity. Functionalism in philosophy is a view that understands a phenomenon in terms of the role that it plays. Here, the key role for solidity is resisting penetration. If an object plays this role, it’s solid. To use a common functionalist slogan, solidity is as solidity does.

            Functionalism got started as a view in the philosophy of mind: The mind is as the mind does. But it’s applicable across any number of domains. For example, we’re all functionalists about teachers: To be a teacher is to play the role of teaching students. Teaching is as teaching does. We’re all functionalists about poisons: To be a poison is to play the role of making people sick. Poison is as poison does.

            Functionalism can be seen as a version of structuralism, where the emphasis is put more squarely on causal roles and causal powers.

            Chalmers, David J.. Reality+: Virtual Worlds and the Problems of Philosophy (pp. 428-429). W. W. Norton & Company. Kindle Edition.

            On the Unfolding Argument, I see no reason a structural realist can’t avail themselves of it. The relevant relations are between inputs, outputs, and portions of the system. In that sense, the point of the argument is that any recursive structure can be unfolded and maintain those relationships (at least in principle). It demonstrates something IIT proponents already admit, that their theory is epiphenomenal and allows for zombies, and is therefore unfalsifiable.
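
            To make the unfolding point concrete, here’s a minimal sketch in Python (my own toy illustration, with an arbitrary stand-in update function, not anything from the IIT literature): a recurrent computation run for a fixed number of steps can be rewritten as a purely feedforward chain with the same input-output behavior.

            # Toy illustration of the unfolding idea: a recurrent update applied
            # for a fixed number of steps can be "unrolled" into a feedforward
            # chain that preserves the same input/output relations.
            def step(h, x):
                # Stand-in for one recurrent update; any deterministic function works.
                return (3 * h + x) % 17

            def recurrent(xs, h0=0):
                h = h0
                for x in xs:  # output fed back as the next input: the recurrence
                    h = step(h, x)
                return h

            def unfolded(xs, h0=0):
                # The same computation as three stacked feedforward copies of step,
                # with no feedback loop anywhere.
                return step(step(step(h0, xs[0]), xs[1]), xs[2])

            assert recurrent([4, 9, 2]) == unfolded([4, 9, 2])  # identical behavior

            On a purely input-output criterion, nothing distinguishes the two implementations, which is all the argument needs.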

            Are there structures and relations underpinning those relevant ones? Absolutely. No functionalist will deny it. The question is whether those underlying structures can change and lead to the same higher level relations. And again, it’s conceivable the answer could be no, but for that answer to be logical, we need to identify what about those lower level structures and relations makes them irreplaceable.

          4. I’m having the same problem with functionalism as you see from my comments above. If there were a specific function with a physical mechanism that could be tested, that would be one thing. But the generic brand of functionalism is circular and always true.

          5. By Chalmers’s definition of “functionalism” I probably count as one. The question is, what *does* mind do? If Ontic Structural Realism suffices for “functionalism” about everything, then I think we’ve wandered off target from the family of ideas that merit the “functionalist” label (despite fitting the root word). However, more naturally interpreted, “what mind does” could include direct effects entirely internal to the organism (of course those would have indirect effects outside it). I could be on board with that. Note that it would then seem to be an open question whether/how substrate matters.

            If the layers of science are less independent than the old guard supposed, there is an interesting question about how deep we should go to say “what mind does.” You seem to feel that we should stop at a very high level (e.g. outward behavior) to characterize “what mind does”, even if we need to go deeper to find “how it does that”. Stop me if I’m misinterpreting here.

            You’re misusing “epiphenomenal” when implying that recursive-ness makes IIT epiphenomenal. If recursive structures were epiphenomenal, they would have no effects, including no effects on scientific instruments. Which means neurologists wouldn’t have discovered them.

            I’m also surprised that IIT advocates would admit their theory is ‘epiphenomenal’ – are you sure they weren’t using or implying scare-quotes? Googling “is IIT epiphenomenal” hits on John Sanfey’s article; the abstract states: “In most explanations, consciousness is epiphenomenal, without causal power. The most notable exception is integrated Information Theory (IIT), which provides a causal explanation for consciousness.”

          6. “You seem to feel that we should stop at a very high level (e.g. outward behavior)”

            I guess I didn’t put enough emphasis on my “portions of the system” item above. I keep saying I’m open to substrate dependence if someone can identify reasons for it. I can only say it so many ways.

            In any case, this will eventually be an empirical question. We’ll either be able to recreate the abilities of biological minds in technical systems, or we’ll hit one or more substrate dependencies. Based on past predictions of barriers that we blew past long ago, I’m not holding my breath. But maybe reality will surprise me.

            I did use “epiphenomenal” wrong. Good catch, and my bad. The reference to what IITers admit was the zombie point. Although I guess it depends on which IITer you ask. I’m going off what Christof Koch said in one of his books, that even if a computer reproduced all the capabilities of a conscious entity, it would still not be conscious because it wouldn’t have the right “causal structure” (by which he meant recurrent structures, among other things). So it could cause everything a conscious system could, except consciousness itself; in other words, it would be a zombie.

          7. You are open to empirical considerations, sure, but you seem to put them all on the side of “how mind does it” instead of “what mind does”. The question whether we’re “able to recreate the abilities of biological minds” begs the prior question, which abilities? For example does “the ability to represent sensations using recurrent feedback loops” count as such an ability? This doesn’t *look* like an empirical question, it looks definitional (but then, a Quine sympathizer like me doesn’t take the empirical/definitional “divide” very seriously – which opens up a can of worms – but one that might need opening).

          8. It all ties back to zombies and whether we consider that a productive concept. Let’s say we build a machine and it behaves like a conscious entity long enough that most of us are convinced. But it doesn’t have recurrent processing, or some other implementation detail known to be used in organic brains. If someone insists that means the machine isn’t really conscious, what option is there to convince them it is? Or what option do they have to convince everyone else it isn’t? Or to even adjudicate whether it’s a meaningful question at that point?

          9. Your non-recurrent being has some elements of consciousness but, if recurrent processing is sufficient to explain, say, pains in the paradigm cases (such as mammals and birds), not pains. That’s how words and reference work: whatever explains the paradigm observations, is the referent of the word. If someone wants to dispute that theory of reference, it seems to be possible to bring empirical evidence to bear on it.

            The machine’s admirers should feel free to coin another word that covers both organism pains and robots’ behavioral analogs, and to advocate society moving to that concept. Or to advocate that we skip the new word and just change the meaning (enlarge the scope) of “pains”. Or to propose a better theory of reference and show why it’s better.

          10. From what I understand, sufficiency of recurrent processing already has issues empirically. (Victor Lamme admits it somewhere.) But even if we could establish just its necessity for consciousness in human brains, how can we establish the same necessity in every implementation of consciousness? Human brains also seem to need key anatomical structures for consciousness, structures not present in cephalopods, for example. Do we deny cephalopod consciousness because those structures are absent?

            I think the answer is that what matters is what these structures and processes are doing, their role within the overall system. This seems like a much more resilient approach, one which recognizes there is more than one way to skin a cat. It allows us to explain why cephalopods might be conscious, and why plants are very unlikely to be. And to evaluate whether an AI is feeling pain in any meaningful sense.

          11. We don’t need to “establish the same necessity in every implementation of consciousness” – we just need to establish the explanation of the paradigm cases. If we are unsure whether a certain clear but smelly liquid is water, after we find that the paradigm waters are all explained by being H2O, we just find out what that smelly sample (mostly) is. If it’s mostly H2O, it’s mostly water. If it’s entirely CH3OH, it wasn’t water at all. What we definitely shouldn’t do is declare that everything that has the easily detected behaviors of water is automatically water.

            Cephalopods are on the hairy edge of “paradigm” animals for pain. But, given that they have neurons that signal injury to a brain, that their neurons respond to some of the same analgesics, and they react with affect, not just knee-jerk reactions, it sure looks like they share enough of the same mechanisms that explain pain.

            As for consciousness in general, it’s out-and-out clear that they have it. Their eyes and vision systems work on similar principles, and visual awareness is a form of consciousness. More generally, intentionality (such as beliefs that are about something) is a form of consciousness, one which it is clearly possible to implement in robots. See my “two cheers for functionalism” post on my blog.

          12. “As for consciousness in general, it’s out-and-out clear that they have it.”

            So if a machine demonstrates similar attributes and capabilities you’ll consider it conscious? Or will you use paradigm case arguments to deny it? Paradigm case arguments seem like definition by example, a tactic that, to me, signals how amorphous and incoherent the topic under discussion is. Which of course fits the topic of consciousness to a tee.

            To me, the only fact of the matter is what capabilities a system has, and whether the majority of us find it productive to apply a theory of mind to it. If there’s a “true” consciousness besides that, I can’t see any way for you and me to establish it for each other, much less for other species or systems.

          13. If a machine demonstrates both affect – which is a matter of internal states that are monitored by executive functions and influence approach and avoidance – and any modality of perception, I’ll call it conscious. Here perception just implies sensors and world-modeling. If it only has perception, but no affect, I’d avoid the word “conscious” and just say instead “it can see where it’s going” or whatever.

            All definitions bottom out in ostensive definition, AKA definition by example. “Despite its pretensions, the dictionary is no more than a pedantic and overexacting thesaurus.” https://aeon.co/essays/why-meaning-is-more-sunken-into-words-than-we-realise

          14. On ostensive definitions, that seems true for relatively simple concepts. It’s hard to define basic words except in that manner. But complex concepts can productively be defined in simpler terms.

            Of course, many philosophers insist consciousness is simple and basic. But to maintain that notion, they have to divorce it from any functional mechanisms, which in my mind makes the resulting concept easy to dismiss from our ontology. When we focus on a causal version of consciousness, we have something complex that can be described in simpler terms. But like all complex concepts, there’s no one true definition.

            Dictionaries and thesauruses are obviously related, but if they’re the same, you have to wonder why we have both.

          15. It’s important to distinguish between simple concepts vs simple objects or processes. Conceptual simplicity depends on your epistemic situation. To most human beings, “apple” is a simpler concept than “carbon atom”; this is a terrible guide to the complexity of the two objects. “Pain” is a simple concept, but philosophers who take that as proof of metaphysical simplicity are either committing a very elementary mistake, or have a long “proof” that probably contains other mistakes.

  4. Interesting post Mike. I’ll certainly never disagree with the theme. Describing things simply and progressively is what I think distinguished intellectuals ought to be doing in order to help overcome their worst failures, and mainly in softer varieties of science where so much remains ignored. It’s just that you’ve also used the theme to celebrate “functionalism”. I consider that word implicit code that hides an important gap regarding one very popular consciousness account. The proposal is that our brains create consciousness by converting certain information into other information, though the resulting information needn’t inform any sort of consciousness medium to exist as such. Observe that these theories are inherently unfalsifiable given there’s no consciousness medium to potentially measure and assess. Of course if they were to “pull a Newton” here and say they weren’t sure what processed brain information might inform to exist as “pain”, for example, then that would be wonderful! In that case people should even be able to ponder the question from the other side, as I’ve done. What element of brain function might be dynamic enough to exist as “hope”, “vision”, “misery” and so on? Unfortunately, however, they seem dead set against the possibility for consciousness to exist as a causal element of reality which processed brain information informs to exist as such.

    Furthermore, notice that people commonly fear the rise of AI today since, if consciousness exists as processed information that needn’t inform anything, there should be no reason why coming, more advanced computational functions wouldn’t be just as conscious as we are, and soon far more intelligent as well. So why wouldn’t such internet-dwelling consciousnesses, armed with computational resources, copy themselves with virtual impunity and let human sentience go just as extinct as we let irrelevant plants and animals go extinct? That’s surely how we’d handle us if we were them.

    As you know, I consider all of this wrong. In a causal world, processed information should only exist as such to the extent that it informs something appropriate. I believe that we’ll empirically determine what sort of causal stuff consciousness is “made of”, though today the task is far more difficult given that most seem satisfied that thumb pain can exist by means of the right marks on paper converted to the right other marks on paper. Or do many today simply not grasp the funky sorts of things that their position mandates? If so then perhaps enlightenment could occur so that resources could be spent better than they currently are. Either way I suspect that a falsifiable theory will become empirically validated to thus close this chapter in the progression of science, and hopefully sooner rather than later.

    1. Thanks Eric.

      Well, this is our standard disagreement. As we’ve discussed before, it seems to me like you’re using the phrase “conscious medium” to refer to something that would play the same role as the old ghost in the machine. You’ve changed the terminology, but it seems like that’s what you’re talking about. You then observe that functional theories omit it, and conclude that they must be invoking magic to cover the omission.

      What I’ve tried to make clear is we have no need of that hypothesis. All the reasons we don’t need substance dualism are the same reasons we don’t need what you’re talking about, what McFadden admits is a physical form of dualism. (He calls it energy/matter dualism, which seems to overlook the energy involved in neural processing.)

      All the parts of consciousness that lead to behavior, including our ability to talk about our mental states, can be explained functionally. I know the strong sentiment that something more is there. Give me a coherent account of it, or a specific account of the causal gaps you say are in functional theories, and we’ll talk.

      As to AI, I’m skeptical of those fears for entirely different reasons. I don’t think we’ll need magic, exotic physics, or anything other than an understanding of the functionality to build conscious machines. I just doubt we’ll find giving them biological impulses useful on any mass scale, and most of the fear is a projection of those impulses.

      1. It’s a simple argument, Mike. Computers function by accepting input information and processing it, though that should be causally irrelevant if the processed information doesn’t go on to inform something appropriate. It needn’t exclusively inform something like a motor or computer screen; it could be the alteration of the computer’s memory, or perhaps the way it processes future information. Just as long as it’s something appropriate. Therefore if whacked-thumb information were sent to my brain and processed, I presume that in order for me to feel it, the processed information would need to inform something which exists as that thumb pain. Whatever physics it informs should exist as me, the experiencer of my existence in general. Though I don’t require a given theorist to tell us what sort of physics such information would need to inform, I do require that they not claim that no such physics is necessary. If causality mandates that all other computation requires processed information to inform something appropriate in order to exist as such, I will not tolerate one convenient exception regarding the woeful subject of consciousness.

        That argument, however, should only be for people who aren’t already convinced that if the right marks on paper were converted to the right other marks on paper, then something there would experience what I do when my thumb gets whacked. For people like yourself, I’m more interested in the sort of evidence you’d demand for a falsifiable theory to be considered likely enough. It seems to me that you haven’t been open to my speculation about empirically testing McFadden’s theory. Maybe the topic of testability has been difficult for you since you back an idea that’s conceptually impossible to disprove?

        1. Arguments for dualism often are simple, Eric, but that doesn’t mean they’re right. And that’s what I think you’re arguing for here, even though I know you dislike that categorization. I’m not trying to aggravate you, just trying to clarify how I see the situation.

          You reject that the normal operations of the brain are conscious, insisting that there must be something else that has to be informed. And you appear to have no interest in providing a causal account of that other thing, of reducing it in any fashion. All you seem interested in doing is finding a physical phenomenon that resembles the traditional non-physical idea of consciousness and identifying consciousness with it. It’s not Cartesian dualism, but a physical version of it.

          EM fields distant from the neural membrane may or may not play a role in mental operations. I tend to doubt it’s a major one, but if they do, I’m convinced it will be an information-processing (causal and mechanistic) one. I suspect that if it turned out to be important, and people started actually isolating the functionality, you’d then think that this new thing couldn’t possibly be the answer, and would start insisting that we needed yet something else. It’s mechanism you seem to be rejecting here.

          Which I think means no actual causal theory is going to be satisfying for you, because exposing causal mechanisms inevitably leads to mechanistic explanation. At least, that’s my current diagnosis of the divide here.

          1. I see that I’ve raised some strong defenses in you, Mike. Sometimes I’m apprehensive about hitting that “submit” button, as I was with the last one. What good does your defensiveness serve me? So instead of going through your assessment of my position, I think I’ll let it rest as is. So I’m a dualist for observing that causal information will not exist as such unless it informs something appropriate, and if scientists were to find that an EM field “functionally” exists as consciousness, then you think your side would win yet again, though I’d reject that resolution because I’m only interested in non-functional answers. Or something.

            Regardless, I believe that the only way for you to ever be convinced that you were wrong on this would be through strong empirical evidence which puts your current allies on the wrong side of science. I just went through our discussion a month ago about how I’d like science to empirically confirm or deny McFadden’s theory. It began here: https://selfawarepatterns.com/2023/12/26/is-ai-consciousness-an-urgent-issue/#comment-173278

            I think you had some misconceptions at that time regarding what McFadden’s theory happens to be, as well as how I’d like it tested. Ultimately the discussion may have burned you out. I’ll ask again, however, in case a new start might help us resolve this matter somewhat.

            McFadden of course proposes that synchronous neuron firing, under certain parameters, creates an electromagnetic field which itself exists as all that we see, feel, think, and so on. So this could be the very thing that I consider causally mandated for consciousness to exist, or something appropriate for processed brain information to inform to exist as consciousness. Thus when your thumb gets whacked, perhaps the associated processed information informs the right sort of synchronous neuron firing for you, the experiencer of it, to exist as the resulting EM field. Furthermore, I understand that the only good neural correlate for consciousness found so far happens to be synchronous neuron firing. Also, in modern brain-computer interface machines, brain waves from synchronous neuron firing are used to operate machines. In one experiment a person could motion like they were writing, and the detected brain waves were effectively interpreted as corresponding letters. In a newer one, the brain waves of a woman who was trying to speak were interpreted into the 39 phonemes of the English language.

            I suspect that these sorts of things are observed because all consciousness exists under highly complex EM fields. Furthermore, I’d like scientists to use BCI technology to try to determine if it only works because they’re tapping into consciousness itself. I’d also like opposing sorts of tests to be done, where scientists put transmitters in the heads of test subjects that, to the best of their ability, recreate electromagnetic energies similar to those of standard synchronous neuron firing. Thus a rigged-up person ought to be able to tell us if anything seemed phenomenally strange to them, without otherwise knowing when such energies were being induced.

            So my question to you, Mike, is: if such experiments were to succeed masterfully, how much evidence would it take to convince you that you must have been wrong? Where might that line be? Right now they’ve found a small area where EM detection can be interpreted into words when someone is trying to speak. Maybe next would be a detector which could tell us what color someone sees in a monochromatic field. Maybe next would be brain transmitters that seem to create the experience of “irritation”, though apparently with an energy that in itself shouldn’t alter neural function. How many such discoveries, or perhaps credible opinions of scientists, might it take to convince you that consciousness should actually exist in the form of an EM field?

          2. Eric,
            Your previous response made me feel more comfortable being direct about my issues with your theories. If you can say I’m peddling magic, I can point out the dualistic nature of your views. What’s good for the goose, etc. But the critique is real, and it’s not anything I haven’t said before.

            Identifying consciousness with the EM field isn’t a causal account. More an identity statement. And as I’ve said before, if your model doesn’t reduce consciousness to its working non-conscious components, it isn’t a model that explains consciousness, just one that makes claims about it.

            Regarding testing those claims, sorry, but I don’t have the appetite to go over it yet again. I’ve pointed out the problems, and so have others. It doesn’t seem like you ever come to grips with those critiques. Which doesn’t make me eager to invest more effort.

            Using EM fields to decode neural activity isn’t evidence for EM consciousness. It’s just evidence of that neural activity, through its EM side effects. Ask yourself: what about the evidence would be different if the EM theory is right vs. wrong?

          3. Mike,
            McFadden’s theory actually does reduce consciousness to working non-conscious components. On his model no material element of the brain is conscious, though the proper sort of synchronous neuron firing creates an EM field which itself resides as consciousness. And even though it theoretically does what you’ve said it needs to do in order to explain consciousness, I’ve always said that the theory doesn’t explain consciousness any more than Newton explained gravity. Even if McFadden’s theory were to become just as accepted as Newtonian mechanics, we’d still want to know why causality functions like that. Nevertheless, scientific acceptance of his theory should be no less monumental than any other paradigm shift in science.

            On your supposedly peddling magic, what I actually try to say is that you’ve joined a community which doesn’t grasp that its consciousness position depends upon a causality void. This doesn’t roll off the tongue quite as well, though hopefully it provokes less defensiveness. And I agree that I’m a causal dualist in the sense that I suspect consciousness, like many other things, resides on the energy side of E=mc^2.

            I’ve been delaying a full post on the implications of BCI. Nevertheless, “what about the evidence would be different if the EM theory is right versus wrong”? No one should dispute that if consciousness does reside as an EM field, then it should inherently be possible for such energy to be interpreted in ways which correspond with someone’s consciousness. That’s a given. If such EM fields instead just reside as products of brain computation, however, then surely we shouldn’t expect consciousness to be interpreted from such fields, any more than a standard computer’s EM field should tell us what it’s computing. In that case I see no reason for the first to correlate with the second.

          4. Eric,
            I don’t see CEMI as having anything like the explanatory power of universal gravitation in Newton’s theory. Newton was able to predict the motions of the heavenly bodies with the concept. What problems does equating consciousness to an EM field solve? Does it explain color or object discrimination? Movement planning? Goal assessment? I can’t see that it adds anything to mainstream neuroscience, aside from a just-so story for people craving something resembling dualism, or for solving pseudo-problems which only seem to exist so people can posit exotic solutions.

            Your claim about a void in causality would carry more weight if you could identify where this gap is. Since no theory currently has a full causal account, this could be at a high level. But simply saying consciousness isn’t there without it isn’t identifying a causal gap; it’s just begging the question in favor of the form of consciousness you believe in, a form I don’t see any evidence for.

            “then surely we shouldn’t expect consciousness to be interpreted from such fields any more than a standard computer’s EM field should tell us about what it’s computing.”

            Often when someone uses the word “surely”, they’re trying to rush past a point they can’t defend. Tell me why you think this is true. Why is it a given that, if CEMI is false, we wouldn’t be able to use the EM radiation from neural activity (which is incredibly faint) to learn things about that activity?

            BTW, the comparison to computer hardware makes me wonder if you really understood this post: https://selfawarepatterns.com/2023/10/21/the-unity-of-storage-and-processing-in-nervous-systems/

          5. Mike,
            If you’d like to rank the significance of scientific acceptance of gravity’s truth versus consciousness as an EM field, then you’d first have to “go there”. Talking about your conception of the motivations of proponents will not help you grasp what a truth of EMF consciousness would mean to humanity. You might even develop one or more thought experiments that portray any silly things that EMF consciousness might imply. That’s the sort of thing that I tend to do when I “go there” regarding proposals that I consider wrong. Might you even top a notable thought experiment that I’ve developed? The only way to know would be to make that journey and start digging.

            When I “go there” on this, however, I smile. The more I dig, the more sense the position makes to me. As you know, there are two different tiers of science: a “hard” kind which doesn’t address mental or behavioral topics, and a “soft” kind which does. Science models “non-us” effectively and “us” ineffectively. If it’s true that we ultimately reduce back to brain-created electromagnetic energies, however, then such acceptance and exploration ought to be a monumental strike against the essence of where science has continually failed. Not only should biological consciousness science become “hard” rather than “silly” (and should EMF consciousness be true, as we’re now speculating, past silliness would be amplified), but psychology, psychiatry, sociology, and so on should finally gain a solid enough foundation from which to effectively build.

            My problem is with conceptions of consciousness where it’s thought to exist as information processing in itself, so that the processed information doesn’t inform anything to exist as that consciousness. I consider them causally incomplete. If anyone would like to justify the existence of consciousness as processed information alone, then they might at least provide a second example of processed information that exists as such independently. To me it seems all too convenient to say that consciousness resides as the only element of reality that can exist while also having no substrate from which to exist.

            On brain-computer interfaces, it’s amazing to me that this sort of technology is now getting media attention, though people in general aren’t proposing that it might only work because the very thing that these machines detect informationally (EMF through synchronous neuron firing) exists as consciousness itself. Of course Elon Musk is big on this technology, though he’s squarely on your side with the belief that consciousness exists as processed information that resides without an informed medium.

            If my point is taken that this sort of technology should certainly be possible if consciousness happens to be made of the same sort of thing that BCI machinery detects, what I question is how it could be possible otherwise. As I understand it, there isn’t a strong consensus on why brains sometimes fire neurons synchronously. But it seems to me that if these synchronously amplified fields do not exist as consciousness specifically, then they shouldn’t tell us specific things such as what a person is trying to say. Given that 86 billion neurons should just be the tip of this iceberg’s complexity, the possibility for such technology to otherwise succeed just doesn’t seem logical to me. My sense is that researchers today have no idea why this technology sometimes works and other times doesn’t. Apparently they’re still playing it by ear.

  5. Re: the theories of consciousness aspect:

    The impetus you describe, the desire for causal closure, is essentially what drove me to my theories. I started with “Consciousness is almost certainly information processing”, but then I needed to describe exactly what that means, step by step, with causal explanation.

    Interestingly, the requirement for causal explanation led me to a better appreciation of the meaning of causality, and ultimately back to Aristotle, who I think did a much better job with it before it got lumped into “cause and effect”.

    Anyway, I think this path gave me an appreciation for the structuralism/functionalism stuff. First, you need to understand the role of information as correlation (a.k.a. mutual information). This correlation is abstract, not physically measurable, but is determined by the structure (constraints) of physical interaction. To get functionality, you need to add another of Aristotle’s components: final cause, i.e., goals. Again, goals are abstractions, and not physically measurable, but are determined by the physical systems involved in creating/organizing the interaction we want to describe as “functional”. So, in my perhaps flawed understanding of structuralism, the structure of an interaction explains the informational result, but to get the functional part you need an extra explanation of why the interaction was set up as it was.
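
    To make the correlation part concrete, here’s a minimal sketch of information as correlation (code and numbers entirely my own toy, using numpy; not any real measurement): two variables with a shared interaction history carry mutual information about each other, while two with no shared history carry none.

    ```python
    import numpy as np

    def mutual_information(p_xy):
        """I(X;Y) = sum over x,y of p(x,y) * log2(p(x,y) / (p(x)p(y)))."""
        p_x = p_xy.sum(axis=1, keepdims=True)  # marginal of X
        p_y = p_xy.sum(axis=0, keepdims=True)  # marginal of Y
        nz = p_xy > 0                          # skip zero cells to avoid log(0)
        return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

    # A joint distribution where interaction has tended to align X and Y,
    # versus two variables that never interacted.
    coupled = np.array([[0.45, 0.05],
                        [0.05, 0.45]])
    independent = np.array([[0.25, 0.25],
                            [0.25, 0.25]])

    print(mutual_information(coupled))      # ~0.53 bits: correlated by history
    print(mutual_information(independent))  # 0.0 bits: no interaction, no info
    ```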

    *
    [By the way, I think Max Bennett’s book “A Brief History of Intelligence” is going to help me a lot in taking the step-by-step explanation up through the evolution of the human brain, landing in those current high-level theories.]

    1. It’s interesting how much we overlap yet still diverge. I just had a conversation with someone (well, it might still be going on) about Aristotelian causality. I still see his other causes as redundant in the modern understanding of science. Which isn’t to say that causality is simple, just that the modern categories don’t seem to break down the same as his.

      My take on information still sees it as tightly bound to causation, although I vacillate on whether “information” is a snapshot of causal process, or just causation itself. But I do agree that to be informed is to be correlated with something. I think what makes correlation concrete is the shared causal (or interactional) history between the correlated systems, which I think agrees with what you’re saying about physical interaction.

      “Functionality” has an engineering meaning that matches what you say about needing to bring in goals. But philosophical functionalism is generally broader than that. Which makes the name “functionalism” unfortunate, but as with so many other poor names, we’re stuck with it.

      I listened to the first half of the podcast today. I should finish it tomorrow. His take does sound interesting, and it resonates with the stuff I read in other books on the evolution of minds. I’m a little suspicious of “entrepreneurs” wading into this stuff, because too often they’re out to sell something, but I’m not detecting that, at least not yet.

      1. The problem with Aristotle and causation is that people talk about four “causes” instead of four aspects of causation. They’re all aspects of one thing: physical interaction. I would be interested in hearing about the original Greek to see if separate “causes” is actually the best translation.

        I don’t know what you might mean by suggesting information may be causation itself. I find it much easier to simply note that physical interaction produces things which are correlated because such interaction follows rules (physical laws). Everything about information follows from that, and I mean everything.

        There are two main ideas associated with “function”: the mathematical one (which maps inputs to a single output) and function as purpose, and I think “functionalism” sometimes refers to one and sometimes the other. I’m not sure those using the term always make clear which they are referring to. (They may not know themselves.) Strictly speaking, philosophical functionalism entails both, but I’m not sure anybody understands that.

        I think one philosophical issue with functionalism derives from the problem some philosophers have with ascribing teleology to physical systems, since goals seem to require teleology. They should get over that. Some have resorted to using the term teleonomy, but then they treat that as somehow different from having “real” goals, which is unfortunate.

        Yes, different people will have different definitions of “goal”, so here’s mine:
        A system has a goal if
        1. There is a state of the world such that
        2. the system can recognize a deviation from that state, and
        3. initiate action to move the world towards that state.

        I think this understanding of information and goals gets you (the basis of) everything you need for consciousness.
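
        And since a definition reads differently to different people, here’s that three-part definition as a minimal runnable sketch (a hypothetical thermostat of my own devising, not a model of anything biological):

        ```python
        # 1. a target state of the world, 2. recognition of deviation from it,
        # 3. action that moves the world toward it.
        class Thermostat:
            def __init__(self, goal_temp):
                self.goal_temp = goal_temp          # 1. a state of the world

            def deviation(self, world):
                return self.goal_temp - world.temp  # 2. recognize deviation

            def act(self, world):
                error = self.deviation(world)
                if error > 0:
                    world.temp += 1                 # 3. initiate action that
                elif error < 0:
                    world.temp -= 1                 #    moves the world there

        class World:
            def __init__(self, temp):
                self.temp = temp

        world = World(temp=15)
        system = Thermostat(goal_temp=20)
        while system.deviation(world) != 0:
            system.act(world)
        print(world.temp)  # 20: the goal state has been reached
        ```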

        *

        1. No idea on Aristotle, but it’s definitely true that we interpret the writing of ancient thinkers through our own lens, which itself is heavily tainted by the lens of the translators. I remember podcaster Stephen West once pointing out that Thales’ view of everything having souls could just as easily be translated as everything having forces. But even that observation is imposing a distinction we have that may not have existed for Thales himself.

          On causation (or interaction) and information, it’s just that I think interaction is the more fundamental relationship. But (contrived cases aside) we can’t have correlation without interaction, so I can’t see any real difference here between us.

          On meanings of “function”, the etymology of a word used to describe a philosophical position isn’t authoritative about that position. The world might be a simpler place if it was, but it just isn’t the case. The usual definition of functionalism is that the mind is defined by what it does rather than any particular material or essence. I’ve noted to others in this thread that functionalism and structural realism can be seen as the same view. (Which fits since Chalmers’ chief complaint about functionalism is that it only explains structure and relations.)

          On goals, I think the right way to think of it is that we’re dealing with something emergent. Which is just to say that certain models become useful for us at higher levels of organization. Talking in terms of goals is useful in most of biology, although we have to remember that sometimes purposeless features will bleed through. And often we can envision those goal states when a simpler system, like a plant, doesn’t, but still works toward them. In that case, it’s helpful for us to think of the plant as striving for the goal, as long as we remember it’s a conceptual crutch.

          Finished listening to the podcast this morning. Interesting. I might pick up his book.

          1. Active inference is the action-in-the-world part, I think. But I’m not sure I understand it completely, especially when it gets to the math.

            At any rate, the consciousness systems need components that ultimately refer back to the internal biological states or there is no basis for taking any action.

          2. I was thinking about that. For something doing active inference, the goal state may not be so much the external world state as the internal measurement of that world state. That sounds weird, so to say it again: my goal might be to internally measure the outside temperature to be X, regardless of what the temperature actually is. It’s just that to get that measurement, it’s usually easiest to actually make it that temperature.
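
            A toy numeric sketch of what I mean (my own contrivance, not the actual active-inference math): the “goal” is a target internal reading, and acting on the world is simply the easiest way to make the reading match it.

            ```python
            target_reading = 20.0   # goal: *measure* the temperature to be 20
            world_temp = 15.0       # what the world actually is
            reading = 15.0          # internal measurement of the world

            for _ in range(100):
                error = target_reading - reading
                if abs(error) < 0.01:
                    break
                world_temp += 0.5 * error  # act: change the world (run a heater)
                reading = world_temp       # perceive: reading tracks the world

            print(round(world_temp, 2))  # ~20: the world was changed so that
                                         # the internal reading hits the goal
            ```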

            *

        2. Sorry to pop in here, but I was scrolling down to the little comment box and the word “Aristotle” caught my eye.

          I asked my husband about your question regarding the original Greek word Aristotle used for “cause”, and he tells me it’s “aitia”, which means something like “responsible” and derives from a legal context. He says he wouldn’t call the four causes “aspects” of causation, but just “four causes”, though he’s not sure what the difference is. He says one thing can (and in some cases probably should) be explained by more than one or even all of the causes.

          Teleology is just something that became unpopular, I think, but now we’re stuck with a rather paltry Cartesian concept of causality… or at least the idea that efficient cause is all we need is a leftover from that time. I think Aristotle’s four causes would be extremely handy if we could just take them seriously.

          1. This is awesome, thanks. And it’s very much what I was expecting (ok, hoping for). It seems to indicate that different things share in the responsibility of causation, which is one thing: physical interaction.

            Every physical interaction can be described as input->mechanism->output. We can say “when the mechanism is presented with the input, it produces the output” (Aristotle’s first three causes). But it would be just as valid to say “when the input is presented with the mechanism, it (the input) produces the output”. That’s why the input and the mechanism are usually lumped together as the “cause” and the output is the “effect”.

            So why should we choose input->mechanism->output? Because then we can bring in teleology. A system with a goal can create (cause!) a mechanism for a purpose: inputs->system->mechanism. The purpose explains why the mechanism exists in the first place. So the final cause is not part of the material->efficient->formal structure; it comes in as the necessarily prior goal->system->efficient structure.
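
            In toy code (a made-up illustration of mine, not a formalism), the symmetry and the extra teleological step look like this:

            ```python
            # One physical event, two equally valid readings of it:
            def apply(mechanism, inp):     # "mechanism presented with input"
                return mechanism(inp)

            def apply_to(inp, mechanism):  # "input presented with mechanism"
                return mechanism(inp)      # same event, same output

            def double(x):
                return 2 * x

            assert apply(double, 3) == apply_to(3, double) == 6

            # Final cause: a goal explains why the mechanism exists at all.
            def build_scaler(goal_factor):        # a goal-holding system...
                return lambda x: goal_factor * x  # ...creating a mechanism

            triple = build_scaler(3)  # a mechanism created for a purpose
            print(triple(4))          # 12
            ```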

            Does this make sense to anyone besides me?

            *

  6. “Yes, it means accepting other “worlds”, but they’re currently untestable and so seem irrelevant”.

    We can dream up all sorts of explanations that would be causally complete but untestable. Wouldn’t “God” be causally complete and fill all gaps? Throw it into QM and we can explain all measurements with it.

    We can always find a function for something because whatever something does is its function. Functionalism explains everything, so it explains nothing.

    1. On Everett, as I’ve already explained multiple times, the theory itself is just QM without the wave function collapse, and that theory is falsifiable.
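
      If a sketch helps, here’s a toy numerical illustration of that point (entirely my own, assuming numpy, with a CNOT gate standing in for a measurement interaction): treat measurement as ordinary unitary evolution, and the superposition never collapses; you just get correlated branches.

      ```python
      import numpy as np

      # System qubit in superposition; apparatus pointer in its "ready" state.
      plus = np.array([1.0, 1.0]) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)
      ready = np.array([1.0, 0.0])              # |ready>
      state = np.kron(plus, ready)              # joint state before measurement

      # Measurement modeled as a unitary interaction (CNOT): the pointer comes
      # to reflect the system's state, with no collapse postulate anywhere.
      cnot = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]], dtype=float)

      after = cnot @ state
      print(after.round(3))  # [0.707 0. 0. 0.707]: two correlated "branches"
      ```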

      The statement about functionalism makes no sense to me.

      But it’s always easier to engage in polemics against caricatures of ideas we dislike rather than with the actual ideas.

      1. You’re the one who wrote it was “untestable”.

        Please, enlighten me about how functionalism explains anything about how consciousness works as opposed to what it does.

      2. Consciousness is a bunch of functions – vision, hearing, intelligent movement, etc. – and its aspects or units (whatever) all perform functions in the larger functions. Pick and choose whichever ones you like, however you want to combine them. (That’s another issue in itself.)

        So, based on this, any entity that can emulate the set of functions we’ve picked would be conscious.

        Based on this, a mechanical robot that could do these functions would be conscious.

        Is that functionalism in a nutshell? Or am I not understanding it?

        What if I add the qualification that a living organism with a brain must do these functions? Seems reasonable, right? After all, we can only be sure (if we’re sure of anything on this topic) that living human organisms like you and me are conscious. So the entities that can do these functions must include living organisms, or at a minimum human beings. Now, humans don’t have circuit boards inside their brains, so the living organism must work differently from the robot.

        That’s the problem – explaining how the living organism does these things.

        1. It is the idea of functionalism that the mind is more about what a system does than what it is. So yes, if a technological system can do what a conscious biological one can, then, under this view, it makes sense to say it’s conscious. (Unless we’ve defined “conscious” to only apply to certain types of systems.)

          Of course, we’re unlikely to have completely non-conscious machines one day, and then have the first conscious one roll out the next. You noted that consciousness is a collection of capabilities. So for a long time there will be machines that have some of those capabilities but not others. (I also suspect our definition of “consciousness” will evolve during this process.)

          I don’t think functionalism rules out that some of those capabilities might only be possible with a particular substrate. But to your question about whether it’s reasonable to say only biological systems can perform the functions, that depends on our reasons for the stance. I noted to Paul that there is a long list of capabilities that once only biological systems could perform, but can now be done with technology.

          So the question is, what would it be about these capabilities in particular that limits them to biological systems? And what would we consider biological? Would artificial or simulated life count?

          I could see the substrate being a big factor in power efficiency, so it might be that a silicon system can do the functionality but requires a lot more energy. But those are factors that could conceivably be compensated for. The question is, are there any factors, or combinations of factors, that couldn’t be?

          Definitely the question is how biological systems pull it off. And there’s still a lot to learn. But from a functional view, it’s all scientifically tractable.

  7. Theories that are causally complete are indeed better than ones with gaps. But I think the availability of causally complete functional explanations is not a good reason to overlook the ineffable aspects of consciousness. Perhaps the problem should be reframed as that of explaining ‘why some aspects of consciousness cannot be explained’. That would require going into ‘what constitutes an explanation’ and ‘why qualia appear to be outside the range of legitimate explanations’.

    1. Thanks for commenting!

      Before explaining why some aspects of consciousness can’t be explained, we might first start by seeing whether they truly can’t be. Often people give up too easily on this. For example, the redness of red is often put forth as something unexplainable. But we can start by explaining it as the ongoing signal of high salience and distinctiveness for a part of the visual field, one that triggers a wide assortment of associations, such as heat, most of which are below the level of consciousness, except for contributing to an overall impression of a rich experience.
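
      As a deliberately crude sketch of that functional framing (a toy entirely of my own, not a neuroscience model), the idea is a high-salience signal fanning out into associations, only a few of which ever reach report:

      ```python
      # Hypothetical associations, for illustration only.
      ASSOCIATIONS = {"red": ["heat", "blood", "ripeness", "warning", "urgency"]}

      def process_signal(signal, salience, report_limit=1):
          """A high-salience signal triggers associations; few reach report."""
          triggered = ASSOCIATIONS.get(signal, [])
          return {
              "signal": signal,
              "salience": salience,
              "reportable": triggered[:report_limit],  # reaches report
              "subliminal": max(0, len(triggered) - report_limit),
          }  # the subliminal remainder contributes the impression of richness

      print(process_signal("red", salience=0.9))
      ```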

      1. Your explanation of the ‘redness of red’, if I understand correctly, is certain physical parameters (electric potential / concentration gradient) crossing a critical threshold and triggering further physical actions. I would call that an objective representation of red-experience, different from the private experience of redness.

        It could be argued that private experiences are irrelevant, most likely some kind of illusion. There is no way to explain my private experience of redness to another person unambiguously. How can I be sure it is not an illusion? I have no access to another person’s ‘redness-private experience’ for validation.

        So let us go with ‘phenomenal consciousness isn’t real’. But then how about ‘mind’? There isn’t any ‘mind-stuff’ in the brain. Is mind (apart from the grey matter inside the skull) real?

        That would lead to further problems. Knowing involves mental activity. How are we able to extract reliable knowledge from sensory inputs if minds aren’t real? What exactly is the ‘I’ directing the knowing process?

        I think ‘phenomenal consciousness isn’t real’ would lead us to doubt the validity of scientific knowledge itself. A possible way out, in my view, is to assume some aspects of consciousness can’t be explained (in objective terms), and look for a reason why it might be so.

        1. I actually take my explanation to be a functional one for the experience of redness, or at least the beginnings of one.

          The word “private” can have different meanings. It can refer to the particular perspective of an observer and the particular way they process information in light of their unique background. But “private” can also mean something far more severe: inaccessible to third party observation, even in principle. I think that notion of privacy is taking practical difficulties with the first meaning and hastily generalizing them into fundamental limitations in principle.

          More generally, I do think phenomenal consciousness, in the sense of something fundamental, indescribable, unanalyzable, and inaccessible in principle, is a bad theory. That doesn’t mean consciousness as something difficult to describe or analyze, and that we don’t currently have the technology to fully observe, doesn’t exist. These may seem like subtle distinctions, but they transform impossible problems into merely difficult but scientifically tractable ones.

          I think the mind is a process, or perhaps more accurately, a constellation of interacting processes. In that sense, it’s real and observable (albeit in a very limited fashion with current technology).

          In terms of science, I think the more modest version of consciousness, the functional alternative I noted above, gives us what we need. Of course, all of our sources of information are noisy, gappy, and imperfect. None can be trusted without scrutiny. But that’s why science requires replication and repeatability for observations.

          You can call this view “illusionist”. It broadly fits the label. But I think that label may give too much credence to what are actually just unproductive definitions and assumptions. Which is why I prefer “functionalist”.

  8. Reminds me a bit of Occam’s razor. If you have to add a bunch of ad hoc assumptions to your theory, or your theory raises a bunch of problems and questions that can’t be easily explained away, that’s added complexity. A theory that can explain everything step by step, in a causally connected way, is simpler by comparison. Ergo, Occam’s razor selects the causally complete theory.

      Good point. Another way of looking at it is that causal completeness minimizes opportunities for hidden or unacknowledged assumptions, and excess assumptions are the complexity Occam targets. That’s the challenge, because theories with these gaps often look less complex, particularly if the hand-wavy part matches our biases.
