What is it like to be you?

In a landmark 1974 paper, Thomas Nagel asks what it’s like to be a bat. He argues that we can never know. I’ve expressed my skepticism about the phrase “what it’s like” or “something it is like” before, and that skepticism still stands. I think a lot of people nod at it, seeing it as self-explanatory, while holding disparate views about what it actually means.

As a functionalist and physicalist, I don’t think there are any barriers in principle to us learning about the experience of bats. So in that sense, I think Nagel was wrong. But he was right in a different sense. We can never have the experience of being a bat.

We might imagine hooking up our brain to a bat’s and doing some kind of mind meld, but the best we could ever hope for would be to have the experience of a combined person and bat. Even if we somehow transformed ourselves into a bat, we would then just be a bat, with no memory of our human desire to have a bat’s experience. We can’t take on a bat’s experience, with all its unique capabilities and limitations, while remaining us.

But the situation is even more difficult than that. The engineers hooking up our brain to a bat’s would have to make a lot of implementation decisions. What parts of the bat’s brain are connected to what parts of ours? Is any translation in the signaling necessary? What if several approaches are possible to give us the impression of accessing the bat’s brain? Is there any fact of the matter about which would be “the right one”?

Ultimately the connection between our brain and the bat’s would be a communication mechanism. We could never bypass that mechanism to get to the “real experience” of the bat, just as we can never bypass the communication we receive from each other when we discuss our mental states.

Getting back to possible meanings of WIL (what it’s like), Nagel makes an interesting clarification in his 1974 paper (emphasis added):

But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like <i>for</i> the organism.

This seems like a crucial stipulation. It is like something to be a rock. It’s like other rocks, particularly of the same type. But it’s not like anything for the rock. (At least for those of us who aren’t panpsychists.) This implies an assumption of some degree of metacognition, of introspection, of self-reflection. The rock has overall-WIL, but no reflective-WIL.

Are we sure bats have reflective-WIL? Maybe it isn’t like anything to be a bat for the bat itself.

There is evidence for metacognition in mammals and birds, including rats. The evidence is limited and subject to alternate interpretations. Do these animals display uncertainty because they understand how limited their knowledge is? Or because they’re just uncertain? The evidence seems more conclusive in primates, mainly because the tests can be sophisticated enough to more thoroughly isolate metacognitive abilities.

It seems reasonable to conclude that if bats (flying rats) do have metacognition, it’s much more limited than what exists in primates, much less humans. Still, that would give them reflective-WIL. It seems like their reflective-WIL would be a tiny subset of their overall-WIL, perhaps a very fragmented one.

Strangely enough, the scenario where we connect our brain to a bat’s might actually allow us to experience more of its overall-WIL than the bat itself can. Yes, it would be subject to the limitations I discussed above. But then a bat’s access to its own overall-WIL is subject to similar implementation limitations, just with the “decisions” made by evolution rather than engineers.

These mechanisms would have evolved, not to provide the bat with the most complete picture of its overall-WIL, but to provide whatever enhances its survival and genetic legacy. Maybe it needs to be able to judge how good its echolocation image is for particular terrain before deciding to fly in that direction. That assessment needs to be accurate enough to keep it from flying into a wall or other hazards, but not so accurate that it amounts to a faithful model of the bat’s own mental operations.

Just like in the case of the brain link, bats have no way to bypass the mechanisms that provide their limited reflective-WIL. The parts of their brain that process reflective-WIL would be all they know of their overall-WIL. At least unless we imagine that bats have some special non-physical acquaintance with their overall-WIL. But on what grounds should we assume that?

We could try taking the brain interface discussed above and looping it back to the bat, reflecting the interface’s signal back to expand the bat’s self-reflection. Of course, its brain wouldn’t have evolved to handle the extra information, so the loop likely wouldn’t be effective unless we gave the bat additional enhancements. But now we’re talking about upgrading the bat’s intelligence, “uplifting” it, to use David Brin’s term.

What about us? Our introspective abilities are much more developed than anything a bat might have. They’re much more comprehensive and recursive, in the sense that we can not only think about our thinking, but think about the thinking about our thinking. And if you understood the previous sentence, then you can think about your thinking about your thinking about… well, hopefully you get the picture.

Still, if our ability to reflect is also composed of mechanisms, then we’re subject to the same kinds of “implementation decisions” evolution made as our introspection evolved, some of which were likely inherited from our rat-like ancestors. In other words, we have good reason to view introspection as something that evolved to be effective rather than necessarily accurate, built on mechanisms we are no more able to bypass than the bat can bypass its own.

Put another way, our reflective-WIL is also a small subset of our overall-WIL. Aside from what third person observation can tell us, all we know about overall-WIL is what gets revealed in reflective-WIL.

Of course, many people assume that now we’re definitely talking about something non-physical, something that allows us to have more direct access to our overall-WIL, that our reflective-WIL accurately reflects at least some portion of our overall-WIL. But again, on what basis would we make that assumption? Because reflective-WIL seems like the whole show? How would we expect it to be different if it weren’t the whole show?

Put yet another way, the limitation Nagel identifies in our ability to access a bat’s experience seems similar to the limitation we have accessing our own. Any difference seems like just a matter of degree.

What do you think? Are there reasons to think our access to our own states is more reliable than I’m seeing here? Aside from third party observation, how can we test that reliability?

59 thoughts on “What is it like to be you?”

  1. I think your analysis and conclusion are exactly right.

    My immediate response to reading Nagel’s paper was: surely the experience of e.g. echo-location is fungible. It very likely feels like sight does to us. So in that respect at least, bats are in principle no different from a caver carrying a light on their head-band.

    BTW, I expect you are aware of Fodor’s response in “What it is like to be a rock”!

    And as for our own WIL, many moons ago I wrote a story, a snippet of which went like this:

    [QUOTE]

    “Speaking of gulls and Chinese sages and butterflies…”

    “Fish, actually, if you are referring to Lao Tzu’s famous sophistry.”

    “Whatever! What is it like being a woman?”

    “How in blazes should I know?”

    “Mmm… I rather thought what with you being one… Or have I been misinformed? I know my parents neglected my sex education, but…”

    “No, silly. Look, what is it like being a man?”

    Pause.

    “Hm… Tricky one!”

    “Exactly. It’s that word ‘like’ that’s the problem. I know what it is being ‘me’. And you know what it is being ‘you’. But not having ever been anybody else, how could one possibly know what it is like being ‘me’, ‘you’ or anybody or anything else?”

    […]

    “Like, take that gull for example. What is it like being a gull? The gull has no way of knowing the answer, and neither do you. You’d have to become a gull having first been human…”

    “… and I still wouldn’t know anything, because gulls don’t know.”

    “How, not being a gull, can you…”

    “Oh, shush!”

    “But you are right, I suppose. You need comparable intelligence at both ends. Turning you into a gull wouldn’t do a thing…”

    “Except making a seriously useless gull.”

    “How’s that?”

    “My dear, I have no idea how to fly, and I have no taste for raw fish.”

    “Bah! Being turned into a gull would make you a gull. Did you think such things were somehow separate from the gull’s gullness?”

    [/QUOTE]

    Now, I’ve been told more than once that I am wrong on that, because I am interpreting Nagel’s “like” too literally. But I disagree. I think there is a crucial distinction between “what it is being me” and “what it is like being me” in that the latter presupposes communicability, whereas the former does not.

    If I read you right, that’s essentially your position, isn’t it?

    1. Cool dialogue! I’m onboard with its main points.

      Hey, your interpretation of “like” in Nagel’s phrase is as good as anyone else’s. It’s what he gets for using something so ambiguous.

      On presupposing communicability, not exactly. I think it’s very possible for a bat to have some sort of metacognition without any ability to communicate it, at least other than through its actions. The fact that its behavior makes it so hard for us to establish whether metacognition is present is, I think, a clue about how limited it is, if it’s present. But maybe that’s what you mean by “communicability”? If so, then we might be on the same page.

      I suppose someone could argue that a bat has self reflection even if it can’t show it in behavior. But my question then becomes, how would it have evolved? Natural selection requires traits that make a difference in gene preservation. It could be a spandrel, but if so it’s an expensive one, and it seems highly adaptive in species where it’s more detectable.

      1. My apologies for the length of the below. There is a lot to unpack in a somewhat convincing manner. 🙂

        The dialogue may be cute, but it does make a couple of serious points. To my mind those points are crucial, which is why I decided to quote it.

        Nagel’s “like” may be vague, but I can see no way of rephrasing his question without simultaneously deflating the mystery it is intended to point out.

        Firstly, it presupposes a commonality between experiences knowable only subjectively. Hence the woman in the dialogue counters “What is it like to be a woman?” with “What is it like to be me?”. The assumption of commonality is generally accepted as obvious (an echo of Dennett’s “beware the ‘surely’ operator!”), but as a predicate dualist, I see this assumption as unproven and hence unwarranted. It relies on language, and language disguises our differences in order to enable surface communication.

        Secondly, Nagel’s question presupposes a degree of separability between a subjective experience and the gestalt of the organism involved. Thus the woman’s question “Did you think such things were somehow separate from the gull’s gullness?”, which also repeats the shift from the generic “a gull” to the specific “the gull”. You yourself touched on this difficulty in musing about the inevitable arbitrariness of the choices involved in any putative “cross-wiring” of a bat’s sensory apparatus to human perception.

        To illustrate both points, let’s imagine that our computers are sentient and can communicate with each other (without our being aware of it). Imagine two computers chatting about the pleasure of sorting a large array of numbers. They may muse about how the “feel” of comparing two values differs depending on the size of the gap between them. They may agree on the “ineffable” pleasure of finding the right place to insert a new value. One of them may remark on the surprising fact that a degree of randomisation may improve average performance, which surprises the other one, but after comparing their chipsets (both Intel! :-)), they agree that this is down to the classical trade-off between memory and storage, which was much more relevant in an earlier chip generation. They “part” with the feeling of having really understood each other. But have they? One is running an insertion sort, while the other is constructing a B-tree.
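
        To make that concrete, here’s a minimal sketch in Python (my own toy, with a plain binary search tree standing in for the B-tree): the two routines “report” exactly the same sorted result, while their inner workings have almost nothing in common.

        [CODE]
        # Two "computers" arriving at the same report via different mechanisms.

        def insertion_sort(values):
            """Sort by sliding each value into its place in a growing list."""
            result = []
            for v in values:
                i = 0
                while i < len(result) and result[i] < v:
                    i += 1
                result.insert(i, v)  # "the right place to insert a new value"
            return result

        def tree_sort(values):
            """Sort by building a binary search tree, then walking it in order."""
            def insert(node, v):
                if node is None:
                    return {"value": v, "left": None, "right": None}
                side = "left" if v < node["value"] else "right"
                node[side] = insert(node[side], v)
                return node
            def walk(node, out):
                if node:
                    walk(node["left"], out)
                    out.append(node["value"])
                    walk(node["right"], out)
                return out
            root = None
            for v in values:
                root = insert(root, v)
            return walk(root, [])

        data = [5, 3, 8, 1, 9, 2]
        assert insertion_sort(data) == tree_sort(data)  # identical "reports"
        [/CODE]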

        This tale is only fanciful because it has computers communicating. It is not at all fanciful where humans are involved. In one of Feynman’s books (I think it is “Surely You’re Joking, Mr. Feynman!”) he describes his surprise at discovering that he and John Tukey used completely different neural mechanisms to estimate the duration of a minute. Feynman was counting it off, while Tukey visualised a clock with a seconds hand.

        The phenomenon of aphantasia, well documented and neurologically explored by now, confirms this point. As I think I’d noted in one of my past posts, I have aphantasia to a very strong degree, but it was not until the age of 40 or so that I finally worked out that when people spoke about visualising things in their mind’s eye they were not being merely metaphorical. Language communication obscured that important fact.

        Now, our brains do largely conform to a generic plan, even though startling anomalies are also known. However, the specifics of their wiring are thoroughly individual. Unless one is a panpsychist, believing that a human embryo is “conscious” from the start (from the fusing of a sperm with an egg?), it seems obvious (beware the “surely” operator! :-)) that our self-awareness develops on subtly — or even not so subtly — different wetware. Thus your experience of redness could be equivalent to my experience of “furriness in C# minor”. The comparison is meaningless. The WIL of a gull is not separable from the gestalt of that gull.

        1. I’m onboard with a lot of this. Although to your point about the wiring being different for each person, the question is what difference it makes in experience and in the overall causal stream.

          One thing I’d add is that the way the wiring works changes over time. For example, the ratio of S, M, and L cones changes as we age. For that matter, it can be different in each eye. But if you asked me whether something green I remember seeing as a boy was the same green I see today, I wouldn’t be able to tell the difference. Or usually see much of a difference between my eyes.

          All of which gets back to the old inverted spectrum argument. But that depends on us actually perceiving the difference. If we can’t perceive it despite the changes over time, or in different eyes, it’s hard for me to imagine how it could make a difference between us, at least in cases where it doesn’t make a behavioral difference (such as one person being color blind while the other isn’t).

          1. If anything your eyes example supports my point. Any actual differences between the two eyes may not be fully reflected in what actually gets passed to the vision centres for processing, and any remaining differences are further trimmed down to parallax ones in the massive post-processing. And we know it is massive — e.g. the whole colour-constancy phenomenon proves that (remember “the dress”?). By that time it almost does not matter how different the eyes actually are, as long as their output can be post-processed into useful information. You probably would soon be able to replace one of them with a prosthesis. How’s that for a WIL difference? 🙂

            I am simply suggesting that the same happens between humans. Just as visual centres “know” nothing about actual eye wiring and physiology, so the subjective experience of a human is not accessible on the collective level. What is accessible is behaviour (verbal and physical), and language specifically evolved to facilitate communicating. So it will further discard non-useful information — in effect hiding our differences.

            And those differences are hardly minor. I already gave two examples where apparently the same function was carried out by completely different brain areas. And that’s to do with matters which are relatively easy to articulate. E.g. I now know what non-aphantasic people mean by their mind’s eye. Things get much worse with less articulable experiences. Take pain. Some find pleasure in it, which I find completely unimaginable. And morphine can induce in some people a state in which pain is undiminished, but no longer “painful”. (And some people are apparently born with that as their normal experience and have to learn to avoid non-painful pain or perish.) Totally baffling.

            In short, Nagel needs to justify proceeding from the token (a singular instance) of a particular WIL to any kind of non-fungible collective type. For my money, such an attempt is unlikely to succeed.

  2. The distinction between “overall-what-it’s-like” and “reflective what-it’s like” relies on the idea that it’s actually “like something” to be a rock; it’s just not “like anything” <i>for</i> the rock. That seems like a huge concession to panpsychism or panexperientialism. As a strategic move to question the notion of private experience, it may be forfeiting too much. A safer strategy might be to ask, along with Wittgenstein, whether the concept of “private” even has any meaning in certain contexts. We apply it to experience, modelling as it were two beetles in two boxes where there is only one beetle between us, and then we become confused as to how this model is supposed to work, when I can’t see your beetle and you can’t see mine. Clearly the real problem is that we need a different model. Not that we have one yet, but it’s probably where we should be focussing our efforts, instead of misconstruing Wittgenstein and trying to rescue preconceptions he has brought into question.

    But let’s say that there is an overall-WIL. Rocks might have it, mice almost certainly have it, and no doubt humans have it too. We may speak coherently of this type of WIL because it is open to public observation. By contrast there is a reflective-WIL, and for various reasons it might not align completely with the overall-WIL. The question seems to be whether the reflective-WIL is spurious and unreliable, and therefore has to yield to the overall-WIL. The temptation is to drop it altogether in our analysis of WIL, leaving only the overall-WIL as of any real use.

    The trouble here is that we have already distributed the WIL over two terms. Whatever the reliability of the reflective-WIL, we have conceded its existence. There is something that it’s like <i>for</i> certain things. You can attack its importance, but not the phenomenon itself. So I don’t see how this line of thinking is going to help dissolve the concept of a reflective-WIL, or the whole “private-ineffable-infallible” notion of experience that Dennett (whose thought seems to be lurking here) has latched onto (in an unfortunate misunderstanding of Wittgenstein) in order to argue for an older physicalist model. It just leaves us grappling with the same problems, and arguing over whether the reflective-WIL may be mistaken, or how one could possibly be wrong about what’s in the matchbox.

    1. I think you’re loading more into the phrase “overall what it’s like” than I am. And it’s worth noting that this isn’t what Nagel meant. He meant something stronger, that it is like something for the system concerned. Everyone who uses that phrase tends to cite his 1974 paper, implying that’s what they mean.

      Of course, the issue is that people mean all kinds of things by the phrase. It’s interesting that all the citations go to Nagel, since people don’t always take his meaning, and it existed long before his paper. It shows up as early as B. A. Farrell’s 1950 paper “Experience”. Although Farrell didn’t put the same significance on it that Nagel did.

      I actually wondered if you’d contest that “something it is like for the organism” implies metacognition. I suspect Nagel might resist it. Of course, my challenge then would be, well, what does it imply above and beyond the more basic “like something”?

      Saying it’s like something for rocks but not for rocks themselves, to me, just means that rocks are comparable to other things. So it doesn’t feel like any kind of concession.

      But the blocker I’ve long struggled with for panpsychism is that making it plausible seems to require making it trivial. If we say that anything that is comparable to other things is conscious, then anything that isn’t utterly sui generis is conscious, which seems like everything. But that to me doesn’t seem like an interesting version of “consciousness.” To make it interesting, it has to be a stronger concept, which seems to inevitably result in a narrower scope. At least if we’re going to keep it causally relevant.

      My views overlap a lot with Dennett’s. We’re both “Type-A materialists” in Chalmers’ classifications. So I’m sure there are things here which resonate with stuff he’s written or said. I think he once said something similar about the implementation decisions that would have to be made when linking brains together.

      Most people’s interpretation of Wittgenstein’s beetle analogy is a denial of private experience, but his writing is so obscure, it wouldn’t surprise me there are other interpretations. I’m personally not a fan of writing that requires a lot of exegesis.

      1. Wittgenstein’s writing style is actually quite clear compared to that of many philosophers. It mostly consists of simple sentences with simple words: “Why can’t a dog simulate pain? Is he too honest?” The problem resides in what he’s trying to get across. I happen to be reading Norman Malcolm’s Ludwig Wittgenstein: A Memoir. In the Introduction, Georg Henrik von Wright explains: “[Wittgenstein] was of the opinion — justified, I believe — that his ideas were usually misunderstood and distorted even by those who professed to be his disciples. He doubted that he would be better understood in the future. He once said that he felt as though he were writing for people who would think in a quite different way, breathe a different air of life, from that of present-day men. For people of a different culture, as it were.”

        As you know, I’m all about different ways of seeing. I’ve just embarked on a fascinating volume called Goethe and Wittgenstein: Seeing the World’s Unity in its Variety, with contributions by various professional philosophers. This is not some “woo” book I picked up at a new-age bookshop; I came across it in the philosophy lounge at Dalhousie U, among some other old books they were giving away. Of course, Goethe was also on about a different way of seeing to complement conventional science.

        So what did Wittgenstein mean with his beetle in the box? Almost certainly that it’s the wrong way of seeing. The point of it is not to deny private experience, but to get us to think about how we are looking at the problem.

        I’ll have to come back to your other points later.

        1. A philosopher on Twitter, somewhat facetiously, once remarked that to be a great philosopher, your writing needs to be vague.

          I countered that to be a great philosopher, you need to write in a way that initially seems clear to everyone, but where everyone takes away something different, and then proceeds to endlessly debate the proper interpretation. To really lock it in, late in your career, you must complain bitterly that no one understood you.

          I noted that executed properly, people will be debating your musings for centuries.

      2. I wondered if you might be loading less into “overall-what it’s like” than most people would. If you ask someone what it’s like to be a rock, you would probably just get a puzzled look. You would have to add, “By ‘what it’s like’, I just mean how it behaves.” Then you might get an answer such as, “Well, it’s formidably hard, and it sinks in water,” and so on; but a more probable answer would be, “Are you asking me what a rock is like? That’s not quite the same as ‘what it’s like to be a rock.’ I thought you were asking me how it feels to be a rock.” And of course, that’s what Nagel meant. What he didn’t mean somehow came up, but I’m not sure it was me who brought it up.

        The philosophical legacy of “what it’s like” goes back further than Farrell; in Adventures of Ideas, written in 1933, Whitehead says that Leibniz “explained what it must be like to be an atom” (p. 132). But its use in ordinary speech goes back much further, and it has a clear enough meaning if you don’t overthink it. Again, I’ll quote Wittgenstein: “When philosophers use a word — ‘knowledge’, ‘being’, ‘object’, ‘I’, ‘proposition’, ‘name’ — and try to grasp the essence of the thing, one must always ask oneself: is the word actually used this way in the language-game which is its original home? — What we do is to bring words back from their metaphysical to their everyday use.” Normally when we talk about what something is like, we might be discussing what it was like when you had your first kiss, or what it was like to walk home through a cold drizzle. There are no hard, once-for-all answers, but we all know what is being asked and how to respond, in a hand-waving way. True, a statement like “What it’s like to be a bat,” if taken too literally, leads to the sort of impasse Mike Arnautov points out in his dialogue about seagulls; but really we mean it less existentially, as in, for example, “What might a sense of echo-location feel like? Would it be comparable to seeing with your ears?” The answer may stump us, but the question is perfectly intelligible.

        I didn’t feel the need to go there, but certainly I’d be prepared to contest that what it’s like for something requires metacognition. When we ask, “What is it like to walk in a cold drizzle?” we are inviting metacognition, or reflection, on an experience that probably would not involve much metacognition, or even cognition, at the time. We would not explicitly notice the feeling of wetness on our damp trousers, the relative dryness of our torso under our jacket, the individual drips running down from our hair, the clamminess of the breeze, the greyness of the sky, the squishiness or hardness of the grass or pavement; not anyway unless these things obtruded on our consciousness for some special reason. We would probably be thinking about where we could catch a bus, or what we were planning to write in a paper, or whether there was any cocoa left at home. Nevertheless, when people ask what it was like, we would selectively recall some of these things, hauling them into the light of metacognition for illustration. We would not say, however, “It’s like when you think about what to write in an essay.”

        — “Saying it’s like something for rocks but not for rocks themselves, to me, just means that rocks are comparable to other things. So it doesn’t feel like any kind of concession.” — I confess I’m having trouble understanding what you’re saying in this and the next paragraph, where you express the concern that panpsychism trivializes consciousness. That said, my view is that on the contrary, it gives consciousness quite a special position, one that changes everything. We are left with puzzles about levels of consciousness, or how micro-consciousnesses or proto-consciousnesses combine into more complex consciousnesses. These are still interesting questions. By no means are more advanced manifestations trivialized. But the idea that consciousness permeates everything, even in some metaphysical sense, has the potential to transform our attitude to the world. It moves us away from a certain Cartesian detachment respecting it, and towards a stronger sense of relationship.

        1. One of the things philosophers and philosophy mavens have to be careful about is projecting the intuitions we have after years of reading about this stuff onto the general population. The good news is that when disputes about what the population actually believes come up, it’s something that empirical research can weigh in on. When it does, we find that many of the terms kicked around aren’t even properly folk terms. https://philsci-archive.pitt.edu/21370/1/XPhi%20Consciousness%20-%20Reuter.pdf

          I think that before I had read much philosophy, if you had asked me the difference between “what a rock is like” and “what it’s like to be a rock”, I would have been puzzled, except to see the second as awkward wording. Of course, if you had added “…for the rock” to it, then I would have understood what you meant. Although asking “what it feels like to be a rock” would have been much clearer.

          On panpsychism and triviality: if we’re talking about the consciousness of rocks and protons as something without perception of the environment, emotion, learning, or imagination, then what we’re discussing no longer seems interesting to me. It seems like a way to preserve our ability to use the word “consciousness” by jettisoning everything that made the concept interesting.

          It feels a bit like the pantheists who redefine “God” to mean the universe or laws of nature, or some other basic aspect of reality. It preserves their ability to keep using the word “God”, but if it’s not the personal god who answers prayers or takes care of us in an afterlife, I’m left wondering what the point is.

          Put another way, to make panpsychism interesting to me, it has to morph into a type of animism. But then it seems to become something we should be able to find evidence for. When evidence becomes irrelevant, it seems like the proposed concept does as well.

          1. The Simon and Garfunkel song “I Am a Rock” has to do with what it’s like to be a rock, in a metaphorical way. I think most people would say the lyrics trade on imagining the internal life of a rock, weird as that may sound, rather than highlighting its external properties. “A rock feels no pain.” Your intuition about “what it’s like to be a rock” as awkward wording might be an outlier. As you say, it’s amenable to empirical research.

            A concept of consciousness without perception of the environment would indeed be uninteresting, but that’s not something a panpsychist would entertain. The interest lies in the responsiveness of one thing to another, or if you prefer, in their mutual responsiveness. If ontic structural realism holds that everything is relationships, then we can go on to ask whether the relationships involve any sort of active participation. Since the relationships themselves are clearly active, we have to wonder whether activity is imposed on them externally, and if so by what, or whether activity is just their intrinsic nature as they respond to one another. An intrinsic responsiveness seems to me a promising ground for consciousness.

            Your remarks about a non-personal God remind me of Steven Weinberg in Dreams of a Final Theory. The point of talking about a non-personal God might be to muse about “top-down” aspects of the workings of the universe, but not necessarily ones obsessed with human affairs, or even especially aware of them.

  3. A quartz rock is like something that is different from a granite rock. Dunno what Nagel would say about whether this means there’s something it’s like to be a quartz rock. But it really doesn’t matter; stipulate it either way. He’d clearly say that it’s not like something for the quartz rock.

    I agree with you that metacognition is implicated here. But I feel it’s important to emphasize that it’s a pretty slim kind of metacognition that’s required. There just needs to be a separable state that represents a state that represents something about the world.

    As I understand it, Wittgenstein’s box for beetles is completely impenetrable. Neither X-rays nor gamma rays can see through it. I’d accuse him of constructing a straw man, were it not for the fact that you can always find someone who advocates anything, no matter how crazy. Still, the whole metaphor should be bypassed. Nothing to see there.

    1. I actually wonder if Nagel would agree that “like something for the organism” would necessarily imply metacognition. But I haven’t read that much from him (just his 1974 paper and a few scattered articles here and there), so I’m not sure what his stances are. The SEP entry on panpsychism implies he’s a panprotopsychist. So who knows.

      I agree that it doesn’t take a lot of metacognition. Although if the metacognition is fragmented across diverse capabilities, without even occasional unification, does that make a difference? Of course, that presupposes that humans have unified access to all our metacognition. It could conceivably be that the metacognition involved in talking about our mental states is isolated from other metacognitive functionality.

  4. What is it like to be (me)? A free-write. I see in color, perceive depth and affordances in my environment. I run to catch a baseball, always more efficiently in practice than I would have imagined in planning / theory. I feel accomplished when the ball lands in my leather glove, lined with fuzzy wool that sometimes itches when my hand sweats on a hot summer day. I remember playing baseball as a grade-schooler, always being introverted and anxious about socializing with the rest of the team. I remember being fearful when pitchers started hitting puberty and hurling at me faster than I could swing the bat. The BAT

    I’ve never read Thomas Nagel’s famous paper, but have heard it referenced many times by the likes of Sam Harris and cognitive philosophers, some of whom I read and attended lectures from in undergrad as a psychology BS graduate… Biologists / geneticists find that we share ~80% of our genes with bats, and between 25-30% with flowering plants. My thought is not scientifically or logically rigorous here, but it’s non-controversial to me that it is like something to be any animal with a biological brain. Probably only a biological body is necessary for consciousness, i.e., ‘something like-ness’.

    I’ll bet it is like something to be YOU too. Believing your experience is every bit as vivid as mine, I have no reason to assume otherwise. I’m not versed in ‘isms’ but take issue with philosophies that render the inner world null. Behaviorism, and this “Illusionism” you mention in your linked post, which purports that there is no inner subjective experience, are simultaneously patronizing and preposterous to me.

    “Oh no I’ve said too much. I haven’t said enough…” R.E.M.

    1. Thanks for your thoughts.

      It is indeed like something to be me, at least within the interpretation of “like something” you seem to be using. But that gets into what illusionists are actually denying, and this isn’t it. It’s the “like something” that can’t be communicated that they question.

      Personally, I prefer to say I’m a functionalist rather than deny other people’s conceptions. I’ll only say I don’t understand what’s even being proposed for non-functional versions of “like something”.

  5. If you hit your finger with a hammer and feel pain (Eric’s favorite example), is the pain reliable or not?

    Do I need “self-reflection” to feel the pain?

    What is “reliable” access to our own states? How is it different from “unreliable” access of our states?

    1. On pain, you’re talking to someone who’s had hard-to-identify dental pain for years. Is that pain reliable? Doesn’t seem very reliable to me.

      Reliable access to our own states means that we can consistently understand them. Modern psychology seems to imply that’s a much shakier proposition than most people want to accept.

      1. I think most people can reliably access their own states. We walk about without running into things. We understand a bright blue sky means rain is not imminent. We put on warm clothes when it is cold, less clothes when it is warm. We have a good sense of where we are most of the time. All of these actions and understandings result from our mental states.

        You either seem to want to look at exceptions, or seem to be trying to make a larger claim, but it’s never been clear to me what it is.

        1. A lot of progress in neuroscience has come from looking at the exceptions, particularly people with rare brain injuries. It’s the exceptions that stress test our ideas. Of course, for people who cherish those ideas, that’s typically not welcome. But I think the history of science shows that holding too tightly to any idea is often an obstacle to progress.

          I think I’ve been pretty consistent over the years in pointing out that introspection is as fallible as any other type of perception. And that we should accordingly adjust our credence in notions we only hold due to introspection.

          1. Stating that in the glass half-full manner: introspection is as reliable as any other type of perception.

            But my suspicion is that only certain thoughts and ideas reached through introspection bother you. If it really boils down to a philosophical disagreement, likely all sides are wrong and fooled by their own introspection.

          2. On introspection and other perceptions, people are usually okay admitting the limitations of perception. (Brain-in-a-vat type scenarios have been around for centuries, going back at least to Descartes’ evil demon.) It’s inner perception they usually dig in on.

            I’d say any ideas based solely on introspection are likely all wrong to one degree or another. Ideas based solely on third person data (like behaviorism) seem to ignore too much of the problem. Ideas based on the intersections or reconciliations of the two seem on firmer ground.

          3. I would think the default understanding of “inner” in “inner perception” refers to the target of the perception rather than where the perception happens.

  6. So this is precisely where my understanding of consciousness kicks in. If we accept that pattern recognition is the fundamental basis of consciousness, we can give a definite explanation for all this. Here goes:

    You can say all matter does pattern recognition. The rock “recognizes” the pattern of being kicked by redistributing the energy among its molecules, etc. If this pattern recognition is enough for consciousness, you get panpsychism. Otherwise you get panprotopsychism, which is where I am.

    You can say consciousness requires pattern recognition which is coupled to an output which moves the world toward a goal state. This is cybernetics and includes all life, the goal state being the existence of life, but this also includes Bénard cells(?), vortexes (tornadoes), rivers, etc.

    You can say consciousness requires pattern recognition which is “communicated”. This is cybernetics with the addition that the immediate output of the recognition is a symbolic sign (to use Peirce’s jargon). The sole purpose of a symbolic sign is to carry information, specifically, the information that the pattern was recognized. The physical form of the sign is arbitrary, as long as there is coordination between the recognition mechanism and the response mechanism. Communication first appears in life, mostly because it is sometimes useful for the response to a pattern to happen at some distance from the recognition mechanism. For example, in bacterial chemotaxis, a cell surface receptor may recognize a sugar on the outside and cause a signal molecule to be produced inside the cell. This signal can then reach the back end of the cell where the flagellar motor can respond appropriately. This is the first level where information is key, and you can start talking about “aboutness” and “likeness”. One experience (pattern recognition) is “like” another to the extent that it carries (approximately) the same information and is coupled to a similar response. A system’s Umwelt is the set of all patterns it can recognize. This is where I put the smallest unit of consciousness, the Psychule.
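
    As a minimal sketch of that chemotaxis loop (a toy model in Python; the names and numbers are invented for illustration), note that the signal token’s form is arbitrary, and only the coordination between recognizer and responder gives it meaning:

    [CODE]
    # Toy chemotaxis: recognition at the front, response at the back,
    # coupled only by an arbitrary signal token.

    SIGNAL = "token_42"  # physical form is arbitrary; coordination is what matters

    def surface_receptor(outside_sugar):
        """Recognize the pattern (sugar present) and emit the signal, or nothing."""
        return SIGNAL if outside_sugar > 0.5 else None

    def flagellar_motor(signal):
        """Respond at a distance, seeing only the signal, never the sugar."""
        return "run" if signal == SIGNAL else "tumble"

    print(flagellar_motor(surface_receptor(0.8)))  # -> run
    print(flagellar_motor(surface_receptor(0.1)))  # -> tumble
    [/CODE]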

    You can say consciousness requires a system with more than one identifiable goal, and those goals are potentially competing. For example, a bacterium wants to move toward a food source and away from a toxin source. Given two competing signals, the system has to make a choice as to how to respond. Here the system is making a choice based on information.

    *

    [gonna stop this reply here, but I’m going to continue in another reply, where I get meta …]

      1. Yeah, I wish WordPress would put more attention into their commenting system. If they ever give us a way to enable comment editing (for someone other than the site owner), I’ll definitely turn it on.

  7. I like the “what it’s like” acronym. Nice coincidence there. 🙂

    Well, you know where I stand on many of these issues. The only addition I have to that is, I’m not so sure metacognition or self-reflection is necessary for WIL. That seems to me to reflect a human bias, one that assumes cognition or intelligence is required for agency, desire, experience in general. But I’m not sure about that. Unless I just don’t understand what you mean by those terms.

    1. I can’t claim originality on the WIL acronym. It’s been used in Twitter debates for years.

      I wondered if anyone would question the metacognition implication. I don’t spend a lot of time on it in the post. But I wonder how it can be present in the way Nagel discussed it without metacognition. Admittedly, I’m looking at it with an expectation of some kind of causal processes.

      I actually think the whole WIL concept is unavoidably anthropomorphic. It may be that our projecting it onto other organisms and systems is just that, projection. As I noted in the post, bats may have limited metacognition, but whether it amounts to what most of us mean by “what it’s like” seems like a major assumption. I think we’ll learn more as the science progresses, although conceptual hang-ups may delay understanding of what’s being found.

  8. [comment continued …]

    With cooperation between cells we get the need to communicate pattern recognitions between nearby cells, and with multicellular life we get the need to communicate recognitions between particular subsets of cells. Thus we get bioelectric communication (see Michael Levin’s work) and then we get the cell whose sole purpose is to communicate pattern recognitions to specific other cells: i.e. the neuron.

    You can say consciousness requires the ability to recognize new patterns, i.e., to learn. One way to learn is by simple association: firing together so wiring together. But sometimes learning requires a delayed response. New pattern recognitions are created by variation and selection. A pattern recognition (A) is coordinated to a response (A1), and later another pattern recognition determines that “things are bad (there is too much variation from a goal state)”, but the response to this recognition has no way to determine what previous recognition (A) caused it, so instead it sends a systemic response (like serotonin?) such that any recent recognitions (e.g., A) are more likely to change their responses, say to A2. These systemic responses are likely to have a whole raft of internal effects, possibly leading to a whole raft of new interoceptions (recognitions of internal patterns). Usually these sets of interoceptive states come in groups, which we call emotions.

    You can say consciousness requires recognition of patterns within sets of previously recognized patterns, i.e., meta pattern recognition. This becomes important when a system needs to distinguish between something it might want to hunt (deer?) and something it probably doesn’t want to hunt (bear?). [warning, some educated speculation here] Mammals (at least) have developed a generic system in the neocortex for the purpose of recognizing patterns generated from a generic template in the thalamus. Sensory signals come to the thalamus and generate a pattern there, which gets transmitted to a part of the cortex (macrocolumn?). Units in the cortex (unitrackers, minicolumns) recognize patterns in that input and send multiple outputs, including motor or internal actions as well as feedback, but also including feedforward to a new section of thalamus, which transmits *those* patterns to a new section of cortex, which continues the process. At some point there will be a highest level for which there is no more thalamus to send to, and for which all responses are essentially motor, systemic, or feedback to earlier parts.

    Finally (for now, but yea!), you can say consciousness requires recognition of abstract concepts, such as self, mind/intentions, etc. Or possibly you can say consciousness requires the ability to not only recognize abstract concepts but to be able to assign arbitrary labels to them, so …, language.

    Bottom line: it’s pattern recognition all the way down. We can recognize the mechanisms involved in another system, explain what the patterns recognized are, and how and why they are coupled to specific responses. But we can’t take the informational perspective, the subjectiveness, of that system. As you say, we can’t “have” that experience. We can only speculate which of our own informational perspectives is most like theirs.

    *

    [prolly gonna copy these replies and post ‘em on my blog]

    1. There’s a lot here. Definitely you should put it together in a blog post. Having it all in one of my comment threads won’t get it the coverage it deserves.

      I think pattern recognition is definitely part of the answer. Although I prefer to call it “prediction”. I like prediction better because it covers model-free (or very spartan model) responses, and the gradual development of increasingly sophisticated patterns.

      I don’t buy a kicked rock doing pattern completion. I mean, I’m sure if I twisted around and looked at it askance enough, I could work up a way to talk about it in that way. But all that twisting feels forced to me. I guess I’m just not panpsychist (or panprotopsychist) material.

      But while pattern completion/inference/anticipation/prediction is a crucial component, I don’t think it’s sufficient for most people’s intuition of a conscious system. Of course, intuitions vary, and as I’ve said many times, consciousness is in the eye of the beholder. I just think most beholders need the higher-level organizations you discuss as we go up in scale.

      So I’d say it’s covering something important. I just think trying to reduce consciousness to one thing is always going to be forced. It strikes me as an unavoidably complex phenomenon. At least for now.

      1. Quick comment 1: The key to the Psychule is information. You don’t have to worry about looking at the rock sideways to see pattern recognition. That part’s just to appease panpsychists somewhat. The panprotopsychist property is the generation of (mutual) information. Information is the affordance for pattern recognition for a purpose.

        Quick comment 2: I appreciate simple pattern recognition does not match people’s intuition about consciousness, but then a simple H2O molecule would not meet people’s intuition about rain. Thus the Psychule. People gotta fix their intuitions if they want to understand what’s going on.

        Discussion comment: People’s intuitions of consciousness are based on the subjective perspective of the highest recursive level of the most complex form of it created so far. So one of the things I wanted to point out in my response was that the exact same mechanisms (cortical minicolumns as unitrackers) for recognizing edges and textures in the visual cortex are used iteratively to recognize whole objects (human), and then specific examples of objects (Jennifer Aniston), and then relations of specific objects (cast of Friends), and abstract relations (actor, “Brannifer”), and then personal goals (“I wanna be famous like …”), and so on. Once you see the building blocks, you can begin to understand where ideas like “what it’s like”, Integrated Information, Global Workspace, Attention Schema, Higher Order Thought, etc. all come from.
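
        As a crude sketch of that iteration (entirely my own toy in Python, with made-up feature names, not a neural model): the same recognize-a-pattern-of-lower-recognitions step just gets reapplied level after level.

        [CODE]
        # The same mechanism reused at every level: a unitracker fires when
        # its required set of lower-level recognitions is present.

        def make_unitracker(required):
            """Return a recognizer that fires iff all its required inputs fired."""
            return lambda active: required <= active  # subset test

        edge = make_unitracker({"dark_pixel", "light_pixel"})
        face = make_unitracker({"edge", "eye", "mouth"})
        jennifer = make_unitracker({"face", "familiar_hair", "familiar_voice"})

        active = {"dark_pixel", "light_pixel", "eye", "mouth",
                  "familiar_hair", "familiar_voice"}
        for name, tracker in [("edge", edge), ("face", face), ("jennifer", jennifer)]:
            if tracker(active):
                active.add(name)  # recognition feeds forward to the next level
        print("jennifer" in active)  # -> True
        [/CODE]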

        *

        [c’mon, drink the coolaid… 🙂 ]

        1. On coolaid, you should know me better by now. 🙂

          You call yourself a panprotopsychist, but I think you’re actually a physicalist, identifying building blocks of consciousness in information processing. I’m onboard with that project. But calling those building blocks themselves “conscious” to me is just deflating the term to the point where its usefulness is questionable.

          BTW, to be a panprotopsychist, in Chalmers’ original conception, is to believe that phenomenal properties can be logically reduced to protophenomenal properties. But protophenomenal properties cannot be logically reduced to anything physical. I don’t detect that stance from what you’re describing. Although maybe you do have it somewhere and I’m just missing it.

          One might worry that any non-panpsychist materialism will be a form of panprotopsychism. After all, non-panpsychist materialism entails that microphysical properties are not phenomenal properties and that they collectively constitute phenomenal properties. This is an undesirable result. The thought behind panprotopsychism is that protophenomenal properties are special properties with an especially close connection to phenomenal properties. To handle this, one can unpack the appeal to specialness in the definition by requiring that (i) protophenomenal properties are distinct from structural properties and (ii) that there is an a priori entailment from truths about protophenomenal properties (perhaps along with structural properties) to truths about the phenomenal properties that they constitute. This excludes ordinary type-A materialism (which grounds phenomenal properties in structural properties) and type-B materialism (which invokes an a posteriori necessary connection).

          https://consc.net/papers/panpsychism.pdf

          1. Again, the key is (mutual) information. This is the protophenomenal property which can’t be reduced to the micro-physics. This is the protophenomenal property which provides “aboutness”, and “likeness”. This is the property which becomes a phenomenal property when it is *used* for a *purpose*. If this doesn’t match Chalmers’ idea of protophenomenal, that’s too bad for Chalmers, because there isn’t a better alternative word for it.

            So yes, I’m a physicalist and a computational functionalist (computation = information processing) and a panprotopsychist. Information is not a physical property, but it has a physical explanation, and it’s there in every physical thing, and it’s a necessary component of consciousness.

            Oh, and consciousness isn’t a property of pattern recognition mechanisms. Consciousness is a property of a system that creates/uses pattern recognition mechanisms. The creation and use of pattern recognition establishes the property of consciousness.

            Is that better?

            *

          2. On information, I’d point out that if you remove all physical instantiations of information, it doesn’t exist anymore (at least in this universe). If you alter some of those physical instantiations, then you’ve altered at least those instances of the information. If you alter all instances, then you’ve effectively changed the information. And any such alterations take energy. It’s why data processing centers require vast amounts of energy and the brain is the most energy-expensive organ in the body.

            So I’d say any information anyone can actually interact with is physical. We could talk about platonic information. At least in contemporary platonism, that’s acausal without any spatiotemporal extent. But it seems like we can deny the existence of platonic objects with no cost.

            And information seems structured by default. To Chalmers’ point in the quote above, that makes a view on which the building blocks of phenomenal properties are information seem inherently like what he calls type-A materialism. You could make his move, and talk about information having a “double aspect” nature, a physical one and a phenomenal one. I’ve wondered myself if this amounts to anything other than a verbal preference.

            But as an analytic functionalist, I’m not satisfied with a brute relation between information and those properties (whether proto or the full thing). I want them logically related to each other, and I don’t see any obstacles in principle to doing it. Admittedly that involves defining “phenomenal” functionally. But for me, it’s the only way I know to understand it.

          3. Hmmm. There’s a long discussion to be had here about what I mean by “information” and “information processing”. It’s why my first reference to it is always “(mutual) information”. Mutual information is a property of some physical thing, but it’s a relational property relating to another physical thing, and that other physical thing may or may not currently exist. So yes, if you take away the physical thing there is no information to talk about.

            And any physical interaction which changes a thing will change the information in that thing, but it does so by adding new information to it without (necessarily) removing the old information. Again, we’re talking about mutual information. In fact, any physical thing will have some degree of mutual information with *every* physical thing in its causal history. That’s why you can’t really talk about what information is there until you specify what it is used *for*. This gets complicated, but I’d be happy to get into it. (BTW, the thing that uses all that energy in the data centers is *erasing* information.)
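
            For reference, the measure I have in mind is nothing exotic, just Shannon’s mutual information:

            I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}

            which is zero exactly when the two things are statistically independent, and grows as knowing one tells you more about the other.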

            So I’m not sure what you need explained. There is a double aspect of any system which processes information: the objective and the subjective/informational, and I assume the “phenomenal” refers to the latter. Am I missing something?

          4. Information as a subject is definitely a rabbit hole, one you and I have gone down a few times. I’m onboard with much of what you say. Although your point about the relation not always existing is why I consider physical information to be causation, or a snapshot of causal processes, or maybe potential causation.

            For example, a new gene that has just mutated into existence will have causal effects in the new proteins it leads to, but initially the protein side of that relationship doesn’t exist, so it starts off as just a causal mechanism. Of course, when agents come along who understand its causal effects, it is semantic information for them.

            “(BTW, the thing that uses all that energy in the data centers is *erasing* information.)”

            I wonder what you mean by this. It seems like all bit switching requires energy. Erasing is just a particular type of bit switching. It’s the processing and the need to deal with its waste heat that are usually noted as major factors.

            On double aspect theory, if one aspect is structure and relations but the other isn’t, I would need the idea of something non-relational explained to me. To be honest, I don’t think it can be. I suspect the idea is a conceptual muddle. But maybe I’m wrong. Lots of people assert that. The difficulty seems to be getting them to provide details.

          5. [for the point about erasing energy, I was referring to Landauer’s principle, see https://en.wikipedia.org/wiki/Landauer%27s_principle, but that stuff is beyond my pay grade and not pertinent to our discussion, i think]
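
            [If I’m reading it right, the headline result is that erasing one bit must dissipate at least E = k_B · T · ln 2, roughly 2.9 × 10⁻²¹ joules at room temperature (300 K).]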

            I’ll say I tend to gag when you say information is causation, but I’m good if you say information is a snapshot of causation. For me, causation is actual physical interaction, and causation determines mutual information, but isn’t identical to it.

            Also, mutual information is another phrase for correlation, and I think it’s perfectly acceptable for correlation to be relative to a future state. If I release a glass at head height above a concrete floor, there is a high correlation with a future state of broken glass. Same for your gene example.
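
            The glass example can even be put in simulation form; a throwaway sketch (all the probabilities are invented for illustration):

            ```python
            import random

            random.seed(0)

            # A = "glass released at head height now", B = "glass broken one
            # step later". Correlation with a *future* state just means
            # P(B|A) differs from the baseline P(B).
            trials = 10_000
            released_and_broken = released_total = broken_total = 0
            for _ in range(trials):
                released = random.random() < 0.5
                # Released glasses almost always break; unreleased ones rarely do.
                broken = random.random() < (0.95 if released else 0.02)
                released_and_broken += released and broken
                released_total += released
                broken_total += broken

            print("P(B)   =", broken_total / trials)                 # baseline, ~0.49
            print("P(B|A) =", released_and_broken / released_total)  # ~0.95
            ```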

            On the double aspect theory, I’m not sure how you are using the term “relational”, so I’m not sure if we’re agreeing or not. I’m essentially referring to the hardware/software divide. A system that processes information the way I’m considering will necessarily have an informational (software) subjective perspective. Its Umwelt will be the set of patterns it recognizes, and this perspective will be divorced from the actual physical hardware doing the processing. The perspective may be very simple. If your only tool is a hammer, everything looks like a nail. But if your set of patterns includes recursive recognition of patterns of patterns of patterns … , then your Umwelt will be very rich, but it will still be the software perspective.

            So what am I missing?

          6. On information and causation, maybe it would help if I said “inform” is to “cause” (verb version) as “information” is to “causal snapshot.” But I’m onboard with correlations. And there are always correlations. Any interaction leads to them. (I guess we could have an interaction that exactly reverses a previous one, erasing the correlation, but then that would lead to other correlations elsewhere.)

            We can imagine a scenario where a gene just mutated into existence, hasn’t had a chance to result in its new protein, but something destroys it before that happens. In that case the correlation doesn’t exist, even in time. Although I guess other correlations would exist from the destruction (itself a complex of interactions).

            I’m not using “relational” in any non-standard way, at least not intentionally. To be precise, maybe in the second sense in this Merriam entry: https://www.merriam-webster.com/dictionary/relations

            The hardware/software divide is, I think, tricky when talking about brains. People almost always mess up the move. I’ll just point out here that software is a configuration of hardware. Anything done in software could in principle be done in hardware instead. (Which is what happens when we use special electronics for graphics processing and other specialized work.)

            Would you see an Umwelt as including the patterns that can be learned as well, or are you envisioning something hardwired? If the latter, wouldn’t the learning be an addition to the pattern completion you’ve been discussing?

          7. On correlations: correlations are a probabilistic thing and they reference a time point. To say A is correlated w/ B is to say “if you measure A now, then the probability of measuring B at time n (past, present, or future) would be X.”

            On hardware/software: I was afraid of using that phrasing for this reason. You say “… software is a configuration of hardware. Anything done in software could in principle be done in hardware instead. …” The way I’m using it, software is *not* a configuration of hardware. “Software” is a useful abstract, hardware-agnostic description of what hardware is (physically) doing. Nothing can be *done* in software. And the same abstract description could apply to a completely different organization of hardware, as long as the significant correlations are maintained.

            The problem comes in when you see “software” and think “computer program.” You describe how a written program could be implemented physically (with a system of buckets of water, etc.?). You’re using “software” to mean the program, where I’m using it to mean the description of what both those systems, the computer and the buckets, are doing.

            So let me know what term I should be using instead of “software”, and let me know what still needs explaining.

          8. On correlations, that sounds like an epistemic description. I’d say to hold it requires either an understanding of the shared causal history of A and B, or enough past observations to have established a statistical relationship between them (which implies a shared causal history maybe not understood yet).

            ““Software” is a useful abstract, hardware agnostic, description of what hardware is (physically) doing.”

            Yeah, “software” is ambiguous. In technology it can refer to anything from machine language (which is tied to particular hardware) to high-level language code compiled to machine language or running in an interpreter (a type of virtual machine), to code running in actual virtual machines. I’m not sure trying to use “software” outside of these contexts is going to communicate what you’re trying to get at.

            For that I usually use “functionality.” But even with that, we have to be careful about importing too many concepts from software engineering. For example, the idea of data being communicated between different system components doesn’t really translate well in connectionist neural networks, where it seems to make more sense to talk about the effects one system has on the other.

            Umwelt usually refers to the environment as experienced by an organism, which typically includes what it can perceive or learn to perceive with its sensory capabilities. Which seems broader than the way you’re using it.

      2. The problem I have with “functionality” is that it doesn’t obviously include the concept of information processing. The role of processing (mutual) information is key to understanding the role of pattern recognition in consciousness. It’s the only way to understand that communicating via neurotransmitter over here is significantly different from the same type of communication with the same neurotransmitter over there. Information is absolutely being communicated in neural networks, and understanding how and why is the key to the whole shebang [I think].

          1. I’m using “information processing” as a process which has a component whose sole purpose is to carry information, which component could be arbitrarily replaced by a different physical component as long as it carries the same information. In computers it’s the voltage in a transistor. In the brain it’s the neurotransmitter (and possibly some other things).

            So the go-to example for a system with functionality but not necessarily information processing would be the Watt governor. All of the components require specific physical interactions.

            Strictly speaking, you can describe any physical interaction as information processing. But that way lies panpsychism, if so desired.

        1. It seems like a Watt governor is doing information processing. We could replace it with a computer-controlled valve that does the same thing, that is, performs the same function, has the same truth table in effect.
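
          As a toy illustration of “same function, different physics” (the control law and numbers here are invented, not a model of a real governor):

          ```python
          # The abstract job of a governor: open the steam valve when the
          # engine runs slow, close it as the engine runs fast. A flyball
          # governor does this with spinning weights; a digital controller
          # does it with a sensor and an if-statement. The input/output
          # mapping below is what the two realizations share.

          def valve_opening(speed_rpm: float, target_rpm: float = 100.0) -> float:
              """Opening in [0, 1], shrinking as speed rises past the target."""
              error = (target_rpm - speed_rpm) / target_rpm
              return min(1.0, max(0.0, 0.5 + error))  # proportional control, clamped

          for rpm in (60, 100, 140):
              print(rpm, round(valve_opening(rpm), 2))  # 0.9, 0.5, 0.1
          ```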

          Rather than describe all interactions as information processing, I prefer to describe information processing as interactions. That admits that our current engineered systems, particularly digital ones, probably don’t have all the causal forms down yet.

          I don’t think it leads to panpsychism, but it does imply a limited form of pancomputationalism. But I realize using your definition of “consciousness” that may be a distinction without a difference.

          1. My question is whether the Watt governor is an information processing system in the sense I have defined: “I’m using ‘information processing’ as a process which has a component whose sole purpose is to carry information, which component could be arbitrarily replaced by a different physical component as long as it carries the same information.”

            Which component of the Watt governor can be arbitrarily replaced with a single physical component with completely different physical properties (as opposed to replacing with a subsystem)?

            Maybe a better way to say this is to say an information processing system makes use of Peirce’s “symbolic sign”, but I’m not sure how familiar you are w/ Peircian semiotics.

          2. I thought I noted a way the governor could be replaced above. If we’re talking about the components of the governor itself, then probably not. But then while a bit can be implemented in a number of ways, I don’t think we can arbitrarily replace the components of a transistor, vacuum tube, or mechanical switch, or mix and match between them, at least not in any practical sense.

            I have to admit that every time I try to understand semiotics I bounce off of it. Do you know of a source that eases into the forest of terminology?

          3. [think I sprained my brain … but I’m back …]

            Regarding an information processing system: a transistor is not so much a system as a component of a system, so in theory you could replace a given transistor with the vacuum tube equivalent as long as the informational result is the same. And you could diagram the process as using a logic gate w/o specifying what your logic gate is made of.
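
            For instance, a logic gate can be written down as nothing but an input/output mapping, with the substrate (transistors, tubes, relays) appearing nowhere in the description. A minimal sketch:

            ```python
            # NAND as a pure mapping; any physical realization that produces
            # this table counts as "the same gate" informationally.
            NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

            def xor(a: int, b: int) -> int:
                """XOR built from four NANDs; works for any NAND realization."""
                n1 = NAND[(a, b)]
                return NAND[(NAND[(a, n1)], NAND[(b, n1)])]

            for a in (0, 1):
                for b in (0, 1):
                    print(a, b, "->", xor(a, b))  # 1 only when a != b
            ```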

            Would it help to instead use a standard thermostat versus digital thermostat as an example?

            [can’t really help you with semiotics … I just picked up the basics and then gave up on details. Too many idiosyncrasies, like Peirce’s obsession with things coming in threes.]

          4. [I know what you mean. My own brain is pretty fried tonight.]

            On replacing a logic gate, right. But my point was that you can replace the whole governor in the same manner. And if we get into components of the governor, to be consistent we also have to get into the components of the logic gate. You can’t replace components of a transistor circuit with the components of a vacuum tube circuit. There’s always an implementation level that isn’t multiply realizable, even if the higher levels are.

            [That’s my issue with semiotics. Every time I try to read about it, I get an avalanche of terminology with abstract meaning I have trouble fitting together. Ok, maybe I’ll find a decent guide to it at some point.]

  9. I have this mental image now of a bunch of bats, hanging upside down in their cave, chittering at each other about what it’s “really” like being themselves. I kind of want to go draw this now.

    1. Thanks. I saw it. Convergent evolution of intelligence. We already knew it was possible due to the cephalopods, octopuses in particular, but knowing it happened elsewhere strengthens the idea.

      Of course, mammalian levels of intelligence and sapient levels are different things.

      1. [haven’t read the article yet, but …] I am so looking forward to mapping out the corvid brain structures to see if they correlate with cortical minicolumns (unitrackers) and something analogous to the cortex/thalamus interaction.

          1. Try this:

            https://www.science.org/doi/10.1126/science.abc5534

            “Whereas mammalian cognition emerges from the canonical circuits of the six-layered neocortex, the avian forebrain seems to display a simple nuclear organization.”

            The study also suggests a common evolutionary origin: an “ancient microcircuit” that became elaborated differently in mammals and birds.

            “Our findings suggest that it is likely that an ancient microcircuit that already existed in the last common stem amniote might have been evolutionarily conserved and partly modified in birds and mammals. The avian version of this connectivity blueprint could conceivably generate computational properties reminiscent of the neocortex and would thus provide a neurobiological explanation for the comparable and outstanding perceptual and cognitive feats that occur in both taxa.”

            The difference extends to the hippocampus and hippocampal analogs where birds also lack the layered structure that mammals have. Mammals may be outliers with the layered structure in the cortex. I think (not 100% sure) that arthropods and cephalopods have structures more like birds.
