Philosophical semanticism

This week, while working through my podcast backlog, I came across an interview with Jacy Reese Anthis. We discussed Anthis’ paper on consciousness semanticism a few months ago. Like me, Anthis sees “consciousness” as an ambiguous term, one that has had a variety of meanings over the centuries, and continues to have a range of meanings for different people today.

Consequently, asking whether a particular animal or system is conscious, as though consciousness is a definite property that is present or absent, is meaningless, at least with the most common ambiguous definitions of “consciousness”. Anthis’ takeaway from this is that consciousness, as commonly understood, doesn’t exist. He’s an eliminativist and strong illusionist.

My own takeaway is that while some versions of consciousness don’t exist, others do, particularly in terms of functional capabilities like perception, attention, learning, deliberation, or self reflection. But then my inclinations are generally more reconstructionist and weak illusionist.

Anyway, the interview spurred me to reread his paper, and check out his website, including a blog post covering his stance on “the big questions”. One thing I didn’t fully appreciate on the first reading of the paper is that he uses the semanticism approach for much more than consciousness.

From his blog post:

The best approach to virtually all problems in contemporary philosophy is to treat them as pseudo-problems by making their semantics precise then delineating the often straightforward solutions to each precisification.

This is similar to my own approach to dealing with philosophical questions, although I never thought to give the strategy a name like “semanticism”. I’m not sure it renders “virtually all” problems in philosophy “pseudo-problems”, but clarifying definitions does seem to make many problems much more tractable.

So when we ask whether fish are conscious, it helps if we specify whether by “conscious” we mean perception, attention, and reinforcement learning, or something more demanding such as episodic memory and self reflection, or even something non-physical distinct from any functionality. Or in a discussion on free will, it seems to matter whether we mean a will free to use foresight to select which desires to inhibit or indulge, or a will that is, to some extent, free of the laws of physics.

Failing to make these kinds of clarifications often leads to what David Chalmers calls verbal disputes, where the participants are talking past each other with different definitions. In a straight verbal dispute, the participants agree on all the facts, but not on the meaning of key terms. In other cases, there may be actual fact-of-the-matter differences, but their nature is clouded by the different semantics.

In my experience these verbal disputes are so common that it makes sense, anytime a philosophical question comes up, to clarify our terms.

It’s a bad tactic, I think, to simply insist on a particular definition for a concept. It seems more productive to admit that there are multiple meanings at hand and label each one. And then to address one or more of those meanings. Usually, as Anthis notes, the more precise meanings are easier to deal with than the initial ambiguous one. It ends up being a divide and conquer strategy.

Often this move is resisted. Sometimes it comes from epistemic caution. Someone may want to stick with the initial ambiguous term because any of the more precise ones involve theoretical commitments they feel are premature. They may simply not find any of the current precise options worth considering and want to keep the door open to additional alternatives. In other cases, there may simply be resistance to ceding definitional ground to any alternate versions.

In these cases, it seems crucial to avoid the bad tactic. Acknowledging each version with its own label may make that version’s partisans more likely to tolerate the alternatives, at least for purposes of discussion. And acknowledging that the listed options may not exhaust all the possibilities can avoid giving the impression that accepting them is premature.

Of course, sometimes this type of analysis is simply rejected out of hand, or ignored. The more grounded and defensible versions of a concept often provide rhetorical cover for the more dubious ones, making intense partisans of the dubious versions hostile to clarification. It’s difficult to have productive conversations with someone in this mindset.

A broader source of aversion may be that this approach often seems to lead to eliminativism toward the original concept. But eliminativism is always a judgment call. As Chalmers noted, when we discover that the scientific image of a concept is different from the manifest one, we always have options, which do include eliminativism, but also reconstruction of the original concept.

Most of us are eliminativist toward terms like “ghost” or “fairy”. We’ve judged that those concepts no longer do any real work, except maybe in metaphors. On the other hand, our notion of what stars and planets are is radically different from the pre-modern one, yet we’ve held on to the concepts because the words “star” and “planet” continue to do work. We’ve reconstructed “star” and “planet” to mean something different.

Do terms like “consciousness”, “free will”, “religion”, or “morality” continue to do work? I think an argument can be made that they do, that we’d have to find alternate terminology to describe what they mean, at least in their more grounded forms. Of course, the more grounded forms usually aren’t the ones with deep metaphysical mysteries. And it is the ones associated with those mysteries that are often the most vulnerable to the eliminativist conclusion.

Still, it’s hard to see that we’re going wrong when we’re clarifying things. It seems like the whole point of clarifications is to give us a perch for a possible revision of our views.

But, as always, I may be missing something. Are there issues with the semanticist strategy that I’m not seeing? Or better strategies we should consider instead?


44 thoughts on “Philosophical semanticism”

  1. This is a good one. Well crafted and presented in logical, substantiated fashion.

    Words and their meanings, eh? Imagine early dictionaries.

    “First entry: ‘a’.”
    “What’s it mean?”
    “Well, I’d say it stands for an instance of something.”
    “An instance of what?”
    “Anything, I suppose.”
    “Why can’t we just skip it and use the name of the thing?”
    “What do you mean?”
    “‘Bucket sits empty.'”
    “What bucket? All buckets? The bucket named ‘Bucket’?”
    “That bucket.” (points)
    “What if I want to mean not a specific bucket? Some other bucket. An abstract bucket.”
    “How can a bucket be abstract?”
    “Let’s move on shall we? Second entry: ‘be’.”
    “You mean like, ‘to be’ or ‘be the bucket’ or ‘be ware the bucket’, or maybe, ‘there’s a be in my bucket’?”
    SIGH

    I like your list of the components of consciousness. Reductionism is a requirement for all conversation (like the above stupid example), is it not? Let’s agree on the code we’ll be using to assemble higher level concepts that we’ll use to build our arguments. We do this until we get too high and the pillars of our components become, themselves, contended. “Consciousness,” I’d say, has reached this precarious height.

    But, as you mention, it’s still useful as a catch-all for the broad communication of the theory in general.

    Beware the bucket called Consciousness.


    1. Thanks.

      You made me look up the definition of “a” out of curiosity. Wow.
      https://www.merriam-webster.com/dictionary/a

      “Bucket” is a good way to describe it. I’ve also seen “grab bag”, as in a grab bag of capabilities we’ve grouped together and given a name, and treat as something special, because they’re the ones we possess. Although in recent years we’re becoming a bit more inclusive about who’s in the club.


  2. I guess my problem with illusionists/eliminativists is this. They tend to say things that I agree with, such as that we should have clear definitions in order to comment intelligently, but contradict this theme by calling themselves “illusionists” and “eliminativists” regarding consciousness — this seems to ironically beg for misinterpretation. You’ve said similar things as well, Mike. Effective reductions should be needed to right things here. Schwitzgebel’s illusionist-sanctioned definition of consciousness would be one such reduction that they might support, though they instead seem to ignore it. Furthermore they might propose something like my first principle of epistemology. (It’s that there are no true/false definitions but rather only more and less useful ones in a given context. This should obligate a reviewer to accept both explicit and implicit definitions in the quest to understand what’s being said before replying.) Reductions should only be effective if illusionists/eliminativists want to help scientists do their jobs better, however. Otherwise I support them continuing on with philosophy as a sport to play and witness for its own sake.

    At the end of the interview with the yes/no questions, I wish Anthis had been asked whether or not Searle’s hypothetical Chinese room would phenomenally understand Chinese. My take is that Dennett would say “yes”, even though naturalists may quite validly be illusionists about that conception of consciousness. Anthis did mention Dennett favorably, which makes sense since Dennett seems to be the founder of his position. But I wonder if a much younger person like him could be shamed out of a position which is technically inconsistent with illusionism/eliminativism. I doubt that Dennett could be so shamed.


    1. In fairness to the illusionists, most of them, including Dennett and Frankish, are clear they think consciousness exists. It’s only certain types of consciousness they deny. (Frankish points out that dualists deny his version of consciousness.) Anthis leans a bit harder into eliminativist terminology, but even he’s clear that consciousness as self-reference exists.

      And as I’ve noted before, I think the difference between a strong illusionist, a weak illusionist, and a reductive physicalist amounts to differing definitions, a difference in emphasis and communication tactics. Granted, not everyone agrees.

      I do think the illusionists could be clearer in many cases, but they seem clear enough that the constant strawmanning is based on something else. It seems like this view really unnerves people, and they feel the need to lash out at any expression of it. (To be fair, panpsychists get their share of this kind of visceral reaction in the other direction.)

      I’m onboard with accepting someone’s explicit definition, although with the caveat I noted in the post, that it’s a lot easier to do that if the person acknowledges that’s not the only definition. On implicit definitions, maybe if it’s for a common definition already in circulation. If it’s unique to whoever is discussing it, I think the burden is on them to be clear about how they’re using the term.

      I haven’t seen Anthis address the Chinese room, but you can get an idea based on the credence he gives for AGI: https://jacyanthis.com/big-questions#artificial-intelligence
      Dennett has been pretty consistent that he’s onboard with the systems reply. He’s not the founder of these kinds of views; they go back to people like Wilfrid Sellars, W.V.O. Quine, and others in the 1950s and 60s. And I noted Georges Rey’s 1983 paper in the last post. It is true that Dennett has been the chief champion in recent decades.


      1. Yes Mike, that chart does seem to illustrate that Anthis considers Searle wrong. Much of what he puts high credence in appears spooky to me. I suppose the worst is that he’s 69% sure that we do not exist as experiencers of existence directly, but rather by means of a simulation of existing. Wow.

        Looking over Searle’s rejoinder to the systems reply (https://iep.utm.edu/chinese-room-argument/#SH2a), I think he blundered there. Instead of changing his thought experiment (which in a sense legitimized this reply as at least rhetorically a way out), I’d say he should have left his thought experiment unaltered and asserted that a systems answer is exactly what’s being assessed here. Would such a machine create something that phenomenally understands Chinese by means of worldly causal dynamics, or rather by means of magic?

        My own thought experiment is far more direct than Searle’s. I don’t know how a non-magical solution could practically be proposed. If inscribed paper that’s scanned and printed into the proper second set of inscribed paper were to causally create something that experiences thumb pain, then why? How? What exactly would have such an experience? But if that second set of inscribed paper were fed into a machine which used the printed information to create something, like an associated electromagnetic field, then causality might still be preserved. Here the experiencer would theoretically exist as that created field. In fact all phenomenal existence could be said to causally exist this way, such as an understander of Chinese (which should be ridiculously more involved than mere thumb pain!).

        I think I once read on your blog that a survey was done in which 75% of modern philosophers believe that a Chinese room would not understand Chinese. Given how popular Dennett and others happen to be, this shocked me. It suggests that philosophers in general grasp and agree with the point of Searle’s thought experiment, even though it’s effectively ignored. It could be that only the empirical validation of a theory like McFadden’s will end the popularity of informationism among people who are otherwise strong naturalists. Still I’d love to see what would happen if my thumb pain thought experiment were widely considered. The hope is that by countering this unfalsifiable status quo consciousness proposal with McFadden’s falsifiable proposal, enough interest in truly testing his theory would emerge for it to happen.


        1. Eric,
          It seems like you’ve become so convinced of this two-computer model that the idea that there may only be one system now seems inconceivable to you, to the point that anytime you contemplate a one-system theory, you just assume it could only work if there’s magic to instantiate the second computer.

          However, my position is that the second system (computer or whatever we want to call it) doesn’t exist. It’s a mistaken concept. Call it an “illusion” if you want. Of course, the intuition that something like it does exist is very powerful. But then there are lots of very powerful intuitions that science has invalidated over the centuries. So what you’re claiming requires magic, I’m saying just doesn’t exist. Until you acknowledge that, you’re not talking about either my, Dennett’s, or any other functionalist’s position.

          On the survey, it was actually 67% thinking the room doesn’t understand Chinese, with only 18% thinking it does. Since 33% identified as functionalists and up to another 5% as eliminativists (“up to” since respondents could select multiple categories) on the consciousness question, that leaves a discrepancy of 15-20%. Much of that discrepancy may have gone into other answers on the CR question, such as saying the question is unclear, that there is no fact of the matter, or other variations of just refusing to credit this thought experiment as valid.

          But it does seem that at least some portion of philosophers either don’t understand or don’t accept the full implications of their view, that we aren’t as special as we take ourselves to be. I suspect that’s always going to be the case.


          1. Hi Mike,

            If I might quickly butt in. About the Chinese room, I think it’s possible that many of those who think the Chinese room doesn’t understand Chinese, simply conceive of Searle’s thought experiment as being a disanalogy.

            That’s how I view it in any case. The room’s lack of understanding is not incompatible with functionalism, if you believe for instance that a disconnected language model (which is what the Chinese room is) is insufficient for semantic understanding. Maybe what’s required is sensory functionality, and direct causal linkages to the world, which the room lacks. People might think the Chinese room is just “doing syntax” because it only understands the relations between symbols on the page, and not the relations between sense data and actual objects in the world. To truly understand the meaning of terms, arguably you need the latter.


          2. Hi Alex,
            That’s a good point. This gets to the ambiguities involved in the thought experiment. Are we only talking about it passing the common (weak) Turing test (fooling 30% of respondents for five minutes of conversation), or something more robust that goes on for longer? In the latter case, I think we have to consider the possibility that something like virtual sensory processing might be happening. But to your point, it’s not specified (at least in the versions I’ve read). So a significant number may be answering under the first assumption.


          3. Mike,
            I’m pleased that you’re trying to find good reason to assert that my thumb pain thought experiment does not address the perspective that you, Dennett, and functionalists in general hold. Apparently since you know that I’ve developed a psychology based “dual computers” model of brain function, the hope seems to be that I’m not addressing the functionalist position (since your side considers there to only be one computer here rather than the two I see). Actually, however, you’ve been quite clear that your side does consider the brain to create a self referential consciousness, even though you choose not to refer to this as a computer as I do. So we seem to merely be talking about the same brain-created thing under different names. I like the computer metaphor both for the brain and for the consciousness, in a different sense of “computer”, that the brain sometimes creates. This is merely a nominalistic difference between us however, and so tangential.

            Since your side agrees with me that the brain does sometimes create a phenomenal experiencer of existence (or self referential consciousness), the question is, how does it do this? While I believe that brain information processing should animate some sort of physics that exists as a consciousness experiencer, such as EM fields, your side holds that the brain merely needs to do the right information processing alone. Thus if information correlated with what a whacked thumb sends the brain were inscribed on paper, as well as converted to another set of inscribed paper correlated with the brain’s processing of that information, then something associated with this paper-to-paper conversion should experience what we do when our thumbs get whacked. Can you think of an intelligent way to deny that this is one implication of your movement’s position?

            You’ve observed that the Chinese room tends to be interpreted ambiguously. I agree wholeheartedly. I consider it far too messy in several ways. Therefore I’ve cut out a great deal of this mess to create a succinct thought experiment that should eliminate virtually all of the extraneous baggage. If widely considered however, can you think of any ways that my thought experiment would still be considered ambiguous? And in the end do you still bite the bullet to say that this thought experiment is consistent with your current beliefs? I also wonder if you’re confident enough to let people in general decide for themselves whether or not your position thus violates worldly causal dynamics? Without the Chinese room’s ambiguity I think your side would fare far more poorly than it does today.


          4. Eric,
            Your response only illustrates the point I made above. I say I don’t think there is any additional entity being generated, and you claim I’m really saying a second entity is being generated, and proceed to argue against a notion you project on me and other functionalists.

            Self reference doesn’t imply any additional entity. Open up the task manager or activity monitor on whatever device you’re using right now. That’s self reference, happening without generating any additional device.
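            For a concrete toy version of this (a sketch of my own in Python, using only the standard library, with made-up names for everything), consider one process building a report about its own state. The reporter and the thing reported on are the same process; nothing additional gets instantiated.

            ```python
            # Self reference without a second entity: one process inspecting itself.
            # A minimal sketch using only the Python standard library.
            import os
            import sys
            import tracemalloc

            def self_report():
                """Build a report about the very process doing the reporting."""
                tracemalloc.start()
                workload = [list(range(n)) for n in (10, 100, 1000)]  # some "activity"
                current, peak = tracemalloc.get_traced_memory()       # my own memory use
                tracemalloc.stop()
                return {
                    "pid": os.getpid(),                    # my own process id
                    "interpreter": sys.version.split()[0], # the interpreter running me
                    "heap_bytes_now": current,
                    "heap_bytes_peak": peak,
                    "structures_built": len(workload),
                }

            if __name__ == "__main__":
                # One process, one report about that same process. No second
                # "observer" device is created anywhere in this program.
                print(self_report())
            ```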

            You might argue that type of self reference isn’t sufficient. If we’re talking in terms of degree, I might agree, but a higher degree of self reference functionality doesn’t lead to any metaphysically hard problems. I can understand the position that something more is required, but it’s that intuition that I’m saying is wrong.

            In your thumb pain thought experiment, are you saying the thumb (and overall body) exists virtually within the paper and procedures? If so, then your scenario seems identical to the question of whether simulated entities can be conscious and feel pain. I think every functionalist accepts that possibility. If you read the Georges Rey paper I shared above, you’ll see that he not only bites that bullet, but pours gravy on the whole gun and wolfs it down.

            Honestly, how “my side” fares is irrelevant. If that was my priority, I’d be telling people that there is indeed something special and magical about the way our brains process information. Any view that goes against that sentiment will be dismissed out of hand by large numbers of people. Anyone can say the right things and be popular. I care more about getting it right.


          5. Mike,
            I agree that being popular is not your general goal. Some people are extremely concerned about popularity, though you don’t strike me that way at all. You say what you believe in the attempt to get things right regardless of any popular backlash. But one thing that does seem quite important to you is the health of the movement that we’re now discussing. As purpose based creatures we all invest in causes, and some of these causes naturally become more dear to us than others. Therefore we need to assess our dearest causes as objectively as we’re able to. Otherwise they tend to become based upon faith rather than reason, whether we’re right or we’re wrong. I’m sure that you could count a number of positions in which you perceive me to be faithful rather than reasonable. I challenge anyone to make such arguments effectively. In truth you’re one of the few who does try to challenge my positions. I doubt you grasp how valuable I consider this service to be.

            The issue at hand is that your movement, known as functionalists, illusionists, eliminativists, and such, makes a positive claim about the existence of consciousness, though without acknowledging any apparently spooky elements of that positive claim. The claim is that consciousness exists by means of information processing alone, which is to say an information processing that creates consciousness without animating any associated instantiation mechanisms. Perhaps Searle merely blunted the popularity of your movement somewhat, and I’d say largely because he failed to parsimoniously reduce it back to basic enough elements of belief. I hope to do better.

            There was a period of time when you’d admit that if paper with the right inscriptions on it was properly converted to another set of inscribed paper, then something here would feel what you do when your thumb gets whacked. At the moment however you seem to think that there’s a better alternative. Just as many have legitimately ignored Searle’s thought experiment given its unclarity, maybe you’ve found a way past my concise and clear thought experiment?

            Your point seems to be that I’m claiming that the brain creates an additional entity (or even a new device?), while your movement holds that no such addition exists. But is this a valid way out? One of the points that you’ve recently been mentioning is that even the eliminative band of your movement believes that brains can convert non-conscious function into conscious function. Given the way that it’s claimed to do so, this specifically puts your movement in the cross hairs of my thought experiment. This is where I reduce your position to bare essentials so that a rational assessment of the premise might be made.

            On the specifics of my thought experiment, no, I’m not saying that a thumb or body exists virtually in the proposed paper or procedures. Occam would not abide such extraneous additions. Instead there is paper with markings on it that correlate with the information that your whacked thumb sends your brain, and it’s scanned by a computer that then prints new paper with markings on it which correlate with your brain’s response to that information. As I understand your movement, its premise mandates that such a conversion would in itself create something that experiences what you do when your thumb gets whacked. Who knows what would do this experiencing, but still.

            If I’ve now been clear enough to illustrate that you haven’t found “an out” here (since I’m merely assessing a positive claim about what the brain does to create even eliminativist consciousness), you might try arguing that I’ve got this position wrong. Would the right inscribed paper, properly converted to other inscribed paper, not be “information processing” such that thumb pain would result? If so then you might provide an explanation. Or otherwise you might decide that in the quest for reason to prevail, this does seem like a bullet that your movement should bite.


          6. Eric,
            “The claim is that consciousness exists by means of information processing alone, which is to say an information processing that creates consciousness without animating any associated instantiation mechanisms.”

            Sorry to sound like a broken record, but I don’t know how to get through on this other than to just keep repeating the point. The first part of that sentence doesn’t entail the second part. There is information processing, some of which we label “consciousness”. That’s it. Nothing is created. Why do you think eliminativists have that name? What do you think the illusionists are saying is the illusion? All there is, is the information processing, the functionality.

            On your thumb pain scenario, you are right that the virtual thumb isn’t needed. Not sure why my mind went there. Brains after all do experience phantom limb pain. All that’s needed is the same information processing that takes place in the brain when the experience happens. If we stipulate that’s happening in the paper processing, then sure, my answer remains unchanged: there is something there experiencing thumb pain. (I don’t really consider my answer here any different than the one I gave above for simulated entities.)

            As far as I can see, the only reason to doubt this would be to assume something magical about brains. (You’ll probably say EM fields, but if they’re significant, and the information processing in the EM fields is also reproduced in the paper processing, then we still have the experience. The only way out is to say there’s something magical about the field.)

            On “my movement”, as I’ve said before, and whether you believe it or not, these -isms aren’t ideologies I signed up for. They’re labels that more or less accurately summarize a set of conclusions. If further investigation eventually makes them less accurate for me, it’ll be time to find new labels.


          7. That didn’t sound like a broken record to me Mike. I was concerned that you were trying to escape the implications of your current beliefs as portrayed by my thought experiment. Instead you’ve accepted those implications in full to the rebellious theme of “You call this ‘biting the bullet’?” Thus if certain inscribed paper were properly converted into other inscribed paper, you believe that something here would experience what you do when your thumb gets whacked. Conversely I suspect that in order for such an experience to exist, an electromagnetic field would need to be created which exists as that experiencer itself (and the parameters of the field would in some sense be described by the second ream of inscribed paper). For causality to be preserved, some such additional step should be necessary I think — computer information should never exist independently of associated actualization mechanisms. To change my mind on this I’d at least need some contrary examples.

            On your “nothing is created” assertion, that depends. If the thumb pain experiencer does not initially exist, but then comes to exist with the printing of the second set of inscribed paper, I think many of us would say that this experiencer would be “created” by associated information processing. And how might something exist without any medium from which to exist? Theists propose souls, McFadden proposes an EM field, and I guess you propose mediumless processing in itself. In any case my psychology based dual computers model is set up to address all such proposals.

            I can see that you won’t be changing your mind without experimental evidence that my extra step is causally required. On the method I propose to test McFadden’s theory, I used to think that we’d need to put transmitters inside someone’s head that fire about like individual neurons to see if we could get into a firing zone which creates a field that tampers with someone’s standard phenomenal experiences for oral report. But that might take too many implanted brain transmitters to be practical. It seems to me now however that we should be able to set up enough fake neurons for firing outside the brain, and then transmit the full EM field product into the brains of sufficiently outfitted test subjects. Any thoughts on this proposal?


  3. “a will that is, to some extent, free of the laws of physics”, “deep metaphysical mysteries”…..hmm! Presumably here you merely refer to somebody else’s view or opinion that either of these phrases has any validity?


    1. Pretty much. I’m a stone cold physicalist, although totally open to evidence that could change that view. And usually when you see me use the word “metaphysical”, it’s a clue I’m talking about other people’s views.


  4. I agree with your inclination to doubt that “almost all” problems of philosophy are verbal pseudo-problems. Often they turn on substantive issues. Of course, verbal disputes can still get in the way of a discussion that actually centers on substantive disagreements, but that doesn’t make the problem a verbal one. Free will provides a good example.

    The big substantive question hidden inside the free will debate is: how do time and causality work – both in human action, and in general? Most people have an intuitive model of physics, time, and causality that is somewhere in the neighborhood of Aristotelian physics, with a bit of Cartesian “matter” thrown in. In this model: Matter is dumb, inert, made of tiny billiard balls that get pushed around. Time is a moving river that sweeps literally everything along from past to future. Causality applies to all matter, acting from past to future, in an action that has no equal and opposite reaction. The intuitive model doesn’t say whether physical causation is deterministic or probabilistic, although individuals may have opinions.

    If this model were correct, a familiar philosophical argument, that determinism precludes (an important aspect of) free will, would be correct. (Its name in philosophy, for a carefully spelled out variant, is “the Consequence Argument”; the name might not be familiar.) Moreover, indeterminism wouldn’t help either. Unless there are spooky non-physical feats of human minds, matter is dumb and inert, so Lucretius’s swerving atoms or QM’s chancy collapses cannot be your choices in motion. Choices are not dumb and inert, but those things are.

    Based on this intuitive set of assumptions about physics, many people define “free will” in contrast to determinism, or even to materialism. This is understandable, relative to their assumptions about physics. It’s like including “mammal” in the definition of “cat”. Why wouldn’t you? It’s blatantly obvious that any cat must be a mammal. If anyone says they think something is a cat, but not a mammal, most people would think they’re crazy, or playing silly word games.

    The truth about time and causality is every bit as surprising as finding out that cats are actually from the Andromeda Galaxy, and not even members of what Earthlings call the animal kingdom. Causality doesn’t go all the way down to the fundamental physical constituents. At that level, natural laws have an elegant symmetry in time that erases the master-slave concept people associate with causality and the flow of time. And time is just a coordinate on a 4D manifold in which all times are, tenselessly, equally real. (I flag “are” because ordinary English drags present tense into a claim that is meant tenselessly.)


    1. I agree that the deterministic vs indeterministic issue is irrelevant for free will. But as you noted, thinkers throughout history, from the Epicureans forward, seemed to think it matters. I remember reading about physicists in the 1920s welcoming the indeterminacy of quantum physics because they thought it mattered in this way. I’ve never understood why.

      But I’m not seeing how time symmetry at the microscopic level matters either. Just because the mechanisms can be looked at from opposite directions doesn’t seem to provide any notable freedom. The interactional relationships are still just as clockwork as before, except we can run the clock backward. And of course as we scale up, the Second Law of Thermodynamics kicks in and takes that away. Am I missing something?
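      For concreteness, here’s the kind of clockwork-in-both-directions I mean (a toy sketch of my own, not anything from the literature on this debate): a frictionless harmonic oscillator stepped with velocity Verlet, a time-reversible integrator. Run it forward, flip the velocity, run it again, and it retraces its path exactly. The story told backward is just as lawful, and nothing about that reversibility adds any freedom.

      ```python
      # Microscopic time symmetry in miniature: a harmonic oscillator
      # stepped with velocity Verlet, which is a time-reversible integrator.
      def step(x, v, dt):
          a = -x                                  # F = -x (unit mass and spring)
          x_new = x + v * dt + 0.5 * a * dt * dt
          a_new = -x_new
          v_new = v + 0.5 * (a + a_new) * dt
          return x_new, v_new

      x, v, dt, n = 1.0, 0.0, 0.001, 5000

      for _ in range(n):            # tell the story forward
          x, v = step(x, v, dt)

      v = -v                        # flip the direction of time

      for _ in range(n):            # tell the same story backward
          x, v = step(x, v, dt)

      print(x, -v)                  # ~ (1.0, 0.0): back at the start, up to rounding
      ```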


      1. What you’re missing is that the past isn’t “fixed”; i.e. it isn’t immune to what you do right now. Let me spell out the Consequence Argument. The zero-numbered statements are terminology:

        (0a) Let P be a true statement about the complete past at some distant past time t.
        (0b) Let L be a true statement of all the laws of nature.
        (0c) Let Z be a true statement about what you will do tomorrow.
        (1) There is nothing you can do or could have done such that, had you done it, P would be false. (premise)
        (2) There is nothing … such that, had you done it, L would be false. (premise)
        (3) Necessarily, (P & L) -> Z. (determinism: assumed for conditional proof)
        (4) Therefore, if determinism is true, there is nothing you could do to avoid doing Z tomorrow.
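
        For reference, van Inwagen’s own modal rendering of the argument runs roughly as follows (a sketch from memory of his 1983 formulation, so treat the details with care). “Np” abbreviates “p is true and no one has, or ever had, any choice about whether p”:

        ```latex
        % Sketch of van Inwagen's modal Consequence Argument.
        % Np = "p, and no one has, or ever had, a choice about whether p".
        % Rule alpha: from \Box p, infer Np.
        % Rule beta:  from Np and N(p \to q), infer Nq.
        \begin{align*}
        1.\;& \Box\big((P \land L) \to Z\big)  && \text{determinism, assumed}\\
        2.\;& \Box\big(P \to (L \to Z)\big)    && \text{from 1, modal logic}\\
        3.\;& N\big(P \to (L \to Z)\big)       && \text{from 2, rule } \alpha\\
        4.\;& NP                               && \text{premise: fixity of the past}\\
        5.\;& N(L \to Z)                       && \text{from 3, 4, rule } \beta\\
        6.\;& NL                               && \text{premise: fixity of the laws}\\
        7.\;& NZ                               && \text{from 5, 6, rule } \beta
        \end{align*}
        ```

        My premise (1) above corresponds to step 4 here, the fixity of the past, and that is exactly the step I deny.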

        Premise 1 is false; in fact, everything you do now is nomologically (i.e. as a matter of natural law) relevant to the past state P. If we use the verb “affects” in a way that does not commit us to causal asymmetry, we can say everything you do affects the past – it just does so in chaotic and practically unknowable ways, and perhaps only in microscopic detail rather than macroscopically.

        In the macroscopic world – like, for example, clockwork – irreversibility applies and nomological relations support asymmetric causality. But this doesn’t hold for the entire world, and the macroscopic phenomena are not the whole story.

        See the last section of the SEP article on Causal Determinism, for more. In Hoefer’s “Freedom from the Inside Out” paper cited there, he argues that we can (intellectually) and should (practically) begin our nomological reasoning from the inside of time – now, in our reference frame – outwards to both the past and future.


        1. Thanks for the explanation. I see what you’re saying. But it seems to depend on using language in a certain manner. “Affect” is a word that evolved in terms of macroscopic phenomena. It seems like using it this way is misleading.

          I could see the argument that there is a relation between how things are today and how they were yesterday. We can even say today and yesterday interact with each other, just as today and tomorrow do. But when looking at those interactions, we can always choose to tell the story forward, or tell it backward with equal validity, at least at the microphysical level. I think the need for the story to be coherent in both directions puts constraints on the relations.

          As we noted, as soon as we scale up into macroscopic phenomena, things become asymmetric due to entropy. Which to me means that I as a macroscopic system can’t make a decision to do anything that affects (in the traditional sense of the word) what happened yesterday. So I remain unclear how this might make a difference with free will.


          1. It’s not really clear that “affects” in ordinary language implies asymmetry. For example, people say that economics affects politics and vice-versa. Now of course, economics and politics are long-term ongoing processes, and you can divide them into small time-slices, etc etc. But yeah, any term remotely related to causality is likely to encourage sneaking-in of assumptions. Interaction is a better word, so kudos to you for that.

            This, however, is just wrong:

            I think the need for the story to be coherent in both directions puts constraints on the relations.

            When self-reference is involved, constraints can vanish. Try this exercise: draw a pie chart with any number of colors; e.g. red, green, and blue. The red part will represent the fraction of the chart which is red. Similarly for green and for blue.

            Draw carefully! You wouldn’t want to get the proportions wrong! 😉
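
            If drawing isn’t your thing, here’s the same exercise in a few lines of Python (my own toy rendering of it, with made-up names). Pick whatever proportions you like; the “constraint” checks itself:

            ```python
            # The self-referential pie chart: each wedge represents the fraction
            # of the chart drawn in that wedge's color. Toy check that ANY
            # proportions satisfy the "constraint".
            import random

            COLORS = ("red", "green", "blue")

            def random_pie():
                """Draw however we please: arbitrary normalized proportions."""
                raw = [random.random() for _ in COLORS]
                total = sum(raw)
                return {c: x / total for c, x in zip(COLORS, raw)}

            def fraction_of_chart_colored(pie, color):
                # What fraction of the chart is drawn in `color`? For a pie
                # chart that just IS the size of the wedge: the quantity each
                # wedge is supposed to represent comes along for the ride.
                return pie[color]

            for _ in range(3):
                pie = random_pie()
                ok = all(pie[c] == fraction_of_chart_colored(pie, c) for c in COLORS)
                print({c: round(v, 3) for c, v in pie.items()}, "satisfied:", ok)
            ```

            However carelessly you draw, the check passes; the “requirement” was never an independent constraint at all.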


          2. I have to own up to getting the use of “interaction” for the microphysical from Carlo Rovelli in his description of RQM.

            I’m totally not catching the point with the self-reference thing, or the pie-chart exercise. But it’s been a long day. Maybe it’ll click later.


          3. Well, I didn’t explain the significance of the pie chart, so it’s not surprising you didn’t intuit the connection. I explain it here. How you map this onto human decision and the past and future is as follows. The constraint in a normal pie chart (say, one about the major sectors of the US economy) is that if the green wedge represents agriculture and agriculture accounts for 6% of the economy, you have to draw the green wedge to cover 6% of the territory.

            The “constraint” in a deterministic universe is that you “have to” pick an action that “corresponds” to the past state. Except, that’s not really a constraint: it’s a gimme. The past state is not independent of your present choice – in fact, it comes along for the ride. The “correspondence” “requirement” is self-referential, just like the self-referential pie chart. You can pick whatever present action seems likely to correspond to a future result you like, without worrying about the past. Or maybe you don’t care about the future, you just feel like dancing/running/whistling in the present. Whatever – it’s up to you.


          4. Ok, thanks. I’m still not getting it, but I really need to re-read your post again slowly and deliberately when I’m not tired, as I’ve been at the end of every day recently.


      2. Just wanted to suggest that some folks base their concept of responsibility on there being non-deterministic free will, as in if things are determined, there’s no basis to hold them responsible for their actions.


        1. Right. But if I can say the deterministic laws of physics made me do it, why can’t I equally say the indeterministic laws of physics made me do it? At least in the deterministic case, my learned and innate preferences will be in the causal mix. With indeterminism, a random fluctuation may actually undermine my previously established values at a crucial moment. I don’t see how anyone finds responsibility in that. It seems like unpredictability is being conflated with responsibility.


  5. “The best approach to virtually all problems in contemporary philosophy is to treat them as pseudo-problems by making their semantics precise then delineating the often straightforward solutions to each precisification”

    This is vintage Ludwig Wittgenstein, i. e., the Tractatus Wittgenstein of 1921. I hope Anthis gave him credit. Like so many who fall in love with the thinking of the Tractatus, I’m guessing Anthis fails to mention Wittgenstein’s later, much different, and, in my opinion, much better work, Philosophical Investigations of 1953.


    1. He does cite early Wittgenstein in the paper, but not late Wittgenstein. And he’s pretty much the only contemporary thinker I’ve come across who calls himself a verificationist in the logical positivist tradition. I haven’t been able to find anywhere where he writes about that in detail, only this footnote: https://jacyanthis.com/big-questions#fn:positivism
      My impression is he’s using verification in a weaker sense than the historical logical positivists. His usage seems closer to falsifiability.


      1. Thanks Mike for the ready reference. I’ve never read anything of his. And now that I see that he promotes his book on Animal Farming with a quote from Steven Pinker, it is less likely I ever will.


    1. “Soul” is another one of those words which can be used in various ways. There’s no evidence for the most common contemporary version of an immaterial essence of us that lives on after death. But just by me saying that, I’m sure someone will accuse me of being an eliminativist toward the soul in its more grounded form as a synonym for the mind, and then ignore all clarifications.


  6. Yeah, seems to me that jargon, or semantics, is a way for persons/individuals to (non-violently, non-physically) distinguish themselves from others. Not dissimilar to language in a broader sense. The root of which is status. Within and between groups. And the root of status elevation is?


    1. Psychoanalyzing those we disagree with is always a dodgy endeavor, and I’ve gotten in trouble for doing it in the past. But I suspect the motivations for being ambiguous and obfuscatory come from all kinds of places. Some of it might come from seeking or preserving status, but also some from existential angst, a desire to be kind, or at least diplomatic, or maybe even to be entertaining. Wherever it does come from, if truth is our goal, the lack of clarity is a hindrance.


  7. This is where the distinction between 1st and 3rd person points of view (POV1/POV3) is of importance.

    LW’s “beetle in the box” parable comes to mind. For me, consciousness semanticism is correct in POV3 — hence I accept Dennett’s intentional stance as the only feasible POV3 approach. But in POV1, there is nothing semantic about consciousness. At the very least, it feels like a primary “given”, underlying all else. This also needs to be accounted for, and semanticism offers no clues in that direction.

    (This is really a more general scientific point. Whenever science declares that something is at variance with how it appears to us, it also has to explain why that something appears to us as it does. Classic example: once it was realised that solid matter is almost entirely empty space it became necessary to explain why solid matter is, well, solid!)

    That primacy of consciousness in POV1 may be necessary or it may be a mere contingent quirk of evolution. We have no idea, since it is impossible to generalise from a single data point (ourselves). But that’s a separate issue.


    1. POV1 and POV3 are cool abbreviations. Good points here Mike. I do think a couple of additional ones are worth noting.

      The first is that POV1 has a lot of limitations. As you noted, we only have access to our own POV1. And there is plenty of cognitive research demonstrating that our access to our own mind is pretty limited. It’s very difficult to determine what functionality from POV3 is included in our POV1.

      The second is that the POV3 is a construction, something we create in collaboration through all of our POV1s. That makes it more resilient, or at least gives it the potential to be more resilient, to be plagued with fewer blind spots and hidden biases. (Although there are cultural and species level biases to be on alert for.)

      These points are worth reviewing because the hard problem, explanatory gap, mind-body problem, etc, arise due to a disconnect between the two views. When you have two views that appear to be irreconcilable, the question to ask is, is there a difference in reliability between them?

      As you noted, we still have to account for the aspects of POV1 that lead to these discrepancies. I like Michael Graziano’s theory here, but there are other plausible ones. I think the common property is that our perceptual models are abstract, but they don’t seem abstract since we have no access to the processing they’re abstracted from. We have no access to the intermediary steps leading to the conclusion of redness in a part of the visual field, we only have access to the redness itself, which makes it seem disconnected and otherworldly.

      But there remains a lot of scientific work to fully test and refine these explanations.


      1. Yes, POV3 is constructed by abstracting away from idiosyncrasies of POV1. In a different way, POV1 is, of course, also a construction — assembled by us pre-consciously by synthesis and extrapolation from a mixture of sensory data and memory. This, I think, applies to our self-perception as much as to our perception of the world around us.

        We probably agree so far. But when you say “When you have two views that appear to be irreconcilable, the question to ask is, is there a difference in reliability between them?” I have to ask: is reliability a sensible measure here? If introspection is taken as the main function of consciousness, then it is thoroughly unreliable. But if it is viewed as a kind of to-whomever-it-may-concern information clearing system, then is it really unreliable?

        If consciousness is an unreliable introspection tool, I suggest the logical assumption is that introspection is not its evolutionary purpose. My take on it is that POV1 and POV3 are irreconcilable because they serve different purposes and hence carve their respective application domains into conceptualisations in ways that are misaligned and quite possibly resist alignment (except, perhaps, in ways which are possible in principle but impossible in practice).


        1. It seems like whether introspection is the main function of consciousness depends on what we mean by “consciousness”. Some people define consciousness as introspection. Broader views take it to be the result of deliberative attention, reflexive attention, emotions, or something more basic, all of which seem to leave introspection as more of an add-on.

          A related question might be, what is the function of introspection? Metacognition in general may have started as an ability of an animal to assess how confident it is in a particular belief. (How sure am I that I’ll make it if I leap to that branch?) That’s the version most easily discoverable with most mammals. It may have additional functions for a social species as a mirror to an intuitive theory of mind.

          When we get to humans and language use, it may rise to the level of accessing our own mental states so they can be shared. I say “may” here because there are people who challenge how much access we really have to our mental states. We may only have access to a theory of mind turned inward. (I tend to think the reality is a mix. We have some privileged access to our mental states, but probably not as much as we think.)

          But one thing it’s hard to see is any evolutionary role for using introspection to understand the architecture of the mind. That definitely seems outside of its evolved purpose.

          So I do think POV1 and POV3 are reconcilable, if we can accept that introspection isn’t more reliable than any other form of perception. Which isn’t to say we don’t eventually need to account for the misintuitions it gives us. Of course, it could be argued that’s not so much reconciling as simply privileging POV3.


          1. Mike,

            What I am saying is that introspection being unreliable seems to me to rule out any identification of consciousness with introspection. I have too much respect for evolution to think otherwise. 🙂 It is surely (!) quite possible that in evolutionary terms, introspection as such is not actually *for* anything. I suspect it is more like Gould’s “spandrel” — an inevitable side-effect of something else; in this case of a need to be able to predict one’s future states (hunger, thirst, exhaustion…) and indeed to make sense of the behaviour of others.

            Can POV1 and POV3 be reconciled? Depends on what you mean by “reconciled”. I think it is likely that we shall eventually collectively convince ourselves that any specific mind state is explicable in terms of the corresponding specific brain state. I also think it is exceedingly unlikely that the eliminativist project of achieving translatability between mind states and physical states could come to anything.

            My reasons go back to Quine’s radical indeterminacy of translation (and Davidson’s elaboration on that). In daily practice, this indeterminacy is unproblematic, because it can always be resolved by contingent context, without which even human language translation can be wobbly (apocryphally, an AI translated “out of sight, out of mind” into Chinese and back as “invisible idiot” :-)). And how does one establish the relevant context in attempting POV1/POV3 translation? (NB: I sweep under the rug the awkward issue of the frame problem: what exactly *is* the relevant context?)


          2. Mike,
            I’m not sure we’re that far apart on introspection. Although I’m not inclined to think of it as a spandrel, since it seems pretty complex. But this might come down to what we’re including in “introspection”. It seems like we agree that introspection doesn’t have, as an adaptive role, providing insight into how the mind works.

            We’ll have to see on providing translatability. I’m a type-A materialist (in Chalmers’ taxonomy). If I recall, you’re a type-B, a non-reductive materialist. I think we’ll eventually be able to do better than identity relations, at least better than identity relations that remain controversial. But maybe I’m wrong and we’ll ultimately hit a wall.

            I might have to do some homework on Quine and Davidson and indeterminacy of translation, but I do agree that the ambiguity of language means many mental terms won’t have a clean mapping to physical realities, at least unless we settle down to a precise meaning. The problem is that often every precise meaning is controversial. Still, for most precise meanings, the problem becomes tractable, if not straightforward. Of course, getting to that precise meaning is often the hard part.
