Panpsychism and definitions of “consciousness”

Disagreeable Me asked me to look at this interesting TED talk by Professor Mark Bishop.

The entire talk is well worth the time (20 minutes) for anyone interested in consciousness and the computational theory of mind, but here’s my very quick summation:

  1. The human mind, and hence consciousness, is a computational system.
  2. Since animal minds are computational, other computational systems that interact with their environment, such as the robots Professor Bishop discusses in the video, should be conscious.
  3. Everything in nature is a computational system.
  4. Given 3, everything in nature has at least some glimmers of consciousness.  Consciousness pervades the universe.

The conclusion in 4 is a philosophy generally called panpsychism.  It’s a conclusion that many intelligent people reach.

First, let me say that I fully agree with 1.  Although it’s often a ferociously controversial conclusion, no other theory of mind holds as much explanatory power as the computational one.  Indeed, many of the other theories that people often prefer seem to be more about preserving and protecting the mystery and magic of consciousness, forestalling explanation as long as possible, rather than making an actual attempt at it.

I also cautiously agree with 3.  Indeed, I might say that I fully agree with it, because if we find some aspect of nature that we can’t mathematically model, we’ll expand mathematics as necessary to do it.  (See Newton’s invention (discovery?) of calculus in order to calculate gravitational interactions.)  We could argue about exactly what computation is and whether something like a rock does it in any meaningful sense, but with a broad and long enough view (geological time scales), I think we can conclude that it does.

When pondering 2, I think we have to consider our working definition of consciousness.  We could choose to define it as a computational system that interacts with the environment.  If we do, then everything else follows, including panpsychism.

But here’s where I think panpsychism fails for me.  Because then the question we need to ask is: what follows from it?  If everything is conscious, what does that mean for our understanding of the universe?  Does it tell us anything useful about human or animal consciousness?

Or have we just moved the goal line from trying to understand what separates conscious from non-conscious systems, to trying to understand what separates animal consciousness from the consciousness of protons, storm systems, or robots?  Panpsychists may assert that the insight is that there’s no sharp distinction, that it’s all only a matter of degree.  I’m not sure I’d agree, but even if we take it as given, those degrees remain important, and we’re still left trying to understand what triggers our intuitive sense of consciousness.

My own view is that consciousness is a computational system.  Indeed, all conscious systems are computational.  However, the reverse is not true.  Not all computational systems are necessarily conscious.  Of course, since no one can authoritatively say exactly what consciousness is, this currently comes down to a philosophical preference.

People have been trying to define consciousness for centuries, and I’m not a neuroscientist, psychologist, or professional philosopher, so I won’t attempt my own.  (At least not today. 🙂 )  But often when definitions are elusive, it can help to list what we perceive to be the necessary attributes.  So, here are aspects of consciousness I think would be important to trigger our intuitive sense that something is in fact conscious:

  • Interaction with the environment.
  • An internal state that is influenced by past interactions and that influences future interactions, i.e. memory.
  • A functional feedback model of that internal state, i.e. awareness.

I think these factors can get us to a type of machine consciousness.  But biological systems contain a few primary motivating impulses.  Without these impulses, this evolutionary programming, I’m not sure our intuitive sense of consciousness would be triggered.

What are the impulses?  Survival and propagation of genes.  If you think carefully about what motivates all animals, it ultimately comes down to these directives.  (And technically survival is a special case of the gene propagation impulse.)  In mammals and social species, it gets far more complex with subsidiary impulses involving care of offspring and ensuring secure social positions for oneself and one’s kin (in other words, love), but ultimately the drive is the same.

It’s a drive we share with every living thing, and a system that is missing it may have a hard time triggering our intuitive sense of agency detection, at least in any sustained manner.  I think it’s why a fruit fly feels more conscious to us than a robot, even if the robot has more processing power than the fly’s brain.

Of course, a sophisticated enough system might cause us to project these qualities onto it, much as humans have done throughout history.  (Think worship of volcanoes, the sea, storms, or nature overall.)  But knowing we’re looking at an artifact created by humans seems like it would short-circuit that projection.  Maybe.

Anyway, those are my thoughts on this.  What do you think?  Am I maybe overlooking some epistemic virtues of panpsychism?  Or is my list of what would trigger our consciousness intuition too small?  Or is there another hole in my thinking somewhere?

Update: It appears I misinterpreted Professor Bishop’s views in the video.  He weighs in with a clarification in the comments.  I stand by what I said above about general panpsychism, but his view is a bit more complex, and he actually intended it as a presentation of an absurd consequence of the idea of machine consciousness.

This entry was posted in Mind and AI. Bookmark the permalink.

116 Responses to Panpsychism and definitions of “consciousness”

  1. Hi SAP,

    I don’t have much time right now but I stopped when I read your summary of his argument because I think you’re significantly misunderstanding his argument.

    His argument is much more about how pretty much any computation can be attributed to pretty much any physical system, with a rigorous argument to demonstrate that this is the case. It’s not that we are surrounded by glimmers of consciousness; it’s that every physical system of reasonable complexity is implementing every conceivable algorithm simultaneously! All possible minds are instantiated pretty much everywhere. (Which is absurd, so there has to be more to mentality than computation, according to Bishop.)

    This is not the panpsychism of David Chalmers in other words, which is more what you seem to be addressing.

    More tomorrow.

    Liked by 1 person

  2. ratamacue0 says:

    I’m interested to watch the video when I get a chance.

    Meanwhile, I read the post. On 2, it seems to me like you’re redefining consciousness to exclude qualia.

    Like

    • Not necessarily (although I guess it depends on what exactly you consider qualia to be), but the definition I associate with 2 was really me just attempting interpretational charity, admitting that, with an appropriate definition of consciousness, panpsychism can be logical. As I discuss further down though, it doesn’t really work for me.

      Like

  3. lemarkle says:

    Dear Michael (?),

    Firstly thanks for your very generous comments on my recent TEDx talk. I am replying to your post as I fear you have misunderstood the argument a little.

    The position I outline in philosophy is called a ‘reductio ad absurdum’ argument; a form of argument which seeks to demonstrate that a statement is true by showing that a false, untenable, or absurd result follows from its denial, or in turn to demonstrate that a statement is false by showing that a false, untenable, or absurd result follows from its acceptance.

    In my TEDx talk I initially show how we can implement any input-less finite state automaton via a large digital counter, or, after Putnam, via any open physical system (such as a rock). I extend this to show how, in addition, over any finite time interval, any FSA with fixed input can be similarly implemented.

    Thus, IF we accept that the execution of a computer program can give rise to consciousness, then such ‘consciousness’ is found in any open system (i.e. a vicious form of panpsychism is true). I.e. the Dancing with Pixies argument demonstrates that computational explanations of consciousness lead to (for me, but perhaps not everyone) an absurd conclusion (panpsychism). This leads me to reject the first horn of the argument (that the execution of appropriate computations necessarily gives rise to consciousness).

    Of course, this argument has no force for those who are content to live with such panpsychism, albeit in this case the cherished computational model ceases to explain consciousness (because consciousness is everywhere); thus, assessed in terms of understanding consciousness, the panpsychist ends up no further forward; he has simply explained the hard problem away by a handwaving manoeuvre that concedes that everything is conscious. And to me this just caches out as just another quasi-religious explanation of mind ..

    All the best,
    – prof j. mark bishop (TCIDA, Goldsmiths, London UK)

    Liked by 1 person

    • Professor Bishop,
      I appreciate your clarifications. I’ll add a link to your comment in the post. I did indeed misunderstand your position in the video.

      I’m not familiar with Hilary Putnam’s observations, but they sound similar to the Boltzmann brain problem, which is that we can never be sure we’re not a consciousness with false memories that just briefly came into existence out of chaotic pattern formation, and will soon fade back into chaos.

      I see now why Disagreeable Me highlighted your talk, although knowing his views, I suspect his takeaway is different from yours. (Although I’ll wait for him to describe that takeaway himself.)

      My own first reaction is to want to know more about exactly what Putnam observed. Like Boltzmann brains, it sounds like an interesting thought experiment, but I can’t see any way to prove or disprove its speculation. I certainly don’t see it, in and of itself, weakening the computational theory of mind. As I noted in the post, that theory remains the one with the most explanatory power and should only be discarded in favor of another with more explanatory power.

      Liked by 1 person

      • Hi Mike,

        I don’t really see much of a connection to Boltzmann brains.

        I’ll try to break down the argument as I understand it (not having read Putnam on this either).

        A computation can be modelled as a finite state automaton. In this model, the machine is in one of a finite (though often very large) set of states. For instance, when a computer plays chess, the state may be the state of the board (although that’s simplifying a great deal because we’d also need to consider the states it goes through as it tries to pick its next move). At each clock tick, it moves to a new state which is determined only by its current state and any input it might receive.

        The algorithms we write are really just a way of condensing and abbreviating an exhaustive list of state transitions when there are exponentially many states to pick from. We prefer to write a for loop that iteratively prints the numbers from 1 to 10, but we could just print 1, then print 2, then print 3, etc.

        You can easily associate particular output with each state, so that a relatively trivial bit of wiring up could drive a motor on a robot or show on a monitor what a user would expect to see at this point of the computation (e.g. if in state X, display this image).

        If we determine the input ahead of time (as Bishop suggests when he talks about replacing the sensory input of the robots with a pre-recorded feed), we can also make the input part of the state of the machine, and so now at each clock tick it moves to a new state determined only by its current state. The machine charts a predetermined path from state A to state B to state C and so on. If we number the states according to the order in which they appear, then it simply moves from state 1 to state 2 to state 3 and so on until it terminates or repeats.

        Now, if you have any sort of digital counter which increments a number every clock tick, you can interpret it as implementing this finite state machine. All you need to do is wire up the outputs to map the numbers on the counter to the display corresponding to each numbered state.
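A minimal sketch of the point above, in Python (my own toy illustration, not code from the talk or the papers): a fixed-input FSA traces one predetermined path through its states, so a bare counter plus an output mapping reproduces its observable behaviour exactly. The state names and outputs here are invented for the example.

```python
# An FSA with its input fixed: each state has exactly one successor,
# so transitions depend only on the current state.
transitions = {"A": "B", "B": "C", "C": "halt"}
outputs = {"A": "hello", "B": "world", "C": "!"}

def run_fsa(start):
    """Run the FSA normally, collecting the output of each state."""
    trace = []
    state = start
    while state != "halt":
        trace.append(outputs[state])
        state = transitions[state]
    return trace

# The same run, "implemented" by a counter: number the states in the
# order they are visited, then map counter values straight to outputs.
visited = ["A", "B", "C"]                      # the predetermined path
counter_map = {i: outputs[s] for i, s in enumerate(visited)}

def run_counter():
    """Each clock tick just increments the counter; the mapping does the rest."""
    return [counter_map[tick] for tick in range(len(visited))]

assert run_fsa("A") == run_counter()  # externally indistinguishable
```

The whole of Putnam's trick, as DM describes it, lives in `counter_map`: any system that reliably ticks through distinct states can play the role of the counter.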

        Any physical system which deterministically moves from one identifiable state to another without repeating for at least as long as our target state machine could be used as such a digital counter. There are such systems everywhere. This system could be as simple as a particle falling into a source of gravity, where the “count” is how many units of distance it has travelled. Or even in the empty vacuum of space, the “count” might be how many units of time have passed. It seems we may not even need a rock. Or the solar system itself could be such a counter, with successive states being different non-repeating (because they all have different orbital periods) configurations of the planets as they move in their orbits a little every nanosecond.

        We can’t use “rocks” for computation in practice because
        1) The internal state of a rock is not easy to read, if for example you’re relying on the precise disposition of its molecules
        2) The number of states involved is frequently greater than the number of atoms in the universe and cannot be handled in practice with a brute listing of state transitions. For this reason we usually don’t actually have the state transition table even though it is implicit in the algorithms we write.
        3) We frequently do not know the input in advance and so we need to be able to build systems which can react dynamically

        BUT!

        1) If we could read the relevant state of any deterministically evolving non-repeating physical system (which we’ll call the rock for brevity)
        2) And we had a computer program which could pass the Turing Test for a set of pre-scripted questions
        3) And we worked out all the state transitions for that program on that input
        4) And we worked out a mapping of states of the physical system to output states on a terminal

        THEN

        A rock could pass the Turing Test.

        Mark Bishop takes this to imply that computationalists ought to believe that the rock is conscious, which is absurd, and so we ought to reject computationalism.

        There are of course some differences between a rock and a computer. For a start, it’s easy to read the state of a computer (it pretty much reads its own state for us). There is also more of a direct physical connection between its state and the output it produces so that we don’t need to do very much to have the output wiring just work for all possible states (even those it will never actually visit).

        Bishop presumably does not regard these as very significant, being only practical in nature, but perhaps they provide some room for disagreement with his conclusion.

        Liked by 1 person

        • lemarkle says:

          Pretty good summary of the underlying argument DM! In addition, recall all real computers simply are finite state automata, so I think the DwP argument is both robust and general (and actually does not leave much ‘wiggle room’).

          In addition, to me at least, the very idea – if computational accounts of consciousness are true – that a suitably large counter would be conscious, is troubling enough to give pause to all those not forever stuck in the vice-like grip of a computational ideology 🙂

          In the context of the above, btw, I think one underlying reason for the lingering romance with computationalism stems from a concern that, if computational accounts fail, there may be no other ‘games’ in town (vis a vis understanding cognition and the mind). Although this may once have been true, it is most certainly not the case any longer.

          Personally, I would urge anyone who wants to engage seriously with modern thought on cognition (and takes seriously the criticisms of computational accounts, of which there are many) to engage with the more recent “embodied, enactive accounts” of mind. For a nice recent summary perhaps check out Evan Thompson’s “Mind in Life”. Also, the sensorimotor accounts of consciousness (from Alva Noe and O’Regan) and Mark Bickhard’s Interactivism. At the very least all these begin to take seriously foundational issues of teleology and autonomy.

          In addition, our group (Nasuto, Bishop, De Meyer, Spencer, Roesch, Tanay et al) have some interesting work in progress – watch this space ..

          Cheers,
          – j. mark bishop

          Liked by 1 person

          • Hi Mark,

            I intend to write up my thoughts on your argument at some point on my own blog. Again, I do think it is a very good, even a compelling argument. But I do still see the wiggle room and I am still a computationalist, not just because it’s the only game in town but because I think it’s less absurd than its negation (I do intend to look into the other “games” you mentioned but I’m not familiar with them right now).

            You referenced David Chalmers’s paper “Does a Rock Implement Every Finite-State Automaton?” in yours, and though you mentioned some of his concerns I don’t really think you do them justice. In particular, computationalism could be interpreted not as the view that any suitable FSA is conscious but as the view that any suitable CSA (Combinatorial State Automaton) is conscious, where CSA is a term of Chalmers’s invention to describe a system such as a computer that has not only state but meaningful substates and causal connections between them. The state of a CSA is a vector and so has content, unlike the state of a FSA which is just a label.

            You say that every computer is an FSA and that is true, but it could be argued that being an FSA is not really what makes it a computer, or what makes it the case that a computation is happening. Every computer is after all also just a lump of matter with a certain mass and made of certain materials, but of course computationalists don’t usually believe that every lump of matter of the right mass and materials is conscious (DwP notwithstanding!). Treating the physical body of the computer as a lump misses the point, and perhaps treating the state as an undifferentiated lump is no better.

            (I should briefly mention that of course every CSA model has a corresponding FSA model, so every CSA is also an FSA, but physically implementing the FSA does not necessarily mean you have physically implemented the underlying CSA.)

            FSA is just a model of computation, and is not necessarily the right model for our concerns. If consciousness is about being the right sort of CSA rather than FSA, then it’s much harder to find Putnam examples in nature and panpsychism is averted. There might be an unconscious Putnam FSA that could pass the Turing Test, but it might take the action of a conscious CSA (such as an AI or a human being) to generate the state transition tables and mappings to physical states that would allow the test to be passed, and so the Turing Test would remain a plausible consciousness detector (in this case detecting the consciousness of the author of the FSA).
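The FSA/CSA contrast above can be sketched in a few lines of Python. This is only my illustration of the distinction as described here, with made-up names: an FSA state is an opaque label looked up in a table, while a CSA state is a vector whose components each have their own causal dependencies, structure that a flattening into labels erases.

```python
# FSA view: the whole machine state is one opaque label; the transition
# function is a bare lookup with no internal structure.
fsa_next = {"S0": "S1", "S1": "S2"}

# CSA view: the state is a vector of substates, and each component of
# the next state depends causally on particular components of the
# current one (here a toy shift-and-parity rule).
def csa_next(state):
    a, b, c = state
    return (b, c, (a + b) % 2)  # each substate has its own dependencies

# Flattening the CSA into an FSA erases that substate structure:
# every distinct vector just becomes a fresh, unanalysed label.
def flatten(state):
    return f"S{state}"

s = (1, 0, 1)
print(flatten(s), "->", flatten(csa_next(s)))
```

A Putnam-style counter can reproduce the sequence of `flatten`ed labels, but nothing in the counter mirrors the component-wise causal dependencies inside `csa_next`, which is roughly where Chalmers locates the wiggle room.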

            In all honesty I should note that this particular objection of Chalmers is not ultimately why I am not swayed by DwP, and so even if Chalmers is convincingly refuted I will remain unpersuaded. In fact I hope you do have a convincing refutation, because it will only help me in my efforts to show that all computationalists should be mathematical Platonists and supporters of the Mathematical Universe Hypothesis, positions which I think sidestep the DwP (and CRA) argument entirely.

            Of course, that still leaves the Penrose/Lucas argument from Gödel, but I think that has other vulnerabilities.

            Liked by 1 person

        • Hi DM and Professor Bishop,
          I very much appreciate the detailed argument layouts. I suspect I need to read them several times to make sure I’m not missing crucial details. However, after a couple of perusals, here are my current thoughts.

          If we’re patient enough, we can observe a rock (or any physical system) doing computation. But as I mentioned in the post, I don’t think all computation is conscious. Consciousness is, it seems to me, a very particular type of information processing architecture. (I’m heavily influenced here by the work of Michael Graziano.) While it’s conceivable that a rock could pass the Turing test, I don’t see it as probable in any meaningful sense. I think we’d have to wait well beyond the heat death of the universe, perhaps beyond the time when the Earth’s orbit would have decayed and it crashed into the black dwarf remnant of the sun. (Unless we heat the rock and maybe make other adjustments, but then we’re now talking about a piece of technology.)

          DM mentioned that there is room for disagreement in the practical aspects of this idea. Ultimately, to me, it all comes down to practicalities. It’s why I have disdain for things like the Chinese Room thought experiment, because once you fix the scenario so that it could happen in any reasonable time scales, the intuitive absurdities associated with it largely disappear.

          Like DM, I’m a computationalist (although not a mathematical platonist), and my computationalism only seems to get stronger as I read more neuroscience and empirical psychology. There are potentially absurd consequences of the theory, but they seem to require profoundly improbable circumstances or time scales that the universe may not ultimately permit, and many of the absurdities disappear with more careful versions.

          I’m also interested in alternatives to the computational theory, although as I mentioned in the post, I’ve grown weary of theories whose only goal seems to be preserving mystery, or to reintroduce old but perhaps cherished paradigms.

          Liked by 1 person

          • Hmm, again I think you might be missing the point a little.

            If Bishop is right, then every physical system is implementing every algorithm simultaneously all the time. Patience doesn’t enter into it. We don’t have to wait for the heat death of the universe, it’s happening right now.

            Also, nobody assumes that all computation is conscious. The computationalist view is taken to be, as you say, that only particular computations are conscious. That’s not a problem for Bishop because his argument is that any computation you like can be found in any physical system.

            Liked by 1 person

          • Totally possible I’m still missing something.

            “If Bishop is right, then every physical system is implementing every algorithm simultaneously all the time.”

            I think I need some additional justification on this point. It doesn’t seem self evidently true to me. (Which might well arise from my lack of detailed knowledge of particle physics.) I do understand that there’s a lot of physics going on between the fermions and bosons in, say, a cup of coffee, a wall of iron, or a wooden table, but I can’t say it’s clear that every algorithm is being implemented.

            Liked by 1 person

          • You don’t need physics knowledge. All you need is a system which evolves through identifiably different states without repeating.

            I tried sending you a couple of links to papers by Chalmers and Bishop but wordpress spam filters ate it.

            Like

          • Thanks. Just fished it out of the spam folder and approved it. I’ll check them out.

            Like

      • OK, I’ve been reading a paper by Chalmers responding to Putnam. Very detailed and very good. It has some plausible ways out for computationalism that Bishop doesn’t seem to fully address (although he does mention and dismiss them) in his paper.

        Chalmers:
        http://consc.net/papers/rock.html

        Bishop:
        http://www.doc.gold.ac.uk/~mas02mb/Selected%20Papers/2009%20Cognitive%20Computing.pdf

        Liked by 1 person

  4. john zande says:

    Integrated Information Theory (championed by neuroscientists Giulio Tononi and Christof Koch) slots in nicely with this, coming to the same conclusion that consciousness permeates the entire universe. The theory states that any system—organic or inorganic—that processes and integrates information experiences the world subjectively to some degree. Zircon crystals, cells, plants, computer chips, even protons, they argue, are all examples of such systems. Consciousness, Tononi and Koch assert, is integrated information, represented as Phi Φ, and the quantity—or body—of consciousness corresponds to the amount of integrated information (Φ) generated above and beyond the information simply generated by its parts. Anything with a non-zero Phi has subjective experience, and this includes subatomic particles.

    Liked by 2 people

    • I’m personally not a fan of IIT. From everything I’ve read, integration is crucial for consciousness, but not sufficient. As soon as we understand that integration by itself doesn’t cut it, the idea of conscious crystals, cells, etc, loses its justification. I think we should be suspicious of any theory of consciousness that might lead us to conclude that the tax code has awareness.

      Liked by 1 person

      • john zande says:

        LOL. Granted, but do you think you might be confusing “consciousness” and “sentience” here? I don’t think IIT is saying crystals, for example, are “sentient,” rather a strange, alien, almost unfathomable species of primitive consciousness. The way I see it is as something that endeavours to hold onto itself. Does that make sense? (And believe me, I can see how dangerously close this line is to Deepak Chopra Woo)

        Liked by 1 person

        • I can see their point (without going Deepak), but it gets back to what I asked in the post. What follows from it? What does it tell us about animal or human consciousness? If we accept it, I’d still be interested in the distinction between what we intuitively perceive as a conscious being and whatever consciousness a crystal has.

          The distinction between sentience and consciousness is one I’ve always struggled with. Can we have non-sentient consciousness or non-conscious sentience? I suppose we could if our definition of one was substantially narrower than our definition of the other, but my intuitive sense of those words see them as roughly synonymous. (Note I’m using the definition of sentience as the capacity to feel or perceive, not the science fiction one which is often used in place of “sapience”, i.e. human level intelligence. I think we can definitely have non-sapient consciousness.)

          Liked by 2 people

  5. lemarkle says:

    Hmmm, of course Integrated Information Theory ‘slots in nicely’ with the DwP reductio if one ‘bites the horn’ and accepts panpsychism .. But then, ex hypothesi, consciousness is by definition everywhere, and this ‘solution’ is not without its own problems (in the context of understanding, and developing a scientific theory of, consciousness). In effect, recourse to panpsychism simply ‘explains away’ the hard problem by philosophical sleight-of-hand that effectively concedes that everything is conscious but fails to enlighten us as to why (or how) this is so. And to me that simply caches out as just another quasi-religious explanation of mind ..

    Liked by 1 person

    • lemarkle says:

      Indeed, as I replied above, it is more scientifically coherent to reject computational explanations of consciousness (and mind). In fact, I can only assume an underlying reason for this lingering romance with computationalism in the broader community stems from a concern that, if computational accounts fail, there may be no other ‘games’ in town (vis a vis understanding cognition and the mind). Although this may once have been true, it is most certainly not the case any longer.

      As I stated above, I would urge anyone who wants to engage seriously with modern thought on cognition (and takes seriously the criticisms of computational accounts, of which there are many) to engage with the more recent “embodied, enactive accounts” of mind. For a nice recent summary perhaps check out Evan Thompson’s “Mind in Life”. Also, the sensorimotor accounts of consciousness (from Alva Noe and O’Regan) and Mark Bickhard’s Interactivism. At the very least all these begin to take seriously foundational issues of teleology and autonomy.

      Liked by 2 people

      • Do you mean that teleology is taken seriously now? Or dismissed seriously? (I haven’t heard that word in quite a while. Wrote my undergrad thesis on it, but have since avoided using it out of fear of scaring people away.)

        Like

        • lemarkle says:

          Dear rung2diotimasladder; I certainly take teleology seriously; it is not clear to me that computationalists do, as I believe it can only be genuinely grounded by taking embodiment seriously. In this world, matter matters..

          Liked by 1 person

  6. Some additional links on this that amanimal emailed me.

    The Incomputable: dancing with pixies
    https://prezi.com/sdanwlky820i/the-incomputable-dancing-with-pixies/

    A Cognitive Computation Fallacy? Cognition, Computations
    and Panpsychism, Bishop 2009
    http://www.doc.gold.ac.uk/~mas02mb/Selected%20Papers/2009%20Cognitive%20Computing.pdf

    Précis of Mind in Life: Biology, Phenomenology, and the Sciences of Mind, Thompson 2011

    Mind in Life: Biology, Phenomenology, and the Sciences of Mind – preview
    http://lchc.ucsd.edu/MCA/Mail/xmcamail.2012_03.dir/pdf3okBxYPBXw.pdf

    … and I just came across this (which you may be familiar with, but just in case):

    Shall We Tango? No, but Thanks for Asking, Dennett 2011
    https://ase.tufts.edu/cogstud/dennett/papers/shallwetango.pdf

    Liked by 2 people

  7. amanimal says:

    Oops! 🙂

    Précis of Mind in Life: Biology, Phenomenology, and the Sciences of Mind, Thompson 2011 https://evanthompsondotme.files.wordpress.com/2012/11/thompson_precis_jcs_author_proof.pdf

    Liked by 1 person

  8. Discovered this Stanford Encyclopedia of Philosophy article on computation in physical systems.
    http://plato.stanford.edu/entries/computation-physicalsystems/

    In particular, this portion gives a name to Putnam’s proposition: unlimited pancomputationalism (rocks do computation and implement every algorithm ever, including minds if they are computation). It also summarizes (briefly) the arguments against it.
    http://plato.stanford.edu/entries/computation-physicalsystems/#UnlPan

    This section is followed by a discussion on limited pancomputationalism (rocks do computation, but not every conceivable algorithm), which I find far more plausible.

    The section after that is one DM may find interesting, if he hasn’t seen this article already. It discusses the universe as a computing system.

    Like

  9. lemarkle says:

    Hi DM (et al).

    Just to say that I am very much enjoying the continued discussion of these ideas ..

    As it happens, in earlier work I did endeavour to reply to Chalmers [and Chrisley] (cf. “Bishop, J.M., (2002), Counterfactuals Can’t Count: a rejoinder to David Chalmers, Consciousness & Cognition, 11:4, pp. 642-652.” https://www.dropbox.com/s/qehk8a6fumlb80m/article.pdf?dl=0) and (“Bishop, J.M. (2009), Why robots can’t feel pain, Mind and Machines 19:4, pp. 507-516.” https://www.dropbox.com/s/ounxit785jn61d4/2009%20Mind%20and%20Machine.pdf?dl=0); however, to be frank, I have never been fully satisfied that either of these attempts elegantly nails the argument ..

    That said – current work pressure(s) notwithstanding – I am slowly putting together what I think (at least) is a much simpler and yet much more robust response to Chalmers (and Chrisley), and if anyone (DM?) emails to remind me, I will send a draft for comment when the work is finished.

    In this new approach I contrast, in what I hope is a robust and novel way, (a) a putative computational, counterfactually-sensitive conscious robot processing fixed input (and investigate the coherence of the claim that it is still conscious etc.) with (b) a digital counter + mapping replicating the state transitions of any input-less FSA; as it isn’t clear to me that there is a substantive difference, I maintain that the computationalist still has serious problems to overcome ..

    Cheers,
    – mark

    ps. I am, of course, familiar with the Stanford review – and the works cited therein – btw 😉

    pps. Sensu stricto, DwP supports a very limited form of pancomputationalism, as the argument simply asserts that – over a finite time period and with a suitable mapping – every open physical system can be made to implement any FSA with fixed input ..

    ppps. It is central to my position that all computation is observer relative, i.e. all computation is relative to a mapping that defines the relationship between the physical states of the system and the computational states. For example:

    TTL:
    LOW (FALSE) as (0 V to 0.8 V);
    HIGH (TRUE) as (2 V to 5 V);

    ECL:
    LOW (FALSE) as (−5.2 V to −1.4 V);
    HIGH (TRUE) as (−1.2 V to 0 V);

    CMOS (VDD = supply voltage):
    LOW (FALSE) as (0 V to VDD/2);
    HIGH (TRUE) as (VDD/2 to VDD).
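    The observer-relativity of the mapping can be sketched in a few lines (an illustration only, using the nominal TTL and ECL bands listed above; the function name is invented):

```python
def interpret(voltage, family):
    """Read a physical voltage as a logical value under a chosen
    convention. The physics is identical; only the mapping differs.
    Thresholds are the nominal TTL/ECL bands listed above."""
    bands = {
        "TTL": {"LOW": (0.0, 0.8), "HIGH": (2.0, 5.0)},
        "ECL": {"LOW": (-5.2, -1.4), "HIGH": (-1.2, 0.0)},
    }
    for level, (lo, hi) in bands[family].items():
        if lo <= voltage <= hi:
            return level == "HIGH"  # True for HIGH, False for LOW
    raise ValueError("voltage lies outside the defined logic bands")

# The same physical state (-1.0 V) reads as TRUE under the ECL
# mapping but falls in no band at all under TTL.
print(interpret(-1.0, "ECL"))  # True
```

    Nothing in the voltage itself determines which, if any, logical value it carries; that is fixed only by the chosen mapping.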

    Liked by 1 person

    • Hi Mark,

      I look forward to your forthcoming paper and will read those shared by dropbox.

      I’m quite sympathetic to computation being observer relative, but I’m still on the fence. I can see arguments each way.

      Your example of different digital logic standards could perhaps be answered by identifying thresholds by careful attention to state transitions. While you are 100% correct that it is possible to interpret an AND gate as an OR gate or vice versa, depending on how you map the inputs and outputs to true and false, there remains an underlying mathematical isomorphism of behaviour. AND and OR share the same kind of “input/output contour” (perhaps an unhelpful expression, but that’s how I think of it) so at least we can tell that the gate is AND/OR and not XOR/XNOR or NAND/NOR.

      The way I picture it (and the reason I call it “input/output contour”) is to think of an image of pure black and white, say a black star on a white background. Now invert it, so we have a white star on a black background. Either way it’s still a star on a plain background. If we ran a photoshop filter “find edges” or “trace contour” or whatever, we would end up with pretty much the same result: the outline of a star. It seems to me that it is this contour that is the heart of the computation, and so whether particular inputs or outputs are true or false is irrelevant. What matters is the causal chain being enacted, and that is the same whether we interpret AND gates as OR gates or vice versa. In other words, “true” and “false” are just labels for different types of signals in a digital computation and we shouldn’t regard them as having inherent meaning outside of their causal roles.
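      The AND/OR relabelling can be made concrete with a toy sketch (the 0 V / 5 V levels and names are invented for illustration): one and the same "physical" gate reads as AND under one labelling of the voltage levels and as OR under the inverted labelling, while its input/output contour stays fixed.

```python
from itertools import product

# A "physical" gate: its output voltage is simply the lower of its
# two input voltages (0 V or 5 V). Which Boolean function it computes
# depends entirely on how the voltage levels are labelled.
def gate(v1, v2):
    return min(v1, v2)

def truth_table(labels):
    """Read the gate's behaviour through a voltage-to-Boolean labelling."""
    return {(labels[a], labels[b]): labels[gate(a, b)]
            for a, b in product([0, 5], repeat=2)}

as_and = truth_table({0: False, 5: True})  # 5 V means TRUE: reads as AND
as_or = truth_table({0: True, 5: False})   # inverted labels: reads as OR

print(as_and[(True, False)], as_or[(True, False)])  # False True
```

      Either way the underlying behaviour (take the lower voltage) is unchanged, which is the sense in which AND and OR share one contour that XOR does not.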

      Liked by 1 person

    • Hi Mark,

      Regarding the first paper, “Counter-factuals cannot count”, and the fading qualia argument.

      Very good, compelling argument and it almost persuaded me.

      But I can see a counter-argument.

      Unlike replacing neurons with electronics one by one (the usual fading qualia argument), your version here makes the transition from conscious to automaton in a time-sensitive way. For an intermediate in your version, some state transitions fully implement the original algorithm (supporting counter-factuals), and some do not, essentially scripting in advance what state transition will follow.

      Now, computationalists already need to reconcile their intuitions with what happens when a conscious physical computation is suspended or resumed. Physicalist computationalists presumably imagine consciousness blinking in and out of existence in such cases, albeit in a fashion which is undetectable to the consciousness itself. One must also suppose that if the state of the computation were altered while paused, the consciousness would be unaware that its state had been manipulated. In such a way false memories could be inserted, for example.

      Your intermediate where some but not all counter-factual state transitions have been deleted seems to me to be similar to this case. For every fully implemented state transition, the consciousness is realised, but for every scripted state transition the consciousness is not realised. As such the consciousness blinks in and out of existence just as it would if it were paused and resumed repeatedly with manual alterations of its internal state while paused. While the consciousness is realised, it is in exactly the state it would have been in had the state transitions not been deleted, meaning it remembers thoughts it had even while it was not conscious and so is not aware of anything fishy going on.

      So it’s not that qualia fade, exactly, it’s that they flicker in and out of existence to match how the state transitions have been tampered with.

      I offer this counter-argument but I don’t particularly endorse it. I would answer these criticisms instead by appealing to the continued Platonic existence of the original algorithm (to which I attribute consciousness) even as its physical implementation is messed with.

      Liked by 1 person

    • Hi Mark,

      Regarding the second paper, as before, some very nice arguments but I don’t think they’re quite water-tight.

      The first argument concerns a number of robots A, B, C and D which either contingently or necessarily perceive red, used to explore counterfactuals in the light of optimising compilers, which often delete conditionals that can be determined to be necessarily true or false.

      Let’s concentrate on the most extreme example Robot D, where the latch ostensibly storing the value of the sensor is designed to always report red.

      You don’t really distinguish between two important cases. One case is where it is hardcoded to report red in the software being compiled by the compiler, and one case where it is hardwired to report red through some other mechanism (e.g. the layout of the physical circuit).

      In the first case, the output of an optimising compiler is not the same as that of a simple compiler. It is possible that the optimising compiler produces unconscious software but the simple compiler produces conscious software. It’s not hard to see this in the case of radical, extreme optimisation. For any candidate conscious software with fixed inputs, the optimising compiler could determine all the outputs in advance and replace the complex code with a simple script to produce those outputs. In this case it has drastically modified the code and the output software certainly would not be conscious.

      On the other hand, in order to determine the outputs, the compiler would effectively have to run the original algorithm itself, and so one could argue that consciousness exists during the course of the compilation process and the output of the compiler is just a recording of what that conscious mind does (and so bears the same relationship to the original algorithm or compilation process as a video recording does to a conscious person).

      In the case of moderate or partial optimisation one might imagine that parts of the conscious process are shared between the compiler and the running compiled code. This is certainly weird (as it means that the conscious experience is spread out temporally and not in chronological order), but it perhaps yields a picture not unlike the one I sketched in my previous comment, where a consciousness blinks in and out of existence where parts of the execution rely on the output of computations carried out earlier.

      Since this kind of argument does not appear to work in this extreme case (a compiler predicting all the outputs and behaviour of an algorithm beforehand), I doubt it works in the simple case either (a compiler replacing a single conditional branch).
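      The radical-optimisation scenario above can be illustrated with a toy sketch (the stand-in program and all names are invented): given fixed inputs, the "compiler" runs the program once at compile time and emits only a playback of its outputs.

```python
def program(xs):
    """Stand-in for the candidate conscious algorithm: running totals."""
    total = 0
    out = []
    for x in xs:
        total += x
        out.append(total)
    return out

FIXED_INPUT = [3, 1, 4]

def optimise(prog, fixed_input):
    """Radical optimisation: with the input known in advance, execute
    the program once and replace it with a recording of its outputs."""
    recording = prog(fixed_input)       # the compiler runs it once...
    return lambda _ignored: recording   # ...and ships back the playback

compiled = optimise(program, FIXED_INPUT)
print(compiled(FIXED_INPUT))  # [3, 4, 8]
```

      The compiled artifact reproduces the outputs while performing none of the original computation; whatever computation there was happened inside the "compiler".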

      In the other case, where the output of the compiler is the same but the “red” sensory data is hardwired in, we can, it seems to me, draw a conceptual border around the perceptual software to separate it from the hardcoded sensory data. While the system as a whole is incapable of perceiving anything other than red, the software within the border could (without modification) perceive other counterfactual sensory inputs, and so it makes sense to consider those counterfactuals. It remains tenable then to suggest that the software within the border could be conscious even though it is embedded in a system where inputs are fixed.

      This paper also presents the argument about computation being observer relative with reference to AND and OR gates. I answered that argument above so I won’t go over it again.

      Liked by 1 person

    • Hi Mark,
      I’m grateful for your time and interest. I’ve been fascinated by you and DM’s exchanges. I’m not going to get into the details to the extent DM has. While I’m a career programmer, I find much of this discussion right on the edge of, if not beyond my comprehension. However, I do have a few high level remarks.

      As a computationalist, I think I, and any computationalist, have to bite a philosophical bullet: if the algorithm that implements consciousness can exist in substrates other than a biological neural one, then it can exist in substrates we might not normally think of as computational systems. However, can and does are two very different things.

      It should be noted that if rocks have the algorithm embedded somewhere in them, that in and of itself doesn’t falsify the computational theory of mind, unless we expect rocks to demonstrate their consciousness to us in some manner. Clearly rocks show no sign of being conscious. But if we’re imagining that the consciousness is embedded in its own environment (in my view, the only way this is a meaningful assertion), we wouldn’t expect it to show us any signs.

      Such a consciousness might be unaware that it’s in its own solipsistic universe. (DM, this is why I made the comparison with Boltzmann brains initially. In both cases, we can never be certain that we’re not ourselves one of these.)

      But the assertion that a rock is implementing Wordstar or Autocad or any other non-trivial algorithm is, I think, an extraordinary one. For me to buy it, I need extraordinary evidence. Certainly with the appropriate mapping, they are, but then with the appropriate mapping, any complex system can be anything of similar or lesser complexity, but I perceive that we’re then doing a lot of work with the mapping, so much work that the mapping is now part of the implementation, which means to me that talk of the original pattern being Wordstar or whatever isn’t very meaningful.

      Is computation observer dependent? If that computation requires an observer in order to translate its effects into the wider world, then yes. But imagine a computer that is the central controller of a robot. It receives input from the robot’s sensors, performs computations, and produces output in robotic movement. The computation happening at the heart of the control unit is observer dependent, but it is harnessed to an I/O framework. Together with the I/O framework, the result is an objective force in the world. It doesn’t matter what voltage ranges we’ve engineered for its transistor states. What matters is that the algorithms translate objectively into real world manifestations. (You could argue that the robotic system’s actions are still relative to the physical world, but then what observable thing isn’t?)

      So, my reaction to all of this is at two layers. The aspiring sci-fi author in me sees very cool story possibilities. But the skeptic says that just because something can exist doesn’t mean its existence has anything above an infinitesimal probability. And even if it does, like Einstein’s “spooky action at a distance” objection to quantum mechanics, it might simply be an absurd consequence that is real.

      All that said, I’m not at all dug in on any of this. If presented with evidence or airtight logic, I would definitely change my mind.

      Like

      • Hi Mike,

        As a computationalist, I think I, and any computationalist, have to bite a philosophical bullet

        Commendable, but biting this particular bullet leads to a result so bizarre that it’s probably better to abandon computationalism (or embrace something radical like the MUH as a solution). If all minds are instantiated everywhere, there’s really no reason to believe that you are real (I get your Boltzmann connection now). And if there’s no reason to believe that you are real, you might as well just go with the MUH: Bishop’s conclusion pretty much entails the MUH anyway, albeit with an unnecessary real physical world about which we can say or know precisely nothing, but on which all the mathematical objects of the MUH can supervene as computations.

        However, can and does are two very different things.

        Which is why Putnam and Bishop put forward a pretty meticulous argument for “does”.

        that in and of itself doesn’t falsify the computational theory of mind

        Granted. But it does make it absurd. Too absurd even for me! The MUH is practically common sense by comparison.

        But the assertion that a rock is implementing Wordstar or Autocad or any other non-trivial algorithm is, I think, an extraordinary one.

        Yes, if and only if you are a computationalist committed to the view that there is an objective fact of the matter regarding which computations particular physical systems are implementing. If you are like Searle, Bishop and I, and sympathetic to the view that the computation of a physical system is entirely in the mind of the beholder, then the claim is not extraordinary at all, merely interesting. I’m on the fence about whether the claim is true, but it certainly seems plausible, prima facie.

        To be clear, it would of course be extraordinary to claim that a wall was computing Autocad specifically. That would be an incredible coincidence. But if it’s computing all algorithms then it is not so surprising.

        For me to buy it, I need extraordinary evidence.

        I don’t think you can have evidence. What would that look like? Instead you have a pretty convincing argument from Putnam and Bishop. I get that you feel you don’t fully understand all the issues, but I don’t think that’s enough to dismiss it. Unless you have iron-clad reasons for holding to computationalism, perhaps the existence of this argument should be cause for doubt, even if you don’t fully get it.

        The argument is put in simplest terms something like the following.

        1) A conscious experience is just a certain kind of physical computation running on some input.
        2) A physical computation is just a physical system evolving in accordance with a mapping to a specific logical finite state automaton.
        3) If the input for a finite state automaton is known in advance, it can be incorporated into a derived inputless logical finite state automaton (equivalent to scripting the input to the algorithm as part of a wrapper around that algorithm).
        4) Mappings exist for any open physical system to any logical finite state machine.
        5) The input for a conscious computation can be known in advance (e.g. by running the computation in advance so as to record its input as it interacts with its environment).
        6) From (1) and (2), there is a finite state automaton that completely captures the computation of any conscious experience.
        7) From (6), (5) and (3), there is an inputless finite state automaton that completely captures the computation of any conscious experience.
        8) From (7) and (4), there is a mapping for any open physical system to an inputless finite state automaton that completely captures the computation of any conscious experience.
        9) From (2) and (8) any open physical system is physically computing the finite state automaton for all possible conscious experiences.
        10) From (1) and (9), any open physical system is experiencing all possible conscious experiences.
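        Step (3), folding a known input script into the machine to obtain an inputless automaton, can be sketched like this (a toy two-state machine; all names are illustrative, not from Putnam's or Bishop's papers):

```python
def make_inputless(delta, start, script):
    """delta maps (state, symbol) -> state; script is the fixed input
    tape. The derived automaton's states are (state, position) pairs,
    and its transition function takes no input at all."""
    def step(pair):
        state, i = pair
        if i >= len(script):
            return pair  # input exhausted: halt in place
        return (delta[(state, script[i])], i + 1)
    return (start, 0), step

# A toy 2-state machine that flips on symbol 'a' and stays on 'b'.
delta = {("s0", "a"): "s1", ("s0", "b"): "s0",
         ("s1", "a"): "s0", ("s1", "b"): "s1"}

pair, step = make_inputless(delta, "s0", "aab")
trace = [pair]
for _ in range(3):
    pair = step(pair)
    trace.append(pair)
print(trace)  # every transition is now determined with no input
```

        Once the script is folded in, the state succession is fixed in advance, which is exactly what allows step (4)'s mapping to any open physical system.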

        For me, Bishop and Searle, the weakness is (1). For Bishop and Searle, computationalism is simply false. For me, Platonism comes into it and the physical side is not so important. But that’s a fringe position. More mainstream criticism would focus on (2) and (4), (2) because there may be more to computation than implementing a FSA and (4) because counterfactuals muddy the water of whether you can really say that a physical system is fully implementing a given FSA.

        but I perceive that we’re then doing a lot of work with the mapping,

        The mapping only needs to be instantiated if you want to do input/output. The algorithm should be happening regardless without any physical mapping. The mapping only exists in the abstract — it must exist, but only Platonically, so if you need things to be physical to do the work then the mapping doesn’t do anything.

        If that computation requires an observer in order to translate its effects into the wider world, then yes

        Right. So consider that possibility rather than the robot. The computationalist would assume that a simulated person in a simulated environment would be conscious even with no input/output with the outside world. But whether such a computation is taking place is observer dependent.

        Like

        • Hi DM,
          “Commendable, but biting this particular bullet leads to a result so bizarre that it’s probably better to abandon computationalism (or embrace something radical like the MUH as a solution).”
          Computationalism (broadly construed) is attractive because it provides an explanatory framework for the results of neuroscience and psychology research. In that sense, it plays a similar role to natural selection in biology. The operation of neurons and synapses makes sense within the assumptions of the computational theory. Now, at some point, data may come in where computationalism may start to fail as an explanatory framework, where some new paradigm would be needed. At that point, I would feel justified in ditching it for a new theory that would have at least as much explanatory power as it does.

          But ditching it because of a particular line of logic showing an absurd result? I’m much more inclined to conclude that there’s an error in the logic, a hidden assumption in the premises, etc. At least until I see evidence that compels me to accept it.

          “I don’t think you can have evidence. What would that look like?”
          Isolation and observation of whatever pattern is implementing an algorithm. If every algorithm is being implemented, show me a physical example. Any known non-trivial algorithm in a natural object would do. (Other, that is, than a physical process that resembles a post-facto computer model of that process.)

          “The computationalist would assume that a simulated person in a simulated environment would be conscious even with no input/output with the outside world. But whether such a computation is taking place is observer dependent.”
          Actually I agree. And if it’s a completely self contained simulation, such as a simulated universe, whether the entities in it are conscious would be largely a matter of perspective. This has always been a matter of concern with this line of thought. What do we owe simulated entities? Does it change if they are a copy of a physical person? Why, or why not? I fear there will not be easy answers.

          Reality has shown no obligation to avoid muddying our intuitive boundaries. It’s already mucked up our once cherished divisions between humans and animals, between life and non-life, or between waves and particles. I’m not expecting consciousness to fare any better.

          Like

          • Hi Mike,

            I agree that computationalism is attractive. But in the face of a conclusion like Bishop’s, it’s really not so attractive any more. If Bishop were right, I think it would constitute a failure of computationalism as an explanatory framework.

            I’m much more inclined to conclude that there’s an error in the logic

            That’s possible. But to conclude that it must be the case without actually finding that error is perhaps a little narrow-minded?

            Of course I have an agenda here. I want to push for Platonism. If there were a choice between ditching computationalism and embracing Platonism, which way would you go?

            Isolation and observation of whatever pattern is implementing an algorithm. If every algorithm is being implemented, show me a physical example.

            The wall behind your desk is implementing the Wordstar program as it opens and renders the first page of “War and Peace”. If you divide up the state of the Wordstar program into discrete chunks as it does this task, you can assign each successive logical state to each successive unique state of the wall behind your desk (this succession is continuous but we can divide it up into discrete slices however we like). The successive physical states in the wall are causally linked (each state causes the next state) just as the successive states in a computer are causally linked. For this reason it’s hard to think of an objective reason why we can say a computer is implementing the algorithm but your wall isn’t.
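            The kind of after-the-fact mapping described here can be sketched in miniature (all state names are invented; the point is only that any one-to-one pairing of successive physical states with successive logical states counts as a "mapping"):

```python
# Stand-ins for successive, always-unique microstates of the wall
# and for successive logical states of a program run.
wall_states = ["w0", "w1", "w2", "w3"]
program_run = ["boot", "open_file", "render", "idle"]

# The mapping is just this dictionary. Under it, the wall "implements"
# the run: each physical state maps to the right logical state at the
# right moment, and the physical succession is causally linked.
mapping = dict(zip(wall_states, program_run))
print([mapping[w] for w in wall_states])
```

            Note that the mapping is built from the already-known succession of wall states, which is why critics say it does all the work post hoc; that is the "illegitimate amount of work" objection raised below.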

            whether the entities in it are conscious would be largely a matter of perspective.

            Well, this is controversial. Usually, it is assumed that it is not a matter of perspective whether an entity is fully conscious or a zombie. The only perspective that really matters is that of the entity itself. Either that entity has a perspective or it doesn’t.

            What do we owe simulated entities? Does it change if they are a copy of a physical person? Why, or why not? I fear there will not be easy answers.

            I agree.

            Reality has shown no obligation to avoid muddying our intuitive boundaries.

            Agreed. And I appeal to this kind of argument to defend the MUH. But if Bishop’s conclusion is right, then all it would take for an infinite multiverse of possibilities to exist is a single trivially simple physical universe that has to do nothing but change state without repeating forever. A universe where some value increases monotonically forever, for instance. It seems we ought to conclude from this that our universe probably is just one of these ghost computations, and the real physical universe is probably just a trivial monotonically increasing value (or equivalent). But it seems far stranger to me to suppose that this is the case rather than just to embrace Platonism and do away with the unnecessary physical universe altogether. It’s much simpler, it’s much more elegant and it’s more in line with Occam’s razor.

            Like

          • Good morning DM (or good afternoon in your case),

            “But in the face of a conclusion like Bishop’s, it’s really not so attractive any more.”
            You’re okay accepting the consequences of the multiple worlds interpretation of quantum mechanics, but this bothers you? Aren’t they somewhat similar in their absurdity?

            “But to conclude that it must be the case without actually finding that error is perhaps a little narrow-minded?”
            I don’t consider it a conclusion, but I do see it as a distinct possibility, particularly in the complete absence of any signs these pixie minds exist. Empirical evidence, or the lack of it, matters a great deal to me.

            “If there were a choice between ditching computationalism and embracing Platonism, which way would you go?”
            I don’t see myself near that particular crossroad yet, but if I did get there and computationalism continued to have the strengths it has, I’d have no problem accepting Platonism. (The only reason I don’t currently embrace it is that I can’t see any requirement for it yet.)

            “For this reason it’s hard to think of an objective reason why we can say a computer is implementing the algorithm but your wall isn’t.”
            I guess I need someone to put this in rubber-meets-the-road terms. Take, say, the first ten instructions of Wordstar (or any well known algorithm) and demonstrate how the molecules, atoms, and/or subatomic particles of the various silicate compounds in a rock could implement each instruction in sequence. All the literature on unlimited pancomputationalism seems to simply assert this or accept it uncontested. Maybe it’s blazingly obvious to everyone else how this would work, but I fear I need the dots connected.

            “Well, this is controversial.”
            Absolutely, but I’ve always seen it as a consequence of computationalism, and I suspect it’s one of the reasons people resist it so much.

            “But it seems far stranger to me to suppose that this is the case rather than just to embrace Platonism and do away with the unnecessary physical universe altogether.”
            I guess I see them as equally fantastic. My view is that reality beyond our observations is probably unfathomably strange, and will probably defeat our attempts to predict that strangeness.

            Like

          • Hi Mike,

            You’re okay accepting the consequences of the multiple worlds interpretation of quantum mechanics, but this bothers you? Aren’t they somewhat similar in their absurdity?

            I think it’s absurd to think that every physical object is host to every possible mental experience. True, I think every possible mental experience exists, I just think it’s absurd to locate these experiences physically in our walls, such that there is one copy of me in this wall and a distinct though identical copy of me in the other wall.

            Rather I think each mental experience exists in its own right. Where mental experiences are identical I don’t think there is a distinction to be made. MWI/QM is just a specific case of this kind of thinking. All branching universes exist. I just don’t think they all exist in my cup of coffee.

            Take, say, the first ten instructions of Wordstar (or any well known algorithm) and demonstrate how the molecules, atoms, and/or subatomic particles of the various silicate compounds in a rock could implement each instruction in sequence.

            Instructions are kind of a high level abstract concept. They are features of the algorithm rather than objective features of a physical implementation. All we really have in a physical implementation is a sequence of states. State 1 causes state 2 causes state 3 and so on. You can find that sequence in both a computer running WordStar and in a wall. The instructions are there to tell a compiler how to produce software (a sequence of bits) that will, when stored in a computer’s memory and executed, produce a desirable sequence of states such that the output is intelligible to humans. But all that is objectively happening without the gloss of human interpretation is that sequence of states. The difference between the sequence of states in a computer and in a wall is really only that input/output is straightforward in the former case and hopelessly difficult in the latter (more difficult indeed than just performing the computation by hand).

            The problem is that it’s just really really hard to define objective criteria by which we can say a computer is executing a certain algorithm but natural systems are not. Think how difficult it would be to build a mechanical algorithm detector, which could scan any physical system in as much detail as you like and tell you if it was implementing a particular algorithm. Indeed, I think this task is not only difficult, I suspect it is intrinsically impossible, because there is no fact of the matter about whether a given physical system is implementing a particular algorithm. There are only degrees of complexity of mapping. But that must mean that there is no fact of the matter about whether it is conscious or not, which is a problem for computationalism.

            Like

          • By instructions, I meant the raw machine language instructions, the machine states, mapped appropriately to whatever substrate we were talking about. I’m even open to considering a sequence that wouldn’t be a one to one mapping, providing that they just had the same truth table transformations.

            “The difference between the sequence of states in a computer and in a wall is really only that input/output is straightforward in the former case and hopelessly difficult in the latter (more difficult indeed than just performing the computation by hand).”
            Again, this feels like we’re putting a lot of work on the mapping. When the mapping becomes more complex than the original algorithm, I feel safe in saying we’re putting an illegitimate amount of work on it. With a sophisticated enough mapping, anything can be anything else, but I’m not sure what we’re then proving.

            “But that must mean that there is no fact of the matter about whether it is conscious or not, which is a problem for computationalism.”
            I said in a post early in this blog’s history that consciousness lies in the eyes of the beholder. I don’t see it as a problem for computationalism so much as simply a stark fact, much like the divide between the objective properties of a system and the experience of being that system. It isn’t so much a hard problem as simply a fact. Profound facts in both cases. (From your comments, I’ve always assumed this was something you were aware of. When you said consciousness is what it feels like to be a human mind, that’s what I thought you meant.)

            Like

          • Hi Mike,

            By instructions, I meant the raw machine language instructions,

            I think even the raw machine instructions are in the same boat. Instructions are components of machine language, which is an abstract model of what a computer is doing. There is no such thing really as a physical machine instruction or machine language.

            the machine states, mapped appropriately to whatever substrate we were talking about.

            Since you mention states I just want to clarify something. In the FSA model, to be clear, the machine is only in one state at a time, so there isn’t really any modelling of instructions per se. So all the state of the machine, all the registers, all that data and detail, is just one state. And the FSA model says that for an inputless FSA in a given state, the next state is just some other state, again without content, identified by a label. It might go from state alpha to state bravo, for example. There is no concept of a machine instruction (apart from maybe simple branch statements which define what state to go to next), but the execution of any machine instruction is supposedly equivalent to the evolution of the FSA to a new state.

            So in the view of Putnam and Bishop, while there is no explicit modelling of machine instructions, there is a modelling of the changes in state caused by machine instructions. If a machine instruction is just that which causes state to change, then this should be an acceptable move. The equivalent of an implementation of the branch statement in a physical system would just be the laws of physics causing the state to change.

            The thing is, when a computer executes a machine instruction, we can say the change in state was caused by that machine instruction, but that is just an interpretation. What is actually happening is just that a physical system is evolving according to the laws of physics. And so it is with a wall. Each state is caused by the evolution of previous states according to the laws of physics. There is no objective fact of the matter about whether this evolution corresponds to the execution of a machine instruction or is just the normal natural evolution of that system.

            When the mapping becomes more complex than the original algorithm, I feel safe in saying we’re putting an illegitimate amount of work on it.

            But, again, the mapping does not have to be instantiated anywhere. It is enough that it exists in the abstract. This leaves us with a computation that has no input or output, but I/O should not be central to consciousness. This is what Bishop is getting at when he proposes replacing the sensory input of a robot with a prerecorded feed. The fact that it’s not getting any input from outside the system should not render it unconscious.

            I said in a post early in this blog’s history that consciousness lies in the eyes of the beholder.

            And I agree. Your consciousness is only real to you. I get that. But there’s still a problem, because if we want to agree that “you” really exist, that you have a perspective to talk about, then Bishop’s argument suggests that we need to acknowledge the existence of all his pixies too.

            If we’re happy to say “mu” to the question of whether you really do objectively exist then that leads us straight to the MUH. Because there are possible worlds which have observers in them who perceive themselves to exist. If the only sense in which anybody exists is in the sense that they exist from their own perspective, then all those observers exist as much as we do and the MUH must be true.

            But if you want to hold the MUH in doubt, that means you are suggesting that you really do exist in some objective sense that these other fictional mathematical observers in fictional universes do not. And then you run into problems because you can’t really give a good defence of how come it is you objectively exist and the pixies do not.

            Like

          • Hi DM,
            I understand and agree that a processor is simply a physical system going through states according to the laws of physics. Just as a table is a table, rather than just chopped-up dead tree parts, only because humans interpret it as such, a processor is only a processor because we interpret it as such.

            And yet I think it matters how much interpretation is necessary. It seems perverse to say that a cloud or a rock is a processor if we can just muster an aggressive enough interpretation. The computational theory of mind is attractive because we can speak of logic circuits in the operations of neurons and synapses without an overly elaborate interpretation. If seeing those logic circuits did require an aggressive mapping, to the extent rocks and walls do, I think the theory would lose its scientific utility, and cognitive neuroscientists and psychologists would look for some other paradigm by which to understand their data.

            One point that is confusing to me. How does the MUH dispel the pixies? It seems like it would embrace them. When I first grasped Putnam, Searle, and Bishop’s premise, I (mistakenly) envisaged you embracing the pixies as just another aspect of what the MUH is about. But you seem to see it as a cure. I’m failing to connect the dots somewhere.

            Like

          • Hi Mike,

            The complexity of the necessary mapping certainly does undermine the utility of viewing a wall as a computer. To view a wall as a computer is (for most intents and purposes) ridiculously perverse. But if Bishop’s argument is right, there are no objective criteria, no hard line between systems we can say are instantiating a computation and those that are not. So there’s no fact of the matter.

            But if what it takes for a mind to exist is for a computation to be instantiated, and if minds really do exist, (and pixies do not) then there’s a problem, because we have a fact of the matter about whether minds exist but no fact of the matter about whether a computation is instantiated. So instantiating a computation cannot be what causes a mind to exist.

            If, on the other hand, there is no fact of the matter on whether minds exist then we’re in MUH territory, because on the MUH minds exist Platonically as mathematical objects, which is to say it is merely a matter of convention whether we talk of them as existing or not.

            How does the MUH dispel the pixies? It seems like it would embrace them

            A little of both. The MUH predicts that all possible conscious observers exist as mathematical objects. DwP (Bishop’s “Dancing with Pixies” argument) predicts that all physical objects are instantiating all conscious observers. I find the latter absurd and the former parsimonious. On the MUH, those pixies you are projecting onto the walls are mathematical patterns and so exist Platonically regardless of whether they are physically instantiated. In fact, since we ourselves and our universe are just mathematical objects, the idea of objective physical existence becomes something of a category error. We are “pixies” ourselves, and all persons are “pixies”. There is no real physical world. The concept is meaningless.

            So the MUH dissolves the DwP problem by locating those “pixies” in other universes rather than in our walls, while also explaining the nature of our existence, why it is that anything at all exists and why the universe is fine tuned for conscious life. And it solves a few other problems in philosophy of mind to boot.

            As an atheist, the fact that it can explain existence without appeal to a creator is pretty neat. I think this is the one major problem facing the atheist world view. There can be no scientific account of existence because all science must ultimately appeal to the laws of physics, which are just brute fact without explanation.

            Like

          • Hi DM,
            “So instantiating a computation cannot be what causes a mind to exist.”
            I don’t see it as a dilemma since it takes a particular data processing architecture. I do buy that rocks do a type of computation, but not that they’re running WordStar, AutoCAD, or conscious minds. Although I can see the argument, I think the interpretive load necessary to see rocks as running those things removes its legitimacy, or at least its vitality (for me at least). (But it took our discussion for me to be able to articulate that. Thank you!)

            I know you disagree. Until we have new arguments or data to consider, we might just need to recognize it as a fundamental disagreement on how we see this.

            Thanks for the MUH explanation. I’m still not clear how, if we accept the DwP argument, it removes pixies from our midst, but I understand how the ideas are compatible with each other.

            My immediate question about the MUH explaining existence is, what explains mathematical reality? What response can it give beyond those theologians give for God (or whatever they consider the ultimate reality to be)?

            Like

  10. Lyndon says:

    Good overview. I will avoid the main focus given that I have always shrugged at the panpsychism thing.

    I think the biological components in the end can be teased apart, and that they do not create something special about animal or human consciousness. For instance, some of the homeostatic processes that consciousness consistently recognizes (major bodily disruptions, for instance) are things that are present in more basic organisms that most see as being as unconscious as robots. Or, the fact that we program a computer system to constantly back itself up, to constantly reproduce, does not grant it or even gear it towards consciousness, and this looks likely to be the case for a while. Though maybe such a system recognizing its needs and creating goals to achieve them in the future will lead to robot consciousness of some sort.

    Your biological drive, which seems close to a will to life or will to power, is in the end brought about by rather rudimentary evolutionary structuring, and it seems in ways that are usually mechanistic and do not also involve consciousness. When a rudimentary awareness arrives in animals and a robust awareness arrives in language-speaking humans, those drives are still there. Those drives and system structures are recognized by awareness and are big parts of those lives, our lives.

    In other words, it makes sense that all things we have seen as conscious have these biological structures and those beings are aware of such structures, and further act on their awareness of such features.

    On the side, if we are going eliminativist here, then all that matters is attribution, and in this case, what specifically humans are likely to attribute consciousness to. I think there is probably more there. If an entity was reliably complex, self-modeling, and had consistent expressible goals, most humans would eventually grant it to be conscious even if it lacked any other biological or human standards.

    Future question: Can we think of a conscious entity without pleasure or pain?

    Like

    • Thanks Lyndon. I agree that the biological drives are not tied to consciousness. Those drives go down to the earliest life forms. Indeed, I think a fair definition of life is systems with those drives. The phrase “will to life” is, I think, strikingly appropriate. The drives existed in life for billions of years before the first conscious creature evolved. It’s one of the reasons why I don’t think AI, even AGI, will have them automatically.

      “Can we think of a conscious entity without pleasure or pain?”
      I can conceive of machine consciousness not necessarily having these. (Whether we’d have any empathy for such an entity may be a different story.) Of course, we could define “pleasure” as that which satisfies our instinctual needs, our basic programming, and “pain” as the state of alarm triggered by signals from damaged components. A machine might have these states but I don’t think they would be crucial for consciousness. (As always though, it comes down to how we define “consciousness”.)

      Like

  11. john zande says:

    Without interrupting the dialogue, I have to say I’m thoroughly enjoying this exchange from the sidelines. Fascinating. This, right here, is an example of the Interwebs at its very, very best.

    Liked by 3 people

  12. amanimal says:

    Also, thanks, something I searched relevant to your post gave me a Google Books link to ‘How the Mind Works’ – you’re still reading? Any thoughts of writing up something on it?

    That led to the phrase “auditory cheesecake” and this interesting recent piece:

    New Ways Into the Brain’s ‘Music Room’ http://www.nytimes.com/2016/02/09/science/new-ways-into-the-brains-music-room.html

    It’s now one click from my doorstep 🙂

    Like

    • I am still reading ‘How the Mind Works’ in fits and starts and getting near the end. There haven’t been any real big surprises. (Which I guess makes sense since the book is almost 20 years old at this point.)

      It has fortified my confidence in the computational theory of mind. Someone who read my posts on the mind might accuse me of getting most of my material from this book, except of course I’m just now reading it. Kind of weird how much I agree with Pinker, although maybe not since he ends up agreeing with Dennett a great deal.

      The only thing I might disagree with him on is that he seems confident that mentalese exists. I’m not sure it does; it may be a layer we only want to exist to make things more understandable. I strongly suspect actual mentalese is just the raw processing of neurons. But I’m open-minded about it, if someone can show some functional reason why it might have evolved.

      Interesting article. Thanks!

      Like

      • amanimal says:

        “fits and starts” – here too with McGilchrist, making progress though, I’m up to the Reformation in the history of Western civilization through his bihemispheric lens.

        “mentalese” – ?, I’ll have to do some reading (thanks again) and maybe someday read ‘Consciousness Explained’ too!

        … but I have a couple of less lengthy titles due for some attention once I finish McG’s book 🙂

        Like

        • McGilchrist is covering world history?

          Not sure if I’d bother with ‘Consciousness Explained’ at this point. It’s even older than ‘How the Mind Works’, long, and you can get a pretty good summary in ‘Intuition Pumps’ or ‘Darwin’s Dangerous Idea’, along with lots of other interesting stuff.

          Like

          • amanimal says:

            He does, Part 2 of the book: http://www.iainmcgilchrist.com/The_Master_and_his_Emissary_by_McGilchrist.pdf

            Your ‘The divided brain | RSA Animate’ post (May 25, 2014) – that little 10 min animation is really a pretty amazing condensation considering. The only overtly critical review I recall reading was AC Grayling’s, who said something to the effect that his data weren’t fine-grained enough to reach his conclusions. Chris Kavanagh (God-knows-what.com) thought it too speculative for his tastes. I don’t know yet, it’s an interesting read so far regardless, though at times a bit of a slog.

            Note: I’ve noticed over the last several years that much of what I read is chosen on the basis of the descriptive content and less so on the prescriptive, sometimes to the point of not finishing the book. Of course I’m also fairly certain that the selections are also made so as to not grossly conflict with whatever it is that I might claim to be part of a worldview at any given moment 🙂

            Thanks for the un(?)-recommendation on ‘CE’ – 1991, it can definitely wait and we’ll see – I did “go back” to Marvin Harris’s ‘Our Kind’ (1989), a fascinating read, in looking into anthropology in general a little deeper; no doubt some of the ideas/thinking have changed since.

            With McG talking about the evolution of culture I may have to reread Bruce Hood’s ‘The Domesticated Brain’ looking for what might fit with McG’s ideas/thinking given that the process continues.

            Liked by 1 person

  13. amanimal says:

    “given that the process continues.”

    errr, better said “assuming” I think.

    Like

  14. Hi Mike,

    I don’t see it as a dilemma since it takes a particular data processing architecture.

    That’s completely sidestepping Bishop’s argument though. Of course he’s not arguing only that rocks compute how to be a rock, but that they are computing all other algorithms too. You don’t buy it, and that’s fair enough, but you don’t seem to be able to point out what’s wrong with the argument, so you’re operating purely on your intuitions. Right?

    Although I can see the argument, I think the interpretive load necessary to see rocks as running those things removes its legitimacy, or at least its vitality (for me at least).

    The question isn’t about degrees of legitimacy; it is whether there is an objective fact of the matter regarding which computations are being instantiated. Do you think there’s some kind of cut-off point, or what do you think happens to rule the pixies out?

    we might just need to recognize it as a fundamental disagreement on how we see this.

    Not exactly. I’m on the fence. I would like Bishop’s argument to be watertight, because it would help people to see that the MUH is true (it’s true regardless!), but I can see potential holes in it (via Chalmers especially). I’m just trying to explain Bishop’s argument to you because I’m curious to see how you think it ought to be answered. “I don’t buy it” is a little disappointing but fair enough.

    Thanks for the MUH explanation. I’m still not clear how, if we accept the DwP argument, it removes pixies from our midst,

    Because on the MUH (or at least on my view of mind which very much ties into the MUH) it is wrong to attribute consciousness to physical objects. Consciousness must instead be attributed to algorithms or mathematical objects of that nature. When we say a brain or a computer is conscious, what we really mean is that the brain or computer is naturally and straightforwardly interpreted as instantiating an algorithm which is conscious. The pixies are only in our midst because of the assumption that consciousness must be attributed to the physical instantiations of algorithms.

    My immediate question about the MUH explaining existence is, what explains mathematical reality?

    My view is that mathematical objects are said to exist or not exist only as a matter of convention. So my view can be expressed in two ways, depending on which convention we adopt.

    Firstly, if mathematical objects are said to exist, then they exist necessarily, because of both how we define mathematical objects and how we define existence. For instance, it is necessary that there exists an integer between 3 and 5. It is an analytic fact because of how our terms are defined. No other explanation is required. In this sense of necessary mathematical existence, there exists a mathematical object which is our universe. It only appears real and physical to us because we are in it.

    Alternatively, if we say mathematical objects do not exist, that they are just fictions, then our universe does not exist and neither do we, and we just think we do because it is part of our fiction that we do. It’s a bit like how, from Han Solo’s perspective, Han Solo and his environment exist, but from ours they do not. So we are all like characters in a work of fiction, a work which doesn’t actually exist and so doesn’t need an author. It’s just one of the possible works of fiction that could exist. If anything did! But then nothing really exists, and existence is a meaningless concept, so we might as well use the term “exists” to talk about what there is within these possible fictions. And so we do exist after all. By convention.

    In short, mathematical objects don’t need an explanation of their existence, not really. If they exist at all, it is by convention and they do so necessarily.

    Like

    • Hi DM,
      “I’m just trying to explain Bishop’s argument to you because I’m curious to see how you think it ought to be answered.”
      Okay sorry, now I understand. The issue of the fact of the matter seems to be your primary concern. My answer may surprise you.

      My current thinking is that there is no fact of the matter, just pragmatic considerations. It’s like trying to decide if viruses are life, when the first human was born, whether Pluto is a planet, what the boundary of the solar system is, or where on the spectrum that red becomes orange. These are all distinctions that may be important to us, but that simply don’t exist in nature.

      Along those lines, any complex physical process could be interpreted to be any computation. In the case of rocks, it may require an absurd amount of work to do that interpretation, but in principle it can be done. In that sense, I think Bishop, Putnam, and Searle may well be right.

      But if so, I’m not bothered by them being right. It won’t weaken my confidence in the computational theory in the slightest. It’s just a consequence of that theory that sounds absurd, but only until we remember the absurd interpretations required. I think the absurdities cancel each other out and we’re left with a rather mundane observation.

      Now, despite fringe cases, we can have pragmatic criteria for life, humanity, planets, and spectral labels. Similarly, I think it is possible to have pragmatic criteria for when something would meet our intuition of being conscious. I give a list of what I think those might be in the post. (Although it will almost certainly change as we learn more about consciousness.) As in the other cases above, I’m sure there will be fringe cases that will make us squirm. (Nature seems to like doing that to us.)

      The pixies, as described, wouldn’t meet those criteria. I might feel bad about that if I ever saw or perceived a pixie in any way, but I’m not expecting I ever will. Apparently they can have no causal effect on us, and we can never be aware of what effect we might have on them. While they may exist in principle, pragmatically they are simply too far out of our causal framework to be of concern.

      “I would like Bishop’s argument to be watertight, because it would help people to see that the MUH is true”
      I don’t see the necessity of the MUH from his argument, although I do see some similarities. But maybe I’m missing something?

      “In short, mathematical objects don’t need an explanation of their existence, not really. If they exist at all, it is by convention and they do so necessarily.”
      If convention is itself a mathematical entity, that seems circular. But if I understand your overall point, it could be true, but it seems similar to the God-has-always-existed type argument, which could also be applied to the physical universe. (BTW, I don’t see this as an issue unique to the MUH at all. It pretty much applies to every attempt to explain reality.)

      Like

      • Hi Mike,

        My current thinking is that there is no fact of the matter, just pragmatic considerations.

        I don’t think that answer cuts it, not in this case. Pluto may or may not be a planet, but whatever it is, it definitely exists. This kind of approach clearly does work in assigning categories to things: distinguishing life from non-life, or intelligence from reflex or instinct. But I don’t see how it works in the case of determining whether a complex, self-reflecting mind with a rich inner world of phenomenal experience exists or not. A first-person perspective can’t (it seems to me) half-exist.

        On the other hand, your approach does work from a third-person perspective when it comes to whether we ought to regard something as a computation. But as long as we think there is a fact of the matter on whether minds exist, our regard is irrelevant to whether there really is a conscious mind in there. And whatever you say, you must think there is a fact of the matter if you think that you exist while observers in other possible universes do not. If there isn’t a fact of the matter, if we can’t say categorically what exists and what does not, then all possibilities are on an equal footing and we have the MUH as a consequence.

        but only until we remember the absurd interpretations required.

        The interpretations only have to be possible, though, they don’t have to be realised. The pixies exist regardless. So we can have absurd pixies without realising the absurd interpretations. So I don’t think the absurdities do cancel after all.

        While they may exist in principle, pragmatically they are simply too far out of our causal framework to be of concern.

        Agreed. But our pragmatic attitude with respect to the pixies is not the problem. The problem is whether they exist or not.

        I don’t see the necessity of the MUH from his argument, although I do see some similarities. But maybe I’m missing something?

        No, his argument does not imply the MUH. But if his argument is right, I think we have a choice between rejecting computationalism, accepting pixies or accepting the MUH. The third is the most attractive to me. And I would hope to others also.

        If convention is itself a mathematical entity, that seems circular.

        Sort of. I would instead say that the MUH denies that the intuitive concept of objective existence is really a meaningful predicate. Existence is only meaningful within a particular context. We can meaningfully say that Pluto exists because we mean that there is an object out there in space in causal interaction with us. Implicitly we smuggle in the condition that we are talking about its existence from the perspective of an observer in our universe. We can’t meaningfully say whether a planet in another universe exists or not without defining our terms in some way to clarify what we are talking about, and our definition therefore determines whether it exists or not.

        It is a category error to apply the concept of existence to entities out of causal interaction with us without this kind of clarification. But it is also a category error (I think) to apply it to the universe itself and to our own minds, where the implicit smuggling in of that condition is more problematic. We exist from our point of view and our universe exists from our point of view, but there is no sense in which it or we exist objectively until we give some meaningful definition for what objective existence is supposed to be. Platonic existence is one such definition, giving us a framework for the discussion of these other universes and our own from an objective perspective without really insisting that mathematical objects are physical things out there somewhere and somehow in causal interaction with us.

        Like

        • Hi DM,
          I just want to say that I’m thoroughly enjoying this conversation. Sometimes that doesn’t come across in our exchange so wanted to make sure you knew it.

          “Pluto may or may not be a planet, but whatever it is, it does definitely exist.”
          I think the existence of a complex pattern doing computation isn’t in question. What’s in question is whether or not it amounts to a mind (or some other designated pattern). In that sense, I do think it’s quite similar to whether Pluto amounts to a planet. (Admittedly it’s far more complicated than the Pluto question.)

          “A first-perspective can’t (it seems to me) half-exist.”
          I totally understand this sentiment. But I think there is good evidence to contradict it. Many animal species intuitively seem conscious, but tests have shown that most are not self aware. Newborn babies are conscious to at least some extent, but the axons connecting their thalamus and cerebral cortex haven’t grown myelin sheaths yet, meaning their experience is a substantial subset of what a 1-2 year old experiences. We have extensive evidence from brain damaged patients that consciousness can be damaged while not being completely eliminated, in many disturbing ways. Even in healthy humans, drugs and fatigue can impair consciousness without completely shutting it down.

          “And whatever you say, you must think there is a fact of the matter if you think that you exist while observers in other possible universe do not.”
          On possible universes, that’s a stronger position than I take. I don’t conclude other universes aren’t there, I just don’t see enough data to conclude they are there. But even if I did conclude that other universes don’t exist, I’m not seeing the inconsistency with a position about the status of patterns in this universe.

          “The interpretations only have to be possible, though, they don’t have to be realised. The pixies exist regardless.”
          Well, again, a complex set of patterns objectively exists. Whether they match what we intuitively classify as pixies is a matter of interpretation and definition, an interpretation I see as absurd.

          “I think we have a choice between rejecting computationalism, accepting pixies or accepting the MUH. The third is the most attractive to me. And I would hope to others also.”
          I can’t see how accepting the MUH gets you out of accepting pixies. From what I can see, computationalism + absurd interpretation = pixies. The MUH just seems like icing on the cake.

          Thanks for the reply on the MUH. I recognize your passion on this. It makes me want to see truth in the hypothesis. I do see it as possible, but can’t find any necessity for it, nothing that forces it on me. Maybe it’s still what I’m missing in the choice you outline above between pixies and the MUH.

          Like

          • Hi Mike,

            I’m enjoying it too, but I think there’s one point we’re going in circles on. I was trying to say a conscious mind can’t half-exist, but you answered as if I was saying a pattern can’t be half conscious. There’s a difference.

            A pattern is something we impose on a natural system, not really something that is objectively instantiated (in my view). Some patterns are very readily identifiable and some are arbitrary and there is a spectrum between these extremes. For instance, my coaster is readily identifiable as a circle. A cherry-picked selection of atoms hanging in the air in front of me also form a circle.

            There is also a spectrum between non-conscious patterns such as the minds of cockroaches and conscious patterns such as the minds of people. These are two different spectra and you’re conflating them in my view.

            I completely accept that there is a blurry spectrum between non-conscious and conscious organisms, and no fact of the matter about when real consciousness starts. But let’s suppose we’re talking about the conscious end of that spectrum exclusively, where we would definitively say that the pattern we’re looking at is conscious. If we fix our attention on the conscious side of that spectrum, I’m trying to look at the grey area of that other spectrum, that of natural or arbitrary interpretation of pattern instantiation.

            It is possible to cherry pick observations so as to form an arbitrary mapping and thereby interpret a wall as instantiating a fully conscious pattern. This is no different in principle from cherry picking atoms in the air and saying they form a circle, or spell out the text of War and Peace. It’s arbitrary and possibly retrospective, but all the same it’s hard to pin down truly objective criteria to say that the pattern isn’t really instantiated after all. So there is no fact of the matter regarding whether it is instantiated.

            So if…
            1. a conscious mind can’t half-exist
            2. there is a fact of the matter on whether conscious minds (i.e. you) exist
            3. there is no fact of the matter on whether a pattern is instantiated by a given physical object
            … then it ought to be obvious that simply instantiating a certain kind of pattern cannot be the same thing as causing a conscious mind to exist.

            Either you need something else (biological naturalists might say some specific biochemical process, theists might say a soul), some form of panpsychism is true or I’m right and what is conscious is not the physical instantiation of a pattern but the pattern itself.

            But even if I did conclude that other universes don’t exist, I’m not seeing the inconsistency with a position about the status of patterns in this universe.

            The inconsistency is with where you seem to be suggesting that there is no objective fact of the matter about whether your own conscious mind exists, that it exists only from your perspective. If that were so, then observers in other possible universes would have just as much claim to existence, seeing as they exist from their own perspectives. Which is the MUH in a nutshell.

            Well, again, a complex set of patterns objectively exists.

            I don’t think so. Once again, you seem to be making the mistake of thinking that whatever natural interpretation we make of the pattern of a physical object is the one that is supposed to be conscious. And of course it isn’t: a natural interpretation of the computation of a wall or a rock is well down in the non-conscious end of that spectrum. But all you need to instantiate a complex mind on the DwP argument is an incrementing counter, i.e. some physical object that changes state without repeating. Once you have that, you can impose an arbitrary interpretation to show it is implementing any algorithm you like, including ones from the conscious end of the spectrum. There’s nothing objectively correct about this or any interpretation; there are only degrees of arbitrariness and perversity. Patterns simply don’t get objectively instantiated. The only sense in which they exist objectively is Platonically. If we are Self Aware Patterns, and if we want to say we objectively exist, then we must accept Platonism.
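            A toy sketch of the counter move (my own illustration, not Bishop’s notation): take any system whose state simply increments without repeating, and build the interpretation backwards from the state trace of whatever algorithm you want it to be “running”.

```python
# Putnam-style mapping, toy version: a bare counter (any physical system
# that changes state without repeating) gets mapped, retrospectively,
# onto the state trace of an arbitrary algorithm.
def counter_states(n):
    """The 'physical' system: its state at time t is just t."""
    return list(range(n))

# The target algorithm's state trace -- could be anything, including
# (on the DwP argument) a trace from the conscious end of the spectrum.
target_trace = ["alpha", "bravo", "charlie", "alpha", "bravo"]

# The mapping is cherry-picked after the fact: counter state t maps to
# whatever the algorithm's state happened to be at step t.
mapping = dict(zip(counter_states(len(target_trace)), target_trace))

# Under this mapping, the counter "implements" the algorithm...
assert [mapping[t] for t in counter_states(len(target_trace))] == target_trace
# ...but the mapping carries all the structure; nothing about the counter
# privileges this interpretation over any other.
```

            The point of the sketch is that the mapping is more complex than the algorithm it supposedly reveals, and it could only be written down after the fact, which is exactly the arbitrariness and retrospectivity at issue.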

            I can’t see how accepting the MUH gets you out of accepting pixies.

            Because on the MUH (or on my view anyway, not sure what Tegmark would say) it is wrong to associate consciousness with physical instantiations of patterns, and pixies are physical instantiations of patterns. On the MUH, you still get all the same conscious minds, but now they are sensibly located in other universes rather than absurdly in our rocks and walls. The icing on the cake is that we ourselves are no different, and so it doesn’t draw a distinction between the real world and worlds that only exist in patterns (because the real world is just a pattern, i.e. a mathematical object).

    • Hi DM,
      Rather than circle endlessly on “facts of the matter”, I think I’m going to focus on what both of us find more interesting.

      “Either you need something else (biological naturalists might say some specific biochemical process, theists might say a soul), some form of panpsychism is true or I’m right and what is conscious is not the physical instantiation of a pattern but the pattern itself.”
      Speaking strictly metaphysically, if I accept everything you lay out up to this point, I think we have no choice but to acknowledge panpsychism. It does seem like we’re looping on this point: I can’t see how your third option gets you out of panpsychism. Isn’t the third option a superset of the second option? I can’t see that the third inhibits the second in any way. Nor does it enhance it. It’s simply an add-on. Unless I’m missing something?

      “Because on the MUH (or on my view anyway, not sure what Tegmark would say) it is wrong to associate consciousness with physical instantiations of patterns, and pixies are physical instantiations of patterns.”
      Echoing above, the pixies are physical instantiations of patterns, but they are nonetheless still patterns. If it’s the pattern itself that determines consciousness, then even within the MUH, how are they not conscious?

      Just to re-iterate, I find panpsychism an unproductive outlook, and the pixies neither trouble nor excite me. At this point, I’m primarily interested in understanding how you see the MUH dispensing with them.

      • Hi Mike,

        Rather than circle endlessly on “facts of the matter”

        Hmm, OK, but that point is central to Bishop’s thesis.

        I can’t see how your third option gets you out of panpsychism

        Panpsychism is the view that consciousness is ubiquitous, all around us. That is not my position, because I don’t think conscious minds really have positions in space. I think they exist Platonically, and so not really all around us after all, any more than we are surrounded by Platonically existing proofs of Fermat’s last theorem. Our walls and our rocks and so on are not conscious. The minds that Bishop calls pixies do exist, but only Platonically, which means in effect that they are observers in other universes. They are no less real or substantial than we are.

        If it’s the pattern itself that determines consciousness, then even within the MUH, how are they not conscious?

        Oh, the patterns are conscious. But physical instantiations of patterns are just projections of patterns onto physical objects and not the patterns themselves. So these conscious minds are not in the walls. They exist Platonically, effectively in other universes.

        Whether a mind is in our universe, in my view, depends on whether we can interact with that mind. Since there’s no input/output for the pixies, they are not in our universe. If some ridiculous contraption were set up to provide the input/output (as you say, the interpretation would really be doing all the work), then only at that point would I regard the pixie as being something in our universe (this is also how we can say that we are present in this universe, effectively inhabiting our bodies. Our bodies are our input/output interface to the rest of the world). This is not so different from your position — whether we should regard them as present is a pragmatic question which largely depends on how naturally we can interpret them as being present. But they exist regardless, and that’s the crucial point.

        • Hi DM,
          “Hmm, OK, but that point is central to Bishop’s thesis.”
          I understand. I bracketed the matter last night because it felt like we were at an impasse. My objections to your premises seemed like variations of the ones I’d raised earlier.

          That being said, I’m starting to regret my earlier conclusion that whether or not a particular algorithm is being implemented is not a fact of the matter. I dislike that conclusion and really would like to find a way out of it. It feels like there should be a limit to how much work can be done by the interpretation, beyond which we should consider that interpretation illegitimate. But I know if I say that, you’ll ask me exactly where that limit is, and I don’t have an answer, at least not yet.

          Perhaps one possible avenue is that, in the case of the pixies or Wordstar in the wall, it feels like the interpretation has become part of the implementation of the algorithm. Indeed, in the case of a counter, it feels like the interpretation is doing virtually the entire implementation, with the counter only providing pacing. In fact, along the lines of your observation, to successfully hold a suitable interpretation, I suspect we’d need a powerful computer, which makes the idea that the interpretation is actually the implementation more intuitive.

          The difficulty is that one could counter that the interpretation is always part of the implementation. But if the interpretation requires more processing power than the ostensible implementation, it seems hopelessly impractical, a case of the tail wagging the dog, and that puts us back to pragmatics without an absolute fact of the matter. Honestly, I don’t know how to climb out of this without resorting to pragmatics.

          “Our walls and our rocks and so on are not conscious. The minds that Bishop calls pixies do exist, but only Platonically, which means in effect that they are observers in other universes.”
          And yet, assuming the pixies are there, if I take a hammer to a pixie’s rock, don’t I have a good chance of extinguishing that consciousness, or at least affecting it in some way? I may never know that I affected it, but it doesn’t seem like that would change the reality that I would have.

          “Whether a mind is in our universe, in my view, depends on whether we can interact with that mind. ”
          As above, it seems like the pixie can’t interact with us, but we can interact with it, or perhaps more precisely, affect its experience. Doesn’t that put it at least somewhat in our universe?

          • Hi Mike,

            If you’re seeing that there are real issues with saying an algorithm is instantiated and unsure whether we really can be objective about it, then you’re more or less where I am. I can see Bishop’s point, but Chalmers has raised some plausible objections against it, the kind of objections that might allow us to have some objective criteria where Putnam and Bishop’s pixies don’t really exist because the FSAs are not fully implemented or because FSA is too impoverished a model of computation. I’m not 100% sure Chalmers succeeds, but it’s a good effort.

            And yet, assuming the pixies are there, if I take a hammer to a pixie’s rock, don’t I have a good chance of extinguishing that consciousness,

            No, because the pixie is an observer in its own little mathematical world. You can’t destroy a mathematical object with a hammer. I mentioned before that I could cherry pick atoms in the air before me and claim that a circle was instantiated hanging in mid air. If I wave my arms, I disturb those atoms and they no longer instantiate the circle, but I have not destroyed the Platonic circle itself. In fact I can still find it instantiated in the air before me if I wish, I just have to choose different atoms. And so it is with the pixies. You can disturb the patterns we interpret as instantiating them, but that doesn’t destroy them because the pixies are actually the patterns themselves and not the instantiations of the patterns.

            However, if the pixies have I/O, you can destroy them in the sense that you can remove them from your universe. There will always be some other universe where they continue to exist. I think the same is true of people though. If you kill a person, you are effectively removing him from your universe. But there’s another universe where he continues to exist. A parallel world where he didn’t die, perhaps. I could get into big digressions about how I think we should think about this stuff with regard to morality and so on, but it’s perhaps a little too much for now.

          • Hi DM,
            “but Chalmers has raised some plausible objections against it, the kind of objections that might allow us to have some objective criteria where Putnam and Bishop’s pixies don’t really exist because the FSAs are not fully implemented or because FSA is too impoverished a model of computation.”
            I need to revisit Chalmers’s paper. It sounds like I’m working my way around to his way of thinking, albeit almost certainly in a less rigorous and formal way. Although I found his focus on the I/O aspect troubling on my first perusal. What would it say about the status of uploaded people living in a simulated world? Admittedly, that scenario is somewhat ameliorated by presumed communication between the simulation and the outside world (in other words, a different level of I/O), but it still leaves entities in a self-contained simulation on shaky ground. Also, I suspected Bishop’s focus on the input tape scenario was designed specifically to counter the I/O objection.

            “However if the pixies have I/O, you can destroy them in the sense that you can remove them from your universe.”
            But even if they don’t have I/O, why couldn’t we disrupt their operation in this universe? If they’re in a completely self-contained environment in the wall, but I burn the wall to ashes, it seems like their environment is altered, most likely catastrophically. The MUH may allow them to continue unabated in another universe, but if I’m understanding correctly, it allows everything and everyone to do that, so it still seems like a superset of the Bishop scenario.

          • Hi Mike,

            But even if they don’t have I/O, why couldn’t we disrupt their operation in this universe?

            Because if they don’t have I/O, what happens to them is completely determined by the algorithm that defines them and their universe. If you destroy the rock, their life stories are unaffected. You’ve destroyed one possible convoluted interpretation of their instantiation but there are an infinite number of alternatives. You can find the very same pixies in some other rock, or in the pulverised remains of the original. You’ll have to find a new mapping, that’s all.

            But of course I’m not saying they are in the rocks at all. I’m saying they are in a different universe. All you can find in the rocks is an arbitrarily convoluted reflection of them. In the same way that tearing up a photograph has no effect on the person photographed, nothing you can do to a rock will affect any pixies.

            On the MUH at least. The panpsychism Bishop is talking about is a different story. He would for instance regard a pixie in one rock as distinct from a pixie in another rock, even if they are the products of identical computations, so that you could indeed destroy a pixie by destroying its rock. I on the other hand would regard them as the same pixie. The Mandelbrot set rendered on my computer is the same object as the one rendered on yours. You can’t destroy the Mandelbrot set by destroying my computer.

            it allows everything and everyone to do that, so it still seems like a superset of the Bishop scenario.

            In a way, I think it’s the Bishop scenario that’s the superset, for two reasons.

            1) It posits a real world and many simulated worlds. I only posit one kind of world.
            2) It has all the computable worlds of the MUH instantiated in every physical system. A copy of our universe exists inside this rock and inside that rock. I instead say that identical mathematical objects are the same object and so I posit drastically fewer distinct objects.

    • Hi DM,

      “Because if they don’t have I/O, what happens to them is completely determined by the algorithm that defines them and their universe.”

      Actually, I realized while reading your response that the pixie systems would have at least contingent input. My destroying their substrate would effectively be a form of input.

      Come to think of it, they’d also have output. Every physical system unavoidably has both input and output (even black holes, which output Hawking radiation). The output might be completely incidental according to our pixie interpretation, but it would be there nonetheless. A purely I/O-free system doesn’t seem possible, at least not physically.

      (Unless, admittedly, we consider the entire universe as a physical system, or if there is a multiverse, all of the multiverse. Also the universe will eventually be composed of isolated Hubble spheres, which would ultimately have no I/O with the rest of the universe, if we even want to consider everything outside of that volume within the same universe anymore.)

      The pixies are looking increasingly incoherent to me, at least as physical entities in this universe.

      “You can find the very same pixies in some other rock, or in the pulverised remains of the original. You’ll have to find a new mapping, that’s all.”

      The problem is that this applies to everything, not just the pixies. If a plane crashes into my house and I die, per the MUH, my pattern is still in the multiverse. Perhaps that’s true metaphysically, but the fact remains that my pattern in this universe has been affected.

      Yesterday I secure-wiped an old hard drive. Per the MUH, the deleted information is still in the multiverse. But for my purposes, it’s gone, unable to be used by identity thieves (I hope). That it persists in a realm inaccessible to anyone (absent a world-wide forensics effort), if true, changes nothing here: such a realm is a complete superset of every interaction in this universe anyway.

      “It has all the computable worlds of the MUH instantiated in every physical system. A copy of our universe exists inside this rock and inside that rock.”

      Here I feel fairly comfortable saying that there is a fact of the matter that this is not true. It can’t be. Having the universe in a rock would require that the rock be more complicated than the universe. A subset of a thing cannot be more complicated than the thing in which it resides. If we say it’s due to the interpretation then that interpretation would have to be more complicated than the universe (or the interpretation plus the rock would have to be).

      I totally agree that the MUH can have room for every variation of our universe, every being, and every variation of those beings, but I can’t see that the DwP possibly can. (Well, a DwP spread over the full MUH could, but I can’t see how the DwP sans MUH could.)

      • Hi Mike,

        Actually the pixies need have neither input nor output.

        If input is not accommodated in the computation (e.g. the state transition tables), there is no input. If you mess with a physical system performing an inputless computation, I would say you are just changing the computation it is performing. What you have now is a window into a different mathematical object. When the computation we are considering supports input from you, then the mathematical object we are talking about includes you because it cannot be described without you, and so the computation needs to be considered to be part of your universe.

        Of course there’s no “fact of the matter” about whether input is just part of the computation or whether it’s just changing the physical system so it supports some other computation. You can look at it either way, but often one way of looking at it is more useful than another. I would regard destroying the rock of the pixies as an example of destroying the physical instantiation of the computation and not as a form of input to that computation. These pixies have no way of knowing you exist. When you destroy them, they continue to exist Platonically in precisely the same way, ignorant of your existence. There is no reason to regard them as part of your world and so no reason to regard them as destroyed.

        Neither is there output. The rock has output qua a lump of rock, but not qua an instantiation of a pixie. There is no human that has developed a relationship with a pixie, for instance. So the rock is affected by and has effects on its environment, but these unrealised possible interpretations of what the rock is doing as various computations have no causal relationship to anything. For there to be I/O to a computation, as opposed to a physical object, there needs to be some meaningful correlation between states of the computation and events in the outside world. There needs to be a reason to see that physical object as instantiating that computation and not others.

        per the MUH, my pattern is still in the multiverse.

        Yes. In two ways.

        1) Your life, including it ending in a plane crash, continues to exist. Your biography is a Platonic object with a beginning, middle and end. You still die in the plane crash, but you can regard this as just being the end of the story rather than the story ceasing to exist.
        2) There is some other universe where the plane didn’t crash into you and you continue to exist.

        the fact remains that my pattern in this universe has been affected.

        Of course! As I say, you are in interaction with objects in this universe so you should be regarded as part of this universe. Your death means you are no longer part of this universe.

        But for my purposes, it’s gone, unable to be used by identity thieves (I hope)

        Sure! So what?

        Having the universe in a rock would require that the rock be more complicated than the universe.

        No, because the interpretation can be arbitrarily complex and that makes up for it.

        Then that interpretation would have to be more complicated than the universe

        Yup.

        • Hi DM,

          “Actually the pixies need have neither input nor output.”

          Perhaps not Platonically, but I think any physical implementation would have I/O, whether it wants to or not, except perhaps under the briefest of time spans. We can interpret the input to be disruption, but it seems unavoidable, and any non-trivial implementation of an algorithm would have to accommodate it. The output may be irrelevant to the computation itself, but it would still have an effect on the outside world.

          Pixies within the universe cannot be causally isolated from the universe. (Unless perhaps we locate them deep in an intergalactic void.)

          “Sure! So what?”

          My point was that, within this universe, things progress the same whether the MUH is true or false. Putnam’s and Bishop’s thesis stands or falls independently of it. Although if it stands, the MUH certainly enhances it in the ways you describe.

          “No, because the interpretation can be arbitrarily complex and that makes up for it.”
          “Yup.”

          Alrighty then, but this particular proposition (whole universes within a rock) becomes far too quixotic for my tastes.

          • Hi Mike,

            Perhaps not Platonically, but I think any physical implementation would have I/O

            Yes, any physical system has an effect on its environment and vice versa. But a computation only has I/O if the interpretation of the physical system as that computation is realised somehow. Nothing in the environment of a rock is affected by the possible interpretation of the rock as computing a pixie. In this sense it does not have I/O, and that is the important sense for what I’m saying. Since nothing in the world affects or is affected by the interpretation of that rock as computing a pixie, the computation has no I/O and there is no reason to regard the pixie as being part of our universe.

            Pixies within the universe cannot be causally isolated from the universe.

            The pixies are causally isolated. The “computer” itself isn’t. Indeed, it is crucial to Putnam’s argument that it not be. He talks about pixies occurring in open physical systems, as opposed to closed physical systems, because he assumes that gravitational and electromagnetic interaction from outside the system will guarantee it progresses through successive states without repeating. So a rock just left alone is being affected by the outside world, and this is part of what drives it through the states that are being mapped to the pixie FSA.

            But the pixies are causally isolated because what happens to the pixies is determined only by the algorithm we choose. We choose an algorithm first, then we (retrospectively if necessary) map the states of that algorithm to states of the rock in interaction with its environment. The fates of the pixies were determined before we even looked at deriving a mapping or the states of the rock. The fates of the pixies would have been the same no matter what rock we had chosen.

            Alrighty then, but this particular proposition (whole universes within a rock) becomes far too quixotic for my tastes.

            Agreed! And for Bishop too! That’s the point of DwP. You’re not supposed to just accept the absurd conclusion. You’re supposed to reject it as too absurd, and as a consequence abandon computationalism.

    • Hi DM,
      Well, it always comes back to the interpretation, an interpretation which in some instances cannot even exist in this universe. In the more grounded cases, we’ve established that the interpretation is doing virtually all the work. It’s the interpretation that is effectively implementing Wordstar, not the wall.

      In the case of pixies, we’ve taken a rock and built an implementation around it, and then pretended like we found it there. In other words, we built an AI and blamed it on the rock.

      As you note, the power of the argument is supposed to be its absurd conclusion. But it requires absurd premises to reach that conclusion. In my mind, this makes it conceivably true but uninteresting. If I’m allowed to add in arbitrarily absurd premises, generating absurd conclusions is trivial.

      So, unless there are aspects we haven’t discussed yet, I can’t see that this troubles the computational theory of mind. In many ways, it ends up being a special case of the argument I made in the post about general panpsychism: If it’s true, so what? What follows from it? Does it tell us anything useful about human, animal, or machine consciousness?

      (None of this is to say it wasn’t worth investigating. After all, black holes and quantum entanglement were originally supposed to be absurd consequences that couldn’t exist in reality, until we found out they did.)

      • Hi Mike,

        In other words, we built an AI and blamed it on the rock.

        Again, I think you’re focusing too much on the absurdity of the interpretation as if that gets us out of trouble. It doesn’t because the interpretation doesn’t have to be realised. It doesn’t matter that it is absurdly complex because it only exists in potentia: nobody has to actually build it for the pixies to exist. Building it is really only giving the simulation the potential for I/O. The pixies should exist regardless. If they do not, we should be able to find a good reason why not: an objectively verifiable criterion by which the rock fails to implement the pixie algorithm. You seem to intuit that there has to be such a criterion, but you don’t seem to be able to lay your hands on it.

        One approach would be to say “OK, a mind has to have I/O with the real world (effectively a realised interpretation) to exist”. The problem with that approach is that it seems wrong to suggest that a simulated mind would cease to exist just because we shut down I/O and there was nobody to interpret what the computer was doing as a computation.

        Another more promising approach is to go the Chalmers route and try to find objective differences between the mappings Putnam proposes and what a computer does. This seems to me like the way you would want to go given your beliefs and attitudes about these issues. I’m not sure it’s guaranteed to be successful, as intuitively it seems plausible to me that there might always be ways around such objections and no way to conclusively determine if a system is implementing a given algorithm or not.

        Again, I think the MUH is the correct response. We don’t have to believe universes exist in our rocks. We don’t have to hold ourselves responsible for genocide every time we interact with a physical system. The pixies are not in the physical objects, they are safely tucked away in completely causally isolated universes where we can’t hurt them and they can’t hurt us, and so we are completely within our rights to live our lives as if they don’t exist (indeed they do not, at least from our perspective — just as we do not exist from theirs). Don’t you find that less absurd than DwP? Doesn’t it sit a little better with your pragmatic view of what we should regard as a computation or not?

        Alternatively, as I said, you can put your bets on the Chalmers approach if you like, but what would you do if it failed?

        • Hi DM,
          I think maybe we have a disconnect here. So let me say this another way. Let’s say that every aspect of the DwP is true. We can interpret / implement into existence conscious pixies. Isn’t this the very essence of computationalism? The fact that these pixies can be interpreted / implemented into existence is equivalent to our ability to build AIs. The only difference is the substrate.

          Indeed, the DwP strikes me as similar to the Chinese Room argument, in that it’s really just another way of triggering the initial emotional aversion many people have to computationalism. These thought experiments repackage the basic concept to call attention to it in a new and fresh way, but it’s really just the same initial issue.

          But the fact that the computational theory of mind leads to scenarios that might make us uncomfortable is irrelevant. Natural selection makes many people squirm, as does the looming heat death of the universe, or the bizarre properties of quantum mechanics. The relativity of simultaneity still sometimes makes me squirm. As I said early in this conversation, reality has shown no concern for our cherished categories.

          So, with all that said, why should I be concerned if the DwP is true? Suppose it is true? What follows?

          • Hi Mike,

            The fact that these pixies can be interpreted / implemented into existence is equivalent to our ability to build AIs.

            I think you’re missing something though. Per the DwP argument, you don’t have to build an interpretation for the pixies to exist. Unlike the intuitive idea of AIs, if DwP is correct then pixies exist without any effort at all on our part. That makes them substantially different.

            So, with all that said, why should I be concerned if the DwP is true?

            Because it is more absurd than either rejecting computationalism or accepting the MUH. Because it is absurd to think that every action you take kills an infinite number of sentient beings and brings an infinite number of sentient beings into existence. Because if DwP were true, there would be no reason to think you were not a pixie yourself, in fact it would be overwhelmingly likely that you were one. If you can accept all this, then for the life of me I have no idea why you would hesitate for a second to accept the MUH which is far simpler and far less absurd and explains everything that needs explaining.

          • Hi DM,

            “Unlike the intuitive idea of AIs, if DwP is correct then pixies exist without any effort at all on our part.”

            It seems to me that the version of the DwP you’re describing assumes Platonism, that the interpretation exists before it is developed. But if we accept Platonism, then the AIs exist before we instantiate them. If we don’t accept Platonism, then neither exist until someone puts in the energy to form their patterns, at least for the interpretations that are possible in this universe; the others are only possible with something like the MUH.

            “Because it is more absurd than either rejecting computationalism or accepting the MUH.”

            I think here we just have a fundamental disagreement. Yes, the conclusion is absurd, but only with absurd inputs. You don’t see the absurd inputs making a difference; I see them as making the DwP, if true, uninteresting. I’m not sure how to bridge this divide. We just seem to have different intuitions about it.

            On the MUH, I feel your frustration, but I don’t think you want me to say I accept it if I don’t. I’ve always been prepared to accept it if there is some necessity for it. But in this particular discussion, it seems to me to pour gasoline on the fire rather than put it out.

          • Hi Mike,

            I’m not frustrated, sorry if I seem so.

            It seems to me that the version of the DwP you’re describing assumes Platonism, that the interpretation exists before it is developed.

            I don’t think so. Rather, the DwP argument assumes that the interpretation has no effect on the existence of the pixies, because removing I/O from a normal computation or having humans fail to interpret it as a computation should not cause any conscious minds realised in the computation to blink out of existence. The heart of the DwP argument is only that it is not possible to come up with objective criteria by which an inputless computation can be said to exist or not. I don’t see Platonism being smuggled in there.

            Again, sure you have to go to ridiculous lengths to show a rock is computing a pixie, and that may well be grounds for suspicion that it isn’t, but until we can point to objective criteria we’re in a bit of a bind.

            So, let’s take it as a given that you’re right and the absurdities cancel out somehow. Don’t you still think there’s a problem in determining how absurd an interpretation is allowed to get before we say the pixies don’t exist? Doesn’t there need to be a hard cutoff? Is it not true that a conscious mind (from its own perspective) must either fully exist or fully not exist?

          • Hi DM,

            “I’m not frustrated, sorry if I seem so.”

            Oops, sorry I jumped to conclusions.

            “Don’t you still think there’s a problem in determining how absurd an interpretation is allowed to get before we say the pixies don’t exist? Doesn’t there need to be a hard cutoff?”

            Honestly, it would be emotionally pleasing to have a hard cutoff, but reality simply may not cooperate. The problem is that we consider two patterns to be the same algorithm if mapping between them doesn’t require too much work. Is the precise published DOS version of the Wordstar binary code running in my wall? I think we can comfortably say No. Is something functionally equivalent to Wordstar running in my wall? That depends on how far we’re willing to stretch “functionally equivalent,” how much energy we’re willing to invest in finding a way for the two patterns to be in the same category, in other words how absurd we’re willing to get with the interpretation.

            Let’s look at this another way. What if I asserted to you that every James Bond movie had the same plot? You might agree. Now what if I asserted that every movie ever had the same plot? You’d probably be suspicious, but if I insist that there is an interpretation of every movie that maps them all to the same plot, no matter how silly those interpretations might be, you may not be able to find a logical way to refute it, though you’re unlikely to be bothered by it.

            Maybe an objective criterion might be how much energy the interpretation requires. Certainly if the interpretation is more complex than the pattern under consideration, I think we’re on shaky ground, although any threshold is ultimately going to be arbitrary, and “how much energy” is itself open to interpretation. 😛
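            To make this concrete, here’s a toy Python sketch (my own illustration, not Putnam’s formal construction; the “wall states” and the counter “computation” are arbitrary stand-ins) of how a post-hoc interpretation can make any state sequence “compute” anything:

```python
# Toy illustration of a Putnam-style "interpretation": any sequence of
# distinct physical states can be mapped, after the fact, onto the state
# sequence of a chosen computation.
import random

# "Physical" states of a wall over 8 time steps (random, structureless).
wall_states = random.sample(range(10**6), 8)

# Target computation: a counter 0..7 (a stand-in for any algorithm's trace).
target_trace = list(range(8))

# The "interpretation" is just a lookup table built post hoc.
interpretation = dict(zip(wall_states, target_trace))

# Under this mapping, the wall reproduces the counter perfectly...
decoded = [interpretation[s] for s in wall_states]
assert decoded == target_trace

# ...but the table stores the entire answer itself: 8 entries for an
# 8-step trace, so the mapping is at least as complex as the computation
# it claims to discover.
print(len(interpretation))  # 8
```

            The lookup table matches the target trace exactly, yet it contains the whole answer, which is precisely the sense in which the interpretation can cost more than the pattern it purportedly finds.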

            Like

          • Hi Mike,

            Honestly, it would be emotionally pleasing to have a hard cutoff, but reality simply may not cooperate.

            I’m sorry, but I see this as hand-waving away a crucial problem. I don’t want a cutoff because it would be emotionally satisfying, I want one because I can’t see how you can get by without one. To have some sort of gradation between a given mind existing and not existing you would have to explain what it is for a mind, from its own perspective, to half exist, while being functionally just as complex and self-reflective as an awake, mentally sound adult human. The idea that such a mind can half-exist strikes me as nonsense. A philosophical zombie/human hybrid, perhaps.

            Like, if I was to suggest that in the multiverse, some universes were “realer” than others despite being otherwise similar, that the existence of a possible world and the people within it were a matter of degree rather than an absolute, wouldn’t that strike you as something that needs to be clarified and justified a little?

            Is the precise published DOS version of the Wordstar binary code running in my wall? I think we can comfortably say No.

            Well, on DwP, it is, pretty much. Your wall is running DOS after all! Is the code running? No, but I would say code isn’t running on your computer either, not really. Code (even binary code) is an abstraction to describe a logical computation, not really something that physically runs on a physical computer.

            you may not be able to find a logical way to refute it, but you’re unlikely to be bothered by it.

            Yes, because nothing hinges on whether this is the case or not. It’s just applying a categorisation, like whether a virus is really alive or not or whether Pluto is a planet.

            But say a discontinuous objective fact did hinge on it, then there would be a problem. Say watching movies with Bond plots were guaranteed to kill you, but watching non-Bond plots were guaranteed not to kill you. Then there would have to be some sort of cut-off. On computationalism, the objective fact is whether a given mind exists or not. It can’t hinge on whether a computation is happening or not if there are no objective criteria to determine whether this is the case.

            I think we’re on shaky ground, although any threshold is ultimately going to be arbitrary

            We are on very shaky ground if the threshold is arbitrary. Physicalist computationalism seems to demand an objective, sensible threshold that you can justify from first principles. Failing that, its critics have every reason to call it a failed hypothesis and deem its adherents to be in the grip of an ideology.

            Like

          • Hi DM,
            There are many concepts that defy objective definition. Biology gets along fine without an objective definition of “life” or “species”, astronomers continue to argue about the precise definitions of “planet”, “solar system”, or “galaxy”, sociologists study religion despite not being able to define it, and science overall proceeds despite ongoing arguments about what is and isn’t science. Every one of these concepts is subject to being undermined by ridiculous interpretations, if we choose to take them seriously.

            Even if we abandon the computational theory of mind, the purported problem would still affect computer science, which has so far managed to make progress in ignorance of it. I can’t see this inhibiting computational neuroscience or psychology in the slightest. None of this is to say that we might not eventually discover that the computational theory is wrong, or that some other theory explains observations better, but rejecting it because someone can craft an argument dependent on absurd interpretations (which can’t actually be produced) strikes me as itself absurd.

            Like

  15. Hi Mike,

    You repeatedly make the move of comparing the problem of categorising something with the problem of accounting for how consciousness comes into existence. It’s apples and oranges. Two completely different problems. Categorising is just deciding how we humans talk about something. But computationalism posits that running an algorithm can cause something new to appear in the universe: a conscious mind.

    If our problem is of studying how computer programs or brains function, then I agree, there is no issue regarding absurd interpretations. But if we are studying consciousness and saying that what it takes for consciousness to be realised is for the right sort of computation to take place, we need objective criteria for determining whether that computation is taking place. This is self-evident to me and I’m struggling to grasp your problem with it.

    Again, take the example of deciding what is alive or what is a planet. Suppose that consciousness requires life, so that only living things can be conscious. Doesn’t that strike you as problematic? What we call ‘alive’ is largely just a useful heuristic, a word that connotes a grab bag of various related properties and not an absolute natural category. The claim that only living things can be conscious isn’t plausible unless we can define absolutely precisely what we mean by “living” (and presumably offer some kind of justification for how this relates to consciousness).

    the purported problem would still affect computer science,

    How so? I don’t think it’s an issue for computer science at all. Computer science is the study of algorithms in the abstract as well as the study of how they can be physically implemented according to a given interpretation that we decide on. It is entirely unconcerned (as it should be) with alternative possible implementations or interpretations. Because computer science is not the study of consciousness. It is not positing that running the right kind of computation brings something new into existence.

    Like

    • I’ll put my objection this way.

      It is not tenable to suppose that objective facts depend on subjective facts.

      For instance, it is not tenable to suppose that there is a possible world where beautiful paintings always have a positive electrical charge and ugly paintings always have a negative electrical charge.

      I’m supposing that everything without objective criteria is subjective, and right now that seems to include whether a computation is running. But whether a mind exists is not supposed to be subjective. It is an objective fact (albeit one perceivable only to the mind itself) that the mind exists.

      (OK, actually I don’t think this is objective at all, but if you agreed with me that it was subjective you would tacitly be accepting the MUH, because then we wouldn’t be able to distinguish between possible minds that do exist from those that don’t).

      So, if it is subjective whether a computation is running, and objective that a mind exists, it is not tenable to suppose that the existence of the mind depends on whether a computation is running.

      It is not tenable to suppose that there is a possible world where computers running certain kinds of algorithms are always conscious and other objects are not.

      Like

      • Hi DM,

        “It is not tenable to suppose that objective facts depend on subjective facts.”
        How would you define an objective fact? How do we know it is objective? What about it makes it objective?

        “But whether a mind exists is not supposed to be subjective.”
        I agree it would be nice for it not to be, but why is this necessarily true?

        “It is an objective fact (albeit one perceivable only to the mind itself) that the mind exists.”
        This may depend on your answer to the question above, but how can I have objective knowledge of my mind’s existence? Certainly I have subjective experience of it, but objective?

        “but if you agreed with me that it was subjective you would tacitly be accepting the MUH, because then we wouldn’t be able to distinguish between possible minds that do exist from those that don’t”
        Why is this allowed when we consider mathematics to be the prime reality but not when we consider the universe to be the prime reality?

        Ontologically, why should we be concerned if, assuming we achieve an understanding of the algorithms of consciousness, determining the existence of a mind is only as objective as determining whether Wordstar is executing?

        Like

        • Hi Mike,

          How would you define an objective fact? How do we know it is objective? What about it makes it objective?

          I guess I’m relying to some extent on our intuitions being aligned on this and not so much on definitions. The mass or electrical charge of something is an objective fact. Beauty or elegance or whether something is funny is subjective. I guess objective facts are the kinds of things we can build detectors for, that can be expressed without ambiguity or vagueness and that all who are in a position to observe will agree is the case.

          I can’t really justify that a mind existing is an objective fact, because I don’t actually think it is. I think your view is the one that demands that it be an objective fact for reasons I’ll explain now.

          Let A be some computation that could support consciousness, even though it is not physically instantiated right now. From A’s perspective, A and its environment exist, even if from your perspective neither exist. Let B be your mind. From B’s perspective, B and its environment exist, even if from A’s perspective neither exist. You deny the MUH, so you suppose that B really exists yet A really does not. That implies that there is something more to the existence of a mind than the subjective, as otherwise the question of the existence of A or B would be entirely symmetrical: each exists to themselves but neither exists to the other. So to deny the MUH is to say that there is an objective fact of the matter about whether minds (or universes) exist. This is problematic for a number of reasons, not least the fact that a mind is only perceptible to itself.

          Why is this allowed when we consider mathematics to be the prime reality but not when we consider the universe to be the prime reality?

          Not sure I understand, but if there is no objective fact of the matter about which minds exist and which don’t, then all minds are on an equal footing and we are no different from those that are not instantiated in our universe. Which implies that we would perceive the universe as real even if it weren’t. Which in turn implies we can eliminate the “real universe” hypothesis as unnecessary fluff. Mathematical universes are sufficient to explain all we see.

          Like

          • Hi DM,
            Okay, we may have made some progress here.

            “That implies that there is something more to the existence of a mind than the subjective, as otherwise the question of the existence of A or B would be entirely symmetrical”

            For the non-MUH view, it seems like you may be conflating a couple of things:
            1) physical existence
            2) whether a physical structure implements a mind

            It seems to me that 1) can be objectively determined since it can be measured and verified.

            If computationalism is true, then 2) can only be determined to the extent that it can be determined if a physical system is implementing a certain algorithm.

            (Incidentally, if computationalism is false, but the mind is still a physical system of some sort, then 2) can still only be determined to the extent that any physical system can be determined to perform a certain function. In other words, this interpretation issue doesn’t just apply to computation.)

            “Not sure I understand, but if there is no objective fact of the matter about which minds exist and which don’t, then all minds are on an equal footing and we are no different from those that are not instantiated in our universe.”

            So, keeping in mind the distinction above:

            If the MUH is false, no physically uninstantiated system is real. But via a sufficiently liberal interpretation, any physical system can be considered a mind. Note: “Uninstantiated” here means not even instantiated in someone’s physical mind. I’d also say the interpretation itself must physically exist. (For a simple interpretation, this can be encoded in someone’s brain.)

            If the MUH is true, instantiation is irrelevant. Any system, instantiated or not, can be considered a mind. Since the interpretation can span into Platonic space, there are no bounds for it and no requirement that it be instantiated.

            Both of the above allow the existence of minds to be subjective, but only in the MUH can non-physical patterns be included.

            Does this help to bridge the divide in our viewpoints?

            Like

          • Hi Mike,

            I’m not really seeing the conflation. You may need to clarify a little.

            I’m talking about whether a mind exists. On computationalism, this is just whether a physical structure implements a mind, but in general (on other views such as biological naturalism or panpsychism or whatever) it is also whether there is a first person perspective that perceives itself as existing. On the non-MUH view, this is a substantive, meaningful question, even if it cannot be measured objectively, at least with current technology.

            via a sufficiently liberal interpretation, any physical system can be considered a mind

            This statement is ambiguous. You could be saying that any physical system is on the spectrum from simple and automatic to complex and self-aware, or you could be saying that there exist Putnam interpretations for any physical system to show that it is implementing any algorithm (including that of any mind).

            but only in the MUH can non-physical patterns be included.

            I don’t think there is really such a thing as a physical pattern. I would say patterns are largely in the eye of the beholder. Which is a nice way of phrasing the DwP argument in a nutshell.

            Does this help to bridge the divide in our viewpoints?

            I’m afraid I’m left where I was before. I don’t really understand your argument, and I’m unsure if you understand mine.

            Let me break it down again in more detail just in case you didn’t really get what I was saying.

            Even physically uninstantiated systems can have properties, I would say. Those properties are what we would find if we did instantiate and explore them. Like, the Mandelbrot set has a kind of geography that can be explored with a computer. We can talk of it having these properties in the abstract even if we’re not instantiating it anywhere right now. Similarly we can suppose that there is some other similar fractal structure, say the Smith set, which is as yet undiscovered (and so if we reject Platonism it isn’t actually real), but it still has properties that can be explored when it is discovered.
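            As an aside, that kind of exploration is easy to sketch. Here is a minimal, illustrative Python membership test for the Mandelbrot set (the iteration cap of 100 is an arbitrary choice):

```python
# Minimal sketch: probe the Mandelbrot set's "geography" by iterating
# z -> z^2 + c and checking whether the orbit stays bounded.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # orbit escaped: c is outside the set
            return False
    return True              # bounded so far: c is (apparently) inside

# These facts hold whether or not anyone runs the code: 0 is in the set,
# 1 is not. Instantiating the computation reveals them; it doesn't create them.
print(in_mandelbrot(0))   # True
print(in_mandelbrot(1))   # False
```

            The point being that the answers the program prints are properties of the abstract structure, accessible to anyone who chooses to instantiate it.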

            For people that exist within algorithms, their properties include their life stories, their beliefs, attitudes and so on. We can talk about them as if they were real even though they are not, because all their properties are in principle accessible to us if we just run the simulation. Since they don’t really exist, they have something like the ontological status of characters in a work of fiction and we can treat them as such (as I will in the remainder of this comment).

            Now, what I’m trying to show is that rejecting the idea that there is an objective fact of the matter about whether a given mind exists is tantamount to accepting the MUH.

            Compare your real, physically instantiated mind with the fictional mind of Han Solo. If there is no objective fact of the matter regarding whether your mind exists, and you only exist to yourself, subjectively, then can’t the same thing be said of Han Solo’s mind? He does not exist objectively, but from his perspective he does, so it would seem that Han Solo’s mind is of the same ontological status as yours.

            Furthermore, it calls into question the very idea that the physical world objectively exists. If Han Solo’s mind is of the same status as yours and mine, then there is very little reason to believe that our minds are indeed physically instantiated and that our world exists at all. We only know that our world exists (and say that it exists objectively) because the community of humans on planet earth agrees that it does. But we have no way of knowing that the community of humans on planet earth is not entirely fictional, and so no way of knowing that our world objectively exists.

            We can yet cling to the belief that there is an objective fact of the matter about whether the world exists, but it is evidently little more than an empty assertion. In the face of a scenario where every possible world is perceived to exist by its inhabitants, whether it physically exists or not, the concept of physical existence becomes meaningless.

            Like

    • Hi DM,
      Okay, let me restate your logic and you can tell me if I have it right.
      1. Given the DwP argument, the existence of minds is open to interpretation and therefore subjective.
      2. From a mind’s perspective, it exists, but the existence of everything other than it is subjective. (Idealism)
      3. But if both 1 and 2 are true, then everything is subjective, including existence itself.

      My first question would be, why is mathematics exempt from 3? Why aren’t mathematical concepts also subjective and cut out by the same razor?

      My other question is the same as for panpsychism, what follows if it is true?

      Like

      • Hi Mike,

        Given the DwP argument, the existence of minds is open to interpretation and therefore subjective.

        No. Everyone (apart from me) seems to assume that the existence of minds is not open to interpretation. Your mind exists. My mind exists. The mind of Han Solo does not exist. But, given the DwP argument, the existence of computations is open to interpretation and is therefore subjective. Ergo, the existence of computations cannot explain the existence of minds.

        From a mind’s perspective, it exists, but the existence of everything other than it is subjective. (Idealism)

        I’m not sure if this is supposed to be my interpretation of your view, Bishop’s view or my own. My view is just to adopt the convention that all mathematical objects exist. As a mere convention, there is no fact of the matter. I deny that objective existence is really a meaningful predicate, because everything that can exist does or does not exist together depending only on this convention. It is a predicate that draws no distinctions and so is empty.

        Bishop’s view and yours would seem to be that there is a fact of the matter, because your mind exists and Han Solo’s does not. The two of you would also presumably agree that our universe exists and that the Star Wars universe does not.

        I would say that the only sense in which the universe and objects in it objectively exist is in the sense that they objectively exist from the perspective of an observer within. This is an objective fact about something subjective or observer-relative, in the same way that it is an objective fact that I like coffee (while whether coffee is tasty is subjective). As such I’m not sure I would agree that “everything other than it is subjective” is an accurate characterisation of my view.

        My first question would be, why is mathematics exempt from 3?

        Mathematical objects either exist or do not as a matter of convention. If we define “existence” as Platonists do, then mathematical objects objectively exist (and so do we). If we reject Platonism, then mathematical objects do not objectively exist (and neither do we).

        My other question is the same as for panpsychism, what follows if it is true?

        It’s just really absurd. And unparsimonious. And leads to unnecessary silly problems. Any view which leads to accepting DwP is immediately suspect and is evidence that you’ve probably gone wrong somewhere in your reasoning.

        For instance it would mean that we are all almost certainly pixies and that our world does not exist, while at the same time insisting that there is a real world nothing like our own (perhaps only a trivial world which is nothing but a single oscillating string or something) but which is required to exist for us to be conscious. I don’t know why you would regard this as less absurd than the MUH.

        Like

        • Hi DM,
          So, it’s not a matter of me seeing DwP as less absurd than the MUH.

          First, given all the problems with interpretations we’ve discussed, I see the DwP as toothless, all smoke and no fire. The more plausible pixies are, to me, just AI implementations hiding in the interpretations, and the more absurd ones require impossible interpretations. I know you disagree, but given these interpretation issues, I’m not troubled by the DwP conclusions.

          Second, even if I were troubled, I still fail to see how the MUH helps. It may perhaps recontextualize the pixies, but this seems like semantics. To whatever extent the pixies are actually there, they remain if the MUH is true. (Your clarifications about observer dependent existence did convince me that the MUH doesn’t make it worse. Thank you!)

          If it makes you feel any better, I think the MUH is a lot more promising of a concept than DwP. I personally remain more inclined toward something like mathematical empiricism, but I can see the appeal of the MUH. On the DwP, maybe Professor Bishop will produce a more refined version that I find more disturbing; I’ll keep an eye out.

          Like

          • Hi Mike,

            Sounds like we’re at the end of the line!

            Thanks for the conversation. I’ve enjoyed it and it has helped to firm up some of my thinking.

            But I’m left with the impression that you’re more or less ignoring some problems. I get at an intuitive level how the absurdity of the interpretation is an indication that we can discount the existence of the pixies, but I can’t leave it at that. We are left without objective criteria to measure this absurdity or a threshold where a computation can be said to occur. I don’t understand how you can be happy with that if you explain your existence as arising out of a conscious computation, especially if you think there is a fact of the matter regarding whether you exist.

            I’m guessing you basically have faith that such criteria could be found even if we don’t have them yet. That is a plausible position, but it still means there is a pretty major problem to solve to put computationalism on a solid footing.

            I am in the process of writing up my own thoughts on this issue. I hope you don’t mind, but I mention you once or twice as I discuss possible responses to DwP.

            Like

          • Thanks DM. I enjoyed it too.

            On ignoring the problems, I guess in a universe where I’m obliged to accept quantum mechanics and general relativity, accepting this just doesn’t strike me as out of bounds.

            “I’m guessing you basically have faith that such criteria could be found even if we don’t have them yet. ”
            I actually have little faith that such criteria will be found. Again, I accept the implications, such as they are.

            I’d be honored to be mentioned in your write up. Looking forward to reading it!

            Like

          • Hi Mike,

            I’m obliged to accept quantum mechanics and general relativity

            Yes, but at least those are well-defined theories that make specific predictions.

            The idea that consciousness is realised by a physical computation is not in this category at all unless you can define what constitutes a physical computation. The problem is not that the hypothesis is unintuitive, it is that it is too vague to be meaningful.

            Like

          • Hi DM,
            Maybe a better example would have been natural selection. That said, I agree that computationalism is currently more a broad approach to understanding neuroscience and psychology data rather than a rigorous theory. But these issues seem like they would apply to any physical understanding of the mind.

            Like

  16. lemarkle says:

    Dear DisagreeableMe [and SelfAwarePatterns]: I also would, of course, be delighted to read a summary of this exegesis of DwP. I just regret that my own insane workload has prohibited me from participating in these discussions to the extent that I had hoped/wanted to (and that your insightful engagements with DwP deserve), not least as there remain several key issues I would like to challenge; that said, perhaps DM’s forthcoming summary will be the best place to reengage …

    Like

  17. Tom W. says:

    Hi guys,

    As others have said, I’ve thoroughly enjoyed this extended discussion – thank you!
    Could you forgive a rather naïve question though?
    – How is this ‘computationalism’ any different from ‘process philosophy’ (or ‘process metaphysics’ or any other ‘process-[whatever]’)? Both have been used to ‘lead us into thinking’ towards panpsychism.

    Secondly, here’s an appeal for your own opinions and feedback:
    – If “Everything is consciousness” can we really continue to call it ‘consciousness’? The apeiron as was defined by Anaximander would be this ‘consciousness’ too then… “Energy (which physicists fervently claim) is neither created nor destroyed” (‘…in a closed system’ – but isn’t ‘Everything’ a closed system?)… couldn’t Energy be this ‘consciousness’ too? I think that this ‘Everything is consciousness’, if it were true, means we gotta stop calling it ‘consciousness’ – because at this point it becomes a universal substance and I dunno about you but I can think of other names for it than ‘consciousness’…

    I look forward to hearing from you!

    Thanks,

    Thomas

    Liked by 1 person

    • Hi Tom,
      Interesting questions. You just made me look up “process philosophy.” Based on the intro in the wiki article, I’d say aspects of this could be considered a case of process philosophy. For me, computationalism starts with the realization that the mind is what the brain does, which is, of course, the brain in its processes. That said, some of the others may have a different take on it.

      Your second question, I think, makes the same point I did in the post. If we consider everything to be conscious, then the interesting question shifts to what makes human and animal consciousness different from wall or rock consciousness? Admittedly, arguing about the ontology of the pixies is an aspect of that question.

      BTW, Disagreeable Me just did his own blog post on this, that you might find interesting.
      http://disagreeableme.blogspot.com/2016/02/putnam-searle-and-bishop-failure-of.html

      Liked by 1 person

      • Tom W. says:

        Thanks SAP!
        I read DM’s post, but I’m now thirsting for his own take on things (promised for a future post).

        I’d like to rephrase your quote below, to hopefully draw your attention to a somewhat larger problem – one I was alluding to when I mentioned the apeiron – so that:

        “If we consider everything to be conscious, then the interesting question shifts to what makes human and animal consciousness different from wall or rock consciousness?”

        becomes:
        “If we consider everything to be [stuff], then the interesting question shifts to what makes human and animal [stuff] different from wall or rock [stuff]?”

        which further becomes:
        “If we consider every[stuff] to be [stuff], then the interesting question shifts to what makes [stuff] and [stuff] [stuff] different from [stuff] or [stuff] [stuff]?”

        Do you see what’s happening? The problem is that by making everything the same stuff, the question becomes one of distinction. To get a feel of how tricky that is, imagine trying to gather a ‘piece of water’ while you’re underwater! You might want to say something like “Easy! Just seal it in a plastic bag!” – but the trickiness is that here, you’ve got nothing but water – so even your ‘plastic bag’ must be made of water… so how, if everything’s the same stuff, does one thing ‘distinguish itself’ from another thing? Ultimately, I’m asking how do things ‘exist’ (i.e. from Latin “ex” + “stare” – to “stand out”)?

        So, more fundamentally, if ‘everything is conscious[ness?]’, then even consciousness breaks down into a problem of “Sameness and Difference”… Something I’ve been banging my head against for almost two years now…

        I look forward to your ponderings on this

        Like

        • Thanks Tom.

          I tend to think that anything we make a distinction about, we do because it is useful to us in some way. We see water as water, and not wood, because the distinction between water and wood is useful. Of course, both water and wood are made of quarks and electrons, but that’s not a particularly useful fact when I’m trying to row a wooden boat across a lake.

          I agree that most naturalistic conceptions of panpsychism arise from overly broad definitions of “consciousness”, definitions that I think are too broad to be very useful. People can get away with this maneuver because no one can authoritatively say exactly what consciousness is yet. But like defining water to be just quarks and electrons, it leads to conclusions not particularly useful for anything.

          Liked by 1 person

          • Tom W. says:

            Interesting that you think we’re the ones making the distinctions – would it not be those things whose distinctions make themselves known to us? (i.e. that their differences ‘merit’ distinctions and thus to have their own words/concepts/references?)
            That we call one ocean the Pacific and the other the Atlantic is, surely, a human convenience; but that we call them Oceans and not Mountains or Plains is not so much a convenience as it is imposed by the large-expanse-of-water-iness that they present.
            Do you see my point? I’m saying that (IMO) differences give rise to things – and then we refer to those things ‘because’ they are different – not that ‘because’ we refer to things they therefore become different.
            Thus one starts at a ‘low’ level of distinction (quarks and electrons, as you put it). But then those quarks and electrons behave differently and thus ‘distinguish themselves’, and thus we attribute a name to those differences… rising all the way up to ‘water’ and ‘wood’ – as you so rightly pointed out, referring to things by their most common constituents would be a deliberate loss of distinction…


          • This is actually quite a complex question, and I’m not sure my views on it are completely settled. I used to think there was a sharp divide between objective and subjective distinctions. I’m far less sure of that today. Instead, I view it as much more of a spectrum, with things that are more objective on one end, and things that are more subjective on the other.

            For example, take animal species. Biologically, a species is often defined as a group of animals that can successfully breed with one another. But you can often have some members of one species who can successfully breed with select members of another species. This is enough of an issue that it even has a name: “the species problem.” It’s worth noting that if every member of every species that ever lived were present and lined up morphologically, there would be no species, just individuals with spectrums of traits.

            Even the distinction between animals and plants can be problematic, with some species being motile (and thus animal-like) for part of their life cycle, but rooted in place (plant-like) for another.

            The distinctions between bumps on a plain, hills, and mountains largely amount to what these landforms mean to us. And the distinction between a lake and a sea amounts to salinity, except that lakes have some salinity too, so it really amounts to whether the salinity is too high for us.

            Getting back to your example of the distinction between a mountain and an ocean, that seems to be a distinction that is closer to the objective side of the spectrum, at least on Earth with its temperature ranges. An alien from Pluto, with its ice mountains and cryovolcanoes, would likely regard the Pacific Ocean as a vast expanse of molten rock.

            All of which is to say, I think that nature has patterns, which sometimes cluster in ways that allow us to recognize common patterns and form categories, but nature is rarely strict about these clusters. The clustering is real and objective but the distinctions between them are rarely as sharp as we perceive. Nature has no problem throwing examples at us that confound what we take to be solidly objective distinctions.


          • Tom W. says:

            You make very compelling points – and currently, I’m of the opinion that while reality is continuous, our description of it is (necessarily?) discrete – i.e. that things distinguish themselves, but where we place the boundary of a thing is arbitrary (or dependent upon us humans). But I know my ideas are very weak right now – I flip-flop a lot (just look at some of the crazy ideas on my blog!) because I’m working them out progressively… And in parallel, I’m trying to figure out a discrete system that can successfully describe continua (while acknowledging the human arbitrariness)… it’s a real head-scratcher! George Spencer-Brown tried (see Laws of Form), but I disagree with some of his initial assumptions – which in turn leads me to disagree with quite a bit of Set Theory too… so yeah. Lots of work ahead! 😀 I’ll be sure to let you know if I succeed – so you can test it out in AI ponderings, see if it’s useful there too!


          • The discrete versus continuous thing is an interesting quandary. Physics appears to be divided between two overarching theories: quantum theory and general relativity. Quantum theory is, of course, discrete, while general relativity is continuous. The problem is that these theories disagree, but their disagreement only appears in locations that can’t be observed, such as black holes, the big bang, etc. Which theory ultimately “wins” (assuming either survives unscathed) should tell us whether reality is ultimately discrete or continuous.

            Myself, I think we’ll eventually discover that it is discrete in some fashion, although the level of that discreteness will be at or smaller than the Planck length.
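            For a sense of that scale, the Planck length follows directly from three fundamental constants; here’s a quick back-of-the-envelope sketch in Python (constant values are the standard CODATA figures):

            ```python
            import math

            # Physical constants (CODATA values)
            hbar = 1.054571817e-34  # reduced Planck constant, J*s
            G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
            c = 2.99792458e8        # speed of light, m/s

            # Planck length: the scale at which quantum effects of gravity
            # are expected to dominate
            planck_length = math.sqrt(hbar * G / c**3)
            print(f"{planck_length:.3e} m")  # roughly 1.616e-35 m
            ```

            That’s about 20 orders of magnitude smaller than a proton, which is why any discreteness at this scale would be far beyond direct observation.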

            I saw your new post on Möbius strips and do plan to read it (hopefully today).


    • Hi Tom,

      I don’t know anything about process philosophy or process metaphysics so would not like to comment too much. A quick scan of wikipedia suggests it’s the kind of thing I might not regard as terrifically meaningful or sensible. Seems a little “continental” to me. I have more affinity with analytic philosophy.

      Regarding whether everything is conscious:

      Few people actually believe everything is conscious. “Everything is conscious” is an absurd conclusion of the DwP, which is intended to show that computationalism is false. Neither Bishop nor anybody else really takes this conclusion seriously as something that might be true. So you’re kind of making Bishop’s point by underlining the absurdity.

      However, there are some people who do hold to a type of panpsychism. David Chalmers is one of them. He is persuaded by his own philosophical zombie argument: because he can conceive of a being which is physically just like us and behaves just like us, but which is not actually conscious, there must be some possible world where such a being could exist. From this he concludes that the physical facts of this universe are insufficient to explain consciousness, and that something else is required to bring it to life. He posits that consciousness is a fundamental part of nature, pervading everything. It happens to be present in this world (universe), but there could be worlds without it. He doesn’t think that simple systems are self-aware in the way that we are, but he does think they have associated with them the primitive building blocks of consciousness, just as atoms are the primitive building blocks of complex structures such as our bodies.


  18. Pingback: Are rocks conscious? | SelfAwarePatterns

  19. Pingback: Panpsychism and layers of consciousness | SelfAwarePatterns
