Does information require conscious interpretation to be information?

Peter Kassan has an article at Skeptic Magazine which sets out to disprove the simulation hypothesis, the idea that we’re all living in a computer simulation.

I personally find arguing about the simulation hypothesis unproductive.  Short of the simulation owner deciding to jump in and contact us, we can’t prove the hypothesis.  Even if the simulation has flaws that would allow us to find and perceive them, we can never know whether we are looking at an actual flaw or just something we don’t understand.  For example, is quantum wave-particle duality a flaw in the simulation, or just a puzzling aspect of nature?

Nor can we disprove the simulation.  There’s simply no way to prove to a determined skeptic that the world is real.  And if we are in a simulation, it appears to exact unpleasant consequences for not taking it seriously.  It effectively is our reality.  And we have little choice but to play the game.

But this post isn’t about the simulation hypothesis.  It’s about the central argument Kassan makes against it, that there can’t be a consciousness inside a computer system.  The argument Kassan uses to make this case is one I’m increasingly encountering in online conversations, involving assertions about the nature of information.

ASCII code for “Wikipedia”
Image credit: User:spinningspark at Wikipedia

The argument goes something like this.  Information is only information because we interpret it to be information.  With no one to do that interpretation, the patterns we refer to as information are just patterns, structures, configurations, with no inherent meaning.  Consequently, the physical machinations of computers are information processing only because of our interpretations of what we put into them, what they do with it, and what they produce.  However, brains do their work regardless of the interpretation, so they can’t be processing information, and information processing can’t lead to consciousness.
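The ASCII example in the image makes this concrete.  Here’s a minimal illustration (my own, not Kassan’s): the same byte pattern yields different “information” under different readings, and nothing in the bytes themselves selects one reading over another.

```python
# The same physical pattern (a sequence of bytes) under three "interpretations".
pattern = bytes([0x57, 0x69, 0x6B, 0x69, 0x70, 0x65, 0x64, 0x69, 0x61])

as_text = pattern.decode("ascii")               # read as ASCII characters
as_numbers = list(pattern)                      # read as small integers
as_bits = "".join(f"{b:08b}" for b in pattern)  # read as raw bits

print(as_text)     # Wikipedia
print(as_numbers)  # [87, 105, 107, 105, 112, 101, 100, 105, 97]
print(as_bits[:16])
```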

To be fair, this brief summary of the argument may not do it justice.  If you want to see the case made by someone who buys it, I recommend reading Kassan’s piece.

That said, I think the argument fails for at least two reasons.

The first is that it depends on a particularly narrow conception of information.  There are numerous definitions of information out there.  But for purposes of this post, we don’t need to settle on any one specific definition.  We just need to discuss an implied aspect of all of them, that information must be for something.

The people making the argument are right about one thing.  Information, in and of itself, is not inherently information.  To be information, something must make use of it.  But the assertion is that this role of making use of information can only be fulfilled by a conscious agent.  No conscious agent involved, then no information.  The problem is that this ignores the non-conscious systems that make use of information.

For example, if the long molecules that are DNA chromosomes somehow spontaneously formed by themselves somewhere, there would be nothing about them that made them information.  But when DNA is in the nucleus of a cell, the proteins that surround it create mRNA molecules based on sections of the DNA’s configuration.  These mRNA molecules physically flow to ribosomes, protein factories that assemble amino acids into specific proteins based on the mRNA’s configuration.  Arguably it’s the systems in the cell that make DNA into genetic information on how to construct its molecular machinery.
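This flow can be sketched in a few lines (a toy model of my own, drastically simplified; the codon table here is just a tiny hypothetical fragment of the real genetic code), showing the cell’s machinery treating the DNA pattern as instructions:

```python
# Toy model: a cell's machinery "using" DNA as information.
# Transcription: DNA template -> mRNA; translation: mRNA codons -> amino acids.

DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna_template: str) -> str:
    """Build the mRNA complementary to the DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in dna_template)

def translate(mrna: str) -> list:
    """Read the mRNA three bases at a time, as a ribosome does."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

mrna = transcribe("TACAAACCGATT")   # template strand -> "AUGUUUGGCUAA"
print(translate(mrna))              # ['Met', 'Phe', 'Gly']
```

The DNA strand is just a string of symbols; it’s the surrounding machinery, here the `transcribe` and `translate` functions, that makes it function as information.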

Another example is a particular type of molecule that is allowed entry through the cell’s membrane.  There’s nothing about that molecule in and of itself that makes it information.  But if the chemical properties of the molecule cause the cell to change its development or behavior, then we often talk about the molecule, perhaps a hormone, being a chemical “signal”.  It’s the cell’s response to the molecule that makes it information.

But even in computer technology, there are often transient pieces of information that no conscious observer interprets.  The device you’re reading this on likely has a MAC address which it uses to communicate on your local network.  It probably contacted a DHCP server to get a dynamically assigned IP address for it to communicate on the internet.  It had to contact a domain name server to get the IP address for this website.  The various apps on it likely all have various internal system identifiers.  None of these things are anything you likely know or think about, but they’re vital for the device to do its job.  Many of the dynamically assigned items will come into and go out of existence without any conscious observer ever interpreting them.  Yet it seems perverse to say that these aren’t information.
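That lifecycle can be roughed out as follows (a toy allocator, not a real DHCP implementation; the network range and MAC address are made up): a dynamically assigned address comes into existence, does its causal work, and vanishes without any conscious observer ever inspecting it.

```python
# Toy sketch of a DHCP-like lease: an address is assigned, used, and released
# without any person ever interpreting it.
import ipaddress

class ToyDhcpPool:
    def __init__(self, network: str):
        # All assignable host addresses in the network, in order.
        self.free = [str(ip) for ip in ipaddress.ip_network(network).hosts()]
        self.leases = {}  # MAC address -> leased IP

    def request(self, mac: str) -> str:
        """Assign (or re-confirm) a transient IP address for a device."""
        if mac not in self.leases:
            self.leases[mac] = self.free.pop(0)
        return self.leases[mac]

    def release(self, mac: str) -> None:
        """The lease expires; the 'information' goes out of existence."""
        self.free.append(self.leases.pop(mac))

pool = ToyDhcpPool("192.168.1.0/30")
ip = pool.request("aa:bb:cc:dd:ee:ff")  # device gets a transient address
print(ip)                               # 192.168.1.1
pool.release("aa:bb:cc:dd:ee:ff")       # lease expires, address returns to pool
```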

Of course, we could fall back to the etymology of “information” and insist on defining it only as something that inputs Platonic forms into a conscious mind (in-form).  But then we’ve created a need to come up with a new word for the patterns, such as DNA or transient IP addresses, that have causal effects on non-conscious systems.  Maybe we could call such patterns “causalation”.  Which means we could talk about brains being causalation processing systems.  Of course, computers would also be causalation processing systems, which just brings us right back to the original bone of contention.

And that in turn brings us to the second reason the argument fails.  Every information processing system is a physical system, and can be described in purely physical terms.  Consider the following description.

A system is constantly propagating energy, at small but consistent levels, through portions of its structure.  The speed and direction of the energy flows are altered by aspects of the structure.  But many of those structural aspects themselves are altered by the energy flow, creating a complex synergy between energy and structure.  The overall dynamic is altered by energy from the environment, and alters the environment by the energy it emits.  Interactions with the environment often happen through intermediate systems that modulate and moderate the inbound energy patterns to a level consistent with the central system, and magnify the causal effects of the emitted energy.

This description can pertain to both computers and central nervous systems.  The energy in commercial computers is electricity, the modifiable aspects of the structure are transistor voltage states, and the intermediate systems are I/O devices such as keyboards, monitors, and printers.  The energy in nervous systems is electrochemical action potentials, the aspects of modifiable structure are the synapses between neurons, and the intermediate systems are the peripheral nervous system and musculature.

(It’s also worth noting that computers can be built in other ways.  For example, they can be built with mechanical switches, where the energy is mechanical force and the modifiable aspects are the opening and closing switches.  A computer could, in principle, also be built with hydraulic plumbing controlling the flow of liquids.  In his science fiction novel The Three-Body Problem, Cixin Liu describes an alien computer implemented with a vast army of soldiers, with each soldier acting as a switch, raising or lowering their arms following simple rules based on what the soldiers next to them did.)
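Liu’s soldier computer is easy to sketch (my own toy rendering, not from the novel): each “soldier” applies one fixed rule to its neighbors’ arm positions, and because that rule (NAND) is universal, rows of soldiers can be composed into any computation.

```python
# Toy soldier-computer: each "soldier" is a NAND rule on two neighbors' arms
# (1 = raised, 0 = lowered). NAND is functionally complete, so any logic
# circuit can be built from soldiers following this one rule.

def soldier(left: int, right: int) -> int:
    """Raise your arm unless both neighbors have theirs raised (NAND)."""
    return 0 if (left and right) else 1

def xor(a: int, b: int) -> int:
    """XOR built purely from ranks of NAND-soldiers."""
    s1 = soldier(a, b)
    return soldier(soldier(a, s1), soldier(b, s1))

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

None of the soldiers needs to know what the army is computing; the causal structure does the work regardless of any interpretation.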

It’s the similarities between how these physical systems work that make it easy for neuroscientists to talk in terms of neural circuits and neural computation, and to see the brain as an information processing organ.  Engaging in linguistic jiu jitsu over the definition of “information” (or “computation” as often happens in similar arguments) doesn’t change these similarities.

Not that there aren’t major differences between a commercial digital computer and an organic brain.   (Although the differences between technology and biology are constantly decreasing.)  The issue isn’t whether brains are computers in the narrow modern sense, but whether they are computational information processing systems.

So, am I being too dismissive of this interpretation argument?  Or are there similar arguments that may make a better case?  How do you define “information”?


32 Responses to Does information require conscious interpretation to be information?

  1. Mike, I think you have picked out a topic that is important and profound. Personally, I think understanding the nature of consciousness requires its resolution. This resolution requires choosing a definition of information and then being consistent with that definition. But I’m afraid I don’t like yours.

    You said “The people making the argument are right about one thing. Information, in and of itself, is not inherently information. To be information, something must make use of it.”

    I think this idea of latent information is confusing and unnecessary. We have no problem saying a book contains information whether or not anyone ever reads it.

    I think we are better off going back to Plato, but not the way you did. Information is not a pattern (Platonic form) input into a mind. Information is a pattern (specific set of data) input into physical stuff, an extant form. One result of this definition is that all matter contains information (it from bit?).

    Now you correctly say that information must be for something, and I hope you will accept that this could be restated as information must be about something. By the above definition, information is about everything in its causal history, and by causal history, I mean everything that led up to that pattern being instantiated in that physical thing. Thus, the information in a watch is about everything in the causal history of that watch, including not only the watch maker and the metal miners and the mathematicians needed to make a watch, but also the stars needed to make heavy metals, etc. The key is in the interpretation, and “interpretation” needs to be well defined.

    You said “To be information, something must make use of it.” I would change this to “To interpret information, something must make use of it.” Thus, the examples you gave (DNA, hormones) are examples of interpretations.

    Now I would make the claim that consciousness is just a combination of such interpretations, and thus all interpretations are conscious events. The differences between conscious entities are simply the differences between the kinds of interpretations those entities can make. However, I notice you might disagree, given that you said “The problem is that this ignores the non-conscious systems that make use of information.” By my understanding, there’s no such thing as a non-conscious system that makes use of (interprets) information. But clearly some people don’t think all such interpretations are conscious, and so I would challenge them to point out which interpretations are conscious and which are not.

    To bring this back to Kassan’s essay, his mistake seems to be saying that computers don’t interpret information, which by my understanding they clearly do.



    • James,

      “Information is a pattern (specific set of data) input into physical stuff, an extant form.”

      I’m not sure if I completely follow your definition. Although if I had given a definition in the post, it would have sounded similar. Specifically, mine would have been something like this: patterns that may have causal influence on a system. That seems like it would have covered your book example. It would also cover Kassan’s tree rings and other examples in his article.

      I follow your line of reasoning on aboutness. I had very similar thoughts when I first started working this out. What makes me shy away from that attribute is that a definition, it seems to me, needs to account for both good and bad information. This isn’t being esoteric. People talk about having “bad information” all the time.

      For example, astrological signs are information, bad information, but information nonetheless. For centuries, they had causal influence on human decisions (and in fact still do for some people), despite the fact that they fundamentally weren’t about what people took them to be about.

      Granted, information like astrological signs had causal influence on people because they thought they were about relevant things. But the other issue is that I see nothing in cells to make me think a cell considers its DNA to be about anything. It seems to just react to it.

      But, on consciousness, if I’m understanding correctly, you defined “consciousness” to be that which interprets information, and “interpretation” to be making use of information. By that definition, cells and computers are conscious and interpret their data.

      My issue with that would be the same as the one I have for panpsychism, that defining consciousness too broadly takes away the usability of the definition. If cells and computers are conscious, I’m still interested in what separates cell consciousness from brain consciousness.

      On your challenge, I grant that it’s an issue under your definitions. But to me, interpretation requires the ability to put raw data into some kind of context, to fit it within some broader framework. I can’t see that something like a cell does that. I do agree that computers can do it, but it seems like they can do it without perception, attention, imagination, or metacognition, in other words, without necessarily climbing very far up what I see as the ladder of consciousness.


      • Mike, you said a definition has to account for both good and bad information. I’m going to go with Floridi on this one. He would say misinformation and disinformation are simply not information. They have the appearance of information, but they do not provide the causal content of information.

        I don’t think you quite get my definition of consciousness. Consciousness is not a thing. Consciousness is an umbrella term for those things associated with certain kinds of events, specifically, interpretation events as I defined previously.

        You are correct to say that by that definition cells and computers are conscious. You are also correct to be “interested in what separates cell consciousness from brain consciousness”. That difference will include a great number of capabilities, including the ability to put data into context, as well as other abilities like memory, combining percepts into concepts, etc. If we can accept that these are all results of interpretations of information as described, we can begin to tease out those specific capabilities.



  2. paultorek says:

    Okay Mike,
    Brains, and human beings, are computational information processing systems. But also, human beings are engines, in the sense that they do mechanical work by consuming free energy and increasing the entropy of their environment. One can, with a little ingenuity, name hundreds of categories or properties that human beings belong to or have, without its being obvious what any of them have to do with consciousness. For the simulation argument to work, we’d have to think it at least probable that simulations would be conscious. (Kassan sets his own bar too high, if he thinks he needs to show it’s impossible for computer simulations to be conscious.)

    Brian Cantwell Smith has a great talk on the relation between computation and information, which I think it would be worth your (admittedly long) while to watch: https://www.youtube.com/watch?v=USF1H70bRl0


    • Paul,
      Thanks for the link! Very interesting talk. I especially liked his point that computational theory has a disconnect from actual computation. That fits my own experience as a programmer for decades who often found that a lot of the theories didn’t quite match up with the reality. In particular, like him, I always found practical programming to be a much more empirical endeavor than how theorists presented it.

      At one point, while he was talking about the idea that all the theories “are wrong” and that coming up with a right theory is hard, I almost expected him to utter something like, “the hard problem of computation,” which would have been delicious.

      I also liked his point that we need to discard the box around computation, that computation, in and of itself, isn’t special. That strongly fits with my own intuition. It leads me to limited pancomputationalism, although his remarks against “panmechanism” seemed to indicate that it doesn’t necessarily lead him there.

      That said, I didn’t see anything in his talk that undermined the idea of consciousness in a simulation, or it being computational in general. But maybe I missed something?


    • I’m going to disagree with Brian Cantwell Smith from the video that Paul provided above. I’m no computer guy, and I can’t say that I understood quite a bit of his discussion, but it seems to me that he’s let a career spent looking for a “true” definition of computation frustrate him into a nihilistic position (or it might be the opposite, the pancomputationalism that Mike mentioned). I can’t blame him given the horrible state of epistemology today, but I do think that my EP1 could help. There are no true definitions, only more and less useful ones in the context of a given argument. So if someone makes a definition for “computation”, it would be erroneous to say “That’s wrong”. Just accept it in the attempt to understand the points being made, and then comment, taking any provided definitions as givens. As it happens, I have what I consider to be a quite useful way to distinguish mechanics from computation.

      Consider a mechanical typewriter. When you press a symbol on it you can actually witness as an arm is forced to rise up and strike the paper. As I define it there is no computation here. But a very different process occurs if you press a symbol on a digital device. Here information is sent off for algorithmic processing, and an output transpires on the basis of this processing. This definition isn’t “true”, though I do consider it to provide a useful distinction between natural processes.

      Before life on earth I suspect that nothing computational (from my definition) existed — just pure mechanics everywhere. But as Mike mentioned in the OP, genetic material algorithmically processes input to provide associated output. This is what I consider to be the world’s first form of computation (though I would be quite interested in any further suggestions).

      Then I consider the second form of computer on earth to be represented by central organism processors. In the following post (https://selfawarepatterns.com/2016/12/04/is-consciousness-a-simulation-engine-a-prediction-machine/) Mike implied that during the Cambrian there were organisms with nerve networks where a given input would lead to a unique associated output. Well once the nerve system came together in a single place, an assortment of inputs could then be taken together for algorithmically processed forms of output. The human brain is one of these second forms of computer. (Conscious and non-conscious would be a separate distinction.)

      Then the third and final variety of computer on earth from my definition, is represented by various machines that we build which algorithmically process input to provide associated output.


      • I have to admit I’m not quite as taken with Smith’s argument today as I was yesterday. His four types of theories seem to really be two, formal and effective, with the others more being attributes than theories. But it’s the fourth attribute he lists, digital processing, that I think perhaps gives more life to the formal and effective theories than he might envision.

        Still, those theories, as I understand them, don’t encompass all computation. There are analog computers which those theories can only approximately describe. Indeed, it’s the digital paradigm, the system being built on binary primitives, that allows physical computers to effectively implement those formal or effective systems.

        That said, I suspect Smith would argue that he is talking in terms of usefulness, that it’s what he means by “right” and “wrong” in this context. He might also argue that a digital system is just an analog one which decreases the probability of non-discrete processing, but can’t completely eliminate it.

        I don’t think I’m getting the division you make between mechanical typewriters and digital devices. Is it the digital aspect, the system being built on binary values, or the sheer amount of mechanics involved in the digital one?

        Myself, I think the amount of computation done by the typewriter is low, too low for it to be pragmatically considered a computational system, at least in most contexts, but for me, it’s a matter of degree rather than a sharp distinction.

        Thanks for highlighting my simulation post! If your answer to the typewriter question above was that the digital aspect is what made the difference, how would you account for the brain’s seemingly analog nature?


    • Mike,
      Not being a computer guy I suppose that I should have known better than to use a technical term such as “digital” here. No, it’s not the binary component of one of our computers that I was talking about, but rather the potential for more dynamic sorts of output for a given input via algorithmic processing, rather than relatively fixed mechanics. This should include Searle’s “Chinese Room” and all analog examples of algorithmic processing (so far as I know). It’s really a metric of dynamism that I’m referring to rather than a given medium.

      We could say that a mechanical typewriter algorithmically processes information as it changes to reach its outputs, but then what dynamic potential is there for this input to be algorithmically processed in separate ways on the basis of separate varieties of other inputs? As I recall, the key for a given letter alone provides the lower case form, and in tandem with the “Caps” key provides the uppercase form. Input alteration beyond that might require changing the arm head and such (which is difficult to consider algorithmic). Conversely the potential for a given computer key input to provide various algorithmic forms of output seems endlessly more dynamic.

      If that helps clarify my meaning, would you agree that the first variety of computer on earth existed by means of genetic material, and the second with central organism processors, while the third were of human fabrication?

      Regarding Smith potentially meaning that all definitions of computation that he’s ever considered are merely not useful rather than wrong, that’s a standard violation of my EP1. Each definition must be considered in reference to an associated argument in the attempt to understand the meaning of the author. Of course it’s nice when society reaches generally accepted understandings for various speculative terms — these are important tools from which to build — but an author must still be free to fabricate his/her own tools regardless. Beyond mechanics/computation I mean to help humanity with “consciousness” and various other speculative terms.


      • Eric,
        Sorry, wasn’t trying to be gotcha in any way on the digital thing. Sometimes computer nerds forget that others aren’t as vested as we are in the details.

        I can see the dynamism metric. But it does strike me as one that identifies the difference as one of extent, not a sharp qualitative difference. My reasons are a little different, but it puts us in more or less the same place: mechanical typewriters are too low in information processing to pragmatically be considered primarily an information processing system.

        I don’t think I’m prepared to label DNA, in and of itself, a computer, much less the first one. Many biologists think we had an RNA world before DNA, and who knows what those systems looked like? And DNA by itself seems like just the data and possibly some of the code. We might have to include the entire cell nucleus, including the transcribing and duplicating proteins, in the computer description. Maybe.

        What makes me hedge somewhat is my limited knowledge of microbiology. If the nucleus’ causal effects are primarily informational, I could see labeling it a “computer”, but if those causal effects come down to how much energy it emits, as opposed to the patterns of that energy, then it could be one of those systems where the fraction of processing involving information is too low to justify that label. But it seems reasonable to think that the nucleus depends on the rest of the cell for its I/O system. Again, maybe. The comparison here might be too strained, particularly for prokaryotic cells where there’s no clearly delineated nucleus.

        All of which is to say, I’m leery of being too sure we know the universal history of computation. I feel comfortable that central nervous systems are primarily computational systems, but while all sorts of other physical systems do information processing to some extent, I’m not sure I would describe anything before central nervous systems as primarily computational.


    • Mike,
      Your digital observation seems clean enough to me, and so I should certainly be careful using such a term. I did smile when you said the following to me however: “He might also argue that a digital system is just an analog one which decreases the probability of a non-discrete processing, but can’t completely eliminate it.” Coming from you I do know that this must mean something reasonable, though I might wonder from someone else.

      If you’ll notice I was actually clever enough to not reference DNA, RNA, or anything specific. Instead I left things as open as “genetic material”. Of course you’re right that this wouldn’t suffice without being in an environment from which to do its stuff, though I expect this to be granted without stating such a necessity.

      It seems to me that you’ve already mentioned a genetic computational element in the OP, given my provided definition of computation. You said, “Arguably it’s the systems in the cell that make DNA into genetic information on how to construct its molecular machinery.” From my definition you’re clearly discussing a computational function here — molecular machinery is arguably built by means of output from algorithmically processed input. So the question is, are there any other kinds that we know of? Can any other natural processes reasonably be said to algorithmically process input for output? Any weather or perhaps chemical dynamics? Anything that functions in stars?

      I currently know of just three varieties — the computation associated with genetic material, as well as central organism processors, as well as of human teleological fabrication. I do concede that I’m talking about a difference of extent rather than a sharp distinction, though I’ve not yet come across anything that’s even fuzzy. I ask because they may very well exist. Furthermore perhaps I’ve provided an answer for Brian Cantwell Smith and others who desire a useful distinction between things which function mechanically and things which function computationally. They will however need to realize that there are no true definitions (which I consider to be a greater issue still).


      • Eric,
        “Coming from you I do know that this must mean something reasonable”
        Just to clarify, I was presenting a possible / probable stance Smith might take. My own take is that any lower level causal factors which bled through the binary layer of a digital system would, if it happened under normal operating conditions, be considered a failure condition.

        Sorry. I did notice your wording on genetic material, which I should have acknowledged before musing on the issues.

        “Can any other natural processes reasonably be said to algorithmically process input for output?”
        I think it depends on how narrow or broad we want to be with the definition of “algorithm”. Physicists would say that all physical systems process information, right down to quantum particles. They talk about things like whether black holes represent a permanent loss of information and the conservation of information.

        For me to call something a computer, I think its principle causal effects have to be through information, in the patterns it takes in or emits. That’s definitely true of a central nervous system and of a technological computer. They’re dependent on I/O systems to moderate their incoming effects and to magnify their causal effects.

        But most physical systems have the information processing and the I/O all tangled together. Most of the causal effects of these systems have more to do with the magnitude of their emitted energy rather than its patterns or structure. Calling such systems “computers” feels a bit too expansive for me, like we’ve stretched the definition of “computer” beyond a useful conception.

        In the case of a cell nucleus, you could argue that the mRNA molecules it sends out are primarily information signalling. (Which was my point in the post.) And various signals from both within and outside of the cell reportedly alter its processing. But the nucleus also manufactures ribosomes, the protein factories it communicates with. It’d be like your computer manufacturing its own keyboard, monitor, or printer, which seems to make it more than a computer. And the nucleus physically copies itself, which also seems like a more physical activity.

        You could argue that the computer is the part of the nucleus that does information processing, perhaps the parts outside of the ribosome factory in the nucleolus. Again, my lack of microbiology knowledge is an issue here, but I’m not sure these systems aren’t hopelessly entangled.

        It seems to me that if your criteria for something being a computer is that it processes information, then everything is a computer. But if your criteria is that it must be almost entirely dedicated to processing information (my stance), then it’s hard to see anything other than central nervous systems and technological computers fitting in, at least of what we know so far.

        All that said, I don’t have strong feelings about this. Ultimately it’s a definitional matter.


    • Mike,
      Yes I do realize that you don’t have strong feelings about this given that it’s a definitional issue. Brian Smith does have strong feelings about it however, since he seems invested in the position that it isn’t just a definitional issue, but rather a fool’s errand. Furthermore I have strong feelings in opposition to his position in two separate regards. First I consider it to reflect epistemic failure. I don’t blame him for this given that there seems to be nothing such as my own EP1 accepted in epistemology today, but must nevertheless address his position as such a failure. Then secondly I’m in opposition because in order to found my own models of mind, non-conscious mind, and conscious mind, I’ve found it necessary to begin by developing a demarcation between mechanical and computational function. Though done for my own models exclusively, I do suspect that others would find them useful for some of their own purposes.

      I don’t actually get too technical with them, though I’ll certainly take any technical observations that physicists and others would like to offer. Once again, when we press a symbol on a mechanical typewriter, an arm is forced to rise up and strike the paper. We call this mechanical. When we press a symbol on a computer, information from this input is algorithmically processed for a more dynamic sort of output. We call this computational.

      We humans build all sorts of things that can be considered more like the computer, and it seems clear to me that this is the case for the central organism processors that evolved (including the human brain). I also see this in the standard function of genetic material, never mind the existence of any factory components. In this wondrous world of ours I suspect that there must be many other good examples of computation as I’ve just defined it, though I haven’t yet come across any. This is why I ask.

  3. It seems bad salesmanship to display blatant skepticism for what you’re skeptical about. Perhaps Peter Kassan took the title of the magazine that he was writing for too seriously? I’d say that he had a few reasonable points, but that we must also be wary here since reasonable points can be used to foster less reasonable points. I agree with Mike’s criticisms in general, and furthermore must object to Kassan’s belief that he knows the true definition for “computer”, and that “brains” are something different. There are no true definitions, only more and less useful ones from the context of a given argument. To me it seems extremely useful to view our brains as machines that algorithmically process input to yield associated output. This is non-consciously displayed in the function of my heart, and consciously displayed by the words that I’m constructing now. So on to the question itself…

    I begin with Descartes as well, and so must substitute ‘I’ for talk of ‘we’. It’s not possible to me that I don’t exist, though the rest of you may not. Of course it’s fine if you do exist as well, but then to say that you know that I exist is to degrade the term “know” down to “have reason to believe”.

    My existence may be the fodder of an evil genius, which mandates the supernatural, but might I be a simulation in a natural world? I think not. Just as one of our computer simulations of a chair does not render it as something to sit upon, I don’t see how a simulation of a person could be a conscious entity. Naturalism (unlike Cartesian Dualism) presumes that I’m made of material rather than just information.

    I will concede that it’s conceivably possible to use information about me to produce a physical replica of me, though that person would be someone else. Conceivably my information could be sent across space at the speed of light where something indistinguishable from me could be built (which is hopefully sufficient to continue SciFi fun).

    Here’s the closest I can get from the premise of naturalism: Though I currently consider myself to be a product of the body that I perceive, it could actually be that a separate physical structure was built whereby my consciousness instead occurs through that structure. In that case I wouldn’t call what I’m perceiving now a simulation, however, but rather a very convincing delusion, à la The Matrix.

    • Eric,
      I agree that Kassan displays an air of predetermined skepticism. That’s usually considered unseemly in the skeptic community, which makes me wonder why Skeptic magazine publishes his writing. Unfortunately, it isn’t the first time I can recall that publication failing to be suitably skeptical of its submissions.

      “I don’t see how a simulation of a person could be a conscious entity. Naturalism (unlike Cartesian Dualism) presumes that I’m made of material rather than just information.”

      But wouldn’t you agree that all information processing systems are physical? If so, isn’t a simulation of a person a physical entity? Granted, the simulated version’s physics are different from the original’s, but it’s well established in computing that algorithms are multi-realizable: they can be realized on a variety of hardware platforms. So it seems like the question is: is there anything unique about a biological nervous system that can’t be ported to another platform?

      “I will concede that it’s conceivably possible to use information about me to produce a physical replica of me, though that person would be someone else.”

      To a large extent, identity is a matter of philosophical outlook, tangled up with Ship of Theseus type reasoning. Ultimately, I don’t think there’s any fact of the matter answer for it. It’s a matter of which definition of “you” is more useful.

      But what if you and your replica could share memories? (Given the way the human brain appears to implement memories, I’m not sure this is possible, but suppose it was.) If so, you’d have memories of being in both bodies. In other words, you could remember being the replica. Would that change your view of the replica?

      BTW, I’m currently reading Sean Carroll’s ‘The Big Picture’. He spends a lot of time on epistemology, going over points that would probably resonate with you, such as whether models of reality are useful, and accepting notions based on prior credences, which he ties in with Bayes’ Theorem. I’m only 30% through it so far, but thought you might find it interesting.

    • Mike,
      I agree with you that all information processing systems are physical — I’m as physicalist as anyone. But saying that I could be a computer simulation because computer simulations involve physical dynamics is about as arbitrary as saying that I could actually be made of sand. As a physicalist I will not so rashly substitute one material for another.

      Of course I agree that a computer could be built which is conscious (as life demonstrates), but that’s no simulation. If you did have the ability to build conscious machines however, I suppose that you could make them generic enough to give one subject’s conscious and non-conscious attributes to another one. (That moves us to your duplication query.) Could the complete computational processes of a human be recorded and transferred into something that could effectively implement them, even though the human wasn’t designed for such transfer? That’s perhaps in the realm of possible.

      How can I demonstrate that a person will have a different self from a copy? Well if you’re standing next to your copy and one of you gets punched, one self will obviously feel it while the other does not. That said, I define any self to be constituted by what is felt instantaneously, and only connected through memory of the past and anticipation of the future. So you in the future is actually a different self from you now, just as a replica form will not be the same.

      As far as shared memories go, I was already presuming that a replica will have the exact same memories. Nevertheless one is you and the other is not you, as displayed by the punch. If going forward you feel and experience everything that the other does, then this would simply be a new source of input for you. You’d still be you and it would still be it.

      Definitely interested that you’re reading Sean Carroll, who I hear about from time to time but have failed to pursue. Of course philosophers will get upset if a physicist does a better job with epistemology than they do, though I doubt they’re very worried yet.

  4. Michael says:

    Hi Mike,

    I agree with your thoughts on the simulation hypothesis being pretty unproductive. And in reading Kassan’s article I didn’t find his arguments particularly convincing. The analogies between a computer and the algorithmic characteristics of the human brain seem pretty obvious to me and difficult to dismiss.

    The question about whether consciousness must be involved in order to call a signal information just feels too complicated and weighted down by concepts difficult to define, like consciousness itself.

    Many biological processes rely on feedback mechanisms coded into the physical properties and physical structure of the system’s components. It seems in many cases the hardware and the software are almost seamlessly interwoven. I am curious how you would think about systems with complex feedback loops that are largely self-regulating. The Earth’s climate could even be viewed as such a system. It wouldn’t seem a stretch to propose such systems also have some sort of information processing characteristic, though it is not separable from the physical system itself.

    The idea of information as it relates to consciousness seems difficult because if such systems are not construed to be handling information, then almost nothing would be, right? In such a scenario information might only exist in your fifth level of consciousness, which I recall was metacognition (or some similar phrase; forgive me, I can’t remember exactly).

    If we take such an approach, then the question in light of Kassan’s article is simply whether or not a computer system can exhibit this level of functionality, right? And it is difficult to imagine that being impossible.

    Michael

    • Hi Michael,
      I think the software-hardware divide is an engineering innovation of modern computing, one that biology, at least on Earth, hasn’t made. Still, the information processing in a central nervous system is concentrated enough and separate enough from its I/O systems, that I think we might someday be able to engineer a division.

      But when talking about Earth’s climate, I think the distinction you’re actually making is between the information processing and the I/O systems. In most systems, the information processing and its physical effects are indeed tightly interwoven. And such a low fraction of the overall processing of the system is informational that any attempt to separate its informational and physical aspects makes the resulting information system a non-functional shadow of the original.

      I think that’s the point many people make when they say that a simulation of a storm is not a storm. And they’re right. But then they extrapolate that to say that a simulation of consciousness is not consciousness. But a simulation of a calculator or a word processor is still a type of calculator or word processor. The question is whether consciousness is more like the storm or the calculator.

      Interesting question about my levels of consciousness. Since those levels are inherently from an information processing perspective, I suspect Kassan would dismiss the framework out of hand. But if he did accept it, given his assertions about the knee-jerk reflex, he’d probably insist that my lowest level, reflexes, aren’t information processing.

      Perhaps a more interesting question is where interpretation might come into the picture. Any non-trivial interpretation is, to me, a type of imaginative simulation, so I think it would come with level 4, simulation scenarios, although I think you’re right that the human capacity for interpretation wouldn’t come until level 5, metacognition, which would probably suit Kassan just fine.

  5. Fizan says:

    Hi Mike, another interesting post!

    Having read Kassan’s argument and yours I feel more inclined to his position (although this could be a misunderstanding on my part).
    You say:
    “To be information, something must make use of it. But the assertion is that this role of making use of information can only be fulfilled by a conscious agent.”

    Without getting stuck on terminologies, I feel that when you say to be information ‘something’ must make use of it, then we have to look at what that something or any other something is. If that something is the system in the cell (for DNA to be genetic information), as you say, then what is the cell on its own? Isn’t it true that it has a similar fate to the DNA when left on its own?
    My opinion is that the whole ‘system’ is meaningless until it is given meaning. The word ‘system’ in itself is giving ‘it’ some meaning. Otherwise, it is just the ‘it’ (even this word gives some substance to it, but there is no way of escaping this as we are conscious). So I side with Kassan in thinking consciousness is a necessary precursor for there to be information. One may wonder what happens outside our conscious experience: if we were to die, for example, do these systems cease to exist? But the truth is we can’t escape our conscious experience even in saying or thinking such notions, so it is fruitless to consider this. Because all you or I know is within our conscious spheres, we have constructed a reality but there is no transcendental proof or anchoring for it.

    (In my words) what Kassan I feel may be trying to say is that we were created naturally by evolution, and have got what we have got. When we try to understand what we have got, for example using neuroscience, we do that using a symbolic system of references and metaphors (or simply put, our already existing consciousness), and this helps us create an understanding. Using this understanding we cannot (most likely) replicate consciousness, because this will already be residing as a system within the sphere of consciousness (our consciousness). Our actual consciousness, on the other hand, formed from, within, or as part of the ‘it’.

    • Hi Fizan,
      On DNA and the cell, I think it’s fair to say that information is always a relative concept. It “means” something to some system, but may be utterly meaningless to another one. There’s no absolute authority anywhere that bestows meaning or withholds it. (Admittedly, if you’re a theist, you could put God in that role.)
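      A toy illustration of that relativity (my own sketch, not anything from Kassan’s article): one and the same physical pattern, four bytes here, “means” different things depending on which system consumes it.

```python
import struct

data = b"Wiki"  # one physical pattern: four bytes

# A text-processing "system" reads the bytes as ASCII characters.
as_text = data.decode("ascii")

# A numeric "system" reads the very same bytes as a big-endian
# 32-bit unsigned integer.
as_number = struct.unpack(">I", data)[0]

print(as_text)    # Wiki
print(as_number)  # 1466526569
```

      Neither reading is the “real” one; each is information only relative to the system doing the interpreting.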

      So, DNA means something to the cellular machinery. What does the cell itself mean? Well, it might be food for some other organism, or it might release its own chemical signals which might mean something to other cells.

      I think you do an excellent job at presenting Kassan’s thinking (as well as the more general argument), but I’m still not seeing how regarding information as something that only conscious agents can use isn’t just definition twiddling. To repeat what I said in the post, if we label patterns like the ones in DNA “causalation”, then cells, brains, and computers all share the quality of being systems that do causalation processing, and we’re back to square one.

      If we just think in terms of physical systems, then it’s hard to see how the argument has any ontological implications. But I’m open to the possibility that I’m still just missing something.

      • Fizan says:

        Does DNA ‘mean’ something to the cellular machinery? I think this is where the divide is between you and Kassan (I still incline towards his position).

        What is it to have meaning? Does the ‘DNA’ have any value for the cell? Or does the cell have a ‘use’ for DNA?

        If there is no value in and use of DNA for the cell, then what does it still ‘mean’ for the cell?

        In my opinion it is a whole process in which both are components, having no meaning or use with regard to each other. The meaning and use are with regard to us. And without us this meaning disappears.

        I feel you are taking the cell as an independent system which interacts with other things and systems by way of causality.

        There is no need to come up with a word for patterns that have causal effects on non-conscious systems. This is because the patterns themselves are non-conscious systems and are interacting with similar systems, which in turn are doing the same, and so on. What separates these systems from each other? Why not consider it a large all-inclusive system instead? Or going the other way, why not tease out the systems within systems, till we reach the quantum foam (which again is a large all-inclusive system)?

        What separates the cell from the DNA?
        Is it space and matter? Then what about the other separate components in the cell (which as a whole constitute what we call the cell)? If all are separate components, what makes them together a ‘cell’ except for our perception? (Perception is the key.)

        We are already conscious and can already perceive. Without our perceptions (in my opinion) there are no boundaries between cause and effect.
        So what is giving meaning to all of it is our conscious perception of it.

  6. J.S. Pailly says:

    I’ve been thinking about doing information for Sciency Words for a while now, specifically in relation to the physics of black holes. I’ve also been thinking about doing the word observation in relation to quantum physics. A lot of times people seem to get hung up on a word’s vernacular definition and don’t realize that words sometimes have weird, alternative definitions in certain fields of study.

    • I’d like to see those posts. Observation in particular is interesting. At the macroscopic level it’s a passive activity, at least relative to whatever physical system is being observed. But at the quantum level, you can generally only observe something by interacting with it, which unavoidably changes its state and behavior. Interestingly, observation changing behavior is also a thing at the social science level, although it’s a category error to regard them as the same issue.

      • J.S. Pailly says:

        They’re both high on my list of things I want to write about, but I’m still hesitant about doing them. They’re big concepts, so they’ll be tough to explain in 300 to 400 words. Also I’m not sure I understand them well enough to do them justice. But still, at some point I do want to do them.

        • I once thought about doing a post on quantum physics, but it quickly became apparent that it would have been a long series of posts. I settled for sharing a YouTube video along with brief comments on the double slit experiment.

          So you keep your posts to 300-400 words? I wondered if that was intentional or just your style. I know I start getting nervous when my post exceeds 1000 words, and generally think about breaking them up after 1500 words.

          • J.S. Pailly says:

            That’s what I aim for, though I do allow myself to go “over budget” sometimes if I think a topic needs it. I guess it is sort of a style thing too. If I let myself go too long, I start to feel like I’m writing a textbook, and that’s not really what I want to be doing.

  7. Callan says:

    It’s curious how we’ve gone from thinking the world is made of four elements, to chemicals, to molecules, to atoms, to crazy string theory. We have managed to think a step smaller, over and over.

    But when it comes to ‘information’, we seem to ‘bounce’ – instead of going a step smaller, down past ‘information’, we bounce off it and assume the answers lie in the direction of the bounce.

    Why is everyone assuming ‘information’ exists to begin with? Suppose that, just as the physical items called money have no intrinsic value and are merely a fetish, so too ‘information’ is merely a fetish.

    • Totally agreed.

      If we start asking what really exists, we eventually find ourselves in a place where there is nothing but quantum field excitations and spacetime, along with the interesting patterns they create (and according to some physicists, even the fields and spacetime may be patterns of other things). Of course, everything else we usually think about, trees, tables, squirrels, etc, are names, symbols, that we assign to groups of patterns we judge to be similar to each other.

      Those labels are used because they’re useful, and ultimately they’re only retained to the extent they continue being useful, to continue, in some sense, being predictive. I think information and money are concepts we use because they are useful. We can always talk about the underlying patterns they refer to without reference to the labels, but it makes doing so harder, unless we find new labels to use.

      • Callan says:

        Well, to be thorough, I would say ‘patterns’ don’t exist as much as ‘information’ doesn’t.

        • Erm … some of us would define information as patterns which are discernible in stuff that exists.

          • Callan says:

            Well, that’s the rebound I’m talking about. Hitting ‘patterns’ and bouncing off and extending upward into various human things, rather than downward into something smaller than ‘patterns’.

            I mean, it’s right there as a question – why is ‘patterns’ somehow as small as it gets? But instead a rebound – you say patterns exist and with it information and with information no doubt a whole bunch of things. I’ve asked what’s under the elephant, but it ends up as ‘elephants all the way up/patterns lead to info, info leads to other human things, etc’. Elephants up, sure – but what is down, smaller than ‘patterns’?

            What if, like ‘money’, upon examination what seems to be ‘patterns’ are actually just another fetish? You say patterns are discernible, but is ‘discerning’ itself discernible? Or is it an invisible catalyst? One that makes things clear as day – and yet the very thing that makes these patterns clear as day…cannot be seen? Doesn’t that raise question marks – how something can be clear and yet how it is made so very clear is itself very hidden away?

          • I think my definition is compatible with Scott Aaronson’s: see http://www.scottaaronson.com/blog/?p=3327

          • I’m sorry, Callan, but I truly don’t understand the “bouncing off” and upward/downward metaphors with respect to patterns. That seems like asking what’s smaller than mathematics, or what mathematics is made of.

            I’m sure my vocabulary is lacking, but I don’t see how money is a fetish, and even if it is, I don’t see what difference that makes to the nature of the relationship between information and interpretation, or discernment. In my understanding, where there is discernment, there is a catalyst, or agent, which is doing the discerning. That agent can be seen (in theory, depending on the system and what you require to count as “seeing”). That agent is a separate physical thing, with its own information, including information in the form of knowledge, which allows for the discernment in the first place.
