Why are we real?

Nathaniel Stein has an interesting article at Aeon, The why of reality:

The easy question came first, a few months after my son turned four: ‘Are we real?’ It was abrupt, but not quite out of nowhere, and I was able to answer quickly. Yes, we’re real – but Elsa and Anna, two characters from Frozen, are not. Done. Then there was a follow-up a few weeks later that came just as abruptly, while splashing around a pool: ‘Daddy, why are we real?’

After spending some time pondering exactly what his young son meant with this existential question, and veering through a good part of philosophical history (curiously sans Descartes), Stein finishes with this:

How, then, can I give a good answer, if the question is about what’s real and what’s pretend? I suppose the right answer has something to do with the fact that trustworthy images are causally connected to their subjects in the right ways, and carry information about them because of these causal connections, but I don’t think we’re ready for that. I settle for something a bit simpler: he can do whatever he decides to do, but Elsa can do only what happens in the story. It’s not great, but it’s a positive message. It works for now.

That’s an interesting answer.  It assumes we have more free will than characters in a story.  While I’m a compatibilist on social responsibility, I don’t think we have the contra-causal version of free will, the lack of which essentially makes us characters in the story of the universe.  The characters in the movie, if they had any viewpoint, would likely see themselves as having just as much freedom of choice as we do.

So that’s not the answer I would have given.  Mine would have been that you’re always real to yourself.  Even if we’re only characters in an advanced 22nd century video game, relative to ourselves, we exist.  The very fact of asking that question makes you real to yourself.  This is, of course, a version of Descartes’ “I think therefore I am”, although rendered a bit more hesitantly as, “I think therefore I am to me.”

The question gets more challenging when considering everything else.  I’m a skeptic, so by what means do I determine what is real?  We only ever have access to our own consciousness.  Everything else “out there” is a theory, a predictive model that our minds build from experience.  But our minds can hold both real and unreal things.  Is there any concisely stated standard we can use to tell the difference?

In the end, I think that which enhances our ability to predict future experiences, future observations, is real.  That which can’t, isn’t.  This isn’t always a satisfactory answer, because often we won’t know for some time, or possibly ever, whether a particular notion fulfills this role.  But if there’s another standard, I’m not sure what it might be.

So what do you think?  What would your answer have been?

76 thoughts on “Why are we real?”

  1. Well, given that these are my axioms:

    1. Physical stuff exists
    2. Patterns are real
    3. Things change

    I would have to answer the original question by saying that Elsa and Anna are real, but they don’t exist and never have. We exist, because our patterns are discernible in physical stuff.


        1. I could ask how we define “matter”, but I think instead I’ll ask how we know what matter is and how it interacts.

          (Not trying to just be cantankerous, just trying to see if our positions ultimately reduce to the same thing.)


          1. Okay, so now you want to get into epistemology. Cool. We know what is matter by observing how it interacts with other matter. We observe patterns of inputs and outputs and we organize this information so that we recognize similar patterns of inputs and outputs in the future. And yes, sometimes we organize this information so that when we see the inputs we can predict the outputs, and we can “plan” to generate specific outputs by generating the right inputs. Pretty sure you would call this building a predictive model.

            So to go back up the thread, sets of [inputs —> outputs] are patterns, which are real. Something exists only if you can actually observe the associated inputs and/or outputs.
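
            To make that concrete, here’s a minimal toy sketch of the “observe inputs and outputs, then predict” idea (my own illustration; the observations are made up): fit a simple line to the observed input/output pairs, then use it to anticipate an output we haven’t observed yet.

            ```python
            # A toy predictive model (illustration only; the observations are made up).
            # Each observation pairs an input with an output; we fit output = a*input + b.
            observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

            n = len(observations)
            mean_x = sum(x for x, _ in observations) / n
            mean_y = sum(y for _, y in observations) / n
            a = sum((x - mean_x) * (y - mean_y) for x, y in observations) / \
                sum((x - mean_x) ** 2 for x, _ in observations)
            b = mean_y - a * mean_x

            def predict(new_input):
                """Predict the output we expect to observe for a new input."""
                return a * new_input + b

            print(predict(5.0))  # roughly 9.9: the pattern anticipates a future observation
            ```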


  2. Stein’s answer is actually right, although not very explanatory.

    I don’t think we have the contra-causal version of free will, the lack of which essentially makes us characters in the story of the universe.

    I also don’t think we have contra-causal wills – but we’re still authors of our stories, in ways that Elsa and Anna are not. The story of your tomorrow *depends on you-today*, while the stories of Elsa and Anna only depend on Barbara Jean Hicks, the author of Frozen. If you ripped out the pages of Frozen that describe what they do on Monday, they’d still do what they do on Tuesday.
    This is decidedly not true of you. If we ripped you out from the universe today, you wouldn’t do a damn thing tomorrow. Thou art physics, to quote a famous philosopher (whose post “Thou art physics” is a link within the page I’ve linked to, but the page I’ve linked to is much more to the point I’m making here.)

    So, being a decider is sufficient for being real. It’s just not necessary for being real.


    1. I can see the argument, but a story has to follow a plot to be coherent. And ripping me out of the universe isn’t the same as ripping pages out of a story. To do the equivalent, you’d have to rip a day out of the whole universe while leaving the universe in the same state it would have been in if the day had still been there. If you could do that, and did, everything would proceed after the missing day as though nothing had happened.

      Of course, there’s no particular rule that fiction must be coherent, and a lot isn’t.


      1. But precisely the point is that you can’t rip a whole day out of the universe; it’d be contrary to the characteristics of the things in the universe. If A causes B which then causes C in the real world, it’s just not true to say “A causes C and B does not”. But it is true to say “Barbara Jean Hicks causes Elsa’s happy ending and Elsa’s previous activity does not”.


  3. I hadn’t realized that you’re such a strong Cartesian, Mike. Sounds good. Technically I try to avoid the “we” term here. I can only know that “I” exist, rather than any “we”. I find this to be an important distinction to make at least once in a while.

    I think I’d go further than just provide my child with a lesson in Descartes. If thought proves my existence to me, whereas all else can only be belief, then what do I mean by “thought”? This would be the processing component of the conscious form of computer (which I outlined last time in the “brainstem” post).

    So if a cat interprets conscious inputs (valence, senses, and memories), as well as constructs scenarios about how to make itself feel better, will it then “know” that it exists? Not in so many words, no. I define language as an advanced second mode of thought that only our species seems to have mastered (on Earth). But given that it does think, a cat should still for the most part conceptually understand that it exists. Conversely I wouldn’t say that any of our computers “understand” that they exist, regardless of how powerful they are. All non-conscious forms of computer should function oblivious to their existence. In order for one to understand that it exists, theoretically it would need to output the conscious form of computer by which existence is experienced.

    Then on agency, I’m a compatibilist determinist as well. Here everything is fixed in an ultimate sense, though we don’t have perfect information about what’s going to happen (to say the least!). Thus from the tiny human perspective we can effectively be free, and so can intelligently be praised and reprimanded for what we do. From a perfect perspective however, our lives should be just as fixed as a character in a printed book. Right, Mike?


    1. Eric,
      I always get a little nervous when anyone assigns any “strong” philosophical position to me, since it almost always involves commitments I’m not comfortable with. A lot depends on what you mean by “Cartesian”. There are some things Descartes believed that I’m not on board with, such as substance dualism, or that animals don’t feel real pain or fear. (Some may not, but it’s far from obvious which are which.)

      On thought, my take is that, if we’re thinking about whether we’re thinking, we’re thinking to do that. Interestingly though, I’m not sure thought in and of itself is what does it. It’s more about the fact that our beliefs about our self are the most predictable ones we can have. If I believe I’m going to go get another cup of coffee, and do so, that confirms that belief about my self. (Of course, the self, like anything else, isn’t perfectly predictable, but it seems less imperfect than any other beliefs.)

      On the cat, I think it understands its bodily existence. Anything conscious has to. (Otherwise it might try to eat itself.) But I’m not convinced it understands its mental existence.

      On whether a computer understands its existence, it comes down to what we mean by “understand.” Often a computer system (including software) has a model of itself used for diagnostic situations. (Pull up the Windows task manager to see aspects of it.) The difference is the computer system doesn’t have any goals associated with that existence. It has no (comprehensive) impulses to continue that existence. Everything alive does.
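
      As a toy illustration of such a self-model (my own sketch, using the third-party psutil library rather than anything Windows-specific), a program can consult a model of itself directly:

      ```python
      # A process inspecting its own "self-model" (toy sketch; requires
      # the third-party psutil package: pip install psutil).
      import os

      import psutil

      me = psutil.Process(os.getpid())  # a handle to this very process

      # Diagnostic self-knowledge: resource usage, thread count, and so on.
      print("memory (bytes):", me.memory_info().rss)
      print("cpu percent:", me.cpu_percent(interval=0.1))
      print("threads:", me.num_threads())

      # Note what's absent: nothing here gives the process any goal of
      # continuing its own existence.
      ```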

      On whether our lives are just as fixed as a character in a book, it might depend on the ultimate ontology of quantum mechanics, but in general I agree.


      1. Agreed on Descartes, Mike. By “strong Cartesian” I merely meant the “I think therefore I am” part. Still you may have implied that you’re not quite as strong a Cartesian as I am in this regard, given your “getting coffee” and “cat” scenarios. I’m not entirely sure however. Perhaps if you have problems with the following argument…

        I don’t consider my body to truly be “me”, but rather to produce me, which is to say to produce my consciousness. And I presume it’s the same for the cat. Each of us does have experience moving our bodies around as we command, though that needn’t continue. Mad scientists could paralyze us and run all sorts of existential experiments. We could be put into dream worlds with new personas which are set up to suit the purposes of these scientists. Here our former bodies would no longer be in our charge.

        For such reasons I consider my consciousness to be me, not the body which (presumably) creates it. The cat should be no different. Given that it thinks, the cat should also “know” that it exists in some capacity. Conversely I wouldn’t say that a tree can know it exists, even though it lives, nor our most powerful computer, even though electricity may fuel amazing algorithmic function. Each should have an awareness which matches that of a rock, which is to say no awareness of its existence at all. There should be nothing that it’s like to be a tree, or one of our computers, or a rock, and so there should be nothing to be aware of.

        Or consider a door. It can be in various open to closed states. But I wouldn’t say that it could be “aware” of such a state unless it could “feel” such a state. Without consciousness it should not be aware of anything, including that it exists.

        Do you agree with what I’ve said here so far? If so then I’d say that we should be at about the same level of Cartesianism. If not then I’ll have to assess your level on the basis of your objection.

        Beyond my Cartesianism there are certainly intricacies to the brain architecture which I’ve developed. Here the brain, which is fueled by means of neurons, outputs a tiny second computer that runs not by means of electricity, and not by means of electro-chemical dynamics (as neurons do), but rather by means of the only thing valuable to anything anywhere, or sentience. This stuff brings a new “me” each moment of some magnitude, and these selves are somewhat connected through my memory of the past and my anticipation of the future.

        My models concern psychology rather than neuroscience, and from the days of Freud to the present no broad theory in this capacity has yet become generally accepted. Thus mental and behavioral sciences today should be somewhat like chemistry before there was a conceptual grasp of the atom. Founding theory will be required, whether provided by Lisa Feldman Barrett’s theory of constructed emotion (for a random example opposed to mine, not that hers has close to the breadth of mine or Freud’s), or another distinguished scientist, or even one of your friends here.


        1. Eric…
          “Founding theory will be required, whether provided by Lisa Feldman Barrett’s theory of constructed emotion (for a random example opposed to mine, not that hers has close to the breadth of mine or Freud’s), or another distinguished scientist, or even one of your friends here.”

          This statement stimulated a response on my part. The late philosopher Richard Rorty once noted that without a vocabulary which captures either the way the world really is, or a core human nature, there is no possibility of locating a metaphysical foundation for truth, and that the endeavor should be abandoned. His justification consists of few words: in order to locate this “standard”, the seeker must already be at the convergent consensus point which is being sought. The seeker must already know what this is in order to recognize it when seen.

          Are you in agreement with Rorty’s assessment?


          1. Hi Lee,
            Apparently I’m in opposition to Rorty there. I make no claims about what’s ultimately Real regarding what exists. This is because I do not believe that I can ever verify such general understandings with perfect certainty. As I see it there is only one thing that I can ever know with perfect certainty about what’s Real, or that I personally exist. So that’s the essential foundation upon which I build, even if it’s fallible. Still I suspect that I can minimize failure here through good practices, and so have attempted to develop effective principles in this capacity. In essence I’d like to help establish a community of respectable professionals that has its own generally accepted principles of philosophy from which to better found the institution of science. Our soft mental and behavioral sciences seem most in need of a “scientification” of philosophy.

            Your displayed passion drew me to you when we first met — a kindred spirit. But knowing the depth of my own passion I decided that neither of us will ever convert the other in this regard. I’ll always be a solipsist, and you’ll always be a noumenalist. Still even though our paths seem to be in opposition here, I do respect you as a kindred spirit.


        2. Eric,
          I don’t know if you saw the previous post, but in it I discussed three levels of consciousness. When we use the word “consciousness”, we can mean all kinds of different things about it. But what we mean can often be grouped into these three categories.
          1. Wakefulness, being responsive to stimuli
          2. Awareness, phenomenal experience
          3. Self reflection

          We have a strong tendency to see these three as inseparable, to the extent that when we see one, we tend to assume all the others are present. In the case of another healthy developed human, that’s usually a good assumption, but the further we move from that category, the shakier the assumption gets. We have to be careful not to project our own mental states on other things.

          In the case of the cat, I think you’re seeing 1 and some 2, and assuming 3. But I’m not convinced the cat has 3. I don’t even think it has 2 to the extent we do. (Although in some ways, such as smell and night vision, it has more.) Again, seeing the lower layers inclines us to think the whole package is present, but history has shown that intuitions when it comes to fellow consciousnesses are not to be trusted.

          On it “being like something” to be certain systems, I think this very common phrase masks a very common confusion. It has an implied meaning that I think most people don’t consciously realize. When we say it is “like something” to be a certain system, I think what we’re really saying is it is “like some human thing” to be that system. We’re really talking about how much humanness it has, that is, how closely its way of processing information, how much its basic drives, matches ours.

          I think any theory that has as one of its precepts that what we commonly refer to as “consciousness” is something that is either completely in a particular system or completely absent, is fundamentally wrong, operating on a dualistic notion of consciousness that I don’t think exists. There will be systems that have none (like the door), many that have some aspects (such as a frog), others that have more (cats), others that are pretty close (bonobos), but none will have the full package of humanness, except humans. Even some brain injured or very young humans won’t have the full package.

          I’m not completely on board with Lisa Feldman Barrett’s theory. It’s a little too close to James-Lange theory for my tastes. But I tend to think she’s capturing an important part of the reflex/affect/emotional feelings stack.


        3. Eric,
          Just realized my previous response, while an accurate reflection of my views, probably reads a lot sharper than intended. Please know that I didn’t intend any disrespect. It’s what I get for trying to fit a long reply into the workday.

          Sorry!


      2. Mike,
        Your response didn’t seem all that harsh to me, though I do appreciate your sentiments. I plan to be reading and participating in your blog until its very end (mind willing!), so no need to always be gentle with me. And I do try to be gentle with you as well, even though my strong convictions may not always make this quite as clear as I’d like. You’re the best that I’ve met in this game (though helping you gain a “working level” grasp of my models has been challenging).

        Let me also say that I’m happy that you’ve been able to find so much more time and enthusiasm for posting this year. But given rising popularity you can’t be expected to service all commentary. Thus we’ll need to talk amongst ourselves more than in the past. Of course that can go awry as well. I find it best to look for ways to build others up, or at least to say nothing rather than work on knocking them down. Should your blog take this path, then hopefully it will remain courteous.

        So back to business…

        Your quick response presumed that “we” see consciousness in the way that you mentioned in your previous post. But actually in the commentary over there I defined the term in a very structured separate way (with diagram). Fortunately James of Seattle had some questions that let me expand a bit.

        On your “wakefulness” category 1, my own term here is “altered states of consciousness”, as occurs through sleep or mood-altering drugs. The cat seems similar to us here in the sense that it also needs sleep and can have altered states of consciousness through drugs.

        Your “awareness, phenomenal experience” category 2 is dealt with in my model as the input component of the conscious form of computer (outputted by the non-conscious brain). Three varieties are presented — valence such as pain, senses such as vision, and memory of past consciousness. So the cat seems to also have #2.

        “Self reflection” is your category 3. As normally defined this could be placed under “thought”, or the conscious processor heading of my model. Here the cat is incited to interpret conscious inputs and construct scenarios about what to do given its sentience based motivation. (Technically I define “self” as “sentience”, so “self reflection” will be standard for all conscious forms of existence if they are functional in this regard.)

        But here I think that you actually meant what you’ve called “metacognition” in the past, since you mentioned in this post that it’s reserved for humans and some other primates. There are various definitions for the metacognition term, though I would not say that the cat needs to formally “think about its thought” in order to conceptually understand that it exists. For that it simply needs to “think”.

        Even though I am human and so have human consciousness, this doesn’t mean that I can only define the “consciousness” term to reflect my own human experiences. Similarly I can drive a certain car and yet define the “car” term in a way that leaves out various specific elements of the machine that I personally drive. Effective language often requires general terms.

        I suppose that one reason that you’ve had difficulties grasping the nature of my consciousness model, is because you’ve been inclined to hold on to a “layered” perspective. This is to say that something can be “more conscious” than something else if it seems “more human”. But that’s just not how my model works. If a fly is sentient, then by definition it is conscious as I define the term. There simply is no grey area here. And there are lots of things which seem human but aren’t conscious, even if a fly is conscious from a given definition but does not seem at all human.

        Note that if you say that my approach here is “wrong”, then you imply that a “true” definition exists for consciousness. That would violate my first principle of epistemology. I must use your definitions when assessing your ideas, while you must use mine while assessing my ideas.


        1. Thanks Eric. I’m grateful for your graciousness.

          I have been blogging a lot more in the last few months. Part of it was a realization in late December that my threshold for a post had simply gotten too high. I can easily respond to comments with 200 word replies, but for some reason had unwittingly ruled out posts of similar composition. I’ve tried to relax the formality of what causes me to post, which has led to an increase in frequency. I’m not committed to any particular frequency, just posting if I have a thought, even a relatively small one.

          On popularity, my experience is it always ebbs and flows. I’ve learned to be happy when people are engaged and not to get too upset during the inevitable droughts. I do enjoy the discussions, which means if commentary here ever got frenetic like it does on popular blogs, I’d probably find the overall effort less rewarding.

          On self reflection and metacognition, yes they’re related, although not quite the same thing. I didn’t use my usual five layer model for the last post because it would have been a rough fit. The brainstem has layer 1-reflexes, low resolution 2-perception, and bottom up 3-attention, but grouping that all together into wakefulness seemed simpler. And calling 1-4 collectively phenomenal consciousness seemed simpler. But metacognition itself, I think, enables self reflection rather than being just another term for it. But as you note, definitions vary, a never ending difficulty with consciousness discussions.

          On dreams, I see them as a case where we have 2-phenomenal awareness without 1-wakefulness, or maybe 2 with 1 very reduced. 3-self reflection is usually thought to be absent in dreams, so maybe it’s 2 with reduced 1 and 3. These hierarchies are all vast oversimplifications I use to keep things straight, so they’re only so resilient to this type of contemplation.

          “Note that if you say that my approach here is “wrong”, then you imply that a “true” definition exists for consciousness.”

          Maybe another way of getting at this is, what happens if we were to completely drop the words “consciousness”, “feeling”, “bad”, or “good”, from the discussion? If you couldn’t use those words, along with any synonyms such as “sentience” or “subjective experience”, how would you describe your theory?

          The question is whether your model then reduces to something I’d agree is correct, or whether there would still be ontological differences between us. I suspect there would be, but maybe I’m wrong?


      3. Mike,
        I suspect that you’d enjoy overseeing a progressively more popular blog, though it would take extra management. The main issue I think should be to keep things functioning civilly. I’m pretty sure that you’d do alright with that. Of course ideally people would also say reasonably interesting things. So what might you do when people go on and on and on in ways where intelligible points seem missing? That should be difficult. I suppose that you’d realize that your patrons in general will skip what they consider boring. With too much of that, you’d need to skip such material as well!

        On wakefulness and dreams, I like to keep these dynamics to the side of my basic model. As far as I know our non-conscious computers don’t need general breaks in order to function as they do. They don’t get “tired”. I presume this of non-conscious brain function as well. But for some reason outputting full consciousness perpetually just doesn’t function right. Note that bodily rest isn’t sufficient because full “sleep” seems required. I’m not the only person curious about this — science doesn’t yet grasp why sleep is so crucial to conscious forms of life. Regardless consciousness must run in an incapacitated state for a while somewhat daily. And in such an incapacitated state the thought processor tends to “dream” various ill-conceived things. Something similar may be said given chemical impairment, though the effects vary with how various substances interact with neurons.

        Regardless I don’t tie altered states of consciousness into my basic consciousness model, but rather structure it more as a side effect. Whether a person is quite tired, asleep, drunk, stoned, or whatever, the non-conscious computer still provides input information to the conscious computer (valence, senses, and memory), though in a somewhat compromised way. The thought processor interprets this information and constructs scenarios about how to promote instant valence, though the processor may be compromised. And of course muscle operation output may be compromised, such as driving while intoxicated. Or a person might safely be dreaming while asleep in bed.

        (If you recall I’d like it if the “unconscious” term were replaced with three other terms so that there’d be less conflation about what’s meant. Here there’d be “non-conscious” when a person is referring to a straight computer, “altered states of consciousness” for the scenarios that I’ve just mentioned, and “quasi-conscious” when a melding of the two computers is meant, such as a non-understood racial bias.)

        I don’t believe that I could effectively describe my theory regarding brain function without terms such as “sentience” and “good/bad”. Perhaps if someone were to solve “the hard problem of consciousness” then I could put my theory in neurological terms, though that’s certainly not an option today.

        I propose what I call my single principle of axiology. If you understand what I mean by it (and I use an assortment of terms here in the attempt to make it as difficult as possible to be misunderstood), would you then be able to effectively challenge it?

        I believe that it’s possible for a neurological “computer” or “machine” to produce what I consider to be the strangest stuff in the universe, and do so for something other than that original computer to “experience”. What’s produced goes by many names, such as “qualia”, “affect”, “sentience”, and so on. But the key here is that before this emerged, all of reality functioned causally though remained perfectly inconsequential to itself. Nothing that occurred was “good” or “bad” for anything that existed. When this “value” stuff did emerge however, existence could then be good/bad for the thusly created “conscious entity” given the circumstances of its existence. Apparently this dynamic got incorporated into the evolution of life, with you and I as examples.

        Are you able to effectively tell me that I’m mistaken about this, and if so, then how am I mistaken?


        1. Eric,
          If you are serious about constructive criticism I might be able to help. I find the following positions problematic:

          1. “… all of reality functioned causally though remained perfectly inconsequential to itself.”

          One cannot have causality without causality being consequential to itself within the paradigm of discrete systems, the very systems which make up our phenomenal world, and the very systems which are directly affected as a consequence of that causality. That cycle consists of a continuum of change, resulting in construction, destruction and reconstruction etc. Personally, I would consider this continuum of change to be very consequential to itself.

          2. “…Nothing that occurred was “good” or “bad” for anything that existed.”

          As a fundamental epistemology, this statement would still be true unless one arbitrarily builds into the model of causation an anomaly called anthropocentrism, where the Cartesian Me is now somehow an exception to any previous model of causality, a model of causality which is intrinsic to every discrete system.

          This is the point of my critique: If a given model is to stand up under the scrutiny of analysis, there cannot be any exclusions, exceptions or built in paradoxes.

          Thanks,


          1. Lee,
            That was both coherent and well said. I see that Mike is somewhat hinting at the same thing. Furthermore I’m going to agree with you far more than a competing theorist should naturally be expected to. Playing standard defense just isn’t part of my agenda. My theory remains open to all forms of scrutiny.

            I agree that causality does mandate inherent “consequences” in a causal capacity. I actually meant the term in a somewhat different way, however, and the way I meant it does apply to your second point.

            I do consider myself essentially the Cartesian Me, or the anomaly which is different from the standard. I presume that most things aren’t sentient, as well as Know that I happen to be sentient. So how might we effectively refer to a belief that there is stuff that is sentient (like me), and stuff that is not sentient, as a rock is presumed by all but the panpsychist? Couldn’t this effectively be called “dualism” given that there are two separate classifications of existence?

            Well sort of, though not fully in the traditional sense. I’m as strong a causalist and determinist as they come, and beyond most modern physicists. But I do believe that causality creates sentient things from non-sentient things, and thus here two different kinds of things are distinguished. If one reduces back to the other however, as I propose, then I remain a monist. Furthermore here I stand with the vast majority of scientists. For the panpsychism niche to thrive, a great deal will need to fall.


        2. Eric,
          “So what might you do when people go on and on and on in ways where intelligible points seem missing?”
          My approach is to let them post, although not necessarily to engage too much with them.
          At least as long as they aren’t attacking people or being nasty in some other way. Oftentimes if someone is being earnest, even if I can’t make sense of what they’re saying, someone else can and will strike up a conversation with them, sometimes a conversation I end up finding interesting.

          “As far as I know our non-conscious computers don’t need general breaks in order to function as they do.”
          Actually they do when you consider both the hardware and software. Think how often your computer or phone insists on doing updates. It’s also not unusual for complex systems to need to run extensive maintenance processes during the night so that they stay healthy.

          “If you recall I’d like it if the “unconscious” term were replaced with three other terms so that there’d be less conflation about what’s meant.”
          I’ve actually grown more relaxed about this. I was avoiding the term “subconscious” because apparently a lot of people saw it as referring to a subterranean consciousness. But now I’m starting to realize that some lower level processes in the brain could coherently be called subterranean consciousnesses (although they’re probably nothing like what the people who buy into that concept think they are).

          “Are you able to effectively tell me that I’m mistaken about this, and if so, then how am I mistaken?”
          We’ve discussed this before, but by not being reducible to components which are not conscious, your model basically doesn’t answer the questions that I for one find the most interesting. By merely positing that the computer produces “the strangest stuff in the universe”, without attempting to answer what that stuff is or how it’s produced, you’re glossing over what I think is the main point of coming up with theories of consciousness. Is that wrong? Perhaps not for your purposes, but it seems that way for mine.

          I also find that your description of the two computer model seems to be evolving toward a sort of naturalistic dualism, similar to how David Chalmers talks about consciousness. A lot of people agree with him, but I for one don’t find it compelling. But then I’m a relentless functionalist.


      4. Mike,
        I’m no computer guy, though the conscious need for sleep isn’t something that I’d analogize with my phone’s software updates. It’s not like my phone gets too “tired” to function properly and so needs periodic “sleep” to recuperate — sleep being a state where it functions in an “intoxicated” way that actually recharges it. Scientists in general are perplexed about why conscious function requires periodic sleep, which is par for the course in matters of consciousness. I can’t think of anything similar to the conscious requirement for sleep.

        “Subterranean consciousness”? Hmm…. Well I consider a non-conscious computer to produce consciousness, so I can see how a “below the surface” term may be useful. Regardless people clearly use the “unconscious” term to refer to “not conscious” (like getting knocked out), “sort of conscious” (like being biased), and “altered conscious” (like being asleep or drunk). Thus it should be helpful to retire the “unconscious” term in favor of three reasonably specific terms.

        “…by not being reducible to components which are not conscious, your model basically doesn’t answer the questions that I for one find the most interesting.”

        Actually I do presume that consciousness reduces to components which are not conscious. I simply have no practical understanding of how. Would it be better to hand wave a hard problem solution? Feinberg and Mallatt come to mind there. No, it’s surely better to honestly state that my ideas concern psychology and the like rather than horribly complex dynamics of neuroscience. And even though you may be far more interested in solutions to the hard problem of consciousness than I am, your blog continually suggests that you’re no less interested in the questions that I do explore.

        My single principle of axiology states that it’s possible for the central organism processor which is commonly known as a “brain”, to produce a punishment/ reward dynamic which constitutes all that’s valuable to anything anywhere, or sentience.

        I see only a few central paths from which to challenge me here. One would be to take the panpsychist route that everything produces sentience, and I know that you’re no panpsychist. Another would be to say that nothing is sentient. Can existence never be horrible or good for you? Of course it can. I’d have you be mindful about taking a third path, which is to say fooling yourself that you disagree through some technicality. Would it be so bad to agree that most things aren’t brains, and that some brains, somehow, produce all that’s valuable to anything anywhere?


        1. Eric, on sleep, rather than look for why we need it, it might be helpful to consider it as an adaptive advantage. At night it is dangerous to go out, and largely unproductive too. It would be an evolutionary advantage to conserve energy during the hours of darkness, and so a desire to sleep could be quite strongly selected for. On the other hand, creatures adapted to night time hunting lose their advantage during hours of daylight, so sleeping then to conserve energy makes sense. This is just my theory. I might be completely wrong!


          1. Thanks Steve,
            In the past I’ve also thought about sleep as mainly a “conserve energy” strategy, but then decided that something far more profound must be happening. You could lie in bed for days on end and still need sleep. And strangely enough this seems to be the case for conscious life in general (though energy conservation does seem to be a main objective of hibernation).

            My present suspicion is that consciousness itself needs to take cycled sleep breaks for whatever the reason. But not full breaks. Dreaming seems to be conscious function in an impaired capacity, at least somewhat like being drunk. But get this. Its “hangover” is instead sharp mental function. Wow!

            I’ve enjoyed sleep science articles in the past, though don’t recall reading any for a while. I wonder if someone here would like to provide any favorites?


        2. Eric,
          On sleep, I totally agree it’s an interesting problem. But it’s worth noting that it isn’t just conscious organisms that need it. Just about any motile species (read: animal) needs it, even ones that no one really considers to be conscious. Even C. elegans worms display active and dormant phases. It seems to be some kind of deep pervasive physiological need of mobile organisms, a need for downtime to do maintenance, much like the IT systems I mentioned.

          “Would it be better to hand wave a hard problem solution? Feinberg and Mallatt come to mind there.”

          Do you really consider F&M to be hand waving? I’m certainly not on board with everything they propose or conclude, but I do think they’re always being reasonably scientific. As an aspiring sci-fi author, who definitely plans to do my own share of hand waving in stories, I don’t think F&M come anywhere near it. And I think their explanation of the hard problem has considerable currency, at least at the level for which we can actually address it, in terms of why we think there actually is a problem.

          “My single principle of axiology states that it’s possible for the central organism processor which is commonly known as a “brain”, to produce a punishment/ reward dynamic which constitutes all that’s valuable to anything anywhere, or sentience.”

          I guess my question here is, what about this adds to the existing conventional understanding of how the mind works? Psychologists and neuroscientists have been talking about punishment / reward systems for decades. What insights are you adding to that understanding that aren’t already there?

          Forgive me, but it seems more like your own personal understanding of how these interactions work rather than any groundbreaking theory. Certainly there’s nothing wrong with that. Most of the frameworks I describe on this blog amount to just my own consolidation of mainstream science. But if you’re going to put your model forth as something new and revolutionary, it needs to provide insights that aren’t already out there.

          All of which is to say, I do think your model is right in some ways (and wrong in others), but I don’t see it as revolutionarily right, or if it is, I’m unable to see how.


  4. A dictionary definition of “real” is “actually existing as a thing or occurring in fact; not imagined or supposed”. So “real” in this sense, I take to mean things that we believe to be real, rather than imaginary or hypothetical. So why are we real? It must be because we believe ourselves to actually exist.


    1. Steve,
      I suspect that many of the faithful would like your answer, since they could interpret it to mean that their belief causes their god to be real. Still you did add “imaginary or hypothetical” contingencies that could unsettle them. Regardless with belief as a form of thought your answer does seem to reduce to mine. For something to believe that it exists it would need a functional conscious mind (or a form of computation that I proposed last time https://selfawarepatterns.com/2019/02/10/is-the-brainstem-conscious/#comment-26925 ). In that case even a god could “know” that it exists, unlike anything else.


    2. As I mentioned to Eric, I think that, of all our beliefs, our belief in our existence is the most predictable one we have. It’s not perfectly predictable, but it seems more predictable than any other belief we might hold.


  5. I’m reminded of a comment Robert Pirsig made after his son Chris was murdered. The question became obsessive for him: “Where did he go?”. In response to his anguish, I remember thinking to myself: “Chris didn’t go anywhere, because Chris was never here.”

    The diversity and novelty of all of the discrete systems which make up our world of expression are nothing more than a condition. And what is that condition, one may ask? That condition is nothing more than a possibility of another condition. The most striking example of what I’m talking about can be expressed utilizing a sperm and an unfertilized egg. Both of those discrete systems have a finite lifespan in and of themselves; nevertheless, both the sperm and the egg are nothing more than a condition on a possibility, and that possibility is another condition.

    I challenge everyone to do a mind experiment, and follow that thread of thought through its natural progression and see where it leads… It should conclude with a profound question: “Is our own experience as a solipsistic self-model a condition on the possibility of another condition, and if so, what would that next condition be?”


  6. Perhaps we are not real?

    Our revels now are ended. These our actors,
    As I foretold you, were all spirits, and
    Are melted into air, into thin air:
    And like the baseless fabric of this vision,
    The cloud-capp’d tow’rs, the gorgeous palaces,
    The solemn temples, the great globe itself,
    Yea, all which it inherit, shall dissolve,
    And, like this insubstantial pageant faded,
    Leave not a rack behind. We are such stuff
    As dreams are made on; and our little life
    Is rounded with a sleep.


  7. Sadly we are too good at translating objects into meanings. For example, you watch Frozen, you’re completely unaware you are not watching a three dimensional space. You just read the two dimensional space as three dimensional because – your brain is just too quick to interpret that information in that way and present it to you without reminding you it’s BS.

    It’s a two dimensional screen. The thing called Elsa is X amount of pixels at certain luminosities and spectrums.

    Those pixels exist.

    Our inability to avoid falling into automatic translation of 2D to 3D means those fragments of truth, that the pixels exist, are taken and used to confuse us about the existence of Elsa. Because the best lies contain a pinch of truth.

    This text is illuminated pixels.

    That sentence is illuminated pixels.

    That sentence is illuminated pixels.

    This sentence is illuminated pixels.

    It will not really sink in for the automatic translation of arrangements of pixels into meanings.

    Meanings.

    That word is…


    1. It’s interesting if you think about it, but all the retina ever gets is a 2D representation of the world. Our nervous system combines the signals from both eyes to create the 3D versions. So we probably shouldn’t be too surprised that 2D representations work so well.
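
      As a rough sketch of the geometry involved (an idealized pinhole-camera model with assumed numbers, not a claim about how neurons actually do it), depth falls out of the disparity between the two 2D images:

      ```python
      # Toy depth-from-disparity calculation (all numbers are assumptions).
      focal_length_px = 800.0  # idealized "focal length" of each eye, in pixels
      baseline_m = 0.065       # distance between the eyes, about 6.5 cm

      def depth_from_disparity(x_left_px, x_right_px):
          """The same feature lands at different x positions in each eye;
          that horizontal offset (disparity) fixes its distance."""
          disparity = x_left_px - x_right_px  # larger for nearer objects
          if disparity <= 0:
              return float("inf")  # treat zero/negative disparity as "far away"
          return focal_length_px * baseline_m / disparity

      # A feature seen at x=420 by the left eye and x=400 by the right:
      print(depth_from_disparity(420.0, 400.0))  # 2.6 (meters)
      ```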


      1. Well I’d say we have attributions of distance from the amount the eyes are adjusting focus and how focusing on a closer thing blurs other things and vice versa. So I’d disagree that our eyes have no sense of dimension in seeing.

        But the issue is how close enough is good enough with human perception. Elsa seems real because the information of a world only has to be close to what is needed to conjure that set of lit pixels into being a being.

        Possibly some people love movies because philosophically they can’t really draw a distinction between them and Elsa or Luke – so they get sucked into the movie, by dint of becoming peers of Elsa and Luke Skywalker.

        Whereas perhaps Stein’s son is resisting being sucked down the rabbit hole, asking for some kind of distinction for why we are any more real than the two imaginary characters (which are depicted on very real mediums).


      2. And I’d add that Elsa exists in a space just as 3D as the real world — she was rendered on a computer working with 3D geometry after all. Elsa isn’t made of pixels, she’s made of polygons! The 2D/3D angle is a red herring, because as you point out the image on the retina is just as 2D as the image on the screen.
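
        A minimal sketch of what I mean (toy vertices and an assumed focal length; nothing from an actual renderer):

        ```python
        # Toy perspective projection: the model is 3D geometry, and the
        # pixels are only its 2D projection (all values here are made up).

        def project(vertex, focal_length=1.0):
            """Map a 3D point (x, y, z) onto 2D screen coordinates (u, v)."""
            x, y, z = vertex
            return (focal_length * x / z, focal_length * y / z)

        # A few vertices of a hypothetical polygon mesh, at different depths:
        mesh = [(0.5, 1.0, 2.0), (-0.5, 1.0, 2.5), (0.0, 1.5, 3.0)]
        print([project(v) for v in mesh])
        # Only the projected 2D points reach the screen, just as only a 2D
        # image ever reaches the retina; the geometry itself is 3D throughout.
        ```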

        Again (as in my other comment) I would instead say that she isn’t real because she’s a hollow shell — she has the superficial appearance of a (cartoon) person but it’s all surface.


        1. No, she is a series of on/off states in a computer – there was no 3D geometry. Just changing of on/off states in a computer. There are no polygons either.

          If you want to say she’s all surface, how do you actually prove that beyond assertion? You’ve eschewed the material base I’ve described and describe something going on inside computer chips as three dimensional. The chips are certainly three dimensional, but there’s nothing else beyond that. You seem to not go to that level, so I wonder how you prove she’s all surface without it?


          1. Which computer is that? Elsa was rendered on a render farm, consisting of many computers, and then later again on other computers, and then streamed by other computers (servers) to other computers (clients) etc.

            To the extent that you can talk about Elsa at all, you have to be talking about a certain pattern rather than any specific series of ones and zeros on any specific computer, and that pattern is three dimensional and built of polygons.

            If you want to be so reductionistic and materialistic about it I don’t see how you can talk about people either, as people are just temporary patterns of atoms, and those specific atoms also change over time.

            She’s all surface because the computers model her surface (polygons, textures etc) and they do not (I am assuming) model her internal organs (apart from maybe bones and muscles) or neurons etc.


          2. I really don’t have to talk about a specific pattern just because a number of computers were involved. Bit states changed, electric charges were sent down cables. You’re succumbing to heuristic thinking on the matter, as we’re all built to do, because thinking at such small levels is not efficient for finding where the good fruit is, as Pratchett would put it. There is no pattern, there are chips – they do have dimension. They also culminate in pixel configurations that are good at tricking your heuristics into seeing a three dimensional humanoid figure.

            I also don’t know why you have an issue with looking at it in a reductionistic way – you’ve watched Blade Runner, I assume. Even if you felt the Replicants were actually human (of a sort), you didn’t think ‘And maybe I’m a fabrication as well’. You still felt the replicants weren’t you – they were something other than you on some fundamental/reductionist level.

            You just defined people as temporary patterns of atoms – it’s as easy as that to talk about. Do these patterns, in Venn diagram terms actually even vaguely match the thing called Elsa? Keeping in mind we need some room for discrepancy, because every human is not only not a clone of each other but even clones grow differently and have different amounts of molecules (also over time molecules are cycled).

            Elsa and humans – it’s like trying to argue The Thing was ‘real’ when it was duplicating human forms as its infection route. Though at least the Thing was actually a life form and so much more similar to being human in that regard than the goings on in a bunch of computer chips.

            So at a certain level Elsa is worse than the Thing. Which was a pretty scary movie.


          3. Hi Callan,

            If there is no pattern, only chips, then what is Elsa? She is not the chips. OK, so fair enough you’re explaining why she’s not real, so it seems like no problem for you to say there is no Elsa, but if there is no Elsa you can’t very well talk about Elsa’s properties (i.e. being 2D rather than 3D).

            I don’t really follow your point about the replicants. Actually I would regard the replicants as being approximately human and not significantly different except insofar as there are very minor functional differences (i.e. failing the Voight-Kampff test).

            > Do these patterns, in Venn diagram terms actually even vaguely match the thing called Elsa?

            No, because Elsa is all surface. No interior detail.

            I think you’ve lost track of my position. While I do regard Elsa as a real pattern, and think that she exists as a pattern (I’m a platonist), I don’t think she’s at all comparable to a person (this is why I said she was all surface). My beef is a jokey minor one, making the point that the issue is not that Elsa is 2D rather than 3D, as if anything she is 3D rather than 2D. Her shape resembles the mathematical abstraction of a sphere more than it resembles the mathematical abstraction of a circle. The pixels are not Elsa, the pixels are just a projection of Elsa, just as the 2D image formed on our retina when we look at an actual person is just a projection of that person’s body.

            But if you don’t think that patterns are real, then I don’t think you can consistently say that people are real, as people are (self aware) patterns. Or do you think that some patterns (people) are real, and others (Elsa) are not?


          4. Elsa is something you keep repeating. If you keep referring to a ghost when it’s just a lampshade in a window at a peculiar angle, I’ll have to refer to the ghost in order to actually speak with you. If I just keep talking about a lampshade you won’t think I’m talking about anything you’re thinking about. There is no performative contradiction here – it’s a matter of when in Rome, speak as the Romans do. Even if they speak about something that doesn’t exist.

            The point about replicants was as said – you might pride yourself on thinking they are human. But you never thought you are a replicant (of some kind). Replicants are still ‘other’ to you. You still draw a distinction. They also have short life spans and can stick their hand in boiling water without flinching or taking damage.

            The phenomenon you call Elsa is to a human simply as much as a porch lamp is to a moth. The moth can’t help treating the lamp as the moon and you can’t help treating this phenomenon as something to give a 3D attribution to, even as you treat it as hollow. Ain’t nothin’ there, bro.

            I think there are plenty of examples in nature where one creature fakes being another. It fakes the ‘pattern’ of the other creature. Some flies have the colour patterns of wasps. What can I do when I say it is not a wasp but a fly and you tell me because of that I don’t believe in patterns, so can flies even exist?

            A machine has faked the pattern of a living humanoid. Even as you call it hollow, you’re still, in my estimate, partly duped.

            No, it is not a flattering prognosis. That’s why people stick to illusions – the alternative is unflattering so it doesn’t seem there is any alternative. To get past illusions means accepting some ugly – something depressed people are generally better at doing (scientifically proven, IIRC, to have more accurate self assessments than people in normal mental states)


          5. Hi Callan,

            Still don’t see the relevance of the talk about replicants. They don’t seem any more ‘other’ to me than a person with a different background or suffering from some medical condition I don’t share, and even if they did I don’t see what that has to do with the conversation about Elsa.

            > The phenomenon you call Elsa is to a human simply as much as a porch lamp is to a moth.

            Sure, OK. I’m not saying Elsa is anything like a human so that’s fine.

            > The moth can’t help treating the lamp as the moon and you can’t help treating this phenomena as something to give a 3D attribution to, even as you treat it as hollow.

            Well, OK, but it really has nothing to do with the convincing visual illusion. Elsa is 3D in the same way that a 6D hypersphere is 6D. She is a mathematical abstraction to me and as such has the kinds of properties mathematical abstractions have such as dimensionality, and in this light she is certainly not 2D made of pixels but 3D made of polygons.

            (in this discussion I’m thinking only of Elsa’s geometrical form rather than her role as a character in a story, because you got me thinking about whether she was 2D or 3D).

            You can say there ain’t nothing there, and that’s a fairly standard anti-platonist view which I disagree with but understand; but to say that she’s 2D rather than 3D seems to me to be mistaken, so that’s all I was trying to address.


          6. Hello DM,

            With replicants I’ve described it a few times now. Ignoring today’s technical capabilities for now, did you ever seriously think for a moment that you are a replicant? That your memories are faked? Either you didn’t, and yes, that’s ‘othering’ replicants because you’re acting as if you could never be them, or you did but you rejected it as not possible – which is othering the replicants as well. Can you see the line of real/something else you’re reflexively drawing there?

            On the thing called Elsa, you’re insisting multiple times that the thing is ‘3D’. It’s like a moth insisting ‘That’s a moon!’ when looking at the porch light. That’s no moon. That’s no 3D. How would you argue with the moth in such a case? You know it’s wrong, but it insists it is seeing the moon. How would you approach it? Maybe you’d have to go reductionist in order to get around its insistence.


          7. Hi Callan,

            I have to admit that I haven’t seriously considered the possibility that I am a replicant from the Blade Runner universe. Neither have I seriously considered the possibility that I’m a Cylon from the Battlestar universe. I have tried to consider the possibility that I am a Boltzmann brain or a simulation, and not really absolutely rejected either (though on balance I don’t lend such ideas much credence), but that’s about as close as I’ve gotten. Not sure that really counts as othering (because you’re talking about science-fiction constructs that we have no reason to believe can exist on earth in the real world at present technological levels), but even if it did, I don’t see the connection to anything else we’re talking about.

            > On the thing called Elsa, you’re making insisting multiple times that the thing is ‘3D’. It’s like a moth insisting ‘That’s a moon!’ when looking at the porch light. That’s no moon. That’s no 3D. How would you argue with the moth in such a case?

            I don’t think that’s analogous at all, and you can see the disanalogy in how you’re torturing the language to treat an adjective (3D) like a noun (moon) when you say “That’s no 3D”. 3D is a property something has, not a thing something is. The analogy would work only if I were saying Elsa were a human. Saying she is 3D is more like saying a light source is bright. Elsa is not a person and the light is not a moon. Elsa is 3D and a person is 3D. The light is bright and the moon is bright.

            Again, a much better analogy, I feel, is to the 6D hypersphere. Would you tell a mathematician that a 6D hypersphere is in fact 2D because the pages on which its equations are written are 2D? That’s how your argument comes across to me.

            If you just want to say that a 6D hypersphere is not really 6D at all because it doesn’t actually exist and so has no genuine properties, then that’s different (though I still disagree). But it seems silly to say that it’s actually 2D because pages/screens are 2D.

            Like

          8. Hello DM,

            Rejecting being a replicant or simulation means you’re resorting to reductionism as well. How else could you be so sure?

            3D is a property something has, not a thing something is.

            Well, that’s your opinion, certainly. To me it puts the categories in the wrong order – there is a 3D space and there are things in it; there aren’t things that then have a property of 3D. It’s like referring to something as a bit of memory, then saying it’s just a property of that bit that it’s inside a memory chip inside a computer, rather than its being part of the computer. Or saying that a particular swirl shape in set concrete has the property of being in a block of concrete, rather than that being part of the block of concrete is what it is.

            When there is a larger thing and an object is made of that larger thing, it seems hardly appropriate to say that the object merely has a property of the larger thing. Three-dimensional space is the larger thing – objects in it don’t have it as a property. At best, objects in it are a property of the three-dimensional space. Reality isn’t a footnote of each object; each object is a footnote of reality.

            You’ve really a different idea of hierarchy, one that to me seems to be exceptionalism. Okay, so we diverge sharply on this matter.

            Again, a much better analogy, I feel, is to the 6D hypersphere. Would you tell a mathematician that a 6D hypersphere is in fact 2D because the pages on which its equations are written are 2D?

            A 6D hypersphere he is holding in his hands, or a reference to a theoretical arrangement? Yes, I’d say the idea of it is 2D – or possibly 3D, because the idea is also arranged across his brain. Because he’s presenting no actual object.

            In the end the question was how to argue the moth out of treating the porch light as a moon. The moth isn’t talking about brightness, it’s saying it is a moon.

            I honestly don’t think you’ll get very far unless you use reductionism.

            (Also, the quotes were there so as to use a Star Wars quote and then a derivative of that quote… it’s a torturing of Star Wars dialog, really.)

            Like

          9. Hi Callan,

            > Rejecting being a replicant or simulation means your resorting to reductionism as well. How else could you be so sure?

            I reject being a replicant because there doesn’t seem to be any such thing as a replicant in my environment. In a world where I knew that some people were replicants and some were not (if I were in Deckard’s place) then I would be open to being a replicant.

            I’m far less confident about not being a simulation, but ultimately because of my views on other matters (mathematical universe hypothesis) I think it’s a meaningless distinction whether one is in a simulation or not, and I largely dismiss the idea for that reason. But that really is a tangent to a tangent, so let’s not go there. Again, if I put these views aside and I thought that some people were simulations and did not know it, then I would not be especially confident that I am not a simulation. My beliefs on the matter would align with the evidence and probabilities, e.g. if I knew that 50% of people were real and 50% of people were “fake” somehow, then all else being equal I would judge it 50% probable that I were “fake”.

            I get that you were torturing Star Wars dialog, but if you are reduced to saying that something is “a 3D” rather than, say, “a 3D shape” to force the analogy, then you ought to be able to see that the analogy doesn’t work.

            The ordinary sphere is 3D. The ordinary circle is 2D. That dimensionality is a property of the shape. The shape isn’t just part of the 3D or 2D space, because you can have a circle in a 3D space or a sphere in a 4D space. Similarly, I would say that the earth’s orbit around the sun is a 2D ellipse. It isn’t 3D even though it is embedded in a 3D space (well, I guess it’s bound to be slightly 3D in some imperceptible way — in the same way that pixels on a screen are perhaps — but hopefully you take my point anyway). And I say *a* 3D space because I don’t think there is just one 3D space, I think there are unlimited disjoint 3D spaces. The space Elsa is in is not ours.

            Anyway, we agree that Elsa is not part of our 3D space. She’s in the 3D space defined by the mathematical object being rendered by the render farm. She is in some other 3D space the way the mathematician’s 6D hypersphere is in a 6D space.

            The mathematician is not holding a 6D hypersphere in his hands. He is just doing a mathematical analysis of an abstract object. I contend that it is the height of perverse silliness to insist that this object he is analysing is 2D because it’s on a page or 3D because it’s in his brain. The hypersphere either doesn’t exist at all (on mathematical anti-realism) or it is 6D.
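
            Just to pin down what I mean (this is the standard definition, reading “6D hypersphere” as the unit 6-sphere – the point doesn’t depend on which reading you take):

              S^6 = \{\, x \in \mathbb{R}^7 : \lVert x \rVert = 1 \,\}

            The dimensionality is fixed by the definition itself; nothing in it refers to the page the equation is written on or the brain entertaining it.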

            > In the end the question was how to argue the moth out of treating the porch light as a moon.

            Again, I can’t answer that question and I reject the analogy, because I’m not saying that Elsa is a human, I’m saying that from a geometrical point of view she is 3D, a property she shares with a human. The analogy would be to a property the light shares with the moon.

            But if you accept my analogy to the 6D hypersphere and you really would insist to the mathematician that his hypersphere is not 6D but 2D or 3D, then I think we can end it there, because this is in my eyes a successful reductio ad absurdum of your position.

            Like

  8. Hi Mike,

    I would agree with your first thought, that what is real is relative to a viewpoint. I and my physical environment (which includes you) are real to me, but someone in another possible universe has a different viewpoint, and we may not regard each other as real.

    Whether that other universe (or this one) is real in an objective sense is to me an ambiguous question. It is and it isn’t, depending on what you count as real — but to me that’s just down to how you define “real” and not something that has a clear answer otherwise.

    Your later thought “In the end, I think that which enhances our ability to predict future experiences, future observations, is real. That which can’t, isn’t.” is OK with me as long as what you’re talking about is physical reality from a particular observer’s perspective. I wouldn’t agree with it if it were construed as talking about an objective sense of reality or used as a reason to doubt the platonic reality of very abstract (and so useless in a practical/predictive sense) mathematical objects.

    I would say that Elsa is technically real in a limited, superficial sense, in that she is a real pattern, but that pattern is very lacking in detail compared to the pattern that corresponds to an actual human being. We have inner worlds and experience in virtue of all the neural activity going on in our heads. Elsa does not – she’s a hollow shell with all her behaviour scripted by a human. So in casual speech I’d be fine with saying she’s not real.

    Liked by 1 person

    1. Hi DM,
      Given your mathematical platonism, I can see where you’re coming from. I’m not a platonist myself, being more of an empiricist or semi-empiricist who has doubts about the ultimate objectivity of platonic forms. (I think they might be better described as intersubjective concepts.) So for me the prediction criterion applies across the board.

      I did say something in the post that, in retrospect, I’m not quite sure about. I said, “I think that which enhances our ability to predict future experiences, future observations, is real.” The “future observations” part was meant to be an additional clarification, but it might have inadvertently narrowed what I was saying from predicting future experiences to only predicting sensory perceptions.

      But if something could predict a conscious experience that is not an observation, I think it might have some claim to being real. I’m unclear whether this would open the door to abstract mathematical objects with no physical correlates still being real.

      Like

  9. I haven’t read the comments, so I hope I’m not repeating what someone else has already said…

    I think what “real” means depends largely on the context. I could argue that unicorns are real, in some sense. But even if we imagine ourselves to have access only to our own consciousness, the same stuff of reality is still there – perhaps only in consciousness, but just as it was before philosophizing a la Descartes. And our various judgements of reality remain the same; they continue to pertain to all that we experienced before the Cartesian belly-gazing. Maybe this is what you meant in saying: “But our minds can hold both real and unreal things.”? (Because if we’re being truly Cartesian, we can’t say we know whether our minds hold real or unreal “things.”)

    Yeah. Let’s not be truly Cartesian. 🙂

    As for your question:
    “Is there any concisely stated standard we can use to tell the difference?”

    I don’t think so. At least, if there is, I doubt it’s concisely stated. People who write about ontology aren’t terribly good at being concise, myself included.

    Suppose I’m talking with my friend about a fictional character in a novel, but I’m under the mistaken assumption that the character is a real person – maybe the author, maybe an historical figure. Someone can say, “No, so-and-so’s not real,” and I wouldn’t take that to mean the character isn’t in the novel, or that the novel is a figment of my imagination; I would automatically understand that the character doesn’t represent any “real” person, i.e., any biological creature who’s ever lived.

    In other words, “real” and “unreal” are context dependent, and usually we understand well enough in what sense something is being described as unreal. But sometimes it’s linguistic gymnastics. What can we talk about or think about that’s utterly and completely unreal? Ghosts? The Easter bunny? A geocentric theory of planetary motion? But those are real in some sense.

    I’m not sure we always need a theory to know what’s real. Predictive power might be one way we decide upon what’s real, especially in science, but I think most of the time it’s obvious, and if it’s not obvious most of the time, I think we’re in trouble. We question the reality of something when it causes us problems. If it works, hey…

    By the way, I don’t think I’d say any of this to a kid. I’d say, “Eat your sandwich, Jimmy, before I make it cease to exist.”

    Liked by 2 people

    1. Thanks Tina.

      “The Easter bunny?”

      This is an interesting example, because parents go out of their way to fool children into believing in things like the Easter bunny or Santa Claus. Essentially they make their children’s belief in these things predictive. It’s considered a milestone of childhood when the child figures out these things aren’t real, that is, when they figure out that the concepts aren’t really predictive.

      “but I think most of the time it’s obvious,”

      But what makes it obvious? For example, if I see a cat in front of me, the perception of the cat is itself a prediction – or actually a whole set of predictions – on how the object in front of me will behave, how it will react if I approach it, what sounds it might make, etc. These predictions are extremely natural and “automatic”, which causes us to use labels such as “obvious”, but does that make them something other than predictions?

      Even your knowledge of your house, which seems very obvious, is itself a complex prediction framework, essentially a theory, a model of reality. If that model was wrong, you’d have trouble getting from the bedroom to the kitchen.

      These examples might seem overly pedantic, until you realize that one of the challenges of schizophrenia is a difficulty in forming models that are accurately predictive. Sufferers’ models are wrong, but they have no recourse to anything else to realize they’re wrong, nothing to compare them to.
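
      To make the idea concrete, here’s a toy sketch – my own illustration, not any particular neuroscience model – in which a percept is just a running prediction, corrected by prediction error:

        # Toy "perception as prediction" loop: the percept is the model's
        # running estimate of the stimulus, nudged by prediction error.
        def perceive(stimuli, learning_rate=0.5):
            estimate = 0.0  # the agent's current model of the world
            percepts = []
            for s in stimuli:
                error = s - estimate               # prediction error
                estimate += learning_rate * error  # update the model
                percepts.append(estimate)          # the percept is the prediction
            return percepts

        # The percepts track the world as the model corrects itself.
        print(perceive([1.0, 1.0, 1.0, 5.0, 5.0]))

      With learning_rate set to 0, the model never updates no matter how wrong it is – a crude analogue of having no recourse to correction.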

      “By the way, I don’t think I’d say any of this to a kid. I’d say, “Eat your sandwich, Jimmy, before I make it cease to exist.””

      Haha! Totally agreed.

      Liked by 1 person

      1. Good question! I have no idea what makes the obvious obvious, but I’m not sure that everything I perceive is a prediction. For instance, what if I don’t know what I’m seeing? Unless you mean “prediction” as a condition for perceiving anything at all.

        Liked by 1 person

        1. A lot of neuroscientists see perception (and consciousness overall) as prediction. To understand why, rather than looking at it as an already conscious system, you have to look at it from the perspective of an organism that is nothing but reflex arcs. That system has no predictions. It receives stimuli and reacts in a programmatic fashion.

            Now, let’s say it develops a distance sense, say chemoreception, which enables it to detect gradients between higher and lower levels of nutrient. That is a perception – a very simple one – but it’s still a primitive model of its immediate environment, and it’s also a prediction mechanism, allowing it to predict what direction more nutrient lies in, that is, which direction will lead to more satiation.

          In this view, the brain is reflexes with lots of prediction added to enhance the scope in time and space of what the reflexes can react to.
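
            To make the two stages concrete, here’s a minimal sketch – purely illustrative, with a made-up environment and numbers – contrasting a pure reflex with gradient-based prediction:

              # Hypothetical 1D environment: nutrient peaks at x = 10.
              def nutrient(x):
                  return -abs(x - 10)

              def reflex_step(x):
                  # Reflex arc: a fixed response to the current stimulus,
                  # no prediction involved.
                  return x + 1

              def chemotaxis_step(x):
                  # Distance sense: compare nutrient ahead vs. behind and
                  # move toward the predicted direction of more nutrient.
                  return x + 1 if nutrient(x + 1) >= nutrient(x - 1) else x - 1

              x = 0
              for _ in range(15):
                  x = chemotaxis_step(x)
              print(x)  # ends up hovering around the peak at x = 10

            The reflex agent’s behaviour never depends on anything beyond its current stimulus; the chemotactic one is already running a (very simple) predictive model of its surroundings.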

          Liked by 1 person

          1. Thanks for the explanation. I keep taking the word “prediction” either too narrowly, as in the common usage, or too broadly, in a phenomenological sense, prediction being something like an idea running alongside its percept.

            According to your sense of the word, does a plant, then, make predictions? And what sort of organism has no predictions?

            Liked by 1 person

          2. That’s an interesting question. Nothing I’ve read about plant behavior inclines me to think they make predictions. To be sure, their behavior is complex and often very sophisticated, but it never seems to rise above programmatic responses.

            What kind of organism has no predictions? Aside from plants, animals without distance senses (sight, hearing, smell) don’t seem to make any. Examples would be worms, starfish, jellyfish, or corals. There are borderline cases, such as land snails, and many arthropods (such as ants) might make perceptual predictions without any action-consequence predictions.

            For it to be a prediction, I think the organism has to take in information and use it to predict future stimuli. Reflexive responses don’t count, because they usually just amount to genetic programming, perhaps modifiable by classical conditioning, which is itself programmatic.

            Liked by 1 person

            Setting aside the assertion that any perception of the world is essentially a prediction framework, I think the key marker is the ability to handle novel situations, to consider multiple courses of action, and to make a decision based on value trade-offs. The main point is to show behavioral flexibility in circumstances that instinct cannot prepare for. (I’m basing this on what I’ve read. I’d have to do research to find specific tests that have been used.)

            Liked by 1 person

  10. “I settle for something a bit simpler: he can do whatever he decides to do, but Elsa can do only what happens in the story.”

    In my experience, characters in stories do have free will, or something like it, to the occasional annoyance of their writers. I’m sure that’s the writer’s subconscious at work, or something like that. I don’t know. But I definitely wouldn’t make that distinction in that way.

    Liked by 1 person

  11. I have to think about this (and read all the comments)… My first impression is that the characters in Frozen are obviously real in some sense, but not realized.

    I also have an early impression that almost any definition of “really real” can be applied to “pretend real.” That we find it hard to deny the possibility that all this is a virtual simulation, or that free will is a hard case to make, both go to the difficulty of distinguishing “real” and “pretend.”

    Perhaps the answer to the child’s question is, “Because we can prove we are.” We can pass something like a reality Turing test. Fictional characters cannot. (Until they become driven by strong AI, and that’s a whole can of real worms.)

    Liked by 1 person

    1. Applying the Turing test here is an interesting idea. But what aspect of the test gives us the impression that there is a person there? It seems like our theory of mind is involved, and like any theory, it should make predictions about what another mind might be able to do. Matching those predictions might be the equivalent of passing the Turing test.

      Like

      1. Not the Turing test, per se, but something like it intended to demonstrate reality rather than consciousness. I’m not sure how it would work, though. As I said, it’s the classic conundrum of realism. What really is real?

        Like
