Inflate and explode, or deflate and preserve?

Philosopher Eric Schwitzgebel has an interesting post up criticizing the arguments of illusionists, those who have concluded that phenomenal consciousness is an illusion.

Here’s a way to deny the existence of things of Type X. Assume that things of Type X must have Property A, and then argue that nothing has Property A.

If that assumption is wrong — if things of Type X needn’t necessarily have Property A — then you’ve given what I’ll pejoratively call an inflate-and-explode argument. This is what I think is going on in eliminativism and “illusionism” about (phenomenal) consciousness. The eliminativist or illusionist wrongly treats one or another dubious property as essential to “consciousness” (or “qualia” or “what-it’s-like-ness” or…), argues perhaps rightly that nothing in fact has that dubious property, and then falsely concludes that consciousness does not exist or is an illusion.

Schwitzgebel is talking about philosophers like Keith Frankish, Patricia Churchland, and Daniel Dennett.  I did a post a while back discussing Frankish’s illusionism and the debate he had arranged in the Journal of Consciousness Studies about that outlook.

As I noted back then, I largely agree with the illusionists that the idea of a form of consciousness separate and apart from the information processing in the brain is a mistaken one, but I remain uncomfortable saying something like, “Phenomenal consciousness doesn’t exist.”   I have some sympathy with the argument that if it is an illusion, then the illusion is the experience.  I much prefer pointing out that introspection is unreliable, particularly in trying to understand consciousness.

But as some of you know from conversation on the previous post, I have to admit that I’m occasionally tempted to just declare that the whole consciousness concept is an unproductive one, and that we should just move on without it.  But I also have to admit that, when I’m thinking that way, I’m holding what Schwitzgebel calls “the inflated” version of consciousness in my mind.  When I think about the more modest concept, I continue to see it as useful.

But this leads to a question.  Arguably when having these discussions, we should use words in the manner that matches the common understandings of them.  If we don’t do that, clarity demands that we frequently remind our conversation partners which version of the concept we’re referring to.  The question is, which version of consciousness matches most people’s intuitive sense of what the word means?  The one that refers to the suite of capabilities such as responsiveness, perception, emotion, memory, attention, and introspection?  Or the version with dubious properties such as infallible access to our thoughts, or being irreducible to physical processes?

I think consciousness is one of those terms where most people’s intuitions about it are inconsistent.  In most day-to-day pragmatic usage, the uninflated version dominates, and it’s that version dictionary definitions describe.  But actually start a conversation specifically about consciousness, and the second version tends to creep in.

(I’ve noticed a similar phenomenon with the concept of “free will.”  In everyday language, it’s often taken as a synonym for “volition”, but talk specifically about the concept itself and the theological or libertarian version of free will tends to arise.)

So, are Frankish and company really “inflating” the concept of phenomenal consciousness when they call it an illusion?  It depends on your perspective.

But thinking about the practice Schwitzgebel is criticizing, I think we also have to be cognizant of another one that can happen in the opposite direction: deflate and preserve.  In other words, people sometimes deflate a concept until it is more defensible and easier to retain.

Atheists often accuse religious naturalists of doing this with the concept of God, accusing them of deflating it to something banal such as “the ground of being” or a synonym for the laws of nature.  And hard determinists often accuse compatibilists of doing it with “free will.”  I’ve often accused naturalistic panpsychists of using an excessively deflated concept of consciousness.  And I could see illusionists accusing Schwitzgebel of doing it with phenomenal consciousness.

Which is to say, whether a concept is being inflated or deflated is a matter of perspective and definition.  And definitions are utterly relativist, which makes arguing about them unproductive.  Our only anchor seems to be common intuitions, but those are often inconsistent, often even in the same person.

I come back to the requirements for clarity.  For example, in the previous post, I didn’t say consciousness as a whole doesn’t exist, but was clear that I was talking about a specific version of it.  For me, that still seems like the best approach, but I recognize it will always be a judgment call.

Unless of course I’m missing something?

This entry was posted in Philosophy.

70 Responses to Inflate and explode, or deflate and preserve?

  1. paultorek says:

    You’re missing something.

    To wit, when both philosophers and ordinary folk inflate, they do so for reasons, and often spectacularly bad reasons. Some think, for example, that physical is the “opposite” of mental, in so far as we talk about (e.g.) physical versus mental ailments. But this just uses the wrong concept of “physical” for the philosophical context: what is wanted is more a what-the-laws-of-physics-are-about concept, which is different. Failing to notice that it is different, folks can make a wrong assumption about what physical events and processes must be like – even if we give them the right (i.e. relevant) definition of “physical” at the start.

    Since the inflation comes from a misunderstanding of physical processes, it would be wrong to use the inflated definition of “mental” whereby the mental is inherently nonphysical. It could be wrong even if 100% of the folk-on-the-street endorse the “mental implies nonphysical” definition.

    Now, most dualists have reasons that aren’t as bad as the one I just noted. But what matters is not how bad the reason is. What matters is whether the special property A (in our story, nonphysicality) has been asserted based on a misunderstanding of not-A (here, physicality). And I think you’ll find that the answer is almost always yes. That definitely applies to both consciousness and free will.


    • Hey Paul,
      Certainly if someone holds an inflated version of a concept, just by labeling it “inflated”, we’re inherently saying that version is incorrect, regardless of whatever problematic reasons they have for holding it. Note that the same is true for deflated versions, although the deflated versions don’t necessarily have the same ontological problems that the inflated ones would have since they are a subset of what we view as the correct version.

      On your points about “physical”, it occurs to me that it could be subject to the same inflation or deflation as the other concepts. For example, I’ve had discussions with people who didn’t think energy or gravitation was physical. Arguably they’re using a deflated view of the physical, at least in terms of how physicists see it. And I’ve had people be skeptical when I pointed out that information is physical; from their perspective, I’m using an inflated version of physical. (A perspective I obviously disagree with.)

      So, no argument that people hold inflated concepts due to defective understandings of that concept. I didn’t mean to imply otherwise. It’s the definitions that are relativist, not the reality the words are supposed to refer to.


      • paultorek says:

        The “inflated” label should be based on a diagnosis of erroneous thinking. (The “deflated” label would have to be based on different evidence.) Here’s a helpful idea: in place of inflated/deflated we can use the relatively neutral thicker/thinner for definitions that include more or fewer requirements, respectively. Note that you can replace all occurrences of “inflated definition” in my argument with “thicker definition”, and nothing important changes. Wish I thought of that earlier.

        By this criterion you just mislabeled the gravitation and information examples. The people who say “physical” excludes gravitation have a more demanding implicit definition of “physical”, so they have the thicker definition. Similarly for information.

        I didn’t mean to imply that the ability to track down erroneous thinking, thereby showing how a definition truly does deserve the label “inflated”, contradicts any of your main points. But, it’s an important tool. I think your original post throws up its hands too early. It won’t always be a judgment call.

        For what it’s worth, I think the “divide” between semantic versus substantive disputes is a misrepresentation of a continuum. Moreover, it’s a continuum whose “nearly purely semantic” end is extremely sparsely populated.


  2. Steve Ruis says:

    Maybe that there is no such thing as a philosophical argument that proves anything. Methinks we need to just get back to the lab.

    Philosophy has been grinding on this “problem” for many, many centuries; science for maybe only one (seriously). If that kind of head start doesn’t work, then maybe the wrong tool is being used, no?


    • I definitely think the answers will require science. But a lot of neuroscientists read philosophy and their research hypotheses are often driven by philosophical questions. Of course, a lot of the philosophy of mind is rationalization for dualism, ignoring what science has been telling us for the last century and a half. I agree that portion isn’t helpful.

      So I think it’s a mistake to dismiss philosophy out of hand, but I agree it should be evaluated with a skeptical eye.


  3. Callan says:

    To me the argument just seemed unfair – person A is saying qualia are ineffable, then person B says they don’t exist because only matter exists, then person C is acting like person B is saying both that qualia are ineffable AND that that means they don’t exist. Person C is just being unfair in their argument, misattributing claims to B that B did not make.


    • If you read Schwitzgebel’s post, he does put some responsibility on A, noting that in their enthusiasm, the statement that qualia are ineffable (which is uncontroversial) is often paired with assertions about what that means, typically involving some implied variation of dualism.

      B then reacts to the whole package and declares that qualia don’t exist. Of course, Schwitzgebel’s main point is that B is throwing the baby out with the bath water.

      Ironically, A and B agree on a definition of qualia that is non-physical, they just disagree on whether it is reality. C disagrees with both of them on the definition of qualia, but ends up agreeing with A that they exist, even though they’re not talking about the same thing.

      Which just brings me back to my observation that productive discussion in this area requires clarity.


      • Callan says:

        No, the unfair bit is treating it as if A and B somehow agree on a definition of qualia, rather than B simply doing as the Romans do (so to speak) and working from A’s point of view. If someone says dragons exist because there’s kryptonite in the world, someone else saying there’s no kryptonite in the world doesn’t mean they believe in or agree with the association between dragon existence and kryptonite.

        Let’s look at the first example: “Paul Feyerabend (1965) denies that mental processes of any sort exist. He does so on the grounds that “mental processes”, understood in the ordinary sense, are necessarily nonmaterial, and only material things exist.”

        Does Feyerabend agree with person ‘A’ about the definition or is he just referring to the person’s belief/claim and dismissing it from its own point of view, even if Feyerabend doesn’t share that point of view?

        The blow-up actually appears to be taking someone engaging others’ ideas and treating it as if they also claim those ideas in doing so.


  4. James Cross says:

    I’m starting to find much of the philosophical discussion of consciousness tedious, but I have some random thoughts, not necessarily agreeing or disagreeing.

    The concept of consciousness as an undeniable stream of experience is simple and somewhat refreshing. So I like that.

    Nevertheless, it seems to me that consciousness could exist and be illusory at the same time, much like a mirage exists as an image we see but still represents an object which does not actually exist.

    I don’t find the equation (or perhaps “correlation” is a better word) of consciousness with information processing useful, since it doesn’t distinguish consciousness from information processing that doesn’t produce it. Information processing probably is involved in consciousness but doesn’t explain it.

    The distinctions between physical/mental, material/non-material are distinctions made by consciousness and may be like mirages.

    Time and memory may be essential components of consciousness and I don’t find this to be much discussed. Our stream of experience is shaped in the present by memory which draws on the experience of the past.


    • Thanks James.

      On information processing, my actual view is that consciousness is an application of information processing, or more accurately, a collection of applications in a hierarchy:
      1. reflexes, automatic reactions to stimuli.
      2. perception, building predictive models of the environment, increasing the scope in space of what the reflexes are reacting to.
      3. attention, prioritizing what the reflexes react to.
      4. imagination, sensory and action simulations as a guide to action, increasing the scope in time of what the reflexes are reacting to, enabling what we commonly think of as volition.
      5. introspection, a feedback mechanism that enables symbolic thought and vastly increases the range of 4.

      So consciousness could be thought of as a mechanism to increase the scope in time and space of what the organism can react to.
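      Purely as a toy sketch of that layering (every function and name below is my own invention for illustration, not anything from the post, and certainly not a claim about how brains work), each layer can be pictured as widening what the base reflexes ultimately react to:

```python
# Hypothetical caricature of the five-layer hierarchy described above.
# All names and logic here are illustrative stand-ins.

def reflex(stimulus):
    """Layer 1: a fixed, automatic reaction to a stimulus."""
    return f"react to {stimulus}"

def perceive(raw_signals):
    """Layer 2: build a simple model of the environment from raw signals,
    widening the scope in space of what reflexes can respond to."""
    return [f"object inferred from {s}" for s in raw_signals]

def attend(percepts, priority):
    """Layer 3: prioritize which percepts get reacted to."""
    return [p for p in percepts if priority in p]

def imagine(percept, actions):
    """Layer 4: simulate candidate actions before acting, widening the
    scope in time. (Here the 'simulation' is a trivial stand-in score;
    a real system would evaluate each action against the percept.)"""
    simulations = {a: len(a) for a in actions}
    return min(simulations, key=simulations.get)

def introspect(trace):
    """Layer 5: feed the system's own processing back in as input."""
    return f"noticed that I {trace}"

# One pass through the stack:
percepts = perceive(["light at edge of view", "sound nearby"])
focus = attend(percepts, "sound")
choice = imagine(focus[0], ["freeze", "flee", "approach"])
final = reflex(choice)  # still a reaction, just to a far richer input
report = introspect(f"chose to {choice}")
```

      Again, this is just a toy to make the layering concrete; nothing in the argument hangs on its details.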


  5. James Cross says:

    A reflex by definition doesn’t involve conscious thought.

    I’m not sure how tagging any of these neurological activities as information processing provides any insight into consciousness. Information processing happens on my phone but my phone isn’t conscious.


    • On reflexes, I agree, but a panpsychist who defines consciousness as anything that interacts with its environment, probably wouldn’t. And that layer is an important one. By layer 4, it manifests as affects, emotional feelings.

      My view is that human consciousness requires the entire hierarchy. Most vertebrates seem to have 1-4 to varying degrees, but it’s not clear that non-primates have 5. Basically I see those layers as accounting for the major components of what we commonly call “consciousness.”


    • Callan says:

      Consciousness just needs to have no sense of all the reflexes it’s made up of and then it’s easily enough grasped as reflexes. When does a thought come to you? Perhaps there’s a structure of reflexes just raising the thought or memory, completely unsensed. The memory is recalled but the storage of the memory is unknown, even though you can see people with brain damage lose memories.

      Suppose you’re all just reflexes and synaptic responses, but without a log of all of them happening, there just isn’t any information to tell you what consciousness is?


  6. Excellent post Mike. I consider it quite lame to inflate a definition so that physical examples of it quite obviously don’t exist, or to deflate a definition so that they do. Of course these people erroneously believe that they’re helping us understand what consciousness “truly is”, which you know I consider to be a false premise anyway. Philosophy oversees the topics which I consider science to at least implicitly rest upon (metaphysics, epistemology, and axiology), so without a respectable community with various agreed upon principles in these regards (such as my EP1), science thus suffers.

    Panpsychist deflation seems pretty clear to me, since causality itself can then be defined as what’s conscious. Thus naturalism would mandate that everything which is real is conscious as well. How insightful!

    Could you give me some specifics regarding the exploder side? I just listened to my 22,000-word notes for Dennett’s Consciousness Explained, but couldn’t find where he put something in that I wouldn’t. In fact he seemed to take something vital out — no qualia! Perhaps this is why the book is derisively referred to as “Consciousness Explained Away”? Is it because he’s removed the most instrumental element? Regardless, my 2015 notes exhaustively detail the book as crap. That it shot him to superstardom really makes a statement about the state of things in this regard. He’s one hell of a clever and charismatic old trickster!

    I’m far less familiar with Frankish and Churchland however, so I’d love your thoughts on how they’ve “exploded” consciousness.


    • Thanks Eric.

      “Could you give me some specifics regarding the exploder side?”
      If you haven’t yet, you might want to read Schwitzgebel’s piece directly since this is his concept. In general though, I see the inflated version as the one that assumes phenomenal experience isn’t the information processing, that the distinction between “access consciousness” and “phenomenal consciousness” is something more than perspective, that there is something ontological about it. Personally I think that ontology is hopelessly tangled up with substance dualism. The fact that so many people who hold that ontology insist that they’re not dualists means that someone is confused. (Naturally I think it’s them 🙂 )

      I do think Dennett, when he dismisses qualia, is dismissing an inflated version. But to be fair to him, as Schwitzgebel admits, a lot of people see the inflated version as inseparable from the more modest version that sees a quale as a unit of subjective experience without any ontological assumptions.

      I haven’t read Churchland at length, just an article or two here and there, and watched her in some talks and conferences, so I can’t really comment on her arguments. I have read Frankish enough to know that he is an unapologetic illusionist. I think he sees that position as justified because most people hold the inflated view of phenomenal consciousness.

      As I noted in the post, I agree with the illusionists ontologically, but I disagree with the way to communicate it. Although as I’ve noted before, I’m sometimes tempted to join them and dismiss the whole concept of consciousness as hopelessly entangled with Cartesian dualism and a mistaken concept. But that attitude tends to end conversations, so it seems more productive to talk about the mechanisms which make up this thing we refer to as “consciousness.”


    • Right Mike. I’d forgotten that this illusion business is not really about consciousness not existing, but rather that perceptions of reality (phenomenal color, sound, taste, and so on) don’t quite exist as we perceive them to. I don’t see the point of illustrating standard physics to people who believe that “red” ultimately exists in nature! I’d hope that these distinguished academics would find better things to do than state what educated people have understood for quite a while. And otherwise I’d hope for them to not go about this project in such an overstated way. Apparently it adds to their popularity, which is troubling.

      Of course you know that I love discussing my own entirely subjective model of consciousness (or all “illusion”, if we must call it that). Here there’s a non-conscious computer (or brain) that outputs +/- experienced value from which to drive the function of the conscious form of computer by which existence is experienced. I theorize that the “illusion” of consciousness is created because creatures in more open environments with no teleological element are unable to gain sufficient programming to deal with novel circumstances. Theoretically just like our troubled non-conscious robots, they can’t otherwise gain enough autonomy. So conscious life took over the more open environments.


      • “I don’t see the point of illustrating standard physics to people who believe that “red” ultimately exists in nature! ”

        That’s a good way of putting it! Colors don’t exist, except in nervous systems, where they are an abstraction, a convention used to communicate the reception of certain wavelengths of electromagnetic radiation reflected off of surfaces, a convention that is adaptive because it aids in the construction of predictive models of the environment.

        “I theorize that the “illusion” of consciousness is created because creatures in more open environments with no teleological element are unable to gain sufficient programming to deal with novel circumstances.”

        Would it be fair to say then, that your theory of consciousness considers there to be degrees of it? For example, a fruit fly has very limited ability to learn new things. The vast majority of its behavior is instinctual. A mouse has a greater share of learned behavior, as does a bear or dog. When we get to primates, the lion’s share of the behavior is learned, built off of a foundation of instincts. So would it be accurate to ascribe more consciousness to a primate than a bear, more to a bear than a mouse, and more to a mouse than a fly?

        Of course I realize “learned behavior” can be a tricky phrase. A worm’s reflexive responses can end up being conditioned, which can be considered a form of learning. But the worm shows no signs of building mental concepts, predictive models, so its behavior remains relatively reflexive in nature. So maybe this isn’t quite the correlation I’m thinking it is?


    • I should qualify one thing there Mike. I consider it extremely important for standard science to be taught to people who aren’t familiar with it, such as children. It’s those who oppose some of its highly accepted ideas with little more than faith based alternatives that I’d rather we not worry so much about. And are these the people that Dennett and the rest nevertheless use their “illusionism” to counter? No, I suspect that with this funky little stance they’re simply playing “the fame game”.

      Surely most everyone believes that there are more and less advanced forms of life in a conscious sense, as in primate to bear to mouse to fruit fly to worm (pending a given consciousness definition regulating what has this capacity at all). Yes “learning” isn’t the key from my own such definition, but rather the existence of something which is motivated to function through a punishment/ reward dynamic. Without this I wouldn’t say that any of our computers are conscious, for example, though some may be said to have programming which helps them non-consciously adapt to various presented circumstances, or thus “learn”. But how might they do so if their programming isn’t set up to deal with a given circumstance? Not only do our pathetic machines seem to fail here, but I suspect that under more open environments evolution’s non-conscious forms of life do as well. Thus the theorized need for teleological function.

      As I define consciousness I actually suspect that the absolute number of conscious calculations done in my head is less than one thousandth of one percent of its non-conscious calculations. From my model consciousness exists as an output of the necessarily far larger non-conscious computer. So if the human has 86 billion neurons, and the fruit fly also has a punishment/ reward element but only 250,000 neurons, perhaps its function is proportionally “more conscious” than the human?


      • Eric, do you make any distinction between your view of consciousness and sentience (in the sense of being able to feel and perceive)? Just curious.

        On fruit flies, my take, which I’ve held since reading F&M, is that everything about it is smaller scale. Its ability to take in information about the environment happens at a much lower resolution than ours. Given its small size and the scale it needs to perceive things at, this isn’t a problem for it. But it’s also limited in its ability to extract meaning from its low resolution sensory flow. It has imagination, but it’s minuscule compared to the imagination of your typical mammal. And I see no evidence that it has introspection.

        My question is, in what you call the smaller conscious computer, what actually happens? I think we established previously that the meat of the actual modeling and simulations, which requires vast computational resources, doesn’t take place there. And I don’t think you’d say that movement control happens there (which requires its own resources). So what are the conscious calculations that are such a small part of the overall system? (I’m not asking this argumentatively. Just trying to understand your view.)


    • I’m always pleased when you’re curious about my models Mike!

      In truth I do define sentience and consciousness in the same essential way, but tend to use them in separate situations. A sentient life form has the potential to experience positive to negative personal existence, or thus harbors a value dynamic (which I don’t believe is a controversial definition). And since I define the conscious form of computer to function on the basis of value, one can’t exist without the other. If something is suffering horrible pain, even without “functional consciousness” (or no ability to reason, move, or consciously do anything beyond suffer), I do still term this existence “conscious”. Presumably we all consider it sentient.

      I certainly agree with your thoughts on fruit flies. Still I’ve heard that most scientists, including F&M, do not consider them “conscious”. (And in the case of Feinberg and Mallatt, this seems strange since as I recall they theorize that consciousness evolved in order to facilitate distance senses, which flies obviously have.) I anecdotally suspect that flies harbor consciousness as I define it, but would appreciate dedicated testing in this respect.

      On what I theorize happens in the tiny conscious form of computer, well that’s the stuff that you’re actually aware of. It’s the pain that you feel for example, not what creates the pain. It’s the words that you’re thinking as you try to interpret what I say, not what facilitates such potential. Consider the following diagram:

      Notice that I place the conscious form of computer entirely as an output of the non-conscious computer. (I presume that the non-conscious computer does countless things more, though here only consciousness has been added.)

      There are three varieties of conscious input that your non-conscious computer should provide your conscious computer from moment to moment. Valence is the punishment/ reward stuff which theoretically drives your conscious function. Then there are senses like vision, and also memory which provides degraded accounts of past conscious processing. In order for you to figure out how to respond to me, you should be interpreting such inputs and running scenarios (now from the “Thought Processor” box) in order to come up with something that you think will promote your valence. I consider valence to be the unique purpose by which all teleology emerges.

      Any decisions that you make will concern the output box, or muscle operation. But notice that my diagram feeds these decisions back into the non-conscious computer’s input box. You may decide to consciously answer me somehow, though theoretically your non-conscious computer is what operates your fingers and all of your muscles. Perhaps many find this problematic since we anthropocentrically tend to take credit for what our “silent partner” does.

      Hope this helps!


      • On F&M and their reluctance to see insects as having primary consciousness, I agree. One of the things they spent some time discussing in their book is the minimum number of neural layers necessary for consciousness, and their concern is that insects don’t have enough substrate for those layers. I personally found the whole minimum number of neural layers=consciousness thing the least persuasive of their ideas.

        It smacked to me of IIT type thinking, unwittingly looking for a physical recipe that generates the ghost in the machine. I think positing the ghost, even a purportedly naturalistic version, introduces an unnecessary and unjustified complication. Their concern in essence seems to be that insect brains aren’t big enough to generate the ghost. I see this as a blight in an otherwise excellent book. (They would, of course, deny any positing of ghosts, but in my mind, that’s what IIT style thinking unintentionally is.)

        But insects like fruit flies do have distance senses, and do seem to have predictive models of their environment. Those models may be very low resolution by our standards, but as F&M themselves note, it’s not the scale but the capabilities, capabilities flies have with only 250,000 neurons.

        On your model, thanks for the refresher. Do you see any possibility that there may be multiple small computers? Or do you see only one, existing at some location in the brain? (I know you don’t get into the specifics here. I’m just asking at a conceptual level.) If so, could a brain lesion in the wrong place completely knock out consciousness? If not, why not?


        • James Cross says:

          Damage to a small number of cells in the brain stem brings about permanent and irreversible coma. The cells are part of the reticular activating system and go back in evolution to reptiles.

          Undoubtedly the “content” of consciousness is governed by many other parts of the brain and hence would be different in different species but these cells in the brain stem seem to play a critical role in pulling everything together into consciousness.

          I think of consciousness not as an all or nothing proposition but more as a continuum with beginnings in the first brains (bilaterians) and most elaborated in social animals. Interaction with other conscious entities, primarily others of our own species, is critical for the formation of self-consciousness.


          • Thanks James. I agree across the board. Definitely, a minute lesion that might cause only a minor function loss in the neocortex typically results in devastating disabilities in the brainstem or midbrain regions. Their crucial location plays an outsize role. But as you note, most neuroscientists don’t see consciousness per se as residing there. It’s more accurate to say those lower level structures are crucial to it.


          • James Cross says:

            “But as you note, most neuroscientists don’t see consciousness per se as residing there. ”

            If I said that, it wasn’t exactly what I meant.

            To use an analogy: you go into an unfamiliar, dark room and turn on the light. The room may be a tiny broom closet or a great hall. It may have colonial or contemporary furniture. It may have high ceilings or low ceilings, stone or hardwood floors. Consciousness is the light and originates through the coordinated activity of these cells in the brain stem. The characteristics of the room – its contents – are the product of evolutionary and individual history. Other neural circuits may have evolved to assist these cells in the brain stem but the function of these cells in the reticular formation is key to consciousness.

          • Sorry if I misunderstood you above. From what I’ve read, the reticular formation is a crucial supporting structure, but it isn’t sufficient. Likewise, the thalami are crucial but not sufficient. Sufficient damage to them can snuff out wakefulness, similar to the way damage to the brainstem or mid-brain structures can. However, sufficient damage to the anterior cingulate cortex or the neocortex can leave someone an awake zombie, one whose behavioral repertoire and apparent perceptions are reduced to devastating levels.

    • I’ve got to thank F&M for their book as well, given that your review of it helped me add something to my own models. They (and you) helped give me a better sense of when and how the central organism processor, or “brain”, probably evolved on Earth (or what I theorize as reality’s second form of computer, after genetic material, which I theorize as the first).

      Still it’s interesting that they’ve founded their project on two contradictory premises. If they’re right that consciousness evolved to facilitate distance senses, then they’re wrong that a minimum number of neural layers withholds consciousness from insects (and modern insects support this wrongness). Or they could be wrong about a minimum neural layer requirement, and right that distance senses evolved to facilitate consciousness. I think they’re just guessing regardless. But if you’re going to make things up, it would seem appropriate to at least do so with consistent ideas! Perhaps this doesn’t matter for their careers, however, since prominent people in the field seem to present flawed positions in general. Given the circumstances I’d expect the person who presents simple and sensible ideas to have the most trouble.

      As I define it, “predictive models” don’t quite get us to consciousness, since even our computers can be said to “predict”. If the fly is sentient, then here it’s also conscious. Furthermore a human that loses all sentience must also lose its consciousness.

      Regarding multiple small computers, or even a single conscious computer in a specific part of the brain, all such speculation gets away from my own models. I refer to consciousness as “a computer”, merely through analogy, and in some ways it may not be a very good one. One reason that I’d like you to grasp this model is so that you might then propose ways to explain it better than I have so far.

      Perhaps one misleading element of this analogy is the computer/machine association. I do consider the brain to be a non-conscious “machine” however. One of countless things that this machine should output is the consciousness dynamic by which you and I experience existence. So no, I do not theorize consciousness to exist as a machine that resides in one or even many parts of the brain. Instead I theorize it to be something that the non-conscious brain outputs. And specifically this provides three forms of conscious input (which are “valence”, “senses”, and “memory”), one form of conscious processor (which is “thought”, or something that interprets those inputs and constructs scenarios about how to promote valence welfare), and one form of output (which is “muscle operation”, though the non-conscious brain is what actually runs those muscles on the basis of conscious decisions).

      On multiple consciousnesses, I have pondered why the human apparently has only one. For example, wouldn’t it seem productive if a person could use one consciousness to silently read an article, and concurrently use another to have a telephone conversation? So why didn’t more than one consciousness evolve for the human? I suspect that they’d tend to get in each other’s way. It wouldn’t surprise me if some forms of life pull this off however, though clearly the human has only one.

      On brain lesions, the model I present suggests that some will alter the outputting of the consciousness dynamic in various associated ways, which is exactly what we find.

      • Eric,
        On F&M and insect consciousness, I need to do a mea culpa. I just went back and looked at what they wrote about insects, and, while noting the concern about the size of their brains, and the lower number of layers in their sensory hierarchies, they do come down on saying that they have primary consciousness. For some reason, those concerns about brain size weighed much more heavily in my memory than their later judgment that insects are conscious (in the primary or sensory sense).

        I’d also note that “predictive models” is my phrase. They tend to use “mental images” or “image maps”, and they’re using that in relation to exteroceptive consciousness. They have other criteria for affect consciousness, which I think more closely matches your conception.

        So it sounds like your second computer is an emergent phenomenon. One way to interpret this is to compare it to software architectures, which have an existence on top of hardware structures, but the relationship between them is extremely complicated. (I covered this in a post in December: https://selfawarepatterns.com/2017/12/27/could-a-neuroscientist-understand-a-microprocessor-is-that-a-relevant-question/ )

        Although that doesn’t really match your idea of the second computer being minuscule in comparison to the first one. Of course, comparing the size of software to the size of hardware is a fairly meaningless exercise.

        On multiple consciousnesses, here’s something to consider. If we did have multiple, how would we know? Maybe the consciousness we know about is just the one that has access to the language and movement centers and can discuss itself.

        In actuality, most regions of the brain execute independently of each other. We perceive it all as one unified thing, but a lot of that is because those independent regions are communicating with each other. Maybe what we call consciousness is just the region that the introspection mechanism has access to.

        But there appear to be at least two introspection regions in the brain. https://neurosciencenews.com/neuroscience-memory-introspection-512/ Our perception of a unified introspective view itself appears to be from communication between those regions.

        All of which is to say, the reality is complicated and strange.

    • Mike,
      I see from this comment (https://platofootnote.wordpress.com/2016/09/20/on-panpsychism/comment-page-6/#comment-11058) that we first met two years ago yesterday at Massimo’s old site. Lots of words under the bridge! Coincidentally at that time you were in the middle of your F&M series. How interesting that you would now revise an offhand and incorrect remark about their ideas. Cheers!

      I’ve just rewatched that Brain Science episode, by the way, and Jon Mallatt clearly stated that they propose primary consciousness for insects, though apologetically so. Apparently he didn’t want people to interpret this as human level consciousness. Anyway, I missed that one as well: http://brainsciencepodcast.com/bsp/2016/128-jonmallatt

      Furthermore that podcast demonstrated that by “primary consciousness” F&M are indeed referring to affect consciousness, or exactly what my own consciousness model addresses. Unlike them however, I make no apologies. The diagram that I earlier put up proposes the basics for both advanced and basic forms of consciousness. Introspection/metacognition does not reside there because I don’t consider it basic enough. Our language and our culture may have provided huge paradigm shifts, but that’s currently as far as I’ll go.

      As it happens I’m also able to more directly attack the F&M “distance senses” theory of the origins of consciousness. We don’t consider our robots conscious, and yet we also provide them with distance senses. Thus evolution clearly should have been able to manage exteroception without consciousness. My theory, conversely, is that such non-conscious creatures lacked sufficient autonomy under more open environments, and so a teleological dynamic, consciousness, became extremely adaptive once available.

      Though I appreciate the suggestion, I’m not satisfied analogizing my conception of consciousness with computer software. We provide our computers with software to help such machines function in associated ways. But it’s not like we build non-software based machines, and they then output special software from which to function. My consciousness model proposes something more like that.

      But you’ve also mentioned “emergence”, which is very associated with the “output” idea that I’m currently using. Given the scourge of dualism, however, I think that I’ll modify it with the “causal” term. Just as various individual sounds can be put together for the causal emergence of “music”, the stuff that I (and F&M) define as consciousness can causally emerge from a neuron based non-conscious brain. The most crucial element here is that this “computer” (since I currently have no better consciousness analogy) is driven to function by means of a punishment/reward dynamic, or value.

      (On Eric Jonas, I do still support his work. I believe that neuroscience is tremendously handicapped today without a basic framework from which to build. The one that I propose may or may not be helpful, but in order for this science to harden up, I’m quite sure that some generally accepted architecture will be required.)

      The reason that conscious function might do less than one thousandth of one percent as many calculations as the non-conscious is that, as I define it, consciousness requires tremendous support. Notice that opening and closing your hand should require virtually no conscious processing. But the non-conscious computer which causes these muscles to function as you instruct should require vast computing resources. I presume that much associated with conscious function requires similar support.

      On multiple consciousnesses not knowing about each other, I like the way you’re thinking there, since consciousness does seem quite private. Furthermore I do theorize multiple consciousnesses in a temporal sense. Apparently each moment brings a new conscious entity. I theorize them to be largely joined with past subjects through memory, as well as future subjects through present hope and worry about the future.

      What I was referring to however was two modes of thought which would thus need to work together in order to promote shared value. Here I could be writing to you through one, and having a conversation with my wife through the other. Twice the productivity! But note that they’d need to somehow inform each other about their separate goings on in order to stay on the same page. Perhaps they’d have shared memories, though that may not be sufficient given degradation. With two joined but separate deciders, I suspect that they’d have reason to argue. Regardless we only seem able to think about one thing at a time, though aided by a vast non-conscious computer. Then as for non-conscious processing, I presume that there are countless ways in which this machine functions in parallel.

      On introspection, you’re certainly free to define the consciousness term in such a way. The model which I’ve developed is far more basic. And if you do, sure, you might consider associated regions of the brain to be special. Not so for me. And as you know, I’m more about the architectural side of things anyway.

      • Eric,
        I’m impressed by your ability to pull up comments from that far back, particularly on Massimo’s posts where it’s not unusual for threads to have hundreds of comments. I have a hard enough time finding stuff like that on my own blog, even with privileged access.

        F&M do make it clear in the book that they aren’t attempting to address human level consciousness. They’re really focused on what they call sensory or primary consciousness, what some call first order consciousness.

        “Furthermore that podcast demonstrated that by “primary consciousness” F&M are indeed referring to affect consciousness,”
        That wasn’t the sense they conveyed in the book. The primary/sensory consciousness label there refers to three types: exteroceptive (external perceptions), interoceptive (internal body perceptions), and affect consciousness (essentially feelings). My impression is that an organism with only exteroception would get the “conscious” label from them, although perhaps with a qualifier. But this is why I pointed out in my series that self-driving cars and other autonomous systems would then meet that definition of consciousness.

        As I’ve noted before, my take on this is to consider consciousness a layered thing. Simple organisms have reflexive capabilities, which most of us wouldn’t consider conscious even if the organism displays sleep / wake cycles. Then we can have what F&M call primary consciousness which enables far more complex behavior. But we only get to the human level with introspection, metacognition.

        Incidentally, equating consciousness with introspection goes back to the earliest modern usages of the word. John Locke defined it in 1690 as “the perception of what passes in a man’s own mind”.

        Myself, unless I’m using the word colloquially, I’ve become resigned to the fact that the word “consciousness” simply can’t be used with clarity without a qualifier. I’m now suspicious of any writing on the subject that doesn’t clarify which version it’s addressing. Unfortunately, most writing doesn’t, which is what sometimes makes me wonder about the continued value of the general concept.

        Incidentally, someone shared a paper today on Eric Schwitzgebel’s blog calling into question the whole endeavor of consciousness studies. It had some interesting historical info. In case you’re interested: http://info.sjc.ox.ac.uk/scr/hacker/docs/Consciousness%20a%20Challenge.pdf

    • Mike,
      I’ve had another listen to that podcast, and I suppose you’re right that F&M might at least call certain things without affect “conscious as such”. I really hate all of these halfway definitions floating around. If they’re serious about “the hard problem of consciousness”, as they say, then why define there to be any variety of consciousness which lacks “hardness”? Why put our machines under this label? Develop a definition that you consider special, announce it generally as the idea that you’re using for the term, and then let that be your position for the world to consider. But with all these “easy” definitions floating around, we’ll need theorists to plainly state what they mean by the term in order to effectively assess their positions. Here one might say, for example, “I’m a panpsychist by the way, so when I mention ‘consciousness’, you may effectively interpret what I mean as ‘stuff that happens’.”

      Well hopefully you’re getting a better sense of my own model. And indeed, for it no such ambiguities exist. For me it’s “hard problem all the way” — if something feels good/bad then it must indeed be conscious. Furthermore while other models start with experience and only later disclose that brain function may not be entirely conscious, I conversely present brain function as entirely non-conscious, though consciousness can exist as an output of this specific variety of machine.

      In questioning the uses of the “consciousness” term itself however, let’s not throw out the baby with the bath water! “Force” should have been a similarly frustrating term for professionals to deal with, that is until Newton so effectively defined it as mass times acceleration. Mark my words — consciousness will develop a useful and generally accepted definition soon enough, whether the one that I’ve presented or some other.

      Actually, before sending I’ve had a look at what you wrote to Lee below, and thus seen your disdain for the hard problem(s) of consciousness. The “why” of it isn’t something that I consider all that difficult. I propose that consciousness was needed because under more open environments, non-conscious organisms couldn’t be sufficiently programmed. Thus a small teleological element was helpful.

      The “how” of producing +/- value, however, does seem like a hard problem to me. I’m not quite saying that we’ll never figure it out, though when I compare the engineering of life to the engineering of our pathetic machines, no I’m not optimistic.

  7. Lee Roetcisoender says:

    Mike,

    As always, I have a great appreciation for your unique introspection of the ambiguous, colloquial term consciousness. I am curious: Do you understand the concept of underlying form, a phrase which I often use when I post on blogs such as yours? The reason I ask is because I rarely, if ever, see anyone who posts comments addressing the concept of underlying form; most individuals only seem interested in the observations and/or behaviors for which underlying form is responsible. My Silly Putty Metaphor, which I have posted on a couple of sites, addresses this paradox directly, yet even that metaphor garners little response from the audience.

    • Thanks Lee!

      My initial interpretation of “underlying form” is that it refers to an internal or supporting structure of some thing, as opposed to the surface attributes of that thing. However, since you’re asking about it, I suspect you’re using it in a more precise manner? If so I’d have to say that I probably don’t understand it in that way. (Just googled the phrase and got a lot of linguistics stuff that somewhat matches my initial impression.)

      Along the same lines, a silly putty metaphor seems like something that would be used for an amorphous concept?

      Now I’m curious what they are 🙂

      • Lee Roetcisoender says:

        Mike,

        Here’s my metaphor; you should be able to glean from the quote how I use underlying form in my vocabulary. “Like consciousness, silly putty is some really cool stuff, it does all of these neat things; it’s flexible, one can squish it into all kinds of shapes, it bounces like a ball, and one can even press it onto a colored image of the Sunday funny papers and lift that colored image off the page. But like consciousness, silly putty is something rudimentary and fundamental; it is a single chain of Carbon, Hydrogen, Oxygen and Silicon mediated by Boron and cross-linked to give it its elastic, putty-like behavior.”

        The quote is a relatively definitive and discrete statement using silly putty as an allegory in direct correlation to the phenomenon of consciousness. But since we do not understand the phenomenon itself, nor can we agree on a rudimentary definition of the term, then in this isolated context only, the underlying form of consciousness as a phenomenon would certainly fall under the umbrella of amorphous. Amorphous or not, the underlying form of consciousness is an objective reality, and that objective reality can be discovered and understood using the correct methods. It would be foolish on our part to believe otherwise; the only tool that we are missing is that “correct method”. Now, that’s setting the bar pretty high, but I do believe it’s worth the risk. Being a skeptic yourself, you may not necessarily agree.

        Thanks……

        • Thanks Lee.

          On finding the correct method, I agree to the extent that we can establish solid definitions. The problem with consciousness is that the meaning is indeed amorphous, and often downright inconsistent. That doesn’t leave science much of anything to sink its teeth into. No doubt that’s why many scientists stay away from the subject.

          For example, Richard Passingham in his ‘Cognitive Neuroscience: A Very Short Introduction’ (which I recommend for anyone interested in the brain) manages to cram in an enormous amount of information without mentioning consciousness, except for one casual reference.

          Things become more hopeful if we can narrow our focus a bit. Consider concepts like exteroception, interoception, imagination, memory, attention, or introspection. (Introspection can be further partitioned into metaperception and metamemory.) These aspects of mental life are far less amorphous in their meaning, and thus much more amenable to scientific investigation.

          Of course, philosophers like David Chalmers insist that understanding these phenomena amounts only to the “easy problems”, and that once we’ve solved them, the hard problem will remain, in that we won’t have solved the raw problem of experience. In that sense, he’d probably agree with your silly putty metaphor (if I’m understanding it correctly), that there remains one underlying reality applying to all the things we call “consciousness”.

          On this point, I’m afraid I am a skeptic. I don’t think that underlying reality exists; once we’ve answered the easy problems, there won’t be anything else. Of course, those concerned with the hard problem won’t be satisfied, but in time, as with biological vitalism, fewer and fewer people will be bothered by it.

  8. Lee Roetcisoender says:

    Mike,

    I don’t know Mike, it seems to me that giving up on an underlying reality leaves only two viable alternatives, neither of which is favorable: either magic or solipsism. Personally, I think noumenalism offers the best venue for discovering an underlying reality. I watched a youtube video recently where Robert Lawrence Kuhn interviewed Eric Schwitzgebel, and Eric himself felt that noumenalism hasn’t been given a fair hearing as an alternative architecture to materialism, idealism, substance dualism or property dualism. What was ironic is that Eric misunderstands noumenalism, because he called it transcendental idealism, a label which Kant refuted in his own lifetime but was never able to distance himself from.

    At a fundamental level, Kant’s architecture of noumenalism is grossly misunderstood. I watched a recent youtube video, The Science of Consciousness 2018. During the question and answer session someone from the audience asked Chalmers a question about Kant. Chalmers’ poignant response was: “I don’t know that much about Kant…” Go figure.

    C’est la vie

    • Lee,
      I haven’t read Kant, so I suspect my own understanding of noumenalism may not be solid. So I’ll phrase my reply entirely in the language I’m more familiar with.

      When I said I was skeptical of the underlying reality, I was only referring to the idea that there is a distinct ontological thing that is consciousness, one that meets all the disparate conceptions of it that are out there. Of course, some conceptions do match up with how the brain works, so I do think those conceptions have an actual reality. But many others are tangled up with substance dualism or similar dubious notions.

      I definitely was not referring to objective reality overall. In general, I accept that it’s impossible to prove that the world exists to a determined skeptic, but if the world is an illusion, it appears to exact painful consequences for not taking it seriously, and we have little choice but to play the game. It’s more useful to regard the world as there than it is to deny it.

      • James Cross says:

        Have you taken a look at Conscious Realism?

        I wrote about it here.

        https://broadspeculations.com/2016/04/26/world-stuff/

        There is a link to a Quanta Magazine article and a Hoffman paper in my article. I don’t think you would agree, but it is a sort of scientific Kantianism. The objective world exists but our interface to it – consciousness – doesn’t tell us much about it.

        • James,
          Thanks for sharing your post! Just followed you so I catch new ones.

          I did my own post on Hoffman’s ideas a couple of years ago ( https://selfawarepatterns.com/2016/05/02/is-reality-an-illusion-if-so-does-it-matter/ ), although my comment above provides a quick summation of it. I see his ideas as idealism, and idealism just doesn’t strike me as a productive outlook.

          On consciousness and the objective world, I do think it tells us a great deal. It wouldn’t have evolved if it didn’t. But what it provides is calibrated to our evolutionary background. It’s only our ability for symbolic thought that allows a hominid species from the African savanna to develop an understanding of the world far beyond its ecological niche.

    • Lee,
      I doubt that Mike is quite giving up on the existence of underlying reality itself, but rather just conceding that he has no pure access to it. And regarding epistemology, are we now left with both solipsism and magic? Even if my realm does function by means of magic, I don’t see how I could be fully certain of that, or of anything noumenal other than that I exist in some manner. You’ve mentioned another option as well, noumenalism, but to me this appears to be a gateway to pseudoscience. In the end I believe that epistemology must always reduce back to solipsism. I’d even call this a priori.

      Though my position here may be modest, I consider modesty necessary given the tremendous scope of what I mean to do. I seek the creation of a community of philosophers that has its own generally accepted principles of metaphysics, epistemology, and axiology. I believe that such a community will be required in order to provide science with a solid foundation from which to advance in ways that it currently fails. By instead immodestly telling the world that I have “truth” (though it isn’t possible to verify), I don’t believe that my project could succeed. Solipsism seems necessary in the end to potentially do what I believe is needed.

      • Lee Roetcisoender says:

        Eric,
        (Solipsism seems necessary in the end to potentially do what I believe is needed.)

        As always, I both admire and respect your candor even though I cannot agree. Just to be clear: you do not consider all of the models which our scientific community has constructed to describe the structure of reality, “NONE OF WHICH ARE TRUE”, to be pseudoscience? Am I missing something here?

        • “Am I missing something here?”

          Well Lee, I believe that I’ve developed a pretty good way to demonstrate whether or not someone understands one of my positions. It’s for the person to take their theorized understanding of it, and then use it to assess an associated issue (which could come from a web article for convenience). If the person is then able to reasonably guess what I’d say about that particular issue, we’d have reasonable evidence that he/she had been assessing my actual position rather than something different. So if you ever want to give this a try, tell me what I am to assess so that we can then compare answers. An understanding of my position here would qualify you to assess it. Or if you have a theory for me to assess, we’d first see if I understood through such a test.

          One of the greatest problems in academia, I think, is that people often don’t quite understand the positions that they criticize. I’ve developed this test to potentially counter the problem. (Of course only non-nonsense positions would suffice, since by definition a nonsense position has no potential to be understood anyway.)

          Without quite getting into such a test however, permit the following attempt to explain. I believe that there is only one aspect of the noumenal that can ever be understood with certainty, or that the thinker itself exists — nothing more. Thus I can be certain of my own inclusion in the noumenal in some manner, and you can as well if you’re able to think, though neither of us can ever be certain about the existence of the other, or anything else.

          So from this point, do I have anything left? Yes actually. I also have my metaphysics of causality (as I believe we’ve discussed before), as well as my second principle of epistemology. My EP2 states that there is only one process by which anything conscious, consciously figures anything out. It takes what it thinks it knows (evidence), and uses this to assess what it’s not so sure about (a model). The more that a given model continues to remain consistent with evidence, the more that it tends to become believed.

          And what evidence can I provide that my EP2 happens to be an effective description of reality? (Observe that I didn’t say “real” here, but rather “effective”.) The provisional nature of science itself. So if you tell me that you’ve developed a method from which to go beyond past science and into the noumenal, I must naturally wonder whether your passion has tricked you into a pseudoscience. Regardless I’d use my EP2 to consider your plan to convert science into a non-provisional field of study.

          Of course the great Rene Descartes was led to despair and then dualism given these very circumstances. Clearly he was no less passionate than you or I. I consider his tale to be a tremendous lesson in the virtues of humility. In the end I doubt “Truth” is actually needed anyway, that is as long as we’re able to develop provisional answers that seem effective.

  9. Lee Roetcisoender says:

    Thanks James,

    I’ve read Hoffman and I just read your link. The best demonstration of Kant’s architecture is demonstrated by a quote in your essay: “Hoffman gives a compelling reason why what we would be seeing is not anything like reality in his computer desktop analogy.”

    According to Kantian philosophy, any analogy may be valid and/or indeed true for the “thing”, which is the analogy, but it tells us nothing about the “thing in itself”, which is the true nature of reality. This is the very reason Kant is reviled: he cuts the legs out from under reasoning by demonstrating that reasoning is limited in its scope and absolutely useless as a resource when it comes to trying to understand the true nature of reality. Jacques Derrida’s model of deconstruction theory also dismantles the West’s highly esteemed crowning jewel of achievement, i.e. reasoning. Derrida’s writings undermine this cultural investment in human reason, calling into question every technique used to craft an intellectual construct.

    (The objective world exists but our interface to it – consciousness – doesn’t tell us much about it.)
    Consciousness is not an interface, consciousness “is” the underlying reality of our world beginning with the primordial qualitative properties of space, mass, spin and charge, finally culminating with the last qualitative property of consciousness, reasoning. Reasoning is the latest qualitative property of consciousness to arrive on the scene after billions of years of evolution, and as a form of consciousness, the power of reasoning is without precedence.

  10. Lee Roetcisoender says:

    Eric,

    Solipsism is the most primordial, fundamental property of all forms of consciousness, and I refer to that inherent characteristic within homo sapiens as the Schwarzschildian Me. The Schwarzschildian Me is derived from Karl Schwarzschild, the physicist who gave us the Schwarzschild radius, a mathematical formula for black holes. A black hole is an excellent metaphor to capture the self-serving locus of consciousness expressed as solipsism.

    The Schwarzschildian Me is the pinnacle of consciousness, Descartes’ “I think, therefore I am”, the gray matter who sits behind these eyeballs passing judgement on the world. He alone is the sovereign who decides what everything means. Nothing escapes his grasp, because power, the wild card of consciousness, is firmly in his grasp. All of the data, everything, is drawn into his sphere of influence to be sifted by the power of reasoning. Then, and only then, will he make his determination: this idea lives, and that idea must die; this feeling can be trusted and that feeling cannot. There is no mediator in that process, there is no one to intercede on behalf of ideas, feelings or concepts. The Schwarzschildian Me is the administrator, the judge, the jury and the executioner. That my friend is Protagoras’ “man is the measure of all things”, be it good, bad, or indifferent.

    (In the end I doubt “Truth” is actually needed anyway, that is as long as we’re able to develop provisional answers that seem effective.)

    And that is the fulcrum question, isn’t it Eric: effective to whom, or for whom? The answer is always the same: the Schwarzschildian Me, a self-serving point of singularity. And then we all sit around scratching our heads wondering why our world is such a mess, and we just can’t seem to figure it out. We need “Truth”, not provisional, effective answers that serve only a few, the few who consider themselves elitists and look down on the masses with disdain, wondering why others cannot see the world from their own elite perspective. It’s grand to be right, Eric, but do you know what… at the end of the day we are all wrong and we need “Truth”, and that truth resides in the noumenal realm, not in the paradigm of solipsism.

    My summary: There is a genetic defect in the underlying form of reasoning and rationality, and unless or until one is willing to address that defect, nothing will change, because nothing can change…

    C’est la vie

    Liked by 1 person

    • Alright Lee, I see that you do understand my point that the Schwarzschildian Me has no potential to comprehend Truth beyond that of its own existence itself. No need for any of the testing that I’ve mentioned. And yet you also disagree with this position. Apparently you’ve found a method that you believe does provide Truth, and I suspect that this answer has spared you from the depths of despair. Thus I hesitate to ask about it. I’d hate to potentially compromise something which has served a good friend so well! I believe you’ve mentioned that you’re now satisfied about taking this model to your grave. But the fact that you discuss these matters with others online suggests otherwise — except of course that the theory itself seems to go missing.

      Regardless I have my own theory which I think helps preserve me from falling into the depths of despair, and I’d be devastated if it were to die with me. In truth however I don’t believe that it can die with me. I believe that science will validate these models with or without me, given how far and how fast this relatively new institution has come. But hopefully I can at least help speed this process up. And along the way I’d be only too pleased to provide you with a second platform from which to stand!

      I don’t believe that I’ve yet mentioned to you the four great power revolutions that I associate with humanity. The first is human language. The time frame is controversial, though many specialists seem satisfied dating it back about 200,000 years. The following article however suggests that Homo erectus had language 0.8 to 1.5 million years ago, given its standardized tools and, more significantly, its ability to sail across oceans. https://aeon.co/essays/tools-and-voyages-suggest-that-homo-erectus-invented-language Regardless I mark the tool of language as the first great human power revolution.

      I don’t place the next quite at agriculture or civilization, but rather at the specialized occupations that should have thus emerged. In associated societies reasonably specialized occupations might date back 6,000 years. Then I put the next at about 5,000 years with written language, since an exterior means of recorded thought seems quite necessary for advanced human function. Then I put the last great power revolution at hard science, which seems to have emerged about 400 years ago.

      These revolutions, and most notably the final, have transformed our species into an incredibly powerful animal. But do you know what we’re still missing Lee? We’re still missing generally accepted theory from which to effectively use our power. And what happens when an organism becomes extremely powerful, though without proportionally increased understandings of how to use it? All sorts of power abuses, of course. It’s largely because the modern human does not have effective theory from which to lead its lives nor structure its societies, I think, that we find ourselves with the great problems associated with humanity today. Our hard sciences have given us tremendous power, and our soft sciences have failed to teach us how to effectively use it.

      So why do psychology, psychiatry, sociology, neuroscience, and so on, remain so speculative? We need such sciences to teach us about our nature and thus how to effectively harness the tremendous power that hard science has unleashed, so why do we fail? I have two basic explanations.

      The first of them should be a problem for science in general, since its basics do not yet seem sufficiently founded, though in practice it seems to harm the soft side most. I believe that science will require a respected community of professionals with their own generally accepted principles of metaphysics, epistemology, and axiology, from which to better found the institution of science itself. Note that without effective metaphysics we get all sorts of dualistic nonsense, even in modern physics. Without effective epistemology we naturally fail to comprehend how to do science as effectively as we otherwise might. And without effective axiology, well, that one hits right at the heart of our soft sciences. How does one understand something to which existence can be valuable, without a respected community which defines a “value” term by which such things function? Thus we fail to understand ourselves.

      Of course philosophers oversee all of these disciplines, and submit that they needn’t develop any sub-societies with associated agreed-upon principles. They claim a sort of epistemic dualism. Here we have “science stuff”, which specialists can agree upon, as well as “philosophy stuff”, or ideas that must always remain beyond professional agreement. The progression of science however mandates that this situation cannot and will not stand. A respectable community of “philosophers” (and I don’t care what name they eventually take) which provides its own generally accepted principles of metaphysics, epistemology, and axiology, should come to found the institution of science sooner rather than later. As you know, I propose a single principle of metaphysics, two principles of epistemology, and one principle of axiology, in these efforts.

      Beyond the foundation problems of science, the other basic obstacle I see is our extremely entrenched paradigm of morality. This social tool seems to encourage us to deny the nature of value in favor of fabricated moral notions. Given that we only feel what is good/bad for ourselves, and so function in associated ways, each of us is naturally encouraged to selfishly deny our selfishness to others. Thus we state that it’s wrong to lie, cheat, steal, kill, and so on, given the benefits that such false assertions provide us from others. Apparently our morality paradigm is so strong that even scientists haven’t yet been able to come to terms with formally acknowledging the nature of value for that which is conscious. Thus here again the human remains a mysterious thing, and so we fail to understand ourselves well enough to harness the power associated with modern humanity.

      Lee, I don’t know about your infallible “super science”, though if it helped get you to this level, then I wish that more people had such a thing in their lives as well. But I’m looking for people who are willing and able to help me improve the institution of standard science itself.

      Like

      • Lee Roetcisoender says:

        Eric,
        (…I see that you do understand my point… And yet you also disagree with this position.) It isn’t that I disagree with this position, Eric; it’s that I see the position which I’ve articulated as the Schwarzschildian Me and the paradigm of solipsism “as” the problem. And unless or until one is willing to address the “problem”, nothing will change, because nothing can change, no matter how much human energy is expended. The best we can hope to achieve using the power of reasoning is developing another salve or ointment which can be applied to the open wounds of our human experience. And in the end, if this is your goal as well, I commend you for that effort, because it’s a noble calling.

        Contrary to what most people are willing to accept, what we refer to as consciousness didn’t just magically appear as an output of a highly advanced organic non-conscious computer. Consciousness is universal, and as a result of being universal, consciousness then becomes fundamental in explaining the natural world, our selves, and our place within that world. That architecture and that architecture alone will be a small step, an incremental first step required to move us beyond our current models which are rooted deeply in the paradigm of mysticism.

        I’m a pragmatist, Eric, not a dreamer. The reason I blog is because I keep looking for a fissure or a crack in my models which will demonstrate that there is an inherent flaw in my theory, because even I do not want to believe what the models are telling me. I do wish you the best of luck my friend, I will cheer for you…

        Liked by 1 person

    • Lee,
      So you agree that, for example, you have no potential to be certain of any Truth beyond that you exist in some manner? And furthermore that it’s the same for anyone else (should anyone else exist, that is)? Good to hear. For some reason I was under the impression that you thought that you’d somehow gone beyond provisional science and so created something absolute.

      Rather than refer to fundamental human uncertainty as a problem in itself however, I consider this to be a fairly standard issue to deal with. Thus it’s good, bad, or indifferent, based upon associated contexts. Do you instead consider Truth to be some kind of end purpose to achieve? Or conversely if you consider it only a potential means to an end (as I do), do you theorize any end purpose to exist (once again, as I do)? If so then I’d love to hear what you’ve come up with in that regard.

      On consciousness, be careful about presuming that everyone uses the term to reference one unique idea. Perhaps everyone uses the number “two” in one unique way, but certainly not “consciousness”. As I understand it, you use the term to reference a theorized universal element of reality. Furthermore as a naturalist you define it to contain absolutely no hocus pocus. I also theorize such a dynamic, but instead refer to it as “causality”. Regardless just as I consider it mandatory to use your consciousness definition when I consider your associated ideas, I expect this same concession in return when we’re discussing my ideas.

      I define “consciousness” to causally exist as an output of certain highly advanced organic non-conscious computers. And what specifically do such computers output? The defining characteristic is a punishment/reward dynamic. I consider this stuff to be all that’s valuable to anything throughout all of existence, or the theorized purpose that I mentioned above. (And of course this “value” is also defined differently from the way that you define the term in your models.)

      It’s good to have your support Lee. What I’m interested in most however are people who are able to demonstrate that they understand the nature of my models themselves. Here I should receive criticism that helps me improve my efforts. And hopefully the more people who are able to understand the nature of my models, the more invested collaborators I’ll gain who are interested in promoting this sort of “salve”.

      Like

      • Fizan says:

        Eric and Lee, I enjoyed following your thread of conversation. Eric your reference to the idea of somehow being saved from the depths of despair really captures something I think.

        There are so many examples of when people think they’ve “figured it out”, and I’m not talking about religious or political fanatics only. Intelligent and well-meaning people, including some scientists, are at least on this quest to figure it all out.
        No wonder there are so many ‘models’ out there. To me most of them do make sense if understood from certain perspectives, and yet often those different models don’t gel with each other at all. The reason is that they are unwittingly built, and make sense, only if you hold a certain perspective, an underlying sense of what the truth ought to be like. Science is a method to protect against this. But true science does not shed light on any truth either; it’s only a tool which can be used in any paradigm. People tend to get mixed up between the scientific method and the underlying philosophical inclinations many scientists these days share or perpetuate (these days is important), which in itself is bad science but very much human.

        I think people in general are uncomfortable with uncertainty, in fact extremely so. It’s in fact the depths of despair you talk about. And yet if you get rid of all ‘the models’, like getting rid of all the gods, what you are left with is maximum uncertainty. That is probably the ‘real truth’ that none of us wants to face, or in fact each of us does always face but rejects by drawing a sphere of meaning (models) around us. Otherwise we would lose our sanity (as extremely psychotic patients do. No, not delusions, because delusions carry meaning and are in fact weak attempts to protect against losing sanity completely).

        For me the best way to move forward in maximum uncertainty (which is probably ‘The Truth’) is to base our understanding in terms of ‘usefulness’ rather than ‘truth’ (as truth changes with perspective). Usefulness would have to be grounded as ‘moral usefulness’, which can be something most humans share, as we are the same species. For example, avoiding hell and aiming for heaven, or in other words reducing suffering and aiming for maximum wellbeing for everyone.

        And that’s my ‘model’ (for now at least).

        Liked by 1 person

        • It’s good to hear from you Fizan! I generally try to counter my various ideological friends diplomatically when possible, given how important this stuff is to each of us “ideologues”. I nearly forgot about you, a person who’s quite happy to remain agnostic until things seem quite sensible.

          Lately I’ve been proposing that the ancient field of philosophy holds the key to improving science. I believe that we need a respected community that has its own generally accepted principles of metaphysics, epistemology, and value. While I don’t see academic philosophy getting there as a whole, I do believe that a smaller and stronger society will accomplish this feat, and so become the keepers of principles through which effective future science occurs. I don’t care what name this society ultimately takes, but simply seek its swift arrival.

          I see that you’re proposing a value position of your own. Apparently your “moral usefulness” aims to reduce suffering for the maximum welfare of everyone. Excellent! My main initial concern is your use of the “moral” term there, given its tremendous baggage. I conversely propose amoral value theory. Consider this:

          Physicists present various useful models, such as “force equals mass times acceleration”. These are clearly neither moral nor immoral, but rather simply more and less effective descriptions. Furthermore people use such models to figure things out, build machines, and so on. These applications may be considered amoral as well, since there are all sorts of things which can be done with them.

          Let’s now apply this model to the human. If our morality paradigm didn’t exist, modern psychologists should be permitted to use their “valence” concept (literally “what’s valuable”) to state that maximizing this stuff for any given subject over a specific period of time, maximizes its associated welfare. Thus just as today we have effective engineers who use physics amorally, we could also have a new amoral form of psychology, as well as people who use the field’s descriptions somewhat as engineers currently use physics. Alas however, our morality paradigm does not yet permit psychologists to state such a bold description of what’s valuable. Thus while hard science brings us incredible power, apparently soft science does not yet provide effective theory of how to use it.

          Anyway Fizan, I’m happy that you’re working on some welfare theory of your own, as well as that Lee and I have been able to entertain more than just ourselves.

          Liked by 1 person

          • Fizan says:

            Eric,
            I agree perhaps ‘moral’ has too much baggage to be used here, but I couldn’t find a better word. ‘Useful’ or ‘effective’ are good words (effective is what you prefer, it seems), but they carry no meaning unless we also specify useful ‘to do what’ or effective ‘in doing what’.
            They are both saying this is a ‘useful’ or an ‘effective’ way towards some end (but what is the end goal?)

            Let’s take your physics example of F = ma. You say this is a useful description. But useful in what sense?
            Useful as a ‘tool’, or useful in that it is ‘Reality’ (or the Truth), or both?
            You seem to suggest it’s useful as a tool for further applications (to figure things out and build machines). If that’s the case then such applications are ‘useful’ only in the sense that they can affect human suffering or wellbeing. Can you think of any other sense in which they can be useful? (that’s a genuine question). Hence, I said ‘moral’ usefulness to ground it. And I think this is something most humans can ultimately have good agreement on as well.

            The other way would be to say it’s useful in describing Reality. That would probably be truly amoral. But that’s a hard ask and rather than being a physics matter it becomes a philosophical matter.

            “… “valence” concept (literally “what’s valuable”) to state that maximizing this stuff for any given subject over a specific period of time, maximizes its associated welfare. Thus just as today we have effective engineers who use physics amorally, we could also have a new amoral form of psychology, as well as people who use the field’s descriptions somewhat as engineers currently use physics. ”

            But if you’re talking about maximizing welfare how is that an amoral usage?

            Now coming to what I think was your underlying point i.e. building machines to increase suffering, then again that’s not an amoral usage, it’s an ‘immoral’ usage. The physics used does not become amoral either, in fact it proves that those physics models have ‘moral’ usage (which can be either positive or negative).

            Liked by 1 person

        • Fizan,
          It’s good to hear that you had concerns about using the “moral” term, but couldn’t quite think of an appropriate substitute. Perhaps a term such as “welfare” would better serve your purposes?

          You’re right that I consider F=ma useful as a tool rather than as potential Reality. And for what end purpose might such a tool be useful? Welfare, valence, value, affect… there are lots of terms for this. I consider this stuff to be produced by a non-conscious computer (such as the brain in your head). I’m referring to a punishment/reward output that serves as input to motivate a conscious form of function.

          I consider applications of physics (or anything else) to potentially be useful for sentient existence (not just the human). Conversely, without teleology, no, I wouldn’t say that anything can effectively be termed “a tool”, even for dynamics such as life and evolution. Apparently they’re teleonomic rather than teleological.

          By terming the uses of physics “amoral”, I merely meant to imply that they could be used anywhere in the spectrum of morality. It’s the same for effective applications of psychology and philosophy I think. But apparently in these subjects things get so personal that professionals tend to balk when human nature gets unsavory. From trolley problem dilemmas to Derek Parfit’s “repugnant conclusion”, they seem to hope that we can somehow institute what we’d like to be, or “moral”. If so this should help explain why we remain so confused about our nature itself.

          I suspect that visiting alien life would have no problem determining what’s conceptually good and bad for us. They’d say “Welfare for Earthlings (and all else) exists as how good to bad a given subject feels over time”.

          (So perhaps I’m an alien? And note my cover Mike. I not only downplay the potential for life to make it in other solar systems, but even robots!)

          Liked by 2 people

          • Eric,
            “I suspect that visiting alien life would have no problem determining what’s conceptually good and bad for us. They’d say “Welfare for Earthlings (and all else) exists as how good to bad a given subject feels over time”.”

            Given how evolution works, I suspect any aliens would have just as much of a jumbled mess of instincts as we have, with all the resulting never ending debates about the best way to live.

            Liked by 2 people

          • Fizan says:

            Eric, we’re somewhat on the same page now.
            Let’s use the term ‘welfare’ then (my only reservation is that there is no equivalent word for it in the negative dimension, such as ‘immoral’, where I’d probably have to use ‘negative welfare’ rather than ‘suffering’ to keep it as a spectrum and not a dichotomy).
            As long as we can agree that we have to somewhat give up on saying that we are understanding ‘Reality’, and rather focus on the usefulness of theories/concepts etc. as tools with the power to affect the spectrum of human welfare.

            In terms of using physics anywhere on the welfare spectrum, I can also see us similarly using chemistry, biology, psychology, sociology etc. on the same spectrum. The difference, one can argue, is in the quantity of their impact, or in other words the power of their tools. You can kill whole cities using a bomb made with the tools of physics and chemistry, but you can also alter and tune a whole country’s thoughts and desires using the tools of psychology and sociology (including their willingness to use city-killing bombs). Which is a more powerful tool? (debatable)
            But this doesn’t even have to be a debate, because the debate assumes there is some real separate existence of physics, chemistry, psychology etc., rather than these being arbitrary and (again) useful divisions of the same subject matter.
            In terms of moral dilemmas, I guess once everyone can agree what our end goal is, then everyone can come to similar conclusions in such dilemmas. Again, I feel the best bet we have is to have everyone agree that we need more of heaven/utopia and less of hell/dystopia. The critical issue becomes: is one life the same as two lives? The answer is no, because we are a social and interdependent species, so no one can be seen in isolation from the rest, and most likely even our concepts of welfare and suffering arise from us observing and comparing ourselves to each other.

            Liked by 1 person

        • Wow Fizan, it sounds like we truly are pretty square right now. And it’s not just “welfare” for which I mourn the lack of a dedicated negative term, but “value”, “affect”, and so on. Perhaps “disutility” works for that term, not that I’ve noticed its use. Sometimes I’ll put an “anti” in front of a positive term, while other times I’ll use a “+/-” distinction. And yes, I like that you see a spectrum rather than a dichotomy for this sort of thing.

          Regarding welfare dilemmas, that’s where it becomes most critical that we somehow take the perspective of objective third parties attempting to understand, not interested parties who choose sides in sympathy. Reality functions amorally, and therefore in order to potentially understand it, the scientist must strive to take an amoral perspective as well. But then once an effective understanding does seem to be gleaned, that understanding should also have some potential to be used as a tool. The tools of hard science have made our species extremely powerful. Our human-related soft sciences have not kept pace however, and so haven’t taught us much about the effective use of modern human power.

          I’m curious what you understand about my models so far Fizan? For example, much earlier in this thread I presented a diagram which practically defines elements of functional human consciousness, and relates this nature to non-conscious computation. One of my blogging goals is to find people who are able to understand my models well enough to actually predict what I think about various related issues. I suspect that such people could help identify faults in my models for potential mending. And indeed, if we come to see eye to eye well enough, my hope would be for any such person to share my journey to change academia quite radically.

          Like

  11. Fizan says:

    Mike,
    I’m very much on board with what this guy Eric Schwitzgebel is saying. I think consciousness is a very precise term, and we know very well what we mean by it, because there is nothing else to get it confused with.
    The problem seems to arise in our efforts to reduce it to physical processes, not because that must be impossible, but because of our inadequacy in being able to do so. It seems like a tantrum to then disregard its existence altogether just because you’re stuck with the puzzle.

    Liked by 1 person

    • Hi Fizan,
      I think you might be misunderstanding Eric Schwitzgebel’s position. (Easy to do if you’re only going on my quote above.) His latest post covers the difficulty in deciding whether a garden snail is conscious due to our inability to reach agreement on a definition of consciousness. http://schwitzsplinters.blogspot.com/2018/10/two-problems-with-extending-theories-of.html

      We might have covered this on another thread, so sorry if this is ground we’ve already covered, but if you see consciousness as a very precise term, what would you say that term refers to?

      Like

      • Fizan says:

        Hi Mike,
        I’m not much familiar with Schwitzgebel, but I should have been clearer. I meant to say I agree with his general position that you refer to in this post. On consciousness being a precise term, that’s something I say from my own point of view.

        I think it is precise in that it describes something we most definitely and clearly know. The problem starts when you say something like ‘what does it refer to?’, as if you didn’t know it from experience. But I can see that when I say you do know, you automatically start thinking of whatever model you have for it.

        Like

        • Thanks Fizan. My problem with calling it a precise term, without being able to articulate what that term refers to, is that it assumes we’re all referring to the same thing when we use it. As someone frequently involved in the collection of requirements for new IT systems, I can tell you that assumptions like that are frequently wrong.

          Since any time anyone attempts to articulate a precise definition it leads to disagreements, I strongly suspect that consciousness is a particularly severe version of this. The problem is that any precise statement either excludes things we intuitively feel are conscious, or includes things we intuitively don’t think of as conscious. It’s one reason why so many people like to use borderline synonymous (and equally ambiguous) phrases as definitions, such as “something it is like” or “subjective experience”. And I think we have to be careful not to confuse familiarity with simplicity.

          The reality is that I think our intuitions are inconsistent. We typically have one standard for consciousness within our own minds, another for other organic systems, and a third for technological systems.

          Like

          • Fizan says:

            Mike, you say “As someone frequently involved in the collection of requirements for new IT systems, I can tell you that assumptions like that are frequently wrong.”
            And this is an ‘intuition’ you have.

            But you also say “The reality is that I think our intuitions are inconsistent. ”

            I agree with you that trying to precisely define one term using other terms is problematic, because then each term has to be described with respect to some other terms, and so on.

            “We typically have one standard for consciousness within our own minds, another for other organic systems, and a third for technological systems.”

            I take it by ‘standard’ you mean what I called ‘a model’, and I disagree in that many of these models, even for the same thing, are not similar from person to person. But I agree that they are all models.
            Firstly, we experience consciousness, before we try to understand it or have any intuitions about what it is. It is fundamentally experienced by everyone.
            When trying to define it we get into details such as: consciousness is within our mind > our mind is in our brain > the brain is made up of this stuff > these other things are also made up of this stuff > these other processes work in similar ways, etc.
            This latter part is where we all tend to diverge and have many interpretations. All that may be good, because eventually it may help us find some useful heuristic tools for practical application.
            But if going down that route someone eventually comes to a conclusion like ‘consciousness is an illusion’ then they probably need to retrace their steps. Because they can’t be sure if their conclusion isn’t also an illusion. That’s why it helps to know that our starting point is always our real experience of being conscious and most people would agree to what that is from personal experience.
            A bad example is to think of it as similar to how people don’t need to know the taxonomy or genetic makeup of a cat to know what a cat is. Someone might overthink it and say a cat is an illusion, there are only atoms in space-time, or quarks, etc. Or someone may say a cat is something real and distinct because it has a specific evolutionary history and genetic makeup. Then someone may nitpick that and say that each offspring is genetically distinct, even if by a billionth of a fraction, so it is an illusion, etc. Yet they are all discussing cats and not ‘ufgthy’, which says something.
            (It’s a bad example because everything we describe is already happening within our consciousness so it makes no rational sense to equate it as being in some sense analogous.)

            Like

  12. Lee Roetcisoender says:

    Fizan,

    (I think people in general are uncomfortable with uncertainty, in fact extremely so. It’s in fact the depths of despair you talk about. And yet if you get rid of all ‘the models’, like getting rid of all the gods, what you are left with is maximum uncertainty. That is probably the ‘real truth’ that none of us wants to face, or in fact each of us does always face but rejects by drawing a sphere of meaning (models) around us.)

    You have isolated the problem of the human experience with precision. The noumenal realm of the “unknown” is responsible for this uncertainty, and that is the real Truth.

    Our primary experience offers two alternatives to this conundrum, neither of which is favorable: first, a chronic relationship with the unknown, or second, an acute relationship with the unknown. A chronic relationship is a long-term relationship with the unknown where the models we build give one a sense of meaning, something one can grasp and hold on to. It provides one with stability in the face of uncertainty, where one can then soldier on. An acute relationship with the unknown is more immediate and therefore more problematic. Intellectual models for this type of personality do not have the same staying power. Individuals who have an acute relationship with the unknown tend to move from one intellectual construct (model) to another because they find uncertainty fundamentally troubling; they are also predisposed to recognize the hypocrisy inherent within the models themselves and in the people or institutions who promote them. Individuals who display an acute relationship with the unknown are stereotyped by those with chronic relationships as having addictive personality types, because individuals with an acute relationship tend to create dependencies (models) for themselves which go beyond the boundaries of intellectual constructs into abusing substances like alcohol, drugs, etc. Where in fact, because of the paradigm of maximum uncertainty, we are all addicts with no exceptions, and the dependencies (models) we create for ourselves are merely a matter of scope.


    • Fizan says:

      Lee,

      I liked your description of chronic and acute relationships with the Unknown. It has the same flavour as some psychoanalytic theories I have found interesting, such as Jacques Lacan’s work.
      I would add two caveats though. Firstly, you can call the Unknown Kant’s ‘noumenal realm’, but you have to acknowledge that calling it something doesn’t add anything; it’s truly unknown. Secondly, even this explanation is a constructed ‘model’ (possibly of the chronic variety), and you stick to it because it’s what makes sense to you, or because you haven’t found a better one yet (or for any number of unknown reasons).


  13. Lee Roetcisoender says:

    Fizan,

    Caveat number 1: the Unknown is also Parmenides’ Reality/Appearance distinction. There are no qualities or properties that can be assigned to the Unknown, and there is no opinion that one can have of the Unknown, because it is separate from appearance or opinion and will always be Unknown. The idea of a noumenal realm is not new; however, homo sapiens have always tried to give meaning to the Unknown, and the only way meaning can be assigned to the Unknown is by drawing a correlation with something that is already known. Unfortunately, that reference point always leads to the gods, what I call the god deference mechanism. For some reason, when addressing the Objective Reality of the Unknown, the god deference cannot be avoided, and homo sapiens immediately start to freak out.
    Caveat number 2: as far as a model goes, one has to first acknowledge that there is such a thing as an Objective Reality, and that for all practical purposes that Reality is Unknown. When I wrote my book, I originally thought this was a no-brainer, but having since been involved in correspondence with others, I’ve found there is huge resistance to the idea of an Objective Reality, let alone one that is Unknown ….. go figure. People prefer the surface appeal of Idealism, be it Eastern or Western in flavor, Materialism, Substance Dualism, Property Dualism, or the most noble calling of Skepticism itself.

    For what it’s worth, my model states that there is a third alternative to either a Chronic relationship with the Unknown or an Acute relationship with the Unknown. That third option is a Meaningful relationship with the Unknown. We already know what the Unknown is… it’s our greatest paradox, because the Unknown has no meaning. But just because the Unknown has no meaning doesn’t mean that it lacks value; just the opposite is true. The Unknown is Value, and Value comes first in hierarchy.


  14. Eric,
    “I suspect that visiting alien life would have no problem determining what’s conceptually good and bad for us. They’d say “Welfare for Earthlings (and all else) exists as how good to bad a given subject feels over time”.”

    Given how evolution works, I suspect any aliens would have just as much of a jumbled mess of instincts as we have, with all the resulting never ending debates about the best way to live.

    I don’t doubt that at all Mike. But in this age of science, notice that we humans have developed some truly amazing models on the “hard” side, and continually fail to come up with anything close on the “soft” side. So what permits the non-human side to be studied effectively, but not the human side? Are the dynamics of psychology incredibly more complicated than the dynamics of physics? I suspect instead that since physics is physics, while psychology is us, we find it difficult to be objective about it. Perhaps if we could study the psychology of an alien race (somehow without associating it with ourselves), then we could develop some reasonably solid models from which to describe its function? And perhaps it could do the same for us? (Hypothetically rather than practically, of course!)

    For example, over here (https://selfawarepatterns.com/2018/09/30/seti-vs-the-possibility-of-interstellar-exploration/#comment-23730 ) I proposed that once we invent effective “valence machines”, our species should become progressively more and more dependent upon them. Given that we naturally find this end repulsive, with perhaps entire lives spent hooked up at some stage, such troubling implications seem to restrict us from theorizing about the nature of value itself. Instead we ponder “morality”, which is to say a notion of rightness and wrongness from which to judge behavior. This way we can decide that if something is repulsive then we needn’t entertain it. Instead we can explore doing what’s “moral” or “right”.

    But here’s the catch. If the goal of our soft sciences is to understand the nature of value laden entities, though our morality paradigm does not permit these sciences to openly acknowledge value and its various implications, then their softness should thus be mandated. If something harbors “purpose”, we shouldn’t expect that we can conveniently sidestep that feature and still be able to grasp much about how it works.

    Note that when I propose that the human should end up on “the blue pill of valence machines” (given their creation), I do so as a standard theorist proposing implications for a specific organism’s function. Indeed, perhaps I’m the one who ended up taking the red pill after all?


    • Eric,
      I think it’s worth noting that there isn’t a sharp divide between “hard” and “soft” sciences. Physics and chemistry are harder than biology, which is harder than meteorology.
      Neurobiology is harder than psychology, economics, sociology, etc. Even within physics, particle physics is harder than astrophysics, primarily because the former is an experimental science and the latter is largely an observational one.

      On studying alien species, we do study the psychology of animals (comparative psychology). For simple animals, it’s reportedly easy to develop predictive models. But the more intelligent the animal, the less predictive those models can be. Of course, scientists studying animals have a major advantage in that the ethical rules on animal research are far less restrictive than for human research. We can’t raise a human under strictly controlled conditions to see what variables affect that human’s behavior.

      I don’t see the social science preoccupation with morality that you see. Sure, some of it is preoccupied that way, but much of it seems as objective as it can be, given the constraints of the particular field.


      • Mike,
        The softness that I’m referring to is associated with human related sciences, given that there should naturally be elements of human reality that we’d rather not be true. For example, there’s my speculation that if valence is our end goal, and if we figure out how to build machines which directly provide us with this end goal, then humanity should naturally end up concerned mainly with its capacity to service itself through these valence machines. Conversely, since physics concerns something other than us, our findings in that field simply shouldn’t have such potential to invoke personal emotions. So perhaps if we could somehow maintain a reasonable level of objectivity about human dynamics (regardless of personal emotions), then the vast majority of the softness associated with our human related sciences would evaporate?

        Your observation that we don’t do well in comparative psychology does suggest that we also wouldn’t do well with the psychology of an alien race. Good point. And note that the “comparative psychology” title already suggests the anthropocentrism which I consider so problematic for these fields in general. (Long sigh…)

        It’s not that I see an overt preoccupation with morality in human related sciences. That would indeed be suspicious! It’s more that I consider there to be an understood limitation to science such that even if value does exist for us, its study can only occur outside of its domain, or in what’s known as “philosophy”. I call the dichotomy here “epistemic dualism”, which is to say, a belief that we require two separate disciplines in order to explore one single causal reality. Of course this notion complements substance dualism very nicely, though this one invokes an epistemic failure rather than magic. And notice that while substance dualists are looked down upon in academia, epistemic dualists are instead the accepted norm. But how can there be only one kind of causally related stuff to study, and yet two fundamentally separate modes of exploring how that stuff functions? Here naturalists don’t seem to openly charge me with the crime of “scientism”, perhaps because they have no sensible reply to my question for them. So I presume that they simply think of me this way rather than openly state it.

        This brings me to another unwelcome question. How might scientists develop effective models regarding the nature of a value laden creature, if they also abstain from theorizing the nature of value itself? Regardless it’s not quite that I consider these fields inherently biased, but rather that I consider them handcuffed given that our morality paradigm hasn’t yet permitted them to really get into the nature of value itself.


        • Fizan says:

          Hi Eric,

          I thought I’d reply to our above discussion in this thread to touch on some of the discussion you and Mike were also having. Firstly, when you say “Reality functions amorally, and therefore in order to potentially understand the scientist must strive to take an amoral perspective as well.” I take issue because now it seems you are using the word ‘moral’ which we agreed has a lot of baggage. We also agreed that ‘Reality’ can’t be known rather we can only have useful concepts (or tools).

          Along the same lines, I can’t see the sharp divide that you see between ‘hard’ and ‘soft’ sciences; I think it’s a mistaken divide. I don’t think physics, or anything for that matter, concerns anything other than us. Everything concerns us; that’s why we do it. There are divided opinions on the most fundamental things, for example whether there are infinite universes or just one. Can you imagine a bigger divide than that? Can you provide a similar scenario from the ‘soft’ sciences?
          What you are probably concerned with is the fact that findings in physics and chemistry tend to be replicated time and again, whilst findings in psychology and sociology etc. tend to face a replication problem. They also have the problem of how statistically significant (the sigma value) the findings are, which is usually low. This is a factual problem, and it does not seem to have anything to do with human bias, or with our ability to dig deep enough, or our ability to face the ‘truth’ and be objective, etc. This problem exists because the processes being studied are more complicated than those of physics and chemistry. Whereas the basic units of study in physics and chemistry tend to be basic particles, forces, energy, etc., the basic units of study in psychology and sociology tend to be complex creatures and their complex interactions. Particles don’t learn or change; creatures learn, change, and evolve constantly, making them much more difficult to predict (hence the problems of replication and statistical significance).

          Coming to your model, this is what I understand so far (it may be completely off). You claim that humans (and other higher creatures) are constantly computing to increase ‘valence’. It seems to me that you are more concerned with how we make choices/decisions based on the goal of increasing valence. What I fail to see is an explanation of what consciousness is from your model. You use the word ‘valence’, which is already a positive or negative experience (qualia are embedded into this word in your model). This valence output is generated by the non-conscious computer, and it drives the conscious computer, whose goal is to increase valence. I fail to see a description of how the conscious computer is able to feel this valence as positive or negative. As far as it being a model of decision making, I can’t comment on its validity either, since it would have to be tested first. What I will say is that we are already making a lot of progress in this regard through experimental study, and there are many incomplete models of how we make decisions. For example, we now know that our decisions are usually made many seconds before they come to conscious attention, and as such can be predicted even before the person himself realises it.


          • Thanks for picking this up Fizan!

            The accepted definition for the term “amoral” implies ideas which are neither moral nor immoral inherently, so it’s unfortunate if my use of the term caused you to wonder if I was backhandedly presenting moral theory. Nothing could be further from the truth. I consider notions of rightness and wrongness to serve as an evolved social tool, and so strong do I consider this tool that I believe scientists themselves tend to fall under its spell. Actually I don’t see how one could suggest that moral influences are removed from science today, given that it even harbors a field entitled “moral psychology”.

            I present theory which is no more moral or immoral regarding our nature than the field of physics is. But as such it will naturally have implications that may be perceived as immoral. Thus people are able to say, “That’s too repugnant to be true. What you’re saying would be immoral.” But the thing is, if reality itself happens to have various repugnant implications, it may be that various effective theories will also be repugnant to us. Regardless, please alert me if you can ever document my ideas veering away from amorality.

            Like you and Mike, I don’t consider there to be a sharp divide between harder and softer sciences. But I do consider it extra difficult for the human to effectively study itself, given that what it finds should naturally have personal implications to it. It can be distressing to believe repugnant things about us. My question to Mike was, how does a value laden creature effectively study itself, if it cannot openly discuss the nature of value to it? Note that science effectively abstains from value by leaving it for philosophers, and they in turn present value almost exclusively in terms of moral notions. Of course the answer to my question can only be “poorly”.

            I’ve often heard about how our mental and behavioral sciences are “naturally soft”, thus rendering scientists in these fields blameless. But the effective scientist in these fields must question such personally convenient positions. You may be entirely right that human behavior happens to be far too complex for effective heuristics of the sort that we see in chemistry and physics. (I realize that you’ve been taught this very thing from the beginning of your studies, so it’s to be expected.) But this position seems both self-serving and a self-fulfilling prophecy. Why not remain agnostic regarding whether or not psychology can become far more of a “hard” variety of science? Why not at least give radical theorists who see great problems an objective ear? I ask people in these fields, who make it their business to know all about human biases, to practice what they preach: whenever someone wants to believe something (and of course mental and behavioral scientists want to believe that they’re doing great), question that position first and foremost.

            Regarding my model of brain function, I suppose that I’d hope to gain your interest by explaining that (unlike other models I’m aware of), it concerns two computers. Here the brain is a standard neuron based computer, though I theorize consciousness as a second form of computer that’s outputted by the first. It harbors three varieties of input, one variety of processor, and one variety of output.

            I believe that from my models I should be able to sensibly answer most any non-engineering question regarding the basics of our function. For example, I recently got into the “unconscious” term with Hariod, a term that I believe would be helpful to retire in favor of “altered states of consciousness”, “quasi-conscious”, and of course straight “non-conscious”. But hit me with whatever queries you like and I’ll do my best!


          • Upon further consideration Fizan, regarding “morality” I believe that I interpreted you incorrectly. Like you I usually take my time with this sort of thing, though my last was instead written in haste. When I say that I present “amoral” theory, you’re right that I must inherently get into the nature of “morality”, or a term that we agree has baggage. But there is a relatively standard way to use the term as well. I’ll now get into this to hopefully help clarify my last response.

            We realize that our cars need fuel in order to function, for example, so one could say, “If you want to drive your car, then you ought to make sure it has fuel”. There’s nothing moral about such an “ought”. It’s instead just part of an “If… then…” statement. Apparently Immanuel Kant called these “hypothetical imperatives”. But there are also statements such as “You ought not steal, since stealing is wrong”. This would be a moral statement, one that brings up what he would call “a categorical imperative”. The validity of such a statement will depend upon the validity of an associated rightness/wrongness idea, not an otherwise implicit or explicit goal. And whether moral realists like Kant or antirealists like Massimo, that’s virtually all that philosophy’s field of ethics concerns today (though there is a handful of amoralists as well, beginning I believe with the late J. L. Mackie).

            I’m on the nascent side of the amoralists, but go much further than their relatively standard philosophical musings. I consider our moral notions to be an evolved social tool which effectively inhibits mental and behavioral scientists from exploring the nature of value, or thus hinders their quest to develop useful models regarding our nature. Of course if you believe that modern science already does explore the nature of value sufficiently today, and so isn’t much inhibited by humanity’s paradigm of morality, then we could discuss that.

            I hope that you realize how much I appreciate that you’re now considering the concerns of an outsider. Thank you!


          • Fizan says:

            Hi Eric,

            I would like to know: what personally convenient positions do you think scientists in these fields have taken?

            Secondly, I want to clarify my own history so you may have a better idea of my biases. For the majority of my life I’ve been interested in physics and have idealised many great physicists. I was pushed into biology and the medical field, which I hated. Coming out of med school I decided to take the unusual (and very stigmatised) career route for doctors of going into psychiatry, and have now been working in psychiatry for 4 years. I thought I needed to give you some context, as you are an outsider and because you also said “you’ve been taught this very thing from the beginning of your studies, so it’s to be expected.” With regard to the latter statement, I want to clarify that no one taught us anything about psychology or sociology (at least not myself, studying in Pakistan), not in O and A levels. Then in med school there was ‘behavioural sciences’, which carried the smallest percentage of marks of any subject in all 5 years of med school. Even here the subject matter was never taught; rather, we had to read a small booklet.

            Now coming to the complexity of the subjects: as I said earlier, psychology and sociology are more complex than physics and chemistry. All these fields use heuristics. And I’m certainly not saying that psychology can’t become as ‘hard’ as biology; what I’m saying is that there is no such thing as ‘hardness’. If we want the same level of statistical significance and predictability in psychology as in biology, then we essentially need to be able to explain psychology from the basic biological perspective (that is, at the level of cells), which is exactly what neuroscientists are trying to achieve. This is a tremendously difficult task (not impossible): using cutting edge technology, some neuroscientists are trying to map an entire human brain at the cell level, which they expect to achieve by roughly 2050. Even when they have that template, it would be an even bigger task to make accurate predictions about the brain’s behaviour, since we struggle to do that for even simple systems with a few hundred neurons.
            What I’m essentially saying is that at that level of explanation it becomes biology. Psychology exists as a field because so far we’ve been unable to supplant it from such a perspective, despite trying. Having failed so far, and looking at the enormity of the task, one has to start to wonder whether it is even possible, but as always I remain agnostic about that.

            With regard to ‘Reality’ and ‘morality’, I would like us to clarify whether you believe the theories in physics are useful in that they are describing Reality, or in that they are heuristic tools useful to humans?
            Because if it’s the latter, then how do you claim these tools to be amoral when compared to the tools of psychology, for example?
            Let’s take F=ma and the ‘Hawthorne effect’: how do you think the latter is concerned with rightness and wrongness whilst the former isn’t? (Or give me another example if you want.)

            Perhaps what you are actually trying to get at is something else. Let me elaborate using medicine: what is healthy and what is unhealthy? This is a question which has many dimensions and is perhaps difficult to answer (and probably the kind of thing you want to tackle). But once we agree or decide on what is healthy or unhealthy, we can start using the tools of medicine towards a goal; the same can be said for psychiatry (although it becomes even more murky here). The tools of medicine or psychology are similar to the tools of physics or chemistry, i.e. they would fit into your definition of amoral and are purely based on science.
            But the problem is with our initial presumptions (when we defined what ought to be healthy), which have that moral aspect to them. So far I don’t see a way in which science can tackle that issue, but I’m starting to feel that’s what you are attempting to do?
            In my view you would have to start with concepts like heaven and hell (which I think are also your positive and negative valence) to give direction to what we can all agree we want and do not want.


          • Fizan,
            I have been curious about you, so thanks. So you were a physics student who was pressured into med school (and I can’t argue against the economics of that!), but still rebelled by going into psychiatry? Well, given your blogging, I suspect that you’re enjoying your career. And I also suspect that you’ll find that your field will advance far more than physics will over the next half century, which should be quite interesting.

            I think I understand your point about how it’s inherently far more difficult to develop effective theory in psychology than in physics or biology. That would surely be the case if we need to approach the field from the cellular level, as in neuroscientific mapping. Tough job! And apparently you weren’t formally taught to believe this; it just makes sense to you. I have a different model that you may find effective, however.

            Just as people like Newton have used standard observations of physical dynamics to build effective models, I believe that psychologists should be able to take standard observations of human dynamics to build effective models. And indeed, psychologists in general seem to believe this as well. But apparently we don’t yet have very informative and/or effective theory in the field. There’s nothing close to an ‘f = ma’, for example.

            My proposal is actually that ALL science needs to be approached amorally, so apparently I misled you there. And if so, you’ll surely wonder how I think the sciences are not approached amorally. How might something like ‘the Hawthorne effect’ display morality? But that’s not actually what I’m suggesting. Instead I’m proposing that standard social impositions of rightness and wrongness (morality) have effectively dissuaded scientists from formally acknowledging and exploring the nature of value itself. The idea is that scientists would be socially persecuted for proposing theory which can be perceived as immoral, and indeed, don’t naturally want to go there given what this sort of thing suggests. Regardless, given that science does not openly theorize a value dynamic to existence, I suspect that this basic void hinders mental and behavioral theory.

            I see two separate ways to logically challenge this position. One would be to assert that our mental and behavioral sciences already do formally acknowledge a value dynamic. But then if so I’d like that theory identified. As an outsider I do propose such a position, which I call Amoral Subjective Total Valence, though I haven’t noticed this or any competing theories in science. (My ASTV defines the value of existing over an associated period of time, as the summation of the subject’s positive and negative valences over that period.)
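    (To make the arithmetic of ASTV concrete, the summation above could be sketched as follows; the function name and sample values are hypothetical illustrations of my idea, not anything published.)

```python
# Hypothetical sketch of the ASTV ("Amoral Subjective Total Valence") summation:
# the value of existing over a period is taken to be the signed sum of the
# subject's positive and negative valence samples over that period.

def total_valence(valences):
    """Return the signed sum of valence samples over a period."""
    return sum(valences)

# A period with more positive than negative feeling nets out positive:
samples = [2, -1, 3, -1]
print(total_valence(samples))  # prints 3
```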

            Then the second logical way I see to challenge my position would be to propose that value theory for something that harbors value, isn’t actually needed to develop useful models regarding its function. Why wouldn’t we need to openly acknowledge purpose in order to develop useful models regarding the function of purposeful creatures?

            Then given my value theory I also present a wide range of models for individual and comprehensive assessment.


  15. Lee Roetcisoender says:

    Morality is a “straw man”, as are the notions of equality and justice. Every artist knows that the “underlying form” of any expression is contrast, because without contrast in all of its forms there cannot be an expression. The best metaphysical definition for morality, justice and equality that I’ve been able to come up with is: “An unequivocal, equal partnership of shared power.” This is an axiom and self-evident, a dynamic that does not exist in our phenomenal realm.

    Quote from chapter 11:
    Justice, equality and morals are not qualities, properties or characteristics which can be assigned to the underlying form of any expression. Every artist is aware of the inherent nature of expression. The underlying form of expression consists of diversity and contrast, both within the texture, form and content. This axiom is self-evident, unveiling itself across the entire spectrum of art, whether it be a painting, a sculpture, a drawing, music, dance, or the syntax of poem and prose. The diversity of asymmetry, not equality; the contrast of inequity, not justice; the opposition of immorality, not morality: these are the fundamental building blocks of expression. In a world of justice, music would consist of a single analogue of sound for fear of being prejudiced. In a world of morality, a painted work of art would consist of a single hue of color, devoid of diversity or contrast in texture, form and content, for fear of being immoral. In a world of equality, dance would consist of a singular analogue of body position for fear of inequity. In a world of morality, a sculpture made of stone would be the hewn marble untouched by human hands for fear of being immoral. In a world of justice, a drawing would consist of a single line following an infinite plane into infinity for fear of bigotry. In a world of equality, the syntax of poem and prose would consist of a single letter in a never ending sentence for fear of partisanship.

