The prospects for a scientific understanding of consciousness

Michael Shermer has an article up at Scientific American asking if science will ever understand consciousness, free will, or God.

I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, relating to how the concepts are conceived in language.

On consciousness in particular, I did a post a few years ago which, on the face of it, seems to take the opposite position.  However, in that post, I made clear that I wasn’t talking about the hard problem of consciousness, which is what Shermer addresses in his article.  Just to recap, the “hard problem of consciousness” was a phrase originally coined by philosopher David Chalmers, although it expressed a sentiment that has troubled philosophers for centuries.

Chalmers:

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does…The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.

Broadly speaking, I agree with Shermer on the hard problem, but with an important caveat. In my view, it isn’t so much that the hard problem is hopelessly unsolvable; it’s that no scientific explanation will be accepted by those who are troubled by it. In truth, while I don’t think the hard problem has necessarily been solved, I think there are many plausible solutions to it. The issue is that none of them are accepted by the people who talk about it. In other words, this seems to me more a sociological problem than a metaphysical one.

What are these plausible solutions?  I’ve written about some of them, such as that experience is the brain constructing models of its environment and itself, that it is communication between the perceiving and reflexive centers of the brain and its movement planning centers, or that it’s a model of aspects of its processing as a feedback mechanism.

Usually when I’ve put these forward, I’m told that I’m missing the point.  One person told me I was talking about explanations of intelligence or cognition rather than consciousness.  But when I ask for elaboration, I generally get a repeat of language similar to Chalmers’ or that of other philosophers such as Thomas Nagel, Frank Jackson, or others with similar views.

The general sentiment seems to be that our phenomenological experience simply can’t come from processes in the brain.  This is a notion that has long struck me as a conceit, that our minds just can’t be another physical system in the universe.  It’s a privileging of the way we process information, an insistence that there must be something fundamentally special and different about it.  (Many people broaden the privilege to include non-human animals, but the conceit remains the same.)

It’s also a rejection of the lessons from Copernicus and Darwin: that we are part of nature, not something fundamentally above or separate from it. Our old intuitions about Earth being the center of the universe, or about us being separate and apart from other animals, proved untrustworthy. Likewise, our intuitions formed from introspection, from self-reflection, a source of information shown to be unreliable in many psychology studies, should not necessarily be taken as data that need to be explained.

Indeed, Chalmers himself has recently admitted to the existence of a separate problem from the hard one, what he calls “the meta-problem of consciousness”.  This is the question of why we think there is a hard problem.  I think it’s a crucial question, and I give Chalmers a lot of credit for exploring it, particularly since in my mind, the existence of the meta-problem and its most straightforward answers make the answer to the hard problem seem obvious: it’s an illusion, a false problem.

It implies that neither the hard problem, nor the version of consciousness it is concerned with, the one that remains once all the “easy” problems have been answered, exists. They are apparitions arising from a data model we build in our brains, an internal model of how our minds work. But the model, though adaptive for many everyday situations, is wrong when it comes to providing accurate information about the architecture of the mind and consciousness.

Incidentally, this isn’t because of any defect in the model.  It serves its purpose.  But it doesn’t have access to the lower level mechanisms, to the actual mechanics of the construction of experience.  This lack of access places an uncrossable gap between subjective experience and objective knowledge about the brain.  But there’s no reason to think this gap is ontological, just epistemic, that is, it’s not about what is, but what we can know, a limitation of the direct modeling a system can do on itself.

Once we’ve accounted for capabilities such as reflexive affects, perception (including a sense of self), attention, imagination, memory, emotional feeling, introspection, and perhaps a few others, essentially all the “easy” problems, we will have an accounting of consciousness.  To be sure, it won’t feel like we have an accounting, but then we don’t require other scientific theories to validate our intuitions.  (See quantum mechanics or general relativity.)  We shouldn’t require it for theories of consciousness.

This means that asking whether other animals or machines are conscious, as though consciousness is a quality they either have or don’t have, is a somewhat meaningless question.  It’s really a question of how similar their information processing and primal drives are to ours.  In many ways, it’s a question of how human these other systems are, how much we should consider them subjects of moral worth.

Indeed, rewording the questions about animal and machine consciousness as questions about their humanness makes the answers somewhat obvious.  A chimpanzee obviously has much more humanness than a mouse, which itself has more than a fish.  And any organism with a brain currently has far more than any technological system, although that may change in time.

But none have the full package, because they’re not human.  We make a fundamental mistake when we project the full range of our experience on these other systems, when the truth is that while some have substantial overlaps and similarities with how we process information, none do it with the same calibration of senses or the combination of resolution, depth, and primal drives that we have.

So getting back to the original question, I think we can have a scientific understanding of consciousness, but only of the version that actually exists, the one that refers to the suite and hierarchy of capabilities that exist in the human brain.  The version which is supposed to exist outside of that, the version where “consciousness” is essentially a code word for an immaterial soul, we will never have an understanding of, in the same manner we can’t have a scientific understanding of centaurs or unicorns, because they don’t exist.  The best we can do is study our perceptions of these things.

Unless of course, I’m missing something.  Am I being too hasty in dismissing the hard-problem version of consciousness?  If so, why?  What about subjective experience implies anything non-physical?

190 thoughts on “The prospects for a scientific understanding of consciousness”

  1. I read your former posts and you are way ahead of me on this topic, but I think we will finally solve even the hard problem. As with most mental properties, I think consciousness is something that co-evolved along the way (there’s a term for it, but I am no biologist). I think what distinguishes us from other animals (in degree, not necessarily exclusively) is the mental construct we call imagination. With it we create constructs of the world around us and then can do experiments on those constructs rather than having to do them in real life. E.g. “is that rustling of the grass due to the wind or a leopard?” In our imagination we can see ourselves stepping up to that area of grass and seeing … grass … or the fangs of a leopard. Since the consequences of making a mistake could be lethal, we take Door #2 and avoid the site of the rustling. This tool, which helps us survive, is amplified the more we can detail the experiments, which gives a form of inner life to our interior debates over our options.

    Possibly this property we call imagination was due to something other than evolutionary pressure; maybe it was a mistake, but it is a damned useful mistake and evolution selected for it. Certainly a certain number of functioning neurons are required, etc., etc.
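The simulate-before-acting strategy in this comment can be sketched as a toy decision procedure. All of the action names, probabilities, and cost numbers below are invented for illustration, not empirical values:

```python
# Toy sketch of "running the experiment in imagination" rather than in
# the world: compare the expected costs of candidate actions under an
# imagined model of the situation, then act on the cheaper one.

def expected_cost(action, scenarios):
    """Average the cost of an action over the imagined scenarios."""
    return sum(prob * costs[action] for prob, costs in scenarios)

# Imagined scenarios: (probability, cost of each action in that scenario).
# The rustling grass is probably wind, but might be a leopard.
scenarios = [
    (0.9, {"investigate": 0,   "avoid": 1}),   # just the wind
    (0.1, {"investigate": 100, "avoid": 1}),   # a leopard (lethal risk)
]

choice = min(["investigate", "avoid"],
             key=lambda a: expected_cost(a, scenarios))
print(choice)  # -> avoid: the rare but lethal outcome dominates
```

The point of the sketch is only that a cheap internal model lets the costly experiment happen in imagination: the 100-unit cost is paid in simulation, never in the world.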


    1. I think you’re right about imagination being an important capability. But I don’t think it was a mistake. It enhances our survivability. The earliest nervous systems simply responded to direct stimuli: touches, the presence of certain chemicals, etc. Perception from distance senses dramatically widened the scope in space of what our nervous system could respond to. Memory and imagination further expanded the scope but now in time. Together, perception and imagination dramatically increased the repertoire of available actions an organism could take, with an ability to respond to stimuli in an increasing envelope of time and space.

      Fish imagination is very limited, usually only able to look ahead a few seconds into the future. Land animals can generally imagine much further ahead, for minutes, or in some cases hours. But only humans can imagine a future days, weeks, months, or years ahead of time. I think what gives us this ability is symbolic thought. Imagine attempting to think about what will happen in 2020 if you didn’t have the symbolic framework of the calendar.


  2. “The general sentiment seems to be that our phenomenological experience simply can’t come from processes in the brain.”

    Which would be the polar opposite of what John Searle says about consciousness, so not sure why he gets mentioned in that sentence.

    Otherwise, yes. The example I use is standing on a hill watching a sunset. Various senses – kinesthetic, proprioceptive, inner-ear, visceral – all tell us we are at rest. This much is certain! Visual sense confirms we are at rest with respect to the hill and local surroundings. There is apparent movement on the horizon, therefore the sun is “setting”. This picture makes sense, feels right, and until 1610 most right-thinking people would have agreed. It’s just that it’s completely wrong, because our senses are tuned in such a way that we don’t experience the Coriolis forces we are subject to. Not because we are stupid or anything like that. “The sun moving” is the logical conclusion until you invent the telescope to prove it isn’t happening.

    Consciousness is like this. We observe it to be one thing, through the senses that we have available to us, and draw the wrong conclusions. The “self” is a complex of experiences, or even perspectives on experience, that we call self and everyone knows what we mean because they experience it also. But there is no such thing outside of the experience of it.

    I think part of the legacy for a lot of these philosophical problems is that they are phrased in the abstract (the supposed realm of pure intellect and reason). What is the nature of consciousness? Well, that’s a question about an abstraction and the answer is always going to be that an abstraction has the nature of being abstract. Deduction only leads to restatements of your axioms. There probably is no shared quality that all experiences have that would make up the category of “consciousness”, especially since self-reflexivity is also an experience.


    1. Good catch on Searle. I stand corrected. Not sure why I wrote his name there. I replaced it with Frank Jackson’s. (I do disagree with Searle on other things, but that’s a different topic.)

      Excellent points on the rest. I particularly like the abstraction one. Yes, consciousness is an abstraction, a category, a word we use to refer to a suite of phenomena. Treating it like a thing in and of itself is like treating humanity as a thing, rather than a category.


      1. One can only disagree with Searle on naive realism. I was gutted when I realised he was serious about that, because I found his books on the mind and society brilliant.

        Yes, and the thing that the suite of phenomena have in common is that “I” experience them. And we already know that the self is a virtual model maintained by the brain. So the question becomes, “Is this a coherent category?” I’m not sure, but I’m starting to have doubts.


        1. I think it’s a coherent category as long as we remember what it is. My issue with calling the self an illusion is that we then need to come up with another term for that internal model we maintain. But I could see the point of view that shedding the old vocabulary clears out a lot of misconceptions.


  3. Related: John Heil’s explanation – https://iainews.iai.tv/articles/the-secrets-of-experience-auid-850 – of why the “hard problem” need not be as hard as it’s often made out to be.

    I also love your point about the communication between perceiving centers and planning centers of the brain: this hypothesis predicts that we should find an epistemic gap between scientific descriptions of brain activity, and the qualia they correspond to. And lo and behold, we do find such a gap! In other words, (any intelligent version of) physicalism predicts the very fact that is most often used against it.

    It’s not often that a theory is rejected because it makes a successful prediction. But there you have it.


    1. Thanks Paul, both for the link to Heil’s article and your comments about my other post.

      You actually focus on an aspect of the communication hypothesis that I failed to note, that there won’t necessarily be communication from some regions of the brain, notably the early sensory processing regions. For example, in visual perception, we only consciously get the more prepared versions from later processing regions, not the raw sensory data coming in from the optic nerve. I suspect the circuits from these early regions to the movement planning regions don’t exist, and so no matter how hard we introspect, that raw sensory input is always denied to us.


      1. This comment disappeared, then got called a duplicate; I’ll try again.

        Good point about early raw sensory input. I had a different gap in mind: I mean the epistemic gap between the whole brain process description of perceiving red (for example), versus why that feels this way (subjective-red). Contrast the heliocentric explanation of our perceptions of sundown – it explains everything we observe. We can think through the explanation and see, oh yeah, the angle from us to the sun will dip below the horizon, just as we see. Whereas the neural description doesn’t explain why subjective red feels like this.

        What would it take to explain feels like this? The language of the explanation would have to call up a memory in the hearer, a memory of subjective-red. So, something about our perception of neurons and brains would have to rely on the very same sensory-processing region that communicates redness. But that would be bizarre and wildly improbable. It would be like looking under the hood of a car and finding that the very same machinery that constitutes the engine was also the radio receiver. And we have done some looking under the hood (i.e. skull) and that’s not what we’re finding.

        Our scientific hypothesis can’t provide a non-circular explanation of why a particular quale feels like this. But it does provide an explanation of why there are no non-circular explanations of subjective feelings. And that is what is needed.


        1. Sorry Paul. Looks like the spam folder ate the original. If that ever happens again, and you don’t feel like re-typing a comment, just post a short followup comment letting me know, or drop me an email, and I’ll fish it out.

          Thanks for the clarification. Well said!


    1. Thanks Mark. Your essay reminds me of a conversation I had with someone a few years ago, when they asked how we have the sensation of red. After I described the concept of red sensitive retina cones and the process where signals propagate up the optic nerve, are processed in the occipital lobe, and generate neural patterns in the rear temporal lobe and parietal lobe that interact with regions in the prefrontal cortex, they responded with, “Yes, but why do we experience red as redness?” I asked how else they thought it should be received, and pointed out that it has to be communicated some way. They responded by asking why that way in particular.

      In other words, some people simply aren’t prepared to receive the explanation, or at least an explanation other than what they’re hoping for.


  4. This is the question of why we think there is a hard problem.

    That got my head nodding. Like you, provided I understand your position correctly, I don’t tend to think it’s really complicated at all.


  5. But it doesn’t have access to the lower level mechanisms, to the actual mechanics of the construction of experience.

    Okay, here’s a devil’s advocate argument that might have some challenge to it.

    Just as much as you yourself do not have any kind of access to yourself to prove this statement you’ve made. You are working on faith that this is correct – you being unable to access the construction of experience means you can’t access whether any such construction is taking place. You have a theory that inwardly you cannot confirm and have placed faith in it (stated as ‘having faith’ for controversy – arguably we have faith that we won’t be run over by a bus or stabbed by our loved ones, and it’s not controversial that we invest in that kind of faith. But I said I’d be a devil’s advocate, so there’s some provocative wording for you.)

    So how much of this lack of inner access can you actually know for sure, given your lack of inner access to determine you have a lack of inner access? Or are you running off faith that it is the case?

    And when the other guy runs off another faith, can you be so ready to jump on their unconfirmable intuition about themselves?

    “But the science! The brain scans!”

    All internally unconfirmable. The screen can show you what’s in your brain, but if you can’t feel it, how do you know it’s true?

    My main point is a pain in the ass humility point where a lack of internal access means you or I are no better off than the guy who surmises a supernatural soul is involved. We’re equally as oblivious as to what is involved as that guy.


    1. I think we do have to always be cognizant that the current explanations may not always be the best ones, that more accurate ones, ones that make better predictions, might always replace the current ones. That kind of epistemic humility always needs to be there.

      That said, I don’t think the faith (I prefer the word “trust”) is equal on both sides. First, we have a lot of neuroscience data that tells us that lower level processing takes place all over the place. I’m sure you’ve heard the example that we have a hole in the center of our field of vision which we never perceive, because the brain edits it out. Or that we perceive the feel of our hand touching something as concurrent with the sight of the touch, even though the visual signal reaches the brain much faster than the tactile one. And of course, we know that autonomic processes, such as heartbeat regulation, breathing, hormonal regulation, etc., all happen well below the level of consciousness. So, low level processing happens.

      But many dualists would probably accept that. The real bone of contention might be where the upper level processing happens. Here, brain scans are only the latest iteration of something that’s been going on for a long time. We have a century and a half of medical data correlating capability loss with specific brain injuries. If dualism of some kind were true, we’d expect some aspect of the mind, of consciousness, to be immune, but the data doesn’t show that. V. S. Ramachandran wrote a whole book describing all the weird aspects of the mind that could be knocked out with such injuries.

      And the idea of higher level processing not having access to lower level mechanisms will resonate with anyone who works in software development. Software applications don’t have access to the lower level processing of the operating system, or of the hardware. In other words, the application’s models don’t include what happens at those lower layers. It’s not hard to imagine a similar dynamic happening in the brain, where conscious cognitive functions don’t have access to lower level ones.
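The layering analogy can be made concrete with a minimal sketch (the class names and "pages" are invented for illustration, not a real OS API): the top layer only ever sees the finished result that the layer below chooses to expose, never the machinery that produced it.

```python
# Toy sketch of the layering analogy: an "application" layer queries
# only the interface the layer below exposes, with no handle on the
# lower-level mechanics. Names here are illustrative, not a real API.

class _Kernel:
    """Low-level layer: manages raw state the layers above never see."""
    def __init__(self):
        self._raw_pages = {0: b"...", 1: b"..."}   # hidden mechanics

    def read(self, page):
        return self._raw_pages.get(page, b"")

class FileSystem:
    """Middle layer: exposes whole files, hides pages entirely."""
    def __init__(self):
        self.__kernel = _Kernel()   # name-mangled: not part of the API

    def read_file(self, name):
        # The caller gets a finished result, not the page-level process.
        return b"".join(self.__kernel.read(p) for p in (0, 1))

app_view = FileSystem()
data = app_view.read_file("notes.txt")
# The application receives a prepared "percept" (file contents) but has
# no access to the page mechanics that produced it, analogous to
# introspection reporting finished experience, not its construction.
```

The design point is that the hiding is structural, not a defect: the upper layer's model of the world simply contains no terms for what the lower layer does.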


      1. The thing is, with the blind spot in the eye, you can actually test that. Can you test any of this science stuff internally and tell that your brain works this way or that, as suggested (with mounting evidence) by scientific investigation? The answer is no. We’re stuck in the same place as those claiming a supernatural soul or a dozen other interpretations of events.

        I mean, with more and more science out there you could arrange to have a needle put into your brain and fire a pulse of electricity to produce certain response effects. But could you tell that can be done from the inside? From self reflection? No.

        Without such access, does it count as any better understanding of the professed state of ‘consciousness’ than the dualist?

        The explanations require so much trust as to be to the point of faith. That’s most of your hard problem there – that leap of faith.


  6. I may perhaps be too literal in my thinking, or insufficiently subtle and agile of mind, but I have never once been able to grasp why the problem of consciousness is a problem at all, hard or otherwise. I suspect that you have nailed it in your reply, but I can’t even get to the point where I understand what the fuss is all about.


    1. I have the exact same problem with spirituality. People try to explain to me what it is, but I never understand. It always sounds like religion or mysticism. I doubt that it exists at all. Same thing here.


    2. For a long time, I totally didn’t understand what the hard problem was about. Eventually I made a strong effort to get that understanding. So many people were troubled about it that it felt like I must be missing something.

      I think I do understand it now (although the language has never been particularly concise or concrete). The answer has always seemed obvious to me. Introspection is not to be trusted. Our first strategy with any mysteries or paradoxes it seems to produce should be to rule out that they are mere apparitions.

      Neuropsychologist Elkhonon Goldberg, in one of his books, made the quip that the only reason we still talk about consciousness is, “Old gods die hard.” I think consciousness remains a subject because a lot of people want it to be something more than it is.


  7. Hi Mike,

    I think you’ve said this exceedingly well, and that you describe a coherent view of the world. It is also the case, I believe, that your conclusion is your premise. Unfortunately I think that is the very nature of this topic of discussion. Those who see this differently simply have a different premise.

    On the question of the hard problem, I would just say that when we first watched balls bouncing down the street, knowing nothing about physics, as nascent scientists we could formulate a dozen hypotheses for why bouncing balls behave as they do. Then we could test the predictions of our ideas on another ensemble of bouncing balls, and what’s more, we could invite everyone in the world to observe the same data we have in our possession.

    I think it is rational to admit that the problem of consciousness is another category of problem. The reason is that no one else can observe another person’s consciousness. I simply cannot test with 100% confidence any rules I may formulate about the state of your experience and the state of your physical body without hypothesizing a one-to-one coupling of subjective experience to the physical system. It remains possible, at least for the time being, that the expressions we make on our faces are decoupled from the experiences we are having at that particular moment in time.

    The hypothesis that underpins your article is twofold I think. The first aspect is that a necessary and precise coupling exists between subjective experience and the state configuration of a body, and second that this coupling has a hierarchy to it, which you describe as a real and an illusory component. Taken together these hypotheses produce a coherent view of the world.

    The strength of argument that science offers us is evidence of the first premise: e.g. evidence of the coupling, insofar as we have been able to map various correlations. What science cannot yet ascertain is whether or not the hierarchical disposition is accurately posed. For instance, if I put a glove on my hand, I could confirm that everywhere the glove goes, my hand goes with it. There is perfect coupling, or correlation, and if our view was sufficiently limited, we could suspect the glove was driving the hand. It’s hard to envision making that mistake, but it’s merely a simple analogy.

    We have a preponderance of evidence at our disposal to suggest that balls do not bounce as they do because that is how they want to bounce. But when it comes to our subjective experience, I think it remains the case that we cannot objectively study the matter. I don’t believe we have the capability to concurrently examine our subjective experience and the state configuration of our body without dragging into the problem the unreliable nature of our perceiving. How do we do that? So the only way to approach the problem scientifically is to make the two claims above: to test for a perfect coupling of material states and subjective experience, and to assert the primacy of material states.

    It is a beautiful and powerful set of claims with considerable predictive power, and yields a scientific explanation of consciousness. But it is obvious why people who disagree won’t accept a scientific answer, and that is because it assumes the outcome through the second premise. What you’re left with is Occam’s Razor. At the present time, I respect opinions on both sides of the line, and look forward to what comes next.

    Michael


    1. Suppose the afferent neurons (the ones going from receptor cells to the brain) on the backs of two people’s hands were somehow grafted together. Isn’t it likely that a stimulus to a hair follicle on one person’s hand would produce a tickle in both of them? If it wasn’t precisely the same tickle (due to higher order neural processing), they could at least have an interesting discussion of “what it is like to experience that tickle.”


      1. This reminds me of the conjoined twins who seem to share sensory perceptions.

        Of course, we from the outside can’t know what it’s like to be them in their shared state, although who knows what future technology might provide.


        1. SelfAware,

          An utterly riveting article. Thanks so much for linking it—and in such a creative format!

          The “tickle” example in my comment was suggested to me by the neurobiologist Gordon Shepherd at Yale when I emailed him asking if he knew of any attempts to graft neurons. I’m surprised—in fact a little shocked—that he seemed unaware of cases of craniopagus twins. Sharing a “tickle” is small potatoes.

          I think the takeaway from the article (and your post) is that there really is no specific philosophical “problem of consciousness,” at least not in terms of its origins and general ontogeny. The problem is how to use it properly, especially in view of the fact that it perishes with our bodies.

          Mark


    2. Hi Michael,
      As always, I very much appreciate your kind words and thoughtful comments.

      If I understand your remarks correctly, you agree that science can corroborate correlations between subjective experiences and body/brain states, but has yet to validate the hierarchy of processing.

      On the first, I actually have to admit here that, while I think science is making progress in this direction, I actually don’t think it’s there yet. For example, it’s not clear to me that we yet know what the brain state is for the experience of redness.

      Ironically, I think we have better information on the lower level processing. For example, we know there are cones on the retina that get excited by certain wavelengths of light. A portion of these cones get excited by the range of wavelengths we generally define as red. This causes signalling to go up the optic nerve to the thalamus and then to the occipital lobe.

      We have a good idea of where various combinations of shapes, textures, and colors get resolved into particular categories of objects. We also have a good idea of where any motion of these objects may be identified and processed.

      This all happens below the level of, or outside of, consciousness. I’m generally not conscious of my mind resolving a fish shape into the fish category, just the final perception, unless there are conditions that make this resolution difficult, such as poor lighting or other confounding factors. But we know from brain injury studies that this processing happens, since injury to particular areas deprives people of the ability to make those resolutions.

      But where does the brain determine that “I am seeing red” or “I am seeing a fish”? Where does it become conscious knowledge? Some people argue that it happens in the parietal lobe, others the prefrontal cortex. Some insist it doesn’t happen until the introspective mechanisms in the prefrontal cortex kick in. Myself, I find compelling the view that it happens collaboratively between several regions, but orchestrated by the prefrontal cortex. But at this stage it’s mostly (informed) speculation.

      It seems to me that these gaps in knowledge do give some comfort to those who see subjective experience as non-physical, but the gaps seem to be getting narrower every year.


  8. Some further thoughts on this. I’ve recently been reminded that the problem with materialism is that it is usually combined with reductionism. In reductionist materialism, the “real” is only the lowest layer of the universe (what happens at the Planck scale) and everything else is some kind of illusion. And because the lowest level is deterministic, reductionist materialists argue that the entire universe, across 100 orders of magnitude of mass, length, and energy, is also deterministic. And then they try to explain consciousness within this straitjacket.

    Consider a protein catalyst: a macromolecule made from hundreds of amino acids chained together. However, the catalytic activity has nothing to do with the chemistry of amino-acid monomers or polymers; it arises because the protein folds into a particular shape. It is the structure of the protein as a whole that gives it the catalytic property. The atomic theory of matter could never predict this. It’s not even in the remit of the atomic theory of matter (let alone quantum field theory!).

    Once we move along the scale to a living organism, structure is playing the decisive role and substance is almost irrelevant. Our atoms and molecules are constantly being swapped out for identical components without our noticing or needing to notice. This is pretty much true at the level of cells. Structures have the property of persisting over time even when components are replaced (Theseus’s ship is still a ship no matter how many planks are replaced).

    Yes, we can see the brain as a collection of atoms or cells, but this tells us little. What makes the brain interesting is the structure, the topology of it. A reductionist approach will certainly gain us knowledge, but not the kind of knowledge that allows us to replicate such a system.

    A biologist may learn something about the organism they are interested in by reducing it to its component atoms. But clearly they will learn more by looking at its molecules. And more still by looking at its organs. And still more by looking at a whole living organism. And most by seeing the organism alive and interacting with its environment. It’s not that reductionism tells us nothing about complex systems. We do gain some knowledge from destroying structures. The problem is that it is the structures themselves that make the higher levels interesting and if we don’t pay attention to that, then we are missing at least half of the universe.

    The phenomena that interest us are clearly emergent properties of an embodied brain (and sit in the middle of the layers of scale in the universe). The concern some of us have over reductive materialism is that it’s simply the wrong approach for understanding such phenomena. We need anti-reductive (or emergentist) approaches, whether “materialist” or some other view. Personally I think mono-substance ontologies make the most sense. What most of us who get pejoratively called “materialists” really think is that there is no ontological distinction between mind and matter/energy, even though there might be a useful epistemological distinction.

    Part of the reason ontological dualists are pessimistic about understanding what they call “consciousness” is precisely that dualism makes the mind axiomatically unknowable. A year after David Chalmers published that article on the Hard Problem, he published another advocating dualism. In 1995 he was mildly optimistic about solving the HP, but by 1996 he had completely abandoned that optimism in favour of a pessimistic dualism which said we would never solve it.

    As ontological monists we tend to be more optimistic, because the successes in understanding what stuff is made of, and made into, may be extrapolated to other manifestations of the one kind of stuff. Our methods are applicable across the board, and thus we can expect progress.

    The problem is that substance monism is counter-intuitive: there is a clear epistemic mind-body difference, and people assume that this is because of an ontic difference (I argue that it isn’t). Most people throughout history have been some kind of mind-body dualist. Most people I know still are. But it is difficult to have a conversation about it because neither side really understands their own views very well, i.e. many don’t adequately distinguish epistemology from ontology, and within that don’t see the reductionist/antireductionist distinction clearly. Add to that my previous comment about consciousness qua abstraction and one begins to see why there is no consensus. The commitment to legacy concepts in philosophy and science seems to me to be the greatest stumbling block to progress in this area.


    1. Well said.

      I do think we have to make a distinction between reductionism and eliminative reductionism. I count myself in the first camp, but not the second. While I think everything eventually reduces to elementary particles, I don’t find it productive to dismiss higher level phenomena. The higher level phenomena aren’t just jumbles of the lower level ones. The structure of these phenomena, the organization, the patterns, matter.

      Epistemically, Sean Carroll pointed out that we can’t use quantum theory to predict the periodic table of elements, much less stars, planets, galaxies, and the rest of the universe. The fact is that our minds have limited capabilities. We construct models to understand reality. Some of those models feel primal to us, because they’re innate or ones we develop very early on, such as models of our bodies, homes, and social environment. Others require substantial metaphor, such as our understanding of the very big or very small.

      The main thing is that we have to switch models as we scale up or down in size or complexity. When thinking about particle physics, we employ particular models for that. We employ different models for chemistry, biology, or social gatherings.

      Even in computing, I hold certain models when I’m thinking about hardware, different models when thinking about device drivers or low level operating system functionality, and yet different models when thinking about automated business processes. In this case, there’s no doubt for anyone that the higher level processes are completely built on the lower level ones, but dealing with the higher level ones productively requires employing the right model.
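      A toy example may make the layering concrete. Here a single underlying event is described through three hypothetical models, one per level of abstraction (every name below is invented for illustration):

```python
# Illustrative sketch: one underlying event described through three
# hypothetical models at different levels of abstraction. All field
# names and view functions here are invented for the example.

def hardware_view(event):
    return f"interrupt raised on line {event['irq']}"

def driver_view(event):
    return f"block read completed on device {event['device']}"

def business_view(event):
    return f"invoice {event['invoice_id']} loaded for approval"

event = {"irq": 14, "device": "sda", "invoice_id": "INV-001"}

# Each description is "completely built on" the layer below it, yet
# dealing with each level productively requires its own model.
for view in (hardware_view, driver_view, business_view):
    print(view(event))
```

      Nothing dualistic is going on: it is one event, but no single vocabulary serves all three levels.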

      So I think you’re right. Substance dualism is intuitive precisely because we employ a different model for mind than we do for brain. It’s completely rational for us to do so. And the exact connections between the two were, for a long time, completely obscure. Even today, as I admitted to Michael above, the mapping between them is far from complete. Even if it ever is, the connection between the two won’t feel like one arises from the other.

      And the model for our own consciousness is innate, an evolved adaptation. But there’s nothing to map that model, at least intuitively, to the others. There never will be. We’ll always be faced with having to look beyond our intuitions, as we have to do with other aspects of science.


      1. Hmm. Of course all structures can be reduced, but all the knowledge one gains in the process is knowledge of substance. Knowledge of structure, and of phenomena that depend on structures, is ipso facto eliminated by reductionism. Any reductionist ontology leads to methodological reductionism, and this is always eliminativist. If you break a brain into pieces you won’t find a conscious mind anyway.

        However, since we are interested in high levels of organisation and structure, reductionism can only shed a limited amount of light on the subject.

        In the classic example of the Ship of Theseus, the focus is usually on the identity of the ship, which is eliminated by reductionist methodologies. However, no matter how many planks we replace, the ship is still a ship. The structure has integrity over time and has properties that are more than simple aggregations of planks etc. A pile of wood may float, but a raft is never going to give you the advantages of a boat in terms of carrying capacity and hydrodynamic efficiency.

        If one takes a reductionist ontology as axiomatic and goes around dismantling everything, then of course one gains knowledge of parts and substances. And this is taken as an endorsement of reductionist methodologies. But even the most hardcore reductionist has to turn this on its head to understand systems. And they all do, even if only tacitly.

        Any complex phenomenon must be seen through two lenses: substance reductionism and structure antireductionism. The brain is made of neurons, but it is not a mush of disordered neurons; it is a network of neurons with a very specific topology. Without that topology the neurons are not a brain. For example, the 100 million or so neurons that line our gut do not constitute a brain, even though that is roughly how many neurons a cat brain has.
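        The topology point can be put in a toy graph sketch: the same "substance" (a fixed set of nodes) forms a connected network under one edge set and a disconnected mush under another. This is a hypothetical illustration, not a neural model:

```python
# Toy illustration: identical "substance" (the same four nodes), two
# different structures (edge sets). Only the topology distinguishes a
# connected network from a disconnected mush.

def is_connected(nodes, edges):
    """Depth-first check that every node is reachable from any one node."""
    adjacency = {n: set() for n in nodes}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adjacency[n])
    return seen == set(nodes)

nodes = {"a", "b", "c", "d"}
ring = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")}  # a network
mush = set()                                             # same parts, no wiring

print(is_connected(nodes, ring))  # True
print(is_connected(nodes, mush))  # False
```

        Enumerating the nodes tells you nothing about which of the two systems you have; only the structure does.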

        It’s not enough to accept weak emergentism and switch reductive models as we change scale. Of course we must do this. And we all do it, though few people these days talk about it, and even Sean Carroll, who does, maintains a commitment to a reductionist ontology, as we can see from his frequent references to “fundamental reality”. But in itself this is not enough.

        Because of paradigmatic reductionism, particle physicists tend to dominate the discussion about metaphysics. The axiom is: “smaller is more real”. But this is just an ideology. It’s just something that people with a metaphysical commitment to reductionism assert. It’s not true. By most standards of “real”, structures that persist over time and can act as causal agents are every bit as real as simple objects. From a human point of view, macro objects seem more real because we can interact with them directly. Biologists can also tell us about reality. Sociologists can tell us about social reality. I make no claims for psychology which I am currently very dubious about.

        To declare oneself a reductionist is to announce the poverty of one’s approach to any given problem. It states that one’s ontology limits the methods one is allowed to use. This is more religion than science. But the fact is that you almost certainly use antireductive methods, but cannot talk about them because reductionism has impoverished the philosophical vocabulary and made systems talk taboo (the very term antireductive says it all really). Systems-oriented sciences are still seen as on the fringe of mainstream science precisely because of this hegemony of reductionism.

        Methodologically, substance reductionism gains us knowledge of parts and substances. Structure antireductionism gains us knowledge of systems. Structure reductionism eliminates potential knowledge. Substance antireductionism leads to false positives like mind-body dualism or panpsychism.

        Doing philosophy should not be a situation in which one takes sides, defends a goal, and attacks the opposition. It should be a situation in which one uses the best tools for the job of building something of value.

        The reason I think that substance dualism is intuitive is not to do with models, since it emerged at a time when the world was all on one basic scale as far as anyone knew. It was long before the telescope and microscope opened up new vistas for us to ponder. It’s more to do with how the brain evolved and erring on the side of caution. It’s better to avoid 100 false tigers than to fail to avoid one real tiger. So we over-determine objects as having agency. This allows us to imagine disembodied agency. It combines with certain types of weird experience – the paradigm being the out-of-body experience – and with the fear of death (plus one or two minor factors) to make disembodied minds minimally counterintuitive (to use Justin L. Barrett’s phrase). The discovery of different scales has not affected this in the vast majority of people. Even as someone with a background in science, I have to admit I find very small and very large things hard to imagine – and I’ve spent many hours peering down microscopes and proving Newton’s Laws (in a non-relativistic frame).

        I do agree that our eventual understanding of minds will be counter-intuitive for the majority of people. For example, my standing assumption these days is that anyone still talking about “consciousness” simply hasn’t understood the problem they are trying to think about. So most philosophers are going to be left scratching their heads when we figure out what makes a mind and mental states.


        1. I disagree that reductionism is necessarily eliminativist. Just because I learn about the underlying components and the way they interact does not mean that I cease to see the whole system as existing. Having read him at length, I’m sure Carroll, and many other self-described reductionists, would agree.

          But this is a matter of definition and I’m not particularly keen to debate definitions. I do agree with most of your remarks against the eliminativist version of reduction.

          I think to understand a system, you must be willing to reduce it, to pierce its veil, to dissect it. Until you can do so, I don’t think you can truly say you understand it. Sure, you can gain a surface level understanding without that reduction, but in general I find the anti-reductionist stance unproductive. It prematurely ends investigation and ensures that the phenomena in question will not be understood, except perhaps in high level terms.


        2. There’s a third thesis that often gets labeled “reductionism”, besides your two (substance reductionism, structure reductionism). Let’s relabel it “nomological micro-compatibility”. This thesis says that there are no exceptions to low-level laws (how electrons behave for example) due to interference from high-level laws like laws of biology, economics, etc. I like this idea. I call it “compatibility” to distinguish it from the idea that high level laws are logically implied by low level laws, which seems dubious, and which I’m pretty sure I’ve seen labeled “nomological reductionism”. (Un?)fortunately there are no label police, so people can label ideas however they want, with no consistency.


    2. ‘ . . . there is no ontological distinction between mind and matter/energy.’

      Yes; so too dichotomous apprehending i.e. subject(ivity)/object(ivity). Call it a mind-construct, call it all a brain construct — makes no difference; it’s all a construct, an abstraction from (or rather, ‘along with’) what is actual, and which so-called consciousness can never ‘touch’ as if it (consciousness) were ‘here’ and the actual ‘there’.


      1. You missed out the portion where I said that there is an epistemic distinction. We perceive mind via different senses than we perceive sights, sounds, etc.

        I’m quite familiar with this kind of cant (being an ordained Buddhist and published scholar in the field of the history of Buddhist ideas) but I no longer find it very interesting or informative.

        The fact is that however enlightened people experience their perceptions, it doesn’t change the basic ontology. The light entering their eyes still only enters their eyes and is still only processed by their brain, producing an experience that only they experience. Whether or not that person accepts it, the fact is that the experience created by their brain is subjective. I have no more access to the Buddha’s mind, than I have to the nutrients of the food he eats. So the distinction is real and it continues to impose real limitations.

        You find this when you talk to people who report having no sense of self. There is no problem with them knowing whose turn it is to talk, or what things are being referred to, or even what objects are being pointed out. Ontology hasn’t changed: the mountain is still a mountain. What has changed is the meditator’s epistemology, their understanding of experience, and the value they place on certain aspects of experience.

        The fact that so many people try to turn this epistemic insight into an ontological doctrine is indicative of the poverty of religious thinking and nothing else. Instead what appears to happen with enlightened people is that certain types of epistemic distinctions are not made and that certain types of conscious processing of information become unconscious (though still accessible to introspection). They all report that this is a more satisfying state of affairs, which I do not doubt. But none of them becomes confused about how to walk down to the shops and buy a loaf of bread. They all continue to live, to use language and numbers, to socialise, and very often to work. None of which would be possible if the traditional ontologies were taken seriously.


        1. I find your tone rather insulting, but never mind, as that appears to be your intent.

          What do you mean by ‘subjective’ — that it only occurs as a phenomenon inside the cranium?


  9. I’d like to challenge Michael Shermer’s position. He says that science will never solve consciousness, freewill, and God, not because of dualism (mysterian), not because of cognitive insufficiency (new mysterian), but rather because these are inherently unanswerable questions (his own “final mysterian” classification). Well let’s take a look.

    First, what is meant by “solve”? Ideally this would be to understand reality, though there’s only one answer he can ever attain in this regard: that he personally exists in some manner (from Descartes). So what he must truly be after are functional principles from which to work regarding God, freewill, and consciousness, not the ultimate solutions which he’s fundamentally deprived of. Furthermore I’ll suggest that my single principle of metaphysics would be wonderful for him to learn in this capacity. It goes like this:

    To the extent that causality fails, there’s nothing to figure out anyway.

    This is to say that because “figuring out” demands causal dynamics rather than the converse, figuring out that things don’t function causally inherently violates its own premise. So if Michael Shermer wants to explore reality through reason rather than faith, then he can only metaphysically presume that there is no God, and may ultimately be wrong about that. (Some theists propose that their God provides a realm that’s otherwise causal, which would be fine for us as functional naturalists, though the full premise could only exist through faith rather than reason, or metaphysics rather than physics.)

    Next is freewill, which causality mandates does not exist. But this is from a perfect perspective rather than that of a human. From a mere human perspective in a vast causal realm, yes we can say “freedom exists”, even though greater perspectives should naturally tend to dispel such illusions. (It disheartens me that academia hasn’t quite grasped this yet.)

    Then finally there is consciousness. How are phenomenal experiences produced in a causal realm? Such a practical understanding could conceivably be figured out, however. If we were to produce a machine which outputs phenomenal experience for an associated second form of computer to experience (so that these experiences motivate the second computer to try to feel good rather than bad), then we should also effectively solve this “hard problem”. I have no confidence that humanity ever will produce such a machine, though contra Michael Shermer, it’s certainly not off the conceptual table given the metaphysical premise of causality, or reason over faith.


    1. Hey Eric,
      Shermer is a skeptic and an atheist, so I think he’d agree with your conclusions about God.

      I’m not sure, but on free will, I think he’s a compatibilist. But compatibilists see libertarian free will as false, so again, I think he’d be on the same page. (The questions then become what the scope is of freedom we’re talking about, whose or what’s freedom, and whether social responsibility remains a useful concept.)

      “How are phenomenal experiences produced in a causal realm?”

      I don’t know how Shermer would respond to this. My response is that I think in considering phenomenal experience something that is “produced”, you may be introducing an implied assumption of dualism. In my mind, phenomenal experience, whatever else it may be, is information. I think it’s just part of the information processing of the brain going about its movement planning functions.

      In any case, I agree that it’s scientifically approachable. But I’m more optimistic that we will eventually produce a machine whose capabilities and inclinations trigger our intuition of consciousness / humanness, although there will always be those who, clinging to the version of consciousness I dismiss in the post, insist that the machine doesn’t have the real version.


    2. But Mike, the point isn’t that he and I see eye to eye on some things, which I suppose we do. Shermer claims to be a “final mysterian” in only these three regards. I’m saying that he should be a final mysterian in all regards but one, or something which would effectively rob his position of novelty and potential usefulness. Then when we step down to effective rather than ultimate understandings, I’m proposing that my single principle of metaphysics should help quite a bit regarding his three supposed unanswerable questions.

      On God, to the extent that our realm functions supernaturally, reason does not apply. Thus a person who seeks to explore reality through reason rather than faith, needs to effectively presume that there is no God in order for their “reason metaphysics” to be maintained. One down.

      On freewill, I’m surprised that people still consider this question puzzling. Causality mandates that it cannot exist ultimately, though humans don’t have anything near an ultimate perspective. Thus from such a crippled position freedom can usefully be said to exist. What is the scope of this freedom? Freedom here is a function of ignorance — the less that a given situation is understood, the more freedom that should be apparent. What can be considered free in this regard? Something that is conscious, and from a non-omniscient perspective. Social responsibility is definitely a useful concept from this position, given how far the human happens to be from “all knowing”. Two down.

      Given my naturalism I simply can’t be “introducing an implied assumption of dualism” by means of the “produce” term. We use this term in all sorts of natural ways, such as hydrogen and oxygen can “produce” water. I don’t know what physical dynamics create phenomenal experience specifically, but apparently a natural non-conscious computer can output the stuff and so drive the function of a conscious form of computer. Three down.

      I certainly agree with you that phenomenal experience provides information. Toe pain not only tells me that I have a problem there (sense information), but motivates me to protect it from getting worse, or provides valence from which to drive my conscious processor. But how does a computer which is not conscious output such a conscious input? We may never get the engineering worked out there, though I do appreciate your optimism. (Apparently Garth is optimistic about that as well, or at least while high!)


      1. Eric,
        Sorry, didn’t mean to imply by my remark about “produces” that I thought you were abandoning naturalism, only that the way you phrased it made it sound like an unintentional dualistic notion. But I’ve pushed back often enough on your concerns about my specific language that I’ll concede the point. Sorry for being pedantic.

        “But how does a computer which is not conscious output such a conscious input?”
        Okay, bear with me, since, despite what I just said, I’m going to make a language point here. Let’s take your language just prior to this question…

        “Toe pain not only tells me that I have a problem there (sense information), but motivates me to protect it from getting worse, or provides valence from which to drive my conscious processor.”

        …and replace it with this language:

        “Toe damage signalling not only tells the central system that there is a problem in that location, but activates an action plan for evaluation by the movement planning subsystem to protect it from getting worse, or provides weighting from which to drive the movement planning function.”

        Does the second one seem more approachable from a technological perspective? Is there meaning in the first that is lost in the second? If so, what?
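        For what it’s worth, the second wording can even be sketched as a toy control loop. Every name and number below is invented for illustration; this is not a claim about how real nervous systems work:

```python
# Toy control loop matching the "robotic" rewording: a damage signal
# provides a weighting that drives the movement-planning function.
# All plan names and weights here are hypothetical.

def plan_movement(location, severity):
    """Weight candidate action plans; the planner picks the heaviest."""
    plans = {
        "continue_walking": 1.0,
        f"protect_{location}": 1.0 + severity,  # damage signal adds weight
    }
    return max(plans, key=plans.get)

print(plan_movement("toe", severity=5.0))  # → 'protect_toe'
```

        Nothing in the sketch feels like pain, which is exactly the gap the questions above are probing.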


    3. Mike,
      You said, “In my mind, phenomenal experience, whatever else it may be, is information. I think it’s just part of the information processing of the brain going about its movement planning functions.”

      From there I agreed, but did so by putting things explicitly in terms of my own “dual computers” model. Here all elements of the conscious computer exist through output of the non-conscious computer. Thus the phenomenal experience of toe pain is somehow created by a vast non-conscious computer, for a distinct conscious computer to experience as input. This provides both location, as well as personal punishment (or anti-value). Thus from my depiction, toe pain certainly provides conscious information, though I doubt that you meant it quite like that.

      You now ask me to consider the following potential depiction of my position,

      “Toe damage signalling not only tells the central system that there is a problem in that location, but activates an action plan for evaluation by the movement planning subsystem to protect it from getting worse, or provides weighting from which to drive the movement planning function.”

      Your first question here asked if this seems more technological, and to me yes it definitely does. This seems like how one of our non-conscious robots could be programmed. If something hurts me then I wouldn’t say that I’m “told” that there’s a problem. Instead I’d say that it just f—ing hurts! I interpret this as the non-conscious computer in my head punishing the conscious one (or me). Furthermore the “weighting” term seems extremely robotic. Presumably a damage weighting of “5”, for example, would incite associated algorithmic function. I do believe that the non-conscious computer in my head constantly does this sort of thing, but in addition I believe that it outputs a second form of computer that has true teleology, or is motivated to feel good and not bad. So it’s this second computer that I’m not getting a sense of from your iteration.

      I’ll also say that this conscious form of computer is actually the third of the four that I theorize. The first concerns genetic material and functions on the basis of chemical properties. I believe that this computer emerged with “life”. The second is a whole organism processor and functions on the basis of neurons. The third is outputted by the second, but functions on the basis of value rather than neurons. Then the final is a technological variety which is teleologically fabricated. Ours generally function by means of — not chemical properties, not neuron systems, and not value — but rather electricity.


      1. Eric,
        On mapping back to your model, no worries. We always have these discussions with each of us holding our own model of what we’re discussing. As we’ve noted before, I think there’s enough overlap that we can have many productive discussions. And the differences can be what make the discussions interesting.

        My questions about the description versions are designed to get at what in particular about consciousness you think makes it so difficult for a technological system. Yes, my revision does sound pretty robotic. But what is the missing difference? We agree that substance dualism is false, so what is it in the subjective version that is missing from the robotic version?

        (Note: I do think something implicit in the subjective version is missing from the robotic one, and have my own answer, but I’m curious what yours might be, particularly since my missing ingredient is technologically achievable.)

        “If something hurts me then I wouldn’t say that I’m “told” that there’s a problem. Instead I’d say that it just f—ing hurts! I interpret this as the non-conscious computer in my head punishing the conscious one (or me). ”

        But isn’t something “f—ing hurting” information? Isn’t that information being transmitted about something? Isn’t pain communication from one part of us to another, signalling a situation? Even if it is punishment, as you characterize it, isn’t punishment itself a form of communication?

        Certainly pain is a far more primal and visceral form of communication than being told something with language. Language has to be encoded by the sender from its imagination and decoded by the recipient into its imagination. That does make it qualitatively different, more abstract than something received as pure sensation. But isn’t the sensation, fundamentally, still communication? If not, why not?


    4. Yes Mike, when something hurts it inherently provides input and is thus informative. That was actually my original agreement with you. Then from there I packaged things up with my model and so we’ve been talking about that as well.

      What the robot is missing from my model is conceptually simple. It’s missing the conscious second computer which is outputted by the neuron based computer. This conscious computer is not motivated through (1) chemical dynamics, (2) neuron systems, or (4) electricity. This second computer functions on the basis of a punishment/ reward dynamic, or value. I consider value to be the strangest stuff in the universe and have no idea how a non-conscious computer outputs it to drive the function of a computer that experiences it. As before, I remain an architect rather than an engineer (not that any modern engineers can answer this question either).

      If that’s clear enough, what would you say that the robot is missing? And do you consider your answer to conflict with mine?

      Your main question asks why I’m pessimistic about humanity building conscious forms of computer? Well first let me say that once we discard vast swaths of “futurism” nonsense, I should actually be one of the most optimistic people around. Beyond Shermer’s “final mysterian” position (where God, freewill, and consciousness are off the table), note that mental and behavioral scientists in general seem to consider what they study to be “naturally soft”, or somewhat beyond their potential to model. Then there are modern philosophers with both two and a half millennia of western culture to oversee, as well as their own domain of reality to address. Apparently they’ve decided that humanity needn’t develop a respected community that has its own agreed upon principles of metaphysics, epistemology, and axiology from which to work. Conversely I’m optimistic about all of these fields, and believe that progress in philosophy will help our soft sciences harden.

      What I can’t quite get over however, is the magnitude of complexity found in nature’s machines versus ours. A standard human cell might hold the complexity of the function of a vast human metropolis. How might we design and build something comparable? And note that we’d have to do so while carrying an enormous extra condition. Evolution needn’t understand anything in order for its machines to be developed, unlike us. I’m not saying that it would be impossible for us to build a conscious computer. I’m saying that even after all of the advancement in human understandings which I foresee (and modern professionals do not), we’re playing under far more stringent rules. Right?


      1. Eric,
        “I consider value to be the strangest stuff in the universe and have no idea how a non-conscious computer outputs it to drive the function of a computer that experiences it.”

        Given your anti-mysterian stance, I’m surprised you’re willing to just leave it at that. Personally, I don’t find value that mysterious. In my view, it’s the survival circuits, the reflexes, the programming at the base of the brain that determine what our dispositions will be toward various perceptual patterns, evolved because they increase our homeostasis and gene preservation. But maybe I’m missing something?

        “If that’s clear enough, what would you say that the robot is missing? And do you consider your answer to conflict with mine?”

        My answer is metacognition, introspection, the system building predictive models of aspects of its own processing. But that’s an addition to what I called the movement planning centers in the rewording. Together, the movement planner and metacognitive framework (all in the prefrontal cortex) could be considered your second computer.

        So, there’s conceptual overlap, but they’re not quite the same. The introspector and the movement planner (often lumped together as the “executive center” in neuroscience literature) are far less discrete, far more entangled with the rest of the brain, and far vaster than the second computer you envisage.

        “I’m saying that even after all of the advancement in human understandings which I foresee (and modern professionals do not), we’re playing under far more stringent rules. Right?”

        Perhaps. There are problems where there’s room for doubt that science will ever solve them, such as wave-particle duality, black hole singularities, multiverses, etc. But based on all the neuroscience I’ve read, while I see an enormously complex system, I don’t see anything beyond science’s ability to observe and learn it. It may take decades, perhaps even centuries, but I think we’ll get there. If it’s observable and coherent, history hasn’t been kind to those who said it was forever unknowable.


    5. Mike,
      It’s not mysterian to say that I have no idea about something that I have no idea about. This is just being honest. Surely if I were to claim that I understood things that I didn’t understand, then my models would suffer for it?

      As I define “value”, and though you’re perfectly free to define it in separate ways, metacognition is irrelevant to it. So here I’m not contradicting you, but rather explaining a separate definition which I’ve found useful.

      Note that we call “gravity” a property of nature by which matter attracts matter. Well, I theorize an element of reality by which a computer can cause something that it creates to suffer or feel good. I define this stuff as “value”. Why do I theorize such a strange dynamic to exist? Because I am such a thing, and my evidence suggests that I was created by means of computation. As a conscious computer I naturally try to do what will make me feel good and avoid what will make me feel bad. Still, as a naturalist I don’t believe that value can only exist when it’s “functional”. I believe that the properties of nature mandate that it’s possible for an organic computer to produce +/- value beyond evolutionary effectiveness. If not, how would evolution have this tool at its disposal?

      I realize that you’re defining “value” such that it depends upon metacognition. From my own separate definition however, do you have reason to doubt the existence of a property of nature by which existence can be anywhere from horrible to wonderful?

      I like this statement:

      If it’s observable and coherent, history hasn’t been kind to those who said it was forever unknowable.

      Indeed! Of course there’s a difference between “knowing” and “doing”, or essentially “science” and “engineering”. Evolution obviously does have the tools to create nothing less than life, though I don’t presume that evolution has provided humans with such tools as well.

      More importantly however there is our optimism regarding human understanding. Might you be as optimistic as I am regarding philosophy and our mental and behavioral sciences? And would you say that science might very well be aided by means of certain derived principles of metaphysics, epistemology, and axiology from which to work?


      1. Eric,
        Actually, my understanding of value doesn’t require metacognition. Many (most?) animal species don’t have metacognition, yet still have drives that factor into their behavior.

        “do you have reason to doubt the existence of a property of nature by which existence can be anywhere from horrible to wonderful?”

        It depends on what you mean here by “property of nature”. You make a comparison with gravity, which I know you understand is one of the fundamental physical forces. Do I think value is anything like that? No. I see no evidence for it. A physics view of the universe strikes me as an utterly nihilistic one.

        If by “property” you mean something like biological value, the urge to preserve and propagate genetic legacy, something that every life form strives for, then I’d say yes, it does exist as a property of biology, an inevitable aspect of natural selection.

        Or do you mean a property of conscious creatures? Again, I’d say yes to this version too, but to me this is a specialization of biological value. (Although I think I remember you rejecting that connection.)

        But in the case of both biological and conscious value, I don’t see anything fundamentally mysterious, just biological instinct, i.e. programming. Granted, it’s not intuitive to think that our feelings of joy or sorrow are due to our evolved programming, but I think that’s where the data points.

        Again, maybe I’m missing something?

        “Might you be as optimistic as I am regarding philosophy and our mental and behavioral sciences?”

        If by “optimistic” you mean that I think the social sciences can produce theories with predictions more accurate than random chance? Sure. But if you mean that their predictions will be as accurate as General Relativity or the Standard Model, then no. I’m not optimistic about philosophy, because too much of it delves into the unobservable or the incoherent.

        “And would you say that science might very well be aided by means of certain derived principles of metaphysics, epistemology, and axiology from which to work?”

        In my view, the only unalterable axiom of science is that more accurate predictions are better than less accurate ones. As I see it, every other rule, principle, or methodology should only be kept to the extent it pragmatically helps with that one overriding value. Things like empiricism, materialism, or the scientific method are products of science, not dogmas of it.


    6. Mike,
      I believe that I’ve discussed these issues with you quite a bit more than I’ve discussed them with any single person. I’d be surprised if you couldn’t say the same about me, at least over the past couple of years. But for some reason I’m only now realizing a crucial element of your consciousness beliefs, prompted by your discussion with Fizan below. With this insight, I’ve realized that you essentially told me the same thing just above. There you said:

      “But in the case of both biological and conscious value, I don’t see anything fundamentally mysterious, just biological instinct, i.e. programming. Granted, it’s not intuitive to think that our feelings of joy or sorrow are due to our evolved programming, but I think that’s where the data points.”

      So you don’t consider what’s known as “the hard problem of consciousness” to exist? This is to say that pain, for example, essentially occurs in the human through programming, or essentially in the manner that TurboTax calculates someone’s taxes on a PC? If so then this is actually quite a relief, in one regard anyway. It might help explain why I haven’t quite been able to teach you the nature of my “two computers” model. One must presume tremendous specialness to phenomenal experience in order to get it, or what I consider to be reality’s most amazing phenomenon (not that I know about all phenomena, but still).

      Before I continue, could you give me some specific direction about your beliefs on the matter?


      1. Eric,
        Maybe we’ve had a breakthrough? Or perhaps just a break 🙂

        “So you don’t consider what’s known as “the hard problem of consciousness” to exist?”
        I recognize that there is an uncrossable divide between subjective experience and the correlated objective mechanisms. But I don’t see this as a problem so much as a profound fact.

        “This is to say that pain, for example, essentially occurs in the human through programming, or essentially in the manner that TurboTax calculates someone’s taxes on a PC?”
        The determination of pain is obviously far more complicated than what TurboTax does, but ultimately I do think that pain is fundamentally just information processing. Of course, from the point of view of the system (us), it’s far more than that. But from the outside perspective, that’s all it is.

        “One must presume tremendous specialness to phenomenal experience in order to get it”
        Obviously our own phenomenal experience, from our perspective as the system, is very special. But again, I see it as information processing, neural computation.

        As far as I can determine, we are information processing systems, part of gene survival machines. Phenomenal consciousness is just the perspective of the system itself. Because the system is us, we feel that there must be something more to it than that. But it’s just us privileging the way we process information, the same way we once privileged ourselves in other ways as being above animals or nature, or in believing that we were the point of the universe.

        All that said, if you can give me reasons why this view is wrong, I’m totally open to reconsidering it.


  10. Hey Mike. I think you are, in fact, being “too hasty in dismissing the hard-problem version of consciousness”. I think there can, should, and will be an explanation derived solely from experiment and logic.

    The problem, as I see it, is that people believe their experience of “red” has some property that other experiences (like experiencing “blue”) do not have; let’s call this property “redness”. It seems natural for me as a person to assume that this property of redness is part of your experience of “red” also. But it also seems like this property could theoretically instead be part of your experience of “blue”, and so even when you say you’re experiencing blue, you’re actually experiencing what I call redness.

    Now you and others (including myself) will say that this experience of redness is an illusion, but I think you can and should explain exactly how that works. Here’s my take:

    Each experience does have a (kind-of) property, and that property is “difference”, i.e., the experience is identifiably different from other experiences.

    My thinking has taken a mechanistic track of late, so it’s easier to explain in terms of a mechanism. Let’s assume any event (such as an experience) can be expressed as:
    Inputs —> Mechanism —> Outputs
    We’re only interested in sets of inputs which generate unique outputs. If different inputs generate the same output, that’s a difference that doesn’t make a difference.

    So suppose that all the inputs are similar in character but are distinguishable. For example, suppose the inputs were a set of lights placed randomly around a wooden board. Each individual light is similar, but the mechanism that reads the board to produce distinct outputs can differentiate each individual light from the others. Imagine there is a camera looking at a room, wired such that when a red light is on in the room, a particular but arbitrary light on the board lights up. When a blue light is on, a different light on the board is turned on. As far as the mechanism is concerned, there are only inputs and associated outputs. But the mechanism is not “concerned”. The mechanism, as Dennett would say, is competent without comprehension. If queried, the Mechanism would know nothing about the board of lights, only whether or not the “red one” was on or off.
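    The light-board model above can be sketched in a few lines of code. This is my own toy illustration, not James's code: the light names, board size, and wiring are invented. The point it shows is that the "mechanism" reports which named light is on while knowing nothing about the board's layout (competent without comprehension), and that the only property an input has, from the mechanism's perspective, is that it differs from other inputs.

```python
import random

# Toy sketch of the light-board model (my illustration, not from the comment).
BOARD_SIZE = 8
ROOM_LIGHTS = ["red", "blue", "green"]

# Arbitrary wiring: each room light is assigned a random, distinct board position.
rng = random.Random(42)
WIRING = dict(zip(ROOM_LIGHTS, rng.sample(range(BOARD_SIZE), len(ROOM_LIGHTS))))

def light_room(*names):
    """The camera side: turn on the board positions wired to these room lights."""
    board = [False] * BOARD_SIZE
    for name in names:
        board[WIRING[name]] = True
    return board

def read_board(board):
    """The mechanism: report which named lights are on. It knows nothing about
    the room or what the wiring means; it only discriminates board states."""
    inverse = {pos: name for name, pos in WIRING.items()}
    return sorted(name for pos, name in inverse.items() if board[pos])

# Distinguishable inputs yield distinguishable outputs; that differentiation
# is all the mechanism "knows".
print(read_board(light_room("red")))          # -> ['red']
print(read_board(light_room("red", "blue")))  # -> ['blue', 'red']
```

    Note that the wiring is arbitrary: re-seeding the generator changes which board position stands in for "red", yet the mechanism's input-output behavior is unchanged, which mirrors the inverted-spectrum worry in the comment above.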

    I expect you good materialists know where this is going. Instead of lights on a board we have incoming synapses. So the scientific question is: what is the mechanism that has the ability to differentiate all the possible incoming inputs? Which parts of the brain are inside the mechanism and which outside? Mike thinks parts of the cortex (prefrontal) are inside the Mechanism. I’m inclined to think that the entire cortex is outside the mechanism, and the inside includes the thalamus, basal ganglia, probably the claustrum, and possibly other subcortical parts. But science will resolve that discrepancy.

    I should note that deciding what is inside the Mechanism is arbitrary. Once inside and outside is defined, you can always ask what Inputs produce what Outputs for that particular grouping. When I refer to “the Mechanism” above, I am referring to that Mechanism which is responsible for all the inputs and outputs we associate with Damasio’s autobiographical self, i.e., what most philosophers mean when they refer simply to “the self”.

    *
    [tagline: the cortex is the umwelt of the autobiographical self]


    1. Hey James,

      “Now you and others (including myself) will say that this experience of redness is an illusion,”
      Actually, I don’t think it’s an illusion (at least not in the sense that it doesn’t exist). I have sympathy with the camp that says if it is an illusion, then the illusion is the experience. I think it’s more productive to say it’s a construction. But this may be a matter of terminology on a point that we agree on ontologically.

      Interesting take on the mechanism. If I understand what you mean by the mechanism, as a point of clarification, I would consider the prefrontal cortex and its associated nuclei in the thalamus to be in it, but with inputs coming in from and outputs to all over the cerebral-thalamic system.

      The basal ganglia strike me as being more involved in habitual movement decisions, much of which happens below the level of consciousness. After our discussion the other day, I do suspect that the claustrum’s role may be as a pace setter, a sort of processing clock, which would meet all the observations of its involvement with cognition but be compatible with its diminutive size.

      The problem is that the discrimination between different visual signals seems to happen at multiple levels. The mid-brain region, the superior colliculus, seems to be involved in rudimentary determinations, to the extent that even if the occipital lobe is destroyed, making conscious discernment impossible, patients can often still take an accurate “guess” at what is in front of their eyes (see blindsight). And obviously if the basal ganglia are involved in habitual movement decisions, they still have to receive visual signals, even if we’re often not conscious of them.

      It may also be that the redness of red or blueness of blue doesn’t happen unless the introspection mechanisms are involved, and there’s data indicating that they’re also in the prefrontal cortex.

      All that said, I’m not clear why any of this makes you think the hard-problem version of consciousness should not be dismissed. I don’t perceive that it’s necessary for any of the above. But maybe I’m missing something?


      1. Mike, you do seem to have the correct understanding about the mechanism. Unfortunately I’m afraid I don’t know enough about what processing happens where in the brain to be able to have an intelligent conversation about it. I could be totally wrong about this, but I’m pretty sure that frontal lobotomy is not associated with significant specific changes to consciousness. I would expect that significant damage to the mechanism of interest (the autobiographical self) would tend to destroy the ability to have a significant subset of reportable experiences. Such is what you find with the destruction of the thalamus, for example. There is certainly processing that goes on totally within the cortex. If my hypothesis is correct, localized damage to the cortex would have the effect of removing certain specific inputs to the mechanism without affecting others.

        *
        [as for the other stuff, I think your understanding is the same as mine]


        1. On frontal lobotomies, full ones are relatively rare. Most of the ones that were performed were partial to some degree, which clouds the effects of separating that region from the rest of the brain. Mercifully, lobotomies are rare today.

          We do know that frontal lobe pathologies can rob people of the ability / motivation to self report anything. The question is whether they’re still conscious and unable to report, or simply lack high order consciousness due to the pathology.

          Koch, in one of his position papers, reported that some frontal lobe patients who had recovered use of their frontal lobes, later reported that they were conscious but were simply unable to respond. However, this was anecdotal and hearsay, not clinical data, and since it contradicts other data, I’d need to see corroborating evidence before I changed my understanding.

          All we can say for sure is that we don’t have a perfect understanding yet. I suspect both of us will have to update our views as more research is done.


    1. Hey agrudzinsky,
      Just to make sure we’re on the same page, this is what the transcript showed (I’m at work so watching the video would have to wait):
      “The mind can be defined as an embodied and a relational process that is self organizing and emergent. And what it means is that arising from energy information flow is a self organizing process. So self organizing emerging process that’s both embodied and relational that regulates the flow of energy and information.

      So, the simplest way of saying it is, one part of the mind can be defined as an embodied in-relational process that regulates the flow of energy and information.”

      This doesn’t seem very concrete to me, and a bit too buzz-word heavy.

      To me the mind is evolution’s solution for an organism to plan and execute its movements for optimal homeostasis and gene survival, and everything needed to support that function. But I fully realize that definition will come across as too reductionist for many people.


      1. I wouldn’t expect a definition of a mind to be very concrete. I like how Siegel defines mind as a process rather than a system, for example. Read his book “Mindsight” for more understanding of what he means.

        I don’t think that your definition is very far from Siegel’s. What he means by the “embodied relational process that regulates the flow of energy and information” is that organisms process information, such as threats, availability of food, social signals of all kinds, etc., and react to it. Some reactions are physical like increased heart rate and release of energy in the muscles, some are emotional in terms of releasing pleasure hormones or adrenalin. But most of these reactions can be described as “regulating energy” which is a more general description of “planning and executing movements for optimal homeostasis”.

        What he means by “relational” is that the process works through connecting the neurons in the brain, connecting parts of the brain to work together, and also relating the processes in the brain to the processes outside the body – memories, emotional attachments, etc.

        It is also interesting that from Siegel’s definition, a group (society or a nation) can also have a “mind” in terms of ways to process information such as internal and external threats and reacting to it by directing resources and energy.

        It also follows that it is possible for a machine to have a “mind”. Because machines can process information and direct energy as well.


        1. Thanks for the clarification. I think I’m on board with each of the terms.

          “Relational” and “embodied” in particular remind me that minds always exist relative to their environment. I’m often told that minds can’t be information processing, because information processing is always a matter of interpretation and so relative to other minds, but embodiment provides the environment’s interpretation of the information processing happening in the brain, just as a machine body would provide the interpretation for the information processing of its central control systems.


  11. This lack of access places an uncrossable gap between subjective experience and objective knowledge about the brain.  But there’s no reason to think this gap is ontological, just epistemic, that is, it’s not about what is, but what we can know, a limitation of the direct modeling a system can do on itself.

    This. It seems to me that the hard problem just is the difficulty, or maybe impossibility, of seamlessly transitioning between the 3rd person and the 1st person descriptions of the mental world. Even if we were ever able to perfectly predict, or cause, phenomenal experiences from descriptions of neural processes, people would still cry “Law of identity! The process is not the same as the experience!” If satisfying the law of identity is the criterion we need to meet to have an explanation, then maybe ‘mysterianism’ is the proper response. But that seems like an excessive requirement, one that exceeds the burden we place on claims of ontological equivalence in other domains where there are multiple, variant descriptions.


    1. Yes. The trouble is that “the 1st person descriptions of the mental world” are part of the same mental world they describe. Describing an experience is another kind of experience. The sense of being the someone who is having or describing the experience is also an experience. What is it like to wonder about what it is like to be a bat?

      When we try to examine our minds from the inside we just see endless reflections, because what our mind does is generate experiences. As far as I can tell, simple observation can never get behind this. The experience of looking behind experience is another kind of experience. There appears to be no way out of the hall of mirrors in that sense. We have what appears to be an absolute epistemic limitation on the kinds of knowledge we can gain on our own experience, because everything we do generates more experience of the same basic nature.

      And where-ever there are epistemic limits, ontologies should remain uncertain. Thomas Metzinger’s insights into the nature of mind by observing what goes wrong with it and trying to reverse engineer something that could break in that way are decisive. The observing self in his terms is actually a virtual self model generated by the brain. In my terms being a self is another kind of experience.

      The trick is that we can compare notes on experiences and this allows us both to infer knowledge of what the world is like and what minds are like. Such knowledge as we infer is more or less accurate and we inch towards greater accuracy. Whether we ever reach our goal of full knowledge is moot, and really not that interesting while we are still making progress with no end in sight. Speculating on what might happen is for journalists having a slow news day.

      Every time some philosopher comes out and says “you’ll never know X” or “you’ll never understand Y” I take this to be an example of the Mind Projection Fallacy. The basic axiom of Shermer et al is that “If I can’t understand this, then no one can understand it.” I suggest that no intelligent person would take such pronouncements seriously (and leave the obvious corollary unstated).


    2. Thanks Travis. Good point about the Law of Identity issue. In truth, I’m not sure any scientific theory could ever satisfy it. At what point does the causal explanation become the empirical observation? Ultimately, for any scientific theory, we always end up dealing in ever more granular correlations, without the two ever fully meeting.


  12. It seems to me that once neural systems are able to create sensations (as described in an essay I linked earlier to SelfAwarePatterns), they are able to preserve and store them (memory), and then through flexibility of function (neural plasticity) select, combine, arrange, edit, and rearrange the results into a product that we call consciousness. In this way consciousness is “bootstrapped” into existence by and within neural systems.


    1. Mark,
      In other posts, I’ve described a possible hierarchy of that bootstrapping.
      1. Reflexes, programmatic responses to stimuli.
      2. Perceptions, built from input from distance senses (sight, smell, hearing), and semantic memory, which expands the scope in space of what the reflexes can respond to.
      3. Attention, prioritizing what the reflexes respond to.
      4. Imagination, scenario simulations of both future and past episodic events, increasing the scope in time of what the reflexes can respond to, and choosing which reflexes to allow or inhibit. It is here that reflexes become feelings.
      5. Metacognition, introspection, recursive monitoring of some aspects of 1-4 for fine tuning of action planning, and symbolic thought. Human self awareness requires this layer.

      In the literature, 1-4 are often called first order, primary, or sensory consciousness, and 5 second order or higher order consciousness.

      Most computer systems remain firmly at 1. Machine learning systems are at 2. Some autonomous robots (self driving cars, Mars rovers, etc) are approaching 3. The Deepmind people are striving to add very simple versions of 4 to their systems.
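      The five-layer hierarchy and the rough placement of machine systems on it can be expressed as a small sketch. This is my own illustration, not Mike's code; the layer names and the assumption that each layer requires all of the layers beneath it are mine, read off the list above.

```python
# Toy sketch of the five-layer hierarchy (my illustration of the comment above).
LAYERS = [
    "reflexes",       # 1. programmatic responses to stimuli
    "perception",     # 2. predictive models from distance senses and memory
    "attention",      # 3. prioritizing what the reflexes respond to
    "imagination",    # 4. scenario simulation of future and past events
    "metacognition",  # 5. introspection and symbolic thought
]

def highest_layer(capabilities):
    """Return the highest layer a system reaches, assuming each layer
    requires all of the layers beneath it."""
    level = 0
    for layer in LAYERS:
        if layer not in capabilities:
            break
        level += 1
    return level

# Rough placements from the comment:
print(highest_layer({"reflexes"}))                             # -> 1 (most software)
print(highest_layer({"reflexes", "perception"}))               # -> 2 (machine learning)
print(highest_layer({"reflexes", "perception", "attention"}))  # -> 3 (autonomous robots)
print(highest_layer(set(LAYERS)))                              # -> 5 (human self awareness)
```

      The contiguity assumption is what makes the hierarchy a hierarchy: a system with "imagination" but no "perception" would still score 0 here, since each capability is built on the ones below it.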


      1. I think we should try to keep the hierarchy simple:

        Organisms with neural systems create sensations from environmental stimuli (radiations, pressure changes, molecules).
        They respond to their sensations by moving toward them (“pleasure”) or away from them (“pain”).
        They organize their sensations into patterns they can successfully move around in and survive.
        When their neural systems become complex enough to create those patterns in their absence, they become SelfAwarePatterns.


          1. With these models, it’s all in what you hope to get out of them. Your simple layout gets a lot of work out of, “They organize their sensations into patterns they can successfully move around in and survive.” No worries, but I want to know more about how this piece happens, which is what leads me to the layers above.


          1. “…patterns they can successfully move around in and survive” refers to what ethologists call “fixed action patterns.” https://en.wikipedia.org/wiki/Fixed_action_pattern

            Of course we are able to modify, combine, and elaborate on these patterns with our more complex neural systems, creating the “higher” layers you refer to.

            (I had originally labeled the hierarchy into 4 points–point 3 was the one about patterns. Somehow it got changed when I posted the comment.)


    2. Thanks for the link!

      “Fixed action pattern” seems equivalent to a reflex. They’re definitely rare in the brain, but there are a number of them in the spinal cord (such as the knee-jerk and avoidance reflexes).

      Overall, I think we’re saying much the same thing, just with different emphasis and focus on different aspects.


      1. Fixed action patterns might be thought of as sequences or fixed patterns of reflexes perhaps; but they are definitely a product of higher, more central neural processing than reflexes (which is why a spinal cord is sufficient for a knee jerk, but it takes a brain to do what animals with FAPs do).


        1. I view it as something of a hierarchy. In the spinal cord are fixed, programmatic reflexes, but their only stimulus is somatosensory, such as a rubber hammer hitting the knee. In the mid-brain regions are FAPs, but they’re broader since they also have access to exteroceptive senses (sight, hearing, smell).

          And then there are the ones that can be inhibited to one degree or another. These are the initiators of affects, dispositions to act that are either allowed or inhibited by higher level functionality, the primal survival circuits around which higher level emotions develop.


          1. We seem to be getting bogged down in a question of which neural functions enable which behavioral activities, and how they are to be arranged into a hierarchy of complexity.

            It seems we might agree that close to the top of the hierarchy is a degree of neural complexity that enables an organism to mirror its behavior–for example, to dream about it when its body is asleep. At the very top would be an organism that knows that it dreams–or shall we say, dreams that it dreams.

            Perhaps you know the Nobel-Prize winning biochemist George Wald’s quip, “A physicist is an atom’s way of knowing about atoms.” That inscrutable intuition, it seems to me, is the bottom line.


    3. Mark,
      Sorry. I didn’t intend for my previous response to come off as disagreeing. I was just fleshing out the FAP / reflex concepts.

      Definitely agree about the top of this hierarchy being an organism’s metacognitive self awareness, if it has it. (An interesting question is how widespread that awareness of awareness is in the animal kingdom.)

      I hadn’t heard that particular quip before. Thanks! It reminds me of Carl Sagan’s statement that we are the universe experiencing itself.


      1. I think “awareness of awareness” is a very good definition of consciousness.

        Thanks for your stimulating essay. (I take liberty to speak for other commenters as well.)

        Mark


  13. Perhaps the Higgs field turns information processing into conscious experience in the same way it gives particles mass. I’m pretty high right now but that sounds right.


  14. Hi Mike,
    I think you already have a pretty good idea of where I stand on this. I’m probably never going to be satisfied by any explanation given for the hard problem. And I’ve actually been converted to a hard-problemer fairly recently.
    Perhaps even in the Stone Age, people knew that we see with our eyes. Even some animals would know that, I think. Saying we see with certain neuron networks is essentially the same thing. It does add new useful knowledge of how the process works. But as saying we see with our eyes tells us nothing about why we see, elaborating it further also tells us nothing about why we see.


    1. Hi Fizan,
      I remember you describing how an associate had convinced you that there is a hard problem. I very much respect your admission that no explanation will suffice.

      That said, I do hope you’ll indulge me a bit while I explore your specific question:
      “But as saying we see with our eyes tells us nothing about why we see, elaborating it further also tells us nothing about why we see.”

      My question is, what do you mean here by “see”?

      If you mean: why do we build predictive perceptual models based on visual sensory input? If so, I think the answer is an adaptive one, because it increases the accuracy of predictions that we make about the environment, which increases the chances we’ll find food, mates, and avoid predators.

      Or maybe you mean: why are we aware that we have these perceptual models? Here, I think the answer is that it’s paired with the feelings of valences associated with what is being perceived, and aids in helping to determine which mental reflex we should allow or inhibit.

      But you probably mean: why do we have the experience of seeing? Here I’d ask what is meant by “experience”, and offer a possible answer, that experience is the stuff of what’s happening, what the information flow in the above paragraph feels like.

      Or you might mean: why do we have the appreciation of that experience? Here I'd reach for metacognition, self-reflection, which enhances our ability to simulate our own future reactions to things and facilitates our ability to communicate about it.

      Or are my definitions of “see” and “experience” deficient? If so, what are better ones?

      I’m not trying to trip you up in any way, just either to get you thinking about this, or learn what I’m missing about it.

      Liked by 1 person

  15. By “see” I mean what everybody intends to mean when using this word. We know what it is and we have always known.

    By “see” do I mean:
    "why do we build predictive perceptual models based on visual sensory input" – No. That has nothing to do with seeing. We know why we do that: it is because certain frequencies of the electromagnetic spectrum stimulate certain types of cells in our eyes. The patterns in which these cells are stimulated get encoded into neuronal networks. These patterns are then compared and composited. It's a process.

    "why are we aware that we have these perceptual models?"
    This is somewhat close, in my opinion. Although I would add that we are not aware that we have "perceptual models". Through some inquiry we can become aware that certain perceptual models are associated with seeing. When you say "it's paired with the feelings of valences associated with what is being perceived", it pushes the ball further along to 'feelings'. What do you mean by this? If it's the same as 'seeing', then there we are again.

    “why do we have the experience of seeing?”
    That is what I'm saying, although I don't think I have to add the word experience. Experience is a broader term. I don't have to say I experience seeing a tree. I can just as well say I am seeing a tree. When you say "…what the information flow in the above paragraph feels like", again we have the word 'feel', which is itself only part of the problem.

    “why do we have the appreciation of that experience?”
    I think here again the problem becomes the word 'appreciation'. What do you mean by it?

    Liked by 2 people

    1. Fizan,
      Thanks for engaging with this.

      “By “see” I mean what everybody intends to mean when using this word. We know what it is and we have always known.”
      This is often the crux of the matter in these discussions. If we’re not willing to explore what concepts like seeing, experience, or similar terms actually are, then I think we make progress impossible.

      I agree that we're not aware of the perceptual models as models, only as perceptions, the final result. But I think they are models, or representations, or image maps, or concepts, or prediction frameworks, or whatever term is preferred.

      By “feelings”, I meant the conscious experience of emotions and/or affects. More controversially, I mean the communication from the brain regions which construct the affect or emotions to the reasoning or movement planning regions of the brain.

      By “appreciation”, I meant knowledge of the experience, or more controversially, models / representations / concepts of the experience, in essence models of the models, concepts of the concepts, awareness of the awareness.

      I know we won’t solve this today, but again appreciate you discussing it.

      Liked by 1 person

      1. Yes, it took me a year’s worth of discussions to move a little from my original positions so we’re not going to solve it today. Though I’m still interested in this to see if there is perhaps something that does become apparent in a new light.

        I think I am open to exploring concepts. 'Seeing' is not a concept; it is something we do, just as creating concepts is something we do. It is also something many animals do. But yes, to capture this 'thing we do' in language we have words like seeing.

        “..But I think they are models..”
        But what is 'they'? Perceptual models are by definition models. You are equating perceptual models with the perception itself. That's the contention. Perceptual models are arrangements of neuron networks.

        Feelings are a conscious experience as you said at first. They can’t explain the conscious experience itself.
        But then your controversial suggestion of feelings being communications is again the same as equating perceptual models with perceptions. Even if the model is true, it does not help us understand why a communication ‘feels’ like anything.

        Liked by 2 people

        1. Thanks Fizan. I’ll leave you one last point to consider.

          When asking why it feels like anything, another question to ask is, what would an evolved pre-language system for subsystems in an animal’s control center to communicate with each other be like? How would those subsystems communicate varying wavelengths of light? Or vibrations of air? Or recognition that a certain signal indicated damage to part of the body? What, other than the raw phenomenal experiences of seeing, hearing, and pain, might be the answer?

          Liked by 1 person

          1. Thanks for that Mike. It’s an interesting point and I will ponder over it more I think.

            How do subsystems communicate with each other pre-language?

            It’s a bit of a conundrum for me.
            Is communication possible without language?
            I suppose so; bees communicate, and so do plants and bacteria, etc., but they don't use language.

            So what you are describing is another system of communication which is employed to communicate between subsystems.

            Does this necessarily have 'to be like' something?
            I can’t say.

            Liked by 1 person

    2. Hope you don’t mind my jumping in with a different tack. I’m also trying to figure out how to explain the nature of qualia (what it “feels like”) from a physicalist viewpoint.

      My view is ultimately extremely close to Mike’s, but I start at a more basic level. A mechanistic level, to be precise. I view every experience as Input —> [Mechanism] —> Output. So in “seeing” a tree, we have light impacting cells in the retina, and then various neural processing identifying edges and textures, and ultimately a set of neurons firing in a particular pattern. This pattern becomes the Input, and has the semantic meaning of “tree”. The Mechanism responds to this input by generating Output. Some of this output constitutes memory, which can later trigger the same “tree” pattern as more input. Additional output might be triggering action, depending on current context.

      The point with regard to qualia is that if the Mechanism wants to communicate that this experience happened, its only reference to the experience is via the meaning of the input, i.e., “tree”. The Mechanism cannot produce responses regarding the pattern of neural firing. It can only refer to the fact that the pattern was recognized, and that the pattern was different from other significant patterns, like “House”. This reference is the “feeling”. We say it “felt” like “seeing a tree”.
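      A toy sketch in Python might make this concrete. All names are invented for illustration, and the hash-based "pattern" merely stands in for an opaque neural firing pattern; this is a sketch of the scheme, not a claim about real neural coding:

```python
# Toy sketch of the Input -> [Mechanism] -> Output scheme described above.
# All names are illustrative inventions, not real neuroscience.

def visual_processing(stimulus):
    """Stands in for retina + edge/texture processing: stimulus -> firing pattern."""
    return hash(stimulus) % 10000  # an opaque "pattern of neurons firing"

class Mechanism:
    def __init__(self):
        self.memory = []  # stored meanings, available as later input

    def step(self, pattern, meaning):
        # The Mechanism's only handle on the event is the *meaning* of the
        # input ("tree"); it produces no response about the raw pattern.
        self.memory.append(meaning)                # output 1: a memory trace
        return f"It felt like seeing a {meaning}"  # output 2: a report

m = Mechanism()
p = visual_processing("light reflected from a tree")
report = m.step(p, "tree")
```

      Note that the report refers only to "tree", never to the pattern number, which is the point about qualia above.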

      I would be grateful if anyone could let me know what parts of the above are helpful and what parts are unhelpful.

      *

      Liked by 2 people

      1. Hey James,
        Your comments are always welcome. Please do jump in anywhere you’d like to offer anything.

        I think your description of the Mechanism has important insights, but for some reason I’m struggling tonight to come up with a suitable response. Fatigue may be the culprit. Anyway, hopefully I’ll be able to get it together better tomorrow.

        Like

      2. Ok James, let me take another shot.

        I think your insight that the Mechanism wouldn't hold the image is important. It gets at a key fact about subjective experience that's worth pointing out, namely that the theater aspect of it is an illusion.

        There are regions of the brain that build image maps, and there are regions that categorize an image with a known concept and associate it with related concepts. Then there is a region which uses this information to plan movements. But the movement planner never receives a full scale theater, only specific pieces of information. Where then do we get the impression of the theater?

        Consider when you see a tree. The tree image is formed in the visual regions, and identified as a tree in the associative regions. The impression of a tree then reaches the movement planning regions. Not the full tree image, just the treeness of it.

        Then the movement mechanism points the eyes at the trunk. The trunk image is formed in the visual regions, its trunkness identified in the associative regions, and then an impression of trunkness arrives at the movement planner.

        Importantly, while the movement planner is considering the trunk, it is not considering the tree overall. It may appear that we are taking in both the tree and a detail of it, but this is an illusion built on the fact that we can quickly switch contexts and aren’t conscious of the transitions.

        We then look at the bark on the trunk, and the cycle loops again. We can quickly switch from the tree, to the trunk, to the bark, back to the tree, then to a branch, etc, and have the impression that there is a theater happening.

        But what’s actually happening is that the movement planner is having rapid communications with the associater, the visual image mapper, and perhaps other sensory regions. The movement planner can rapidly flick between all these different domains.

        That said, some signals do arrive in parallel. For example, the sense of self is constantly streaming in. And any affects and emotions triggered by the tree impression, or noticed aspects of it, flow in as well. It’s a never ending loop that gives the impression of the Cartesian theater, but is really just subsystems communicating with each other.

        Does that fit with what you’ve described with the Mechanism?

        Like

        1. Actually that does fit, and well said. I think your description explains why people (like Tononi) think there is a “unified field” of Consciousness when in fact there is just a unified world, and whenever you look at part of it, there it is. It also ties in with all the new neuroscience stuff about predictive processing (which really should be called expectation processing, but ah well). The brain has expectations of what it will see/experience depending on where it looks or what it (the body) does. I think these expectations are just inductive in that the “prediction” is that you will see whatever was there last time you looked, but taking into account things like velocity and time and other context.

          And we still differ as to which part of the brain is likely to be the Mechanism in question. You refer to the “movement planner”, but I’m wondering how the movement planner is involved when you are simply watching beautiful scenery, like a fjord in Norway, which is where I am at the moment. 🙂 You may be right about what the movement planner does, and it would certainly count as a separate Mechanism in my schema, but I still don’t think it is the Mechanism of the autobiographical self. It seems to me this self learns of the goals and plans some time after the goals and plans are made. Time will tell.

          BTW, if you want a neurologically plausible scheme for how concepts like a tree and its parts can be managed, combined, uncombined, recombined, etc., you should look up Chris Eliasmith and Semantic Pointers.

          *

          Liked by 1 person

          1. Thanks James.

            On the movement planner, that’s my name for what a lot of people refer to as the executive. “The executive” term works if you understand what executives do, which is planning. (I’m not wild about the term because many people think executives give direct orders for actions all day, instead of leaving that for more front line people.)

            Often that planning involves doing research whose relevance to the ultimate plans may not be obvious. I think your appreciation of the fjord would fit in that category. (I’m very jealous by the way.) I think beauty is some perceived thing triggering our primal instincts to focus on the thing in a way that was once adaptive, but may not be now.

            But I think your point about the Mechanism learning about the plans after the fact is right too. In truth, I was oversimplifying above, because what I'm calling the movement planner is itself a complex system. One of its subsystems is a metacognitive function, a recursive feedback system, and it is here where our knowledge of our own experience is captured. Meaning a lot of plans get made which escape the notice of that metacognitive function, although the metacognitive function often does have causal influence on the overall planner.

            Eliasmith’s book looks interesting. Thanks for the recommendation! Just added it to my Kindle.

            Liked by 1 person

      3. Hi James,

        The start of your theory seems ok to me. It's a version of how the brain operates as a neural network. But that part tells us nothing about the nature of qualia.
        At the end you say: "This reference is the "feeling". We say it "felt" like "seeing a tree"."

        This last part is where you attempt to tackle qualia. But you tell us nothing about why or how the recognition of a pattern and differentiating it from another pattern feels like something. Where does the feeling come from?

        Like

        1. Hi Fizan. Congratulations, you are my target audience, i.e., an intelligent person who is stuck on the “what it’s like” definition of qualia/consciousness. My goal here is simply to get the concepts in my head into your head. You can then evaluate them.

          So to begin with, there are two ideas of “feeling” that I want to make sure we distinguish. The first is what I will call an experience. An experience is a complete “Input —> Mech. —> Output” process that meets certain criteria, namely, the input is a sign or signal of something else. Let’s use seeing an apple as an example. So the input is a group of neurons firing in a pattern that means “apple in my visual field” and the output perhaps constitutes a memory of that event. That memory can become input for a subsequent process that has output such as saying the words “I saw an apple”.

          The second idea of “feeling” is all of the subsequent events that occur because of the original experience. So for example, if you have not eaten food for three days, the mechanism in question is likely to be in a state in which the original experience, seeing an apple, will cause multiple outputs besides just the memory. These other outputs will likely include releases of hormones, generation of plans to obtain and eat the apple, etc. These outputs will then generate new inputs, which will become new experiences, for example, generating a memory of the newly formed plan to obtain the apple.

          My point is, when discussing the “what it feels like” aspect we call qualia, we want to restrict ourselves only to the first idea of “feeling”, i.e., the initial experience, and not the subsequent experiences.

          So what do we mean when we say "when I see red, it feels like something, namely, it feels like I'm seeing red"? I'm going to try to expand on a point that Mike made by giving an example. Suppose we have a computer that is hooked up to a camera that can only see colors. The hookup is such that whenever the camera sees a color, a number is put into a memory location the computer can access. The number is random, but is always the same for a given color. So let's say 3625 means "red". Let's say the computer is otherwise intelligent and can pass a Turing test, but its only sensory input is via the camera. Further, the intelligent program running on the computer does not have access to the number representing the color. In fact, it doesn't even know there is a number. All it has access to is whether something is there and whether it has seen it before.

          So in the following conversation, a number in brackets, like [1234], means the current state of the color register. The first conversation might go like this:

          [0]
          Me: Do you see anything?
          It: No
          [3625]
          How about now?
          Yes, I see something.
          What does it feel like?
          I don’t know.
          Let’s call that “red”.
          OK
          [1234]
          How about now?
          I see something else.
          Let’s call that “blue”.
          [3625]
          I see red now.

          And so on.

          So what I’m saying is that qualia is simply the recognition of a difference, without access to the actual difference (specific number in the register), but with access to the meaning of the difference (red color).
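          If it helps, here is roughly the same idea in code form. All names are invented; the dictionary stands in for whatever mechanism lets a taught label be attached to a repeatable state. The key property is that the replies never mention the register value itself, only presence, novelty, and learned labels:

```python
# Sketch of the color-register thought experiment above. The report logic
# never exposes the raw register number; it only reports presence, novelty,
# and any label the interlocutor has attached. Names are invented.

class ColorComputer:
    def __init__(self):
        # raw register value -> label taught from outside; the register
        # number is an implementation detail the replies never reference
        self._labels = {}

    def report(self, register):
        """Answer 'do you see anything?' for the current register state."""
        if register == 0:
            return "No"
        if register in self._labels:
            return f"I see {self._labels[register]}"
        return "I see something, but I don't know what it feels like"

    def teach(self, register, label):
        """The interlocutor names the current experience."""
        self._labels[register] = label

c = ColorComputer()
c.report(0)           # "No"
c.report(3625)        # something new, no label yet
c.teach(3625, "red")
c.teach(1234, "blue")
c.report(3625)        # "I see red"
```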

          Does that help any?

          *

          Like

          1. Thank you, James. I appreciate your effort.

            Let’s first focus on your idea of experience (only the first one):
            A firing pattern of neurons to represent something outside. This pattern has to be distinguished from other patterns to give it meaning and that can be stored into memory. That’s fine. It is simply a categorization of data. But do you agree that at this point no ‘feeling’ has occurred?
            In other words, this experience has occurred but it is devoid of feeling. If so I can agree with you and call this instance an experience. I cannot, however, say it has any felt aspect to it.

            Coming to your example:
            “[0]
            Me: Do you see anything?
            It: No
            [3625]
            How about now?
            Yes, I see something.
            What does it feel like?
            I don’t know.
            Let’s call that “red”.
            OK”

            You use the word ‘see’. In common language, we automatically associate seeing with a felt quality to it. But I don’t think you’re using it that way.

            Let’s try to take human language out of it first:
            The intelligent program is presented with [3625]
            We ask ‘do you see anything?’ OR similarly, we could press a button which indicates to the program to go ahead with the output it has generated.
            It bleeps an amber LED.
            This bleeping LED indicates to us that the program has found something matching the code in its program.
            We ask ‘what does it feel like?’ OR we could again press a button to indicate to the program to go ahead with an output of whether this number pattern belongs to a known category.
            The program interprets the word ‘feel’ as you described above (i.e. a category or you could say an experience devoid of feeling).
            It bleeps another amber LED, which indicates to us that it cannot categorize this input into a known category.
            We tell it to light a red LED when the same input is matched in the future.
            [3625]
            It lights a red LED.

            It's essentially the same program, but in this example we're not automatically jumping to qualia. I think the pitfall is using human language to describe these processes, as our language already has qualia incorporated into it: 'see', 'experience', and 'recognize' are all such words.

            I hope you would let yourself be my audience too. I think we can take two positions: either certain communications or processes are always accompanied by qualia, which would mean qualia exist as a fundamental property of the universe; or we can say that we don't yet know how to account for them.

            Like

          2. Fizan, I think the problem is that the scope of human consciousness, the number of possible experiences, is so vast and so rich that it is hard for anyone to imagine cutting it down to the essence of a single experience.

            You use the word ‘see’. In common language, we automatically associate seeing with a felt quality to it. But I don’t think you’re using it that way.

            In fact, I am using it that way, I think. I'm saying the only "felt" quality of any experience, in the first sense (so not including subsequent experiences), is that it is different from other experiences. So in your counter example, where the query inputs are via button and the outputs are via blinking lights, I'm saying the experience from the perspective of the computer is the same, and it has absolutely real conscious experience, but obviously a very limited repertoire.

            Imagine scaling up your version of the model. If we want to add more colors which can be recognized and reported we would have to add a new light and new color for each one. Now imagine considering two colors at once. Imagine that the interlocutor does not know which light is on in the room, so to find out they would have to ask about every single possible light by hitting every single button because there is no mechanism to simply ask “which one is on”. Now imagine adding another sense, like smell. Now imagine adding the ability to detect lines and texture. Now imagine adding the ability to detect (and differentiate) objects. And so on.

            I'm saying there would be no practical way for the computer to communicate specifics about these events without some sort of general purpose language. And if the number of possible experiences is the same as that of a standard human, then the language would necessarily look exactly like the language people use ("seeing", "smelling", "feeling"), and there would be no reason to think that what people mean by that language is in any way different from what the computer would mean by it.

            Whatcha think?

            *

            Like

          1. James, thanks for starting a new thread.
            I don't think, if presented with my example machine, one could reasonably conclude (at least in my opinion) that it was having a conscious experience of any sort.

            If we were to scale it up, then outputs don't necessarily have to be new lights; as you indicate as well, we can give it more degrees of freedom by allowing lights to blink in certain patterns, etc. Again, doing that does not tell me where conscious experience can come from.

            “..the only “felt” quality of any experience, in the first sense, so not including following experiences, is that it is different from other experiences.”

            If you replace experience with categorization of inputs (which is what I think you are referring to) then you get:

            the only “felt” quality of any categorization of inputs, in the first sense, so not including following categorizations, is that it is different from other categories.

            The question is who feels this felt quality?
            And does any mechanism which is able to ‘detect’ a difference between categories feel this difference as something?
            Here we have to be careful with the word detect. Can the mechanism itself detect, or does it just facilitate a conscious observer doing the detection? For example, if a ball is to be dropped onto a surface, it does not know in advance what type of surface it will hit. Then once it hits the surface, it detects whether it's a slope or a flat surface, the angle of the slope, the smoothness, the hardness, etc. But surely the ball did not detect these properties; we did, by observing the process.

            Like

        2. Fizan, I am saying that the process which I call an experience and you call a categorization of data is the feeling. Once that happens, the feeling has happened. If that process happens repeatedly, then the "feeling" is happening. There doesn't have to be any observation of that feeling. If there were, then that would be a new feeling.

          *

          Like

  16. I see that Mike and James are presenting positions where “the hard problem of consciousness” is a relatively standard form of computation (though obviously involved). From this perspective phenomenal experience might for example be a matter of system and subsystem communication. Regardless I think it’s clear that they’re presenting the human nervous system as an individual form of computer.

    Many suspect that the hard problem of consciousness is not so easily dismissed however, as expressed above by Fizan (which is certainly my perspective). Thus I’ll present my own “four forms of computer” model, which I think remains more true to the perspective of materialism.

    Before “life” existed, I believe that everything functioned mechanically. This is to say that input dynamics were not algorithmically processed for output function regarding anything from stars to molecules and beyond. Life did add algorithmic computation, I think, by means of its genetic material. Here chemical properties of matter interact with genetic material to produce output function. Furthermore apparently life evolved to facilitate more and more complex varieties of itself to occupy wider assortments of niches. My point however is that we don’t consider this first form of computer to function by means of feeling good or bad. Presumably that’s simply not one of its properties.

    Then after billions of years of life on Earth (and perhaps inciting the Cambrian explosion), apparently a second variety of computer evolved as well from which to help guide the function of organisms that are multicellular. This provided a full organism form of computation which accepts input information, as well as processes it algorithmically for output function. Instead of base chemical dynamics, this one algorithmically processes input by means of neuron systems. And while some will say that this form of computer can evolve to incorporate punishing and rewarding feelings into such function, I personally suspect that it cannot. Like genetic material, I don't believe that neuron systems function in a purpose driven way, or such that anything can be personally good or bad for them.

    To briefly skip the conscious form of computer, there is also the technological variety that we create. Of course they process input information algorithmically for output function by means of electricity. Like the neuron and genetic material based computers, I suspect that they do not feel good or bad. Given that we create them, wouldn’t we otherwise know?

    So now regarding the third form of computer that does have the capacity to feel good and bad, I believe it works like this. Apparently at some point the neuron based computer created punishing/ rewarding feelings by means of its standard algorithmic function, for something other than it to experience. Initially this dynamic should have been carried along as a useless element (as evolution is wont to do). But then it should have reached a point where this other dynamic was put in charge of deciding something for the organism, and so made these decisions in a fundamentally different way. For this other type of function existence personally matters. Thus “stop if it hurts” and “do more if it feels good”.

    Though it may seem that straight algorithmic function can do the same thing by means of weighted parameters, the problem with this may be that it requires programming to be developed for relatively specific situations. Conversely the conscious form of computer seems more plastic in the sense that it has some inherent guidance – it will go into any situation with the quest to feel better, and this will be determined by its pain, hunger, itchiness, curiosity, and so on. Furthermore I believe that such an organism should tend to develop its own conscious form of memory as another input from which to work, and interpret inputs and construct scenarios about what might complete a given quest. I consider this computer to mark the rise of the teleological, or purpose driven, form of computer. And apparently to support it a computer must exist that does not function alone on the basis of chemical dynamics (1), or neuron dynamics (2), or electrical dynamics (4). This third form of computer instead functions on the basis of personal value, or output of the neuron based computer.

    So now to the issue which James presented of a tree that is seen by something conscious. For my own interpretation, not only does such "sense" information arrive, but also generally a "valence/ value" input that feels good to bad. Thus a scene might be beautiful, disgusting, hopeful, and so on for a particular subject to feel given its nature. Conversely non-conscious computers do not have this element. So for conscious computers, inputs (sense, value, memory) are consciously interpreted, and scenarios are constructed about what to do to promote personal value. The only non-thought conscious output that I know of is "muscle operation". Here the conscious form of computer is manufactured by and supported through a vast supercomputer that is not conscious. I believe that the conscious form of computer as defined here does less than a thousandth of a percent as much processing as the non-conscious form of computer which creates it. In a sense the big computer creates and fosters the small one, given that the big one is inherently deprived of teleological function.

    Liked by 2 people

    1. Eric,
      As we’ve discussed before, I see considerable overlap between our understandings. And the idea of the hierarchy of computation you describe is interesting.

      Although I'm not really comfortable drawing sharp lines between "mechanical" and computational systems, since computational systems are themselves physical mechanisms. I think what distinguishes systems we typically describe as computational from others is that the ratio of their causal dynamics to the magnitude of the energy involved is high, with the ratio opposite for systems we typically see as primarily physical. But if so, that means there's no hard line separating computational from non-computational systems. There are only degrees of information content.

      That said, I could see the information processing between DNA, the transcription proteins, mRNA, and ribosomes as computational, although exactly where to draw the border seems complicated. Ribosomes go on to produce proteins, most of which are arguably more about physical effects than information.

      But for your model of consciousness in particular, as we’ve also discussed, I have two broad issues.

      The first is that you make little effort to explain what “feeling good or bad” actually is. I know you consider yourself to be at the architectural level rather than the engineering one, but it seems to me that a theory of consciousness that doesn’t reduce to non-conscious terminology risks being just an alternate description of the phenomenon rather than an actual explanation. And given that you see achieving this capability as the missing ingredient that technology may never find, this strikes me as a crucial omission.

      This, incidentally, is a variation of my overall issue with hard problem thinking. It posits something mysterious (subjective experience) but then denies any attempt to examine the components of that thing. It guarantees that the thing remains mysterious and intractable. To have any hope of making progress, I think we have to be willing to dissect these concepts, to pierce the conceptual veil.

      The second issue is the idea of a discrete second computer. A phrase for this concept occurred to me this morning, which I don't know if you'll find useful: physical dualism. In my own understanding, to quote Laplace, "I have no need of that hypothesis." I might feel differently if there were neuroscience that supported this concept.

      Backing up from persnickety details, I do think the dual computer model has resonance with the mainstream neuroscience idea that various regions of the brain specialize in particular functions, and in particular does have some resonance with my own idea of the movement planner and perhaps James’ idea of the Mechanism.

    2. Let me first mention that I’d love for James, Fizan and others to join wherever they please. Here I’ll yet again be attempting to deconstruct and analyze objections from our sometimes persnickety friend, Michael Smith. 🙂

      Mike,
      I have never wanted to imply an absolute line between computational and mechanical function. And yet now after some years of proposing this position, I’m starting to consider the distinction “harder” than I had originally thought. Where are the grey areas? And I’m surprised that no one has yet provided a fifth example of something that can reasonably be said to algorithmically process information. Thus we have the genetic form based upon chemical dynamics, the whole organism form based upon neuron function, the conscious form based upon feeling good/bad, and the technological form based upon electricity (though I suppose that light could do the trick, or sound, and so on).

      Note that it’s the third form of computer, or the one that’s outputted by the second, which I theorize is necessary for Strong AI, as in John Searle’s Chinese room thought experiment. I highly doubt that a person manually following computer code could ever thus become a computational device that’s able to “understand” Chinese, let alone produce phenomenal experience.

      I think what distinguishes systems we typically describe as computational from others is that the ratio of their causal dynamics to the magnitude of the energy involved is high, with the ratio reversed for systems we typically see as primarily physical.

      I was ready to put that into my own bag of tricks (with attribution of course), though upon reflection I’m not sure it’s sufficient. We don’t consider pushing a rock off a cliff computational, though the ratio of causal dynamics to initiating energy is high here. I may need to stick with my original algorithmic-processing criterion: a mechanical typewriter doesn’t function this way, while a digital device does.

      I actually see no grey area regarding the function of ribosomes. Here input substances interact algorithmically with genetic material to produce output proteins.

      Given the tremendous number of things which my models do address, let me suggest that it’s standard human psychology which leads you to keep bringing up the one thing that I quite publicly assert that my models do not address. Here I tell you something that I can’t provide, and so you naturally retort that I’ve deleted the exact thing which is required to assess the validity of the models that I’ve developed. It’s textbook. And the funny thing here is that while I consider the hard problem of consciousness to be a truly hard problem, and frame it as an output product of certain neuron based computers, you’ve been denying the hardness of this problem given your own models. Of course saying that it isn’t hard doesn’t make it so. I suppose it is what it is regardless of what you or I suspect.

      In any case this uncertainty isn’t something that I consider very consequential. I suppose that if an answer were gifted to the AI people, then they might be able to use it, though perhaps not since they might be too technically inept to output conscious computation by means of a non-conscious computer (as my model, at least, suggests is required).

      Psychology, psychiatry, sociology, cognitive science, and yes your favorite, neuroscience, do not need to understand the technicals of how brains manufacture phenomenal experience, I think. I don’t see how such information would benefit them today. But the critical element which they do currently lack, or the thing which I consider to mandate the softness of these sciences, is that they do not yet have effective definitions and models regarding the nature of what they study. These sciences have not yet found their “Newton”. And yes in the past some have gotten snarky with me here for my audacity to hope that I might help. Me? Yes, why not me? Shouldn’t all interested people be trying to help these fields improve?

      Who’s denying any attempt to examine the components of phenomenal experience? I not only welcome that exploration, but provide models which, if empirically supported, should be quite helpful in that quest. Indeed, without effective architectural models in these fields, when we look at the brain, how might we interpret such data? You may protest at the following, but neuroscience today seems similar to astronomy at the dawn of science. I don’t mean this in terms of lacking measurement devices, since we have plenty of those, but rather in terms of lacking theory from which to effectively interpret such information.

      I agree that my dual computer model does conform with many elements of what’s already accepted. For example people today refer to an “unconscious” dynamic. As I’ve mentioned in the past, I don’t like this term given that it seems to be used without notice in at least three different ways. Firstly it’s used as what I call “non-conscious”, such as for a computer or even a rock. Secondly it’s used as “quasi-conscious”, such as when one wishes to refer to conscious function that’s subtly influenced by the non-conscious computer, as in unacknowledged racial prejudices. Thirdly it’s used for “altered states of consciousness”, such as sleep or being stoned. So this term is one of many issues that I’d like to help clean up in these fields.

      In the end I foresee various generally accepted principles of metaphysics, epistemology, and axiology (whether future theorists here call themselves “scientists” and/or “philosophers”). I believe that these principles will come to found all branches of science.

      I’m quite sure that your “movement planner” and James’ “Mechanism” could find a place in my own model. Note that the conscious form of processor, which interprets inputs and constructs scenarios about what to do in the quest to promote personal value, is what I call “thought”.

      1. Eric,
        On the distinction between computational and non-computational systems, I’ve had people assert to me that all kinds of things were computational: molecules, proteins, synapses, cells, plants, weather systems, stars, black holes, even rocks, not to mention the overall universe. I did a post on this subject a couple of years ago: https://selfawarepatterns.com/2016/03/02/are-rocks-conscious/

        The TL;DR is that whether a system is implementing a particular algorithm, or any algorithm, is a matter of interpretation. Yes, under computationalism, this means that consciousness is a matter of interpretation. It exists in the eye of the beholder. That post references a Stanford Encyclopedia of Philosophy article on computation in physical systems: https://plato.stanford.edu/entries/computation-physicalsystems/
        My position generally pairs up with the limited pancomputationalist view.

        All that being said, as a pragmatic matter, I only consider it productive to treat systems that require minimal interpretive energy to be seen as computational as computational. The only things I’ve historically considered to pass muster are the various forms of technological computers, and central nervous systems. DNA does strike me as a possible addition, although DNA specifically seems more like spools of data and programming. We have to consider the transcription proteins in the cell nucleus to get at the computer interpretation, but the overall nucleus, not to mention the overall cell, strikes me as a physical system, which leads to the issue I mentioned above of where to draw the borders.

        On your model, sorry if I hit a nerve. You keep putting your model forth, and I keep telling you what I think about it. I’ll try to keep my future responses to aspects I haven’t commented on before.

      2. Hey Eric. Actually, I would place your model inside my framework. 🙂

        [Just to reorient, the base model for my Framework is: Input —> Mechanism —> Output]

        So every system you mention, from stars, to ribosomes, to neurons, to sub-regions of the brain, can act as a mechanism. The question is what constraints you put on the input or output so as to count as a conscious event. I interpret what you wrote above as requiring that the output constitute a good feeling or bad feeling. Personally, I find that constraint too constricting, i.e., excluding events I would consider conscious. For example, I currently see the pattern of rain cascading down a window. I don’t think I feel good or bad about that, but I am certainly conscious of it. Also, I would want to know exactly how you would identify a good feeling versus a bad feeling.
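        The point above can be restated as a toy sketch (all names hypothetical; nothing here is claimed as anyone's actual model): a "mechanism" is just a function, and whether a given input/output event counts as a conscious event depends entirely on which constraint you choose to impose, exactly as with the valence constraint and the rain-on-the-window example.

```python
# Toy sketch of the Input -> Mechanism -> Output framework.
# Hypothetical names throughout; this encodes only the structural
# claim that "conscious event" depends on the constraints imposed
# on a mechanism's inputs and outputs.

def classify_event(inp, mechanism, output_constraint):
    """Run a mechanism on an input and test the chosen output constraint."""
    out = mechanism(inp)
    return out, output_constraint(out)

# One possible constraint: the output must carry a good/bad feeling.
def valence_constraint(out):
    return out["valence"] is not None

# A mechanism that attaches valence to some inputs but not others.
def toy_mechanism(inp):
    if inp == "stubbed toe":
        return {"percept": inp, "valence": "bad"}
    return {"percept": inp, "valence": None}  # e.g. rain on a window

_, conscious1 = classify_event("stubbed toe", toy_mechanism, valence_constraint)
_, conscious2 = classify_event("rain on window", toy_mechanism, valence_constraint)
print(conscious1, conscious2)  # prints: True False
```

        Swapping in a looser constraint (say, accepting any output at all) would reclassify the same rain event as conscious, which is the sense in which the classification lives in the constraints rather than in the mechanism.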

        Interestingly, I absolutely agree with your reference to one (possibly not conscious) “computer” (Mechanism) providing the input to another, conscious “computer”. I’ve even gone as far as to speculate that that is exactly what is happening in the brain, with the entire neocortex being the first Mechanism and various subcortical systems, including at least the thalamus, as the second Mechanism which is responsible for the conscious events. [Hi Mike!]. FWIW, the neocortex would be the source of input and also the location of the output.

        However, a necessary conclusion from this framework is that the Chinese Room definitely counts as a mechanism, and given the nature of the inputs and outputs, such input/output events definitely constitute conscious events.

        By the way, whether any event is a computation is exactly like asking whether an event is a conscious event. It all depends on what you require with respect to inputs and outputs.

        *

        1. Hi James,
          I think I’ve told you this before, but I don’t know that Eric, or anyone else still monitoring, has necessarily seen it. While I think the thalamus is crucial for consciousness, I don’t think it’s sufficient. Certainly hydranencephalics, children born without much of anything above their thalami, have a primal sort of reflexive proto-consciousness, but it seems like a pale reflection of the full thing. And adults who receive extensive injury to their cingulate or neocortex can, despite intact thalami, be reduced to a zombie-like status.

          Overall the thalamus, cingulate cortex, and neocortex seem like an overall system to me, with the cortices being sort of expansion substrates for the phylogenetically more ancient thalamus. The consciousness of a healthy adult human seems to require most of the thalamo-cortical system. This fits with my point in the post that consciousness isn’t an all or nothing thing. It isn’t like a light that is either on or off. A person can have varying degrees of it, or some aspects of it but not others.

          One of these days I’m going to have to do a post on the Chinese Room. I think Searle made sort of a valid point with that thought experiment, but I don’t think it supported the conclusions he drew from it.

          1. Mike said: “While I think the thalamus is crucial for consciousness, I don’t think it’s sufficient.”

            This is, of course, absolutely correct. In my Framework a mechanism generates conscious events. It necessarily requires input. As you take away potential inputs, you decrease the possible number of experiences. Likewise, if you take away possible outputs, you take away possible experiences.

            Also, to make sure we’re on the same page, it’s not necessarily correct to attribute consciousness to the mechanism. The system would “be” conscious. A mechanism would “have” consciousness if and only if there were conscious events internal to the mechanism. This necessarily means there would be a sub-mechanism internal to the mechanism in question which would generate conscious events without any mechanism internal to it generating conscious events. (That’s an awkward sentence, let me know if you don’t follow.)

            Actually, this last point has given me cause to adjust my point of view on the Chinese Room. [Woot! New thought!]. By my description, the Room would be a mechanism of conscious events, but not necessarily conscious itself. That said, if the Room demonstrated any kind of memory of previous questions, there would necessarily have to be an internal mechanism generating conscious events (memory being the output), thus making the Room conscious.

            *

        2. Hello James,
          I deeply hope that I have been able to develop a consciousness model which resides within your framework, that is, if its “mechanism” addresses the function of stars, ribosomes, neurons, sub-regions of the brain, and so on. My question would be: what sort of reality does this not address? As physicalists we might describe everything in terms of “Input —> Mechanism —> Output”, though I suspect that you’re using this statement in a less generic and so more profound way?

          I agree that the way you’ve interpreted my consciousness model wouldn’t be very effective, as illustrated by your example. But in truth the model doesn’t require that output incite positive and/or negative feelings. Here’s a quick summary:

          I identify three forms of input to the conscious processor: value, sense, and memory. The first is the punishment/reward dynamic which motivates conscious function. For example if a subject is feeling bad, this provides reason for it to further interpret inputs and construct scenarios about how to feel better. The second is purely informative rather than valuable, such as the example that you’ve provided about watching rain. In truth however virtually all inputs will have both value and sense components to them. A given scene might cause a person to feel good given its beauty, for example, but also provide sense information such as that the sun is going down, rain is coming, or whatever. Then the memory input evokes past consciousness that somewhat remains for future use. Without it, for example, from second to second you wouldn’t know your own son. The only non-processing form of output that I know of is “muscle operation”.
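          The summary above could be sketched as a toy data flow (hypothetical names; this is only a restatement of the stated structure, not a claim about how any of it is actually implemented): value, sense, and memory come in, the conscious processor interprets them, and the lone non-processing output is muscle operation.

```python
from dataclasses import dataclass, field

# Toy restatement of the three-input summary above. All names are
# hypothetical; this mirrors only the stated structure: value, sense,
# and memory in, with muscle operation as the sole non-processing output.

@dataclass
class ConsciousInputs:
    value: float                 # punishment/reward signal (negative feels bad)
    sense: str                   # purely informative content
    memory: list = field(default_factory=list)  # retained past consciousness

def conscious_processor(inputs: ConsciousInputs) -> str:
    """Interpret inputs and choose a muscle operation to improve value."""
    inputs.memory.append(inputs.sense)  # retain this moment for future use
    if inputs.value < 0:
        return "muscle operation: act to feel better"
    return "muscle operation: none required"

moment = ConsciousInputs(value=-0.7, sense="hand on hot stove")
print(conscious_processor(moment))  # prints: muscle operation: act to feel better
```

          A negative value signal is what drives the processor to act, which matches the claim that feeling bad provides the reason to construct scenarios about how to feel better.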

          Furthermore let me say that I consider consciousness to technically function instantaneously, even though it does seem continuous to us given both memory of the past and anticipation of the future. While the continuity associated with memory should be clear, anticipation adds continuity by means of the positive value of hope, and the negative value of worry. For example we might willingly choose to endure uncomfortable circumstances when counterbalanced with sufficient feelings of hope. And even if things are good right now, if we foresee bad circumstances in the future this worry hurts us presently.

          I don’t have any specific criteria for identifying good to bad feelings, beyond just feeling what I feel and so deciding its level of good to bad. I will say however that I suspect that we’ll some day have machines which are able to reasonably quantify how good to bad someone feels at any given moment. There should be at least some reasonably objective criteria to measure in this regard.

          It’s good to hear that you’re positive about a non-conscious computer outputting a conscious one. I don’t know all that much about brain biology myself, though I’d love for someone who does to grasp the full nature of my architecture and so assess it in that way.

          Regarding the Chinese room, I think it all depends upon how the “consciousness” term is being defined. From my definition (or where a non-conscious computer outputs value, sense, and memory input to a conscious computer) this just doesn’t seem right. Here we’re depending upon poor John Searle to function as the non-conscious computer by means of accepting the Chinese information, manually running it through algorithms, and then by doing so outputting a conscious computer. This is to say another machine that is motivated and able to understand those characters by means of punishing/rewarding feelings. And what if there were a machine that could feel horrible pain upon the proper human command? Might John manually go through its algorithm and so output something that feels the same pain? But of course this is from my own definition of consciousness. None are true, though science remains in desperate need of a useful one I think.

          “By the way, whether any event is a computation is exactly like asking whether an event is a conscious event. It all depends on what you require with respect to inputs and outputs.”

          I would say that they are the same in that it depends upon definition. Is that about what you meant as well?

          1. Eric,
            My understanding of the main point of the Chinese Room is that the room + man doesn’t really understand Chinese. This gets into how we define “understand”, but I think Searle is taking it to mean having mental images of what the Chinese words refer to. So an actual Chinese speaker, when they see the Chinese word for “pain”, maps it to the phenomenological experience of pain, which the room never does. So I think you’ve got the scenario right. (Or we have similar misunderstandings 🙂 )

            Of course, Searle’s scenario assumes that it’s possible for the room to pass the Turing test without some form of understanding (on purpose or otherwise) lurking in the instructions.

          2. Good to hear that we’re on the same page with this Mike.

            “Of course, Searle’s scenario assumes that it’s possible for the room to pass the Turing test without some form of understanding (on purpose or otherwise) lurking in the instructions.”

            Here you seem to imply that he’s handicapped the test by not providing the room with enough information. Well if so I think it’s fixable. Notice that even a human can’t pass this Turing test without living a life as a Chinese person, and so learning to speak the language and such. Thus strong AI (as a conscious computer) would need this provision as well. Now we have Searle in the room manually performing all of the algorithms associated with that computer gaining an education in Chinese life, which would include reproducing pain, hope, and so on felt by the machine as it essentially became Chinese.

            Even if a technological computer could do this, could the room as well? I say no, and at least because I doubt that the hard problem of consciousness would be overcome in this way.

          3. I think having the man in the room use any of his own understandings violates the conditions of the thought experiment. The idea is that any Chinese responses the room produces come only from him strictly following the instructions.

            But my point was that if the room can, say, interactively and at length discuss its experiences growing up in China, who can say that within the instructions isn’t an entity that thinks it grew up in China?

          4. Yes Mike, it’s also my understanding that the scenario here concerns the strict following of instructions. I suppose that whether or not something exists within the instructions that thinks it grew up in China, would depend upon the definition that one uses for “consciousness”, or the presumed means of “thinks”. No such definition is true (according to my EP1), though some definitions do seem more useful than others.

            I expect the scientific community to come up with a highly useful definition for consciousness some day, and so help harden up associated sciences. Perhaps this definition will correspond with the one that I’ve developed? If so then from here the Chinese Room would need phenomenal experience in order to potentially “think it grew up in China”, and it’s hard for me to imagine that its process would support that sort of thing. But then I don’t mind when people submit contrary “functional” definitions in order to make various points. They’re no more true or false than mine is.

          5. “I expect the scientific community to come up with a highly useful definition for consciousness some day, and so help harden up associated sciences.”

            You’re more optimistic than I am. Personally, I think the best scientific progress will be made in terms of concepts whose precise definitions are easier for people to agree on, such as memory, perception, discrimination, attention, imagination, introspection, or affects. I suspect whenever someone makes progress on some definition of consciousness, there will always be people saying that they’re not making progress on “real” consciousness.

          6. “I suspect whenever someone makes progress on some definition of consciousness, there will always be people saying that they’re not making progress on ‘real’ consciousness.”

            Well sure Mike, but also consider my own plan to potentially end that bullshit practice. The reason that it’s permissible today in academia to challenge the truth of a given definition is that nothing like my first principle of epistemology has yet become generally accepted. If it were formally understood that there are no true or false definitions, though only more and less useful ones in the context of a given argument, then educated people would tend to be ridiculed for such transgressions. This goes back to my position that the foundation upon which science rests, or metaphysics, epistemology, and axiology, will require a group of respectable specialists that have various generally accepted principles in these regards. I propose four. Conversely most scientists today (not to mention philosophers) believe that science begins and ends with science.

            Though it may seem that components of consciousness would be less problematic than consciousness itself, observe that “big picture” speculation should also mandate speculation below. There are no true definitions for memory, perception, discrimination, attention, imagination, introspection, or affects, but rather more and less useful definitions in the context of a given argument (of consciousness or whatever). So I think that we’ll need effective holistic models in the end, since the bits and pieces should naturally tend to vary with each.

            I suppose that I’d be pessimistic about consciousness as well, that is if I hadn’t developed an account of it that corresponds with my conception of reality in general. And note that it wasn’t consciousness that I was originally working on, but rather value. Once satisfied with value I decided that if this model happens to be effective, then perhaps I can test its validity by using it to develop an effective consciousness model as well?

            I don’t believe that one truly learns physics by means of attending lectures. Instead I think that one must put pencil to paper in the attempt to solve practical problems where a given concept is implemented. This is where general ideas become refined to illustrate what a given concept both does and does not mean in necessarily subtle ways.

            Another example would be how a journeyman plumber can show an apprentice what to do on a daily basis, though such lectures in themselves remain insufficient to truly teach plumbing. In order to learn this trade, an apprentice must actually do plumbing, which is to say, personally decide what to do at each stage of a given task. Mistakes refine this process.

            So from this perspective, how might you learn the subtle elements associated with my consciousness definition, and so be able to decide whether or not my optimism is warranted? Through the following path, I think:

            When you read an associated article, write your perception of what I’d say about it. Then tell me what you’ve read so that I can send you my actual perspective. It’s where these two accounts diverge that you’d be shown subtleties of my model that I doubt could otherwise be provided. I can’t say how many articles would be needed to give you a reasonable understanding of consciousness as I define it, though obviously you’d only continue if you found this more interesting than other pursuits.

        3. “(That’s an awkward sentence, let me know if you don’t follow.)”

          I think I get it. If a mechanism is conscious, at some point, the nested mechanisms inside it wouldn’t be. I usually word that as, consciousness must eventually reduce to non-conscious components, just as biological life must eventually reduce to non-living components (amino acids, lipids, etc).

  17. I have some thoughts on this.

    I’m not sure if a limitation on a system’s ability to model itself is functionally any different than talk of souls.
    I think what really troubles people about scientific theories of consciousness is not the idea that it arises from the grey stuff between our ears, but the crappy-scientism version of that story in which we’re simple machines running on protein and that, in a few years’ time, some scientist will solve an equation and we’ll be able to know what Gertrude the 8 year old will do on her 80th birthday. As we’ve discussed regarding Gödel and AI, I don’t think this works even in principle.
    The biggest problem, I think, does tie into the hard problem of consciousness, and it’s actually more of an irony: the irony that subjectivity is literally the only thing we have. It seems like there’s space to call b.s. when we try to say that objectivity, this thing we construct out of our subjectivity, is somehow supposed to supplant subjectivity. It’s like saying milk is false because we have made cheese.

    Anyway, thanks for the article. Well written and thought provoking as always. 🙂

    1. Thanks Ben.

      Predicting what Gertrude, as a system in the outside world, will do 72 years from now is impossible under currently understood physics, even in principle. I don’t know any thinking person who would argue otherwise. On the other hand, coming up with probabilities of what she might do tomorrow? We already do that ourselves, so the proposition that technology will never be able to seems…improbable.

      I think it always pays to remember your last point. The objective is always a theory, a model of what’s out there which could be falsified at any point. On the other hand, we can test those theories by how well they predict our future subjective experiences. The idea that theories that have a successful track record will be shown to be completely wrong is itself not a theory with a good track record, at least according to my subjective experiences 🙂

      1. I agree with you on Gertrude. What I was more referring to are the overly flip, deterministic things that get said in pop-science all the time (cough – Elon Musk – cough). I think that a lot of people (including the people who make the overly flip predictions) don’t do a very good job distinguishing between probabilities and “scientific” prophecies, which is one reason I think people are uncomfortable demystifying consciousness.

        And you are absolutely correct on theories assuming objectivity. However, I also think we agree that the subjective is primary and can’t be shuffled out the door so easily.

        🙂

  18. “What about subjective experience implies anything non-physical?”

    I believe the hard problem of consciousness lies squarely at the feet of subject/object metaphysics (SOM), an architecture of thought we inherited from Plato, Aristotle and their Greek cronies. The constraints of SOM are clearly demonstrated by David Chalmers’ own words: “But there is one kind of consciousness that I am most interested in, and that is consciousness as subjective experience…”

    A coherent theory of consciousness will never be attained because the hard problem of consciousness is constrained by our current paradigm of subject/object metaphysics, an architecture of thought that by its own nature is massively suppressive. Dismantling SOM would be the best place to start, because fundamentally, there are no such things as subjects and objects, just the things we do not understand, and because we do not understand them, we label them as subjects and objects, crafting an intellectual construct that in the end suppresses meaning. Our current distinction dividing the subject and the object is an arbitrary one. One could just as easily define objects as things which have qualitative properties that are determinate, just as easily as one could define subjects as objects which have qualitative properties that are indeterminate, putting both the object and the subject in the same box without distinction. It’s all a matter of preference, but something as simple as preference could lead to a new model of reasoning.

    In this new paradigm of reasoning, consciousness would be a first person objective experience of some “thing” that is radically indeterminate, not a subjective experience at all. One could then develop a model of consciousness where consciousness is universal and is an objective experience of some “thing” that is common to all phenomena, from the indeterminate qualitative properties of inner and outer space, spin, mass and charge, to the indeterminate qualitative properties of consciousness that we as human beings experience. Grounding any theory of consciousness in our current paradigm, by calling it a subjective experience, is a blueprint for an endless debate predicated on the grounding tenets of subjectivity, the very definition of a black hole.

    1. Thanks Lee. I’m not quite sure I’m following the SOM paradigm you describe, but I do agree that the language of subjective experience trips us up a lot. In general, I think the language of mental description is problematic, not in and of itself, but because people are often unwilling to decompose it, to dissect it, to explore what it means at a lower level of abstraction, which of course leaves the concepts hopelessly mysterious and intractable.

    2. Lee,
      I think I might understand your position. I interpret you to be saying that it would be helpful if we didn’t assert that there are subjects and objects to reality, but rather just things that aren’t understood. Therefore anything that does exist might just as well be referred to under the heading of “object”. Thus if everything is an object then there can only be objective reality, even if objective reality happens to be difficult (or worse) to grasp. And then from the premise of objective consciousness, this “thing” could be interpreted as an elemental component of nature.

      Is that close? If so then I suspect that academia might counter your argument in the following way:

      There is only one element of reality that I can ever know to exist for certain, and it’s that I experience my existence (which is of course from Descartes). This is to say that it’s impossible for me to not exist but still “think” (as I’m clearly doing from my own perspective). I consider this position beyond physics, and perhaps even beyond metaphysics. From here my existence cannot possibly be an untrue element of reality, while all that I perceive needn’t have any truth to it beyond sensations. Should you experience existence, then you could say the same.

      Thus I cannot assert that subject and object may usefully be referred to under the same heading — I absolutely exist while what I perceive could be false. But from there if I’m curious enough to seek more than just my single absolute Truth, I can add the metaphysics of objects and thus get to the position that your theory seeks to overcome. Even here I am hopeful regarding “consciousness” however, since it’s merely a humanly fabricated term which thus harbors the potential for effective definition. After all, science seems to have done tremendous things in recent centuries, and it’s still incredibly young!

      Like

      1. Philosopher Eric,

        You are spot on with your analysis of my position on SOM. Being a noumenalist, I am in full agreement with Parmenides that there is only one thing, and that one thing is an objective reality that is separate from any appearance one might assign to it and separate from any opinion one might have of it. I am also in full agreement with Kant that the noumenal world is the objective reality which underwrites and is responsible for our phenomenal reality. An objective reality is an axiom, regardless of whether we grasp or understand what that objective reality actually is. Simply because we do not understand what that objective reality actually is, there is no need to freak out, create gods, create a construct of idealism, invent the big bang, postulate multiple universe theories or throw up our hands and toss the baby out with the dirty bath water. A slow, deep breath needs to be taken first.

        I concur that our conscious experience is the only thing one can know for certain, our own conscious experience is also the key to unlocking this mystery of an objective reality. The first incremental, metaphysical step in that process which will lead to an understanding of that mystery is to move away from subject/object metaphysics as an architecture of reasoning, because by design, SOM imposes constraints on an objective reality by stating that it is a subject. Therefore, the entire notion of an objective reality is dead on arrival (DOA) and becomes whatever one says it is. Under the architecture of SOM, an objective reality corresponds to our own creative, vivid imaginations. So the question confronting the architecture of SOM becomes: Does an objective reality correspond to us, or do we correspond to an objective reality? I hope you can begin to see the suppressive nature of SOM and the contradictions and paradoxes inherent within that architecture.

        Liked by 1 person

    3. Lee,
      It’s good to hear that my rough assessment of your position was about right, though of course there are all sorts of complexities that I don’t yet grasp. Regarding your struggle against established paradigms, I’m actually in the same boat (though surely the ones that I fight are generally different). What I seek most right now is not for others to agree with my models, but rather for them to demonstrate that they grasp associated intricacies. How might I otherwise receive effective criticism from which to improve? And if my models actually are pretty good as they stand, how might others grasp this without a reasonable grasp of the particulars? I suspect that you find this problematic as well.

      Like you I actually consider myself to have a perfect belief in noumenalism. I have absolutely no use for platonism, idealism, or any other ontological variety of anthropocentrism. But even though subject/object metaphysics is off the table for me in an ontological sense, not so regarding epistemology. Because I am not a “god” but rather a product of a system that I’d like to objectively understand, I seem fated to a subjective perspective. If the noumenal underwrites the phenomenal, then I must make do with the phenomenal regarding my epistemology, even given an absolute noumenal ontology.

      I consider the topics of philosophy to constitute the foundation upon which our sciences structurally rest, so I believe that our failures in reality study stem from our failures in the field of philosophy. Without a respectable community that has its own generally accepted principles of metaphysics, epistemology, and axiology, our mental and behavioral sciences seem highly susceptible to the softness which they display. Furthermore I’ve developed one principle of metaphysics, two principles of epistemology, and one principle of axiology, from which to potentially harden them up. (Apparently hard science is naturally less susceptible to these foundational issues, though I believe that it could benefit from my principles somewhat as well.)

      I hesitate to bring this up, but now that I see you’re actually a strong naturalist, there is something I’d like to ask. In your first comment I had you pegged as a panpsychist. If that is the case, could you provide some details about your position there?

      Like

      1. “I consider the topics of philosophy to constitute the foundation upon which our sciences structurally rest, so I believe that our failures in reality study stem from our failures in the field of philosophy.”

        I agree with this statement 100%, as did the late physicist Niels Bohr, who desperately wanted a philosophical framework with which to unify and/or explain the contradictions observed within classical and quantum physics. Unfortunately, there was no money to be made in philosophy; the money lay within the architecture of quantum mechanics, so a philosophical framework was never pursued.

        A unification theory was my original focus of philosophical research; consciousness was a secondary field of research that evolved much later, albeit the two topics are overlapping and entangled in such a way that it is mind boggling. So yes, I do have a model of consciousness and that model is predicated upon micro-panpsychism, which is then grounded in singularity. Because it is grounded in singularity, the laws of physics break down and are not applicable, so there is no “combination problem” in my model. The laws of physics and mathematics are nothing but useful tools attempting to describe the complex, intricate relationships of discrete forms of consciousness as they engage in meaningful relationships with each other in a continuum of construction, decay, reconstruction, deconstruction, and reconstruction, all of which results in the novelty of expression we observe and “objectively” experience through our own unique form of consciousness. Even though those conscious experiences may be radically indeterminate and articulated as subjective, they are still objective experiences, they are NOT subjective experiences.

        Corresponding to the Parmenidean reality/appearance distinction, consciousness is not the reality, consciousness is the appearance. So now, one is compelled to ask: What is consciousness exactly? In my model, consciousness is clearly defined as an objective experience that is inclusive and universal, therefore, consciousness is fundamental in explaining the complex and often intricate relationships that physics attempts to capture with the laws of physics and mathematics.

        I established four guiding principles for myself that have to be met before any epistemology or ontology is considered viable.
        1. Logical consistency: It has to stand up under the scrutiny of analysis, in other words, it has to make sense.
        2. Ease of explanation: It has to be easy to explain, easy enough that even a young child could understand.
        3. Inclusiveness: Any epistemology or ontology has to include every phenomenon through all of space and time, it cannot exclude anything.
        4. It’s not about us: it does not correspond to the doctrine of anthropocentrism.

        That’s a brief synopsis without going into the details…

        Liked by 1 person

        1. Lee, I would be grateful to hear the details, especially in a form that a young child could understand. 🙂 I’m trying to determine to what extent your understanding matches mine.

          So I agree that experiences are objective, but I’m not convinced we should throw out the subjective. I think we need to be able to explain the subjective “hard problem”. As Chalmers puts it, you should at least be able to explain the meta-problem, i.e., explain why there seems to be a “hard problem”.

          *

          Like

          1. James,

            I’ve checked out your web page and your theory does not correspond with mine. Before anybody can even start talking about consciousness, there needs to be a metaphysical foundation upon which to build. Currently, there is no such foundation. Metaphysically, there needs to be a definition of consciousness that is inclusive and universal. Once that foundation has been firmly established and universally accepted, then theories such as yours, Giulio Tononi’s, and David Chalmers’s will become useful. Until that threshold is crossed, ongoing speculation and research on consciousness are a hopeless, useless waste of resources. And remember, consciousness is just one slice out of the entire loaf of bread which is necessary in order to develop a universal theory of everything.

            The reason that homo sapiens like subject/object metaphysics (SOM) is that it conforms to our current paradigm, which is a paradigm of control. And that is precisely why SOM was crafted as an architecture of thought by Plato, Aristotle and their Greek cronies. SOM does not correspond to Parmenides’ teaching that there is only one thing, and that one thing is an objective reality that is separate from any appearance one might assign to it and separate from any opinion one might have of it. According to Parmenides, homo sapiens correspond to that objective reality; according to SOM, an objective reality corresponds to homo sapiens. An unknown objective reality is a subject, therefore it can be whatever one says it is. SOM corresponds to and reinforces our paradigm of control.

            Fundamentally, David Chalmers’ “hard problem” is not that difficult; the difficulty resides in the meta-problem. The meta-problem of consciousness is this: There is a genetic defect in the underlying form of reasoning and rationality, and unless one is willing to address that defect nothing will change, because nothing can change. Please understand, this defect in reasoning is only problematic in the context of homo sapiens progressing to the next evolutionary ontological level of conscious experience. Reasoning and rationality, with their stratagem of control, are perfectly suited for our primary experience; they are a primordial, rudimentary property of all discrete forms of consciousness. Nevertheless, they are the very mechanism which obstructs the natural migration to the next evolutionary ontological level of experience.

            Like

        2. Lee, I’m afraid my website is far from complete. As you say, it lacks a good discussion of the ontology on which it is based, even though there are hints. I’m still working out how to describe it, which is why I’m here. So here is a summary of the ontological discussion:

          There are two things worth talking about – patterns and physical stuff. Patterns include all abstractions, rules, relations, etc. Physical stuff is just everything physics talks about. My catchphrase is “Physical stuff exists, patterns are real.”
          All physical interactions/events, including those associated with consciousness, can be described as
          Input —> [Mechanism] —> Output

          So to put this metaphysically, for something to exist, that is, for something to be a noumenon, it must be a mechanism for at least one such event. But our only access to any knowledge of the noumenon/mechanism comes from the events of which it is capable. In some cases we can break a mechanism down into sub-mechanisms, but that only leaves us with sub-noumena which we can only access via their sub-events.

          Now whether an event is an “experience” (i.e., a consciousness associated event) depends on what constraints you want to put on the Input, the Mechanism, the Output, or some larger system which includes the Mechanism. If you don’t put any further constraints at all, you get panpsychism.
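          The scheme above can be sketched as a toy program. Every name here is purely illustrative (my own invention for this comment, not part of any established theory); the point is just that whether an event counts as an “experience” depends entirely on the constraint you choose, and that the empty constraint yields panpsychism:

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Event:
    input: Any
    mechanism: Callable[[Any], Any]  # the noumenon, known only via its events
    output: Any

def make_event(mechanism: Callable[[Any], Any], inp: Any) -> Event:
    # A mechanism "exists" insofar as it mediates at least one such event.
    return Event(inp, mechanism, mechanism(inp))

def is_experience(event: Event,
                  constraint: Optional[Callable[[Event], bool]] = None) -> bool:
    # With no further constraint at all, every event counts -- i.e., panpsychism.
    return True if constraint is None else constraint(event)

e = make_event(lambda x: x * 2, 3)
print(e.output)                                     # 6
print(is_experience(e))                             # True: unconstrained
print(is_experience(e, lambda ev: ev.output > 10))  # False: constrained
```

Demanding, say, that the Mechanism be some particular kind of system would amount to passing a stricter `constraint`, carving out a much narrower class of “experiences”.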

          So that’s my shot at an ontology that meets your guiding principles. Your turn.

          *

          Like

          1. James,

            Input —> [Mechanism] —> Output…… (This is a crude model, but it’s simple enough.)

            “But our only access to any knowledge of the noumenon/mechanism comes from the events of which it is capable.” (and that capacity is clearly demonstrated in our phenomenal world as infinite capability, so that’s cool)

            “…for something to be a noumenon, it must be a mechanism for at least one such event.”

            The first statement is acceptable, but the second statement is problematic because you have assigned properties to the noumenal world which automatically place constraints on that “unknown” objective reality. To reference Captain Kirk: Assigning any qualitative properties to the objective reality of the unknown violates the Prime Directive of Parmenides.

            “Now whether an event is an “experience” (i.e., a consciousness associated event) depends on what constraints you want to put on the Input, the Mechanism, the Output, or some larger system which includes the Mechanism. If you don’t put any further constraints at all, you get panpsychism.”

            One could make that statement without contradictions, so that’s cool too.

            The only knowledge we have access to is our phenomenal world, and since our phenomenal reality is the appearance of reality, we should be able to locate “tells” and/or clues located within that phenomenal structure that will give us some insight into the noumenal realm. We don’t have to re-invent the wheel here; there are some heavyweights in the field of philosophy who’ve already done the heavy lifting. Other than Parmenides, the next genius would be Immanuel Kant. Kant was convinced that our phenomenal world is an expression of power, and the only place that type of power could reside and/or originate is in the noumenal world. Understanding power and developing a metaphysical model of power is just one of the keys needed to unlock the mystery.

            Like

          2. Lee,

            “The only knowledge we have access to is our phenomenal world, and since our phenomenal reality is the appearance of reality, we should be able to locate “tells” and/or clues located within that phenomenal structure that will give us some insight into the noumenal realm.”

            I agree with this statement. But my understanding of your position (which may be wrong) is that the distinction between the subjective and the objective is misguided. I’m wondering what you see as the differences between subjective and phenomena, and between objective and noumenal.

            My own current view is that we only ever have access to the subjective. For objective reality, all we can do is construct theories, models of what we think that reality is, which we do every waking minute of our lives. And we can only judge those models by how accurate their predictions are for future subjective experiences. So the objective is, for us, the models we construct from the subjective and which are predictive of future subjective sensations.

            Like

  19. Eric,

    “And if my models actually are pretty good as they stand, how might others grasp this without a reasonable grasp of the particulars? I suspect that you find this problematic as well.”

    I doubt very much that I will have any difficulty grasping anything you attempt to articulate. As for myself, people respond to my philosophical positions with silence; I instantly become the invisible boy. But I do get that, because my philosophical world is the underbelly of underlying form, and most people have no idea what underlying form is, let alone what that world means. Personally, I don’t have much use for the surface appeal of our phenomenal reality because by design, it’s a huge distraction. If one seeks to find meaning and understanding, the answers are in the underlying form.

    In one of your recent posts, one statement in particular immediately caught my attention: “I consider value to be the strangest stuff in the universe…” Would you care to elaborate on that statement?

    Liked by 1 person

  20. Lee,
    Thanks for your optimism about grasping technical elements of my theory. (To be clear, above I didn’t mean that you personally have a problem understanding my theory, but rather that you probably have a problem with others not understanding yours.) Beginning from my “lectures”, the way to gain an understanding of my theory, I think, would be through attempts to put it into practical use. For example, if you were to read some random article that my theory applies to (and most articles do), as well as write something about it given your perception of what I’d say, then that would be an effective implementation. Then I could send you my own actual assessment of that article (without reading yours of course), and here you should be able to see various important discrepancies from which to improve your understanding in subtle ways. It’s like learning physics by means of working through the problems at the end of each chapter.

    The issue however is that you should understandably decide that your ideas are far better than mine. Thus why take the time to learn the intricacies of a crap theory? Hmm… But given that each of us has our own theory to peddle, a solution may exist. Theoretically we could trade! Thus I attempt to learn your theory by means of article implementation, while you attempt to do so with mine. But let me warn you that my theory should take some time. The lecture side alone might be analogous to at least a university semester of classes. (By now Mike has probably endured the equivalent of three or more of my lecturing semesters! How many would you say, Mike?)

    For now I’ll address your second response and then go back to the previous. Yes I do consider “value” to be the strangest stuff in the universe. So what do I mean by this?

    I must have mentioned several times in this extensive thread alone that I find it useful to break reality into “mechanical” and “computational” forms of causal function. The mechanical typewriter displays the mechanical side pretty well, while a standard computer instead processes input algorithmically for potential output. Reality’s first example of computer thus should have occurred through life’s genetic material — chemical algorithmic computation. Reality’s second example of computer should have come through central organism processing for multicellular life — neuron algorithmic computation. Reality’s fourth example of computer should have come through technological computers — electricity based (though light could be a substitute medium for example). (The third, the conscious form of computer, comes up below.)

    It’s the stuff which I theorize to drive the conscious form of computer that you’re asking about, or “value”. Without it, nothing matters to anything throughout all of existence. I theorize that it’s possible for the neuron based computer (#2) to produce this punishment/reward stuff for something else to experience, and thus potentially drive a conscious form of computer (#3). The hard problem of consciousness is essentially the engineering question of how value gets produced. I don’t consider this particular question all that important however. The important part would be testing to see if consciousness functions on the basis of value as I’ve described, and if so to thus help our mental and behavioral sciences harden up. I consider nothing to be more strange than this value stuff, because it marks the difference between significant existence (such as me when I’m experiencing horrible pain) and insignificant existence (such as my own dead body).

    Now on to your previous response. Though we have radically different ideas, it’s wonderful to meet a kindred spirit in both passion and topic. While attempts to reconcile classical and quantum mechanics led you to theorize the nature of consciousness, I was led there given my quest to straighten out the nature of value for our mental and behavioral sciences.

    Though I believe that reality ultimately exists beyond our subjective experiences, surely neither you nor I have “perspectives from beyond”. Surely we have within-the-system subjective experiences, even though we do seek more. Thus it would displease me if your theory were to become widely accepted (given my investments in my own), and you could say the same in reverse. Furthermore, given countless unique teleological perspectives, well beyond just the human, I consider it useful to refer to none of them as “objective”, but rather “subjective”. The objective noumenal realm should always remain beyond them, and we’re merely causal elements of the system.

    Another disparity between our subjective views is that I currently consider classical and quantum mechanics to already be “reconciled”. Neither are “True”, though both remain useful, I think, from their own levels of abstraction. Am I correct that quantum equations effectively become classical equations whenever “normal” figures for velocity and such are entered?
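    (One rough, standard-textbook way to check that intuition is the de Broglie wavelength λ = h/(mv): it sits near atomic scales for an electron, but is vanishingly small for everyday objects at “normal” velocities, which is why classical equations suffice there. A quick illustrative calculation:)

```python
# Illustrative de Broglie wavelengths, using standard textbook constants.
h = 6.626e-34  # Planck's constant, J*s

def de_broglie(mass_kg: float, velocity_ms: float) -> float:
    # lambda = h / (m * v)
    return h / (mass_kg * velocity_ms)

electron = de_broglie(9.11e-31, 1.0e6)  # electron at ~10^6 m/s
baseball = de_broglie(0.145, 40.0)      # baseball at 40 m/s

print(electron)  # ~7.3e-10 m: atomic scale, so quantum effects dominate
print(baseball)  # ~1.1e-34 m: utterly negligible, classical physics suffices
```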

    I’ve mentioned that my four principles of philosophy are meant to found science, and so harden up our soft sciences. Furthermore I consider the outer limits of physics today to have effectively devolved into “soft science”. I wonder if you’re familiar with Sabine Hossenfelder over at http://backreaction.blogspot.com/?m=1 ? Apparently in her field evidence is being forsaken for the “beauty” of mathematics. She’s been on a quest to fix this problem for years, and has just released a book in this capacity. Technically I’d say that it’s not failed physics which has permitted her colleagues to go astray however, but rather failed epistemology. If my second principle of epistemology were generally accepted however, this nonsense should structurally become quite discouraged in her field. It reads:

    There is only one process by which anything conscious, consciously figures anything out. It takes what it thinks it knows (evidence), and checks to see how consistent this happens to be with what it isn’t so sure about (a model). As a model continues to remain consistent with evidence, it tends to become progressively more believed.
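    (A hedged way to formalize this principle is simple Bayesian updating: belief in a model grows each time the model survives a check against evidence. The probabilities below are purely illustrative numbers I've chosen, not anything prescribed by the principle itself:)

```python
# Toy Bayesian updating: a model that keeps matching evidence gains belief.
def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    # Bayes' rule for one piece of evidence checked against the model.
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1.0 - prior) * p_evidence_if_false)

belief = 0.5  # start unsure about the model
for _ in range(5):  # five observations, each consistent with the model
    belief = update(belief, p_evidence_if_true=0.9, p_evidence_if_false=0.3)

print(round(belief, 4))  # 0.9959 -- progressively more believed
```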

    Regarding Heisenberg’s Uncertainty Principle, I’m a huge fan! But I believe that he, Bohr, and indeed most modern physicists today, have gotten their physics mixed up with their metaphysics in this regard. Thus I offer my single principle of metaphysics to potentially put things straight. It reads:

    To the extent that causality fails, there’s nothing to figure out anyway.

    What I mean by this is that if the uncertainty of our QM measurements reflects a fundamental void of noumenal causality, then naturalism fails — nothing exists to figure out in that regard anyway. Thus it would be effective to say that reality functions supernaturally. Conversely however there is Einstein’s metaphysical interpretation. This is to say that noumenal reality does reflect ultimate causality, though the idiot human is out of its league in this regard. Like me, Einstein adopted a naturalistic form of metaphysics. In order for us to be wrong, supernaturalism must be right.

    Regarding your model of consciousness, you’ve defined it as, “an objective experience that is inclusive and universal, therefore, consciousness is fundamental in explaining the complex and often intricate relationships that physics attempts to capture with the laws of physics and mathematics.” Furthermore given my first principle of epistemology, I must not argue with your definition. According to it, none are true or false. But here I must ask, how do you consider it useful to define everything to be conscious? Does this not render the term meaningless?

    Like

    1. Eric,

      I will work on a proper response but I can respond to your critique as follows: “Furthermore given my first principle of epistemology, I must not argue with your definition. According to it, none are true or false. But here I must ask, how do you consider it useful to define everything to be conscious? Does this not render the term meaningless?”

      (I do agree with your first principle of epistemology 100%, nevertheless, there is an objective reality which is unknown that does not correspond to your first principle, and that objective reality is the grand prize……. Immanuel Kant insisted that the noumenal world is unknowable. I agree that this objective reality is unknown, but unknowable? Absolutely not. It can be known because we objectively experience that reality twenty-four/seven, regardless of whether we are consciously awake, dreaming, or sound asleep.)

      Meaning is the objective, and since you have no idea of what the metaphysical definition of consciousness means in my model, it cannot be critiqued. Now, if a metaphysical model of consciousness can be developed, that model can be used to replace all of our current models which physics holds so dear to its heart, like gravity, electromagnetism, the nuclear force, both strong and weak, spin, mass and charge, including but not limited to the qualitative properties of both inner and outer space, along with the entire architecture of law which is another myth. The institution of science has been living in the dark ages ever since the time of Galileo Galilei. Other than our technological advancement, we have accomplished nothing more than replacing the immortal gods of the Greeks with the immortal laws of nature. A model of consciousness is not only useful, it is absolutely essential if we are to move past the era of ignorance and mysticism.

      Liked by 1 person

    2. Eric,

      “Though I believe that reality ultimately exists beyond our subjective experiences, surely neither you nor I have “perspectives from beyond”. Surely we have within the system subjective experiences, even though we do seek more ……Furthermore given countless unique teleological perspectives, and well beyond just the human, I consider it useful to not refer to any of them as “objective”, but rather “subjective”. Objective noumenal realm should alway remain beyond them, and we’re merely causal elements of the system.”

      You are imposing your own constraints on that objective noumenal realm with your last statement. I understand your own personal reasons to not abandon the SOM paradigm, although I do not find those reasons justifiable. Teleology implicitly suggests purpose, be it good, bad, or indifferent. My response to that concern is: if there is purpose, what difference does it make? First, any purpose (if there is any) is not within our control, and second, it’s not about us anyway, so what’s the big deal? According to Parmenides, if there is only one thing, then our phenomenal world is part of that reality, consequently our experiences would be objective experiences. The objective noumenal realm is beyond our grasp, but that does not render null and void our own experiences being objective, because the phenomenal realm is an expression of that noumenal realm. As an expression, it is the appearance of reality, a living, metaphorically breathing tapestry. The only payoff for an SOM ontology is that it gives our locus of consciousness, the I of self, its coveted sense of control. And if a transient sense of control is valued over intellectual honesty, then that is your decision, and I will respect that choice. But at the end of the day, we really won’t have much to discuss.

      Richard Rorty once said that without a vocabulary that captures either how the world really is or a core human nature, there is never even a possibility of locating a metaphysical foundation for truth. My model is a vocabulary that captures both how the world really is and a core human nature. My theory of everything is profoundly simple and elegantly beautiful, beyond anyone’s wildest imaginations. It’s so outlandishly wild and crazy, it will either get one excommunicated from the Church of Reason or a Nobel prize in Oslo; there isn’t much room for it in between those two extremes. The equations can be silk screened onto the front of a t-shirt. Now a core human nature, that is quite another story all in itself. A core human nature stands in the face of who and what we believe ourselves to be and flies like a lead balloon. Now you might be interested in that theorem because a core human nature is underwritten by addiction.

      I wrote a book, it’s titled “The Immortal Principle: A Reference Point”. I was offered a contract from John Hunt Publishing in the UK. John and his entire staff felt that his publishing company did not have a broad enough market reach to do justice to my work and felt that a University Press would be the best venue for my book. Consequently, I accepted their advice and did not accept their contract. To date, I’ve decided not to publish my work because it is too controversial. I’ve tested the waters many, many times, and believe me, if I was living four hundred years ago, I’d be burned at the stake if I didn’t keep my mouth shut. No worries, it is what it is. I’m sixty-five years old, too old to fantasize about visions of grandeur, so I’m good.

      Liked by 1 person

      1. Lee,
        I suppose you’re right that our positions are probably too divergent for it to be productive for us to attempt to grasp the intimate details of the other’s theory. That’s the sort of thing which I should be doing with potential collaborators. But then perhaps you should be as well? Since your mind seems quite sharp, why not take your theory as far as you can?

        I’m approaching fifty, as well as banking on at least another thirty years of this. Furthermore I mean for the friends that I pick up here and elsewhere to at least keep my head straight as they tag along, and preferably become actively involved. Given various unfortunate paradigms, it should take a community to do what I believe is needed. And in truth I believe that my ideas will prevail with or without me, given my perception of progress in science. Of course I mean to help speed this process up.

        We’d all like it if Mike would post a bit more often than he has recently, though we do generally find other intellectually stimulating places to go. You’ll appreciate that he’s built an unusually respectful community here, though in truth I can’t say that I’ve noticed any sympathy for the ideas that you’ve expressed. Your converse however has proven exceptionally enjoyable! Thus when your name appears in the future, the last thing I expect is “invisibility”.

        Like

          1. I’m sure you do! By the way, thanks for retweeting that Massimo tweet challenging the dualists to explain the “coincidence” that the world’s only scientifically confirmed case of shared subjective experience occurs in two separate people who just happen to share a thalamus! Of course we’ve discussed this case before. Over at his new “pay per blog” site, I challenged him to ask dualists like Chalmers to explain why this might be the only such documented case; since I don’t tweet, I wouldn’t have known that he did actually end up reading the article, thus arming himself with this evidence against supernaturalists.

            For anyone who missed it: http://www.cbc.ca/cbcdocspov/m_features/the-hogan-twins-share-a-brain-and-see-out-of-each-others-eyes

            Liked by 1 person

          2. Thanks Eric.

            So you subscribed? I haven’t subscribed yet, mainly because I haven’t read that much of Massimo’s writing lately, although I probably will if he posts something exclusive to patrons that I really want to read.

            Liked by 1 person

          3. Well Mike, he’s still playing with the format. Even a lot of his most faithful readers don’t seem to have come over yet. Of course $36/year should be nothing for people who spend countless hours monitoring what goes on over there, though this does seem to be an issue so far. And without some of them, no I don’t find it as compelling. (I needs me some Kaufman!) But then I watched him drop Scientia Salon, which did seem quite popular, and previously you watched him walk away from Rationally Speaking. The man is simply a dynamo. I don’t understand how there can be enough hours in the day for him to do what he does.

            Still one thing that has been uncomfortable for me is that his stoicism site has been combined with this one, so we all now see how utterly devoted he is to his secular religion. It’s not for the faint of heart! But in the end he’s done tremendous things for me, and like always, I know that he’ll build this site up into something that I consider valuable. If you haven’t been watching his site much for free however, then that’s your answer.

            By the way, apparently I’m the person to blame for this: https://platofootnote.wordpress.com/2018/07/23/the-patreon-experiment/comment-page-1/#comment-32327

  21. Mike,

    “…my understanding of your position (which may be wrong) is that the distinction between the subjective and the objective is misguided.”

    Your understanding of my position is correct: SOM is misguided and a massively suppressive intellectual architecture. Whenever one is convinced by a rational argument, one must be compelled to consider: What is being suppressed by that convincing rational argument, and who or what pays the price for it? The answer to that question is always the same: what is being suppressed is the objective reality of the “unknown”. And the “unknown” is the objective reality we seek to understand. Humanity’s attempt to discover the objective reality of the unknown is the predicate for, and the very objective of, Western syllogistic logic.

    “My own current view is that we only ever have access to the subjective.”

    That statement clearly reflects the subliminal influence of SOM on the way you reason. To correct this error in reasoning, your statement would read: “we only ever have access to the objective, and it is only because we do not understand that objective reality that we label the experience as subjective, when indeed, what we are experiencing through consciousness we are unable to determine with the certainty we desire.” The experience then becomes an objective experience which is “indeterminate”.

    “So the objective is, for us, the models we construct from the subjective and which are predictive of future subjective sensations.” (That is a true statement predicated upon the SOM paradigm.)

    SOM as an architecture of thought is actively participating in this statement as well. Your statement should read as follows: “So the objective is, for us, the models we construct from the indeterminate objective experiences and which are predictive of future indeterminate objective sensations.” This statement reflects one’s passion for a belief, not a nominal belief beyond the weight of evidence, but a substantial belief based on personal experience of one degree or another.

    Whenever there is uncertainty, which is most of the time, the locus of consciousness, the I of self will make the determination and rule on what a particular experience means. By this process, the objective reality of the unknown becomes subordinate to and must now correspond to our truth, not the objective reality of the truth which is unknown. This dynamic is also responsible for allowing one to have a sense of control, because without a sense of control, there is no sense of self.

    I hope this helps, because SOM is a pervasive architecture of thought that everybody just takes as gospel.

    1. Lee,
      Thanks for the clarification. Just to be sure I’m understanding your view correctly, let me quickly summarize it in my own words. Labeling our inaccurate perceptions of the objective as “subjective” implies an ontological distinction, one that you see as problematic. The only real distinction is in how much of the objective remains unknown by us.

      If I’ve got that right, then I agree. However I still do use the word “subjective”, just without attaching any ontological significance to it. To me, it’s an epistemic distinction. But I’ll grant that a lot of philosophers of mind do see it as an ontological one. It’s largely the point of Nagel’s “What is it like to be a bat?” thought experiment. And I definitely agree that’s a misguided notion.

      So rather than SOM, you might say I more adhere to SOE (subject object epistemology).
      Or do you see even SOE as problematic? If so, what language would you use to describe a completely erroneous perception such as a hallucination or a dream, where the experience has little or no relation to any objective reality (meaning it has no predictive value whatsoever)?

      1. Mike,

        SOE is still problematic. Just because an experience could be a hallucination or a dream, it is still an objective experience, an experience that just happens to be a radically indeterminate one, one that has no real meaning other than the brain short circuiting, seeking more stimuli by entertaining itself or making shit up. When you use the phrase: “…where the experience has little or no relation to any objective reality (meaning it has no predictive value whatsoever),” you are latently drawing a correlation between an experience and the need for a sense of control, i.e., “a predictive value”. Whenever subjective is used within our syntax or vocabulary, it suppresses and excludes the unknown, because now the experience, whatever it is, becomes whatever one says it is instead of acknowledging that objective experience as indeterminate. It is always best to be intellectually honest with oneself by simply stating: “I don’t know”. The more comfortable an individual becomes with articulating those three words, the greater the opportunity for meaning and understanding, because as soon as the locus of our own consciousness makes its decision and rules on an experience to determine for itself what it means, the door to the objective reality of the unknown is slammed shut.

        It’s all a matter of preference; does one want to be intellectually honest with oneself, or does one prefer to create an intellectual construct that makes one feel safe and in a sense of control. Control is an apparition, it truly is the ghost of rationality. The need for a sense of control is the impetus which drives the religious ideologies of both Western and Eastern idealism, as well as the dogma we see within the scientific and academic institutions, let alone the power mongers of our own government. That paradigm needs to shift, and the best place to start is on the individual basis and distancing ourselves from the architecture of SOM.

        1. Lee,
          I’m not quite sure if I understand your criticism of SOE. In my view, SOE (as opposed to SOM) is inherently about the limitations on what we know. It is a framework for being able to do what you urge, acknowledge what we don’t know. When I say something is “subjective”, I’m saying that it is relative only to a single person’s perspective, with all the biases, blindspots, and human weaknesses that every single person possesses. It lacks the reproducibility and predictive track record we usually require to classify something as “objective.”

          “you are latently drawing a correlation between an experience and the need for a sense of control, i.e., “a predictive value”.”

          This gets into what we see as the evolutionary adaptation that brains provide. I think a strong case can be made that the adaptation is to make movement decisions for a mobile organism. Brains take in information from the environment and build predictive models (perceptions), run action/sensory simulations of various decision scenarios (imagination), and make decisions, all in service of reflexive instincts that evolved to enhance genetic survival. When I talk about predictive value, it’s in that context.
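          A toy sketch of that perceive/imagine/decide loop (everything here is invented for illustration: the “world” is a single threat distance, and the scoring rule stands in for a reflexive goal):

```python
# Toy sketch of the perceive -> imagine -> decide loop described above.
# Everything here is invented for illustration: the "world" is a single
# threat distance, and the scoring rule stands in for a reflexive goal.

def perceive(raw_input):
    # Build a simple predictive model (perception) from raw input.
    return {"threat_distance": raw_input}

def imagine(model, options):
    # Crudely simulate each candidate action (imagination) and score it
    # against a reflexive goal: keep distance from threats.
    def simulate(action):
        return model["threat_distance"] + (5 if action == "flee" else -5)
    return {action: simulate(action) for action in options}

def decide(scores):
    # Pick the action whose simulated outcome best serves the goal.
    return max(scores, key=scores.get)

model = perceive(raw_input=10)
scores = imagine(model, ["flee", "approach"])
print(decide(scores))  # "flee": simulated distance 15 beats 5
```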

          “because as soon as the locus of our own consciousness makes its decision and rules on an experience to determine for itself what it means, the door to the objective reality of the unknown is slammed shut.”

          The problem I see here is that much of this happens before conscious awareness.
          Consider optical illusions. https://en.wikipedia.org/wiki/Optical_illusion We don’t consciously decide to experience them. Indeed, they trick our perceptual machinery in a way that we can’t override.

          Certainly if we have time, it pays to scrutinize our initial impressions and reflect on what we actually know, but this is often something that happens after the initial pre-conscious categorization. For example, if I’m walking in the woods and see what I think is bigfoot, but after further reflection realize it could have just been a large grizzly bear walking on its hind legs, I think I’m being honest in the way you suggest. But crucially, this is me revising my theory of the objective, hopefully to a more predictive one, but it remains a theory, a model, a representation of what I hope matches reality.

          I do agree that a desire for control underlies much of religion and science, but within moderation and a personal context, I don’t necessarily see that as a vice. As gene survival machines, what else are we supposed to do to ensure the well-being of ourselves and those we care for? It’s worth noting that science and technology actually succeed at it far more than other endeavors.

          1. Mike,

            I understand your position and I do not necessarily disagree in principle. The SOE model may be a compromise to the SOM model; nevertheless, the SOM model is so deeply entrenched within culture at large that moving away from it will take bold and radical steps. I am no romantic, so I do not see this paradigm shift happening anytime soon, if ever. We see the influence of SOM every time somebody talks about consciousness; consciousness as first person subjective experience rolls off the tongue sooo easily, it’s like, “Yeah, everybody knows the earth is flat, duh.”

            “I do agree that a desire for control underlies much of religion and science”

            You are being quite modest here; the desire for control is pathological and, I will add, absolutely essential to being human, because without a sense of control there is no sense of self; the two are coextensive. Self is an intellectual construct reinforced by a sense of control, but at the end of the day, I also agree in principle with the likes of Daniel Dennett and Thomas Metzinger, even though reductionist theory is woefully incomplete, with only a part of the story being told. From an evolutionary perspective, the pathological need for a sense of control is only problematic in the context of the natural migration to the next evolutionary level of conscious experience.

            Now that’s a bold statement. But it would be naive at the least, and pathologically arrogant at the worst, to think that the evolutionary process is finished with Homo sapiens. And indeed, if there is another evolutionary ontological level of conscious experience, what would that experience be, and what would that experience look like? These are questions my book explores as well.

            I enjoyed the discourse Mike, so thanks……

          2. Mike,

            “I’m not quite sure if I understand your criticism of SOE…”

            I’ve been ruminating on this critique ever since I responded to it. I guess my main opposition to the SOE architecture that you describe is a matter of semantics. You personally may have no difficulty in making the distinction between SOM & SOE. So here is my difficulty: not everybody is you or I. The subject and the object, no matter how they are articulated within syntax or vocabulary, cannot be divorced from the SOM model because both the subject and the object are derivatives of that model. The observer and the observed are another derivative of the SOM model. Inherently, the SOM model asserts dualism.

            “When I say something is “subjective”, I’m saying that it is relative only to a single person’s perspective, with all the biases, blindspots, and human weaknesses that every single person possesses.”

            This is a true statement, and both you and I know what that statement means. Why not add clarity to the statement by saying: “When I say something is subordinate………” The term “subordinate” adds a lucid distinction to the statement which cannot be confused with the SOM model the way that the term “subjective” does. So yes, I think it does boil down to semantics, since meaning & understanding is the targeted objective.

            Thanks for tolerating my fixation on semantics……

          3. Lee,
            I appreciate the follow up. I totally agree that SOM is deeply problematic. It’s largely what I was arguing against in the post, albeit with different terminology. Although I actually think pointing out the distinction between SOM and SOE is another avenue to call attention to the unconscious assumption that there’s something ontologically distinctive about subjective experience. But I can understand your concern about continuing to use that terminology.

            I’ve actually been wondering lately if the amorphous term “consciousness” itself remains productive. I know many cognitive neuroscientists eschew any mention of it, preferring to focus on more concrete concepts like perception, attention, memory, introspection, etc. It may be that as a concept, consciousness needs to go the way of vitalism, the old (now discredited) belief that there was some vital force that separated living things from non-living systems. It’s now understood that life is just physical processes. Maybe the best way to dispense with the hard problem version of consciousness is to dispense with the vocabulary that it attempts to reify.

            The challenge we have, of course, is that we’re talking about something that people very much want to be true. They want to find a reason to believe in it. This is the skeptic’s challenge overall. Often we’re skeptical of ideas people want to believe in, which often makes our message an unwelcome one. I fear it’s the challenge you face with the people who adhere to SOM.

          4. Hey Mike, I’m a little concerned you’re getting into baby/bath water territory. Even though we have figured out a lot about what goes on in living things, we haven’t stopped talking about life and living things, even though we have stopped talking about elan vital. I’m pretty sure Consciousness will always be a thing, but possibly not qualia.

            *

          5. Hey James,
            I’m concerned about it myself, which is why you haven’t seen me take a hard line on this position, and I’m not sure if you ever will. That said, it’s worth asking what the concept of consciousness actually brings to the table that sharper concepts like introspection can’t.

            For example, asking whether newborns are conscious typically leads to people arguing past each other with differing definitions of “consciousness”, but asking whether they introspect leaves less room for ambiguity, and doesn’t sound like we’re questioning whether they’re subjects of moral worth.

          6. Mike, given that consciousness is an umbrella term, like life, then of course we will have need of sharper concepts, like introspection, or metabolism, or respiration, etc. And so when people start talking past each other we will have to drill down to these sharper concepts to sort out who is saying what. And sometimes we will coin new words to express concepts that haven’t been discussed before, like psychule. :). In any case, I’m pretty sure we’re pretty much stuck with consciousness, and I’m ok with that.

            *

          7. James, I’m fine with it as an umbrella term. In fact, that’s the way I use it, as a broad term referring to a variety of cognitive functions. But a lot of people see it, often unconsciously, as a distinct thing or force, like a type of ectoplasm, essentially what Gilbert Ryle called “the ghost in the machine,” and refer to anyone who dismisses that notion as dismissing consciousness itself. Anytime someone asks if X is conscious, as though consciousness is something X either has or doesn’t, they’re buying into that ghostly version.

            The question is how to use the word “consciousness” while making clear which version we’re talking about.

  22. Eric,

    You come across as very mature for a fifty-year-old; I would have guessed you were closer to seventy.

    “Like you I actually consider myself to have a perfect belief in noumenalism.”

    As a noumenalist, here is a historical anecdote that you might find interesting. The Hebrew God’s name is YHWH (without the vowels, of course). The meaning conveyed by that name corresponds to the Parmenidean reality/appearance distinction, insisting that YHWH is separate from any appearance one might assign to it and separate from any opinion one might have of it. There are no qualitative properties or characteristics that can be assigned to YHWH, and there are no opinions that one can have of YHWH. The meaning of the name encompasses all of space and time; there are no boundaries or limitations that exist for YHWH, even when those boundaries and limitations appear to exist. The meaning of the name is reflective of a perfect belief in noumenalism. This evidence is clearly reflected in the first three commandments. Of course, just by viewing our current landscape, it doesn’t take a genius to realize that nobody adheres to those grounding tenets of noumenalism.

    Another historical anecdote: In the Buddhist tradition, Nagarjuna developed the architecture of the tetra-lemma with its feature of double negation in an attempt to deter the disciples of Buddhism from creating an intellectual construct of the ultimate reality, aka, the noumenal realm. Nagarjuna was accused of teaching a doctrine of nihilism because his teachings introduced “emptiness”, his own unique description of the unknown, which again was the noumenal realm. To counter the accusation of nihilism, being the Eastern Maharishi that he was, his response was: “If everything that we see and experience comes from emptiness, then anything is possible.” Unfortunately, the disciples of Buddhism have taken the architecture of the tetra-lemma with its feature of double negation and used it in its convoluted way to create and justify their own intellectual construct of the ultimate reality, aka, the noumenal realm, what we recognize as Eastern idealism.

    Thought you might find that interesting folks………..

    1. Hey, who’re you calling “mature”?! 🙂

      Well maybe so. But since you went “eastern” there, I do have an additional point to make about the sort of presentation that you’re referring to. I follow the methods of the shrewdest political mind that I know of, also known as Mahatma Gandhi. I believe that his methods explain a good bit of why most who have watched and oppose me choose not to debate me in public. He wasn’t being “nice”, he wasn’t being “moral”, but rather he was being “effective”. That’s exactly what will be required in my own quest to develop a respectable community that has its own generally accepted principles of metaphysics, epistemology, and axiology, or a premise from which to better found the institution of science itself.

      1. Eric,

        You’re sharp dude, and I like that. I agree with your stratagem and I hope you are successful. But Dialectic does have its own inherent weaknesses; it’s a double-edged sword. It is an effective means of discourse, pointed and sharp; but make one mistake, and the very words used to make one’s case can be used by an opponent to undermine and dismantle the entire argument. Maybe you should practice up on your Rhetoric, it’s an art form. It is no coincidence that both Dialectic and SOM are the children of Plato, Aristotle and their Greek cronies, because they won the war against the Sophists. To the victor go the spoils, and both Dialectic and SOM have stood the test of time and become our prevailing paradigms. Don’t underestimate the power of Rhetoric; it stood the test of time long before the Greeks dismantled it as an architecture of discourse. It requires a little more patience, the element of passion, the appeal to the senses, but as a form of discourse, it is less risky, and less intimidating. I should know, I’m a master of dialectic and my opponents feared me……. I’m older now and I’ve learned that……… nah, maybe instilling fear isn’t such a good idea after all. Effectiveness, that’s the prize……

    2. Well Lee, I have admitted that I’ll need a bit of help from my friends. But just between you and me, it’s also possible that my skills in the art of rhetoric aren’t too bad. Consider our exchange. An opposing theorist is now wishing me well and offering me advice. Of course a true master at the craft would never make such an admission. I’m not that good!

  23. Mike,

    I was struck a few years back by a statement Richard Rorty once made. He felt that in order for us to move forward in understanding, what was needed was a new vocabulary and a completely new set of metaphors. I’ve been experimenting with vocabulary ever since, and I’ve found the experience both challenging and enlightening. One should not underestimate the power of a new vocabulary and its ability to convey meaning.

    “I’ve actually been wondering lately if the amorphous term “consciousness” itself remains productive.”

    I share your sentiments. Talking about consciousness or conscious experience is always in the context of the experience itself. It’s analogous to trying to come up with a vocabulary that captures a core human nature when all that we have as a reference point are the behavioral patterns, i.e., the effects. The behavioral sciences have been trying to solve that riddle for hundreds of years and are no closer now than when they started. Consciousness encounters the same difficulty. The method of research has to shift to a metaphysical model, one that is generalized, one that contains qualitative properties that will accommodate all discrete forms of consciousness. It’s doable, but it has to be done in incremental metaphysical steps. Having said that, trying to build a metaphysical model of consciousness utilizing the dualistic architecture of SOM is a non-starter. Until one is willing to scrap the SOM model entirely, any attempt to craft a metaphysical model of consciousness is D.O.A.

    “The challenge we have, of course, is that we’re talking about something that people very much want to be true. They want to find a reason to believe in it. This is the skeptic’s challenge overall. Often we’re skeptical of ideas people want to believe in…”

    My models do not encounter those types of difficulties; my models actually garner a reaction of complete and total shock, like I’ve got to be some kind of a maniac, or heretic, or something. Because my models fly in the face of what people actually believe, let alone want to believe, be it the scientific, academic or religious communities at large. So in that context, I am a heretic. I like your characterization of consciousness as being a code word for immortal soul; that was a good one. If one finds that everyone is opposed to one’s models, then there just might be some merit to them….. I can’t remember the theoretical physicist who said it, but his comment was that the reason we don’t have a theory of everything is because nobody has presented one that is crazy enough.

    1. Lee,
      A number of philosophers have attempted to come up with new vocabularies before. The problem is that words are ever shifting things. Success leads to the new terms entering the overall culture where they often start morphing in meaning, corrupting the original message.

      What’s needed is a language that is precise and unambiguous but won’t be modified by the culture. Physicists would say they’ve found such a language in mathematics. Of course, no physics theory is purely mathematical. They always come with “baggage”, metaphorical description in language.

      I have to admit that I’m reflexively suspicious of most metaphysics, even though I know that every scientific theory is, in essence, a metaphysical statement that can never be proven, only disproven.

      “I like your characterization of consciousness as being a code word for immortal soul,”

      I have to own up that I stole that phrase from Elkhonon Goldberg. He uses something like it in his book, ‘The New Executive Brain’, in his brief and dismissive passage about consciousness, concluding that the only reason we still discuss it is that, “Old gods die hard.”

      1. Mike,

        “…I know that every scientific theory is, in essence, a metaphysical statement that can never be proven, only disproven.” (And when all of the constructs have been dismantled, and the residual baggage is finally cleared away, the only thing that is left is…………….)

        I love your position Mike, you are on track to being a true noumenalist, and as a true noumenalist myself, I am compelled to quote Nagarjuna; “anything is possible.”

    2. Lee, at the very beginning you stated “Dismantling SOM would be the best place to start, because fundamentally, there are no such things as subjects and objects, just the things we do not understand, and because we do not understand them, we label them as subjects and objects, crafting an intellectual construct that in the end suppresses meaning.” More recently you said “trying to build a metaphysical model of consciousness utilizing the dualistic architecture of SOM is a non-starter.” But I haven’t seen good reasons for that assessment.

      In point of fact, the more I think about it, the more it becomes necessary to distinguish a subject and an object in order to explain what consciousness is about. In my paradigm (Input —>[mechanism] —>Output) the Input has a very distinct relationship with one particular mechanism (the subject), which it does not have with any other (objective) mechanism, such that the Input might mean “red ball” to mechanism 1 (you) but would not have that meaning for mechanism 2 (me). You could come up with new vocabulary to describe this difference in relationship, but the one we have, subject/object, seems to work nicely.
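      That same-input/different-meaning point could be sketched as a toy example (the mechanism names and the “pattern-42” input are invented purely for illustration):

```python
# Toy illustration of the Input -> [mechanism] -> Output paradigm quoted
# above. The mechanism names and the "pattern-42" input are invented;
# the point is only that the same input can carry a meaning for one
# mechanism (the subject) that it lacks for another.

def mechanism_1(inp):
    # Mechanism 1 has a learned association for this input.
    meanings = {"pattern-42": "red ball"}
    return meanings.get(inp, "no meaning")

def mechanism_2(inp):
    # Mechanism 2 lacks that association, so the same input means
    # nothing to it.
    return {}.get(inp, "no meaning")

same_input = "pattern-42"
print(mechanism_1(same_input))  # prints "red ball"
print(mechanism_2(same_input))  # prints "no meaning"
```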

      *

      1. James,

        “the Input has a very distinct relationship with one particular mechanism (the subject), which it does not have with any other (objective) mechanism, such that the Input might mean “red ball” to mechanism 1 (you) but would not have that meaning for mechanism 2 (me).”

        Correct me if I’m wrong here: In your model, a rock would be an object whereas a human being, either you or I would be a subject…. If this is the case, that distinction cannot be justified because it corresponds to an anthropocentric ontology, an ideology which I do not support. One is then compelled to consider: What is the meaning of “red ball” to a rock? The correct answer is I don’t know, but I guarantee there is one.

        1. [Correcting you]
          In my model, any physical system can be an object or a subject. As an object, that system can be observed/measured which means it can be Input to an event. As a subject that system can produce output in response to appropriate input.

          Now “meaning” can only apply to certain kinds of (input —> output) events. Specifically, “meaning” can apply to events where the input can be said to instantiate semantic information and the output constitutes a valuable response to the semantic meaning of the input.

          So the rock could be a subject with a red ball as input, if you bounce the red ball off the rock, but you would be hard put to ascribe value (relative to the rock) to the output (red ball with different velocity). I personally wouldn’t ascribe consciousness to that event, but a panpsychist might.

          *

          1. Thanks James,

            By your internet stage name, am I correct to assume you live in the Seattle area? I grew up north of Seattle on the outskirts of the town of Snohomish. But now I live in the West central mountains of Idaho.

            As you already know, I am a panpsychist; and in my model, “meaning” is everything. Meaning is the underlying form of any, all, and every relationship within our phenomenal world. I’m not talking about a relationship that is predicated upon the abstraction of law, where there are only two discrete outcomes to that relationship, either obedience or disobedience. I’m talking about a “meaningful” relationship. Refer to my original bullet points, especially point number three (3). In a viable model, meaning cannot be excluded….

        2. Lee, yes I live in Seattle, have for about 23 years.

          Clearly we are using different definitions of “meaning”. When I say that semantic information has meaning, I define that as having a direct causal history link to some physical system, such that the output of the event in question gains its value in respect of that causally distant physical system.

          What do you mean by “meaning” ?

          *

          1. James,

            “When I say that semantic information has meaning, I define that as having a direct causal history link to some physical system, such that the output of the event in question gains its value in respect of that causally distant physical system.”

            I understand this statement and I do not necessarily disagree in principle. Now, in order for this model to be viable, it would have to be inclusive and not exclude anything. Yes? Meaning is not ambiguous. My emphasis is on the bullet point which states that in order for any theory to be considered a viable model, it has to be inclusive; your model excludes things.

            That’s why I find it so fascinating when people have a problem with panpsychism. It all goes back to the “hard problem”. How do physical states give rise to the objective first person experience of consciousness? To avoid any form of dualism, the only obvious answer is that physical states in and of themselves are discrete forms of consciousness. Consciousness is not some magic trick that just pops up out of nowhere in our phenomenal realm some 11 billion years after the hypothetical big bang. Panpsychism is the simplest and most parsimonious explanation. Now, just because we do not understand this axiom or its dynamics, doesn’t mean we should get all twisted up inside and freak out. Our collective understanding of our phenomenal world is about as bankrupt as it gets for we “know” very little IF anything.

          2. Lee,
            “Panpsychism is the simplest and most parsimonious explanation.”

            Hope you’re okay with me jumping in here since I have an opinion on this. I did a post on panpsychism last year. https://selfawarepatterns.com/2017/06/24/panpsychism-and-layers-of-consciousness/

            The TL;DR is that it depends on which definition of “consciousness” you’re using. I identify at least five layers:
            1. Reflexes: autonomous reactions to the environment
            2. Perception: building models of the environment, expanding the scope in space of what the reflexes are reacting to
            3. Attention: prioritizing what the reflexes are reacting to
            4. Imagination: running simulations, expanding the scope of what the reflexes react to in time as well as space. Reflexes become feelings here, dispositions to act instead of autonomous actions.
            5. Metacognition: self reflection, introspection

            If you define “consciousness” as 1, then panpsychism is true, since everything interacts with its environment. But it’s a version of consciousness missing things like perception, attention, imagination, episodic memory, emotion, and introspection.
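            That definitional dependence could be sketched as a toy example (the example systems and their layer assignments below are assumptions for illustration, not claims):

```python
# Toy illustration: whether "consciousness" applies depends on which
# layer the definition requires. The example systems and their layer
# assignments are assumptions for illustration only.

LAYERS = ["reflexes", "perception", "attention", "imagination", "metacognition"]

# Hypothetical systems and the highest layer we (arguably) grant them.
SYSTEMS = {
    "thermostat": "reflexes",  # reacts autonomously to its environment
    "frog": "attention",
    "human": "metacognition",
}

def conscious_under(system, required_layer):
    """True if the system reaches at least the layer the definition demands."""
    return LAYERS.index(SYSTEMS[system]) >= LAYERS.index(required_layer)

# Under a layer-1 definition even a thermostat qualifies (the
# panpsychism-friendly reading); under a layer-5 definition only
# the human does.
print(conscious_under("thermostat", "reflexes"))       # True
print(conscious_under("thermostat", "metacognition"))  # False
```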

  24. Lee,
    I’m pleased that you and Mike have gotten into the problematic nature of language. Perhaps somewhat the way you have a problem with subject/object metaphysics, I have a problem with people referring to our terms by means of “is”, such as “What is consciousness?” I consider this to be one of academia’s most destructive paradigms. Consciousness, time, life — they don’t exist out there to discover, but rather are humanly fabricated tools to define as usefully as possible. I consider it quite unfortunate that normal speech gives us the impression that true terms exist to be discovered. Wittgenstein wanted to use “ordinary language” to help, and though I’m happy that he tried, his solution was clearly flawed. I believe that my first principle of epistemology could be far more effective.

    It occurs to me that you and I have been using the “metaphysics” term in somewhat different ways. I use it literally as “beyond physics”, or the foundation upon which such an endeavor will rest. In order for physics to be effective for example, causal metaphysics is required. Thus from here I wouldn’t state that subject/object metaphysics is inherently misguided any more than subject/object physics. There are all sorts of things that one could mean by these statements. I’d simply ask for clarification.

    By “subject/object metaphysics” I interpret you to mean what I would call “subject/object ontological existence”. This is to say that subject stuff and object stuff exist out there, or dualism. Well I’m as strong a monist as they come. Physicists don’t seem to appreciate when I mention that their metaphysical interpretation of Heisenberg’s Uncertainty Principle puts them in the dualism camp. Nor do they seem able to counter my position. I’ll need to work on my rhetoric there.

    On the floating nature of language, I don’t actually consider this to be a problem. Useful definitions hold their meaning where they’re needed. For example before Newton people didn’t have sufficiently useful definitions for “force”. Then he submitted “the product of mass and acceleration”. Given its effectiveness his definition hasn’t changed much since then (beyond the theory of my hero Einstein).

    I have a definition for consciousness that I suspect would be similarly useful, though far more basic issues seem to need help first. We need a community of respectable professionals that has its own generally accepted principles of metaphysics, epistemology, and axiology from which to better found science. Then I believe that our soft sciences will begin to become more like physics by means of more effective theory.

  25. Eric,

    “It occurs to me that you and I have been using the “metaphysics” term in somewhat different ways. I use it literally as “beyond physics”, or the foundation upon which such an endeavor will rest.”

    Incorrect, I too use the term “metaphysics” in the same context as yourself.

    “By “subject/object metaphysics” I interpret you to mean what I would call “subject/object ontological existence”.”

    This is true. The terms subject and object both owe their origin and existence to the SOM model, which we both agree is a dualistic paradigm. Subject/object physics cannot be compared to subject/object metaphysics because one is an apple and the other is an orange. Subject/object physics is all about the observation of the experiments, i.e., the observer and the observed. There’s a distinction here, and it’s a huge one. One can ask for clarification on the experiment and its findings from the observer. Because of the prevailing influence of the SOM model, I’m just saying that it is too easy to blur those lines of distinction when we use the terms subject, subjective, or subjectivity in our vocabulary.

    Now referring to useful tools: I catch what you are saying and I do not disagree in principle. What you are talking about is developing useful tools: tools like language, tools like mathematics, Newton’s Philosophiæ Naturalis Principia Mathematica, and Einstein’s General Theory of Relativity and the fabric of space/time. None of those models are true; nevertheless they are all very useful, just like the stick a chimpanzee crafts from a twig to extract termites from a termite mound. The only distinction between these useful tools is a matter of scope.

    I like your first principle of epistemology; in principle, it’s the same model used for construction theory, and it also corresponds to what Robert Pirsig was trying to accomplish with his metaphysics of quality (MOQ). I think your first principle could polish some of its sharp edges by utilizing some of the vocabulary that Pirsig employed, like provisional truth instead of “no truth”. Pirsig received much of the criticism of his model only because it had a mystic flavor to it. Pirsig was a noumenalist, and I don’t believe he was ever able to distance himself from that mystical label even though he tried. Now that he has passed, Dr. Anthony McWatt and others have turned his brilliant intellectual architecture of the MOQ into an Eastern form of idealism, which is really sad, because as far as I’m concerned, the man goes on my short list of geniuses.

    Here’s what my model of a core human nature tells me. The vocabulary captures a core human nature with precision; it is concise, succinct, and without refutation. And I must add: nobody, and I mean nobody, is going to like it, let alone accept it. Which is really too bad, because it could be a game changer for the behavioral sciences, specifically when it comes to addressing the scourge of addiction.

    1. Lee,
      It’s good to hear that we’re square with the metaphysics term. I suppose that in a literal sense I simply don’t consider the phrase “subject/object metaphysics” to quite get to the place that “subject/object ontological existence” does. For example I can imagine someone asking me about my subject/object metaphysics. To this I could say that I agree with Lee Roetcisoender as a monist that there aren’t two kinds of stuff. And it’s not that I know this to be the case in the end, but rather that I presume it for functional reasons — a void in causality would mandate there to be nothing to figure out anyway. Why attempt to explain that which has no cause?

      Let’s also consider the concept of subsets. For example “tree” could be defined as a subset of “life”. Furthermore “oak” could be defined as a subset of “tree”… and so on. None are true, though such hierarchies do often seem useful elements of our languages. Furthermore in the models which I’ve developed, I find it extremely useful to define “subject” as a potential subset of “object”, or also “subjective experience” to be a potential subset of Kant’s “noumenal reality”. My models aren’t true of course, though I have found them to be useful positions from which to apparently explain the past and predict the future.

      I realize that you consider this to be an extremely dangerous move. How might I counter all of those dualists who would naturally delete the condition of “subjective as a mandated subset of objective”? And I do hear you. I’m already at war with the physics community in this respect (since they disparage the great Einstein without coming clean that they’re only able to do so by means of dualism).

      All of that remains inconsequential to me however. What matters is that I’ve developed some models by which “subjective” can exist as a subset of the noumenal. Note that reality will be reality regardless of any inconvenient paradigms which hinder our theoretical grasp. Perhaps you’d understand why I’ve taken this path if you did figure out my models pretty well? And with that provision, perhaps I’d understand your “theory of everything”? Competing theorists rarely seem to have much sympathy for the others’ ideas, but who knows?

      1. Eric,

        We are communicating here, and that is not surprising to me once I learned you had a passion for noumenalism.

        “Perhaps you’d understand why I’ve taken this path if you did figure out my models pretty well?”

        I did, and I do understand why you’ve taken that path, and that path is not problematic for either you or myself.

        “All of that remains inconsequential to me however. What matters is that I’ve developed some models by which “subjective” can exist as a subset of the noumenal.”

        And that is the catch, Eric: those models will “only” matter to you, because you know the distinction. The problem is that other people do not know the difference, and that is what I call a fulcrum distinction. It tips a dialectic discourse in one direction or the other, and the underlying framework of the SOM architecture is a model of unprecedented power. Let me be concise here: as long as one is hamstrung by the rules of engagement employed by the other camp, i.e., the SOM paradigm, a noumenalist cannot win an intellectual argument under ANY circumstances. That is why it is absolutely essential to change the rules of engagement, and the only way those rules can be changed is by dismantling the very model the other camp uses to wage dialectic warfare. One has to cut their legs out from under them. So in that context, it is very consequential. These are just my thoughts Eric; nevertheless, my friend, you are on a good path.

        1. Eric,

          I misspoke in my previous post: SOM is not a model of unprecedented power; it is merely an architecture that holds our thirst for knowledge and understanding hostage. As a model of discourse, rhetoric is the model with unprecedented power…

    2. Sounds good Lee. Still you of all people must realize that I will not accept your position simply on the basis of your testimony. That would place faith over reason, which is the last thing that we advocate. But for now let me give you some details regarding myself.

      I was perhaps an overly sensitive kid, and I suppose that this is what instilled in me a burning question. Why do we behave as we do? My epiphany came during trying times at the age of fifteen. From then on my observations began to make sense. Furthermore this answer was in stark opposition to what I’d been taught all my life by friends, family, and society in general.

      I decided that we’re all self-interested beings. This is to say that value is created within us, and that this kind of stuff is all that matters to anything anywhere. I had always been taught, conversely, about the rightness and wrongness of behavior. Furthermore I also noticed that morality was predicted by my new theory. A standard person should naturally tend to advocate moral selflessness in others, given personal self-interests. So strong and misleading has this paradigm been, I think, that our mental and behavioral sciences have remained hollow shells of what they might otherwise become.

      I went off to university expecting to refine my position in applicable fields, but was unimpressed with what I found. Why get myself caught up in the field of philosophy, when it openly states that it has no generally agreed upon understandings to provide? No thanks! Then of course fields like psychology frustrated me, given that they lacked my own value theory and seemed tremendously speculative. About then I was smitten by physics, a field which clearly had what the fields that interested me most did not. This gave me a model from which to understand how our soft sciences needed to change.

      I took physics as far as my relatively standard processing speed would permit (an upper division quantum mechanics class easily kicked my ass!) and then dropped down to get a degree in economics. Though economics is certainly soft, I consider this largely a product of the speculative data on its macro side. Microeconomics does seem to harbor some pretty solid theory. Furthermore it’s the only science I know of which is already founded upon my amoral position of value. In truth much of my university education ended up being gratuitous, given that I’ve made my living in the field of construction.

      I’ve developed my position in isolation from academia over the years, though certainly augmented by magazine and newspaper articles of interest. In January 2014 I decided that my position was relatively worked out and so started blogging heavily. The goal has been to both learn about the state of modern academia, as well as see who might be interested in the ideas that I’ve developed. So far this has been one hell of a ride! Four down and hopefully thirty or so to come. I’ve continued polishing my positions and adding new elements as necessary.

      Of course I was never happy about philosophy remaining without generally accepted principles. If science needs effective metaphysics, epistemology, and value theory from which to work, then this cannot stand! But now with two and a half millennia of western culture embedded in the field, surely we can’t just throw Plato and all the rest out like so much trash? I’ve decided that we’ll need both a traditional field of philosophy which remains fully embedded in the humanities, and a new breed of philosophy which has its own generally accepted principles from which to found the institution of science. I propose four such principles, with the last of them set up to free our mental and behavioral sciences of the distorting effects of our morality paradigm.

      1. Eric,

        I am somewhat taken aback by your brief autobiography, for my own story is not that much different from your own, beginning with being an overly sensitive kid. I self-published a book in December 2015; it’s titled The Wizards Reign: An Inquiry into Acceptable Norms. The paperback version is no longer available, but the Kindle version can still be purchased on Amazon. It was written as a self-portrait; be it good, bad, or indifferent, the narrative does a good job of “showing” what my personal experience was like. The book really doesn’t have anything to do with where I am today; nevertheless, writing it was a catalyst which gave me the ability to move forward on my journey.

        I’m not sure what your goals and objectives are Eric, even though we are both in agreement that the institutions of science need a major overhaul. And yes, we can throw Plato, Aristotle, and all of their Greek cronies out like so much trash, because they are responsible for giving the Western world SOM. And it’s not like we cannot do anything, because with clear objectives and collaboration, changes can be made for the betterment of the human condition. And I do have some creative, clear-cut ideas on how those changes can be brought about. A brief example would be an architecture modeled after the X-Prize paradigm founded by Peter Diamandis. This is where rhetoric would be used to recruit and groom a wealthy philanthropist who has a passion for the humanities. A cash prize could be awarded to the best idea for reinventing our institutions, not only the scientific institutions, but the academic and political institutions as well. My personal favorite would be re-defining policing in America. When I was a kid, a policeman was known as a “peace” officer; now a policeman is known as a law “enforcement” officer.

        Law enforcement is an archaic paradigm that has to shift before any changes can be made on the cultural front. The human being has to be first in the hierarchy, not enforcement of the law. J. Edgar Hoover once commented that law and order takes precedence over any and all types of personal freedom. Human beings are more sophisticated than brute beasts and need to be policed with dignity and respect, respecting that dignity over and above the enforcement of some arbitrary law, such as illegal drugs for example. In the twenties, at least it took an amendment to our constitution to make people who use alcohol criminals. People who choose to use drugs in our culture today are classified as criminals simply by the stroke of a pen through our legislative process.

        My overall goals may be somewhat different from your own, for I sought the ultimate prize: understanding who and what we are, and what this place where we all find ourselves actually is. I still don’t know “who” I am, because there is no such thing as me, only a condition of becoming. Nevertheless, I do know “what” I am. I once commented to a fellow sojourner, a noumenalist like myself: “What if you were handed the keys to the box which unlocked all of the mysteries surrounding our existence and realized that all of that knowledge and understanding changed absolutely nothing?” Without hesitation, my friend, wagging his head, replied: “It changed you!” At the end of the day, if one chooses to invoke teleology, I do think that’s what it’s all about, and that really is all that matters.

    3. Well Lee, if today you aren’t really in the place that you were when you published that December 2015 book, then your positions must be evolving. This would suggest a potential to come around to mine in the end. At the moment I understand your passion, though not your ideas, which I suppose is mutual. But passion matters. Beyond the blogs, you can always reach me here: thephilosophereric@gmail.com

  26. Mike,

    By all means jump in, I appreciate it, and thanks for the link. I can talk consciousness all day long.

    “If you define “consciousness” as 1, then panpsychism is true, since everything interacts with its environment. But it’s a version of consciousness missing things like perception, attention, imagination, episodic memory, emotion, and introspection.”

    If I am confronted with a dilemma, which your question becomes for me, meaning I have to choose between alternatives none of which are favorable, I would choose number 1. Not that “reflexes” is a bad definition, but by its very nature it is constrained and limited in scope. Now, the other qualitative properties that you list are characteristics of our own first person objective experience of consciousness, which is a relative newcomer to the stage. Those qualitative properties could not and should not be projected onto the more primitive, primordial forms of consciousness. Corresponding to the inclusive directive of bullet point number three, there is one primitive, primordial qualitative property which all discrete forms of consciousness share, and that qualitative property is power.

    That is the first incremental metaphysical step. In order to develop a definition of consciousness with power as its underlying form, one must first develop a metaphysical model of power. Hence, the quintessential academic question predicated upon the SOM paradigm raises its ugly head: Is power a subject, or is power an object? According to Kant, our phenomenal world is an “expression of power”, and the only place that magnitude of power could originate is the noumenal realm.

    I’m going to cut to the chase right here and offer up my definition of consciousness based upon Kant’s insight. This definition is a partial model, but it will work to get us started. Consciousness is the form through which power is both realized and actualized, with a character that is determinate and unified. What this means is: because of the awareness of power, power can then be both experienced and expressed. Reasoning is an expression of power, the latest qualitative property of consciousness to arrive on the scene. The first person objective experience of power is radically indeterminate. Why? Because power is the “cause” of all determinations. One metaphysical step at a time…

      1. Mike,

        Dude, you are such a skeptic, but I will not hold that against you my friend. The answer to your question is: None of the above. My resource for the meaning of power comes from an obscure philosopher named Arthur Berndtson, another individual whom I’ve added to my short list of geniuses. Arthur’s work provided for me the final component that allowed my theory of everything which includes consciousness and a core human nature to finally coalesce, something that I have been working on for the last thirty-eight years.

        The material can be found on the jstor.org website: “Philosophy and Phenomenological Research, vol. 31, no. 1 (Sept. 1970), pp. 73–84.” The paper is titled “The Meaning of Power.” Berndtson also has a book titled Power, Form, and Mind. His metaphysics on the meaning of power is incomplete and does not address first cause or consciousness; nevertheless, he does an excellent job with his metaphysics.

        Copernicus published his model in the book De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres) just before his death in 1543. There is evidence to suggest that his model was discovered over four centuries earlier in the Babylonian empire. Paradigm shifts take a long time to occur, if ever. The last great paradigm shift was the Flat Earth Syndrome, so we are overdue. I am a “hardcore” pragmatist; therefore, I do not expect my work to provide the catalyst for a new paradigm shift. Maybe someone will discover my work etched onto a hard drive or thumb drive four hundred years from now, and then, maybe…

        1. Lee,
          Guilty as charged on being a skeptic. I try not to be annoying about it, but I’m a pretty severe case. I’m even skeptical of the skeptics. It’s not unusual for people who learn the full extent of it to apply the label “nihilist”, although I would stipulate that I’m a descriptive rather than a normative one.

          Thanks for the article reference. Just downloaded the paper, although it may be a while before I can read it.

          On causing a paradigm shift, who knows? In this age of social networks, you never know what might go viral. The trick seems to be summarizing your ideas for quick consumption. (Not that I’m great at that myself.)

          1. Mike,

            Skeptic? It takes one to know one. Take your time Mike, Berndtson’s essay is heavy content.

            In an isolated context, I suppose that I’m a descriptive nihilist as well, and I am also a normative one….. But in the broader context, I’m a true noumenalist; and from what I’ve determined so far is, that both you and Eric are on that path of becoming……..

        2. Lee,
          “But in the broader context, I’m a true noumenalist; and from what I’ve determined so far is, that both you and Eric are on that path of becoming……..”

          I’ve gradually become uneasy with owning up to any “-ist” labels. Generally I’m fine if someone else applies them to me (within reason), but I’ve learned that they almost always come with positions people assume I hold that I don’t.

          For example, since I don’t buy substance dualism, you could describe me as a monist, but I’ve had people tell me that proper monists shouldn’t accept mind copying as a possibility because it entails accepting multiple realizability, which strictly speaking is a type of dualism.

          It’s probably accurate to say that I’m a lot of -ists in a weak sense, but virtually none in a strong sense.

    1. Lee, your model matches my model! At least, everything you put into your description above reads onto my model. (“Reads onto” is a term of art from patent law which means everything you mentioned is there in my model, if called by a different name.)

      And for a definition of power, you can go back to Plato:

      “Whatever has a native power, whether of affecting anything else, or of being affected in ever so slight a degree by the most insignificant agents, even on one solitary occasion, is a real being. In short, I offer it as the definition of be-ings that they are potency—and nothing else.”
      Plato, Sophist, 247d–e

      *

      1. James,

        That’s an excellent catch, James! Download that essay from the jstor.org website: “Philosophy and Phenomenological Research, vol. 31, no. 1 (Sept. 1970), pp. 73–84.” The paper is titled “The Meaning of Power.” Spend some time with it and then get back to me with your comments. It’s quite an exhaustive treatise on the meaning of power and will broaden the scope of Plato’s work and add clarity to his insights.

  27. James,

    “Now “meaning” can only apply to certain kinds of (input —> output) events.”

    Maybe you slipped up and didn’t mean to employ this vocabulary; I do it sometimes myself…

    1. [assuming this was a reply to “What does (my model) exclude?”]

      If you’re suggesting that I’m excluding “meaning” from some events, I am, because my definition of meaning does not apply to all events. If you think “meaning” applies to all events then you are using a different definition of “meaning”. I’d still like to know what that definition is.

      *

      1. James,

        Meaning is meaning, James. (Meaning is the lynchpin that holds everything together, which coalesces in understanding.) If we are compelled to debate the “meaning” of meaning, then all is lost…

  28. Just thought I would add an additional dynamic to my critique of SOM being an architecture of dualism derived from dualistic reasoning. The most primordial architecture of dualism from which the model of SOM is derived is the construct of law. Law asserts dualism at the most primordial level. First, there is this “stuff” called law, which is a ubiquitous, magical, mysterious some “thing” which commands unwavering obedience from its unknowing, unsuspecting subjects. Second, there is this other “stuff” which either obeys the law or disobeys the law. Fortunately, there is a model which both the institutions of science and mysticism employ to circumvent the two discrete outcomes of this “stuff” called law; it’s referred to as forgiveness. It’s quite ironic that dualism underwrites and is expressed in all of our intellectual models, the very models we use in an attempt to understand our phenomenal world. For a strict monist, dualism is problematic.

    1. Lee,
      When you say “law”, are you talking about a law of nature, or a societal law?

      In terms of natural laws, I’m sure the term “law” in this context originally meant God’s law. But I think the vast majority of scientists and philosophers today consider the word “law” here to be a metaphor used to refer to strict regularities in how the universe works. Unlike societal laws, there are no lawbreakers, no exceptions, otherwise the “law” is not an actual physical law, so I’m not sure where forgiveness might come in. (Apologies if I completely misunderstood your point.)

      I think I noted somewhere else in this thread that some strict monists have problems with multiple realizability. Would you say you’re in that camp? Just curious.

      1. Mike,

        I am referring to the laws of nature, also known as the laws of physics, be it a metaphor or a literal application of the term. Sylvester James Gates Jr., among other theoretical physicists, cites examples where the ontology of law, an architecture of dualism which “governs” the natural world, is broken. If I recall, he cited asymmetry as one example, contrasted against symmetry observed within experiments, but there were other examples cited, and they did use the vocabulary of the laws of physics being “broken”. Be it a strict regularity or a “literal physical law”, the principle of dualism is a constant: a mysterious, ubiquitous, magical some “thing” commanding unwavering obedience from its unknowing, unsuspecting subjects, another “thing”.

        “there are no lawbreakers, no exceptions, otherwise the “law” is not an actual physical law”

        That is precisely the point of my post; there is no such thing as the construct of “law”, be it a metaphor, a physical law, or the laws of God. The only exceptions to this position are societal laws which we construct, modeled of course after the law of God. Dualism is a mind trap, one filled with contradictions and paradoxes, to which even strict monists fall prey…

        I do not have a problem with multiple realizability where relationships are expressed in terms of correspondence, i.e., different parts responding to one another in an unequivocal partnership, which suggests a linear paradigm of just one thing. This paradigm is in direct contrast to the discrete outcomes of conformity, compliance, and/or obedience, just to name a few idioms, which suggest a discrete paradigm of dualism.
