Massimo on consciousness: no illusion, but also no spookiness

Massimo Pigliucci has a good article on consciousness at Aeon.  In it, he takes aim both at illusionists and at those who claim consciousness is outside the purview of science, although I’d say he’s more worked up about the illusionists.

However, rather than taking the typical path of strawmanning the claim, he deals with the actual argument, acknowledging what the illusionists are really saying: that it isn’t consciousness overall they see as an illusion, but phenomenal consciousness in particular.

First, in discussing the views of Keith Frankish (probably the chief champion of illusionism today):

He begins by making a distinction between phenomenal consciousness and access consciousness. Phenomenal consciousness is what produces the subjective quality of experience, what philosophers call ‘qualia’

…By contrast, access consciousness makes it possible for us to perceive things in the first place. As Frankish puts it, access consciousness is what ‘makes sensory information accessible to the rest of the mind, and thus to “you” – the person constituted by these embodied mental systems’

He then presents a fork similar (although not identical) to the one I presented the other day.

Both camps agree that there is more to consciousness than the access aspect and, moreover, that phenomenal consciousness seems to have nonphysical properties (the ‘what is it like’ thing). From there, one can go in two very different directions: the scientific horn of the dilemma, attempting to explain how science might provide us with a satisfactory account of phenomenal consciousness, as Frankish does; or the antiscientific horn, claiming that phenomenal consciousness is squarely outside the domain of competence of science,

Actually I’m not sure I agree with the first part of the first sentence, that phenomenal consciousness is necessarily something separate and apart from access consciousness.  To me, phenomenal consciousness is access consciousness, just from the inside, that is, phenomenal consciousness is what it’s like to have access consciousness.

But anyway, Massimo largely agrees with the illusionists in terms of the underlying reality.  But he disagrees with calling phenomenal consciousness an illusion.  He describes the user interface metaphor often used by illusionists, but notes that actual user interfaces in computer systems are not illusions, but crucial causal mechanisms.  This pretty much matches my own view.

I do think illusionism is saying something important, but it would be stronger if it found another way to express it.  Michael Graziano, who has at times embraced the illusionist label, but backed away from it in his more recent book, notes that when people see “illusion”, they equate it with “mirage”.  For the most hard core illusionists, this is accurate, albeit only for phenomenal consciousness, although others use “illusion” to mean “not what it appears to be.”  It seems like the word “illusion” shuts down consideration.

It’s why my own preferred language is to say that phenomenal consciousness exists, but only subjectively, as the internal perspective of access consciousness.  It’s the phenomenon to access consciousness’s noumenon.

I do have a couple of quibbles with the article.  First is this snippet:

but I think of consciousness as a weakly emergent phenomenon, not dissimilar from, say, the wetness of water (though a lot more complicated).

I’m glad Massimo stipulated weak emergence here.  And I agree that the right way to think about phenomenal consciousness is existing at a certain level of organization.  (And again, from a certain perspective.)

But I get nervous when people talk about consciousness and emergence.  The issue is that, of course consciousness is emergent, but that in and of itself doesn’t really explain anything.  We know temperature is emergent from particle kinetics, but we more than know that it emerges, we understand how it emerges.  I don’t think we should be satisfied with anything less for consciousness.

The involved neurons also need to be made of (and produce) the right stuff: it is not just how they are arranged in the brain that does the trick, it also takes certain specific physical and chemical properties that carbon-based cells have, silicon-based alternatives might or might not have (it’s an open empirical question), and cardboard, say, definitely doesn’t have.

Massimo has a history of taking biological naturalism type positions, so I’m happy that he at least acknowledges the possibility of machine consciousness here.  And I suspect his real targets are the panpsychists.  But I’m a functionalist and see functionality (or the lack of it) as sufficient to rule out those types of claims.  When people talk about particular substrates, I wish they’d discuss what specific functionality is only enabled by those substrates, and why.

But again, those are quibbles.

It follows that an explanation of phenomenal consciousness will come (if it will come – there is no assurance that, just because we want to know something, we will eventually figure out a way of actually knowing it) from neuroscience and evolutionary biology, once our understanding of the human brain will be comparable with our understanding of the inner workings of our own computers. We will then see clearly the connection between the underlying mechanisms and the user-friendly, causally efficacious representations (not illusions!) that allow us to efficiently work with computers and to survive and reproduce in our world as biological organisms.

Despite a caveat I’m not wild about, amen to the main point!

116 thoughts on “Massimo on consciousness: no illusion, but also no spookiness”

  1. Even though I think of phenomenal consciousness as something the brain *does*, substrate might still be important, insofar as what something can do depends on what it is. The whole-person level of behavior is too coarse, in my opinion, to guarantee consciousness-as-we-know-it. On the other hand, the molecular level is too fine grained. If (and it’s a big if) you can get a silicon-based “neuron” to talk to the other ~10,000 neurons it connects to in the exact same way a carbon-based one would, you’re golden. Phenomenal qualities are how one brain sub-network talks to the others. But if you take out too many of the dance moves, it’s not the same dance, even if it sounds the same to people outside the dance hall.


    1. So the idea is that consciousness requires a physical neural network. And even if we manage to produce the same outputs with an equivalent software neural network, it wouldn’t be conscious. It would be a behavioral zombie.

      Perhaps, but what in particular does this physics enable that is lost with the alternate physics that produce the same information flow? And how could we ever test this proposition?


      1. “And even if we manage to produce the same outputs…”

        I suspect that’s the kicker, getting the anticipated outputs. (I’ve argued you might not.) I think if you do get the right outputs over time (the Rich Turing Test), then you’ve proved computationalism correct.


        1. I agree. If we’re getting the right outputs, it becomes hard to argue we don’t have the inner stuff, or at least something similar.

          If we don’t get the right outputs, then the game becomes playing around with the architecture to figure out exactly what about the original topology is making the difference.

          I do think it’s very feasible that the software version could function, but be too slow for anything useful. Performance issues could drive us toward a hardware architecture more similar to biological brains, although with the much higher speeds of computer chips, it might not have to be nearly as parallel as the biological versions. Maybe.


          1. Somewhere I read about near-field effects — a neuron firing can affect nearby neurons even though they aren’t connected. It seems increasingly clear the connectome alone isn’t sufficient (but is almost certainly necessary).

            Certainly the synapses matter. Things like glial cells or myelin sheathing appear to matter. Near-field effects may matter (they happen so I would assume they matter), which means spatial geometry matters.

            That’s a lot to simulate. (It’s apparently what’s happening in Stephenson’s Fall; or, Dodge in Hell. It’s a physics sim running on entire server farms of quantum computers.) The connectome alone is a ton of data.


          2. Regarding myelin sheathing, I think dendrites, including the apical dendrites, have none; even some axons can lack sheathing, and the axons that do have it have discontinuous sheathing.

            There may be a bigger role for glial cells, since it has been discovered that they are likely not just insulators or structural elements but can have an electrical potential themselves.

            If the connectome model isn’t sufficient, then it would seem that computational models still have a lot of explaining to do in regard to how information actually flows in the brain.


          3. I posted a while back about a study that had managed to demonstrate neighboring neurons communicating with fields, but only in an artificial environment. It remains a speculative idea for neurons in actual brains, where neurons are typically separated by glia.

            One of the things we have to be careful about, as amateurs reading studies or reports of studies, is the difficulty of fitting them into an overall context. It’s why I prefer getting my understanding from neuroscience books, which assimilate all that into a more coherent picture. They’re less up to date, but the authors usually have a good grasp of which results are fringe and which have a lot of corroborating research. Lamentably, the books cost money (a lot of money for hardcore neuroscience ones), and take time to parse.


          4. It got flagged because of the number of links.

            Despite the quantity, the recent neuroscience books I’ve read still don’t mention it, indicating it remains of questionable validity for most neuroscientists. Of course, that could change as more data is accumulated.


          5. Yeah, I know why it got flagged. I have somebody (Wes) at my site who is always getting flagged for the same reason.

            The first article has Koch as a co-author and was published in Nature so the idea isn’t completely fringe.


          6. And yet, Koch didn’t think it worth mentioning in his latest book, even when discussing local field potentials. He does mention it in his earlier book, and even cites that paper. I wonder what changed between 2012 and 2019.


          7. Here’s another Koch paper from 2015.

            https://www.sciencedirect.com/science/article/abs/pii/S0959438814001809

            “There has been a revived interest in the impact of electric fields on neurons and networks. Here, we discuss recent advances in our understanding of how endogenous and externally imposed electric fields impact brain function at different spatial (from synapses to single neurons and neural networks) and temporal scales (from milliseconds to seconds). How such ephaptic effects are mediated and manifested in the brain remains a mystery. We argue that it is both possible (based on available technologies) and worthwhile to vigorously pursue such research as it has significant implications on our understanding of brain processing and for translational neuroscience.”

            It looks like Koch has his name on about 30-40 papers a year. This is not easy to study, since we are trying to measure weak EM fields at a cellular level, but any instruments we use to measure effects are likely to be intrusive to what we are trying to measure.


          8. If ephaptic coupling is a factor, I tend to think it will be in enhancing the peaks of waves. It might provide more oomph. The peak of the wave already makes neurons more likely to fire than at the trough. It might also make the trough less deep. Maybe.

            On Koch and papers, he’s the chief scientist for the Allen Institute, so I suspect he ends up reviewing the work of a lot of studies, which is probably why he shows up at the tail end of so many author lists.


          9. “If ephaptic coupling is a factor…”

            Yes, it certainly could be that. More neurons firing at the same time seems to be associated with conscious perception. There is an EM field feedback thing at work too. As more neurons fire, they trigger more neurons to fire, which also increases the strength of the EM field, which triggers more neurons to fire and could spread the firing to other parts of the brain. I think in my post I compared this to resonance, where one tuning fork starts vibrating because it is in close proximity to another that is already vibrating.

            Part of the theory would be that conscious perception occurs when the EM field reaches a certain threshold strength, but that would also likely be the same time that a large number of neurons are firing in sync. So it could support either a theory that the firing neurons are responsible for consciousness or one where the EM field is. In a sense, the two may be opposite sides of the same argument.


          10. When talking about consciousness, we always have to be careful to distinguish between arousal (wakefulness) and awareness. Higher frequency oscillations (gamma waves) are definitely associated with arousal. But as I understand it, even someone in a vegetative state can go through sleep/wake cycles, including transitioning between different wave frequencies, but with no awareness.

            Awareness seems to require more selective activation. It’s dependent on the overall oscillations, but as more of a prerequisite than a transmission mechanism.


          11. That gets tricky because some evidence suggests that some in an apparent vegetative state are partially conscious.

            You can also bring in Koch’s consciousness meter, which he does discuss in his latest book, and which works by zapping the brain with a magnetic pulse and then analyzing the resulting EEG patterns.


          12. The thing is, to detect the consciousness of those in an apparent vegetative state you need a brain scanner, one that will show differentiated activity compatible with what the patient is being asked to think about.

            There’s no doubt that a magnetic pulse can induce neural activity. The question is whether neurons themselves generate anything sufficient for that, and if so, whether it’s anything significant in terms of information processing.


      2. By hypothesis, it’s different physics, so you can verify it by doing a physical analysis. Also, it’s *not* “the same information flow” except at a very coarse-grained, whole organism level. And *behaving* a certain way at the whole-organism level is not exhaustive of what I care about. I also care about *feeling* certain ways. And I resolve the semantic ambiguity of “feeling” by accepting that the meaning is spread across the many equally-good size/activity scales for that referent.


        1. Right. But a system that behaves like the original would have to have similar intermediate processing states to the original. They may not be the same exact feeling states as the original, but they’d have to have their own version of those states. But from the new system’s perspective, it would look at the original and wonder if it had the same feeling states as it. Neither system would have access to the other’s internal experience. They could only extrapolate from their own experience what the other system was feeling.

          If it were a mind uploading situation, the copy’s consciousness could be utterly different from the original’s. But the copy would remember its experience as the original in terms of its current experience. It would never be able to know the difference. If the original system felt pain, but it only feels pain-alt, its memory of feeling pain as the original would be in terms of pain-alt. It would never be able to make a comparison, or even know the difference.


          1. “a system that behaves like the original would have to have similar intermediate processing states to that original.”

            I don’t see why. Show me the math?

            “from the new system’s perspective, it would look at the original and wonder if it had the same feeling states as it.”

            Yes, exactly. Like my robot from a few posts back, who pities us poor humans that lack compsciousness, making do with mere consciousness instead.


          2. Show the math? If I could do that I’d be having it peer reviewed. 🙂 No doubt all kinds of different internals are possible, but to get the same outputs given the same inputs, at least equivalent information flows have to happen. There are many ways to write a word processor, but all of them are going to involve holding a representation of the document in some manner.
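
            Just to make that concrete, here’s a toy sketch (my own, purely illustrative, with made-up class names): two tiny “word processors” with completely different internals that are indistinguishable from their outputs alone, yet each necessarily holds some representation of the document.

            # Toy sketch (not from the discussion): same behavior, different internals.
            class StringDoc:
                """Holds the document as one big string."""
                def __init__(self):
                    self.text = ""
                def append_line(self, line):
                    self.text += line + "\n"
                def render(self):
                    return self.text

            class ListDoc:
                """Holds the document as a list of lines."""
                def __init__(self):
                    self.lines = []
                def append_line(self, line):
                    self.lines.append(line)
                def render(self):
                    return "".join(l + "\n" for l in self.lines)

            # Same inputs, same outputs, but both hold *some* document representation.
            a, b = StringDoc(), ListDoc()
            for line in ["Hello", "world"]:
                a.append_line(line)
                b.append_line(line)
            assert a.render() == b.render()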

            I totally missed the compsciousness spelling before. Nice. I agree. Of course, whether compsciousness is just a variant of consciousness or another thing in and of itself is in the eye of the beholder.


          3. I thought you were claiming more than that the input-output mapping, when treating the whole body as a black box, is the same. I thought you were implying that the top-level internal structures would have to be isomorphic too.


          4. I am still unconvinced of the functional isomorphism implication. A Giant Look Up Table (GLUT) provides the same whole-organism input-output mapping, but strikes me as a very different functional arrangement.

            By the way, I just now got the email from Aeon offering a link to Massimo’s piece. It’s great, except for the way he sets up “the fundamental divide”. He leaves his own view out of the list of options in this “fundamental divide”! The situation in philosophy of mind is not quite as bleak as he paints it. Views like his and mine are not exactly rare, even if they don’t get as much attention as (duh!) they deserve.


          5. The problem with GLUTs is that they rapidly grow out of hand. A GLUT large enough to do anything sophisticated quickly exceeds the size of the universe. Any possible system will have to use shortcuts and optimizations, with performance, capacity, and energy constraints causing a convergence to a limited number of options.
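
            To put rough numbers on that (a toy sketch with made-up figures, not a real model): a lookup table has to store one entry per possible input history, so it grows exponentially, while a computed rule producing the same input-output mapping stays tiny.

            # Toy illustration: GLUT size vs. a computed rule for the same mapping.
            def response(history):
                """A computed input->output rule (stands in for 'the algorithm')."""
                return sum(history) % 2  # toy rule: parity of the 1-bit inputs so far

            def glut_entries(input_bits, history_length):
                """Entries a lookup table needs to cover every possible input history."""
                return (2 ** input_bits) ** history_length

            for steps in (10, 100, 1000):
                print(f"{steps:>4} time steps of 1-bit input -> {glut_entries(1, steps):.2e} entries")

            # There are ~10^80 atoms in the observable universe; a GLUT over roughly
            # 266 one-bit time steps already needs more entries than that, while
            # response() stays a couple of lines long.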

            I can’t say I have a very positive opinion on most of the philosophy of mind. The field does occasionally produce possibly useful hypotheses. Higher order thought and predictive coding theories come to mind. But too much of it seems preoccupied with rationalizing why dualism or other cherished intuitions are true after all, dreaming up one impossible “thought experiment” after another as justification.


          6. A GLUT is not a physical possibility, but it is a mathematical one. For a more realistic analogy, consider electric cars vs internal combustion engines. Both types of car will get you to work and shopping, but they’re not functionally isomorphic under the hood. For one thing, electric cars often don’t need a transmission. Low end torque is no problem for them. That example isn’t drawn from psychology or neurology, but I don’t see why it can’t be relevantly analogous.


          7. And yet they’re functionally equivalent enough for us to regard both of them as cars. We can go around on this forever, but ultimately there’s no fact of the matter. Functional equivalence will always be a matter of interpretation. Whether another system with different internals that can do conscious things is conscious will ultimately be a matter of philosophy.


    2. Paul,
      Lately some of us have been entertaining ourselves over at James Cross’s blog by considering the potential for phenomenal existence to be caused by neuron based electromagnetic radiation. So here consciousness would not just be a product of generic information processing (and even by means of a Chinese room), but rather a dynamic which certain material based physics produce by means of associated causal function.

      Observe that if certain EM fields happen to be responsible, then appropriate radiation from all over the brain might contribute to create a single conscious entity, or one experiencer each moment rather than all sorts. So here the ethereal properties of EM waves could provide a ready made solution for the combination problem. If you haven’t yet taken a look, you might enjoy it.
      https://broadspeculations.com/2019/12/01/em-fields-and-consciousness/


    3. “If (and it’s a big if) you can get a silicon-based “neuron” to talk to the other ~10,000 neurons”

      I’ll take your ~10,000 to be a correct estimate. But here is where I have a problem that has lingered in the back of my thoughts for a while.

      If neurons have that many connections and that many connections are required for consciousness, then how can neural signals using solely chemicals and ion flows possibly work fast enough and in sync over the billions of neurons and 10K * billions of connections to produce any sort of unified experience or a functioning consciousness? It might be fast enough in silicon, but that wouldn’t explain how it works with chemistry.


      1. Let me try that again.

        To expand on this a little bit (and maybe Wyrd can do some better math with this than I can).

        In the argument for perception in the frontal lobes, visual perceptions would form in the frontal lobe. I assume a significant amount of processing still takes place in the visual cortex. I think there are a few inches of brain between the visual cortex and the frontal lobe, so I would assume some number of neurons (100-200?) would need to fire almost in sync in the visual cortex and pass information through some number of connections (100?) to reach the frontal lobe, where the information would need to be passed around over another number of neurons (25?) to generate the perception. I think a synapse firing needs about 2-3 ms. Is there enough time for this to even be a possible way for information to flow in the brain?


        1. James,
          You might be overestimating how fast the mind works. Consciousness in particular is pretty slow by nervous system standards. (Nonconscious processes actually happen much faster.)

          But action potentials reportedly propagate at somewhere between 1 and over 100 meters per second (depending on the axon thickness and amount of glial insulation). So I don’t think the distance between the parts of the brain is that much of an issue.

          What might be more of a factor is the number of chemical synapses in the circuit, since the conversion from electric to chemical and back to electrical signaling takes time.

          I’d also say that the notion that there is a full resolution image of the perception in the frontal lobes is one I’m personally skeptical about. That seems computationally wasteful. I suspect it’s more complicated. I would think the frontal lobes have action plans, with parameters set from the imagery in the visual cortex and higher resolution sensory regions. In other words, each representation higher in order needs to add value on the transformation from sensory input to motor output. But I’ll admit this is my own speculation.


          1. In the 1 – 100 meters per second, does that factor in that the signal(s) may need to pass through multiple synapses? I mean, it isn’t like there is a direct line from neurons in the visual cortex to the neurons in the frontal cortex. Every neuron doesn’t have a direct connection to every other neuron. I am seeing 1-5 ms for a synapse to fire, but if the overall information needs to go through 100 synapses (of course being modified along the way) then we are in the 100-500 ms range.
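
            Just to put the arithmetic in one place (a back-of-envelope sketch using the rough figures from this thread, not measured values): at higher axon speeds the conduction time over a few inches is small, so the number of chemical synapses in the chain is what dominates.

            # Back-of-envelope latency: conduction time plus accumulated synaptic delays.
            # All numbers are the rough ones from this conversation, purely illustrative.
            def path_latency_ms(distance_m, velocity_m_per_s, n_synapses, synaptic_delay_ms):
                conduction_ms = distance_m / velocity_m_per_s * 1000.0
                return conduction_ms + n_synapses * synaptic_delay_ms

            # ~10 cm from visual cortex to frontal lobe, ~2.5 ms per chemical synapse
            for velocity in (1, 10, 100):          # m/s: slow unmyelinated to fast myelinated
                for n_synapses in (5, 25, 100):    # synapses in the chain
                    t = path_latency_ms(0.10, velocity, n_synapses, 2.5)
                    print(f"{velocity:>3} m/s, {n_synapses:>3} synapses -> ~{t:6.1f} ms")

            # With fast axons and only a handful of synapses the total stays in the tens
            # of milliseconds; a 100-synapse chain at 2-3 ms each lands in the 250+ ms
            # range regardless of axon speed.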


          2. I guess I’m lacking some basic understanding of how signals move about but it might not be just me. I’m not exactly finding clear explanations anywhere else either.

            There is plenty of stuff about how a single neuron fires. Most of it does not use pyramidal neurons as examples, however.

            To get from the visual cortex to the frontal cortex where do the signals propagate? Deep in the brain or on the surface of the cortex where the apical dendrites terminate? Some other way?

            The signals must go through multiple logic gates (if this is how it works) to get routed to the right spot, so there must be processing along the way.


          3. From what I’ve read, most of the information between cortical regions flows through the thalamus, although there are also a lot of cortex-to-cortex connections (the axon drops out of the cortex but runs to another cortical region), and a lot of information flows through regions like the corpus callosum. There’s also a lot of indirect information that flows through other subcortical regions such as the basal ganglia, amygdala, etc.

            There is information that flows laterally through the cortex itself, but that really is one region affecting adjacent regions. I wouldn’t think it’s a meaningful pathway between, say, the occipital lobe and the frontopolar prefrontal cortex.

            That said, don’t trust anyone who confidently tells you they understand how all this flows yet. There’s a lot known on the connections, but still major gaps on what they mean.


  2. Thinking in terms of how to engineer something phenomenally conscious, I’d want to pay more attention to that part of our mind that views and does something with the ‘illusion’ or the ‘user interface’. In effect then we can design this thing (the subset of mind that is the conscious entity that is waiting for some content) so that ‘what it is like’ for it to have certain conscious content presented to it translates directly into specific feature of its architecture. We should then be able to study it and see why it would feel like it does in a given circumstance. The feeling is real, but it is subjective to that entity and played out in what it does over time. For me that feeling is bound up with the relation between external senses, internal representations, positive or negative valence, potential attention/action sets and predicted outcomes, both sensory and valenced. (Sometimes wish I could insert a picture here!)


    1. In general I like the user interface analogy but I think it breaks down at some point. I think the thing that does something with the user interface is part of the interface. It isn’t something different. If there is an illusory aspect to consciousness, it is the illusion of something apart from the interface.


    2. I agree. I think there’s a lot to be said for looking at the boundary between sensory and action-oriented processing. Of course, that happens at multiple levels, from innate reflexive reactions to habitual reactions. But the level we’re interested in is the planning one, or at least where planning may potentially happen.

      On inserting a picture: if you have it on the web somewhere and paste in a URL to a gif or jpg, WordPress usually renders the image. (Although not in the email notifications, just on the website.)


    3. [been tryin to decide where to jump in … choosing here]

      My goal here is to explain my current best understanding of how this whole thing could be engineered. Take it as a thought experiment.

      First, I have to explain that, thanks to a tweet from Keith Frankish (the illusionist), I have become aware of Ruth Millikan’s most recent book: Beyond Concepts. I only read Part I so far, but that’s been enough. Millikan’s work is about semantics, and more specifically, how natural systems develop semantics, which she refers to as biosemantics and we here have occasionally called teleonomic semantics.

      The goal of Part I of her book is to explain unitrackers and unicepts. A unitracker is a mechanism [!!!] that tracks one thing, which she usually calls the target. It can get input from all over, multiple senses, memories, etc.. She avoids using the term “concept” because that term carries certain baggage in at least one of her fields (cognitive linguistics?), but using a naive, popular understanding, the “target” being tracked is essentially one concept (possibly a unicept, but see below). So there might be a unitracker for “cat”, and one for “tiger”, and one for “that tiger there”. Unitrackers can be essentially permanent (cat), temporary (that cat there), can change (that cat there is named “biscuit”), can go away (“no, I don’t remember a cat named Biscuit”). Unitrackers can track abstractions, like the president of the U.S.

      A unicept (coined by Millikan to get away from baggage mentioned above) is the target of a unitracker that can get into consciousness, i.e., a target you can become aware of. Millikan never really talks about consciousness, but this is my understanding of the difference between targets which are unicepts and targets which are not unicepts.

      Millikan does not speculate as to the neural mechanisms of unitrackers, but that doesn’t mean I can’t. I’ll be blunt and brief so that I can put it into the context of this discussion. I speculate the unit of the unitracker is the cortical column (more or less). Thus, the whole cortex is pretty much unitrackers. Some of these unitrackers track direct sensory inputs, so for vision they track whatever is coming over the optic nerve. Because some processing has happened in the retina, these aren’t necessarily just pixels. Some unitrackers use these primary trackers to track low level visual features, like edges, boundaries, etc. And then more unitrackers use these secondary trackers, and so on.

      Note: activation of specific unitrackers can affect the input unitrackers. So if the “cat” unitrackers is already activated, it may influence its input unitrackers, like “eyes”,”fur”, etc. You can consider that predictive feedback.

      So how do we get consciousness? We (some of us?) have talked about multidimensional vectors, and how they can be used to combine (and subtract) concepts. So, king – man + woman = queen. What if you had a reasonably small set of neurons, say, 500-1000, that could instantiate such vectors and the functional vector operations. Chris Eliasmith has demonstrated computer simulated biologically plausible neurons that can do this. What if any given unitracker sent exactly one axon into this vector network such that, by itself, it induced a unique vector. That unique vector would count as a representation of the target of the unitracker. Thus the vector network would have the capacity to represent each unitracker that sent in an axon. Now what about the output of this vector network? What if the output went broadly out to other unitrackers. These unitrackers could use this as input with regards to their individual targets. So if the vector network was showing “tiger”, the “cat” unitracker might become more activated, but the “sand” or “Trump” or “flying” unitrackers not so much.
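
      A minimal numpy sketch of that vector-arithmetic idea (toy random vectors, not learned embeddings or Eliasmith-style neurons): build concept vectors out of shared features, and the king minus man plus woman analogy falls out of simple addition and subtraction.

      import numpy as np

      rng = np.random.default_rng(0)
      dim = 512  # dimensionality of the "vector network" (arbitrary)

      # Toy feature vectors; a real system would learn these.
      royalty, male, female = (rng.normal(size=dim) for _ in range(3))
      king, queen = royalty + male, royalty + female

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      candidate = king - male + female  # the classic analogy operation
      print("similarity to queen:", round(cosine(candidate, queen), 3))  # ~1.0
      print("similarity to king: ", round(cosine(candidate, king), 3))   # noticeably lower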

      So unicepts are the targets of those unitrackers which can be instantiated in a vector network. The instantiation of a unicept in the vector network is an experience. The qualia associated with this experience is the unicept, i.e., the target of the unitracker.

      Note that this vector network is effectively a global workspace. All of the unicepts can be considered to be competing to be instantiated. Whatever controls which unicepts are in fact instantiated is essentially controlling attention.

      Note also there is no single “viewer” of this workspace. Instead, there is an audience of unitrackers.

      Note also that there are a wide variety of unitrackers. Some can be goals (get beer from fridge), some can be action plans (move body to fridge, open door, grab beer), some can be physical objects (fridge, beer), some can be words (“fridge”, “beer”).

      So is this plausible? Supported by evidence?

      *
      [of course, the Prefrontal Cortex is the home of goal-type unitrackers]


      1. Much of it sounds plausible to me, although I’d have to read more to understand the exact value in the specific terminology. My issue with semiotics has been, not that I think it’s wrong, but that I’m not sure what value all its blizzard of definitions bring.

        The part I’d be most skeptical of is equating a unitracker with a cortical column. I doubt the mapping would be that clean. I tend to think a unitracker would be a vast hierarchy of activations, converging on a small area (Damasio’s CDZs), and diverging from them during imaginative or predictive retroactivation. The CDZ would likely be the unique core of the unitracker.

        I’ll have to give some thought to your description of the global workspace, but I think the idea that there is no single viewer is right, and matches the understanding of most GWT advocates.


        1. I’m not sure identification of individual cortical columns is “clean”, but like I said, seems plausible.

          Damasio’s convergence zones could simply be unitrackers of parts converging on unitrackers of wholes.

          Keep in mind the number of unitrackers is, well, large. Consider you have a separate one for each word that you know, plus each phrase you commonly use, plus each person, place, or thing you know, etc.

          *


          1. Oh, I fully understood that the number of unitrackers is large. To me, the term simply refers to concepts, and all their combinations, permutations, composites, hierarchies, and associations. It’s every thing we can perceive, imagine, believe, or plan.


        2. The alternative to unitrackers, where a single set of neurons is dedicated for tracking just one concept, is multitrackers, where a set of neurons can track multiple concepts, as in the vector-network (global workspace).

          So back to testable conjecture: I conjecture that the architecture of the cortex is most suitable for unitrackers whereas the architecture of some part of some un-named subcortical structure (that may or may not rhyme with Palamus) is suitable for multitrackers.

          *


      2. JamesOfSeattle: The terminology of ‘unitrackers’ is new to me but the concept fits with part of what is needed in an implementable account, and I like that it is getting us more in the direction of control systems, which is where we need to be in my view.

        Some thoughts building on this:
        – A unitracker is modelling what is persistent and useful in input data (or data in the next layer down even if we are not at the sensory level) and mapping that to useful discriminated output categories. The nature of this mapping is predictive and for a purpose.
        – Attention applies all the way down the stack, so that what a unitracker is pointed at to get its data (eg the pixels) is what it is attending to. The parameters of the unitracker are precisely the current values of its attention pointers.
        – We have mechanisms for initiating and terminating unitrackers.
        – Actions plans can amount to joining up an action controlling unitracker to a sensory unitracker, as in linking control of hand to sight of the beer. This does require a distinct mechanism for generating possible connections between unitrackers and instantiating the best one (eg reach with right or left hand, reach for beer or TV remote).
        – Valence is needed to drive the whole process forward, namely a measure of good and bad for the organism that determines what to pay attention to (that with highest positive or negative valence, ie highest salience) and what action is optimal (expected to result in highest future valence).
        - Self-awareness requires that one of the things we abstract and track is ourselves and our relationship to the world.
        - …and just to push this further, what is special about maths and logic is that it is to do with the algebra of combining unitrackers, rather than being out in the world.


        1. Peter, some thoughts on your thoughts …

          As you may know, my paradigm for mechanisms is Input—>[mechanism]—>Output, so when you say “mapping” I think “output”.

          I would say mapping (output) is always for a purpose, but not always predictive, except in the sense of “I predict this output will be a valuable thing to do”. So there are unitrackers for action plans, like “reach and grab”, and the outputs of those unitrackers are signals for reaching and grabbing, probably going to one unitracker for reaching and another for grabbing.

          I’m not sure that “attention” applies in the way you propose. To me, attention implies alternatives. I don’t see a unitracker paying attention to some inputs and ignoring others. I just see it adding up its inputs and deciding how activated it should be.

          With regard to choosing between unitrackers (right hand v. left hand) I anticipate it works as a competition, with both unitrackers being activated while other goal-type unitrackers exert suppression of one or the other.

          Regarding valence, I’m not sure that it is involved as much as you (and Eric) seem to think. I think it is mostly involved in the creation, modification, and destruction of unitrackers, as opposed to their operation. I.e., “good feelings” tend to create them while “bad feelings” tend to destroy them? [Haven’t thought much about this. Will now. Thx]

          Finally, unitrackers for “self”? Yup. Pure abstractions? Yup.

          *


          1. JamesOfSeattle: Generally in agreement and where there’s a difference it’s probably just in terminology or different things that we are putting a box around. To follow up on a couple points:
            – Regarding unitrackers implementing attention, I agree there are higher level choices to be made on what to pay attention to, but this is what I have in mind regarding attention at the unitracker level: Imagine watching coloured shapes appearing and moving across a screen. A unitracker would track an object. What that means in practice, in terms of implementing it as a tracker, is that it is watching the current position of the object plus the places it could go next and is set up to detect if it moved and update the tracked position. In that sense it is paying attention to the right sensory information based on predicting where the object might be next. A similar idea would apply to tracking the colour of the object if that is changing, or the shape. In each case the unitracker is tracking the position, colour and shape parameters as they change, and paying attention to just those pixels that will tell it if there is a change it needs to track. Those attentional pointers are then exactly the parameters that you would read off if you want to know the location, colour and shape of that tracked object. I have implemented this in software and so had to think about how it would work at the practical level.
            – Regarding valence, this seems fundamental to me because without it the brain can do anything at all or nothing – it would have no idea what to attend to, what to do or what to learn. It sets the direction of travel for everything that the mind is doing.


        2. Peter, the way I see it you are giving too much agency to the unitracker.

          Let’s take the example of two objects bouncing around a screen: a red circle and a blue square. Some unitrackers get activated off the bat:
          objects, red, blue, (white, or whatever the background is)
          After some eye movements and time for movement, additional unitrackers will be activated:
          red+circle+object+moving, blue+square+object+moving
          As the objects persist for a certain time, more get added (without losing the old ones):
          (red+circle+object)+trajectory1, (blue+square+object)+trajectory2

          So over time the trajectories will change, and the unitracker of (red+circle+object)+trajectory1 will not be stable, but the unitracker for red+circle+object will be stable. I don’t think it’s correct to say that the unitracker is paying attention to its trajectory. I think it is better to say other unitrackers are paying attention to their inputs, one input possibly being the vector network showing red+circle+object+trajectory3. A unitracker for blue+circle+object might also be watching its inputs. If the circle changes from red to blue, the red+circle+object unitracker will stop being active and the blue+circle+object one will become active. However, the circle+object unitracker will remain active.

          Does this make sense?

          *


          1. JamesOfSeattle: Yes, I’m broadly OK with what you are saying, in that once initiated unitrackers can be fairly dumb and autonomous; and also that they are hierarchically connected, and only really get interesting at higher levels in the stack.

            Regarding the relationship to attention, when I implemented this in software I found the place at which attentional selection comes into play is in assigning a unitracker to a newly detected input. Once assigned it can track and terminate itself autonomously and in parallel with all other unitrackers. However, when a first detection is made and a unitracker has to be assigned, there is a competition for scarce resources (available unitrackers to take on what may be multiple new detections) and serial rather than parallel selection of the most salient new detection, then second most salient, and so on, until time in that cognitive cycle runs out. Interestingly this process seems to correspond to the gamma brainwave frequency and to there being about 8 different slots in working memory, in ways that drop naturally out of implementing it. There are about 8 opportunities to serially map unitrackers to newly detected inputs before time runs out and things move on.
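
            Purely as an illustration of that assignment step (a toy sketch, not the actual implementation): new detections compete by salience, and only about eight can be serially handed to free unitrackers within one cognitive cycle.

            SLOTS_PER_CYCLE = 8  # roughly one cognitive cycle's worth of serial assignments

            def assign_unitrackers(new_detections, free_trackers):
                """new_detections: list of (label, salience); returns {label: tracker_id}."""
                assigned = {}
                # Serial selection, most salient first, until slots or free trackers run out.
                for label, salience in sorted(new_detections, key=lambda d: -d[1]):
                    if len(assigned) >= SLOTS_PER_CYCLE or not free_trackers:
                        break  # time in this cognitive cycle ran out
                    assigned[label] = free_trackers.pop()
                    # Once assigned, a tracker follows its target autonomously, in parallel.
                return assigned

            detections = [("red circle", 0.9), ("blue square", 0.7), ("flicker", 0.2),
                          ("loud bang", 1.0), ("shadow", 0.1)]
            print(assign_unitrackers(detections, free_trackers=list(range(20))))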

            Valence comes in when assigning unitrackers, because salience (absolute value of valence) needs to be maximised. You have to have some criterion, even down to the lowest levels of processing, to decide what matters.

            For a really uniform model, the same concept can be extended up the stack to taking action, because action can also be controlled by a unitracker that is selectively connected back to the unitrackers of a sensed object – if you like, my hand controlling unitracker is paying attention to (taking its input from) the beer unitracker.

            I would say though that the above mentioned attention is subconscious. The attention that we are conscious of is when we deliberately intervene and override the above process, by taking the above process as input and modifying it as a mental action. This is where unitrackers that represent ourselves and our relationship to the world come into play, in representing ourselves to ourselves, and letting us selectively intervene in what are otherwise subconscious attentional processes.


        3. Peter, that all sounds right (and awesome!). Is this written up anywhere? How much of this is in your book?

          So now I have questions like: how do you manage valence? Is that hard-coded? Can you establish goal-tracking unitrackers which influence valence, and therefore high-level attention?

          *


          1. JamesOfSeattle; Yes that is written up in my book (although not expressed as unitrackers, which terminology is new to me) although I am wary of mentioning it here as I know that advertising is frowned upon. Happy to send free PDF to anyone on here who lets me know a personal email address, as the discussions here are stimulating and appreciated.

            The fundamental valence calculation (indicating pleasure or pain) has to be calculated in a way that is hardwired to the needs of the organism and genetically determined. The relation of this to particular situations and attention/action options can then be learned through experience, and quite simple learning tables, automatically updated, suffice to achieve this. I have implemented this learning too in software, in a very simple example, and it is striking how quickly such learning can go from a random starting state to useful behaviours, in respect of both attention and action.
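
            As a rough illustration of that kind of learning table (a toy sketch, not the actual implementation): an expected-valence estimate per (situation, action) pair, nudged toward the hardwired pleasure/pain signal after each experience.

            from collections import defaultdict
            import random

            expected_valence = defaultdict(float)  # (situation, action) -> learned estimate
            LEARNING_RATE = 0.2

            def update(situation, action, felt_valence):
                key = (situation, action)
                expected_valence[key] += LEARNING_RATE * (felt_valence - expected_valence[key])

            def choose_action(situation, actions):
                # Pick the action currently expected to feel best (random tie-breaking).
                return max(actions, key=lambda a: (expected_valence[(situation, a)], random.random()))

            # Hardwired outcomes for a toy world: eating feels good, touching the stove hurts.
            world = {("hungry", "eat"): +1.0, ("hungry", "touch stove"): -1.0}
            for _ in range(20):
                action = choose_action("hungry", ["eat", "touch stove"])
                update("hungry", action, world[("hungry", action)])

            print(choose_action("hungry", ["eat", "touch stove"]))  # settles on "eat" quickly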

            I have not yet implemented high-level goal-tracking unitrackers in software, and will have more time for software experiments from April. I would see these as having explicit access to the subconscious processing, and it is this explicit access to tracked sensory data, attention options, action options and valences that gives rise to qualia and the phenomenal nature of consciousness.

            I looked again at the notes of your architecture, and I think we are very similar, just labelling things a bit differently in some cases, and being more focused on different subsets of the whole.


      3. “I speculate the unit of the unitracker is the cortical column (more or less). Thus, the whole cortex is pretty much unitrackers.”

        So it would follow that if a large portion of cortex was removed then we would lose all the unitrackers found in that part of the cortex. This doesn’t seem to be the way the brain works. Many large sections can often be removed (although some are very critical for consciousness in general or particular capabilities) but most functionality is preserved. Memories and capabilities seem to be distributed throughout the brain with remaining sections able to plug the gaps.

        Sometimes people have a frontal lobe removed to treat epilepsy that can’t be controlled with medication. Results vary but some have no meaningful decline.

        “Forty-eight percent of patients did not demonstrate meaningful postoperative declines in cognition and an additional 42% demonstrated decline in 1 or 2 cognitive domains.”

        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5317375/


        1. A lot depends on whether the removals are just in one hemisphere versus bilateral. Someone with a functional corpus callosum can lose large segments of one hemisphere and often only see modest cognitive decline, although it usually leads to sensory or motor issues on the opposite side of the body.


        2. Also, there is a large amount of plasticity. Unitrackers are created, modified, and removed all the time. In people who are blind because their eyes are damaged, the unitrackers normally reserved for vision get repurposed.

          Memories are distributed throughout the brain because the unitrackers for any given memory are usually multi-modal, involving objects, sights, sounds, people, etc., and so the associated unitrackers are spread across the brain. I expect the activation of one episodic memory involves the activation of all the associated unitrackers.

          *


  3. Interesting article. You might want to check out Brett Weinstein’s latest podcast with Sam Harris if you haven’t already. They discuss Determinism, Many-Worlds theory and ‘Free-Will’. Lots to do about consciousness. You can probably skip the first 20 minutes or so to get right into the action. Cheers.


      1. No, it’s the link I have below. Weinstein’s interpretation of the Many-Worlds theory and determinism aligns with mine, although he’s probably more scathing/dismissive than I’d be about it. The problem with Harris in these discussions is that he is the ULTIMATE rationalist, much like what happened in the first Jordan Peterson discussion on his podcast, ‘What is truth?’, which I still consider one of the greatest philosophical debates I have ever heard despite how disdained it is in social media circles.
        Weinstein keeps mentioning ‘Darwinian fitness’ in his remarks in the podcast below, which I also cannot detach from when I think of these topics, as I argued in my Copenhagen article. Cheers.


    1. Hi Ilyass,
      A caveat is just a limitation or constraint on what is being offered. I think I meant this point:
      “(if it will come – there is no assurance that, just because we want to know something, we will eventually figure out a way of actually knowing it)”

      While technically always true in science, I don’t see the kind of difficulties that would warrant that kind of hedge here.


      1. Oh, I see now.
        Thank you for clarification.
        Well, I guess he is right in saying there is no assurance, like a 100% warranty, unless you just want it to be the case.
        But of course even a short reading of the history of science suggests we will probably understand consciousness.
        Is it helpful to be certain about something?


        1. Generally no. But there are problems in science that we may never crack, such as whether there are other universes, the measurement problem in quantum physics, or what’s happening in the singularity of a black hole. (Although I hope there are always people trying.)

          The brain, in contrast, is very difficult, but there are no fundamental barriers to studying it. I don’t know that it’s helpful to imply it falls into the category of truly intractable problems.


          1. Or like the beginning of the universe.
            I agree.
            Time will be the judge, I’m excited to find the scientific explanation of consciousness.
            BTW what do you think about Graziano’s theory? Do you think it has flaws?


          2. In general, I’m a fan of the attention schema theory. If you search this site, you’ll see I’ve often written about it. But I think it has to be fitted into the broader global workspace theory framework. (Which Graziano in his more recent writing admits to.) AST provides the mechanism for top down control of attention. But it’s missing the affective feeling (emotional) aspects of consciousness. Keith Frankish pointed out that what’s also needed is a response schema.
            https://selfawarepatterns.com/2020/03/04/the-response-schema/


          3. I’m actually in chapter 5 of his latest book.
            I asked you because I want to know all the criticisms, well, the strong ones.
            I know this question is a bit crazy, and I’m jumping between topics like a scared chimpanzee, but bear with me please, I’m curious 🙂
            What do you think about the unfalsifiable simulation hypothesis?
            And is ASI possible? What do you think about Nick Bostrom?


          4. On the simulation hypothesis, as you note, it’s not testable, at least unless there are flaws in the simulation that it allows us to perceive. But even if there are, how can we know we’re seeing an actual flaw, or just an aspect of reality we don’t understand? Maybe wave / particle duality is such a flaw, but if so, do we have any way to establish it?

            Can’t say I’m a fan of Bostrom. There are dangers with AI, but I think they’re mostly overblown. He’s worried about a malicious god-like AI, and we’re still trying to figure out how to make something with the spatiotemporal intelligence of a honey bee.


          5. Thank you for having such patience to reply to my crazy questions.
            Yeah, it’s not testable, but I guess that doesn’t disprove it or prove it.
            Like God, it might exist, though almost certainly not the god we can think of. (Simulation is more possible since we can now run incredible simulations of reality.)

            I think what Bostrom is saying, as I understand it, is that we’re building AI; this present AI looks stupid, but progress is being made, and when we get to the point where AI becomes as smart as a human and can adapt, it will be smarter than any human can even imagine, because of the difference in information processing power.

            So when we have a very smart AI, or AGI, we will be like mere chimpanzees in comparison; whatever happens next, we will have already lost control.
            It’s a legitimate concern IMO.
            I believe you hope to make AGI, right?


          6. No worries on the questions. These are exactly the types of discussions I enjoy!

            My take on the simulation hypothesis is that if our reality is an illusion, it appears to be one that extracts painful consequences for not taking it seriously. I’m not sure we have any choice but to play the game.

            On Bostrom, I think he assumes an entity that is super-intelligent in certain narrow ways and ignorant in others, in just such a combination that leads to catastrophe. It’s conceivable, but as AI gets more intelligent, in all probability it will do so not just in its means but also in its ends.

            On making AGI, not me personally (although some commenters here are), but I am interested in understanding how they might plausibly work.


          7. Thank you.
            On AI, I think Sam Harris’s TED talk gives us an idea of what the concern is about.
            The biggest problem with giving AI the ability to adapt is control.
            It may go way out of control, so that if something goes wrong with it (it need not be conscious to pose this threat), we will be helpless.


          8. I’ve seen Harris’s talk. As usual, he takes common fears and predispositions and articulates them with a gloss of intellectual sophistication. But I don’t think he has any real insights.

            My view is a much bigger danger with AI is what humans might choose to do with it. For example, an AI could monitor you with far more patience and diligence than any human ever could. (On the flip side, a defensive AI could protect you from that surveillance much more consistently and capably than you could yourself.)


          9. Well we disagree on that, but I hope you’re right. Maybe we continue that discussion after I know more about AI.
            Of course I’m not even sure if ASI and AGI are possible, we’ll see in the future.
            I have one question in another topic, it’s a bit about life.
            Can you suggest a way to accept reality?
            I mean, maybe I have to explain:
            When I was 14 I became very interested in my religion (Islam).
            By age 16, I had many doubts, and I accidentally watched a video of an Arab atheist activist, the Egyptian Sherif Gaber.
            As you can imagine, I was shocked, the pieces came together, and suddenly I realized everything, but I denied it.
            After a traumatic experience I became an agnostic atheist.
            To be honest, the last thing that was comforting me a bit is the mystery of consciousness, but now I want to know what it is scientifically.
            I still find comfort in the unknown, because it allows to search for answers more.
            But I think this comfort is bad for me.
            For example, I find comfort in not knowing what happens when we die; the biggest probability is eternal sleep, but I hope for something else.
            Maybe you can help? Being raised as religious and then becoming agnostic is not easy at all.


          10. I had a similar journey, although mine began in Catholic Christianity, and was stretched out over a longer period of time. Still, the final transition was jarring, so I understand your distress.

            I’ve found some comfort in the Epicurean philosophy, which could be described as prudent hedonism. But that philosophy doesn’t promise an afterlife, just that there’s nothing to fear in death.

            Yes, death is likely eternal sleep, but it’s a sleep we awakened from at birth and visit every night. We’re programmed to want to avoid that eternal sleep as long as possible, but I think it helps to understand that it’s programming that’s there for evolutionary reasons, not because it’s providing real insight into that state.

            Put in the Epicurean fashion: Death is nothing to us. When we are, death is not. When death is, we are not. We will never actually experience it.
            https://en.wikipedia.org/wiki/Epicureanism

          11. Yeah, I love Epicurean philosophy. However, there is a flaw: eternal sleep terrifies me because of the lack of change.
            There is no pleasure, no pain, no good, no bad.
            I guess Spinoza's philosophy is also good.
            We are the universe observing itself, as Sean Carroll says; that's a bit more comforting.
            Still, the eternal state of unchangingness (is that a word?) is terrifying.
            Do you think it's bad to hope something will happen after death?
            I'm not a fan of the singularity movement, but I can't deny it is attractive.

          12. "eternal sleep terrifies me because of the lack of change. There is no pleasure, no pain, no good, no bad."

            It might help to dwell on the fact that you'll never experience that lack of change, that absence of pleasure, pain, good, and bad. You will only ever experience change, pleasure, pain, good, and bad. You'll only ever experience being alive (which unfortunately could include the process of dying), never death.

            On hoping for something after death, it’s always possible someone will manage to create a technological afterlife. I’m not inclined to put too much hope in that scenario for those of us alive today. It’s also possible we are in a simulation and the simulation owner will provide some form of afterlife, but again, I’m not inclined to go there. But I don’t know that it hurts anything to hope for it.

          13. As I go through life and have great experiences (enjoying time with friends, reading about science, learning more about the world, and helping people), death is a threat to all of that; it will take away everything good (and bad).
            I think there is a misunderstanding between us.
            I'm not afraid of death because of death itself; I know that (if oblivion is what awaits us) I will not be there to experience anything at all.
            But it robs me of the chance to experience things, new things.
            Personally, I wouldn't be afraid to live forever, because the world (universe) is endless.
            But I'm afraid of death because the universe is endless, while I'll end when I die.
            Note: those last sentences look weird lol

          14. Another speculative possibility I didn’t mention above, but related to your last point. If the many-worlds interpretation of quantum mechanics is true, then some version of your consciousness, no matter how improbable, will live on until the heat death of the universe. If the MWI is true, we’ll all discover it in some branch of the wave function as we live on, despite the increasing improbability of us doing so.

            Some of those variations will be a good existence, some bland, some horrible. Who will be luckier? The versions of us that end sooner, or the ones that go on to experience a long, horrible existence across vast expanses of time?

          15. Maybe, but the problem is: we can't test the MWI.
            I don't want to look too picky.
            But those versions are separate from my own consciousness.
            So when I die, oblivion is still all I get.
            But thank you anyway; this discussion is great. Since we're both agnostics/atheists, it's good to know there are people who can understand my pain.

          16. Another variation is if space is flat and infinite, then every pattern of atoms eventually repeats itself infinitely. Not only that, every variation of every pattern. So somewhere in infinite space would be other copies of you. Similar to MWI, some of them would continue to live on, no matter how improbable it might be.

            On worrying about your own consciousness, consider this. All the atoms in your brain will be recycled over the next few years. So then, what connects the consciousness of you today with the consciousness of you as a child? Or the consciousness of you as the (hopefully) old person who will finally die? In the end, it’s information, patterns. But if information is what connects the disparate versions of you along your timeline, why doesn’t it connect you with the other you’s in infinite space or in other branches of the wave function?

            Indeed, any technological afterlife would work by instantiating a copy of you. If that would be comforting, then why wouldn't the other copies of you also be comforting?

            All of this is, of course, highly speculative. But if you’re looking for hope for an atheistic afterlife, they’re probably as good as it gets. Hugh Everett III, the originator of the MWI, reportedly believed in quantum immortality, and so wasn’t concerned about his mortality in this universe.

          17. Very true. Nor can we test the infinite space scenario I mentioned above. It's all speculative, but speculation that is an extrapolation from known physical laws. In both cases, you have to actually add assumptions to what we currently know to avoid them. That doesn't mean they're true, only that they're possible.

          18. Yeah, the infinite space scenario is amazing; I wish it could be true. Have you watched Dark on Netflix? A mind-blowing series; almost exactly what you described is there, except it's… spoiler.
            Yeah, speculation that stays consistent with what we know really is possible.
            I wish we could test these ideas scientifically.
            By the way, do you agree that we can't know everything? I guess that's also comforting, but also logical.
            Most people comfort themselves with irrational beliefs; I tried that, and it only made things worse.
            Fun fact: after realizing the problems in my religion, I almost became a New Ager lol.
            But after some time I felt like I was lying to myself, and that's an incredibly bad feeling.
            So now I take logical thinking and experience (science) as my guide.
            So do you think "we can't know everything" is a false conclusion on my part?
            And thank you very much for this interesting discussion.

          19. I haven’t watched Dark, although I’ve definitely heard of it. Have you seen Devs? It has themes along the lines of this conversation. (You have to watch the whole series for all of them to become evident.) Just did a post on it:
            https://selfawarepatterns.com/2020/05/02/devs/

            As patterns that exist within the universe, that is, patterns that are subsets of the overall set of patterns, we definitely can't know everything, at least not to the extent of Laplace's demon. That said, reality appears to be a series of repeating patterns that follow rules, and we seem able to learn a lot about those rules.

            We’ll never know it all, but we can learn a lot, and I don’t think we should ever stop striving to know more. And on any one particular topic, I don’t know that it’s ever productive to assume we’ll never understand it. There have been too many statements in history along the lines of, “We’ll never know X”, only to have scientists eventually figure out a way to study it (sometimes only a few years later). Today’s hopeless metaphysics may be tomorrow’s science.

            I enjoyed the discussion. Hope we have many more!

          20. I'll definitely check out Devs.
            I like your epistemological humility. Of course we have to strive for more knowledge as long as we exist.
            "I enjoyed the discussion. Hope we have many more!"
            I wish that too. I actually want to meet people like you, people who are like me. I became poetic lol.
            By the way, you seem to take consciousness to be information processing.
            Isn't that a version of IIT?

          21. Thanks!

            On IIT, well, if it were actually about information processing, it might be.

            I currently lean toward variations of global workspace theory, with additions like Graziano’s attention schema. Ironically, these are much more about information processing than IIT.

          22. Validating Graziano’s theory? I think it’s going to take continued study of the brain, mapping neural circuitry, learning about the signalling that takes place, and seeing to what extent it remains compatible with his model. Graziano actually takes some stands on where in the brain the attention schema might be, and that seems like it will be testable sooner rather than later.

            Attention is the focusing of resources on specific content or activities. It's a complex, multi-level process in the brain. There is bottom-up attention, such as focusing on the spider you suddenly feel crawling on your arm, and top-down attention, such as choosing to focus on reading this comment. Attention is a competitive process, with different percepts and activities all vying for the focus of limited resources.

            Graziano's attention schema would be a core part of top-down attention, the mechanism that helps the brain decide how to "load the dice" toward certain outcomes. In that sense, I think something like the attention schema may be inevitable, although the details could end up being different from what he envisions.

        1. Most people think great God will come from the sky
          Take away everything, and make everybody feel high
          But if you know what life is worth
          You would look for yours on earth
          And now you see the light
          You stand up for your right, yeah

          That's a popular lyric from Bob Marley and The Wailers. Many atheists dismiss how much "feel good now" hope exists for the prospect that there ultimately will be justice and good. I don't. But it seems to me that a person who has been robbed of his or her supernatural hope can only attempt to make up for that by finding hope in this world.

          1. Thank you for your beautiful words.
            I agree we have to find hope here, and that's something that could transform our lives probably more than religion can.
            But I'm still terrified of not experiencing anything.
            Everything that has value comes through consciousness.
            Losing that consciousness is something I'm still having trouble with.
            So even after I do all the good on earth that I can think of, there will still be the fear of unchangingness.

          2. Well Ilyass, I guess something to consider here is how bad this particular fear feels to you. Modern psychiatrists prescribe drugs for all sorts of things. Would you consider it bad enough to talk to a doctor about such options? Or perhaps there are enjoyable work, hobbies, or family activities that would diminish the issue?

            When I was young a good friend would scare me about going to hell unless I accepted Jesus as my savior. So I tried that, but it didn’t make sense to me. So casting all that away ended up being a tremendous relief to me. Apparently you’ve got the opposite problem.

          3. Hey Eric,
            Thank you for the suggestion, but I don't think it's possible in my country. I'm from Morocco, and almost all the psychiatrists are Muslims.
            And taking drugs isn't a real solution; it would help me cope with the anxiety, but it wouldn't fix the problem.
            I think you're right that I have the opposite problem, primarily because I got interested in my religion, Islam, for heaven. Actually I didn't give a damn about hell; sometimes I thought it was just a made-up thing God said to encourage us to be better people.
            But after wanting more of heaven (lol), doing a lot of worshipping, and reading the Qur'an and Hadith, I started seeing weird contradictions and unethical, really unethical, things in the Qur'an and all the religious literature.
            So after a tough two years of living hell, denying the problems, hating my existence, being angry at everything, and even thinking about suicide, I came to the conclusion that religion was made up, and that the possibility that Allah exists is very close to zero.
            God in general, I don't know, and since I don't know, I don't care.
            The only version of god that I respect now is the pantheistic god.

            Anyway, what do you think about the value of life?
            I said it is consciousness, since it allows me to experience amazing things.
            I will lose this value for eternity when I die; that's the origin of my fear, I guess.
            I hope I've explained my current condition well enough.
            Thank you for listening.

          4. You’re surely right Ilyass that consciousness serves as the value of existing for anything. Personally I like to reduce this to sentience, or the ability to feel good/bad. I also refer to this directly as “consciousness” — anything that’s sentient is also conscious. If you lose your sentience then you lose your consciousness.

            So what is the value of existing for anything? Add up how good it feels, subtract how bad it feels, and that's its value over a defined period of time. Some people end up with wonderful lives, though unfortunately many seem to end up with horrible lives. And why hasn't the basic science of psychology come to accept this position yet? I believe it's because the social tool of morality encourages us to deny such self-interested or hedonistic ideas. I did a post on this recently which should come up if you click my name. At the end of the post my friend Liam interviews me for his YouTube channel.

          5. Thank you Eric for clarifying your position.
            I think that bad feelings aren't bad in themselves. Before you think I've lost my mind, consider this:
            When I train in the gym, I enjoy my time, but my training isn't about pleasure; it's more about pain. That pain pushes me to adapt and become more tolerant of pain of the same kind. Muscle mass and strength won't be gained without accepting that you will feel pain.
            I know it's not the best example, but I think you get it.
            As for the social view of self-interest, I enjoy and feel really good when I make others enjoy themselves and lessen their suffering.
            But of course we do that for ourselves, so I think we're in agreement here.
            When I have time I'll definitely check out the video, maybe now lol.

          6. Yes Ilyass, I think you get my position. The pain of working out, though it does hurt, may be more than offset by the hope of becoming a stronger and more healthy person. We are connected to past selves by means of memory. We are connected to future selves by means of hope (which feels good) and worry (which feels bad). Furthermore helping others, though apparently altruistic, can also feel quite good as well.

          7. I have to add something, although you might disagree.
            But I've found that we can condition ourselves to feel good when feeling bad.
            Again with the example of working out: I may be doing push-ups, and when my muscles get tired I feel the pain, but at the same time I'm comfortable with it.
            Sometimes I even enjoy it, and it's not only me.
            I guess feelings are dynamic.

          8. No worries Ilyass. If there are any problems associated with my models then I need to know so that I might appropriately improve them. And if any are just plain wrong then I’d like to know that as well.

            On the sore muscles, if that's a sign of progress then having sore muscles should at least feel hopeful to you. Furthermore, you might be getting a taste of endorphin secretion in your brain as well, chemicals that naturally make us feel better. I've exploited this dynamic for many years with hot peppers. To me the heat doesn't really hurt that much anymore, though the main irritation is that my entire head drips with sweat and gets itchy. In any case the net result is that I enjoy my hot peppers a great deal!

    2. Hey Ilyass, I’ve enjoyed your conversations with Mike and Eric, so I thought I would give you yet another perspective. [this goes pretty long. Probably two parts]

      Like Mike, I grew up as a good Catholic boy, and I had the good fortune to go to a Jesuit high school (all boys). One thing the Jesuits do right is education. I got to take elective classes, one of which was Atheism. They basically went through the traditional arguments for and against, including Anselm, Aquinas, etc. I thought about it, and came to the same conclusion as you guys here.

      One other influence of note for me was martial arts. I was never much into sports or exercise, but the "martial" part gave it enough of an edge to keep me interested. My first experiences were jujitsu and western fencing in undergrad. Later I did a little karate, and then judo. While I fenced for all of undergrad (finished as sabre captain of the team, woot), the others I did for a few months to a year. I was somewhat interested in the self-defense aspect, but it's not like I was ever likely to use it. I haven't raised a hand in anger or defense since the age of seven.

      Almost twenty years ago I picked up a new one: iaido, which is simply the art of using a Japanese sword. Ideally you practice with a real sword, but most people use fake ones (aluminum alloy, not sharp), mostly because they're cheaper. The entire art is just practicing forms, series of steps. For each form you start from just going about your business (while wearing a sword in your belt); then something happens, usually forcing you to draw and make a preemptive cut or block; and then you have to decide what to do, which ends up being dispatching the opponent with a killing cut. [You have to practice the final cut, because it can only be a real option if you practice.]

      I bring this up because in iaido you spend a lot of time thinking about what you're doing and why you're doing it. I mean, you're literally practicing to kill. But you learn a lot about samurai philosophy, which includes ideas about life and death, such as: under certain circumstances it's correct to kill, and under other circumstances it's correct to die.

      So all this just tells you where I’m coming from. I’m going to write another reply to explain where it’s going.

      *
      [and when I say “it”, I mean all of it]

      1. Thank you very much, James, for sharing your story with us; much appreciated.
        Yeah, it's great to be with people who share almost the same conclusions as you.
        And I'm happy you enjoyed the conversation.
        Did you get deep into the religion, or were you just raised in it? I ask because those in the former case suffer when quitting religion, as I did.

        1. It's not so much that I got deep into it as that I felt okay with it; I saw the good in it. I still do. I've raised my children without religion, and the one regret I have is that they've had no exposure to the stories and messages in the Bible, particularly the gospels. I think they will be fine, and will be good people, but I think they'd be better off with at least some of that. Some authoritative voice on how to live, other than just me and my wife.

          *

          1. They can develop a moral guide within themselves; the holy texts will help, but so will interacting with people and learning to have empathy for people and any sentient being.
            That’s my humble opinion.

    3. So Ilyass, you've expressed interest in Epicureanism in your discussion with Mike. I'm wondering if you've looked into Stoicism. I haven't studied either, but I'm pretty sure I'm closer to the latter. A major point is not to be too concerned about things you can't control (like what happens after you die), but to focus on what you can control, which is pretty much just your own character and actions.

      I certainly understand your concern about the ending of your consciousness. For myself, I find comfort in understanding what's going on with the universe and my part in that process. So I read that link about Spinoza and God as Nature, and in that context, "what's going on with the universe" is God waking up. The spark began with life, advanced with the addition of consciousness, continued with the development of intelligence, and accelerates with the development of artificial intelligence. The next step will be sending that artificial intelligence into the universe. I don't know what comes after that.

      So I (and you) have a role in this whole process. What I do will either advance it or delay it. I hope to advance it. I can’t know for sure what to do to advance it, but I do know that I can play out my role by balancing my gut instincts with what I learn from society. In general we are given two options: cooperate (all for one and one for all) or compete (every man for himself). Turns out that within a given population, the competitors have the advantage (survival of the fittest), but for the population as a whole, cooperation works better. So I choose to cooperate, and based on one of your statements, so do you.

      As for the future, I'm a singularitarian. I think artificial intelligence will reach human level (in the very biggest, most expensive computers) in about ten years. About 10-15 years after that, human-level intelligence will be relatively cheap. After that, all bets are off (so, singularity, which just means you can't really predict what happens afterwards). I don't think we'll be uploading minds, and I don't see the point. Uploaded copies of me are just copies I don't care about. Same for many-worlds copies. Interest in such copies is just a vanity which probably will not advance the universal process. I won't be having a life-size portrait painted of myself for the same reason. What I care about is playing my role in the process, hopefully in a way that advances it as opposed to hindering it.

      And I honestly think that if I live for another twenty years (I’m 58), I have about a 50% chance of living indefinitely. But even if I don’t make it, that will be okay. I’m already doing better than the historical average. And even though I haven’t done a whole lot, I think my ledger is on the plus side.

      *

      1. Thank you for your detailed reply.
        Yeah, I'm slowly trying to apply Stoicism; it's a great life philosophy. But thinking about death isn't that bad, I think, even though whatever happens after it is not within our control.

        I agree; when I view myself as part of the universe I feel great! But it's still not enough to fix my problem. I think it has to do with experience as we age; you and most people here have a lot more experience. I'm only 18, and it's been only two years of battling with life (outside the shield of religion), and I can't count how many times my mind has changed.
        I'm very sceptical of the singularity, but the future is ahead of us, so why not?
        I don't have the same optimism you have about the development of AI; current AI is stupid, and the rate of development doesn't indicate what you expect.
        Don't get me wrong, I'm not saying you're wrong; I'm only giving my opinion. I think it's highly improbable.

        1. I appreciate your scepticism. Of course current AI is stupid. If I’m right, 10 years from now AI will still be stupid, but it will be about as stupid as the average human, which is still pretty stupid.

          I'm optimistic because I have reasons, and I see the path. Current AI is pretty good at recognizing static patterns: pictures of cats, faces, etc. The next step is to recognize patterns in changes. Recognizing that something just happened. Recognizing that seeing this means that that is about to happen. Recognizing that the person is walking toward the street and that if they keep walking they will get hit (and that now might be a good time to sound the horn so the person will stop walking toward the street). The step after that is to have several recognizers running at once, and to have some system to choose what to do based on what the various recognizers are recognizing. Just the fact that I'm thinking about this means people have been working on it for a year or two [he said with a straight face].
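          To make that last step concrete, here is a minimal sketch of "several recognizers plus a chooser." It is purely illustrative: every function name, dictionary key, and urgency number below is invented for the example, not taken from any actual system.

```python
# Illustrative only: independent "recognizers" each look at the current
# observation and may propose an action with an urgency score; a chooser
# then picks the most urgent proposal. All names and numbers are made up.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    action: str
    urgency: float  # higher means more pressing

def pedestrian_recognizer(obs: dict) -> Optional[Proposal]:
    # Recognizes a pattern in changes: person heading toward the street, car coming.
    if obs.get("person_heading_toward_street") and obs.get("car_approaching"):
        return Proposal("sound_horn", 0.9)
    return None

def cat_recognizer(obs: dict) -> Optional[Proposal]:
    # Recognizes a static pattern: a cat in view. Nothing urgent to do about it.
    if obs.get("cat_in_view"):
        return Proposal("log_cat_sighting", 0.1)
    return None

RECOGNIZERS = [pedestrian_recognizer, cat_recognizer]

def choose_action(obs: dict) -> str:
    # Run every recognizer at once, then arbitrate among their proposals.
    proposals = [p for r in RECOGNIZERS if (p := r(obs)) is not None]
    if not proposals:
        return "do_nothing"
    return max(proposals, key=lambda p: p.urgency).action

# Example: someone is walking toward the street while a car approaches.
print(choose_action({"person_heading_toward_street": True, "car_approaching": True}))
# -> sound_horn
```

          The point is just the shape of the architecture: independent recognizers propose actions, and a separate mechanism arbitrates among them.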

          *

          1. Thank you for taking the time.
            Do you work in the field?
            No, current AI isn't that good at recognizing static patterns; things are still progressing.
            Maybe I'm missing something.
            Also, how can we program a machine to do something it isn't programmed to do?

          2. Oh haha, I was so ignorant back then. In 2020 there were already amazing AI developments, but I didn't invest enough attention to read about them.

            It didn't help that most experts in the field weren't expecting the current explosion. If I may ask, at the risk of polluting this old comment thread, what are your current views on AI and its implications? Mainly, what do you think about the safety crowd?

        2. Replying to your request for an update:

          My current view of the timeline for human-level AGI is pretty much unchanged. Actually, it may have advanced by a year or two. In 2020 I said 10 years. Today, in 2024, I'd say 5.

          I, like everyone else, was surprised by how good the large language models (LLMs like ChatGPT) got. But I don't think LLMs are an integral part of the path to AGI. They are an important offshoot, similar to the language area of our brain, but they don't incorporate the basic intelligence structures needed for AGI.

          The reason I’m still optimistic is that I’ve seen what some others are developing and I think they are in fact on the correct path. Specifically, I’ve seen what the people at VERSES AI are working on, which they call “active inference”. [Caveat, i.e., Warning (heh): I put my money where my mouth is, which is to say, I bought stock in the company.]

          Regarding the "safety crowd": I think AI, and so AGI, is an incredibly powerful technology, so the biggest danger is that power being used by bad human actors.

          As for the existential risk posited by some, I think it is a concept worth taking seriously, but I don't think the risk of doom is as high as people like Eliezer Yudkowsky think. I think to make an AGI capable of destroying humanity would take a collaborative act of stupidity on the scale of making nuclear weapons freely available for purchase at your local grocery store. I don't think the "doom" crowd has a good understanding of the nature of goals and where they come from. Using LLMs as the paradigm example of AI could lead to this problem, because the general management of goals is absent in these models. If you look instead at what the VERSES people are doing, you'll see that managing goals is paramount, and not only their own goals but the goals of other agents in their environment, which would include humans.

          So what do you think?

          *
