Kurzgesagt on the origin of consciousness

This video by Kurzgesagt is pretty interesting. A word of warning: it’s funded by Templeton, which I know will bother some of you, but I found the content to be reasonably solid from a scientific perspective.

The only real issues I might have are the mysterian overtones at the beginning, and the assertion that consciousness and intelligence are different things, although that second issue might be ameliorated in the next video in the series.

I tend to think of consciousness as a type of intelligence, or more accurately a collection of intelligence capabilities.  Strictly speaking that does make them different, although I’m not sure this is the difference Kurzgesagt has in mind.

I do very much like the way the video describes consciousness as a series of increasingly sophisticated capabilities.  There is no bright line between conscious and non-conscious systems, just points where various people will interpret the system as either conscious or pre-conscious, depending on which definition of “consciousness” they prefer.

Consciousness remains in the eye of the beholder.

This entry was posted in Zeitgeist. Bookmark the permalink.

90 Responses to Kurzgesagt on the origin of consciousness

  1. I do very much like the way the video describes consciousness as a series of increasingly sophisticated capabilities.

    Gotta say, me too. But that’s what I’ve been saying all along.

    But I’m also in the consciousness != intelligence camp. Intelligence will require consciousness, but not vice versa.


    Liked by 1 person

  2. paultorek says:

    “Consciousness is in the eye of the beholder” sounds like an infinite regress problem waiting to happen! Although if it just means you can pick from a variety of loosely related definitions, well, sure.

    I’m in the consciousness != intelligence camp, the sapient-intelligence-requires-consciousness camp, and the consciousness-requires-some-low-level-intelligence camp. I go camping a lot, I guess.

    Liked by 2 people

    • On the subjectivity of consciousness, definitely it’s a matter of definition. There is no naturally delineated thing or process that matches what we mean by the word “consciousness”. It’s a word that refers to a hazy composite of capabilities, a word that has been muddled since it etymologically evolved from “conscience”.

      So sapient intelligence requires consciousness and consciousness requires low level intelligence. Of course, sapient intelligence requires low level intelligence. Maybe all we have is multi-layered intelligence built on particular drives, some aspects of which we label “conscious”?

      Like

  3. Reblogged with comments:

There may not be a universal definition of consciousness, but it definitely has something to do with an awareness of existence – usually an awareness of (at least) one’s self. Add awareness of surroundings, of the passage of time, of the abstract idea of things or situations that do not yet exist in the future, of cause and effect and consequences and rewards and ramifications, and pretty soon you have layers of intelligence and sophisticated abilities that everyone would agree qualify as consciousness… or would they?

Many believe God was once alone in an empty universe before creating it. Except for “Self”, the things mentioned above didn’t exist in that scenario – there were no surroundings or time or consequences – yet few would argue such a Creator would not have been conscious. At the other end of the spectrum we can program a robot with AI to process all these concepts and manipulate its environment to achieve its goals but – would it be conscious?

    Liked by 2 people

    • David, some of us definitely think the robot would be conscious. [Some of us think a Roomba is at least a little conscious.] What do you think?


      Like

    • Thanks David.

      I actually don’t think there is a fact of the matter on these questions. Our intuition of a fellow consciousness is just that, an intuition, that fires when we perceive there is another system like us to some degree or another. The more like us it is, the more the intuition fires. But the further we get away from resemblance to us, the more controversial the designation becomes.

      Like

      • Mike, what are your criteria for “like-ness”? Presumably not physical appearance. Is it intelligent capabilities?


        Like

James, are you asking for my personal criteria, or the criteria we as a species seem to use intuitively? As a species, we seem to feel more commonality with other mammals and birds than we do with reptiles or fish. And we feel far less commonality with invertebrates such as insects. All of this is mostly based on appearance and behavior.

But from the perspective of systems that process information similar to the way we do, that probably doesn’t misguide us too much. Vertebrate brains all share basic characteristics, with mammalian brains being far more similar with the addition of a neocortex. Birds have a nidopallium, which probably makes them closer to us than reptiles and fish. Invertebrate nervous systems are radically different from ours.

          In the end though, I don’t think there’s any “right” or “wrong” criteria. This is a subjective impression we take.

          Like

  4. Pingback: Cool Video – Origins of Consciousness | END TIMES PROPHECY

  5. makagutu says:

    Mike, have you seen this https://www.scientificamerican.com/article/there-is-no-such-thing-as-conscious-thought/
Sorry if it is off topic. I just thought it’s one of those you might be interested in.

    Liked by 1 person

Thanks mak. I did see it, but had forgotten about it. In general I think Carruthers is right, although I think he oversells the case a bit, though he does back off toward the end. We definitely don’t have access to the construction of our thoughts, but we do have some privileged access beyond what we have with other minds. But it is far more limited than people generally conceive, and it explains most of what people cite as why they think there is a hard problem.

      BTW, do you mind being referred to as mak? (I’m using it because I’ve seen others use it.) Or would you prefer Onyango, or something else? Just checking.

      Like

      • makagutu says:

        Mak or Onyango are all ok.

        My forays into consciousness have not been so deep. I think I have read of the hard problem of consciousness and I keep forgetting what it means almost immediately 😁

        Like

        • Thanks!

          The hard problem is described in a variety of ways, but it comes down to the difficulty of how a physical system generates conscious experience. I personally think the actual hard problem is the psychological one of accepting that there’s no evidence for substance dualism.

          Like

  6. J.S. Pailly says:

    I thought of you when I saw this video. It’s good to keep in mind that Kurzgesagt offers their own interpretation of the facts and that other interpretations are out there. They said as much in their “Can you trust Kurzgesagt?” video, and I have so much more respect for them since I saw that video.

    Liked by 2 people

    • Thanks. I think that’s good to keep in mind for any source. (People should keep it in mind when reading my stuff. I never intentionally mislead, but it is all based on my own current understanding.) This can be easy to forget when there isn’t a clearly identified author.

      Kurzgesagt does occasionally make statements I find dubious, but I haven’t found it to be consistent or pervasive.

      Liked by 1 person

  7. Maybe there is no bright line between living and non-living systems either, just points where various people will interpret the system as either living or pre-living.

    Liked by 1 person

    • Definitely. Things like viruses, viroids, and prions seem to sit on the border between living and non-living systems. If we found something like a viroid in an extraterrestrial setting, we would probably have a debate on whether or not we’d actually found alien life.

      Liked by 1 person

  8. Wyrd Smythe says:

    I’m a fan of Kurzgesagt, and saw the video before I saw your blog. But I stopped watching the video, maybe after a third. It didn’t seem to have anything I hadn’t heard before, and I have some doubts about consciousness as a spectrum.

    Or rather, I’m bemused by the vast gap between all other forms of consciousness we know (various animals, let alone machines) and human consciousness. How is it that only one species accomplished it? I find that mysterious.

    Is there some “critical mass” (so to speak) required to spark consciousness to what we experience? Is it some monumental evolutionary change that crosses that gap? If life is so common many fully expect to find some form of it elsewhere, why is consciousness so rare?

    And speaking of elsewhere, the Fermi Paradox seems to suggest, as common as life may be, perhaps consciousness is as rare elsewhere as it seems here.

    (Did you read about the METI conference where they seriously discussed the idea that aliens have quarantined humanity (in a zoo, for safety, whatever) and that’s why we haven’t detected alien civilizations? The underlying axiom seems to be that “of course alien civilizations exist; how could they not?” Science seems to be losing its mind in these post-empirical days.)

    Liked by 2 people

    • James Cross says:

      If consciousness is a spectrum, how do we know where we are on it? We might be on the low end barely distinguishable from worms without magnifying it. After all, we only have the experience of one planet to go by.

      Liked by 1 person

      • Wyrd Smythe says:

        “We might be on the low end barely distinguishable from worms without magnifying it.”

        Absolutely, we have no idea what might lie above us on the scale.

        I still think there is a vast gap between us and any form of consciousness below us. To me, the scale isn’t as significant as the discontinuity.

        (It might be interesting to consider whether consciousness, as we know it, represents a “sweet spot” between processing ability and messy static. Certain aspects of our brains seem to act as limiters or dampeners. Maybe “higher” consciousness would be literally like being high on LSD all the time.)

        Like

    • Do you really see that much distance between us and, say, chimpanzees? Certainly there are differences, but most of them seem to be more about extent than sharp qualitative distinctions.

      The one exception, I think, is symbolic thought, by which I mean volitional use of symbols, such as language, mathematics, art, music, etc. This seems to require a deep and recursive metacognitive capability, which only humans seem to possess.

      And you’re right, it’s only evolved once in the millions (billions?) of species that have ever lived on Earth. It may be a profoundly rare fluke. It required the evolution of a highly dexterous and social taxon of species (primates), a shift in environment that freed up some of our limbs for manipulation of the environment, and a runaway social intelligence arms race. How likely is that sequence of events? Maybe as likely as the one that produced elephant trunks, and no one sees trunks as an inevitable result of evolution.

I didn’t hear about that METI conference. The zoo hypothesis is an old one, one that I think almost verges on theology: a rationale for hidden superior forces that are watching over us and protecting us, just with modern, scientific-sounding terminology.

      Like

      • Wyrd Smythe says:

        “Do you really see that much distance between us and, say, chimpanzees?”

        You kind of answer the question in your next paragraph. 😀

        Yes, I absolutely do. Chimps would have to invent, on their own, new tools and techniques, and develop art and literature, for me to see them as at all close to us.

        I think that symbolic thought thing, and our ability to visualize new things, is a big part of that huge gap I’m talking about. Where’d we learn to do that?

        “Maybe as likely as the one that produced elephant trunks, and no one sees trunks as an inevitable result of evolution.”

        True, but one sees instances of grasping limbs throughout the animal kingdom. Things functionally equivalent to elephant trunks are common. Consciousness seems special and unique. It let us conquer the planet, and we see nothing like it.

        That’s weird!

        “The zoo hypothesis is an old one,”

        Heh, yeah, common in old pulp SF! 😀

        Ever get the feeling, sometimes, that scientists have been reading too much science fiction? That it’s gone to their heads?

        Like

        • Monkeys are known to take sticks or rocks and create ad hoc tools for various purposes, such as extracting bugs out of a tree or breaking peanut shells open. Definitely it’s not creating a spear or anything, but they do seem to have an incipient ability along those lines. But yeah, art and literature, not so much 🙂

I suspect that inside your typical scientist is an old SF fan (not always of course, but usually). That said, I’ve always been impressed by SETI’s scientific rigor. They’ve been accused of being like paranormalists and UFOlogists, but if that were true, they’d be claiming to have detected something every year. That they’ve never made such a claim speaks to their commitment to getting it right. I strongly suspect their search is in vain, but I can’t fault how they’re going about it.

I know less about METI, but I haven’t been nearly as impressed by the interviews I’ve seen from some of its proponents. They seem a bit more dogmatic about certain things, although still nothing along the lines of paranormalists or cryptozoologists.

          Like

          • Wyrd Smythe says:

            With ya on METI, they seriously underwhelm me. What’s cute is the opposing faction that took to heart a different kind of SF and desperately don’t want us sending any messages. “In a dangerous universe, the smart ones don’t advertise their presence!” 🙂

            And I quite agree that SETI is a lot more respectable. My buddy was part of that SETI@HOME network (or whatever it was called). His PC gave its idle time to analyzing SETI data.

            I think space scientists, especially, seem to have SF backgrounds. Read about the stars, dreamed about the stars, really wanna go to the stars! 🙂

            Of course, maybe they were attracted to the SF in the first place because of natural proclivities. I’ve always wondered how I took it up (certainly not from my parents or anyone I knew). From stories my parents tell, I was science-minded and techno-minded even as an infant. (My first two words were “light” and “star.”)

            My understanding with chimps is they learn to use a specific tool a specific way almost by accident, and because it’s successful, it’s passed on within the tribe. Usually nearby tribes don’t have the same skills.

            Crows, and other birds, have shown tool use, too, often pretty impressively solving multiple steps to achieve a goal.

            But I still perceive a vast gap between those abilities and ours. I might call it the difference between being “clever” or being “creative.” I agree lots of animals are clever.

            Like

          • I’ve wondered the same thing myself about what attracted me to SF. My dad liked it, but he also liked a lot of other things I never took to, such as sports or photography, and even though he liked westerns, I never cared for them as a child. (I did eventually learn to like some westerns as a teenager.) That said, we both liked SF and programming, so who knows.

            I’m not sure about the distinction between “clever” and “creative”. Thesaurus.com lists “clever” as a synonym for “creative”. These seem like highly subjective terms. It’s worth noting that most humans aren’t very creative, but we have language so most of us benefit from the creative ones, or can build on top of what they provide. If there is a creative animal, its creativity is more likely to be isolated and unnoticed.

            Like

          • Wyrd Smythe says:

            [shrug] Took to it like a duck to SF water, is all I can say! Having had miserable success trying to turn adults on to SF, it seems to require a certain mindset.

            As for “clever” versus “creative” I’m just trying to communicate a distinction I perceive. They’re subjective, but I think there are objective criteria that apply.

            I don’t agree most humans aren’t very creative. Quite the opposite, the quality I’m talking about exists in all humans. It’s what sets us apart from all animals.

            Not everyone is creative on a grand scale, sure, but anyone who invents a solution to some small problem is being creative. Designing a simple bookshelf or doghouse is creative. It’s an intentional mental process that seems far beyond any animal.

            It would certainly be pretty big news if anyone ever saw an animal being creative! 🙂

            Liked by 1 person

      • James Cross says:

Where we really distinguish ourselves is in proactive violence.

        I’ve been reading The Goodness Paradox.

        Like

      • Mike, I want to know what you mean by symbolic thought. Do you mean the ability to create arbitrary symbols? Or do you mean thinking that somehow makes use of arbitrary symbols? I ask this because, for me, the latter is what defines consciousness, but the former is what sets humanity apart from [most?] other life on earth.


        Like

        • Ack, I did it again. My kingdom for a comment editor!

          I see you answered my question with the word “volitional”.

          So, what if the major difference that allowed the ability for volitional use of symbols was largely driven by an expansion of concept-related memory space, i.e., neocortex? Maybe neocortex is hugely metabolically expensive (requiring lots of calories) and developmentally expensive (requiring lots of time to refine connections, so, learning). I could see during evolution that such expensive space would be reserved for the most essential needs, like concepts referring to senses and actions. An ability for volitional creation of symbols would need additional, unassigned (super expensive) concept storage space. The question then becomes what would make such an investment worth it? My first suspicion is a co-evolved capacity to suppress, and thus manage, attentional mechanisms, as seems the case with prefrontal cortex.


          Like

Interesting question. From what I’ve read, the answer might have been social intelligence. Social situations probably required individuals to create new concepts in a way the overall environment never really did. Combined with the development of language (an inherently social mechanism), it’s probably what led to the arms race that produced sapient level intelligence. That said, this is all speculation.

            On the prefrontal cortex, it’s worth noting that the human PFC is far larger than in other great apes, which as a group have larger PFCs than other primates.

            Like

          • James Cross says:

            Not sure if I’ve posted this before but there is a good amount of suggestive evidence that language and with it symbolic thought may have evolved from tool use.

            https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3223784/

            Like

Interesting. Thanks! That seems similar to memory being related to spatial navigation. In evolution, sometimes a specific application comes before the broader general capability.

            Like

At the risk of being the only one who doesn’t like this video explanation. 🙂

    This video is built on the idea that “not showing behavior = not consciousness”. I am not so sure.

    “Not HUMAN consciousness” – I can agree, but with that way of thinking we will never reach the truth about consciousness origin.

    (I don’t mean any spiritual or esoteric thing. Only realistic possibilities.)

    Liked by 1 person

    • Certainly we know it’s possible for a brain injured locked-in human to be conscious without having any ability to behave like they are. But those cases aside, what other criteria could we use for a non-human system to assess its consciousness?

      Liked by 1 person

I don’t know. Maybe being an individual system (whether living or not)? We are all built up by the same rules. But if we think that way, we must talk about levels of consciousness.

        P.S.
Mike, have you ever asked yourself how and when the DNA molecule becomes “alive”?

        Liked by 1 person

        • I don’t know that I’d consider DNA by itself to be alive. Certainly it’s a crucial component of a living organism, but it seems like it’s mostly data and programming. The living part seems to require all the enzymes, proteins, and other cellular systems.

          On the other hand, there are viroids, which are nothing but RNA that uses the host cell’s machinery. But RNA seems more flexible in its abilities than DNA, even if it doesn’t have DNA’s stability.

          Liked by 1 person

  10. Oscardewilde22 says:

I must say I’m pretty surprised no one seems to be bothered by this. But this movie clip assumes causality between consciousness and behavior. Feeling hungry, we move to eat. I myself am interested in consciousness because consciousness doesn’t seem necessary for behavior. Isn’t that the whole deal?
    Otherwise we just have a very very hard problem of how consciousness is made and how it interacts to cause behavior.

    Liked by 1 person

    • Oscar,
      How would you say we could have goal oriented behavior without an awareness of the environment, of ourselves (at least our bodily selves), and the relations between the two? Or by “consciousness” do you mean something else?

      Like

      • Oscardewilde22 says:

        Hi,

Yes, we never seem to agree. For me it is just a qualitative experience accompanying brain activity, but for me awareness and consciousness are two totally different things. I would say a cat is aware of its surroundings, but is it conscious? To be sure, I will act like that, and I am in favor of animal rights. But our behavior and consciousness go so hand in hand that it’s hard to imagine it could be otherwise. Greetings

        Like

        • No worries on disagreeing. The most interesting conversations are between people who disagree but continue talking.

          The question I usually have for your view is, what aspects of qualitative experience are irrelevant for behavior, at least in a healthy individual?

          Like

          • Oscardewilde22 says:

I must admit I am quite ignorant and maybe you can help me out. But my view is the view of most scientists, neuroscientists and philosophers, right? I mean Chalmers, Dennett, Sean Carroll, and most other materialists that believe brain is mind. If otherwise, I hope you can inform me a little bit better, since you are a little bit more knowledgeable than me.
            It’s hard for me to answer your question, since I do not know what a qualitative experience is and which aspects it has. I would say all aspects are irrelevant for behavior, except it gives somehow meaning to it. Greetings

            Like

          • I actually got the phrase “qualitative experience” from your comment above. But the usual answers I get are things like the redness of red. The issue is that redness is information. It tells us something about what we’re looking at. One theory is that primates can perceive it because it allowed us to see ripe fruit.

            Like

    • Fizan says:

Hi Mike, I thought I’d share my thoughts in this thread where Oscar has hinted at a legitimate issue.

      The movie seems to give magical powers to certain words the author uses which then explain everything. His basic argument is:

      1. A ‘self’ is a living thing which needs energy to reproduce itself.
2. The first ‘function of consciousness’ was probably an ‘awareness’ of the environment which helps to acquire that energy more efficiently.

The first magic word is ‘self’ – At this stage there is only a system which is replicating itself; it is not aware of itself or of being anything.
The next magic phrase is ‘function of consciousness’ – At this point there is no abstract concept called ‘consciousness’ of which awareness is a function. There is only what the author describes, i.e. an awareness of the environment.
The biggest magic word is ‘awareness’ – By our common day usage we have already associated the word ‘awareness’ with being consciously aware. This seems to be why most people end up buying into such descriptions.

It’s best to explore what is actually happening when we try to talk of such ‘awareness’:

A physical compound from a system diffuses and reaches another system, triggering a physical change in that system. This leads to a domino effect which brings this system closer to the other system.

      That is all that’s happening.

It’s not much different from how an electromagnetic wave from a magnetic system triggers a change in a metallic system which brings them close together. Yet we never seem to use ‘awareness’ in this context.

      Liked by 1 person

      • Hi Fizan, thanks for sharing your thoughts!

        On magic words, I guess a lot depends here on whether you consider the idea of less sophisticated versions of awareness or selfhood to be a valid concept, that they may have evolved gradually over time. Would it have been better if they used terms like “proto-aware”, “proto-consciousness”, or “proto-self”?

        Does detection of a gradient using chemoreception, and subsequent prediction of what direction food lies in, along with behavior modified by hunger’s presence or absence, amount to an incipient level of awareness? If not, what is missing?

In terms of the physics leading to a domino effect and behavior, do you think that more sophisticated systems (mammals, us, etc) have something else going on? Unless physicalism is wrong, everything we think and do amounts to that domino effect. At least unless we want to argue that quantum indeterminacy figures into cognition, but even then, why would its effects only apply to sophisticated systems and not the simple ones the video discusses?

        Liked by 1 person

        • Fizan says:

          Hi Mike,

          Do I think in sophisticated systems (e.g. mammals) is there something else going on?
          I can see your hints towards dualism here but it’s difficult for me to answer since I haven’t a clue. I do have clues about other things which I present nonetheless.

          The way my thinking works is you tell me why this thing/ process/ system etc. is special compared to that similar other thing/ process/ system.

          What most people do in explaining that ‘specialness’ is to (perhaps unintentionally) piggyback it on words which already have associations to that specialness.
So no, I don’t think words like ‘proto-aware’ etc. are much better either, since when we read them the only thing we have to go on is our own understanding of the word ‘awareness’.
          The best way to make an argument would be to deliberately try and avoid any such magic words as much as possible.

You made an effort towards that end by doing this:

“Does detection of a gradient using chemoreception, and subsequent prediction of what direction food lies in, along with behavior modified by hunger’s presence or absence, amount to an incipient level of awareness?”

          But still, here again in your description there are such magic words present:
          1. Detection – implies an observer doing the detection
          2. using – again implies a self who has intention and purpose
          3. prediction – implies an imagination of consequences (which won’t be present at lower levels)
4. hunger – That’s a big one because it is itself a conscious feeling

          That’s why I tried in my original description to give as much a physical account as possible of what’s happening. Doing that makes one think twice about jumping to premature conclusions about things we don’t grasp at this point in time.

          Liked by 1 person

          • Hi Fizan,
            I see what you’re trying to do, and I’ve advocated for that myself. Words like “awareness”, “hunger”, “fear”, and “feeling” carry too much baggage. They are words involving an assumed phenomenality. Restricting ourselves to lower level terminology can step around that baggage.

            On the other hand, how low do we have to go to adequately avoid the baggage in question? You flag words like “detection”, “using”, and “prediction”. However, I’ve worked with technological systems that these words can be appropriately applied to, such as a security scanner that “detects” the presence of a security card, or my laptop visually detecting my face for login. And I’ve programmed systems to “use” data. And many systems make predictions. We can of course eschew these information processing words too and only talk in terms of the underlying physical mechanisms at varying levels of abstraction, but it would seriously obscure the discussion.

            In the case of the phenomenal word “hunger”, we can replace it with something like, “detecting low levels of stored energy”. So, my question might be more like:
            “Does detection of a gradient using chemoreception, and subsequent prediction of what direction new energy sources lie in, along with activity modified by the detection or not of low levels of stored energy, amount to an incipient level of awareness?”

            If you do insist on the information processing words being excised, then we might go with:
“A system, while in motion, has internal chain reactions to chemicals which alter its internal states. Whether the density of the chemical is increasing or decreasing results in different states. Some of these states alter the chain reactions that lead to the system moving toward an energy source, but only if other aspects of internal state are the patterns it usually has when its internal energy has fallen below certain thresholds.”

            I won’t ask if this new version amounts to an incipient awareness, because all the references to internal states are necessarily vague, a vagueness I can’t remove without going into a lot more detail, which would entail thousands of words and getting thoroughly lost in the weeds. Describing my laptop at this level would lead to similar difficulties.

            I personally feel comfortable working at the information processing level of abstraction that we’ve already established can exist in non-biological systems that no one regards as conscious. In that sense, we can examine the increasing sophistication of evolved systems without ever bringing the phenomenal terminology into it.

            Then the question becomes when an organism can build certain models, make certain types of predictions, take into account aspects of its own internal states, etc. When what is being described amounts to “consciousness” would be in the eye of the beholder.
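For what it’s worth, that mechanism-only description can be run as a toy simulation. Everything in the sketch (the one-dimensional world, the density function, the thresholds and energy numbers) is hypothetical and just for illustration; it shows gradient-following driven purely by internal state, with no appeal to awareness anywhere in the code:

```python
# Toy sketch of the mechanism-only description: a system whose internal
# states react to a chemical gradient, and whose motion changes only when
# its stored energy has fallen below a threshold. All numbers are made up.

FOOD_POS = 50.0  # location of the energy source in a 1-D world

def chemical_density(pos):
    """Chemical concentration falls off with distance from the source."""
    return 1.0 / (1.0 + abs(pos - FOOD_POS))

def step(pos, last_density, energy, direction):
    """One chain-reaction update: internal state alters motion, nothing more."""
    density = chemical_density(pos)
    hungry = energy < 5.0                # the 'low stored energy' pattern
    if hungry:
        if density < last_density:       # density decreasing: reverse course
            direction = -direction
        pos += direction                 # motion costs energy
        energy -= 0.1
    else:
        energy -= 0.05                   # resting metabolism
    if abs(pos - FOOD_POS) < 1.0:        # at the source: replenish (capped)
        energy = min(energy + 10.0, 20.0)
    return pos, density, energy, direction

pos, last_density, energy, direction = 0.0, 0.0, 3.0, 1.0
for _ in range(200):
    pos, last_density, energy, direction = step(pos, last_density, energy, direction)

print(abs(pos - FOOD_POS) < 1.0)  # the system ends up parked at the energy source
```

Nothing in the loop refers to awareness or hunger as felt states, only to thresholds and chain reactions. Whether the resulting behavior counts as incipient awareness is, again, in the eye of the beholder.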

            Liked by 1 person

          • Fizan says:

            When you say “a security scanner that “detects” the presence of a security card” – you’re talking about ‘the security scanner’ the same way as e.g. I can detect the presence of a security card. The problem is you and the security scanner are not similar (or at least we haven’t established that yet). So we’re falsely talking about the security scanner as if it’s also a subject. The same thing applies to other such systems. It isn’t apparent that there is some identity / subject called the security scanner that is detecting something. It is similar to how a ‘rock’ falling down a slope doesn’t ‘detect’ the slope, yet it does behave according to the shapes of the slope. (The rock like the security scanner isn’t a subject).

            When what we want to establish is whether certain processes (e.g. computations) amount to sentience, it’s unscientific to start with that assumption already worked in and then figure out how it’s true.

            I do get your point on how doing the weeding out can make explanations longer and more complicated (and it raises the question of how granular we should become). And as you noted, the more you do it, the more ‘vague’ it seems too. And I might add, the more unrelated to subjective experience it seems to get. But that is why it’s a hard problem. I don’t think having this difficulty justifies just jumping the gun to our final wishful conclusions.

            Liked by 1 person

          • I think it’s worth remembering that we don’t really hold anything else in science or technology to this standard. For example, if I asked you to describe the operations of the device you’re reading this on in those terms, while possible in principle, as a pragmatic matter, you’d have to group identifiable collections of physical mechanisms under labels so you could refer to them without having to repeatedly re-describe them. In computer science, this is called levels of abstraction, and working with complex systems is virtually impossible without it.

            Along those lines, if I asked you to describe how a cell works, but without any reference to biological mechanisms, only in terms of chemistry or physics, you would similarly have to come up with new names for things like proteins, lipids, DNA, RNA, ribosomes, etc., as well as the thousands of chemical pathways that make up a cell’s metabolism.

            It is important to eventually account for all these layers. But due to the complexity involved, few people really understand all that accounting. I did some reading a while back in biochemistry and came away shaking my head at all that complexity. But I read enough to grasp that biology is chemistry and electricity.

            I’ll agree that the mappings, from neuroscience to information processing to cognition, are far from complete. But all of the available data currently points to this being the stack. Achieving those mappings, and having any comprehension of them, will only be possible if we can use intermediate levels of abstraction.

            Of course, just as the average person doesn’t understand the relationship between biology and chemistry, the average person won’t understand the relationship between neurons and cognition, which will always leave them suspecting that something magical is happening and that the scientists are just saving appearances. I’d say that once technologies based on that understanding come out, maybe that would convince people, but people remain unconvinced about the science that modern gadgets crucially depend on, so it’s not a given.

            Liked by 1 person

          • Fizan says:

            I think it might be possible to use some level of abstraction to describe processes, as that’s what anyone ever does, scientists and laymen included. Everything is composed of other things, and so on, so it’s impossible to get down to the lowest level of description (we don’t even know what that is), or even to the lowest known level of explanation.
            That being said, not every abstraction or word has associations with sentience. For example ‘rock’ or ‘system’ are also abstractions. Similarly, other words like diffusion, chemical, light, displacement, speed, reaction, entropy, gradient, etc. all encompass an underlying concept and make life easy, but don’t seem to be automatically associated with sentience.

            The reason we’re not holding anything else in science to this standard is because other areas aren’t concerned with describing subjectivity. For example a physicist may say (somewhat poetically) ‘the metal spoon feels the force of the magnet’s attraction and moves towards it’ – no one would be really bothered by it, since the physicist isn’t claiming that by virtue of his description the spoon has a subjective feeling.

            Liked by 1 person

          • How about this revision of the question?:
            “Does the varying intensity of chain reactions in a moving system, which originate from intersecting a chemical gradient, that trigger internal firing patterns that correlate with a model of new energy being available in a particular direction, which in turn triggers a chain reaction resulting in movement in that direction, amount to an incipient level of awareness?”

            Liked by 1 person

          • Fizan says:

            Much better 🙂 But it isn’t as convincing now.

            I would say no. Varying intensities of chain reactions do not seem to amount to an incipient level of awareness unless there is something feeling these varying intensities, which there isn’t in this case.

            Liked by 1 person

          • Ah, I forgot about the hunger part. What about now?

            1. A moving system intersects with a chemical gradient.
            2. The varying intensity of chain reactions from the chemical gradient triggers internal firing patterns that correlate with a model of new energy being available in a particular direction.
            3. This happens concurrently while a model of the system’s internal energy levels is consistent with those levels being below a certain threshold, or at least that the system’s intake mechanisms are clear and ready for new fuel.
            4. The chain reactions from 2 & 3 trigger activity that results in the system moving toward the new energy.

            Even if 1-4 don’t meet your intuition of awareness, can you see how this might be a stage in that direction?
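If it helps, steps 1-4 can be sketched as a toy program. Everything here (the gradient function, the hunger threshold, the function names) is purely illustrative, not a claim about real chemotaxis:

```python
# Toy sketch of steps 1-4: a "system" that moves toward a chemical
# gradient, but only when its internal energy model says it's hungry.
# All names and thresholds are illustrative.

def sense_gradient(position, concentration):
    """Steps 1-2: compare the concentration just ahead vs. just behind
    to 'model' which direction has more energy available."""
    ahead = concentration(position + 1)
    behind = concentration(position - 1)
    return 1 if ahead > behind else -1 if behind > ahead else 0

def step(position, energy, concentration, hunger_threshold=5):
    """Steps 3-4: move toward the gradient only when internal energy
    is below the threshold; otherwise stay put."""
    direction = sense_gradient(position, concentration)
    if energy < hunger_threshold and direction != 0:
        return position + direction  # chain reaction -> movement
    return position  # not hungry (or no gradient): no movement

# Usage: a linear gradient increasing to the right.
gradient = lambda x: x  # concentration grows with position
print(step(10, energy=2, concentration=gradient))  # hungry -> moves to 11
print(step(10, energy=9, concentration=gradient))  # sated -> stays at 10
```

The point of the sketch is that nothing in it requires phenomenal language; it’s just state gating behavior, which is the "incipient stage" being asked about.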

            Liked by 1 person

          • Fizan says:

            No, I still don’t see it being much different from my magnet and metal example. If this is a step in that direction then so is that. For me both of them aren’t for now. Perhaps some compelling evidence/theory could convince me that both of them are.
            A ‘specialness’ about your system (compared to other mechanical systems, e.g. the magnet/metal) isn’t evident, so if that’s what your argument is, it first needs to become evident why this case is special.

            Liked by 1 person

          • “Specialness” seems pretty subjective, but then I think consciousness only exists subjectively anyway, so it fits.

            Thanks for an interesting discussion!

            Liked by 1 person

          • Fizan says:

            What I mean by ‘specialness’ is a quality which the other mechanical systems (e.g. the magnetic one) don’t have. And it is apparent you do make that claim indirectly. To me the more sensible position would be that all mechanical systems do have the same qualities.

            Like

          • [pssst. The “specialness” you’re looking for is purpose, teleonomical in the case of chemotaxis.]

            *

            Like

          • Using that criterion, is there anything alive that isn’t conscious?

            Like

          • Fizan says:

            The word ‘purpose’ is problematic if you follow our conversation above. (it’s another ‘magical’ word).

            Like

          • Depending on your definition of consciousness, that “specialness” can be enough, making all living things that respond to their environment conscious. I personally don’t think that’s enough. But I was just pointing out the relevant difference between the bacterium and the magnet/spoon.

            *

            Liked by 1 person

          • Fizan, yes ‘purpose’ is a magical word, meaning a high-level abstraction. It encompasses a large number of elements, but it is a concept you will have to grapple with to get where you want to go, i.e., understanding consciousness.

            *

            Like

  11. Yes, fun video. This Kurzgesagt “In a Nutshell” company reminds me of the collection of Crash Course video series. Even the animation seems similar. I particularly enjoyed the philosophy and psychology CC series. I’ve noticed that at least Hank Green always seems to do a great job in them.

    Mike, I wonder if you could tell me your perception of what I’d say this particular video glossed over and essentially left out? And if you do know what I’m referring to here, would you personally add that element to such a discussion, or like Kurzgesagt leave it out?

    Liked by 1 person

    • Eric, the problem with videos is that they take time to watch, or rewatch. I know there was no discussion of a second computer. There was discussion of affects, such as hunger, but perhaps with not enough emphasis in your view? Some of what you might see as missing may be discussed in the follow up videos, one of which is supposed to cover major theories (which I imagine will be IIT, global workspace, and others).

      Liked by 1 person

      • Mike,
        Well I didn’t mean for you to watch it again on my account. I was just asking in case the missing element was obvious to you. And if not, well, this is part of my project to get you a “working level” grasp of my models. A “lecture level” understanding won’t be sufficient to help you see where my models are strongest and where they are weakest.

        This is something that Kurzgesagt could only have missed given the subject matter. It can’t be added later without displaying a clear shortcoming regarding their first video. The question now is, is the missing element indeed important, as I believe, or is it not?

        It’s not that they didn’t discuss affects such as hunger sufficiently. Indeed, their focus upon calories was a main theme (which isn’t even slightly bold). What they missed was marking any kind of distinction between valueless forms of life and sentient forms of life. The emergence of sentience was glossed over as an utterly non-notable circumstance. As I recall they simply began talking about life that has desires, as if it’s just standard business for more complex things to have desires. There was no mention of any associated “hard problem”, or how something that can be punished and rewarded might function differently from something that has no personal interests whatsoever.

        What do you think? Might they have missed something big here, or do you consider sentience itself to not be all that notable?

        Liked by 1 person

        • Eric,
          I think the answer is that the bright line you describe, where before there was nothing valued, but afterward value came into the world, is too simple. Life from the beginning has valued its own survival and reproduction. It had to. Any life that didn’t got selected out of the gene pool. Goal oriented behavior precedes anything we’d call “conscious” by billions of years.

          You’re right that consciousness is heavily oriented toward goal oriented behavior, but it is an enhancement to that behavior. What it did was dramatically expand the scope of what an organism can react to, both in space and time. But it didn’t originate those goals and values. It only added prediction to them.

          Consider how much this single-celled organism tries to stay alive. There’s nothing conscious here. (If there had been, it might not have swum back into the acid or whatever.) But it seems strange to say its behavior isn’t goal oriented.

          Liked by 1 person

      • Mike,
        If sentience exists in you, which is to say that your existence can feel good/bad to you, and does not exist in some other forms of life, which is to say that existence does not feel good/bad for them, then there must be a theoretical “value” distinction between these two separate categories of life. Note that what I’m saying here is not merely inductive reasoning, but rather deductive. You simply cannot challenge my conclusion if you grant these two premises. So what I’m saying here is not too simple at all. Other issues continue to exist beyond the clear certainty that I’m now addressing.

        What you seem to have done is use standard conceptions of “goal oriented” to go beyond the sentience issue that I actually brought up. There are a couple of fancy terms to distinguish between the two (which you helped me straighten out a couple years ago as I recall). One of them is “teleological” function, or purpose driven. Here it would seem that the purpose is to feel good and to not feel bad. The other is “teleonomical” function, which is to say “apparent purpose to us gullible humans”. This term is reserved for the function of life in general, given our anthropocentric biases. We empathize with the living microorganism as if it “wants” to escape the acid, but not with falling rain as if it also “wants” to do what causality mandates that it do. We are what we are.

        This Kurzgesagt video glossed over making any distinction between sentient and non-sentient existence regarding the function of life. I’m asking if you think sentience is overrated regarding organism function, as they’ve inadvertently implied?

        (Here I don’t get the sense that it was their goal to snub sentience, but rather that they didn’t know any better given standard big name consciousness theories today such as IIT.)

        Liked by 1 person

        • Eric,
          I think the issue here isn’t that sentience isn’t important. It is.

          But it’s a composite phenomenon, with components, some of which are far more ancient than it is. I’ve discussed before how feelings are the imaginative narrative part of the brain receiving signals from the lower level machinery about reflexive reactions to a particular pattern of sensory input. We agree that what consciousness brings to the table is an ability to evaluate which reflexes should be allowed and which inhibited. But before consciousness came along, the reflexes were already there.

          Every life form, to one degree or another, possesses programming that evolved because it was adaptive, programming that enhances its ability to survive and reproduce. Sentience adds additional capabilities, but the change is far more incremental than you envision.

          The first creatures we might be tempted to label sentient weren’t very sentient. They were still mostly reflex. Gradually over the span of hundreds of millions of years, sentience increased. It had a major increase with the rise of land animals, another with the development of mammals and birds, and more yet as mammals increased in sophistication.

          In other words, you’re focusing on one rung of the ladder and declaring that all the progress happened on that rung. I think what you’re missing, or at least where you disagree with me, Kurzgesagt, and most evolutionary biologists, is that the progress is spread among most of the rungs.

          It’s a bit like saying that when someone turns 18, they instantly turn from a child to an adult. We all know it’s far more complex and gradual than that. Likewise, sentience didn’t snap into existence. The earliest versions were really only glimmers, and evolution built from there.

          Liked by 2 people

      • Two things, Mike. Given that we’re presuming that sentience didn’t exist at any level at one point in time, and now does exist at some level, sentience can’t possibly not have “snapped into existence”. In order for something to go from non-existence to existence, by definition that’s exactly what must happen.

        Secondly, you’re suggesting that I’m not talking about incremental change, though I most certainly am. Consider my explanation for the rise of sentience. At first, countless generations of creatures must have had the punishment/reward of sentience given brain function, though without behavioral effect. That’s how evolution tends to work: non-functional traits are carried along that may or may not gain any functional uses before they disappear. But apparently sentience did become functionally incorporated at some point, and so this side of things was able to control output function in some regard; a marginally functional “second computer” emerged. Apparently this worked out better than non-conscious function could in some ways, hence the “why” of consciousness. (I suspect that purpose-driven function brought needed autonomy that non-conscious function simply could not.)

        Even in the modern human I’m known for saying that the conscious side does less than one thousandth of one percent as many calculations as the vast non-conscious computer which outputs consciousness. So would this be incremental change spread out over all the rungs? My goodness yes! Here I’ve got sentience snapping into existence (deductively no less), and imparting more and more functional effect over time, though even we modern humans do less than 1/1000 of 1% as many sentience-based calculations as the vast computer which outputs sentience.

        If you haven’t listened to it already by the way, I think you’ll like the new Brain Science podcast. This time Ginger had on Paul Middlebrooks, who does a hybrid AI and neuroscience podcast. He’s quite interesting and lively, which I guess explains why he has his own popular show. Anyway they talked about the “Deep Learning” path to AI. Of course they didn’t discuss sentience at all. Apparently that would entail an extremely “hard problem”, and the vast majority of neuro- and computer scientists would rather sidestep that difficult issue. While obviously convenient, my dual computers model will be waiting just in case convenient architectures don’t quite get the job done. Of course I’ll also need people like yourself to gain working level understandings of my models so that you can effectively assess any strengths and weaknesses…
        http://brainsciencepodcast.com/bsp/2019/155-middlebrooks

        Liked by 1 person

        • Eric,
          In terms of how evolution works, traits that aren’t beneficial can persist, but only if they also aren’t detrimental, and even requiring energy can be a detriment. Now maybe an incipient type of sentience did develop that way, but I’d need to have some description of what this type of sentience entailed.

          But since I think sentience is the interaction between the reflexive part of the brain and the reasoning / imaginative part, it’s not what I think happened anyway. In my mind, it would have started with the most primitive predictions possible, and then gradually become more elaborated over time. That’s why the smell hypothesis for consciousness strikes me as plausible. It provides a mechanism for this capability to get started, and fits with the anatomical progression we see in vertebrates.

          I did listen to that podcast. It was entertaining. I also downloaded a few of his podcast episodes to check them out. I also heard the latest Sean Carroll podcast is worth checking out, although I haven’t had a chance to yet (it hasn’t shown up yet in my Overcast feed).

          Liked by 1 person

      • “Now maybe an incipient type of sentience did develop that way, but I’d need to have some description of what this type of sentience entailed.”

        Let’s see if this helps Mike. Reality harbors but one variety of sentience as I define the term. This is to say that there is no “incipient sentience” here, but just various examples of sentience. This stuff causes existence to be good/bad for what possesses it, or constitutes value. There is clearly something that it’s like to have sentience, and furthermore I’d say nothing that it’s like to exist outside of sentience. This is why I consider it useful to define sentience as consciousness itself — even if evolutionarily “functionless”, existence would still be experienced. I consider sentience to be reality’s most amazing element. Indeed, it constitutes value itself!

        Conversely you’re presenting a position that’s tied to engineering. As you said, “sentience is the interaction between the reflexive part of the brain and the reasoning / imaginative part”. Your convictions about this are worth noting. I harbor no convictions regarding the “how” of sentience. I do presume it’s causal given my metaphysics, though that’s about it.

          You may recall us discussing long ago that architects of great buildings aren’t permitted to learn the craft of engineering that goes into building such structures. This is because it’s thought that such understandings could bias architectural design…

        Liked by 1 person

        • Eric,
          It seems to me that the statement that there is only one variety of sentience should be scrutinized. Do you see it as a fundamental force like electric charge? (Some people do argue this, but they tend to be panpsychists.) If it’s not something fundamental, then it’s a composite mechanism. And any composite mechanism, looked at in terms of its fundamental nature, is more a category of patterns than one simple thing, and is therefore going to have a variety of forms, unless we narrow our definition to one specific pattern, the human one; but I think that leads to places you’re not comfortable going.

          This is the problem with defining sentience purely in phenomenal terms, that is, purely from the subjective perspective. We only have access to one version of that type of perspective, our own. It’s not a large logical leap to assume that other humans have similar experience. But the further away we get from mentally complete humans, the less we’re able to make that assumption. Fish sentience is not human sentience, and if we insist that only the human variety is sentience, then we’re effectively throwing fish out of club sentience. (Many people are comfortable with this move, but again, I don’t perceive that you are.)

          The idea of architects not knowing any engineering is one I have a hard time accepting. I don’t know what the norms are for building architects, but I do know that IT architects who don’t have an understanding of the physical and logistical realities, as well as the cost consequences of their architecture, aren’t ones who tend to be successful.

          Liked by 1 person

  12. Oscardewilde22 says:

    Hi,
    I must say this is the first time I understand philosopher Eric a little bit better, and I must say I am leaning a little bit toward him. I also have the opinion that the video clip was skipping some essential part. I am still wondering, since you write:
    “We agree that what consciousness brings to the table is an ability to evaluate which reflexes should be allowed and which inhibited.”
    Is this a minority or majority view among scientists of consciousness, or are there too many views of consciousness to speak of a minority/majority view among scientists/philosophers?

    thanks anyhow.

    Liked by 1 person

    • Thanks for considering my ideas Oscar. I’m a bit curious about the question you’ve asked Mike as well. That was stated through his “single computer” terminology. We may share a theme here, though I personally wouldn’t have said: “We agree that what consciousness brings to the table is an ability to evaluate which reflexes should be allowed and which inhibited.”

      I consider consciousness essentially as a tiny second computer which is driven to function on the basis of sentience, just as standard computers are driven to function on the basis of electricity. Beyond the sentience input there are senses such as vision, as well as memory of past conscious experiences. We interpret them and construct scenarios about what to “do” in the quest to feel better. I suppose that could involve “inhibiting reflexes”, though I like to present a more expansive picture regarding the nature of conscious function.

      Like

        • Sorry. Sounds like I assumed too much in my comment by saying what we agreed to. You’ve talked before about consciousness running simulations. I guess the question is what is the output of those simulations? I think it’s choosing which reactions to allow or inhibit, assuming they’re not all inhibited. The cortices are often the brakes for the brainstem and limbic system.

        Liked by 1 person

        • (Whoops, you can delete that duplicate comment.)
          Mike,
          Allowing or inhibiting reflexes can exist under my version as well, so that’s fine, though there is also more. For example if you’re constantly in freeway traffic and hate it, consciousness should lead you to look for solutions (which is to say, run scenarios, or “think”). You might notice that one lane generally moves much better than others at a certain spot, and so start using it daily given your ability to remember such an understanding. I wouldn’t say this is about allowing or inhibiting reflexes however.

          Output under my model can come in the form of conscious muscle operation, as well as making decisions, as well as simply inciting more thought!

          Like

          • Eric,
            In terms of allowing or inhibiting reflexes, it’s wrong to think of it as a one-time event; it works as a loop.

            So, you’re sitting in traffic. Reaction: negative
            You envision alternate routes.
            Reaction: take route 1: simulate scenario: reaction: more negative: inhibit
            Reaction: take route 2: simulate scenario: reaction: moderate: allow
            Reaction: take route 3: simulate scenario: reaction: good: allow
            Reaction: take route 4: simulate scenario: reaction: negative: inhibit

            In practice, the above may take place in parallel, with 3 probably winning over 2 and being the final allowed reaction.

            Which leads to new sensory inputs, new reactions, new imagining, new reactions, and so on and so on in a never ending loop, until we fall asleep or lose consciousness in some other manner.

            I’m also lumping habitual reactions in with reflexive reactions here. In truth, we have reflexive reactions which are allowed or inhibited by habitual reactions, and both reflexive and habitual reactions are allowed or inhibited by the frontal lobes.
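As a rough sketch (the route names and reaction scores here are purely illustrative stand-ins, not anything from neuroscience), one pass of that allow/inhibit loop might look something like:

```python
# Rough sketch of the allow/inhibit loop described above.
# Reaction scores and route names are purely illustrative.

def simulate(route):
    """Stand-in for imagining a scenario and getting an affective
    reaction back (higher = better)."""
    scores = {"route 1": -2, "route 2": 1, "route 3": 3, "route 4": -1}
    return scores[route]

def evaluate(candidates):
    """Run each candidate reaction through simulation; inhibit the
    negative ones, allow the rest, and act on the best."""
    allowed = {r: simulate(r) for r in candidates if simulate(r) > 0}
    if not allowed:
        return None  # everything inhibited
    return max(allowed, key=allowed.get)  # winning allowed reaction

# One pass of the loop; in practice this repeats with new sensory
# input, new reactions, new imagining, and so on.
print(evaluate(["route 1", "route 2", "route 3", "route 4"]))  # route 3
```

With these made-up scores, routes 1 and 4 get inhibited, routes 2 and 3 are allowed, and 3 wins, matching the worked example in the comment above.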

            Liked by 1 person

    • Oscar,
      In terms of consciousness, I don’t know if there is any consensus view even on a definition of the term, much less how it works and where it resides.

      But in terms of specific functionality, things are more hopeful. My perception is that it’s widely accepted in neuroscience that reflexive reactions rise up from the brainstem through the limbic system to the cortices in the frontal lobe, which either allow or inhibit those reactions. Put another way, the lower level machinery reacts, and the upper level machinery decides which reactions to allow or override.

      Like

  13. Mike,
    For simplicity I’ll put my response for your last two replies here.

    On the architecture of great buildings, consider the aesthetic component. Furthermore there’s whatever people need these buildings to functionally do for them once they are indeed built. Neither of these things concerns how to build the safe structures that engineers are charged with. Then after “non-nuts-and-bolts” experts put their plan together, they hand it over to the engineers for their input. These experts then decide how they might structurally design what’s been presented and so get into cost and feasibility. Here it’s even the engineer’s job to propose altered architectural elements that would save money given engineering expertise. If not too aesthetically and functionally problematic, such changes might be implemented. Regardless, if engineers were in charge of the whole thing then their expertise should tend to bias design. (Furthermore I wouldn’t expect them to be sufficiently educated in either aesthetics or how people effectively need buildings to function.)

    So is “brain architecture” similar? I think it might be. And who should be providing neuroscientists with good architecture? Fields such as psychology, psychiatry, sociology, and so on I think. The problem however is that it’s been difficult for these sciences to gain broad generally useful understandings regarding our nature. (Perhaps objectively studying ourselves is difficult because, unlike anything else, this gets “personal”?) Furthermore the fact that we have no generally agreed upon principles of metaphysics, epistemology, and axiology from which to work should compound such problems. Given this void I can’t blame Lisa Feldman Barrett for trying her hand at architecture (though I do still consider her “theory of constructed emotion” to be crap).

    You aren’t a neuroscientist any more than I’m a psychologist, though we obviously do have such interests. But here’s my point. Shouldn’t we wonder if your presumption that sentience is produced as the interaction between the reflexive part of the brain and the reasoning / imaginative part, is flavored with engineering convenience? After all, this is a famously “hard problem”.

    On whether I consider sentience to exist as something fundamental or composite, I suppose that I didn’t fully grasp your meaning when you’ve asked this before. So either it’s a fundamental component of existence (as in panpsychism), or it’s composite. And if I do consider it composite, this suggests various different varieties of it must exist, though I’ve been referring to sentience singularly. Good question.

    I’ve adopted a metaphysics of perfect causality as you know. This suggests that there actually are no fundamental properties to existence, but rather just stuff that’s created by other stuff. So if I do consider sentience to be composed, then how could I also consider there to just be one basic kind of sentience?

    The thing to understand here is that when I define it to be singular, I do so from the confines of epistemology rather than ontology. I’m not talking about what’s Real here. I’m instead talking about effective understandings. So how does that work?

    I presume that for most things it doesn’t feel good or bad to exist. My computer would be such an example. And even if my computer does feel “strong pain” every time I press the “p” key, I have no evidence of it and so don’t bother myself with the possibility. I seek effective rather than Real understandings, which is actually what science is made of. I suspect that scientists will progressively adopt my view of sentience/ consciousness as it becomes understood to support models which correspond with evidence.

    On consciousness functioning in a loop, I certainly agree. I didn’t mention any looping in that traffic scenario for simplicity. The point of it was that when I decide that one particular lane is best at a certain point of my daily commute, I’m not sure it’s effective to call that “allowing or inhibiting reflexes”. In standard English “reflex” seems to have an automatic sense to it. Here I was referring to “figuring things out”, or something that I wouldn’t call reflexive.

    Like

    • Eric,
      Your discussion on architects and engineers might make sense for constructing a building. It’s a discussion about building something to meet a particular purpose. It might even hold if we were talking about AI design. (Although again, in my experience, technical architects who don’t understand the technology are more trouble than they’re worth.)

      But the human mind is what it is. We’re not designing it. (At least not yet.) So the question is, what is the best approach to understand what nature has produced? People have been sitting around theorizing for thousands of years, and while some of them have occasionally had decent ideas, it’s mostly been exercises in navel gazing.

      Some psychologists, using rigorous observation techniques, have come up with pretty solid theories, or at least plausible ones. The problem is that the profession of psychology, much like the economics profession, tolerates too many people who are driven by ideology instead of science. And psychology by itself is limited by what behavior can be observed or what can be self reported.

      Until neuroscience came along, what we had were mostly speculative theories. Now we’re finally getting to a stage where some of these theories can be tested, where we’re finally starting to understand how mind can in fact be generated from physical systems. The idea that we should subjugate this endeavor to speculative theorizing is not one I’m at all sympathetic to.

      This is why confining ourselves to phenomenal descriptions (like feeling good or bad) isn’t very interesting to me. It seems like we could discuss phenomenality all day and still be where people have been for centuries. Maybe all the current neuroscience theories are wrong, but at least they’re theories grounded in empirical research that is telling us new things.

      Liked by 1 person

      • Well Mike, it sounds like I’m the optimist who believes that fields like psychology can become the reasonably hard sciences that neuroscience and humanity need them to become, while you’re thinking that they’ve had their chance and now it’s time for neuroscience to do its stuff. Hopefully at least one of us is right… and soon!

        Liked by 1 person

        • Eric,
          Just to be clear, I think psychology definitely still has a role to play. And philosophy of mind for that matter. But I think these fields are strongest when working in collaboration with neuroscience. The old school practitioners who see themselves as somehow detached from the findings of neuroscience are, in my opinion, headed for the dustbin of history.

          Liked by 1 person

      • That would seem to put us pretty square again, Mike. From there I see it as the behavioral scientist’s job to understand our nature in personal and social capacities, though unfortunately these fields don’t yet seem to have come up with very much in terms of broad general understandings. That would be the missing “architecture” for neuroscientists to reverse engineer, I think. But if the still new field of neuroscience must not only straighten itself out, but also grasp what associated architects haven’t been able to, then that spells trouble!

        Do philosophers of mind have a role to play? Or philosophers of physics for physics? Or philosophers of anything for anything? I don’t think so, or at least not today. In a conceptual sense, how might a field which lacks any generally accepted agreements of its own provide another field with effective advice? How can there be experts without expertise?

        This might sound strange coming from a person who’s adopted the pseudonym of “Philosopher Eric”, but hear me out. In this age of science (which of course sprang from philosophy in recent centuries), philosophy is practiced more as an “art” than a “science”. Yes it is ivory tower navel lint pondering, but who’s going to say that any art, let alone one with such a magnificent history, shouldn’t be appreciated by anyone? Just because I personally have no interest in ballet, I’m not going to say that no one should!

        While I do believe that philosophy should continue to be explored and appreciated as an art by those who are so inclined, I also believe that another “philosophy” must emerge in parallel which is not about that. Its whole point would be to develop a respectable community which is indeed able to agree upon various principles of metaphysics, epistemology, and axiology in order to better found the institution of science. That, I think, should clean up a good deal of the mess which we see in these fields today.

        Liked by 1 person
