My philosophy, so far — part II | Scientia Salon

Massimo Pigliucci is doing an interesting series of posts on his philosophical positions.

In the first part [19] of this ambitious (and inevitably, insufficient) essay I sought to write down and briefly defend a number of fundamental positions that characterize my “philosophy,” i.e., my take on important questions concerning philosophy, science and the nature of reality. I have covered the nature of philosophy itself (as distinct, to a point, from science), metaphysics, epistemology, logic, math and the very nature of the universe. Time now to come a bit closer to home and talk about ethics, free will, the nature of the self, and consciousness. That ought to provide readers with a few more tidbits to chew on, and myself with a record of what I’m thinking at this moment in my life, for future reference, you know.

via My philosophy, so far — part II | Scientia Salon.

I find myself agreeing with most of Massimo’s positions. I agree with his quasi-realist stance on morality (see my morality posts for details), and with his compatibilist position on free will.

Until a few months ago, I have to admit, I would not have agreed with him on consciousness; that is, I thought there was a good chance it was an illusion, or at least that our common intuitions about it were. After reading Michael Graziano’s ‘Consciousness and the Social Brain’, I’ve changed my views. I now think consciousness is a real thing, and that it is a model of the attentional state of the brain.

However, I do think his skepticism of mind uploading is unwarranted. If the mind indeed arises from the physical operation of the brain, I can’t see any reason why it shouldn’t eventually be possible to analyze that physical operation and recreate it, either physically or in a virtual environment. Even if consciousness ends up requiring wet chemical reactions, it still seems like something we’d eventually be able to recreate, although at that point you might refer to it as engineered life rather than uploading.

Now, I do think there is plenty of room for skepticism that it’s going to happen in 20 years and lead to a transcendent “rapture of the nerds” singularity, but I see that as a separate issue from us eventually being able to record, store, and re-instantiate our minds. It might be centuries before it’s possible, but short of substance dualism or some other ghost in the machine mechanism being true, I think humans will eventually do it. (Assuming we don’t drive ourselves extinct first.)

This entry was posted in Zeitgeist.

36 Responses to My philosophy, so far — part II | Scientia Salon

  1. guymax says:

    I also now follow MP. He has a careful and balanced way of coming at the issues, and does not leap to conclusions.

    Niggling again, but it seems an important thing to say. In your last para. I would just add ‘short of the perennial philosophy being true…’ Philosophers of mind are prone to forget that all but one theory of consciousness would require the falsity of a doctrine which is demonstrably unfalsifiable. If we forget this then we will be unable to explain why proving the truth of any other theory of consciousness is an intractable problem.


    • I totally agree on Massimo. I don’t always agree with him, but even when I don’t, I can usually see where he’s coming from.

      Which doctrine and/or theory of consciousness do you mean?


      • guymax says:

        What is often known as the ‘perennial’ or sometimes ‘primordial’ philosophy appears as Buddhism, Taoism, advaita Vedanta, Sufism and elsewhere. Mysticism, if you like. Almost all current theories in the sciences assume that this explanation of consciousness is false. If there were no ‘hard’ problem of consciousness then it would have to be false.

        This is a crucial but simple point, usually completely ignored by the professors because they believe that mysticism is nonsense. What they miss is that it doesn’t matter that they believe it is false. What matters is that it is unfalsifiable; their counter-theories are therefore unverifiable. No experimental evidence or logical argument can ever decide which is the correct one. The ‘hard’ problem is inevitable. It is the problem of trying to find a better explanation of consciousness than the one given by Lao Tzu and the Buddha, and it is possible to demonstrate that this cannot be falsified.

        I’m prone to writing too much and being too insistent. Sorry about that. But this is important. If you understand this point here you will be way ahead of many and quite possibly most professional philosophers, and will be saving yourself a great deal of time and trouble. There is no ‘hard’ problem. There is just the problem of convincing people that religion is not entirely nonsense.


        • As you know, I’m not religious, although I strive to be honest about the limitations of knowledge in this area. I’ll have to take your word for it that the eastern mystical concepts of consciousness are unfalsifiable. But I wonder if they would be falsified if a mind were successfully uploaded. Or would they remain valid after that had happened?


          • guymax says:

            We cannot perform any experiment that would falsify an unfalsifiable theory, so the question would not arise. It wouldn’t matter whether we are religious or not; the facts would remain the same. Ignore mysticism’s explanation and consciousness becomes an intractable problem. The whole of metaphysics becomes an intractable problem. The evidence is not ambiguous.


          • In truth, I’ve never been too impressed with the hard problem. It’s always struck me as a failure to understand the limitations of introspection. Of course, it could be a failure on my part to grasp some crucial concept, but every time someone tries to explain it to me, it seems I hear some variation of, “I can’t introspect it, therefore problem.”


          • guymax says:

            It’s not a problem of introspection. It’s just the problem of explaining consciousness in a scientific way. Most people working on it do not bother with introspection, and are usually highly suspicious of it. It has been defined by Chalmers as the problem of explaining how matter gives rise to mind, but it also has less presumptuous definitions.


          • So what distinguishes the question of how mind emerges from matter from how software emerges from computer hardware? (Aside from biology versus machine.)


  2. Hi SAP,

    I’m kind of surprised you agree with him on so much, since I thought we tend to think reasonably alike, and I think there is much that is incoherent in his worldview, which I listed out at the end of my rather long comment.

    Perhaps you could help me to resolve these inconsistencies, because to be honest I usually find your reasoning much more plausible than Massimo’s on topics such as these.


    • Hi DM,
      Wow, I’m flattered. I was speaking broadly in my agreement with Massimo and specifically about today’s entry. But there are a lot of details that I’m not completely in agreement with him on. I’ll take a look at your comment and see what I can say. Were there any points in particular you were more interested in? He covered a lot of ground.


    • Hi DM,
      I took a look at your comment over at Scientia. I’m not going to address the mathematical stuff since, as you know, I’m still not sure exactly what I believe on it; I’m still agnostic to a large degree.

      As I noted above, I do disagree with Massimo on computationalism, which is why I think his views on mind uploading or machine consciousness are overly pessimistic.

      I agree with him on free will. If you read my free will posts, I think we’re largely in accord. I think you disagree with us on compatibilism. As Massimo indicated, this is a disagreement about terminology, not ontology. Despite non-belief, I’m not militant about terms that used to be theological. I often use “faith” as a synonym for “trust”, “spirit” as a synonym for “mindset”, and “free will” as a synonym for “volition”.

      On morality, as you know, I buy into Haidt’s moral foundation theory. (I don’t know what Massimo’s take on that is; unfortunately, unless he and Haidt bury the hatchet, we’re unlikely to find out.) I find that compatible with the quasi-realism Massimo discusses. Realism because it’s driven by evolved instincts and urges, but non-realism because there’s wide variation in how those foundations get expressed in cultures. Like Massimo, I don’t think moral rules are arbitrary, but within the range permitted by biology, they are relativist.

      Let me know if I omitted anything. I’m sure you’ll point out any problems 🙂


      • Hi SAP,

        Thanks for answering.

        Don’t have much time just now so I’ll have to be brief.

        I guess I’m interested in all 8 points I raised. I guess many of them really boil down to computationalism, so I can take it that you agree with me.

        I know you’re agnostic on Platonism but the fact that you find his position compelling means you ought to be able to help me out with understanding how he can hold mathematical objects to exist but to be dependent on minds. I just don’t get how that can be coherent in light of independent discovery.

        Similarly you might tell me if you think his reasoning on “relations without relata” is circular or not.

        I’m not too far from you and Massimo on morality, but I don’t understand how he can claim that his morality is based on something more solid than basic feelings. My morality is also based on feelings and I don’t think that’s a problem.

        On free will, the terminology question aside, Massimo claims to follow Dennett’s account of free will but I don’t see how Dennett’s account of free will is not compatible with an ordinary computer program making choices. It doesn’t even require AI or consciousness. So, in my view, present day computers can have free will in Dennett’s sense. I guess I would appreciate an argument to show I’m wrong on this.


        • I’d imagine it’s pretty late over there.

          I actually don’t think I agree with Massimo that mathematics are mind dependent, although this is part of the issue I raised on his Part 1 post about exactly what we mean by “mathematics”. I’m not sure if I’m a mathematical empiricist, a platonist, or an Aristotelian realist, but I’m pretty sure I’m not a nominalist.

          On “relations without relata”, I have to say I’m agnostic about that as well. If reality is structure all the way down, then “relations without relata” is what we are. As Tegmark argued, if the MUH is true, then the ultimate theory of everything should be nothing but relationships. But if there is a brute physical layer to reality, then all relations will ultimately be between its components.

          On morality and “feelings”, I find “feelings” too ambiguous since feelings can be whimsical or intensely powerful instinctive urges. I suspect Massimo meant that morality isn’t determined by whimsical feeling, but by the more intense variety.

          The relevant aspects of Dennett’s free will are escaping me, and I fear I don’t have the energy to fish them out of his 20 page paper tonight. I suspect he and Massimo would argue that common computers don’t have the complexity yet. My own thought is that they’re not yet complex dynamic systems, although multi-tiered multi-vendor systems might be.


          • Thanks SAP.

            I understand you are agnostic about whether OSR or the MUH are actually true, but that doesn’t mean that you can’t help me judge if Massimo’s argument for dismissing them is circular, as it seems to be to me.


          • I may be missing the specific argument you mean. Just skimmed over his statement about it in the Part 1 post, and it seemed more of an expression of opinion than a rigorous logical argument. He definitely seems to find the logic for the OSR and MUH unconvincing, but his statement about it seems intuitional.


          • By the way, I don’t understand how you can say that computers are not complex dynamic systems when almost all research on complex dynamic systems involves implementing and exploring such systems with computer software.


          • I think the key point there is that such systems can’t be predicted. We model hurricanes extensively, but can’t predict them. Now, I realize that’s a bit of game fixing since we explicitly design most computer systems to be predictable and tend to consider them non-functional when they’re not.

            If we explicitly designed a computer with its own agenda, actually with several competing agendas influenced by its history of prior experiences, such that it would be impossible for another computer to predict its actions, then a case could be made that it had a unique will of its own. But I’ll fully admit this is a matter of judgment.
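            To make the idea concrete, here is a purely hypothetical toy sketch in Python (not any real system’s design): an agent with several competing “agendas” whose relative weights drift with its own history of outcomes, so its next choice depends on everything that has happened to it so far.

```python
import random

class Agent:
    """Toy agent with competing drives. Each choice's outcome feeds
    back into the drive weights, so behavior depends on the agent's
    whole prior history (a hypothetical illustration, nothing more)."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        # Competing agendas, each with a weight shaped by experience.
        self.drives = {"explore": 1.0, "conserve": 1.0, "socialize": 1.0}
        self.history = []

    def choose(self):
        # Pick a drive with probability proportional to its weight.
        total = sum(self.drives.values())
        r = self.rng.uniform(0, total)
        for drive, weight in self.drives.items():
            r -= weight
            if r <= 0:
                break
        # The act's outcome strengthens or weakens that drive.
        outcome = self.rng.choice([0.9, 1.1])
        self.drives[drive] *= outcome
        self.history.append((drive, outcome))
        return drive

agent = Agent(seed=42)
decisions = [agent.choose() for _ in range(10)]
```

            Two agents with identical code but different histories would diverge in behavior, which is the sense in which such a system’s “will” is its own.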


          • So if we can predict what a human is going to do, e.g. by brain scans or by deep understanding of that person’s thinking process, does that mean that the person does not have free will?


          • We’re talking about an emergent quality, so there’s no sharp line that delineates it. Any such line would be arbitrary. There’s no sharp line between a boy and a man, other than arbitrary ones, but the distinction remains useful.


            OK, so the degree to which an entity has free will is the degree to which it is difficult to predict?


            For me, it’s an indicative quality, but neither solely sufficient nor strictly required, otherwise we might consider hurricanes to have free will (although we give them names since it often intuitively feels like they do). I think an appreciation, an awareness, to at least some degree, of the consequences of action is necessary.


          • OK. So by your account, it seems that a complex software system, such as those which already exist, might be said to have free will on the basis of being difficult to predict and also meeting other criteria as established by Dennett.

            There are software systems out there which nobody understands, especially genetic algorithms and artificial neural networks. Often, the only way to predict what they will do is to actually run them, which is basically the same way we could predict what a brain would think if we could simulate it.
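            A minimal genetic algorithm sketch in Python (illustrative only, not taken from any particular research system) shows why: the evolved result depends on the entire stochastic run history of selection, crossover, and mutation, so the practical way to know what comes out is to run it.

```python
import random

def evolve(fitness, pop_size=20, genome_len=8, generations=30, seed=0):
    """Minimal genetic algorithm: evolve bit-string genomes toward
    higher fitness via selection, crossover, and mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half of the population (elitist selection).
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            # Single-point crossover between two random survivors.
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            # Occasional mutation: flip one bit.
            if rng.random() < 0.1:
                i = rng.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Evolve toward all-ones genomes by using the bit count as fitness.
best = evolve(fitness=sum)
```

            Change the seed and a different lineage of genomes unfolds; nothing short of replaying the run tells you which one.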


          • Again, I think some appreciation of the meaning of actions is required. It’s not clear to me that there are any software systems yet that rise to that level, although I’m open to the possibility and acknowledge that this is a matter of judgment. Some might argue that consciousness is required, but I’m not sure if I’d agree.


          • Cheers, SAP. You’re right that I didn’t answer that point. I guess it would depend on what such an awareness looks like, and whether it needs consciousness, as you say. Does that mean that humans are not acting according to their free will when they act without considering consequences?


          • Good point. You’re causing me to dust off my free will contemplations. I should have said that they have the capability to have that understanding. If they choose not to have it, well that itself is a choice. This raises the interesting question of whether young children have free will. FW is entangled with the concept of responsible agents, and we don’t usually regard children or animals as falling into that category. Hmmm.


          • So, are humans aware of the consequences of choosing not to consider the consequences?


          • It seems like it would vary. A mentally impaired person might not be. I would think a mentally healthy adult would be. (And that seems to be the underlying pragmatic assumption of the law.)


  3. amanimal says:

    This is definitely going to lead to quite a bit of good reading, and thanks for mentioning Graziano. Even though I just recently read his ‘God Soul Mind Brain’ (which I enjoyed), I’d forgotten about ‘Consciousness and the Social Brain’.

    I can see I’ll have to go back and finish reading Part I eventually as this will be good for my continuing philosophical enlightenment – thanks!


    • Glad you found it useful. I don’t always agree with Massimo, but his writing often spurs me to think.

      Consciousness and the Social Brain was an eye-opening book for me. I think you’d enjoy it.

      Btw, I’m making my way through Costa’s book. I think she has a lot of fascinating insights into our cognitive limitations, which I’m enjoying reading, but I’m finding her grasp of history, economics, and politics, at least prior to contemporary times, to be limited. Many of the things she assumes to be uniquely wrong in our times are actually timeless problems throughout history.


      • amanimal says:

        Thanks for the thoughts on ‘TWR’ – noted for my reread. The “history, economics, and politics” are my weaker areas (especially the economics and politics), so I likely glossed right over some of the stuff you picked up on.

        I wondered why she didn’t relate her “supermemes” more to the historical cases she cited – assumed it was to avoid being overly speculative. She might have done well to coauthor with somebody who could have helped with that, but what do I know, this being my first foray into such things 🙂


        • Thanks for recommending it. I’m still enjoying it, just not necessarily as an insight into civilizational collapse.

          I definitely agree that if she was going to come up with a theory about collapses, she would have been well served to have partnered with experts in the other relevant fields, or at least be well read in them. It’s interesting how often scientists fail to do that.


          • amanimal says:

            Is that to say you’re not sold on the idea that given increasing complexity and a “cognitive threshold” the differential rates of biological and cultural evolution might inherently lead to collapse?


          • I’m still in the early stages of the book (p 112) so there’s still a possibility I’ll be pulled around. But when I read the history of the Great Depression, World War II, the American Civil War, and many other episodes in history, I’m struck by how far outside of their “cognitive threshold” most of the players were. Often the story is how they somehow muddled through it better than their opponents did.

            I’m also struck by how mired Costa is in the 2008-2010 time frame. Most of the crises from then have actually turned out okay, but of course she couldn’t know that when she was writing the book. Yes, we went through scary moments, but we still made it through. Despite many missteps, we actually handled the 2008 financial crisis far better than the country handled the Great Depression in the 1929-1933 period. (Something Costa should have known given her subject.)

            I’m actually skeptical that there is any one explanation for all civilizational collapses. Civilizations face several threats and they don’t always bring about their own collapse. Sometimes they are simply conquered or brought down by drought during their prime.


          • amanimal says:

            Thanks ‘SAP’, I’ll ponder your reply and give you a chance to finish the book 🙂

