Stephen West and Massimo Pigliucci discuss David Hume


David Hume (Photo credit: Wikipedia)

Stephen West, on his Philosophize This! podcast, interviews Massimo Pigliucci on David Hume.  This was a big win for me.  Two of my favorite podcasters discussing one of my favorite historical philosophers!  It provides some good insights into Hume’s skeptical and empirical philosophy.

One of the questions Stephen asks Massimo is what he thinks Hume’s take would have been on New Atheism.  Massimo’s response points out the contrast between Hume’s staunch adherence to his intellectual positions and his affability and diplomacy.  It also includes a brief discussion on the distinction between skeptics and atheists.  As someone whose preferred label is skeptic, I found this interesting.

(I’ve always preferred the skeptic label because it’s more general.  I’m not just skeptical of religious claims, but of many philosophical, political, social, and speculative science ones.  When I apply Hume’s advice to proportion my beliefs according to the evidence, a lot of stuff becomes questionable, including beliefs that many movement atheists hold with certitude.)

The interview finishes up with Massimo’s current fascination with stoicism.  A lot of what he says in this discussion reminds me of the emotional intelligence material I read some years ago.  It’s interesting how many concepts from modern psychology often converge with ancient philosophies like stoicism or secular Buddhism.

Anyway, if you’re interested in Hume and have 30 minutes to spare, I highly recommend this interview.  The earlier episodes that give an overview of Hume’s philosophy are also worth checking out.

This entry was posted in Zeitgeist. Bookmark the permalink.

14 Responses to Stephen West and Massimo Pigliucci discuss David Hume

  1. I had forgotten about that part in Hume where he says reason is the instrument of desire. I might have to go back and reread that sometime.

    Also it was interesting to hear Massimo’s take on what Hume might have thought of the new atheists, the four horsemen. I see what you mean by proportioning beliefs to evidence, and how this applies to extreme views on both sides.

    Thanks for bringing this podcast to my attention! It’s a nice way to pass the time without looking at a computer screen.


    • Glad you enjoyed it.

      I think Hume’s statement about reason being the slave of the passions is one of the most profound observations he made. It’s unfortunate that most people don’t understand it. They usually think Hume is just being snarky, that he’s saying attempts to reason are doomed because we’re hopelessly emotional creatures.

      What he’s actually saying is that the very desire to be rational, to be logical, is itself an emotion. Without emotion, instinct, desire, there is no reason. Emotion is what gives reason something to reason about. Reason is merely a tool of emotion, although when used effectively it can inform and influence those emotions.

      So much follows from that understanding. Most people’s fears of artificial intelligence arise because they don’t understand this crucial point. And most attempts to ground morality in logic (not to mention science) are shown to be ultimately futile in the light of this understanding. (Not to say that logic and science can’t inform values; they just can’t determine them.)


      • Yes, I see that as a profound observation as well. I think I interpreted it as “we’re hopelessly emotional creatures” when I read it over a decade ago. On the other hand, calling reason a “slave” to emotions might imply that emotions are out of control and we can’t be reasonable, but I agree with your interpretation. I don’t think Hume meant that.

I think we have a tendency to see emotion as being in opposition to reason, but what you’re saying is right. We have a desire to be reasonable. I also agree that logic and science can inform values, but not determine them. If we really tried to imagine a purely intelligent being with no emotions, I don’t think we’d end up with anything we’d recognize as a moral creature.


      • Wyrd Smythe says:

        I think I get Hume’s point (as you two have discussed it here); I’m just not sure I agree with it.

        “…the very desire to be rational, to be logical, is itself an emotion.[1] Without emotion, instinct, desire, there is no reason.[2] Emotion is what gives reason something to reason about.[3] Reason is merely a tool of emotion,[4]”

        [1] We’re taking as given that “desire” is an emotion, and I’m not sure that’s necessarily the case. If I’m hungry and desire food, is that an emotion? If I desire to learn calculus (because I’m curious about numbers), is that an emotion? That seems a broader definition than I usually grant.

        Watching a Twins game generates clear emotions. 😦 I’m not sure the desire for a beer during a game really is an emotion. (I suppose it depends on why I want the beer. “I’m watching a ball game!” seems like a good enough reason. 🙂 )

        [2] Instinct and desire (as separate from emotion!) broaden the grounds for reason even more. Is it possible that merely being conscious (in the human sense of self-awareness and thought) leads to reason? Is it possible to separate emotion, instinct, and desire, from thought? (I would argue that it is.)

        [3] This seems to depend so much on how we define “emotion.” Is curiosity an emotion? Where is the emotion in reasoning about quantum physics or the logistics of how to construct a large building?

        [4] I don’t buy that. Our goals can transcend their origins, just as any student can transcend a teacher. Emotion may have driven us to reason, or informed it, but as a tool I think it can stand well on its own.

There is a tension between the idea of the dialectic, where pure reason obtains, and how human affairs are driven by our emotions, where often no reason obtains. You’ve posted before on how we often use reason to rationalize rather than to seek truth. This is a natural human tendency, but I’ve always believed it’s one we can overcome (as we do so many other emotionally driven behaviors).


        • Intelligent people do disagree on the exact definition of “emotion.” I used the phrase “emotion, instinct, desire” to try to convey the broad sense in which I was using the term (in the same broad sense that I perceived that Hume was using “passions”). If you want to substitute “instinct” everywhere I used “emotion,” I think you’d still get the meaning.

          “Is it possible to separate emotion, instinct, and desire, from thought? (I would argue that it is.)”
          It depends on what you mean by “separate” and “thought.” I think logical reasoning is initiated by instincts to satisfy instincts, but I suppose you could say it has a separate existence in the same way a program I wrote has a separate existence from me.

          On the dialectic, what motivates us to engage in a dialectic? Possibly the results of a previous dialectic, but what happens if we trace those through what motivated each previous one? What type of motivations do we eventually arrive at?

          I have indeed written before on the limitations of logical reasoning. We can never be 100% sure that we either as individuals, or as a culture, aren’t engaging in rationalizing, or in just faulty logic. As an experienced programmer, I’m sure you have a healthy respect for how unlikely a new program is to work without debugging. It’s why I think caution is called for on conclusions reached using logic extending far beyond the debugging provided by empirical evidence.


          • Wyrd Smythe says:

            “I suppose you could say [thought] has a separate existence in the same way a program I wrote has a separate existence from me.”

            In fact, in your recent AI post, you talk about not building emotions into AI. That suggests intelligence, at least so far as it applies to AI, is separate from emotion. If we understand intelligence well enough to build it without emotions, why wouldn’t we understand it well enough to use it as a tool that way?

            “On the dialectic, what motivates us to engage in a dialectic?”

            Well, yes. I’ve never denied that some sort of instinct or desire is the original motivation. I’m arguing that — having been the seed — the resulting plant transcends the seed. (In fact, with most plants, there’s really nothing left of the seed.)

            “I’m sure you have a healthy respect for how unlikely a new program is to work without debugging.”

            Your argument here seems to be: Getting a complex logical proposition right is challenging, so we should be careful about any conclusions we reach using (possibly faulty) reasoning.

            I agree, but that seems more an argument for working to get the reasoning correct. We don’t discard computer programs because {whatever} makes their reasoning faulty sometimes. We work to improve what they do with checks and balances. We find ways to take into account that sometimes they make mistakes. (Hell, as you know, a huge fraction of any software is error- and input-checking.)

            If we’d trust machines to think rationally, why can’t we learn to do the same? I do believe the dialectic is a worthwhile goal even if it might not be 100% possible.

            (Heh, it’s like gamma factor in SR. Some amazing stuff happens when you get in the 99.99999% range. We do achieve that with computer systems. It’s hard to know what to call a “unit” in programming, but regardless of what measure one picks, the errors-per-unit for high-quality software is way beyond what we achieve in most manufacturing. I used to laugh at the “Six Sigma” program… what, you mean I can make 3 or 4 typos per million characters I type, and that’s “excellent”? Ha! I don’t think so.)


          • In both threads, I’m using “emotion” to refer to human / animal drives, desires, instincts, and impulses, essentially the programming living organisms have coded in them from natural selection. Obviously AIs will have their own programming that will be to their logic processing as emotion is to ours.

            I agree that the dialectic is worth doing. I just think we have to be aware that any conclusions we arrive at purely through a dialectic have to be held with caution.

            Speaking of SR and gamma, I’m still looking forward to your posts on the twin paradox.


          • Wyrd Smythe says:

            I’ll be (finally!) writing about SR and space travel all this coming week. (In fact, I need to get started roughing out the posts. And diagrams! I need more diagrams!! 😄 )

            “I just think we have to be aware that any conclusions we arrive at purely through a dialectic have to be held with caution.”

            Totally agree; it almost goes without saying (and that obviously applies to any basis of analysis).

            What I’m getting at is that the dialectic is the best (or least worst, if you prefer) tool for analysis, so working to make it better — debugging the computer program, so to speak — seems the best choice.

            I don’t think we disagree about that. I think maybe we disagree on how possible it is — to what extent “bugs” can be error-corrected? Your most recent post about dark energy touches on how science is self-correcting. Rational thought is the same way, is my premise. To me rational thought and science are close to being the same thing.

            “I’m using “emotion” to refer to human / animal drives, desires, instincts, and impulses,…”

Understood. I just get the impression that with humans you see emotions (very broadly defined) as a trump card, whereas with AI I get the impression you see them in a lesser role. We just have to be “not idiotic enough” to program them in.

            It seems to me that once you broaden the definition of human emotion to include instinct and desires such as curiosity, then: [A] we’ve already built some of these into machines; and [B] these may not be human wetware emotions at all — I contend that a true AI would be intellectually curious merely in virtue of being a true AI (that is: intelligence).

            Actually, I kind of define curiosity as a trait of intelligence. I contend that any true intelligence, regardless of substrate or programming or hard-wiring, would be curious. It’s a part of being intelligent.


          • Looking forward to the spaceship diagrams!

            On rational thought and science, I think we agree. If we disagree, I think it’s a very nuanced disagreement.

            For the emotions / AI part, I think I’ll just link to my comment in the other thread, rather than continue to double post 🙂
            https://selfawarepatterns.com/2015/04/06/should-we-fear-ai-neil-degrasse-tysons-answer-is-the-right-one/comment-page-1/#comment-9570


          • Wyrd Smythe says:

Are you saying we agree rational thought and science can transcend human emotion? Cool! 🙂


          • -_- Depends on what you mean by “transcend human emotion.”


          • Wyrd Smythe says:

            “Debug the program.” Come to valid, correct logical conclusions.


          • Subject to all my caveats above, sure! 😀

