Politics is about self interest

I’ve read a lot of history, including American history of the 18th and 19th centuries.  It’s interesting to read about the politics of these periods.  From a distance across generations and centuries, you can see the distinction between the self interested stances people took and the rhetoric that was used to justify those stances.

An example from the 18th century was the controversy about the new federal government assuming the Revolutionary War debt from the states.  Both sides of the controversy had philosophical reasons for their position, such as concern about federal power versus the benefits of establishing faith and credit for the United States.  But in general, the states that favored the idea (called “assumption”) still had a lot of war debt, while the states that were against it had paid most or all of their debt already.

This also holds for what was the most controversial issue in early America: slavery.  People’s stance on this issue seemed to be heavily influenced by the economy of their state.  In northern industrial states, slavery was becoming less economically viable and dying out, and was therefore seen as barbaric.  However, in the largely agricultural southern states, slavery remained a major part of the economic system, and was therefore seen as a vital institution.

It’s much more difficult for us to separate the stories we tell ourselves today from the self interested realities.  This is probably why some political scientists argue that people aren’t motivated by self interest when they vote.  But that idea simply isn’t backed by history or psychology.

In their book, The Hidden Agenda of the Political Mind: How Self-Interest Shapes Our Opinions and Why We Won’t Admit It, Jason Weeden and Robert Kurzban argue that self interest figures heavily into our political positions.

This isn’t something we generally do consciously.  Citing psychology research that shows we often don’t understand our own motivations, they argue that our unconscious mind settles on stances that reflect our inclusive personal interests, with “inclusive” meaning that it includes the interests of our friends and family.

We tell ourselves a high minded story, one that we consciously believe, but like the public relations spokesperson for a large corporation, our consciousness is often uninformed about the actual reasons why the Board of Directors of our mind adopts a stance.  In other words, our self interested positions feel like the morally right ones to have, and people opposed to our positions seem evil or stupid.

Working from this premise, and using data from the United States GSS (General Social Survey), Weeden and Kurzban proceed to show correlations between political positions and various demographic, lifestyle, and financial income factors.  They also periodically glance at broader international data and, although the specific issues and populations vary, find that the general principle holds.

They identify some broad factors that have large effects on our political positions, including things such as sexual lifestyle, membership in traditionally dominant or subservient groups (religion, race, sexual orientation, etc), the amount of human capital we have, and financial income.

The first factor, sexual lifestyle, generally affects your attitude on a number of social issues such as abortion, birth control, pornography, and marijuana legalization.  Weeden and Kurzban break people into two broad groups: Ring-bearers and Freewheelers.

Ring-bearers tend to have fewer sexual partners over their lives, generally making a commitment to one partner, marrying them, and having more children.  They often strongly value their commitments (which is why they’re called “Ring-bearers”).  A major concern for Ring-bearers is the possibility of being tempted away from those commitments, of their spouse being tempted away, or of their kids being tempted away from leading a similar lifestyle.

This concern often makes them want to reduce the prevalence of lifestyles that lead to such temptation, such as sexual promiscuity.  As a result, Ring-bearers tend to favor policies that make promiscuous lifestyles more costly, which is why they’re generally pro-life, oppose birth control and sexual education, and oppose things like marijuana legalization, which is perceived as facilitating promiscuity.

Of course the reasons they put forward for their stances (and consciously believe) don’t reflect this.  For the abortion stance, they’ll often argue that they’re most concerned about protecting unborn children.  But the fact that they’re usually willing to make exceptions in cases of rape or incest, where the woman’s sexual lifestyle usually isn’t a causal factor, shows their true hand.

On the other side are the Freewheelers.  Freewheelers generally lead a more active sexual lifestyle, or aspire to, or want to keep their options open for that lifestyle.  They’re less likely to marry, more likely to divorce if they do, and generally have fewer kids.

Freewheelers generally don’t want their lifestyle options curtailed, and don’t want to experience moral condemnation for them.  This generally makes them pro-choice, in favor of birth control and family planning, and in favor of things like marijuana legalization.

Like Ring-bearers, Freewheelers usually don’t admit to themselves that preserving their lifestyle options is the motivating factor for their social stances.  Again, focusing on abortion, Freewheelers usually say and believe that their stance is about protecting women’s reproductive freedom.  But the fact that pro-choice people are often comfortable with other laws that restrict personal freedoms, such as seat belt laws or mandatory health insurance, shows that personal freedom isn’t the real issue.

Freewheelers also often don’t have the private support networks that Ring-bearers typically enjoy, such as church communities, which Weeden and Kurzban largely characterize as child-rearing Ring-bearer support groups.  This tends to make Freewheelers more supportive of public social safety net programs than Ring-bearers.

The next factor is membership in traditionally dominant or subservient groups.  “Groups” here refers to race, gender, religion, sexual orientation, immigrant status, etc.  In the US, traditionally dominant groups include whites, Christians, males, heterosexuals, and citizens, while traditionally subservient groups include blacks, Hispanics, Jews, Muslims, nonbelievers, females, gays, transsexuals, and immigrants.  It’s not necessarily surprising that which group you fall into affects your views on the fairness of group barriers (discrimination) or set-asides (such as affirmative action).

But there’s a complicating factor, and that is the amount of human capital you have.  Human capital is the amount of education you’ve attained and/or how good you are at taking tests.  Having high human capital makes you more competitive, reducing the probability that increased competition will negatively affect you.  People with high levels of human capital are more likely to favor a meritocracy.  On the other hand, having low human capital tends to make getting particular jobs or getting into desirable schools more uncertain, so increased competition from any source tends to be against your interests.

For people with high human capital and in a dominant group, group barriers mean little, so people in this category tend to be about evenly split on the fairness of those barriers.  But people with low human capital and in a dominant group tend to be more affected by increased competition when group barriers are reduced, making them more likely to be in favor of retaining those barriers.

People in subservient groups tend to be opposed to any group barriers, or at least barriers affecting their particular group.  People in subservient groups and with high human capital, once barriers have been removed, tend to favor a meritocracy and to be less supportive of specific group set-asides.  But people in subservient groups and with low human capital tend to be in favor of the set-asides.

All of which is to say, more educated people tend to be less affected by group dynamics unless they’re being discriminated against, but less educated people are more affected by those dynamics.  Less educated people discriminate more, not because they’re uneducated, but because their interests are more directly impacted by the presence or absence of that discrimination.

And finally, Weeden and Kurzban look at financial income.  It probably won’t surprise anyone that people with higher incomes are less supportive of social safety net programs, which essentially redistribute income from higher income populations to lower income ones, while people with lower incomes are usually in favor of these programs.

Most people fall into some complex combination of these groups.  Weeden and Kurzban recognize at least 31 unique combinations in the book.  Which particular combination a person is in will define their political perspective.

For example, I’m a Freewheeler (relatively speaking), mostly in dominant groups except in terms of religion, where I’m in a subservient group (a nonbeliever); I have moderately high human capital (a Master’s degree) and an above average income.  Weeden and Kurzban predict that these factors would tend to make me socially liberal, modestly supportive of social safety nets, opposed to religious discrimination, in favor of meritocracy, and economically centrist.  This isn’t completely on the mark, but it’s uncomfortably close.

But since people fall into all kinds of different combinations, their views often don’t fall cleanly on the conservative-liberal political spectrum.  Why then does US politics coalesce into two major parties?  I covered that in another post last year, but it has to do with the way our government is structured.  The TL;DR is that the checks and balances in our system force broad, long-lasting coalitions in order to get things done, which tend to coalesce into an in-power coalition and an opposition one.

In other words, the Republican and Democratic parties are not philosophical schools of thought, but messy, constantly shifting coalitions of interests.  Republicans are currently a coalition of Ring-bearers, traditionally dominant groups, and high income people.  Democrats are a coalition of Freewheelers, traditionally subservient groups, and low income people.  There may be a realignment underway between people with low human capital in dominant groups (the white working class) and those with high human capital, but it’s too early to tell yet how durable it will be.

But it’s also worth remembering that 38% of the US population struggles to consistently align with either party.  A low income Freewheeler in traditionally dominant groups, or a high income Ring-bearer in a traditionally subservient group, might struggle with the overall platform of either party.

So what does all this mean?  First, there’s a lot of nuance and detail I’m glossing over in this post (which is already too long).

Weeden and Kurzban admit that their framework doesn’t fully determine people’s positions and doesn’t work for all issues.  For example, they admit that people’s stances on military spending and environmental issues don’t seem to track closely with identifiable interests, except for small slices of the population in closely related industries.

The authors’ final takeaway is pretty dark: political persuasion is mostly futile.  The best anyone can hope to do is sway people on the margins.  The political operatives are right: electoral victory is all about turning out your own partisans, not convincing people from the other side, at least unless you’re prepared to change your own position to cater to their interests.

My own takeaway is a little less stark.  Yes, the above may be true, but to me, when we understand the real reasons for people’s positions, finding compromise seems more achievable if we’re flexible and creative.  For instance, as a Freewheeler, the ideas of content ratings and of restricting nightclubs to red light districts suddenly seem like decent compromises, ones that don’t significantly curtail my freedom but assuage Ring-bearer concerns about keeping those influences away from themselves and their families.

And understanding that the attitude of low human capital Americans toward illegal immigrants is shaped by concern for their own livelihood, rather than just simple bigotry, makes me look at that issue a bit differently.  I still think Trump is a nightmare and his proposed solutions asinine, but this puts his supporters in a new light.  Most politicians tend to be high human capital people and probably fail to adequately grasp the concerns of low human capital voters.  In the age of globalization, should we be surprised that this group has a long simmering anger toward the establishment?

In the end, I think it’s good that we mostly vote our self interest.  We typically understand our own interests, but generally don’t understand the interests of others as well as we might think.  This is probably particularly true when we assume people voting differently than us are acting against their own interests.

Everyone voting their own interests forces at least some portion of the political class to take those interests into account.  And that’s the whole point of democracy.  Admittedly, it’s very hard to remember that when elections don’t go the way you hoped they would.

66 thoughts on “Politics is about self interest”

  1. Well, no. A better approximation is Drew Westen’s The Political Brain. People don’t vote their self-interest, because self-interested people don’t vote. Hell, even the Marxist idea – that people vote their class interests, not their self interest – is closer to the truth.

    Voting is a half hour to an hour or so, with an incredibly tiny chance of putting one of your candidates over the top. Unless you personally have millions of dollars riding on some particular government policy, it’s not worth it, if only the impact on YOU counts in your book. Ditto even if you fudge “self interest” to include friends and family – unless you have many thousands of friends.

    People’s self interest certainly influences which ideas and attitudes they’ll accept. Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” That’s enough to explain the correlations. But culture, “identity”, and personal experience also influence ideas and attitudes, probably more so.

    1. I don’t think Weeden or Kurzban would argue that emotion isn’t a major factor. Their point would be about what leads to that emotion. I haven’t read Westen’s book, although I’ve heard the sentiment many times. Too often, it amounts to the idea that people are voting in a way we dislike because they’re just being emotional rather than calm and rational like us.

      That may be a comforting story for the left to tell itself, but I think it’s one that actual candidates settle on at their peril. Voter messaging does have to tell a visceral emotional story, but voters won’t find it emotional in the right way if it doesn’t speak to their inclusive interests.

      Voting is indeed a hassle. Undoubtedly that’s why 40% of eligible voters didn’t do it in November, despite it being the most hyped and intense election in decades. In normal presidential election years, the share who stay home is around 45%, and for midterms it’s around 60-70%. So if the point is that many people don’t vote due to short term self interest, then yes, that’s definitely true.

      The people who do vote are simply acting within a longer view of their interests. I live in a deeply red state, but still vote even though I know my vote won’t contribute to the candidate I want to see win. Among other reasons, this time it contributed to Trump losing the popular vote by millions.

      Weeden and Kurzban spend some time justifying their use of inclusive self interest, rather than just individual self interest, referencing Richard Dawkins’ ‘The Selfish Gene’ to note that most of us aren’t evolutionarily wired to only be concerned with our very narrow individual interest. Using a narrow definition as the benchmark for whether people are self interested is a false standard.

      On your last paragraph, I agree that the correlations aren’t perfect. If you notice, the word “tends” appears a lot in the post. But on culture and identity, what determines which culture and identity we choose? Showing that a self identified conservative votes for conservative candidates doesn’t really tell us much. Showing why they’re a conservative is what Weeden and Kurzban are going for. And they have a lot of data on their side.

      1. The broad sense of “self interest” doesn’t deserve the name, unless you’re applying it to genes, rather than people. Suppose an African American takes a lower-paying job in order to work for racial justice in America. That, like all human behavior, is the result of “selfish genes”, but to call the person or their behavior selfish would be to overstretch the term beyond recognition. Nor can such a broad group of people they help be considered a matter of “inclusive fitness”.

        Most people don’t choose their culture, so much as grow up in it. Identity is also at least partially foisted on us by our experiences. I certainly agree that emotions aren’t irrational or arational, necessarily. People typically have reasons for their emotions. But those reasons need not be selfish in any useful sense.

  2. Great post, Mike, and one that makes very good sense to me. I’ve long felt that people don’t recognize their own bias, and don’t realize when they’re simply acting out a statement of their needs. I’ve seen this a lot in my own family, where as a result of being a modern concoction of two or three families spliced together, we probably have a good chunk of those 31 categories represented. I wager that’s not entirely uncommon. It makes for an interesting Thanksgiving!

    I’ve long been pained to observe the way these (unconscious?) political stances become the cause of real barriers to thoughtful communication. I take this as a call for mindfulness, for self-reflection and consideration of where other groups are coming from. And I think you’re right: when we do so, room opens up for productive compromise.

    I don’t know what category I’m in, but something of a mix I think. I’m pretty ring-bearing on a personal level, at least when it comes to my own relationships, but strongly believe to each his own and think freedom of sexuality and freedom of identity should perhaps be added to the Bill of Rights. It’s really hard to say what we should do, because politics to me is a game of scarcity the way it’s currently played. It is driven at a deep level by the unsettling feeling of ending up on the outside looking in, of the exertion of power to control an outcome, and of the massing of power to have influence, and I think the authors are right in noting that those who are remote from such an eventuality are probably less concerned by it. And this is reflected in their political outlook.

    Michael

    1. Thanks Michael!

      I know what you mean about families. My own extended family is mostly conservative, but a few of us are rebels, with one cousin being a new ager, his brother a nihilist, and me the skeptic.

      I think mindfulness and self reflection are a very good response to this kind of information. Realizing that everyone has reasons for their point of view is important. As a society, we’ve developed a habit of thinking of those who disagree with us as vile or ignorant. Compromise with someone we think of as evil is very difficult, but compromise between two groups with conflicting interests seems like a manageable problem.

      I’m not sure of your category. You seem like someone with a good amount of human capital, so that might be a liberalizing influence. But I probably didn’t emphasize enough in the post that these 31 groups are statistical averages. Probably few individuals in any one group are going to perfectly match their group’s statistical profile. I know I’m much more supportive of safety net programs than my group’s profile would indicate.

      Good point on politics and scarcity. No society can do everything. Every one must prioritize. But the act of prioritization is rarely neutral. It wouldn’t be political if there weren’t winners and losers.

  3. I was introduced to the sociologist Arlie Hochschild when writing a paper for my cultural anthropology course; she made a similar point in her ethnographic study of Tea Party supporters (I believe the book is called Strangers in Their Own Land), that people do not necessarily vote out of economic self-interest. Instead, she remarks that people vote out of emotional self-interest. For example, though many Tea Party supporters may enjoy the rich environment of their states, they may not necessarily vote for people with a strong background in environmental regulations as they would not trust the “Big Government” to support this – or along similar lines. Similarly, people are less likely to vote against their emotional self-interest (i.e. Republican, conservative, libertarian) if they fall under categories such as LGBTQ.

    1. Good point. There is definitely more to self interest than economic self interest. The question with emotional self interest is, what is behind that emotion?

      Often the calculation can come down to the question, will it be good or bad for “people like me”? If it’s good for my group, then it will indirectly increase my social standing. If it’s bad, then it will indirectly hurt it. Admittedly, the mix of all of this in any one human being is extremely difficult to untangle. It’s only in looking at large scale populations that the relationships start to become obvious.

      Sometimes the combination of factors in one person can lead to surprising results, such as Caitlyn Jenner being a conservative Republican and (initially) a Trump supporter. People, focusing on her transsexual identity, were shocked, but it makes a little more sense when you remember her sports and business background.

      1. Yeah, you do make a good point; that looking at large scale populations, relationships between different factors – human capital, ethnicity, and such – do become clear, but that the combination of factors in one person does lead to surprising results. If it weren’t for the latter, I would have to say that discovering this does nothing to alleviate any pessimism, because it means we are just conducting cognitive heuristics.
        On the other hand, similar to how former idealists confront tragic truths, it kind of makes you want to re-explore the other side of the political spectrum – to not default to the political orientation set by your background – but that may be just me.

  4. I am very tired and didn’t read your piece entire, so I am just reacting to the title.

    Consider what economics created when economists decided that economic exchanges were only driven by self interest.

    I am not saying they are not involved, just that complex issues are complex, not simple issues in disguise.

    I will read your piece tomorrow … and apologize if appropriate.

    1. Definitely self interest is complex, and maybe I should have titled this post “Politics is about inclusive self interest”, but it seemed snappier without the “inclusive” part, although that does get mentioned in the post. “Inclusive” in this case includes the interests of our families and friends, as well as “people like us”, which may have indirect effects on our social standing.

  5. Interesting post as always. I’ve actually been reading some political books lately — the latest was “Democracy for Realists”. The argument in that book is that people don’t vote their self interests, but rather engage in identity politics. They spend some time analyzing the different voting models and then debunking them. However, I wonder if the identity model and the self-interest model presented in that book are really at odds? As you stated in one of your responses to a comment, what created those identities in the first place?

    I may need to look up that book.

    As always, thank you for thought-provoking posts.

    1. Thanks BIAR! I think you’re right that both models are accurate, for what they do.

      But Weeden and Kurzban’s point is that when we’re looking for causal factors, we have to be careful not to find correlations between synonymous notions. The example they use is asking why someone likes going to parties. One response, with probably a 100% correlation, is that they’re an extrovert. That seems like a causal relationship, until we remember that part of the definition of “extrovert” is enjoying social gatherings, like parties. That means what initially appeared to be a causal explanation amounts to: they like going to parties because they like going to parties, which tells us little.

      That’s not to say that tribalism isn’t a factor, but the question is what attracted people to that tribe initially? Usually there are some key factors for that initial attraction. Once they’re in the tribe, they may adopt many of the tribe’s stances on other positions that they otherwise might have been neutral on, at least unless any of those other stances conflict with their own visceral interests.

      ‘Democracy for Realists’ sounds interesting. I may have to check that one out myself. Thanks!

  6. Okay, I had a chance to read the full post and, yes, I do believe self-interest is a factor in people’s decisions. I would be shocked to find otherwise. And … (and you knew that was coming, no?) … and the most common element of voting patterns in the country involves people voting against their own economic interests (voting for Republicans, the party of the rich, the party of wage suppression, the party of war). I suggest that people also often vote their bile. I think a great many people voted for Mr. Trump as a poke in the eye of self-righteous liberals who have referred to them as ignorant goobers who do not know their own self-interests. People also vote their religions, which have absolutely nothing to do with their real self-interest and only involve imaginary self-interests. (Voting along religious lines is often a judgment about perceived morals.)

    1. Thanks for reading it. I know it’s a long post.

      I think my response would be to ask, are you still assuming you know their interests better than they do? Perhaps a better way to look at it, are we (liberals) perhaps assessing what we think their interests should be in terms of our values, but not in terms of theirs? The work of Jonathan Haidt comes to mind here. Their interests may be in terms of values that we and they simply don’t share.

      I just read something where working class whites are turned off by liberal attitudes of trying to make it easy for them to attend college. Their reaction to college as a solution to their problems seems to be, “Don’t force your version of the American dream on us.”

  7. Just read the post, Mike. It’s very interesting indeed and I found myself agreeing with most of the groups and their biases, especially the human capital and dominant/subservient groups.

    I wasn’t able to relate to the Ringbearer one though which is surprising considering I would classify as a Ringbearer. I’m all for pro-choice and marijuana legalization. The reasons are exactly women’s freedom and freedom in general. Perhaps this may reflect that if your choice is not aligned with your groups you might actually be truly motivated by the high minded story of the opposite group?

    And as far as political persuasion goes, I think if the politicians know of the people’s biases they can use that to spin stories that those people will find attractive during campaigns, leading to a greater following and more votes (they obviously don’t have to keep the promises once in office). I think that’s exactly what Trump did.

    1. Thanks Fizan. On the Ringbearer group, I didn’t have space to make this clarification in the post (it was already far too long) but every group is a statistical average. In any group, there will be individuals closer and farther from the overall group profile. Most will be near it, but some will vary, particularly if their membership in other groups creates issues they care more about.

      I think you’re right about Trump. I even had a Trump supporter say to me a while back that he knows Trump sucks, but that he was the only candidate talking in terms of his (the supporter’s) needs. I think the rest of the political establishment has to come to terms with the fact that there’s a market for what Trump is selling.

      That doesn’t mean they have to cater to everything in Trump’s platform, but working class people in multiple countries are showing that if society doesn’t figure out how to get them on the globalization ship, they’re prepared to burn that ship down.

  8. Yes Mike, politics is about self-interest. In fact I consider conscious existence itself to be about self-interest. So if true, then what’s a useful conception of “self-interest”? As you know, I have a theory about this. Furthermore I consider this specific aspect of our nature to encourage us to deny its very existence, thus resulting in nothing short of the softness of our mental and behavioral sciences! (Of course the establishment will instead say that there is a natural softness to these fields due to measurement difficulties, thus absolving them of expected blame— a self serving position which actually justifies my premise.)

    You’re quite aware of the theory that I speak of, but if possible I’d like you to go further. If you would, please tell me why it is that (according to me) my theory encourages us to deny my theory? I consider this to be the crux of the matter…

    1. Eric,
      I have to admit that I don’t know the answer. I do know your theory, but I don’t recall us discussing this particular aspect of it.

      But this reminds me that one of the authors of the book discussed in the post, Robert Kurzban, wrote another one, ‘Why Everyone (Else) is a Hypocrite’, which he discussed in an interview with Julia Galef on a recent episode of Rationally Speaking: http://rationallyspeakingpodcast.org/show/rs-188-robert-kurzban-on-being-strategically-wrong.html

      He gives an answer about people denying reality that I found interesting, and that might resonate with the answer you have in mind. From the transcript:

      The way I think about this is when we talk about self deception, in almost every single case, what we’re really talking about is something like look, this person has this belief which somehow they really shouldn’t have. They should have a belief that they’re more likely to get a broken leg. They should have the belief that they’re a worse driver. They should have whatever belief that’s closer to what the reality is in the world.

      But they don’t have that belief. And then the question is why? And my argument is well, it’s because having the false belief is useful for persuading others about how wonderful you are.

      In that context, yeah, I actually do — and I’m reluctant to say this since we’re taping, but I do actually think that these strategically wrong beliefs are the product of evolved systems that were specifically designed to be wrong in this way that, yeah, is helpful in the long run.

      There are two ways to interpret that, one of which I think is compatible with your model and one may not be. The compatible way is simply that people are mistaken in their beliefs, but that the mistake ends up being adaptive. The one that might be incompatible, because it involves unconscious scheming, is that people unconsciously deceive themselves for strategic value.

      Sorry if this ended up being utterly outside of what you were asking.

    2. No apologies necessary Mike. This actually gives me a taste of the bemused superiority that I think my professors felt when they’d ask us students cryptic questions. Here they could feel superior since they had the answers while we did not. But then what gave them the right to feel this way, when they didn’t actually figure anything out themselves? It always grated on me how they were simply taught first whatever they were teaching us.

      Regarding the Galef interview of Kurzban, I did have a listen and enjoyed it, but no that wasn’t it. If a bad belief by chance provides someone with a good outcome, that’s just not going to be a very interesting circumstance regarding predictive theory. So that couldn’t be my point. (Perhaps I flatter myself here!) Then regarding your “unconscious” scenario, I do not fear getting into such quasi conscious speculation, so long as we’re clear that this is quite different from the vast supercomputer that I call the non-conscious mind. But as it happens I’m not talking about self deception here at all. A good hint should be that it’s extremely simple, even though I consider it to largely be why there are so many problems in philosophy and mental/behavioral sciences today.

      My theory of course is that feeling good and not feeling bad constitutes the value of anything, and therefore the value of any person. So if this is the case, then why can it be best for a person to instead display altruism?

      1. Thanks Eric.

        As an aside, not really about your theory, but I’m starting to think that the distinction between consciousness and unconsciousness doesn’t exist, at least in terms of mental processing. More and more, it’s seeming to me that the way to understand it is that all actual cognition is unconscious, with what we call “consciousness” being a secondary representation of the brain back to itself on some of its processing. We never know thoughts in and of themselves, only their introspective representations, just as we never know the outside world directly, only the representations our brains build for it.

        Anyway, on your theory, am I correct in assuming that the answer is we don’t admit it because doing so feels bad? And we do altruism because it feels good?

        The term “feel good” is a bit problematic though. I sometimes do things even though it feels bad, but I do it anyway because I anticipate later feeling good because I did it. Maybe this is the hope aspect you discussed before?

    3. Mike,
      Well yes consciousness surely is just a representation of reality. I don’t know that what I’m conscious of exists, but rather just that my consciousness itself exists. Keep going with that. But also know that from my own models there is only a vast supercomputer that I call “the non-conscious mind”, and a tiny computer that it creates which encompasses all that we idiot humans know of existence (“consciousness”). I avoid the “unconscious” term when I can, since in practice people seem to use it to mean a kind of melding of the two minds, and without even acknowledging the existence of the basic one that creates the other. Thus it can be problematic when I say “not conscious” or “non-conscious”, since the interpretation seems to come back as “quasi conscious” or “unconscious”. To me we must first acknowledge the two basic forms of computer before we get into their melding. So I instead like to use the “subconscious” term for this.

      Regarding your answer, you seem to be saying that we don’t admit our selfishness because this feels bad to us, and that we do do altruism because it feels good to us. Well sure, but given my premise that feeling good and not bad is all that matters to anything, those are simply tautologies. (Flashbacks to pompous professor bemusement, so I do understand! Furthermore I’ll again state that I suspect you to have ten times the mental processing capacity that I do, so I certainly don’t consider myself superior.) A simple scenario may clear this up. From there I should be able to make my point about how this dynamic causes philosophy and our mental/behavioral sciences to remain so soft.

      Let’s say that I am some kind of estate agent and that you are a potential buyer. So then what’s my job? Do I present you with an accurate representation of reality as I see it, which among other things may be to demonstrate my own need for you to buy from me? Or do I instead use my theory of mind skills to assess whatever it is that you believe and desire, and so play upon those concerns to encourage you to want to buy from me?

      So then let me ask again, why can it be best for me (as salesman) to display altruism rather than my own desires? (And of course in life we’re all buyers and sellers. But cheers for getting my “hope” dynamic down, which is complemented by “worry”.)

      Mike as you know I’ve been following Massimo Pigliucci for years, but for some reason I’d never delved into his Rationally Speaking podcasts. Then this Kurzban interview got me thinking that “Hey, this Julia Galef is pretty damn sharp!”. So I went back to episode #1 where she even took Massimo to task for his belief that philosophers, with no generally accepted understandings in their field, still provide humanity with associated expertise. (In June I challenged him here as well: https://platofootnote.wordpress.com/2017/05/29/the-metaphysics-of-constitution-and-bodily-awareness-a-case-of-philosophers-studying-chmess/comment-page-4/#comment-21511)

      I must say that I’m far more impressed with Jason Weeden than Robert Kurzban. It’s kind of like how I liked Jon Mallatt for rolling his eyes on the Ginger Campbell show regarding inside and outside senses. They’re all just senses, though he still had to pacify his partner and go along with that interoception and exteroception business. In life we must make compromises to get what we want, which is actually my point.

      1. Eric,
        I’ve actually become pretty leery of the word “subconscious”. Apparently a lot of people take it as meaning a separate “subterranean” consciousness from the one we experience. Since I never mean that by the term, I’ve decided to stick with conscious and unconscious, or non-conscious. I’m not sure what you mean by “quasi-conscious”.

        For me, the crucial distinction is whether we can introspect it. If we can, then it’s part of consciousness. If we can’t, then it isn’t. The only grey area is maybe stuff that happens within the scope of introspection that we never actually introspect, but my tendency is to include it in consciousness.

        My point on representation isn’t about the outside world, but of our own mind, although the issues are similar. We can never have first hand knowledge of any of our cognition. Our only knowledge of any cognition is through the introspection mechanism, and the accuracy of what we think we know is only as accurate as the introspection mechanism. And introspection evolved to be effective, not necessarily accurate.

        “So then let me ask again, why can it be be best for me (as salesman) to display altruism rather than my own desires?”
        Because it might engender goodwill that might increase long term sales more? Alternatively, the salesman might get benefits unrelated to whatever he’s selling, such as maybe giving samples to a cute coed he might like to date, although I guess he’s still being a salesman in that case, just of a different product.

        I’ve actually been listening to Rationally Speaking for years. It’s been a long time since I listened to that first episode. Yes, Julia is excellent, but I do miss the interplay that she and Massimo used to have, although he sometimes had a tendency to be a bit too bossy.

        I actually found both Weeden and Kurzban impressive. But the Weeden interview is what got me to read their political book, and I have to admit I haven’t bitten yet on Kurzban’s solo book.

        I don’t remember Mallatt taking that stance about exteroception and interoception. In my mind, it’s a useful distinction, although I’m never clear on which side touch is supposed to fall. I think Feinberg and Mallatt treat it as a sense on the border between the two, although I’ve read others who reserve the term interoception for only internal body senses.

    4. Mike,
      Here’s how I see mental classifications in general. There is a vast supercomputer that is not conscious in my head, and I call this a “non-conscious mind”. Furthermore this vast computer is so advanced that it creates a separate form of computer that constitutes what I know of existence, and I call this a “conscious mind”. Of course everyone uses the “conscious” term, and at least some scientists today use the “non-conscious” term (not that this term yet has a home at Wikipedia), but I don’t know of anyone other than myself who says that there is a vast non-conscious computer that creates the conscious form of it. Can you or anyone effectively criticize my position here? To this point I’ve neither noticed complaints about it, nor that anyone other than myself plainly states it.

      Anyway if it’s effective to say that there are simply these two forms of computer in my head, then any useful distinction other than “conscious” and “non-conscious” could only be a melding of the two. Freud must not have had any idea what a “computer” was back when he was developing his theory, though it’s pretty clear that his “unconscious” term, as well as the standard “subconscious” term, were never meant to be all conscious or all not conscious. So today I would have us interpret them as meldings of the two basic forms of computer. They could be considered “quasi-conscious” in the sense of “partly”. Furthermore today I’d actually rather that we stop using the “unconscious” term altogether, given its potential to be interpreted as “non-conscious”, or one of the two basic forms of computer. I’d prefer for the “subconscious” term to take the melded role exclusively, though it’s not that big a deal to me. The main thing is that my model itself becomes generally accepted.

      On your introspection test from which to decide what’s conscious and what isn’t, I’m fine with it. I believe that all of the elements of consciousness that I’ve identified in my own model, the three forms of input, the one form of processor, and the one form of output, are introspection privy.

      In the sales scenario, apparently I wasn’t direct enough. The “display altruism” position wasn’t actually meant to provide altruism, but rather to make the potential buyer think that the seller is being altruistic. Perhaps I should have said “fake altruism” or something like that? Anyway it’s my position that the most effective sales people tend to quickly get a sense of what their clients want, and so are able to manipulate them to their own purposes. Given their abilities to make others feel good, I find that these are the sorts of people that tend to be most liked in life in general.

      Let’s try this one final time: Why can it be best for a person to display altruism rather than to actually behave altruistically?

      Mallatt let his difference of opinion on the senses be known from around minute 9. I get the sense that this book is essentially Feinberg’s, though perhaps Mallatt was brought in to help sell it? Could it be that Feinberg knew that he needed a more likable person, thus demonstrating my position, as well as the position of this post that politics is about self interest? This wouldn’t surprise me at all. And who’s more likable than Jon Mallatt?

      1. Eric,
        On conscious vs unconscious vs non-conscious vs subconscious, I think we’re talking about three categories of brain processing.
        1. Processing that happens when we are awake and responsive that is within the scope of introspection.
        2. Processing that happens when we are awake and responsive that is NOT within the scope of introspection.
        3. Processing that takes place regardless of whether we are awake and responsive.

        It seems like we’re agreed that 1 is consciousness. We usually refer to 3 as autonomic processes. The question, it seems to me, is what to call 2. I think when we talk about unconscious or non-conscious processes, most people understand we mean 2 rather than 3.

        “but I don’t know of anyone other than myself who says that there is a vast non-conscious computer that creates the conscious form of it. Can you or anyone effectively criticize my position here? ”

        I don’t know of anyone else who phrases it exactly like that, but I perceive there is wide consensus in psychology and neuroscience for something like it. My criticism is that you seem to posit a very sharp divide between conscious and non-conscious processing, referring to them as separate computers. (Although your use of the term quasi-conscious might indicate otherwise?) As we’ve discussed before, I’m not comfortable that it’s that clean.

        We’ve talked before about a concept I think we agree on, the imaginative simulation engine. It seems like your version of consciousness matches this pretty closely. However, my own understanding is that the simulation engine is crucially dependent on information from throughout the neocortex, forming what some neuroscientists call the GNC (general networks of cognition). And I think even most of what happens in the simulation engine falls outside of consciousness. I used to think the dividing line was between the details of the simulations and the results, but now I’m not even sure about that.

        Only some of the results seem to be noticed and modeled by the introspection mechanism. Note that the introspective model is not the original processing, but a simplified, streamlined, and summarized version, effective perhaps as a feedback mechanism for the simulation engine. We never have direct access to the original processing, so technically all of the simulation engine is outside of consciousness.

        This introspection / metacognition / feedback mechanism by itself seems like definitely a small part of the overall system, but I’m not sure it’s what you necessarily have in mind for the conscious computer. (Although as always I could be mistaken.)

        “Why can it be best for a person to display altruism rather than to actually behave altruistically?”
        My response has to be similar, to make people think well of them, to enhance their reputation, which might be useful to them for a wide variety of purposes, including larger future sales, or other social standing dynamics. But maybe the answer you’re looking for is because it makes the potential customer feel good?

        On Mallatt vs Feinberg, I don’t know. To be honest, I can’t recall much of Mallatt’s personality. I was enthralled at the time with the topic. And I can’t recall if I’ve seen or heard a Feinberg interview. I do know the overall book remains something I’m very impressed with.

  9. Mike,
    First off yes, that’s the answer that I was looking for. We are naturally encouraged to portray ourselves to be altruistic so that we might benefit from what others might give, even though our own happiness should be all that actually matters to us in the end. It is largely because we are naturally encouraged to deny what’s valuable to us, that I consider philosophy and our mental/behavioral sciences to remain so soft. I’ll come back to this later when I have more time, though this is certainly the right sort of post for such speculation. For now I’ll get into the mental terms that we’ve been using.

    You’ve presented the following as a definition for consciousness. “1. Processing that happens when we are awake and responsive that is within the scope of introspection.”

    There are some differences between this definition and my own however. From my definition consciousness does also occur outside of wakefulness. The dreams we have as we sleep are one example. Then beyond sleep there are all sorts of mental states in which consciousness gets skewed, but exists nonetheless. Alcohol changes things, as does cocaine. I’ve invented a term called “sub-conscious” to reference degraded conscious states, spoken with a slight pause (not to be conflated with the “subconscious” term that we’ve also been discussing). As I define the term, full sedation is required (at least) to eliminate consciousness.

    Then there is the question of whether or not consciousness must be introspectable. Well from my model… sort of. I do consider all aspects of my consciousness to concern things that I could theoretically ponder, should I have the faculties to do so. Note that experiences like dreams can be tough to consider. Instead of relying upon introspection however I simply reference the three forms of input, the single form of processor, and the single form of output to define where consciousness does and does not exist.

    Next was “2. Processing that happens when we are awake and responsive that is NOT within the scope of introspection.”

    So here you’re using the wakefulness concept again, and now to represent the “unconscious” that can’t be introspected. But then I believe that Freud wanted a term that didn’t simply concern wakefulness, and so I believe that people in general include dreaming and such to be associated with the “unconscious.” Anyway here people seem to mean a melding of the two basic computers that I propose (not that people other than me cleanly acknowledge the existence of the two). So the unconscious can be seen as partly conscious or quasi-conscious, though I’d prefer that we use “subconscious” so as not to be confused with the non-conscious computer.

    Finally there is “3. Processing that takes place regardless of whether we are awake and responsive.”, and you call this “autonomic processes”. This seems closest to my “non-conscious mind” term, as long as it’s algorithmic rather than mechanical, as well as excludes the conscious form of computer that it creates. As I define the term I’m using a “non-conscious mind” to write you this response right now (made by Motorola, actually). This sort of computer simply takes inputs and processes them algorithmically for outputs, though they have no purpose in the sense that nothing can be good or bad for them. Their existence is personally inconsequential.

    Anyway given the mess we have today, it’s understandable to me why it would be hard for you to trust a “clean” system like mine. But then what’s wrong with it? I theorize a normal computer for which existence was personally inconsequential, though I suspect that it had limitations given that it couldn’t be programmed well enough for open environments. So a second computer evolved on top of the first to solve this inadequacy. This one had to personally decide things given that existence could be good and bad for it. With these two forms of computer the only thing that should otherwise exist is a “quasi” melding of the two.

    It sounds like we’re in agreement that the conscious is dependent upon the non-conscious. I used to say that conscious processing should be less than 1% of the non-conscious processing, but that seems too high. I recently switched to a less than one thousandth of a percent figure.

    1. Eric,
      Good point about dreams. My strong tendency is to include them in consciousness since we can obviously introspect them (at least sometimes, retrospectively, to some limited degree). Or maybe they fit into the quasi-conscious category. I can’t recall reading anyone who explicitly relegated dreams to the unconscious. (Admittedly, I’ve never actually read Freud.) It implies that any classification schemes we come up with (like my awake and responsive one) are going to be artificial to at least some extent.

      On whether consciousness has to be introspectable, I’ve been struggling with this question for the last several months. A lot of literature (including F&M) talks in terms of primary, sensory, or phenomenal consciousness as though it is independent of introspection. But we know that people can process sensory information without being conscious of it. Which makes me wonder, if there’s no inner eye watching it, does it make sense to talk about inner experience? And is sensory and action processing without that inner quality what we mean when we use the phrase “subjective experience”?

      “Anyway given the mess we have today, it’s understandable to me why it would be hard for you to trust a “clean” system like mine. But then what’s wrong with it?”

      This comes back to those conversations we’ve had in other venues about being too divorced from empirical data. My issue with it is that I don’t know that it’s validated by the evidence from psychology and neuroscience. I know you don’t hold psychology in much regard, but I wonder how you account for neuroscience like Roger Sperry’s split-brain patient experiments, which seem to show that consciousness is not in control.

      And like I’ve said before, even if we equate consciousness with imaginative simulations, the prefrontal regions which coordinate the simulations depend on and interact heavily with the regions involved in various forms of perception, which exist throughout parietal, occipital, and temporal lobes. Indeed, the meat of the simulations can be said to span large portions of the neocortex. I’m not sure where we could coherently draw the border.

      For example, if I look up from my desk and see the bookshelf in the corner of my office, the conscious experience of my seeing the bookshelf, of its grey color and overall shape, of the books on the shelves, is crucially dependent on interaction with the posterior temporal lobe. Remove the posterior temporal lobe, and you remove my ability to perceive visual concepts such as the bookshelf. I don’t think the temporal lobe is itself conscious. Yet the information it provides is crucial for my conscious visual experience of it.

      And if we do take psychological studies into account, like the ones the thesis of Weeden and Kurzban’s book are based on, then a substantial portion of those simulations appear to be unconscious (or sub-conscious, or however we choose to label it). How else can our minds form political stance strategies that we’re not conscious of?

      All of which keeps bringing me back to the introspection standard, with all its potential consequences.

      “I recently switched to a less than one thousandth of a percent figure.”

      That’s a pretty low number. I wonder if you mean it literally? If so, it would mean that, of the 86 billion neurons in the human brain, less than a million are involved in consciousness. Perhaps, but that doesn’t leave much substrate for anything more complex than what happens in a bee’s nervous system. In my conception, it would only leave space for the very core nexus of the information flows.
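
      (To spell out the arithmetic, assuming the figure is meant literally: one thousandth of a percent is a factor of 1/100,000, and 86 billion divided by 100,000 comes to 860,000 neurons.)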

  10. Mike,
    It sounds like you’re rethinking the “wakefulness” criteria, given that dreams can be introspected. And apparently you’re still deciding if you want to assert that something must introspect in order to be conscious. My own advice there is to categorize traits which represent whatever you can introspect, and then use them to define consciousness in general for all subjects which are suspected to harbor them. There should be “something that it’s like” to exist as these subjects, even when they don’t introspect.

    Of course there is no true definition for consciousness or any other term, but rather just more and less useful definitions in the context of an argument. I believe that the criteria which I provide for consciousness happens to be quite useful in general, so I’m hopeful that this definition will one day become generally accepted.

    My own ideas were developed through introspection rather than through empirical evidence, but that doesn’t mean that they should be considered divorced from such evidence. When I read articles that present evidence of our nature, I’ve certainly noticed consistency with my models. Can they also predict what a given study will end up finding? If my models ever become prominent enough to warrant such testing, I suspect that they’ll do just fine there. But what I find most problematic right now is showing others how my ideas work. I’d very much like for you to gain proficiency in them so that you can effectively test them out yourself.

    I consider dreams to be examples of conscious rather than non-conscious processing, though concerning a degraded state of consciousness (sleep). Drunkenness would be another example of degraded consciousness. Some people should never experience anything other than degraded states, such as through Down syndrome. But I don’t know of an existing term that represents these “sub” levels of consciousness in general, so I’ve invented my own. This is “sub-conscious”, spoken with a slight pause at the mandated hyphen.

    Then as for “subconscious” (or even “unconscious”), I use this term to represent ideas that can be said to meld the two computers. For example I could be angry with someone, as well as consciously understand why I feel this way. But then again I might decide that I’m not actually angry, and yet feel signs of anger regardless. So my non-conscious mind should be giving me these feelings, which appropriately affect my behavior, or “subconscious anger”. Actually this suggests that language plays a role here. A dog should simply do what it feels without telling itself that it is or is not angry. Conversely we have the potential to trick ourselves through language, and thus blur conscious behavior with non-conscious influences. Anyway this dynamic should represent how our minds form political stance strategies that we’re not conscious of.

    Regarding the automatic skills that we learn however, I do not classify them as “subconscious” at all. We subcontract much of what we do, such as typing and enunciating words, over to the vast non-conscious mind for it to take care of.

    I’m not sure how split brain patients challenge my theory, given that the non-conscious mind is such a prominent aspect of my model.

    As far as my assessment that consciousness should be less than one thousandth of a percent of the total, perhaps I haven’t been clear what I mean by this. Let’s say that various neurons fire in a way that causes you to feel pain. I’m not classifying such firing to be part of the conscious computer, but rather the non-conscious computer. For the conscious computer there is only the pain, and this occurs through output of the non-conscious computer. All inputs, processing, and output of the conscious computer must occur through the “hardware” of the non-conscious computer, though they don’t exist as those structures. So I guess I’m saying that zero percent of the brain is conscious, if we mean the material stuff by which the non-conscious produces the conscious. Nevertheless as output of the non-conscious computer, I’m saying that a second computer is created that constitutes what I know of existence.

    So if zero percent of my brain is conscious, you might ask, “where” does my consciousness exist? Well I don’t actually know, though I presume it to be somewhere around the brain, since I consider the brain to house the non-conscious computer which produces consciousness.

    If this is the case then why would I say that the conscious processor does any processing at all? Why would I say that it does less than one thousandth of a percent of what the non-conscious side does? Well I’m quite sure that I interpret the sensations that I experience, as well as construct scenarios about them in my quest for happiness. For example, I’m currently telling my non-conscious mind to cause my fingers to write these words to you. Whatever conscious processing capacity it takes for me to figure out what I want to tell you, I imagine it’s useful to also say that there is over 100,000 times as much processing that happens on the associated non-conscious side of things.

    This is definitely helping me get a better grasp of what I mean, and I do hope it’s translating over to you as well.

    Liked by 1 person

    1. Eric,
      The awake thing wasn’t meant to be a rigorous criterion, just a rough border between what we were calling the unconscious and functions that continue when we’re not aroused. Arousal might be a slightly better criterion, since dreams are more likely to happen, or at least be remembered, when we are in REM sleep, in other words, when we’re more aroused than we are when in a deep dreamless slumber, albeit not to the point of being aware of and responsive to the environment.

      “There should be “something that it’s like” to exist as these subjects, even when they don’t introspect.”
      This is the part I’m not sure about. If they can’t introspect, then they don’t really have any ability to know their own experience, to know their particular “something that it’s like”. If that’s true, is it really like anything to be them? Don’t get me wrong, there’s definitely perception, attention, and affect processing happening, but all of those things seem able to happen in us without us being conscious of it. If those things are happening in them, but without any inner eye to perceive it, is there any inner experience taking place?

      Agreed that there is no true definition of consciousness (or of any other term), but there are definitions that are closer or farther from our intuitions about it. Our intuition of our own consciousness is that if we can’t introspect it, it’s in the unconscious. But our intuition of animals is that if they exhibit non-reflexive behavior, they’re conscious. But if we can exhibit non-reflexive behavior unconsciously*, then these intuitions are in conflict with each other, and one may have to give.

      *I’m actually not entirely sure this is true yet. It might be that the unconscious is more limited than this. I’m wondering if there’s evidence one way or another.

      I’m not necessarily trying to persuade you here, but to see if you (or anyone else) has arguments to counter these points.

      On your ideas, I’d actually encourage you to test them now, by reading about what science is currently finding. No one else is going to test them unless they’re consistent with the substantiated studies that are already out there. My own understanding is constantly being fine tuned by everything I read. It’s an approach I highly recommend.

      I would say that if you’ve consciously decided that you’re not angry with someone, but you still find yourself snapping at them, that you’re still unconsciously angry with them, and it might take substantial reasoning to figure out why. Of course, per the next thread, it could also be because you didn’t get enough sleep last night, or ate something that’s upsetting your stomach, and your brain is misinterpreting your interoceptive state as anger.

      On the split brain patients, it isn’t the non-conscious aspect, but the role you describe for consciousness. For example, you say, “I’m currently telling my non-conscious mind to cause my fingers to write these words to you.” I did a post on the split-brain experiments a long time ago: https://selfawarepatterns.com/2014/03/24/consciousness-the-interpreter-the-lexicographer-the-reporter/ (Note: I was very taken back then with Graziano’s attention schema theory, and so tried to relate those findings to it, but no need to go down that particular rabbit hole.)
      The TL;DR is that you might think you’re telling your non-conscious mind to type, but the unconscious underpinning mechanisms of your consciousness might be telling it that it’s responsible for what’s happening.

      Definitely these discussions always help sharpen my thinking. Thanks for engaging in them!

      Liked by 1 person

  11. Mike,
    On not being sure that “there is something that it is like” for something to be in, let’s say “pain”, that is if a given subject doesn’t have something that contemplates this feeling, I think this must simply be a matter of definition. If something feels bad or good, as I define consciousness, it is conscious — there will be something that it’s like to be it, by definition, given “pain”. I suppose this is tautological, but then I also don’t know how “introspection” is being defined such that this inner eye would be necessary for something to feel good or bad. If it is necessary, then such a thing will necessarily have introspection as well. Though we may be talking in circles here, we’ll need consistent terms in order to break out of that.

    I do read about what science is currently finding. Sometimes I even get such information from your site and twitter feed. When I read about interesting new studies I naturally relate them back to my own models to see if they’re consistent with these findings. This material brings the potential that my models will need to be altered or even be abandoned. But the thing is, when I tell people that various types of data seem to support my own models, this just isn’t convincing. How could it be if others don’t understand how my models work? So as I see it, that’s my biggest problem right now. As I recall, you told me that this would be the case around the time that we met. I’ve never disputed your claim, but nevertheless try to help others understand.

    It sounds like we’re square on “unconsciously angry”. There are lots of things that we don’t consciously acknowledge, even though we behave otherwise. So let’s say that you’re put into a confrontational position regarding someone’s model of how something works. The natural tendency should be for you to oppose it, given the position that you were put into. As the findings suggest, you should also see no associated bias there. We’re all nothing more than human.

    On split brains, yes I believe that I’m telling my non-conscious mind to operate my fingers and such. Why? Because I realize that I don’t know anything about the amazing associated robotics that must actually be required to make them do what they do. I don’t know how these muscles work, or even how many I’m using each moment. Still I feel like I’m consciously operating them given that they automatically take instruction so well. So this must be yet another illusion — the non-conscious computer must be doing this rather than me, even though it does feel like I’m doing it. Thus I’m comfortable saying that I’m responsible for less than one thousandth of a percent of total mental processing.

    Let’s see if we can get back on topic. I began by asking you why, given my theory that personal happiness is all that matters to anything, we’re encouraged to deny that personal happiness is all that matters to us. I theorize that our denial of our selfishness largely holds philosophy and our mental/behavioral sciences back. You answered that perhaps we deny our selfishness so that others will think better of us and thus help us get what we want. Yes that’s how I see it as well. It’s quite ironic that we must portray ourselves to be altruistic in order to most effectively be selfish, and yet this does make perfect sense for a highly social creature.

    That Aeon article that you linked to in the post under “don’t understand our own motivations” demonstrates what I’m talking about here. It should be extremely important for us to have a sense of the mentality and general attributes and liabilities of the people that we come across in life. Thus we evolved to profile others in associated ways. So a person that is not clueless here should develop various prejudices about people (which is not to say that all prejudices are inherently effective).

    Then consider the effects of this dynamic upon various people who are generally considered inferior in certain ways. Given a modern culture that tries to deny differences among people (since acknowledging them might be repressive to some, and portraying altruism better gets us what we want), what do we do? In order to reap the benefits of being perceived as altruistic, we must publicly deny that such differences exist, even as we still act like they exist, as the studies find. This is of course quite consistent with my models.

    Beyond such modern “political correctness”, I’m saying that many centuries of philosophy have been tainted by our incentives to deny that our own happiness is what’s ultimately valuable to us. So instead it was decided that there are “moral oughts” which exist, and so strong has been the incentive to deny our selfishness that when our mental and behavioral sciences emerged, they fell in line with this paradigm as well. This, I think, is largely why they remain so soft. So far they’ve been unable to overcome the social tool of morality, and so haven’t even attempted to identify what’s ultimately valuable regarding existence. The mental theory that I’ve developed is actually a side project to my main project, or a demonstration of what I’m able to do given that I, conversely, do not attempt to deny our selfishness.

    My central theory is that there is an element of consciousness which constitutes all that’s valuable to anything. This is the stuff that motivates a fundamentally different kind of computer that creates teleological existence. My plan is to institute a “real” as opposed to “moral” form of ethics, and thus re-found our mental and behavioral sciences upon a more solid platform.

    Is politics about self interest? Well yes, but there is so much more to say as well…

    Liked by 1 person

    1. Eric,
      My conception of introspection is that it’s not necessary to feel good or bad, but it is necessary to know whether we feel good or bad. In humans, the feeling and knowledge of the feeling are inextricably entangled. There’s a concept in marketing called “the curse of knowledge”, the idea that it’s very hard for us to imagine not knowing something we know, particularly if we’ve known it for a long time. In the case of pain, we’ve always known when we feel it, so the idea of feeling it without knowing about it seems strange. Still, you and I once discussed the idea of pain signals being received by the brain without us being conscious of it, and whether that actually counted as pain.

      That said, after posting my comment yesterday, it occurred to me that I need to learn more about the human unconscious. So I started reading Leonard Mlodinow’s book ‘Subliminal: How Your Unconscious Mind Rules Your Behavior’. I’m a little leery of reading a book on the mind by a physicist given my past experience with physicists and consciousness, but neuroscientist V.S. Ramachandran recommended it, so I’m taking a chance. Hopefully I’ll be on firmer ground with the unconscious once I finish, and will gain some insight into this question.

      Although given the debate between higher order theories of consciousness and first order theories, I suspect there will still be room for doubt. It turns out my dilemma is at the heart of this debate, about whether raw unreflected sensory and affect processing can count as phenomenal consciousness. Actually, reading a bit about higher order theories has somewhat weakened their plausibility for me. But one thing I can say regardless: the meta-representations in human consciousness make it qualitatively different from whatever experience most animals have.

      Glad to see that you’re keeping up with the science, and happy to know that my twitter feed is useful for you. On conveying your theory, maybe the best approach isn’t to tell people that such and such data support it, but to describe it (succinctly, which I know is often a challenge). As we’ve discussed before, I do think you risk triggering reflexive opposition when you talk in terms of your own theory being revolutionary and speak of entire fields as being incompetent. You might make more headway by just dropping those aspects of your pitch.

      On the one hand, I do agree that a significant portion of academia is plagued with political correctness. I think much of the animosity toward evolutionary psychology comes from this (although admittedly some of it comes from appallingly low quality evo-psych posturing). People don’t want to hear how much genetics affects their view of the world. But science should uncover whatever it rigorously can.

      I also wouldn’t deny that centuries of philosophy are plagued by confusion. Indeed, everyone seems to agree on that. We just tend to disagree about which parts are clear eyed and which are confusion. But I have more trouble seeing what you’re talking about for the social sciences. Don’t Keith Frankish’s Aeon article and Weeden and Kurzban’s book indicate that this problem isn’t as pervasive as you perceive?

      Or perhaps a better question is, what are some examples of social science that you see as suffering from the problems you identify? Or can you link to well accepted principles that are examples of what you mean?

      Another possibility is to describe predictions your theory makes that more conventional ones don’t, then show how the data validates those predictions.

      Anyway, just some thoughts on possible approaches.

      Liked by 1 person

  12. Mike,
    Thanks for the clarification on your conception of introspection — or that it isn’t required to feel good/bad, but rather just to “know it”. My “lower order” definition for consciousness however mandates that consciousness exists with the feeling, not with any introspective knowledge about the feeling. (Not that it’s “true”.) For a person who has never felt pain, and thus can’t know about it, an experience of pain will still be an example of his/her consciousness as I define the term. Also it could be that a given person has felt it in the past, but has absolutely no memory of it (and in my model, memory occurs through a second form of input to the conscious processor). For this person the experience of pain will also demonstrate consciousness, even without introspection from which to effectively interpret this input and construct scenarios about how to potentially alleviate it.

    Though I present a lower order definition for consciousness, understand that it would make no sense if a given creature felt pain simply in order to automatically react, or without interpreting such an input and constructing scenarios about what to do next. In that case it might instead simply be programmed to react based upon standard computational inputs. Except for flukes of evolution, as well as debilitated states of it, consciousness should not generally exist just for the hell of it. If ants or flies can feel bad or good (?), this should be because effective consciousness was adaptive, and according to my own “why” of consciousness, these creatures would have needed to teleologically figure certain things out to live as they do. The theory is that standard programming doesn’t suffice in more open environments.

    My current thought on pain that’s signaled but not fully felt is that it’s not the signaling that constitutes the pain, but rather exactly what’s felt. We’re all familiar with being under trying circumstances, perhaps with an associated adrenaline rush, and in this state somehow getting injured. Given the circumstances we may put that pain aside for a while. But I’m saying that it’s whatever is actually felt that exists as input for the conscious form of computer, not what might be expected to be felt given neuron firing and such.

    I’m happy that you’re looking harder at modern conceptions of what’s meant by “unconscious”. Our discussion has certainly been helping me get my own thoughts straight here. I’ve now looked up Leonard Mlodinow and suspect that I’ve gotten a reasonable sense of his position from the following video: https://m.youtube.com/watch?v=NJ-IfVHJH58

    His presentation provides me with nothing to complain about. Here you might wonder how my own models deal with the automatic dynamics of our function that he presented. Well if you recall, I provide a “Learned Line” by which the tiny conscious processor passes off a great deal of what is supposedly done “consciously”, for the non-conscious processor to instead take care of. As far as I can tell all of his observations may effectively be placed under this conduit. Whether such dynamics are referred to as “unconscious”, “subconscious”, “subliminal” or something else, the point is that there’s a vast non-conscious computer that both constructs as well as aids a tiny conscious computer. Under the presented circumstances it seems effective to say that there is a melding of these computers.

    I’m quite aware that I trigger reflexive opposition when I talk of “revolution” and such. But still it’s difficult for me to say what people want to hear rather than describe my position itself. This is one of many reasons that I need either you, or someone like you, to grasp the full implications of my models. I suspect that such a person would then be inclined to decide “Yes, this project does seem worthy. I’d like my own work and name to be associated with it”. Todd Feinberg needed Jon Mallatt in this capacity as well I think, so I don’t believe that my circumstances happen to be all that unusual.

    As far as philosophy goes, I really have found lots of great ideas in it since I began earnest exploration back in 2014. I didn’t expect this to be the case. Nevertheless I will not stand by a profession that supports epistemic dualism, or a “science stuff” and a “philosophy stuff”. Naturalism mandates that there can only be one kind of stuff. I’ll need a diplomatic person with me to make this case without angering those who have a vested interest in maintaining the status quo.

    To the extent that philosophy explores the nature of reality, it must be consistent with the rest of science, and indeed, it must be science. Today philosophy has developed a respectable community, but unlike science this community hasn’t developed its own generally accepted understandings. I consider this void to be holding science back.

    (I think that David Hume was right that “is” cannot beget “ought”, though as a sentimentalist it seems to me that he didn’t quite get the reason right. He was right that “is” can’t get you “ought”, but the reason is that moral oughts don’t actually exist; “is” is all there is. I seek the establishment of an amoral form of ethics, and I believe that this will help harden up our soft sciences.)

    As far as that Aeon article and the Weeden/Kurzban book, my ideas would be extremely suspect if there weren’t scientists corroborating them. Fortunately for me they do often enough. And beyond such individual examples I enjoy that the science of economics itself happens to be founded upon the premise of my ideas. But unfortunately as a side science this field seems to be disregarded here. As an economics undergraduate (and I wish I’d saved the book!) I recall a disclaimer that read something like this: “We economists study how people tend to behave, not how it’s best for them to behave. To do otherwise would get into value judgements, or the separate realm of philosophy.”

    “Or perhaps a better question is, what are some examples of social science that you see as suffering from the problems you identify?”

    A general observation of such suffering is that these fields continue to remain soft, thus diminishing their ability to provide useful results. Obviously their reproducibility crisis is a symptom of this circumstance.

    “Or can you link to well accepted principles that are examples of what you mean?”

    There was a Crash Course psychology video about motivation that addresses this. (https://m.youtube.com/watch?v=9hdSLiHaJz8) They called their first option “instinct”, which is what I call it as well (whether mental or mechanical). So to me they got that part right, though instinct isn’t the full picture. Then they mentioned a second potential form of motivation, “drive reduction theory”, which concerns positive and negative stimuli. Well that’s the big one! That’s my theory! But it seems to me that they didn’t realize what they had, given that they didn’t understand the power of “hope” and “worry” — as if a simple hunger strike is able to contradict this theory. It doesn’t! So they went on to some clearly flawed theories, and apparently the whole question is left unresolved today.

    I’ll keep my eyes out for data that my theory seems able to explain better than other theories. If I’m right this shouldn’t be too hard to find.

    Liked by 1 person

    1. Eric,
      I apologize if this response seems miserly. I didn’t want to be unresponsive for too long. It’s been a crazy week so I’ve had trouble sitting long enough to compose anything.

      I agree that the signal, in and of itself, isn’t pain. Pain requires an evaluation, which happens in the brain (in humans in the anterior cingulate cortex). And I totally agree that a damage signal that just results in reflexive action isn’t pain. It has to be a communicated state that is used in an evaluation of possible actions, in other words by the imaginative simulation engine (in my understanding).

      Thanks for that Mlodinow video. Good find. I’m still reading the book, but his remarks are a valid sampling of what he covers in the book. So far, everything he describes fits into the category of automatic behavior, either as instinctive reflexes or as habits. That’s good for the simulation engine understanding of consciousness.

      On higher order theories versus first order ones, I don’t know if there will ever be a way to resolve this question. We can only say that higher order meta-representations are integral to human consciousness. We can’t imagine raw perception without them. We can say that animals have primary or sensory consciousness, but access (higher order) consciousness appears to be far more limited. I might do a post on this at some point.

      I don’t know that the whole philosophical profession supports epistemic dualism (if I understand what you mean by it). But I’ll grant that a large portion do. Massimo’s insistence that H2O and water are distinct concepts comes to mind. Ned Block’s recent distinction between philosophical and scientific reduction is another. These distinctions strike me as unproductive, obfuscatory rather than clarifying. I think they’re distinctions brought up to retain questionable concepts. I encounter this every time I say that the conscious experience is the neural firing pattern, rather than talking in terms of correlates, although I can understand it when people want to be epistemically cautious.

      I think the only way that philosophy can find the consensus you’re looking for is to simply fall back to what science can or cannot show. Many scientists might agree, but it largely eliminates the subjects philosophers tend to ponder. There’s no scientific data that can tell us whether a doctor should put a suffering terminal patient out of their misery. Or whether the Ship of Theseus or a copied mind retains the same identity. Ultimately there are no facts of the matter for these questions. Consequently, there will likely never be a consensus on them.

      You’ve referenced economics favorably a number of times, but surely you know it gets just as much opprobrium as the other social sciences. I do sometimes wonder if economics and the other social sciences shouldn’t adopt meteorological type models, that is, tracking historical variables, then using those historical records to compute a probability estimate of the outcomes in a given situation. But I suspect it’s more complicated than I’m imagining.
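
      To make the meteorological analogy a little more concrete, here’s a minimal sketch of the kind of model I’m imagining (the indicators and data are entirely hypothetical):

      ```python
      import numpy as np

      def analog_forecast(history, outcomes, current, k=10):
          """Crude 'meteorological' estimate: find the k past situations most
          similar to current conditions, and report how often the outcome
          followed. history: (n, d) array of past indicator readings;
          outcomes: (n,) array of 0/1 flags (e.g., recession within a year)."""
          distances = np.linalg.norm(history - current, axis=1)
          nearest = np.argsort(distances)[:k]
          return outcomes[nearest].mean()

      # Hypothetical toy data: 200 past quarters, 3 indicators each.
      rng = np.random.default_rng(0)
      history = rng.normal(size=(200, 3))
      outcomes = (history[:, 0] + rng.normal(scale=0.5, size=200) > 1).astype(int)
      print(analog_forecast(history, outcomes, current=np.array([1.2, 0.1, -0.3])))
      ```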

      The reproducibility crisis is a real issue, but it isn’t just in psychology; it’s also in medicine and other fields, including hard sciences (http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970?WT.mc_id=SFB_NNEWS_1508_RHBox). But I perceive that these issues are methodological rather than axiomatic (at least in the absence of outright fraud, which is rare). Studies with larger sample sizes and stronger statistical significance have a much higher chance of being replicable. A large part of this seems to be about the standard used for concluding that an observed effect is statistically significant.
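
      The sample size and significance threshold point can be illustrated with a toy simulation (a sketch assuming a modest true effect; all the numbers are made up):

      ```python
      import numpy as np
      from scipy import stats

      def replication_rate(n, alpha, effect=0.3, trials=2000):
          """Simulate pairs of studies of a true effect (in standard deviations).
          Among original studies significant at `alpha`, return the fraction
          whose direct replication is also significant."""
          rng = np.random.default_rng(0)
          significant, replicated = 0, 0
          for _ in range(trials):
              original = rng.normal(effect, 1, n)
              replication = rng.normal(effect, 1, n)
              if stats.ttest_1samp(original, 0).pvalue < alpha:
                  significant += 1
                  replicated += stats.ttest_1samp(replication, 0).pvalue < alpha
          return replicated / significant

      # Larger samples and stricter thresholds raise the replication rate.
      print(replication_rate(n=20, alpha=0.05))    # small study, lax threshold
      print(replication_rate(n=200, alpha=0.005))  # large study, strict threshold
      ```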

      I’ll give that CrashCourse video a watch when I get a chance. (My bandwidth where I’m typing this isn’t conducive to it.)

      Liked by 1 person

  13. Much appreciated Mike! Not only was that not a miserly response, but you didn’t even shoot me down! You have given me more to say here however.

    One is that I don’t consider human consciousness to be all that special. I believe that all functional conscious life must harbor a motivation input that constitutes the personal value of its existence, along with senses and memory, and that these inputs are interpreted for the construction of scenarios about what to do. This should occur in mammals, birds, reptiles, fish, and even insects if any of them are conscious. Of course we have far more complex minds than the rest of them do, though otherwise the only functional difference as far as I can tell, is that we evolved a second mode of thought, or a mode that uses formal languages such as English and mathematics.

    For many years I took the position that you’ve just mentioned, or that a conscious input can’t exist unless it’s interpreted by the conscious processor. This made things work out a bit more symmetrically for my models. Nevertheless maybe six months ago at your site some issues came up that made me rethink this. From then on I stopped saying that effective interpretation must occur in order to feel bad or good. Such punishment/reward must have been an independent spandrel of certain non-conscious processes, though evolution ended up using it to motivate the second form of computer by which we experience existence. (Massimo recently put up an article on his site which mentioned that octopuses seem to have evolved their consciousness independently from the rest of us. Perhaps so. Beyond octopuses, theoretically any conscious alien life would need to stick to the basics of my consciousness model as well, I think.)

    Speaking of Massimo and Ned Block, I went back to an interview of Ned at Scientia Salon from May 2015 to potentially get a better sense of what’s meant by higher order consciousness, or “access”. According to the comments I left, apparently I took Ned’s version of access consciousness to be the part of thought that constructs scenarios about what to do. Then there is his phenomenal consciousness, which seems more like “interpreting inputs”. I consider all functional conscious life to do both as mentioned, though obviously not to think by means of language. I wonder if this language element is what you mean by “meta-representations”?

    Massimo’s statement that H2O is different from water seems like a failure of epistemology, and specifically my first such principle. There are potentially only “useful” definitions, not “true” ones. But then Ned distinguishing between scientific and philosophical reductions seems quite dualistic.

    I like to be epistemologically responsible as you know. To me the only real difference between science and philosophy is that one has a respectable community with its own generally accepted understandings, while the other has a respectable community that has not yet developed such understandings. (So philosophy is like science before science was science.) I consider each profession to do the same sort of thing, or take what they think they know (evidence) and use this to test what they’re not so sure about (theory). (I’d actually like for the philosophy community to acknowledge this to be the only process by which anything conscious, consciously figures anything out, or my EP2.)

    Regarding the questions you’ve asked, I’ll give them a shot. I believe that a sensible answer does exist from which to decide welfare issues regarding a suffering terminal patient, and through my amoral form of ethics (ASTU). First we define the subject to be considered, whether an individual or group, and over a specific period of time. Theoretically what’s best for this subject will be the maximization of its utility over that duration. If a subject is defined to be nothing more than the suffering terminal patient before death, well that’s easy. Even if the patient doesn’t want to die, my theory suggests that death will be better for him/her, and to the exact magnitude that unhappiness overcomes happiness over that period.
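
    Here’s a toy sketch of the calculation I have in mind (the daily scores are of course hypothetical, and actually measuring them would be far harder):

    ```python
    # Toy ASTU accounting: utility as a signed sum of felt happiness (+)
    # and suffering (-) over the defined subject's defined duration.
    def net_utility(felt_samples):
        """felt_samples: per-interval scores, positive or negative."""
        return sum(felt_samples)

    # Hypothetical daily scores for a suffering terminal patient's final week.
    remaining_days = [-8, -9, -7, -9, -10, -6, -9]
    # -58: by this accounting, death now is "better" for the subject
    # by exactly that magnitude.
    print(net_utility(remaining_days))
    ```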

    We could (if we want to) say that the Ship of Theseus or a copied mind retain the same “identity”, just as we already say that I retain the same identity from second to second. But none of them are the same as the originals. Even I’m not the same from one moment to the next. Furthermore a copied me is most certainly not the same thing as I am. In order to be me this subject would need to physically be the exact same thing, and this means it must even occupy my exact place in time, space, and any other dimensions of existence. Here’s a simple demonstration of this. If he were standing next to me, but otherwise an exact copy, personally getting hit would hurt me rather than him. We wouldn’t be the same.

    I appreciate economics because it’s founded upon the same premise that founds my ideas. It feels validating to me that if anyone were to come up with a better explanation than the one that I’ve developed, then the science of economics itself would need to be so altered. This seems unlikely. Also I think it’s worth noting that the economics which is set up for individual people and companies does work pretty well, or the “micro” side. Most problematic seems to be the “macro” side that attempts to predict recessions and such, and I think mostly due to speculative information regarding full economies.

    Economics hasn’t been the catalyst for change that I’d like it to be, and I think because it’s a side science rather than a basic one such as psychology. As that Crash Course video from last time demonstrates, psychologists haven’t yet accepted my own “amoral” form of conscious motivation. Derek Parfit famously demonstrated how repugnant my position happens to be, and I do agree with him. My reply however is that reality itself can be quite repugnant, and so science is going to need to acknowledge these repugnancies rather than deny them.

    Liked by 1 person

    1. Thanks Eric. I fear this one might be a bit sparse as well, so I hope you can excuse it too.

      “From then I stopped saying that effective interpretation must occur in order to feel bad or good. Such punishment/reward must have been an independent spandrel of certain non-conscious processes,”
      It seems like you’re carving a third or middle position between a reflexive response to stimuli, which we agree doesn’t involve pain, and a conscious one. This might be the unconscious simulations I’ve mentioned before, although I’m less certain than before that those exist, or to be more precise, that action scenario simulations happen where we have no conscious awareness of the results, even if we never have access to the underlying details.

      For example, I might dislike someone because they remind me of an obnoxious person I knew years ago. The effect of that past acquaintance on my assessment of this new person is unconscious. I don’t know why I’m disliking them. But the result of that dislike, the feeling of it, is conscious.

      More related to pain, maybe my stomach is bothering me, but it never rises to the level of me being conscious of the discomfort. However, the discomfort affects my judgments, it factors into the simulations, but again, I’m only ever conscious of the results of the simulations, not the details, so I’m not aware of how my stomach discomfort is affecting me.

      (Of course, I can be conscious of the details of a simulation, but that’s a situation where I’m simulating the simulation, consciously thinking through the details, but I’m not aware of the details of the details.)

      Or did you mean something else by the middle ground you identified? (Or am I mistaken in perceiving that you meant a middle ground?)

      Peter Hankins recently highlighted a paper by Block on consciousness, where he describes his distinction between philosophical and scientific reduction, and another related one he makes between deflationist reductionists and inflationist reductionists. http://www.consciousentities.com/2017/08/issues/ I personally don’t buy these distinctions. Once you avoid being an eliminativist, the distinction he makes between deflationists and inflationists seems to amount to language usage, at least to me. And he includes people in the inflationist category that I think actually belong in the dualist one, which makes me suspect he’s trying to have his dualist cake and eat it too.

      “Theoretically what’s best for this subject will be the maximization of its utility over that duration.”
      But how is this not a value statement? How would you convince someone who rejects utilitarianism to accept it?

      “Even I’m not the same from one moment to the next.”
      The problem is that this seems to make the idea of being the same person meaningless. If I’m a different person when I’m sitting in my chair versus a moment later when I stand up, then the concept doesn’t seem useful. As soon as we add back elements to preserve identity over time, the picture becomes blurry for whether the copy is or isn’t the same person. If they have your deepest secret memories, it seems perverse to say they’re not you in at least some sense.

      I suspect when (if) mind copying becomes available, eventually there’s going to have to be a legal and/or social definition of what’s happening identity-wise when a mind is copied. Particularly when we consider all the things that typically go along with identity, such as citizenship, property ownership, marriage and other legal commitments, or legal responsibilities for actions prior to the copy. I don’t think science will be of any use whatsoever in answering those questions, which means different cultures may come up with radically different answers.

      I think economics, and as you noted specifically macroeconomics, is beset by ideologies which often ignore or downplay empirical results. No theory ever seems to completely die; they just seem to get resurrected as neo-theories. And there are always enough confounding variables to call into question any principle anyone dislikes, or retain a cherished one. It’s why people still argue whether government spending has an effect on the economy, despite the history of the economy in World War II. The fundamental problem is that people have a stake in what economics says, which makes it inescapably political.

      Repugnance also seems like a value statement. If everyone finds a proposition repugnant, does it make sense to still call it “good”?

      Liked by 1 person

  14. Mike,
    I’m providing yet another long one here, but don’t worry about being brief or slow. You haven’t been, and you needn’t worry about it regardless.

    First off I’ll say that we do seem pretty square with this “unconscious” idea, and perhaps somewhat through the work of Leonard Mlodinow. Personally I’m worried about the potential for this term to be conflated with my own “non-conscious” term, and so prefer “subconscious” or “subliminal” for it, but that’s my own thing given that I theorize two opposing forms of computer. (They oppose in the sense that existence is inconsequential for one but not for the other.)

    I’ll emphasize that this quasi conscious stuff seems incredibly pervasive in us. Beyond people who resemble someone we’ve known, about whom we might thus automatically think similar things, we constantly judge others based upon how they look, talk, act, and so on. We can be racist and countless things more, either consciously or not. Apparently this is because such heuristics were needed for us to more quickly and effectively assess each other. Furthermore let’s not forget about theory of mind — we think about how we’re thought of by others, and perhaps this isn’t always done through full conscious assessments. For example it could be that a given attractive woman tends to automatically assume that unattractive women feel threatened by her beauty.

    Regarding punishment/reward as a spandrel of certain non-conscious processes, apparently I wasn’t clear about what I meant. No I wasn’t referencing the middle ground between reflexive instinct and consciousness. Instead this is my explanation of how consciousness would have evolved given my model itself. So let’s try this from an evolutionary perspective.

    I believe that existence happens to be perfectly inconsequential for the vast majority of what exists. For this stuff existence will have no personal element, which is to say that regardless of what happens to it, nothing will be good or bad for it. Furthermore I believe that it’s possible for a computer to be structured such that it overcomes “the hard problem of consciousness”, and thus creates an entity for which existence can be good/bad. So do you agree that existence cannot be good or bad for most of what exists, like rocks and stars, though it is possible for a computer to create something for which existence is good/bad?

    If true, and if evolution functions teleonomically rather than teleologically, then consciousness must have come together in bits and pieces (rather than as a fully functional machine, such as what a god or a person might create). So before there was any functional consciousness, from my models evolution must have built non-conscious computers that also created what I consider to be reality’s most amazing stuff — personal value. Virtually inconsequential for the function of these initial non-conscious computers, this good and bad personal existence should have somehow been created somewhere inside these machines as spandrels. Then apparently they ended up getting fabricated into the conscious form of computer.

    You’ve asked how my theory is not a value statement. Well, it is a value statement. Hopefully my evolution speculation above makes this clear. I realize that it may seem strange that someone could admit that they’re talking about “value”, though still claim that this can be explored scientifically. The morality paradigm that I fight says otherwise. But let’s think about this logically. If existence can be valuable to something, then shouldn’t it be possible to find the source of this value? I believe I’ve done this, and just as in physics, the way to see if my theory happens to be useful is to test its implications out practically. Based upon my theory I believe that I can provide a reasonable answer for any hypothetical question regarding what’s good and bad. This is not standard utilitarianism in the sense that it’s neither moral nor immoral (I’d instead call it “real”), and because a specific subject must be defined in all cases.

    On “person” and “identity”, it seems pretty standard to define these terms such that the sentient subject does change over time. But with a copy of a person we are talking about creating a new sentient being who just happens to be similar to another person. If you are standing next to a good copy of you, then each of you will think that you are you, though only you will be you. Notice that only you would then feel what you feel, while this other person would feel what he feels. Your paths would diverge from the point of the copying. If my mind were copied and sent across space to live in another body somewhere, then this sentient being would have the memories that I do as well as a new future over there. I conversely would still be on Earth. So from this perspective the legalities shouldn’t be difficult. Just because I am copied I doubt we’d decide that this other person then has the right to my property and my wife. But what if I’m ill and would like to give him my property upon my death? Well sure. Of course as far as marriage goes, that would be the business of him and my former wife.

    I agree that practical economics is political. We’re all selfish and so try to get the policies that we favor. The theory in the discipline itself, however, is something that I have tremendous respect for. I think that the field could teach psychologists a great deal, that is if the morality paradigm were overcome well enough for this lesson to take.

    Regarding the repugnancy of my ideas, wouldn’t it be strange if someone were to figure out what essentially constitutes value for any given subject, and then with this understanding it also turned out that by promoting one subject’s good, this also promotes the good of all other subjects? I’d consider that quite strange. More plausible to me would be that we’d instead find various conflicts of interest between separate subjects, and thus that there would be natural repugnancies regarding the nature of what’s good and bad. So I believe that I’m providing a plausible rather than implausible position here.

    There are lots of “physical monists” out there who thus reject supernaturalism, though I’m not entirely sure who beyond me takes the next logical step. This is “epistemic monism”, or a position that rejects two kinds of stuff in respect to our understandings. So I don’t believe that there is a “science stuff” and a “philosophy stuff”, but rather two respectable communities, though one of them has developed its own generally accepted understandings, while the other has not. To emphasize this I’ll leave you my second principle of epistemology:

    There is only one process by which anything conscious, consciously figures anything out. It takes what it thinks it knows (evidence), and uses this to assess what it’s not so sure about (theory). As theory continues to remain consistent with evidence, it tends to become nothing more than believed.

    Liked by 1 person

    1. Eric,
      On the term “unconscious”, I fear you might be fighting a losing battle on that one. That term seems to be used pretty pervasively in psychological and neuroscience literature to refer to all the processing that is not available for introspection.

      One thing Mlodinow did clarify for me is the distinction between the modern understanding of the unconscious and the old Freudian one. Apparently Freud saw the unconscious as repressed processing, cognition that our mind didn’t want to acknowledge, but something that, with sufficient guided introspection, was discoverable.

      But the modern understanding is that it has more to do with the architecture of the brain and mind. Most processing isn’t available for introspection simply because the neural wiring between where it happens and where introspection happens simply doesn’t convey the information. That, and I’m sure the computational capacity of the introspection centers wouldn’t be adequate for modeling the entire brain.

      On existence being consequential, a lot depends on how you define “consequential”. Existence certainly isn’t consequential for a rock, or my car. But it certainly seems consequential for any living system, conscious or not. A single celled organism that has its homeostasis disrupted usually reacts to restore it. Plants strive to reach the sunlight. C. elegans worms strive to reach food. Often the reaction is proportional in magnitude to the homeostasis disruption. Existence mattering predates consciousness.

      Now, you might argue that there’s no comprehension with their reactions, and I agree. But then I’d point out that the comprehension of most animals, even conscious ones, is very limited. There was a recent Aeon article pointing out that animals have sex for pleasure, never for procreation, at least not consciously. A male bear copulating with a female bear has no understanding or caring that it will produce cubs. That applies to most things animals do consciously.

      The difference between conscious and non-conscious life is that existence is consciously consequential in the former and only in terms of survival reflexes in the latter. But existence seems consequential for everything alive.

      On the other hand, existence is inconsequential for the Mars Curiosity rover. Despite its semi-autonomous systems, it doesn’t care that it will eventually be left in a depleted non-functional state (dead) on Mars. This would be true even if it were capable of simulating courses of action as well as an animal. It simply wouldn’t have the instincts animals have for self preservation. It might “care” about staying functional, but only to fulfill its mission, not as a purpose in and of itself.

      On scientifically determining value, what would you see as an example of an experiment that could measure it? Certainly, we can do experiments once we’ve identified a value, but what experiment can we perform to determine whether we should hold that value, except in relation to yet another value?

      Your arguments about identity seem like philosophical ones. Again, what experiment could be designed to determine whether a copied mind had a new identity or was simply the old one? If I simply disagree and insist that the copy is the same mind, what scientific measurement could prove me wrong?

      On repugnancies, I’m afraid I didn’t follow your line of reasoning. The problem, as I see it, is repugnance is often a very subjective thing. I find Donald Trump repugnant, but my neighbor sees him as their only hope. We’d agree that a dead disemboweled rat in our yard is repugnant, for very good evolutionary reasons, but I’m not sure how you could scientifically broker our disagreement on Trump.

      I think we’ve talked about this before, but your second principle seems like a restatement of Bayes’ theorem. Sean Carroll describes this as assessing new information in terms of our prior credences, our existing galaxy of beliefs.
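
      For reference, the update rule Carroll describes is just Bayes’ theorem applied to credences. A minimal sketch, with made-up numbers:

      ```python
      def bayes_update(prior, p_e_given_h, p_e_given_not_h):
          """Posterior credence in hypothesis H after seeing evidence E."""
          numerator = p_e_given_h * prior
          return numerator / (numerator + p_e_given_not_h * (1 - prior))

      # Made-up numbers: a theory held at 30% credence, with evidence three
      # times likelier if the theory is true than if it's false.
      print(bayes_update(prior=0.3, p_e_given_h=0.6, p_e_given_not_h=0.2))  # ~0.56
      ```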

      I think when considering the scope of science, we have to be scientific about it, and accept that there are some domains where no one has been able to find a way to scientifically measure the propositions. In my mind, the onus is on anyone asserting that they can be scientifically measured, to demonstrate it by either doing the measurement, or at least describing how it could be done.

      In my experience, when someone attempts to do that in terms of values, they’re always sneaking in (usually without realizing it) another value as an implied axiom. No one denies that we can do science in pursuit of a particular value. The question is whether science can determine which values we should pursue.

      Liked by 1 person

  15. Mike,
    I’m detecting a good bit of negativity from you with this one. Might there be anything about the positions I’ve stated which are personal for you? I realize that I’m putting you in a “devil’s advocate” spot, and that psychological studies do show that such roles tend to be adopted. But can you imagine why you might be extra sensitive regarding the positions that I’ve mentioned? If not then I’d hope that you’d want me to be able to effectively defend whatever arguments I have that are good ones. Maybe I can and maybe I can’t, but regardless we should each need reasonable objectivity to get anywhere with this.

    Regarding “unconscious”, I don’t consider it my role to figure out the terms and beliefs that modern psychologists and neuroscientists accept, and then make sure that I do so as well. There are plenty of students that do exactly this, which I’ll leave to them. Note that people like me who aren’t products of the system may be able to spot certain faults that should be less visible from within.

    If we go “by the book” I perceive that the human brain essentially concerns consciousness, though it’s also noted that some things occur that aren’t entirely conscious, or “unconscious”. We’ve each enjoyed Leonard Mlodinow’s video demonstration of this for example. Furthermore you’ve mentioned that modern scientists today even take “unconscious” to mean all processes that aren’t available for introspection. Right.

    Conversely my view is that zero percent of the human brain is conscious — not one neuron of it. I consider it to harbor a vast supercomputer, and that there’s a by-product of this computer that creates a fundamentally different kind of computer by which we experience existence. My guess is that conscious processing constitutes less than one thousandth of one percent of the magnitude of what’s non-consciously processed. So with this perspective, yes I do have a problem using the term “unconscious” for both “quasi” and “non” states of conscious function. If I were trying to change our understandings of basic physics, then I would consider this futile. But until mental dynamics get reasonably sorted out (which I suspect will occur during our lifetimes) it seems premature to say that I’m fighting a losing battle. Note that some scientists today do use the “non-conscious” term. Perhaps they’re tired of having their “non” being misinterpreted as “quasi”?

    “On existence being consequential, a lot depends on how you define “consequential”.”

    I agree Mike, so it would’ve been nice if I could have stopped you right there. I actually meant “personally consequential” rather than just “consequential”. Still I have to say that I don’t see how existence isn’t consequential for a rock, or anything else that’s physical. It’s all causal stuff and therefore there must be consequences associated with such existence — a rock falls when it’s dropped, for example. And then I don’t see why you’re marking a “consequential” distinction between what lives and what does not. This seems like the position of our good friend Ed Gibney over at the Evolutionary Philosophy blog. Bringing this up now seems just a bit suspicious. I can’t remember you mentioning this before.

    My own position actually concerns what you said at the tail end of that as “…purpose in and of itself”. This concerns teleological rather than teleonomical existence, and I believe that it’s widely considered to concern conscious existence rather than living existence. If you have any questions regarding my scenario of how consciousness evolved given this clarification, I will do my best to address them.

    Perhaps “value” can effectively be analogized with the concept of “weight”. We can weigh things in terms of other things by means of the two sides of a scale — rocks versus water for example. Similarly we can assess values in terms of other values, such as “fun” versus “honesty”, or even “a car” versus “money”. Apparently you’re telling me that we can only relate values against other values — that there can be no basic “value”.

    Just as we don’t need to only relate how things weigh against each other, we don’t need to only relate values against other values. Observe that we can take the concept of “weight” itself as a stand alone idea, and even define it as a function of the mass of the Earth and an associated object. Thus “value” should be possible to take as a stand alone concept from principles of its own. I theorize this stuff as a product of the non-conscious mind that causes existence to be good/bad rather than personally inconsequential. So if there were no non-conscious minds producing value, then theoretically nothing would be good or bad for anything anywhere. I consider value to be a natural idea, though certainly a special aspect of reality.
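
    In Newton’s terms, here’s a quick sketch of what I mean by defining weight from principles of its own rather than only by comparison (standard constants, rounded):

    ```python
    # Weight defined from first principles (Newton's gravitation), rather
    # than only by weighing one thing against another on a balance scale.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of the Earth, kg
    R_EARTH = 6.371e6    # radius of the Earth, m

    def weight(mass_kg):
        """Gravitational force in newtons on an object at Earth's surface."""
        return G * M_EARTH * mass_kg / R_EARTH**2

    print(weight(1.0))   # ~9.8 N per kilogram
    ```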

    With an accepted conception of value we should be able to scientifically measure it, as you’ve mentioned. Today we can ask people how they feel given something like an electric shock, and their responses provide us with measurements associated with the value of their existence. We’ll surely have more objective ways to measure value some day.

    Here’s a simple experiment to demonstrate whether or not a copied mind has a new identity as I define the term, or is simply the old one: We begin by putting a man and his copy in separate rooms. Note that if the copy can feel good or bad, then he happens to be a sentient subject. But if he isn’t able to feel whatever the original man feels, then it seems effective to say that he has a new identity rather than the original man’s identity, even though he may be very similar to the man that was copied. So I consider sentience to be a useful way to define identity. Of course you’re free to define “identity” such that these men have the same one if you like — there’s only potential usefulness rather than truth here.

    On repugnancies and the separate politics that you and your neighbor have, my theory is not really about values, and certainly not about helping people who have different values reconcile them. That seems more like the morality paradigm, which I oppose.

    I don’t consider Bayes’ theorem to be a full theory regarding the nature of human beliefs, as mine is, but rather a very useful statistical heuristic. The two certainly seem different as far as I can tell. A philosopher once claimed that he could think of lots of ways other than my EP2 to figure things out, though in the end he couldn’t provide a single example. I find violations of this principle often enough, and certainly when it’s decided that the questions that philosophers ask are fundamentally different from the ones that scientists ask.


    1. Eric,
      If there is anything about your positions that is personal for me, I’m not conscious of it. Of course, in a thread where one of our subjects is the unconscious, it seems germane to admit that I can’t eliminate the possibility of unconscious biases of some sort. But as far as I can determine, the negativity you detect is me simply pointing out issues I see with your ideas. Re-reading my last response, there were so many of them that I can see how you might interpret it that way, but it was meant to be conversational, not adversarial.

      On the “losing battle” remark, my only point was about the alternate terminology you were defining and that terms like “unconscious” seem pretty baked in. If you want to have precise meanings (admittedly terms like “subconscious” and the like are ambiguous), I think your best bet is to come out with completely new terms different enough from the traditional ones that misunderstanding is minimized. (Admittedly, it’s difficult to eliminate entirely due to the limitations of language.)

      On the consequential discussion, I did bring up the idea that it predates consciousness in our early discussions within the context of Damasio’s concept of biological value. You didn’t seem to see the similarities back then, but I’ve never stopped seeing them.

      But I’ll grant that there are three distinct conditions here. The first, that of a rock, doesn’t change its behavior under varying conditions. A rock does whatever it is a rock is going to do regardless of whether it encounters external forces conducive to it maintaining its current form or not. This is also true of storm systems and stars. A star doesn’t avoid a black hole to avoid being eaten.

      The second are systems that change their behavior based on certain stimuli. So a single-celled organism that encounters a chemical gradient between noxious and nutritious chemicals will swim toward the nutritious chemicals and away from the noxious ones. It has preferences. Those preferences aren’t conscious of course, just reflexive programming, but they’re there.

      The third, the one I think you’re referring to, are systems that can predict the results of various actions and have feelings about states that promote or threaten their homeostasis and gene propagation. But consciousness, the prediction engine, exists in service to the same biological value drives that exist in the second system.
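
      To caricature the difference in code (a toy sketch only, with the “valence” scoring function entirely invented for illustration): the second kind of system maps stimulus straight to response, while the third imagines outcomes before acting.

          # Toy contrast between the second and third conditions (pure caricature).
          def reflexive_agent(gradient):
              # Second condition: stimulus maps directly to response; no lookahead.
              return "advance" if gradient > 0 else "retreat"

          def predictive_agent(gradient, simulate):
              # Third condition: candidate actions are imagined first, and the one
              # whose predicted outcome "feels" best is chosen.
              actions = ("advance", "retreat", "wait")
              return max(actions, key=lambda action: simulate(action, gradient))

          # A crude stand-in for feeling: score each imagined outcome by valence.
          valence = lambda action, g: {"advance": g, "retreat": -g, "wait": 0.0}[action]

          print(reflexive_agent(0.5))            # advance
          print(predictive_agent(0.5, valence))  # advance -- same act, different route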

      I’m not seeing the comparison between value and weight. Weight is a measurable objective attribute. It may vary on different planets or in space, but it’s based on a more universal property, mass, which is an objective property of matter, regardless of which society it exists in.

      But value is a very different matter. It ultimately has no objective existence. My undergraduate degree is in accounting, where asset valuation can only be assessed by market value, by what the company paid for the asset on the open market at a certain point in time. And a principle of business management is that the value of goods and services is determined by what customers are willing to pay. Economics recognizes that there are no true “proper” prices, only what is set through supply and demand.

      Ethical values are the same way. Their only objective existence is what a society will agree to value at a certain point in time. In the 18th century, slavery was acceptable to all but a few fringe abolitionists, and no one really questioned the idea that women’s options should be more restricted than men’s, positions most people today would see as abhorrent. I’m not sure what experiment someone in 1777 could have performed that would have revealed the folly of the values they held back then.

      On Bayes’ theorem, you might want to read Sean Carroll’s book, ‘The Big Picture’. He has an extended discussion about epistemology and Bayes’ role in it, one in which I think you’d see considerable resonance with your ideas.

      Although perhaps not. I seem to see a lot of similarities between your ideas and others that are already out there, similarities you don’t seem to perceive. You asked me above if I had any personal bias regarding these ideas. I don’t think I do, but I wonder if you’d be open to the possibility that you have a personal stake in not recognizing the similarities between existing ideas and your own ideas that you see as groundbreaking.

      One of my motivations for continually bringing up what I perceive to be similarities is to get you to think about them, and if there actually are aspects that set your ideas apart, to get you to identify them and maybe alter your descriptions to emphasize them.


  16. That’s good to hear Mike. Of course we all have preconceptions, since without them we’d have to continually reinvent the wheel. Hopefully you and I have pretty good ones. Some people seem not to. And then as for biases that are or aren’t conscious, well it should at least be good for us to keep them in mind as potential problems.

    For distinct terminology I could take things this way: “conscious”, “non-conscious”, and then use a middle ground of “quasi-conscious”. Maybe.

    So Damasio was actually the culprit with his “biological value” concept? No, that didn’t make it into my memory. I guess the first of his three classifications concerns something that doesn’t change its behavior under various conditions — the rock doesn’t evade the hammer or have anything like senses. Then the classification that does tend to react to its environment to preserve homeostasis in some regard is “life”. Of course robots can theoretically do this as well, which would either make them “alive”, or else life wouldn’t be the critical element of this classification. Then finally there is consciousness. Hmm. Do you consider him to have good reasons for using these particular distinctions?

    My own schema works like this: First existence functioned “mechanically”, which is to say that it didn’t take inputs and process them algorithmically for outputs. (Sort of like his first.) Then secondly, with the replication associated with “life” (and I don’t define the term), the first computers would have emerged, processing inputs algorithmically for outputs by means of their genetic material. (Sort of like his second.) Apparently during the Cambrian some life went beyond its programming at the genetic level and so developed central organism processors as well. Like all computers these functioned “mentally” as well as “mechanically”. Then possibly still in the Cambrian some of these central processors got so complex that they started producing a strange new spandrel.

    Before this spandrel, everything was inconsequential to itself and to everything else. Regardless of any “senses” that were processed for output, nothing mattered to any creature, whether it was injured, died, procreated, or anything else. Existence was “valueless” here. Nevertheless some of these non-conscious minds began producing something that did harbor value, or was sentient, or felt good/bad. Though a spandrel initially, this stuff must have eventually gotten fabricated into a new kind of functional computer, or the conscious mind. (Okay that’s kind of like his third as well.)

    I’m saying that this value/sentience is ultimately like gravity — a physical and quantifiable aspect of reality. I consider it quite strange. Furthermore I mean for an amoral form of ethics to be developed based upon it, given that it’s all that’s good/bad for the human or for anything else. It is “value” at its most primal level, though this could be expanded to the economic ideas that you’ve mentioned (which probably isn’t helpful right now). Without this stuff existence is personally inconsequential, though with it existence can be anything from horrible to wonderful.

    Okay, so Damasio did do something kind of similar. I must have been put off by his notion that life itself is “valuable”. I’m a bit sensitive about that because I use this notion of value as a physical property of nature that’s quite instrumental to my theory. Life doesn’t inherently have any value, though a sufficiently complex computer can produce it.

    Let me ask you a recent question again. Do you agree that existence cannot be good or bad in a personal sense for most of what exists (which includes both Damasio’s first and second classifications), though it is possible for a computer to create something for which existence is personally good/bad?

    (This is kind of like physicists theorizing a particle that they’ve never detected directly, but believing that it exists because it makes lots of other models that they do have evidence for work out. But it seems to me that we do have evidence for this particular aspect of reality. We actually feel its existence.)

    I believe, at least, that I’ve consciously wanted to find connections with other theorists. I’ve certainly enjoyed consistency with the work of Jaak Panksepp and various others here and there. Furthermore there’s the entire premise of economics. I’ll grab that Sean Carroll book and take a look.

    In truth while developing my theory I simply used my two principles of epistemology to help keep my own ideas straight, not as something that I thought that I’d ever feature. I figured that they must already be well accepted ideas in general. Then while blogging I started to see all sorts of instances where these principles were violated, and so began quoting them as my own. But if Sean Carroll or someone else is able to take credit, I would like that. I think they’re needed today in general, and would even help others understand my ideas.


    1. Eric,
      Just to clarify, the biological value concept comes from Damasio, but the three conditions thing was something I came up with to find common ground between his and your view. If I recall correctly, Damasio characterizes consciousness as one of many mechanisms to maximize biological value.

      I think the difference between his and your view (and honestly my view is closer to his, so between your view and mine) is whether existence is consequential for, say, a bacterium. There are two ways to approach this question. In one, the fact that the bacterium changes its behavior to bring about certain states means those states matter to the bacterium.

      But in the other, the fact that the bacterium never predicts any consequences means that states are not consequential for it. What blurs things is that most conscious creatures can only predict their own reflexive reactions, such as most animals having sex for pleasure rather than procreation. So how much comprehension is necessary to trigger the second way (your way) of looking at this?

      I’m curious why you think consciousness was a spandrel, a non-adaptive byproduct of another adaptation. What was it a spandrel of?

      Myself, I think consciousness evolved gradually, with no bright line between pre-conscious and conscious life. In my view, there was never a first conscious animal, just as there was never a first human, only gradual morphing of what was there before into what we now recognize today.

      “Do you agree that existence cannot be good or bad in a personal sense for most of what exists”
      I think it depends on what you mean by “personal” here. If you mean “conscious”, then I do agree. But if the “personal” word is removed, does it change your proposition?

      “though it is possible for a computer to create something for which existence is personally good/bad?”
      I think ultimately, anything that biology can do will eventually be possible with technology, since it’s all ultimately physics. But most machine intelligences won’t be programmed to maximize biological value, only whatever values they’ve been engineered for.

      I don’t think Sean Carroll would consider himself the originator of what he describes, but only someone relaying existing principles, so I don’t think he’s seeking credit. He does seem interested in promoting an outlook he calls “poetic naturalism”, which for me amounts to non-eliminativist reductionism, a viewpoint that resonates pretty well with me.

      That said, like with anyone, I don’t agree with Carroll on everything. He wrote an essay a few years ago questioning the value of falsifiability, an essay that I felt was unfortunate. (To be fair, his views were nuanced and charitable interpretations of his essay are possible.)


  17. Mike,
    I’ve asked recently whether your negativity towards my consciousness theory might stem from personal implications rather than the more objective sort of assessments that we try to maintain. You doubted this to be the case, but have conceded its possibility. I may now have found a good candidate for this, however. On the surface it’s not too damning, since I consider this tendency to be extremely prevalent in academia today. I’m talking about treating our terms as concepts that need to be “discovered” rather than “defined”. I’m talking about violations of my first principle of epistemology.

    No one explicitly challenges my EP1, and I suspect that’s because it’s effectively tautological. An untrue definition is just as contradictory as a married bachelor. But even though we acknowledge, when the question is plainly asked, that there can be no true definitions, in practice something else seems to happen. Observe that our encyclopedias write about what “time” is, for example. The same can be said for “space”, “life”, “consciousness” and so on. Though everyone agrees that terms are simply fabricated, in practice we look for which definitions are true and which definitions are false.

    This harms us in two related ways, I think. The first is communication. For example, perhaps the primary reason that you haven’t quite been able to understand how my consciousness model works is that you’ve been relating its points back to your own framework of consciousness. Then the other way that implicit conceptions of “true” rather than “useful” terms seem to hurt us is that they discourage theorists from adventurously using somewhat different definitions to make potentially useful points. For the theorist, words are essentially tools from which ideas are built, and sometimes existing tools just aren’t sufficient.

    Now that you’re more openly displaying the consciousness model that serves your purposes, I do hope that we’ll be able to acknowledge that these are two separate definitions that were developed by theorists who intend to say two separate things. Mine came after my amoral form of ethics was developed, and so was a continuation of that theory. Your theory seems to concern function more. For example, from your definition consciousness gradually emerged the more functional it got, thus rendering it impossible to exist as a spandrel. Conversely, with my ethical premise, if we were to add something to a regular computer that caused it horrible pain, though the computer functioned no differently than before, then whatever we added would by definition constitute consciousness. In order to potentially understand how your models work, I must substitute your definitions for my own when considering your models. I hope that I’ll soon be able to understand them virtually as well as you do, and so be able to offer effective assessments. I’d love for you to be able to do this for me as well.

    On to some of your points:

    So Damasio didn’t quite have the three ideas that you’ve mentioned? Well okay. They do seem to have helped me understand your position better however.

    I have no problem saying that existence “matters” to a bacterium in a functional sense rather than a (personally) consequential sense. (Existence also matters to it in a “consequential” sense as I see it, given that it’s causal.) So perhaps you, I, and Damasio are square with that?

    I consider the other way a bit problematic, if I understand you there. The bacterium is not non-conscious just because it doesn’t predict, but rather because it presumably has no traits associated with my consciousness model. A bear is not conscious simply to the extent that it can predict, such as that sex brings offspring, but it most certainly is conscious as I define the term, given that sex gives it pleasure.

    “I’m curious why you think consciousness was a spandrel, a non-adaptive byproduct of another adaptation. What was it a spandrel of?”

    Hopefully we’re now clear that from my definition of consciousness, it’s certainly possible for it to be a spandrel, unlike from your functional definition. Furthermore I suspect that consciousness began this way because evolution functions randomly rather than teleologically — a non-functional consciousness is virtually assured at the beginning here. So I’m not claiming that some specific adaptive trait would be expected to produce this (initially) non-adaptive trait. Instead we’d expect non-functional consciousness rather than a fully effective machine.

    You’ve asked if the “personal” idea is necessary for good and bad as I define the concept. Yes, it’s essential. We each agree that trees aren’t conscious, for example. Here we might say that getting no water is “bad” for a tree, though only by projecting purpose as we see it onto the tree, I think. I don’t believe that dying and death are “personally” bad for a tree. If it contained a computer that manufactured pain regarding such circumstances however, then things would get personal as I define the term.

    Can you give me an example of your non-eliminativist reductionism? Sounds interesting.


    1. Eric,
      I may use language such as X is Y, but bear in mind that my theory of truth is much more pragmatic than yours. In other words, truth for me is about what works, not some metaphysical forever unknowable thing “out there”. In the end, I think we’re both ultimately instrumentalists. Theories for us are tools ultimately to make predictions about future observations. And “reality” is simply another theory, another model, for making predictions. In the end, the models are all we actually have.

      On your definition of consciousness, I think I do understand it. You describe it as an organism (or other system) feeling good or bad about existence, and you posit that it exists to give an organism more flexibility than is possible with a straight stimulus-reflex driven system. I agree with this, as far as it goes. But I find it unsatisfying, at least for my purposes (which as you note, are not quite the same as yours).

      I want to understand the mechanism of feeling good or bad, and how it relates to the overall workings of the mind. In my understanding, we feel because it’s a necessary input to the predictive simulation engine. No simulation engine, no need for feeling. That’s what I was saying about the bacterium. It doesn’t predict, so it has no need to feel. (Well, it might be better off if it could feel and predict, but it doesn’t have the substrate for it.)

      If you say that the spandrel is the ability to feel without the ability to predict, this gets into what feelings actually are. I see them as communication of the organism’s reflexive reaction to give higher level systems a chance to inhibit them. They’re inputs into a mechanism to select which reflexes should be allowed and which inhibited for any particular situation. In its earliest incarnations, probably to do things like help a fish determine whether it should go for that food near something dangerous.

      To me, talking about feeling without prediction is a bit like talking about data input without a computer. Either concept seems incoherent without the other, an incomplete concept, more properties of the same overall system rather than independent entities. So in my view, the evolution of feelings is the evolution of prediction. They are yin and yang to each other, two sides of the same coin. Unless of course I’m missing something 🙂

      For non-eliminativist reductionism, consider the typical example of a table. A table is composed of atoms, which are mostly composed of empty space. Does the table exist? An eliminativist might insist it’s an illusion, that there is no table, only a collection of quantum field excitations and interactions of the fundamental forces. Of course, virtually no one actually asserts that a table doesn’t exist. And no one can operationally be an eliminativist about everything. (At least unless you’re a “mad dog” reductionist like Alex Rosenberg.) Getting through the day requires accepting the pragmatic existence of things like clothes, cars, houses, etc.

      But it’s not unusual for people to be selectively eliminativist. An example is people who say that consciousness doesn’t exist, because it’s reducible to other components. I’m totally on board with the reductionist aspect of this, but not with concluding that because we can reduce something, the higher level entity doesn’t exist. I think the structure, the arrangement, the patterns of those lower level components are significant, that they are what makes something synergistically more than its parts.

      I’ve also seen people over the years assert that religion, biological life, or volition don’t exist. Or that psychology should be dispensed with in favor of biology and neuroscience. For an instrumentalist, this amounts to throwing away useful tools. I’m not inclined to do that unless someone clearly demonstrates that they’re useless or detrimental.

      Carroll describes this acceptance of higher level abstractions being useful pragmatic concepts as “poetic naturalism”. I’m not sure about the poetic part, but I agree with the philosophy.


  18. Mike,
    Yes, well said. For me “truth” is that “metaphysical forever unknowable thing ‘out there’” rather than just “what works”. I require my terms to exist as tools which are sharp, so when I say that something is “true”, there will be no “maybe” about it. Of course there must be ontological truth regarding reality, not that we idiot humans can have such certainties. Perhaps I’d forget this if I let “truth” exist as “what works”? Still, I must accept your definitions in order to potentially understand the points that you make. I would nevertheless encourage you to leave absolute terms absolute, and then soften them up as needed with other terms. For example we could say “seems true”. There are certainly times where even we humans need our terms absolute, so to me it seems like a shame to degrade them.

    The point that I was actually making, however, concerns us speaking of “discovering” what terms like consciousness actually mean. I can see that you don’t consciously think that you have a problem with this, and perhaps you don’t. But there would still be the quasi-conscious to consider here. If everyone speaks of consciousness by means of what it “is”, it should be difficult to keep this flawed perspective from bleeding in. Whether for time, space, life, gravity or any other term, definition is needed rather than discovery. Unfortunately the field of epistemology hasn’t yet straightened science out in this regard.

    It seems to me that you understand my model of consciousness in a broad conceptual sense, somewhat like what we’d expect a physics student to understand through a lecture. But for an effective practical understanding the student needs the experience of answering associated questions, or the nitty gritty details. That’s where true learning seems to occur, or at least for me. Today you’re able to ask me insightful questions about my models, which is good, but without also being able to predict my responses. The better you get at answering such questions as I would, the more you should understand how my models work. I’ll need you to comprehend much better than you currently seem to if you are to effectively evaluate the merits and flaws of the positions that I hold.

    In this effort, understand that I’m making a distinction between “functional existence” and “personal existence”. We anthropocentrically refer to various things as “functional”, which is fine since this can be useful for us. But I use “personal existence” as something else entirely: a punishment/reward dynamic. I use it to found a “real” rather than “moral” form of ethics, for example, and to serve as the premise behind teleological existence.

    I’m sure you agree that correlation does not get us causation. Though feelings may be correlated with prediction, the first is an input which incites the second as I define these terms. So from here they must be distinct ideas that may or may not exist together. Under my models they simply aren’t two sides of the same coin.

    On your non-eliminativist reductionism, doesn’t my EP1 address that sort of thing? It seems to me that “table” is just a humanly fabricated term. Define it (and all others) however you like. (And even then I’d hope for us to say that the table only “seems to exist” rather than “exists”.)

    On a personal note, how have you been doing with this new hurricane? You were a bit too close for my own comfort. Watching what’s happening in Houston must be bringing you back to your own flooded community last year.


    1. Eric,
      On truth, I think the issue for me is that we never have absolute knowledge of truth. Even tautologies such as “all bachelors are single” require that our knowledge of the concepts of “bachelor” and “single” be correct. If that’s the case, then using your definition, we can never, or almost never, use the word “true” except after “seems”. To me, this removes any utility from the word “true”. It becomes redundant, since if I use the word “seems”, I never bother with the word “true”.

      I think for propositions where our certitude is reasonably high (say >95%, although the threshold is context dependent), saying something “is” or “is true” is a reasonable use of language. Such statements must be considered provisional, subject to change based on future observations. But as long as we understand that epistemic limitation (and it’s good to periodically remind ourselves about it), we can responsibly say something is something else when we have grounds for it.

      On discovering meaning, I think there are definitions that we do discover. For example, the insight that a heart is a pump, that lungs are gas exchange systems, that small intestines are a chemical extraction system, or that the brain is a computational system — these are all essentially definitional meanings of these organs that we had to discover. In the same sentiment, I can define a brain as an antenna, as many spiritualists want to do, but it won’t be a useful definition, except in terms of new age rhetoric.

      Coming back to consciousness, we have a range of intuitive concepts we mean when we use the word “consciousness”, such as arousal, responsiveness, awareness of surroundings, self awareness, subjective experience, inner experience, imagination, etc. As I’ve said before, I doubt there will ever be one precise scientific definition that meets all those intuitive meanings. It’s why I resist simple definitions of consciousness. If they’re simple and accurate, they’re usually little more than synonyms of one of the intuitive concepts, offering limited insight into the subject, at least for my purposes.

      Correlation does not prove causation, but consistent correlation implies a tight relationship, possibly the equivalence I’m envisioning. Can you think of examples of feeling that have no predictive aspects?

      On the hurricane, thanks for asking, but we’re good. The Houston coverage did dredge up memories of our floods; given that most people here are still recovering, it isn’t far from the surface yet. But all we got here was steady rain for several days. I know people to the west who were more affected, but we largely dodged the bullet, this time. Everyone is keeping an eye on hurricane Irma in the Atlantic, though, something gulf coast and east coast residents are used to doing this time of year.


  19. Mike,
    I suppose that I’m going to remain more worried about these epistemological issues than you. But if most were to have the responsibility that I consider you to have, then I don’t believe that our situation would have nearly the urgency that I see in general today. I believe that the discipline of epistemology can, must, and will develop a community with its own generally accepted understandings, and that this will help scientific endeavors in general — especially on the soft side. I hope to live to see this.

    You’ve mentioned intuitive conceptions of consciousness as “arousal, responsiveness, awareness of surroundings, self awareness, subjective experience, inner experience, imagination, etc.” I don’t know of any (sensible) intuitive notion of consciousness that my own models do not address. If you suspect my models to have some weaknesses regarding them, just let me know and we’ll take a look. Of course my conception of consciousness isn’t actually very simple (except perhaps for me!). You might have only been talking about simple models however.

    “Can you think of examples of feeling that have no predictive aspects?”

    Well perhaps. To begin, I designate “feeling” as motivation input which drives the thought processor that both interprets inputs and constructs scenarios (prediction) in the quest to feel good and not feel bad. For a simple example, the feeling of a hurt finger is interpreted (“ouch”), which motivates scenarios about why this is happening so that such punishment might be alleviated, or indeed, to serve as a lesson through the memory input so that the finger might receive more protection in the future. But it seems to me that these are two separate dynamics — one provides input to the processor, and the other provides prediction given such input. Couldn’t there be input to the conscious processor, when it doesn’t inherently predict anything?

    If I were being electrocuted to death, but remained conscious for five minutes of it, I presume that this would feel very bad to me. But I also can’t imagine being able to effectively predict anything while I was experiencing such an extreme sensation. I don’t believe that I’d even have the cognitive capacity to predict “I’m going to die”. And if I were to instead survive such an ordeal, I should have strong memories of the conscious aspects of this experience. Looking back upon it, I doubt that I’d remember thinking of anything at all beyond how horrible it felt.

    I theorize for my ASTU that we’ll some day have a measurement scale of the good/bad that a given sentient being experiences each moment. If the units on the positive side end at 100, how far shall they be taken on the negative side? Well, perhaps 100,000. This is to say that perhaps the greatest pleasurable experience possible is 1000 times weaker than the most horrible experience possible. Thus if a person were to experience one hour of perfect torture, it would take 1000 hours of perfect pleasure to neutralize the experience. Over those 1001 hours, on the whole existence would be neither good nor bad.
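
    To make the bookkeeping concrete (the units being nothing more than my guesses, of course), a toy tally:

        # Toy tally of the guessed 1000:1 asymmetry between pain and pleasure.
        MAX_PLEASURE_PER_HOUR = 100       # best possible moment, positive scale
        MAX_PAIN_PER_HOUR = -100_000      # worst possible moment, negative scale

        total = 1 * MAX_PAIN_PER_HOUR + 1000 * MAX_PLEASURE_PER_HOUR
        print(total)  # 0 -- over those 1001 hours, existence sums to neither good nor bad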

    Why would evolution make pain for the human so potentially extreme, even to the point where it circumvents the prediction engine? I doubt this was a spandrel. It must be that the human kept trying to “beat pain”, and so the people who felt it most acutely tended to pass their genes on a bit better. It must be that the most extreme levels of pain, where we can no longer even predict, were adaptive. These experiences must have taught us to protect ourselves better, given the memory input to the conscious processor.


    1. Eric,
      Something tells me that people will always be arguing about epistemology. Science is constantly fine tuning its methodologies. Is a p-value of .05 sufficient for significance in social science studies? For a long time, the answer was yes, but with the replication crisis, that might be changing. Should theoretical physics back away from testability, as some theorists urge? I hope not.

      The trick, it seems to me, is to be scientific in our assessment of scientific methodologies. Science, over the long term, tends to stick with what works and shun what doesn’t work, with “works” referring to establishing reliable facts, facts that stand the test of time. Of course, that sometimes requires admitting when we can’t establish reliable facts.

      On your model and intuitive conceptions, your model doesn’t cover all of them, but I don’t consider that a defect. For example, we typically say an organism is conscious if it’s responsive, but there are many cases where responsiveness doesn’t really mean that, such as the startle reflex of a 16 week old fetus, or the reaction of a worm to noxious stimuli. I also don’t recall your model talking about metacognitive self awareness. Again, for the layer you’re addressing, I wouldn’t expect it to.

      On the experience of someone being electrocuted, I don’t know what you’d be experiencing in that scenario. I actually once experienced an electric shock and blacked out for the second or so it was going through my body, but I’d imagine it depends on how much electricity is reaching your brain.

      But your example reminds me of a counter-example, a photo taken during the first Iraq war of an Iraqi soldier’s burned body in a burned out vehicle. The soldier’s arms were over the edge of the windshield. I imagine the soldier’s final seconds were filled with unspeakable agony, yet he was still trying to get out of the vehicle. My interpretation of that gruesome scene is that he was doing prediction in those final seconds before he lost consciousness, about getting out of the vehicle and the fire. We’re obviously not talking D-day invasion type planning here, but it seemed like he would have been projecting a few seconds into the future.

      Feinberg and Mallatt, in their book, talk about the adaptive purpose of agonizing pain, particularly in noting that it doesn’t look like fish suffer from it. Their take was that the “purpose” of agony was to motivate the animal to hide and heal, an impulse that is apparently adaptive on land, but not in water. And it seems like most animals will heed the hide and heal impulse, unless their injuries are too severe for them to mechanically do it. Again, to me, finding a place to hide involves some primal predicting.

      I know when I’ve been in a lot of pain before, I sometimes just lay there. But my decision to lie there was arguably a prediction about what the best course of action might be. (Probably heeding my own hide-and-heal impulse.) That’s the problem with positing scenarios where we’re conscious and not doing prediction. The very act of perceiving and imagining is predictive modeling. The basic purpose of a forebrain, I think, could be said to be prediction, and the forebrain (the thalamo-cerebral system in humans) is where we feel. The very act of appreciating a feeling, it seems to me, is a prediction about what that input means.


  20. Mike,
    That people will always be arguing about epistemology doesn’t quite get to the issue here. They’ll always be arguing about physics as well. Nevertheless the physics community has developed its own generally accepted understandings for human use. Conversely the epistemology community has not yet provided us with such tools. I consider this void problematic for all varieties of science, but particularly for the softer portions that need the most help.

    It’s also notable that science is only a few centuries old. Thus it would seem premature to decide that the aspects of reality which philosophers explore can’t be agreed upon. I refer to this perspective as “epistemic dualism”. Academia seems to have generally overcome physical dualism, but seems quite accepting of epistemic dualism.

    On your observation that my models do not place “responsiveness” as inherently conscious behavior, I’m happy to see that you don’t consider that a flaw. Then as far as “metacognitive self awareness”, perhaps I do have that one. For the human I identify a second mode of conscious processing which involves the use of language. This seems to cover our potential to think about thinking. Still I’m not entirely sure that dogs don’t get “meta” even without formal language. This would be classified under the standard thought of “constructing scenarios”. Dogs at least seem to think about how they are thought of by others.

    On human pain, an injured body part certainly encourages us to not use it since it tends to hurt worse when we do. Most fish seem quite able to hide and heal, though perhaps this often isn’t adaptive for them. Instead it may be best to continue trying to pass on their genes even with the injury, or at least if life is short.

    Apparently strong continued pain was adaptive for us, and I consider this incredibly unfortunate. My thought is that we must have kept fighting the pain when it wasn’t sufficiently severe, and thus the people who felt it most acutely tended to survive the best. Perhaps memories of strong pain are at least instructive, since at its worst levels (as in hammered fingers and ripped out teeth) pain seems not only debilitating, but a tragic crime of existence.

    It is notable that evolution does provide quick shots of adrenaline to counteract pain, presumably to get us to safety under trying circumstances. That Iraqi soldier probably first felt amazing worry about his situation rather than pain, and then continued on with the initial adrenaline rush while burning up. (Though his adrenaline rush was surely a small consolation, we hot pepper junkies cheat biology for it.) Then there’s the hiker who cut off his own arm after being trapped for days, as displayed in Crash Course Psychology #17. (That’s the one that illustrates where I think I can help psychologists better understand motivation.)

    I’ve mentioned that the potential pain to pleasure ratio might be 1000 to 1. Care to venture a guess of your own? Furthermore if you were charged with estimating the value of existing as a given sentient life form over some period of time, and perhaps even the value of existing as yourself, in a conceptual sense, how would you make this assessment? What parameters would you propose that constitute the personal value of existing as any given conscious entity?


    1. Eric,
      On epistemology, science, and philosophy, I guess the way I think about it is, that when we figure out a way to start establishing facts in a certain area of investigation, it moves from philosophy to science. Atomism was once metaphysics, because prior to the 20th century, we didn’t have the means to establish reliable facts about it. Now it’s thoroughly in the realm of physics.

      The philosophy of mind is another area, although many would argue that the transition is still underway. Prior to neurological case studies, the mind lay firmly in the realm of philosophers. Today, it’s in the domain of scientific psychology and neuroscience, and any philosopher of mind who isn’t tracking the neuroscience is probably engaged in navel gazing.

      If I recall, your use of the word “language” is in a broad sense that includes the underlying capabilities. For me, language is a particular case of symbolic thought, which includes art, music, mathematics, or any endeavor where we use a symbol as a representation of an aspect of conscious experience. But our ability to do that requires a level of recursive metacognition. There is scant evidence for metacognition at all outside of primates, and it’s pretty limited even in non-human primates.

      On dogs and metacognition, I think we have to be very careful about trusting our intuitions too much. Much of what people intuit about their dogs is wrong. For example, most people think their dogs display guilt, but usually the dog is just reacting to our own reaction to whatever mess we’re walking in on. We project aspects of our own experience on them, but when scientists rigorously test for those aspects, they usually don’t find them.

      Do dogs, or other animals, have some level of metacognition? I don’t know. As I noted in a post, testing for metacognition is difficult. Failure to find it isn’t evidence of its absence. All we can say is they don’t appear to have it to the level of primates, and non-human primates have it far less than we do. The closest seem to be chimpanzees, who are capable of learning a few words, but not really using them except in the simplest associative manner.

      On the range of possible pain to pleasure, and parameters for the personal value of existence, I don’t really know. I’ve been lucky in my life so far that I haven’t had to experience extreme levels of pain. As you noted, in extreme cases adrenaline and natural opiates kick in.

      Could we, in principle, quantify the level of pain or pleasure someone was experiencing? We might be able to measure the level of arousal, and perhaps the valence. But those levels could be identical for vastly different situations, such as the extreme muscle burn an Olympic speed skater experiences in comparison to someone suddenly having that burning feeling due to a medical condition.

      What you’re calling personal value seems like it would depend on the organism’s current perceptions (both exteroceptive and interoceptive), affects, their interpretation of those sensory inputs in terms of their personal psychology, and, in the case of social animals, cultural indoctrinations. In some cultures, certain forms of self mutilation are interpreted as a coming of age experience, while in others they would be a horrific ordeal. Personal value seems like it would be extremely difficult to measure.


  21. Mike,
    I’m pleased with your observations regarding how philosophy gets converted into science. The implication is that all realms of philosophy will at some point make this transition, at least to some extent. But unfortunately today “team philosophy” takes a great deal of heat from “team science”, given marginal signs of progress in philosophy. To defend themselves, apparently even fully monist philosophers sometimes make the case for epistemic dualism, or the notion that there must be a “science stuff” which can be agreed upon, as well as a “philosophy stuff” that can’t be agreed upon. Instead of inciting these professionals to feel defensive enough to theorize such a contradiction, I’d hope for “team science” to stop these detrimental accusations and try to be constructive. I believe that philosophy needs to become known as the precursor to science, and thus, as history suggests, the integral first part of science. Hopefully philosophers would then put aside their defensiveness enough to consider their work in more effective ways. Given the naturalistic connection to all of reality, it seems to me that our mental and behavioral sciences will not be able to harden up much without at least more effective approaches to epistemology, as well as a form of ethics that goes beyond what’s “moral”.

    Our conceptions of the uses for and power of language seem pretty consistent. Still, I’d have you be wary of psychological studies which suggest that the human is amazingly different from other animals. I’ve noticed “motivated science” to this theme in the past. For example, regarding your next post (as viewed through Ginger Campbell’s Brain Science Podcast), I thought that Lisa Feldman Barrett was doing fine with her position, but then utterly lost it by associating feelings with words — as if languages teach us to feel various complex feelings. (I suppose that I should bring this up in your next one.)

    It’s pretty clear to me that dogs can feel “guilt” just as acutely as children can (and I’m not really a “pet person”). Observe that a dog may have an emotional attachment to its owner that is similar to what a child has for a parent. The dog and child need this love for personal wellbeing, and therefore it naturally hurts them when they are thus deprived. Also there will be things that the dog and child know that they’re not supposed to do, even though doing such things may be very tempting. Circumstances may come up where disobedience occurs, since each moment we do what we feel (regulated by our hopes and worries about the future as well). Once done in a way that can’t be hidden, regret should naturally set in. Here these subjects realize that they’ve jeopardized the love that they depend so much upon, and might foresee extra punishment beyond that as well. I consider the feelings here to be “guilt”.

    Also my parents have a dog that seems quite afflicted with jealousy. I’m told that if a given grandchild is seen carrying around a doll, that their dog has been known to covertly take the doll into their bedroom and rip it to shreds. They consider this to be their dog’s protest statement regarding the love and attention that this child receives from them. I have little ability to deny their claim, since their dog doesn’t otherwise take things like dolls to their room and chew them up.

    Dogs did not evolve to speak natural languages, as we began to do hundreds of thousands of years ago, and without this powerful second mode of thought I wouldn’t expect associated levels of intelligence to have evolved in them either. But beyond intelligence it’s not clear to me what’s mentally different between the dog and the human. Any suggestions?

    Here’s a question. Is recursive metacognition required in order for there to be language? Or is that not actually the issue, though if a given creature does evolve the capacity for language, as we did so long ago, then it will naturally have far greater potential for recursive metacognition (since words provide a platform from which to potentially describe how we feel)? I suspect the latter to be more the case. In many animals the feelings should exist, as well as be personally considered, though without the tool of language to aid in these assessments.

    On pain, I’m really just guessing as well. I know what I consider to be amazingly horrible examples of it, though if I recall correctly this has always subsided to merely extremely bad minutes later. But I suspect that even initially when it feels pathetically bad, that I could still be burned or otherwise hurt in order to potentially feel far worse than even that. I’m not going to test my theory however. 😮

    I suppose that there are people who know the amount of damage that it takes to reach ultimate pain, which is to say, the people who torture others for information from time to time. Here I’m not just talking about highly publicized US military torturing, which is surely horrible, but rather where someone is unaccountably taken to be tortured and anonymously killed and disposed of. I presume that this happens all the time. Here death is one’s only potential salvation, though the longer that takes, the more horrible things become. The people who do this sort of thing should know when further damage no longer feels any worse given the reactions that they observe. I’m only guessing that the worst I could ever possibly feel has 1000 times the magnitude of the best I could ever possibly feel.

    I realize that you do know that I wasn’t asking your opinion about our potential to quantify how good or bad a given subject feels, but I’m not going to push you on that. I’ll just say that I believe that you believe what I believe in that regard. In fact I’m starting to suspect that you understand a good bit more about my ideas than you’ve let on. You’ve been providing me with what I consider to be a valuable service, or information about how things stand today in the fields that most interest me. In return I mean to infect you with outside ideas that are sensible enough to challenge the status quo. Furthermore I’ll need someone about like you to gain great mastery of them so that they might be tweaked where needed, and so that we might go on to do what these fields have not yet been able to do. It seems pretty clear that there is little potential left for further revolutions in our hard sciences, though our soft sciences and philosophy are just begging for better ways to go.


    1. Eric,
      “Still I’d have you be wary of psychological studies which suggest that the human is amazingly different from other animals.”

      I am wary of it, as I think I’ve noted in the posts that discussed metacognition. On the other hand, I’m even more wary of rejecting science because it produces results I dislike, particularly when the methodologies look fairly solid. We have to be on guard for motivated reasoning from all directions.

      On your question about the mental differences between dogs and us, I could only repeat what I said above. Our intuitions on this are not to be trusted. I fully understand the intuition of complex emotions in dogs, such as jealousy or spite. I’ve had those intuitions myself and know how powerful they are. But science hasn’t been able to verify most of it.

      “Is recursive metacognition required in order for there to be language?”

      I think it is. Consider what is required for the dog concept. First, you just saw the word “dog”, matched it in your head to the sound we make when talking about it, then to the sensory concept you have of dogs. Now, since it’s an established concept, you can do that with minimal conscious involvement.

      But the first time someone told you that a certain type of animal was called a dog, you had to access your current conscious experience, map it to the conscious experience of the sound of the word, and then later recall that matching. When you learned how to spell “dog”, you again had to access your conscious experiences and do the mappings.
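
      In computing terms it’s a bit like building a lookup table: the costly, conscious step is forming the mapping, and afterward comprehension is a near-automatic lookup. (A loose analogy only, not a claim about neural mechanics; the names here are pure invention.)

          # Loose analogy only: the expensive metacognitive step is forming the
          # word-to-concept mapping; later comprehension is a cheap lookup.
          lexicon = {}

          def learn(word, concept):
              # First exposure: map current conscious experience to the symbol.
              lexicon[word] = concept

          def comprehend(word):
              # Established concept: retrieval with minimal conscious involvement.
              return lexicon.get(word)

          learn("dog", {"sound": "spoken 'dog'", "sense": "furry four-legged animal"})
          print(comprehend("dog"))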

      “Or is that not actually the issue, though if a given creature does evolve the capacity for language, as we did so long ago, then it will naturally have far greater potential for recursive metacognition”

      If you’re saying that language is the killer app for human level metacognition, my response is, maybe. It could also be that metacognition aided in managing social relationships and that was its evolutionary driver. Or it might be both, co-evolving together. The problem is that we don’t know when language began, or the level of social sophistication of homo erectus or other early hominids.

      I do wonder why only primates might have metacognition. Maybe in its earliest nascent development, it provided some advantage when navigating around in trees. Or maybe even then it was more about social relationship management. But if it’s that, you have to wonder why other social animals don’t show signs of it. (Aside from assuming it’s there and that we just can’t detect it.)

      Eric, I promise I’m not being intentionally dense about anything. If I misunderstood anything you asked or said, it was a real misunderstanding. I do sometimes ask questions to get you to think about what I see as issues, but I don’t think I did that in the last response. Anyway, sorry if I’m seeming uncooperative.

      On hard science and revolutions, late 19th century physicists thought they had just about figured everything out. There were just those pesky matters of the black body problem and the Michelson-Morley experimental results to explain, and then they’d be done. Of course, those turned out to be serious cracks in the reigning paradigm. Who knows what revolutions our current cracks might lead to?

      There may actually be a revolution coming in social sciences, but possibly for a different reason than you envision. Big data. It’s changing the way social scientists have to find answers. Instead of doing surveys, which count on respondents being accurate and honest, using big data, such as online search data, shopping behaviors, Facebook behavior, and other sources, can reveal things about human nature that are very difficult to uncover with self report and small sample sizes. There’s a book, ‘Everybody Lies’, which discusses this trend. I can’t say it was a great book, but I found it interesting.


  22. Mike,
    Regarding the potential for dogs and cats to have complex feelings: Given that I’m no pet lover, I don’t believe that I personally desire them to have such complexity. This just seems more sensible to me. If they didn’t actually feel such things then we wouldn’t expect them to act in ways that seem like guilt, jealousy, frustration, hope, and so on. So it’s possible that they appear this way because of how they feel. I’d expect social species to naturally face many of the same challenges that we do, so similar feelings would be appropriate. Furthermore I doubt it would surprise many today if the studies you’ve mentioned were not actually executed well. But yes, I’d like to know the specifics of how these experiments have been structured.

    I’ve gone through your July 8 “Layers of self awareness and animal cognition” post again. As you know my own model of human consciousness doesn’t contain a specific “metacognition” component, and I don’t yet see the need to add one. My thinking is that the capacity to think about thought is dependent upon having a conceptual understanding of “thought”, and I’m not sure how this could occur without a language from which to represent and contemplate it. So as you’ve surmised, I do consider language to be the “killer app” for metacognition, and much more.

    For example here is a metacognitive statement: “I used to think that I was smart when I was young, but now I think that I was stupid.” Apparently it’s thought today that the human has a natural capacity to think such a thought given inherent human metacognition. I’m saying that without associated terms, which are far more nuanced than the standard nouns that I expect conscious life in general to potentially conceptualize, the human will not be able to think, “I used to think that I was smart when I was young, but now I think that I was stupid”. How else could such a concept be consciously grasped? This should be beyond a “feral child”, to say the least.

    I believe that dogs are able to grasp a reasonable bit of human language anyway. Their neurons should already fire in ways associated with visual inputs like “trees”, “rain” and so on. Furthermore, if we’re always using specific sounds to refer to various people, places, and so on, then when heard by dogs, associated ideas should naturally come up. If grandma tells their dog “Go get grandpa!”, I presume that it will interpret this input and construct scenarios about what to do. If it knows these terms well enough, and especially if grandma seems to need grandpa, the dog may very well do as it’s told. But to instead think, “I used to think that I was smart when I was young, but now I think that I was stupid”, I just don’t see how anything could get hold of such an idea without developing it through associated words. And even if a given monkey can use sign language a bit, I’d say the same for it. (I’m not actually sure how being able to perceive a foreign object on one’s coat by means of a mirror demonstrates the ability to think about thought.)

    Sorry about implying that you weren’t quite being straight with me. I now see that I did mention the term “estimate” in my question, which naturally implies the quantification that you went into. My bad. I should have simply been happy that your speculation concerned the answer that founds my theory itself!

    I’m also looking forward to the honesty of “big data”. But the thing that such information should do is give us less biased ways of assessing theory. Thus any revolutions should occur through better theory, mine or others, since that’s what more accurate data should tend to champion. To date big data seems to be telling us that we lie to everyone, including ourselves. That’s not only consistent with my own models, but supports the idea that there’s a social tool at work which encourages us to tell these lies. This could be the “morality” dynamic that I’ve been mentioning, and so perhaps we’ll need an amoral form of ethics to help these sciences become more complete.


    1. Eric,
      Apologies for the brief reply. I’m a bit short on time this morning, but thought you might be interested in this answer.

      “But yes, I’d like to know the specifics of how these experiments have been structured.”

      Although not specifically covering dogs, this paper gives a high-level overview of the methods used to test for metacognition, starting on page 14. (The whole paper is worth reading, though.)

      PDF: Metcalfe%20EvolMetacog.pdf

      If I recall correctly, many of the cited papers are paywalled (albeit recoverable through SciHub).

  23. Mike,
    Technically my query about experiments concerned ones which suggest that animals such as dogs do not have rich emotional lives, not experiments which suggest that they have no metacognition. I quite agree with the latter (and I also disqualify those three rhesus monkeys, which I’ll get to in a moment). So if you come across any good studies which suggest that dogs and many other social species cannot feel things like jealousy, maybe you could pass them along? It seems pretty clear to me that they feel all sorts of things. I realize that my position contradicts that of Lisa Feldman Barrett from your next post, though I presume that there are better explanations than the dearth of being taught terms like “jealousy”. (As in, “Look sweetie, this is a ‘wug’.” I cringed at that one!)

    I’ve thoroughly gone through the Janet Metcalfe paper that you provided above, so that I might get a better sense of how “metacognition” is being treated in general. It’s been quite helpful! Still, my own models suggest that the standard view needs improvement. I’ll first present my own perspective here, and then relate it back to the standard one.

    Metacognition is literally defined as thinking about thought. Therefore we might wonder when the term “thought” first surfaced in us, as well as how thinking about it might have made us stronger. In the evolution of language, at some point we’d expect people to be able to express things like, “I think that our fire is too big”, or “I think that we should hunt rabbits tomorrow. What do you think?”. Maybe this occurred 10,000 years ago, or even 30,000, and it must have been quite useful. Not only could one then express oneself more effectively, but perhaps more importantly one could think in more effective ways, given the “thought” concept itself. This is what language does in general as valuable new terms are invented.

    Thinking about thought, however, shouldn’t have been much more adaptive for hunter-gatherer societies than the other ideas that might be thought about. Though perhaps useful, the statement “I used to think that I was smart, but now I think that I was stupid” seems relatively ordinary. It should have taken academic specialists to sufficiently appreciate the genius of “I think, therefore I am”. Thus for metacognition to find much of a niche, civilization would be a minimum requirement. So while natural language seems an amazing achievement, or a powerful second variety of thought, metacognition could be classified as a fairly normal example of language use.

    To now go back to the standard perspective on metacognition, researchers apparently have expanded the concept in a couple of ways that make it seem more useful and yet uniquely human. They theorize the need for internal representations, and restrict the concept to tasks which require a good bit of memory rather than present stimuli. Thus a human or monkey that’s clever enough to decide either to take a video game test for a good but speculative reward, or to skip it for an assured small reward, must have an internal representation of its desire for rewards in mind, while other animals must not. Poppycock! Some animals are simply more clever than others at making such assessments, but all conscious life must have “internal representations” that rewards are desired. That’s how the conscious, unlike the non-conscious, is theorized to function.

    Once again, I believe that we’ll need effective models of mental dynamics in order to straighten this sort of thing out. My own models suggest that conscious life uses its memory information, its sense information, and its good/bad feelings to run scenarios about what to do, given the ultimate goal of feeling better, with or without language. Perhaps if such a model were to become generally accepted, then there wouldn’t be the impetus to use fancy terms like “metacognition” as platforms from which to finally figure out how we happen to be unique.

    1. Eric,
      Sorry for not responding directly to your query. Barrett, in her book, cites this paper on an experiment to investigate apparent dog displays of guilt.
      http://www.ncbi.nlm.nih.gov.ololo.sci-hub.io/pubmed/19520245
      Her summary:

      In each trial, a dog owner offered his or her dog a desirable biscuit, then explicitly instructed the dog not to eat it and promptly left the room. Unbeknownst to the owner, however, an experimenter then entered the room and influenced the dog’s behavior, either handing the treat to the dog (who ate it) or removing the treat from the room. Afterward, the experimenter either told the owner the truth or lied. Half the owners were told that their dog had obeyed and to greet their dog in a warm and friendly manner; the rest heard that the dog had eaten the biscuit and should be scolded. This created four different scenarios: obedient dog with a friendly owner, obedient dog being scolded, disobedient dog with a friendly owner, and disobedient dog being scolded. What happened? The scolded dogs performed more behaviors that people perceive as stereotypically guilty, regardless of whether or not the dogs had disobeyed. This is evidence that dogs were not experiencing guilt at performing a forbidden act; rather, their owners were perceiving guilt when they believed the dog had eaten the biscuit.

      Barrett, Lisa Feldman. How Emotions Are Made: The Secret Life of the Brain (p. 266). Houghton Mifflin Harcourt. Kindle Edition.

      She discusses another paper that appears to show jealousy in dogs, but points out that the paper doesn’t control for what emotions the interacting humans may have been showing. She also published this booknote: https://how-emotions-are-made.com/notes/Animals_and_human_body_movements

      To be clear, she does accept that dogs have affects, just not the complex emotions that humans make. She makes a strong distinction between affects and emotions. Myself, I see an affect as a component of the overall emotion, so I’m okay with saying dogs have simple emotions, such as Panksepp’s primal ones, but not the more complex ones that require substantial intelligence.

      Glad you read through Metcalfe’s paper. I don’t agree with all of her interpretations, but I haven’t found any issues with her discussion of the scientific evidence.

      On metacognition, language, and all the rest, I think we’re going to have to just accept we disagree on this point. Which is fine. If we always agreed, at least one of us wouldn’t be thinking 🙂

  24. Nice article! Sorry I haven’t been visiting lately…just hunkering down with my novel.

    I wonder what the authors make of people who belong to “subservient” groups that are actually more financially prosperous than the others, though still not a majority? Or those who don’t share many cultural ties to those subservient groups, despite belonging to them through external measures?

    1. Hey Tina. Good to hear from you. No worries. Hope the novel’s going well. At least you’re working on it (unlike me).

      On people in subservient groups who are well off: if I remember correctly, it depends on how much human capital they have. People with low human capital who nonetheless strike it rich often retain the attitudes of their low human capital peers, mainly because their position is less firmly established than that of those with higher human capital.

      For people in subservient groups who are well off with high human capital, I think the authors would expect them to favor a meritocracy. They would oppose discrimination, at least against their own group, and wouldn’t have strong interests in set-asides. Although they might have stronger interests in the set-asides if they did have cultural ties (family members, close friends, etc) to others in their group. I think the authors would also expect them to be economically centrist.

      Their social views would tend to depend on their lifestyle propensities, unless perhaps their particular group is defined by sexuality (gays, transgendered, etc).

      Of course, these are all tendencies. No one will match their profile perfectly. I’m certainly more supportive of social safety nets than my profile might suggest.

        1. It’s interesting that self-interest is expanded in this way to take into account cultural and psychological tendencies, since I usually hear that phrase used to describe financial interests primarily. This analysis helps to make sense of our current political situation, in which cultural inclinations seem more important than other kinds, at least as I see it. It also makes it harder to call the other side idiots for voting against their own self-interest (since, when looked at in this broader sense, they’re not).

        I’ve always felt that people give hackneyed arguments (“It’s against the law” or “It’s a woman’s right”) to support what’s really an emotional stance. That’s not to say that everyone only rationalizes, but when I hear the same old argument in the same exact words, I tend to think the person spouting it hasn’t really given their position much thought, which means the reason doesn’t seem to matter to them so much as winning the argument.

          1. The authors definitely take a broader view of self interest, one that encompasses things like social position, as well as the interests of someone’s family and close friends. I sometimes wonder if a better term wouldn’t be “immediate” or “visceral” interests, to sidestep the definitional debate about “self interest.”

          But once you start viewing things in that light, people on the other side look less like idiots. You might still oppose them (particularly if it goes against your own interests), but it feels like a different kind of opposition, at least to me.

          On hackneyed arguments, I think that’s definitely true. And pointing out to someone that they’re rationalizing never seems to be productive. Consciously, they’re convinced they’re right. Unconsciously, they’re not really interested in being rational, only in securing their interests, whether financial, social standing, or something else.

          The problem is that it’s much easier to detect when someone else is rationalizing than it is to detect it in ourselves. This has made me skeptical that anyone can ever be confident that they’re not themselves rationalizing. It helps to expose our reasoning to others and see if they can pick it apart, but even that’s not guaranteed if our collaborators have the same biases.

          1. I suppose the only way I can tell when I’m rationalizing is afterwards, although even at the time I can get an uncomfortable feeling that nags at me, telling me I’m not saying what I mean. Oddly, it comes more often when whatever I’m saying isn’t getting any opposition at all, which goes back to what you said about pointing out that someone’s rationalizing. As you say, best not to. Maybe it’s even better to just nod and smile, and let people come to that realization themselves.

    2. “Maybe it’s even better to just nod and smile, and let people come to that realization themselves.”

      That’s definitely one strategy. If I do debate them, I generally find it best to stick to logically countering their actual points and ignore the rationalizing part. (Not that I have a perfect record on this by any means. 🙂 )
