A thought about objectivity

The idea of objectivity gets a lot of criticism. One common complaint is that it’s a fantasy viewpoint, a God’s eye view that doesn’t exist, a view from nowhere that we can never take. It’s a complaint I’ve often seen from people who think studying consciousness in a third-person manner is misguided. It also ends up woven into Carlo Rovelli’s relational interpretation of quantum mechanics, as well as other interpretations of quantum physics like QBism.

But I think this is a fundamentally misguided way of thinking about objective propositions. An objective view isn’t a view from nowhere; it’s the collective view from many subjective viewpoints. Both subjective and objective perspectives involve the creation of models. But subjective models are created in one mind, with that mind’s unique perspective.

A single person may try to vary their perspective as much as possible, but there are fundamental limitations. They can’t escape the way their unique constitution and experiences color, filter, and skew any perceptions they might have. So the models from any subjective view are unavoidably going to have blind spots and misconceptions.

The way to get around these issues is to collaborate with others, to assess the commonalities among our subjective models, and to create a new model, one that can be continually refined and tested against new perspectives. That, I think, is what an objective “view” is. It isn’t a view from nowhere; it’s a model created from many viewpoints, one that ideally allows us to predict what the view might be like from still other perspectives.

Of course, objective models can have their own blind spots and illusions. Even in a collaboration, we can’t escape the way our social zeitgeist, our cultural milieu, affects our collective perceptions. Even when we broaden the collaboration to multiple cultures, there may be species-level blind spots that get in the way. But models at these levels should still be far more robust than the ones in any one individual mind.

And all of this assumes that these models are being tested. Science’s strong ethos of reality-checking whenever possible can act as an additional safeguard. Of course, evaluating evidence has many of the same issues noted above, so nothing will ever be perfect.

What do you think? Is the idea of objective knowledge fundamentally misguided? If so, how do we explain the success of science in the last 500 years?

51 thoughts on “A thought about objectivity”

  1. It does seem to me difficult to argue against the primacy of a physical world that is objectively there (or there from multiple subjective viewpoints), given that we all seem to find the same stuff when we look from our different perspectives, even if we have slightly different takes on it…and generally stuff is still there when we come back to it.

    It also seems to me a little odd that in respect of time, physics seems to have nothing to say about why our ‘now’ seems to be where it is, given that in terms of the equations of physics we could find ourselves anywhere on the timeline of the evolving solution.

    1. The absence of a now in physics is puzzling. It seems to be an unavoidable concept for any system that needs to take in information from the environment and make decisions, including us, animals, robots, and any information processing system. It might apply to any life, although in most cases it would be competence without comprehension. We know the past with a much higher certitude than the future. We can’t avoid existing at the boundary between past and future.

      The question is, where between fundamental physics and these types of systems does now emerge? Is it only once we have teleological or teleonomic systems?

    2. I like this response because it emphasizes the ontological use of the word “objectivity” – meaning, what is really out there in the world. Mike was emphasizing the epistemic use of the word – which is perfectly fair, since, although both uses are common, I think the epistemic meaning is more common. It also shows how the ideas are related, in that our strongly overlapping takes on the world are an important indicator of the presence of an (ontically) objective world.

  2. Objectivity, like kindness, is something one strives for but never acquires in the absolute. (Nature abhors absolutes, don’t you know.) A complaint that no one can be 100% objective is an idiot complaint, probably from a philosopher, not a scientist. (Philosophers are obsessed with absolutes.) So, one strives for objectivity, and if another points out where I have been less objective than I could have been, then I will stand corrected.

    1. That’s a good point. Objectivity, like so many other things, isn’t a binary thing that’s either present or absent, at least in epistemic terms. It seems like the more viewpoints that are taken in, the higher the objectivity. So even a single person working to take different perspectives might be more objective than someone only taking their customary one. But a model validated by only one culture seems less objective than one validated through multiple cultures.

  3. It does seem to me, but I could be wrong, that if we consider the multitude, then the matter becomes objective. It would be akin to arguing that a majority held opinion is right. But maybe I misunderstood you.

    1. Yes and no. I probably should have talked about the importance of expertise in all this, and emphasized the importance of testing more. Of course, assessing expertise and test data is also a collective endeavor.

  4. I agree. Well said. Which is why, I think, it’s important to read – indeed, why we read. And also, to listen to others who disagree with our point (perspective). If we do that, it makes us smarter. However, it appears that specialization and competitiveness within fields inhibits that. Confirmation bias wins out over objectivity.
    Which makes us stupider.
    If our model is correct, it should be able to accommodate new information, if that new info is also correct/accurate. Seems we’re stuck, if not even going backwards?

    1. I think the competition can be productive, and get us closer to truth, as long as we all agree on a standard of resolution. In science, that always comes down to observation, either experimentally, or if that isn’t possible, systematically.

      Matters seem more difficult in philosophy, where intuition jousting seems to prevail in a lot of cases. Often the only way things seem to get resolved is the rare case when someone admits they were wrong and renounces their prior position, or their camp merely dies out. Actually, thinking about Max Planck’s remark, that science advances one funeral at a time, that might also be more true in science than anyone likes to admit.

  5. I mostly agree with your view of “objective”. Yes, some people take it to be an unrealistic “God’s eye view”. Nagel has a book “The View from Nowhere” which I also saw as unrealistic.

    To make any sort of judgement at all, is to judge according to a “standard”. I put “standard” in quotes to allow for temporary ad hoc standards. To judge objectively is to judge in accordance with shared community standards. And that’s really all that it could be. We also have our personal standards, which might be more restrictive than community standards. I see a subjective judgement as one made in accordance with personal standards.

    To relate this to consciousness, a child’s consciousness develops as that child begins to form standards and to view the world in accordance with his own standards. There is both a biological component and a sociological component in how we develop those standards. Computers cannot be conscious because we design them to use our standards rather than to develop their own.

    1. Ah, that’s where that phrase came from. Thanks. Can’t say I’m surprised. It seems like Nagel managed to single handedly make those types of claims respectable again.

      Interesting point about standards. I might use the word “associations”, or maybe “categorizations” instead. But I do disagree with your point about computers. Machine learning is computers learning new standards. (My laptop had to learn my face in order to log me in by looking at it.) Yes, it currently takes them a lot longer to make those connections, but I think a lot of that is because they have fewer standards coded into them at the beginning than animals do.

  6. I just happened to have watched a Feynman video yesterday, his Fun to Imagine interview, where he espouses the idea that we all need context. That his might not be yours, but that we need a common one as a baseline to discuss (collectively assume) the features of a subject. https://www.youtube.com/watch?v=P1ww1IXRfTA

    Such thoughts, to me, must, at least on some level, acknowledge the tautology of our existence. That and the inherent absurdity that that imposes. We can’t discuss any of this without our ability to discuss any of this, which comes from our elevated cognizant abilities resulting in a consciousness that’s capable of discussing its own existence. That we could ever consider some Final Layer of Understanding, knowing full well that our basis for discussion stems from a figment of our own self-realization.

    Ultimately, unless insanity is the goal, shoving a stick into the sand and declaring, “We’ll start from here,” must gird all discussion on any topic, no?

    1. I think that’s right. It’s along the lines of the Bayesian view of science, that we all start with existing prior credences, which then have to be adjusted as we make new observations or think through what we already know. With each of us going through the same process, we generally end up converging on most things, which seems to indicate that there’s a reality “out there” that we’re converging on. (A toy sketch of this convergence follows at the end of this comment.)

      Of course, that assumes that all those other minds are actually out there. Maybe there’s only one mind in reality that’s imagining the others. But solipsism is a dead end. Once we do accept that the other minds are out there, and that our communications with them are of reasonably good fidelity, then it seems safe to assume the convergence is a meaningful one.

      Doesn’t mean we won’t be broadsided occasionally by reality, both personally and collectively.
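
      To make the Bayesian convergence point concrete, here is a minimal toy sketch in Python. The coin, the numbers, and the observer labels are invented purely for illustration, not taken from anything above.

      ```python
      # Toy sketch: observers with very different priors converge when they
      # update on the same evidence (a coin that really is 70% heads).
      import random

      random.seed(42)
      TRUE_P_HEADS = 0.7   # the reality "out there"
      P_IF_H = 0.7         # hypothesis H: the coin is 70% heads
      P_IF_NOT_H = 0.5     # alternative: the coin is fair

      def update(credence_in_h, heads):
          """One Bayesian update of P(H) after seeing a single flip."""
          like_h = P_IF_H if heads else 1 - P_IF_H
          like_not_h = P_IF_NOT_H if heads else 1 - P_IF_NOT_H
          num = like_h * credence_in_h
          return num / (num + like_not_h * (1 - credence_in_h))

      # Three "minds" with very different starting credences in H.
      credences = {"skeptic": 0.05, "agnostic": 0.5, "believer": 0.9}

      for _ in range(500):
          heads = random.random() < TRUE_P_HEADS          # the shared observation
          credences = {k: update(v, heads) for k, v in credences.items()}

      print(credences)   # all three end up with nearly the same high credence
      ```

      The different starting points wash out under shared evidence, which is the sense in which the result is more than any one subjective view. What the procedure can't do is protect against a shared blind spot, such as the right hypothesis not being in anyone's hypothesis space.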

  7. > “objective propositions” – “it’s the collective view from many subjective viewpoints.” I found this statement to be subjective. Here is a simplified example. Suppose we have five personal viewpoints of what 2+2 is. They stated that 2+2 is 3, 5, 7, 1, and 6. Is this collective view from many subjective viewpoints better than a single answer that 2+2=4?

    Objectivity and subjectivity are the terms invented by humans, used by humans, and should be analyzed concerning human positions, actions, etc., and not regarding “reality,” the Universe, etc.

    I think it is preferable not to use the terms objectivity and subjectivity separately but use them instead in relation to each other.

    There is no 100% objectivity, and it would be better to announce the scale used to measure objectivity. It is quite possible that one’s personal statement about something can be much more objective than the statements of a hundred other people. I think independent observers should measure the degree of objectivity of somebody’s statement. They should check whether the statement would still stand if you take out all the personal positions, views, and beliefs of the statement’s author and of the participants in the discussion.

    1. I did mention testing in the post, and as I admitted to Mak, I probably should have discussed expertise. I think those factors would ameliorate the 2+2 issue you discuss. Of course, they won’t make it go away, because it’s always possible that all our attempts to test what happens when we put two apples together with two other apples are hopelessly blinkered by cultural or species-level biases. All we can say is that the collective decision about it seems to be a productive one.

      I agree that epistemically there is no 100% objectivity, and with your point about it being possible for a single person to be more objective than another, assuming they’re working to look at an issue from multiple perspectives. Of course, assessing any one person’s objectivity is subject to the same issues as assessing the proposition they are making.

  8. That’s what I always tell all those who say that science is about “objective truths”. There is no such thing. There are only “inter-subjective” models. It might sound like lexical hairsplitting. In most contexts it is, but when it comes to the philosophical problem of consciousness it makes all the difference.

    1. From an epistemic standpoint, I think you’re right. It seems like the distinction between intersubjectivity and objectivity is an ontological one. Of course, like all ontological claims, it’s a theory, a model, which we can only assess subjectively or inter-subjectively with ever larger numbers of perspectives. Still, when the model’s predictions turn out to be accurate for large numbers of perspectives, the claim to it being objective seems strong. But as others have mentioned, it never reaches 100%. We never get absolute knowledge, about anything.

      1. I don’t think that a model’s prediction accuracy for many perspectives is an assurance of “objectivity” (unless we posit this by definition: “objectivity” = the same inter-subjective model for many perspectives). It is like looking at the shadow of a cube, resulting in a square. We all agree we see the square, all measure the same size and orientation. Yet I find it difficult to speak of a shadow as being an objective model giving us “objective” knowledge (let alone absolute knowledge) of what the cube is. It remains an “inter-subjective” perception of things, with no possible “objectivity” (in the sense of knowledge of things as they are in themselves). At bottom, it boils down to what Kant already pointed out with the noumenon-phenomenon polarity, or to Plato’s good old cave allegory. Models will always and forever remain inter-subjective abstractions of our consciousness and in our consciousness. No way out, not even in principle.

  9. The way I understand objectivity is like how a vector in space “is what it is” but you can describe it from any basis you like — it won’t change the vector itself, only the description of it. So objectivity is the synthesis (or rather what remains) of truths across as many (ideally all) subjective perspectives (which in the end are just like differing vector bases).
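
    As a toy numerical illustration of the analogy (the specific vector and rotation are arbitrary choices, added here purely for illustration): the same vector gets different coordinates in different bases, while basis-independent facts about it, like its length, stay the same.

    ```python
    # The same 2-D vector described in two different bases: the coordinates
    # change, the vector (e.g. its length) does not.
    import math

    v = (3.0, 4.0)                      # the vector, in the standard basis

    # A second basis: the standard basis rotated by 45 degrees.
    theta = math.radians(45)
    e1 = (math.cos(theta), math.sin(theta))
    e2 = (-math.sin(theta), math.cos(theta))

    # Coordinates of v in the rotated basis (dot products with the new basis
    # vectors, which works because the new basis is orthonormal).
    coords_rotated = (v[0] * e1[0] + v[1] * e1[1], v[0] * e2[0] + v[1] * e2[1])

    print(coords_rotated)                                # a different description
    print(math.hypot(*v), math.hypot(*coords_rotated))   # same length: 5.0 in both
    ```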

    1. Thanks James.

      My take. Subjectivity is the collection of predictive models formed from the perspectives of a single mind. The models, including both the ones of the environment and the system itself, are probabilistic and so only accurate enough to be adaptive. Any difficulty in reconciling with physically measurable processes is due to the limited accuracy of the self models.

      I’m sure that statement won’t be controversial at all. 🙂

      1. I want you to define almost all of those words in terms of the physical. What’s a model? What’s a mind, and what’s a perspective relative to mind, and why is it different from a brain perspective? Is a self model necessary and how does it relate to physical processes?

        *
        [that’s (almost) all I want to know]

        1. A model is a complex collection of associated conclusions, predictions, and/or actions, which is physically manifested as a pattern of neural firings, possibly along the lines of Damasio’s CDZs (convergence-divergence zones). If you want to say unitrackers linked by semantic pointers here, I wouldn’t object.

          A mind is the operations of the brain involved in taking in sensory information and making decisions about what the organism will do. (A brain, in case you ask, is a concentration of nervous system functionality near the distance senses, typically the head, in an organism.)

          A perspective is the location and mechanisms involved in taking in information, as well as categorizing that information in terms of learned and innate associations. A location might involve a literal physical location, which will have effects on what information can be taken in. Mechanisms include how many types of cone cells are on the retina, the animal’s ability to detect chemicals in the air (smell), the range of vibrational frequencies (hearing) it is sensitive to, and a host of other factors.

          Any useful modeling of the environment, I think, requires at least an incipient self model. That model may only be a body schema. For some animals, it may include something like the attention schema, as well as models of initial evaluations (feelings) used in action scenario simulations. (Again all done with neural firing patterns.) A social species is likely to have a more developed self model for use in social interactions. Humans have an extremely developed one capable of recursive metacognition.

          Hope that helps.

          1. Gonna focus on model for now. Conclusions and predictions are abstractions, and maybe actions are too, but actions have an obvious physical component.

            As you might guess, I’m trying to drive you in a direction. I guess I can explain where I’m headed now, and for the sake of simplicity I’ll go straight to the highest level and invoke unitrackers. A unitracker is a physically isolatable pattern recognition unit. Activation of the unit is a “conclusion” if the input came from the designated “pattern” inputs. Those inputs can be/often are from other unitrackers. Activation of the unit by other means (other than the pattern inputs) constitutes a prediction if that activation causes the pattern inputs to be more likely, i.e., lowers the threshold for the input unitrackers.

            My main point is that all of this is information processing, and so substrate independent. The subjective perspective is the informational perspective. The subjective perspective has access to the predictions and conclusions, but only as given abstractions. This access can be physically accomplished via pointers, but again, from the informational perspective, those pointers simply reactivate the given abstraction.

            *
            [okay I feel better, so I’m done, unless you have questions/comments]
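
            A rough, purely illustrative Python sketch of the unitracker idea described above (the class, names, and thresholds are invented for the example, not anyone’s actual model): bottom-up activation from the designated pattern inputs counts as a “conclusion”, while activation from elsewhere acts as a “prediction” by lowering the thresholds of those inputs.

            ```python
            # Illustrative sketch only: a "unitracker" as a simple threshold unit.
            class Unitracker:
                def __init__(self, name, threshold=1.0):
                    self.name = name
                    self.threshold = threshold
                    self.pattern_inputs = []   # upstream unitrackers feeding this one
                    self.active = False

                def bottom_up(self):
                    """'Conclusion': activate if enough designated pattern inputs are active."""
                    signal = sum(1.0 for u in self.pattern_inputs if u.active)
                    self.active = signal >= self.threshold
                    return self.active

                def top_down(self, priming=0.5):
                    """'Prediction': activation from elsewhere lowers the input thresholds,
                    making the expected pattern easier to trigger."""
                    self.active = True
                    for u in self.pattern_inputs:
                        u.threshold = max(0.0, u.threshold - priming)

            stripes, legs = Unitracker("stripes"), Unitracker("four legs")
            tiger = Unitracker("tiger", threshold=2.0)
            tiger.pattern_inputs = [stripes, legs]

            stripes.active = legs.active = True
            print(tiger.bottom_up())   # True: a bottom-up "conclusion" from the pattern
            tiger.top_down()           # a top-down "prediction" primes stripes and legs
            ```

            Real proposals are of course far richer; this just pins down the conclusion/prediction distinction in runnable form.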

          2. I think I’m onboard with most if not all of that. Certainly that this could all be instantiated in an alternate substrate.

            I would just point out that the patterns that trigger a conclusion (unitracker) are built up over time across many different episodes. Sensory information in any one event is always sparse and gappy, so in practice the conclusion / unitracker is always a prediction, one that back-propagates to earlier conclusions / unitrackers, and eventually to the earliest sensory regions. Comparisons between the feedforward and feedback signals serve as error correction.

            We perceive what we expect to perceive, with our expectations improving as sensory info continues to come in.
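
            A toy version of that error-correction loop, purely for illustration (all the numbers are arbitrary, not from any actual model): the expectation is the prediction, each sparse noisy sample produces a prediction error, and the error feeds back to improve the expectation.

            ```python
            # Toy error-correction loop: expectations improve as sensory info comes in.
            import random

            random.seed(0)
            true_signal = 10.0     # what's actually out there
            expectation = 0.0      # the system's initial prediction
            learning_rate = 0.2    # how strongly prediction error corrects the expectation

            for _ in range(30):
                sample = true_signal + random.gauss(0, 1.0)  # sparse, noisy sensory input
                error = sample - expectation                 # feedforward vs feedback mismatch
                expectation += learning_rate * error         # error correction

            print(round(expectation, 2))   # close to 10.0: we "see" what we've learned to expect
            ```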

  10. Collecting multiple subjective perspectives may not help. We all live in a bubble, surrounded by like-minded people. We simply do not associate with or even actively avoid people who are not like-minded. Since we don’t know many people whose opinions differ from ours, we tend to think that most people think like we do. We may think that our opinion is “objective” because most people we know think likewise, but it may be just an illusion.

    1. Too true. Those are the cultural blind spots I mentioned in the post, though maybe at the sub-culture level in what you’re describing. Testing our conclusions helps, but as I noted, even the evaluation of observational data requires a lot of judgment and expertise. If that judgment is happening within a closed social bubble, then it’s going to have blind spots and unnoticed biases.

      The only solution I know is to be open to interacting with people with different opinions, and actually pay attention to what they’re saying. And only dismiss it if we can find logical reasons to do so.

      Of course, that strategy has practical limits. If someone comes to me and says I should be open to paranormal phenomena, my previous forays into that subject are probably not going to incline me to sink much time into it. And for complex technical subjects, most of us are largely dependent on what the experts say, although when the experts disagree, it’s worth listening to the arguments of the different substantive camps.

      1. And only dismiss it if we can find logical reasons to do so.

        Sometimes, logical arguments must be dismissed for moral and ethical reasons. E.g. eugenics can be scientific, has lots of perfectly valid logic behind it, and even can be deemed “objectively good for society”, but the practice is simply inhumane.

        1. I don’t think logical arguments necessarily mean arguments completely without emotion. Logic without emotion is valueless.

          I have to admit I’m not well read on eugenics, but most of what I’ve seen about it implies the underpinnings are a caricature of the actual science, with determinations of who is fit or unfit based more on ideological preferences than on any valid evolutionary theory. But to your point, even if the science really was ironclad, we wouldn’t want to do it. Another example: we could learn a lot if we did invasive medical experiments on humans, but few of us want to live in a society where that happens.

  11. I think you need to define the context when objectivity is valuable and desirable. There are plenty of situations where objectivity is irrelevant or where a subjective opinion of one person or a group of people is all that’s needed.

    1. What would you see as examples where objectivity is irrelevant? Even in cases like deciding who to marry, I’ve seen people make terrible choices based on how they felt in the short term, while ignoring serious warning signs obvious to their friends and family.

  12. What we call objective knowledge is knowledge abstracting away from individual personal points of view. The philosophical misconception of equating it with absolute knowledge (a.k.a. God’s point of view) has caused a lot of trouble, in philosophy, in politics and, yes, in religion. Trouble is, deflationary views of truth tend to suffer from the opposite problem — they often get conflated with cultural relativism. Walking the middle path is often hard, but it is, I think, necessary.

    1. Good point about abstracting away from individual viewpoints. And I agree that the idea of absolute knowledge is problematic. To some degree it’s a rationalization, a false standard. We can’t meet it, the reasoning goes, therefore all viewpoints are equally valid, or at least the rationalizer’s personal viewpoint is.

      I also agree about the messy middle. It’s not nearly as satisfying as veering hard one way or the other. It forces us to admit we can’t absolutely rule out the views of those we disagree with, only defend the reasons we give those views a lower credence than they do. On the other hand, changing our own mind later, if it becomes the right move, is easier if we admitted the uncertainties in our previous views.

      1. I should have added that because we gain objective knowledge by abstracting away from individual standpoints, it is necessarily inferential in nature. And being inferential makes it in principle defeasible. Hence the problem which derailed old-style AI efforts: unlike the traditional logic learned at schools and universities, our real-life logic is non-monotonic, which is hard to model in software (and was quite impractical given the IT resources available at the time). (A toy illustration of non-monotonic inference follows at the end of this comment.)

        The notion that all our objective knowledge is in principle defeasible makes some people profoundly uneasy. Yet our legal systems are based on the assumption that court/jury findings are truths — unless/until proved otherwise. And the world does not collapse into a mess of relativism in which every opinion is equally valid. As Feynman once pointed out, the aim of science, in particular, is to prove current knowledge wrong as quickly as possible. Thus our objective knowledge being defeasible is a strength, not a weakness. But it does depend on a shared appreciation of what makes a good argument. If that shared appreciation starts unravelling, trouble follows. I am sure I do not need to point out recent and/or current examples.
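
        For readers unfamiliar with the term, here is a toy illustration of non-monotonic (defeasible) inference, using the classic birds example (the rule and facts are invented for illustration): adding information can retract a previously warranted conclusion, something classical monotonic logic never does.

        ```python
        # Toy defeasible inference: new information can retract an old conclusion.
        def can_fly(facts):
            """Default rule: birds fly, unless a known exception defeats the default."""
            if "bird" not in facts:
                return None                            # no basis for a conclusion
            if "penguin" in facts or "broken wing" in facts:
                return False                           # exception defeats the default
            return True                                # defeasible default conclusion

        facts = {"bird"}
        print(can_fly(facts))      # True: justified on current knowledge, but defeasible

        facts.add("penguin")
        print(can_fly(facts))      # False: more knowledge withdrew the earlier conclusion
        ```

        In monotonic logic, adding premises can never remove a conclusion; the retraction above is exactly what makes real-life, defeasible reasoning harder to formalize.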

  13. From my own perspective the idea of objective knowledge can only be ontologically misguided, that is unless I happen to be an all knowing god. And if I were an all knowing god then it seems to me that I’d have to know that I was an all knowing god. I certainly don’t know that I’m an all knowing god! For this reason it seems to me that the idea of objective knowledge is ontologically misguided, for me at least. That wouldn’t be the case if any of you or anyone else happen to be all knowing gods. In that case however I’d expect such a god to empirically demonstrate their infallibility.

    If we scale “objective” back to the science based position of Mike’s actual post however, then sure, subjective entities like me can at least engage in science. It’s a model based institution rather than “truth”, but does seem to be getting better over time. Personally I think that going forward better science will largely depend upon better philosophy.

    1. Requiring that one be God to have objective knowledge doesn’t seem like a useful standard. Although it’s one that people who want to dismiss objective data often adopt.

      But it seems like there are two issues here: objectivity and knowledge. It doesn’t seem controversial that we can hold beliefs (in the sense of propositions) about objective reality without being God.

      The question then is what we need to label a particular belief (proposition) as “knowledge”. The classic answer is that it must be justified and true. But it seems like truth is just a circular standard. So that leaves us with justification.

      Do we ever get 100% absolute certitude with any kind of justification? No. But we can get ever higher levels so that we can say a sufficiently justified belief is much more likely to be accurately predictive than one that can’t be justified.

      So the question becomes, what do we call this higher probability belief? If we require 100% before we can use “knowledge”, then that word is useless. I think it’s productive to instead use it to label high probability beliefs. In this view, “knowledge” isn’t 100% certitude, but it is high certitude.

      In that sense, I think we can say we have objective knowledge. It seems like philosophy that requires absolute certitude is an impediment rather than a help for the pragmatic enterprise of science.

      1. I guess I don’t see how science would have problems if no one could ever say that they have “objective” knowledge. It’s just a title rather than something that would create more truth. But then if “subjective” knowledge seems too squishy then maybe try something else. I like to reference the beliefs of respected professionals in some regard. For example, I’d like such a community of philosophers to better found science with various generally accepted principles of metaphysics, epistemology, and axiology.

        The problem I see with using the “objective” term is not only that we exist in this world and thus are subjects of it, and so objectivity seems inherently false for us, but that we commonly seem to display biases. I don’t want to imply however that you’re creating problems with your desire to use the “objective” term more liberally Mike. I’m not convinced it’s needed though.

  14. Sounds like you have a mouse in your pocket Mike because you keep using the word “we” in your circular arguments…. You really should exercise some modesty and only speak for yourself.

    In agreement with Roger, Penrose rejects the notion that consciousness and/or mind is computational. His reason is simple; he calls it understanding. Consciousness has the capacity to understand whereas an algorithm no matter how complex will never be able to understand. Algorithms follow rules whereas the system of mind has the capacity to circumvent any rules that are devised no matter how complex those rules may be.

    In other words, consciousness can “know” what an algorithm can never know; and it is this “knowing” which sets consciousness apart from any other physical system in the universe. If we as a species do not further evolve to the point of where we actually understand ourselves or the universe we inhabit, then maybe as a species we are an algorithm imprinted on carbon which reduces us to philosophical zombies after all…..

    If one cannot “know” anything with certainty then clearly, that system must be a philosophical zombie; because at the end of the day, “we” all live in an objective reality. The evolutionary process marches on, with or without us.

    1. I don’t think science can tell us what to value, except in intermediate terms for goals related to other values, particularly longer term ones. Ultimately though, we hit things we value just because we value them, like happiness and survival, although science can give us insights into why we value them (evolved instincts, etc).

  15. I love this. It makes so much sense to me. I guess we still can never have a 100% truly objective viewpoint, but we can get closer by basically taking the average of multiple subjective points of view.
