Are there things that are knowable but not measurable?

It’s a mantra for many scientists, not to mention many business managers, that if you can’t measure it, it’s not real.  On the other hand, a lot of people, mostly non-scientists but occasionally humanistic scholars, including philosophers, have told me that not everything knowable is measurable.

But what exactly is a measurement?  My intuitive understanding of the term fits, more or less, with this Wikipedia definition:

Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events.[1][2]

There’s a sense that measurement is a precise thing, usually done with standard units, such as kilograms, meters, or currency denominations.  But Doug Hubbard argues in an interview with Julia Galef, as well as in his book How to Measure Anything, that measurement should be thought of as a reduction in uncertainty.  More precisely, he defines measurement as:

A quantitatively expressed reduction of uncertainty based on one or more observations.

Hubbard, Douglas W., How to Measure Anything: Finding the Value of Intangibles in Business (p. 31). Wiley. Kindle Edition.

The observation part is crucial.  Hubbard argues that, for anything we care about, there is a difference between what we’ll observe if that thing happens and what we’ll observe if it doesn’t.  Figure out this difference, define it carefully, and you have the basis to measure anything, at least anything knowable in this world.  The more the differences can be defined with observable intermediate stages, the more precise the measurement can be.

One caveat: just because it’s possible to measure anything knowable doesn’t mean it’s always practical or cost effective to do so.  Hubbard spends a lot of time in the early parts of his book discussing how to figure out the value of information, to decide whether the cost of measuring something is worth it.

In many cases, precise measurement may not be practical, but not all measurements must be precise in order to be useful.  Precision is always a matter of degree since we never get 100% accurate measurements, not even in the most sophisticated scientific experiments.  There’s always a margin of error.

Measuring some things may only be practical in a very coarse-grained manner, but if it reduces uncertainty, then it’s still a measurement.  If we have no idea what’s currently happening with something, then any observations that reduce that uncertainty count as measurements.  For example, if we have no idea what the life expectancy is in a certain locale, and we make observations that narrow the range to, say, 65-75 years, we may not have a very precise measurement, but we still have more than we started with.
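The life expectancy example can be made quantitative with a rule Hubbard gives in the book, the “Rule of Five”: for any population with a continuous distribution, the true median falls between the smallest and largest of just five random samples with probability 1 − 2 × (1/2)^5 = 93.75%.  A minimal Monte Carlo check of that rate (the Gaussian population below is an arbitrary stand-in; any continuous distribution gives the same result):

```python
import random

def rule_of_five_hit_rate(trials=100_000, median=70.0):
    """Estimate how often the true population median lands between the
    min and max of five random samples.  The theoretical rate is
    1 - 2 * 0.5**5 = 0.9375, regardless of the distribution chosen."""
    hits = 0
    for _ in range(trials):
        # Five draws from a population whose median we know in advance
        # (70.0 here; the normal distribution is just a stand-in).
        samples = [random.gauss(median, 8.0) for _ in range(5)]
        if min(samples) < median < max(samples):
            hits += 1
    return hits / trials
```

Running this returns a value very close to 0.9375: five observations already narrow the range a great deal, which is the sense in which even a small number of observations constitutes a measurement.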

Even in scenarios where only one observation is possible, the notorious sample of one, Hubbard points out that the probability of that one sample being representative of the population as a whole is 75%.  (This actually matches my intuitive sense of things, and will make me a little more confident next time I talk about extrapolating possible things about extraterrestrial life using only Earth life as a guide.)
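Hubbard derives that 75% figure from a thought experiment he calls the “urn of mystery”: if the majority proportion in the population is itself unknown (uniform on [0, 1]), the chance that one random draw matches the majority is the integral of max(p, 1 − p) over [0, 1], which is 0.75.  A quick simulation to sanity-check it (a sketch, not Hubbard’s code):

```python
import random

def single_sample_majority_rate(trials=200_000):
    """Probability that one random draw from an urn matches the urn's
    majority color, when the green fraction p is uniform on [0, 1].
    Analytically this is the integral of max(p, 1 - p), i.e. 0.75."""
    hits = 0
    for _ in range(trials):
        p = random.random()                  # unknown green fraction
        draw_is_green = random.random() < p  # the single observation
        majority_is_green = p > 0.5
        if draw_is_green == majority_is_green:
            hits += 1
    return hits / trials
```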

So, is Hubbard right?  Is everything measurable?  Or are there knowable things that can’t be measured?

One example I’ve often heard over the years is love.  You can’t measure, supposedly, whether person A loves person B.  But using Hubbard’s guidelines, is this true?  If A does love B, wouldn’t we expect their behavior toward B to be significantly different than if they didn’t?  Wouldn’t we expect A to want to spend a lot of time with B, to do them favors, to take care of them, etc?  Wouldn’t that behavior enable us to reduce the uncertainty from 50/50 (completely unknown) to knowing the answer with, say, an 80% probability?
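That move from 50/50 to 80% is just Bayes’ rule applied to an observation.  A minimal sketch (the likelihood numbers are invented for illustration, not taken from Hubbard): if the observed behavior is four times as likely when A loves B as when A doesn’t, a single observation takes an even prior to 0.8.

```python
def posterior(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: probability of a hypothesis after one observation.
    E.g. hypothesis = 'A loves B', observation = 'A seeks out B's company'."""
    joint_true = p_obs_if_true * prior
    joint_false = p_obs_if_false * (1 - prior)
    return joint_true / (joint_true + joint_false)

# With an even prior and behavior 4x likelier under love (made-up numbers):
# posterior(0.5, 0.8, 0.2) -> 0.8
```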

(When probabilities are mentioned in these types of discussions, there’s almost always somebody who says that the probabilities here can’t be scientifically ascertained.  This implies that probabilities are objective things.  But, while admitting that philosophies on this vary, Hubbard argues that probabilities are from the perspective of an observer.  Something that I might only be able to know with a 75% chance of being right, you may be able to know with 90% if you have access to more information than I do.)

Granted, it’s conceivable for A to love B without showing any external signs of it.  We can never know for sure what’s in A’s mind.  But remember that we’re talking about knowable things.  If A loves B and never gives any behavioral indication of it (including discussing it), is their love for B knowable by anybody but A?

Another example that’s often put forward is the value of experience for a typical job.  But if experience does add value, people with it should perform better than those without it in some observable manner.  If there are quantifiable measurements of how well someone is doing in a job (productivity, sales numbers, etc), the value of their experience should show up somewhere.

But what other examples might there be?  Are there ones that actually are impossible to find a conceivable measurement for?  Or are we only talking about measurements that are hopelessly impractical?  If so, does allowing for very imprecise measurement make it more approachable?

This entry was posted in Philosophy. Bookmark the permalink.

93 Responses to Are there things that are knowable but not measurable?

  1. J.S. Pailly says:

    I remember reading something about Newton, that a lot of his success was due to the fact that he lived in a time when mechanical clocks were becoming a whole lot more accurate. That allowed him to make very accurate measurements of things like how long it takes an object to fall to the ground, and then use those measurements to refine his mathematics. So I guess in a sense, Newton’s laws weren’t knowable until they became measurable.


  2. Very interesting question. The first thing that came to my head was that for something to be non-measurable it would have to be non-physical. I briefly landed on something like the 20,000th digit of pi, or the nth digit, such that you could not create a structure that could be measured to that accuracy using all the material in the known universe. But then I realized that non-measurability as a practical matter doesn’t count.

    So any other candidates?

    Maybe contradictions? How do you measure whether a man can be both married and a bachelor? How do you measure if someone is a philosophical zombie?

    What about infinities? What about algorithms (like the one by which we know the 20,000th digit of pi)?

    (May have to cogitate on this a bit)


    • Thanks for taking it on. On contradictions, I think the knowability aspect catches that one. (Of course, if we dispense with the knowability requirement, then all kinds of things become unmeasurable.)

      On algorithms, what in particular are you thinking about? Whether it works? Or whether two implementations are effectively the same algorithm?


  3. Hariod Brawn says:

    Thinking of time, then we can measure the past (events), though how do we measure the future (events)? Correct me if I’m wrong Mike, but doesn’t physics tell us that in some sense the future already exists? Maybe that same physics is itself a measurement of the future? Ask me one on sport.


    • On physics and the future already existing, as I understand it, it depends on which physicist you’re talking to, along with which interpretation of quantum mechanics you favor.

      But the goal of any scientific theory is to predict, given initial conditions, what will happen. So I suppose you could characterize physical laws as a measurement of the future. Of course, that assumes the theory in question isn’t about to encounter a black swan and be falsified.


  4. Mark Titus says:

    At least one Self-Aware Pattern believes itself knowable, but doesn’t know if it is measurable. I think it gives itself an unnecessary headache.


    • Which self aware pattern did you have in mind? 🙂

      Personally, I suspect anything knowable is also measurable, but I did this post to see if anyone could point out a case there that isn’t true.


      • Mark Titus says:

        Not sure you got my point. You and I are both “self aware patterns” (your definition of a person as I understand it), who think we know ourselves. But there is nothing we can measure that we know (or think we know) about ourselves. So that would be a case of knowing something that is not measurable.

        Am I wrong about what you mean by a “self aware pattern”?


        • Hmmm. Well, remember that we’re talking about knowable things. If I think I know something private (internal to my mind) about myself but don’t, is there any way for me to ever know that I don’t really know that private thing? (If you think this through, it shows that the reliability of introspection is an illusion.)

          On the other hand, if I think of myself as a person who is always right, but my track record doesn’t back that up, isn’t that ultimately something that can be measured?


          • Mark Titus says:

            I had in mind things like pain–a headache, for example. I know I have a headache–it is a “knowable thing,” even though it may be psychosomatic or a genuine migraine. The pain itself is not an illusion. (Of course it is not observable, which is why physicians resort to asking, “On a scale of 1 to 10, how would you rank your pain?”)

            What drew me to your website (from a comment you made on Aeon) was your apparent identification of a conscious person as a “self-aware pattern.” I liked that–an organism that is not just aware, but is aware of its awareness. It is that awareness of awareness that seems to imply the existence of non-observable, non-measurable things.

            (My first two comments were a bit clumsy. Sorry about that. This one, I hope, better expresses my illusion.)


    • I think you might have answered your own question when you mentioned the doctor asking a patient to rate the pain on a 1-10 scale. It’s not an objective measurement, but then we’re talking about a subjective thing, so a subjective measurement might be the best we can do. Someday a doctor may be able to look at a brain scan and objectively measure how much pain a patient is in.

      Glad you found your way here! I haven’t commented much lately on Aeon and miss those discussions. No worries at all on the wording. Language is often a limitation in these discussions. It requires that we all give each other at least some degree of interpretational charity.

      On awareness of our own awareness, I’m not confident that applies to all organisms. This type of awareness, this type of self awareness, requires metacognition, and scientific evidence for metacognition in animals outside of humans and a few other primate species hasn’t been found, at least not yet.

      But on being observable, I think it depends on how we define “observation”. Do we only include external sensory perceptions? What about our “inner” sense of ourselves? Or do any conscious perceptions count? If the latter, then you could describe our awareness of our own awareness as an observation, and careful observation as subjective measurement.


      • Mark Titus says:

        Re: “On awareness of our own awareness, I’m not confident that applies to all organisms.” I thought it was pretty clear I was referring only to conscious beings.

        Your first and last paragraph with its questions suggests to me that we are on page 100 of two different books. It would be fun to try to translate them for each other, but I think a blog is not the place for it.


  5. SelfAwarePatterns, “Personally, I suspect anything knowable is also measurable,”

    I am just curious: do you think that there are things which are unknowable? If yes, then any examples? Also, unknowable to whom? Do you think that what is unknowable to some subjects can be knowable to other kinds of subjects?


    • Are there things that are unknowable? Well, I think incoherent things are unknowable. There are coherent things that are currently unknowable, such as how far away the nearest extraterrestrial civilization is. There may be things that are forever unknowable, but I don’t see it as productive to ever assume any one item is in that category, primarily because many things that were once thought to be in that category, such as what the stars are composed of or what atoms are made of, later turned out to be things we could learn.

      We can never know what it’s like to be a bat, or a dog, or a cat. Assuming these creatures can know what it’s like to be themselves (something I’m less sure of than I used to be), they know something that is unknowable to us. And they can’t know what it’s like to be a human.


  6. @EdGibney says:

    I got into a long debate about this with my wife a few years ago about measuring the value of nature. I come from the engineer / MBA background, which does teach that you can measure everything, but what I came to understand while trying to resolve this was that the value of some things runs to infinity. In the case of the value of nature, the supply vs. demand curves that would determine its price make no sense when you get close to the left axis, where the supply is a unique and irreplaceable entity. So no, I no longer think that we can actually measure some things like the value of nature. There are no other yardsticks of comparison to use.


    • My reaction would be to wonder what is meant by “the value of nature”. It seems like a hopelessly amorphous question. It might be easier to measure the value of, say, an acre of forest, and what the loss of cutting it down might be, or a stretch of clean river, and what the cost might be of polluting it. Granted, in both cases, the number of side effects might be extremely difficult to track, although if we’re satisfied with very coarse-grained measurements, it seems like that could be overcome.

      Hubbard in his book discussed measuring environmental factors, but I haven’t made it to that section of the book.


    • Hi Ed,
      Isn’t it great to have a spouse who’s able to be supportive (and I know she is from reading your blog) and yet still be an independent thinker to reckon with? I wonder what the gist of her criticism was?

      My own criticism would be to say that you might have begged the question there, or started with the presumption that nature has infinite value. (By “nature” I believe you’re talking about ecosystems in general.) From there of course your supply vs demand curves will get crazy near the left axis. Infinity is a very useful mathematical term (which for example describes the division of one by zero), though it’s hard to say that it actually exists anywhere in the real world.

      From my own models life isn’t valuable in itself, but rather creates both positive and negative value for conscious subjects to experience. I’m actually worried that our ecosystem might harbor tremendously negative value, given the amazing severity of punishing sensations like pain. I don’t let this keep me up at night though — for me life is quite good!

      I’ve been thinking about you recently however as not only a naturalist in your sense of the word, but as a “causal naturalist” as well. I believe that it needs to become understood, not that causal naturalism is true (since we can’t know this with perfect certainty), but rather that this metaphysical position offers us our only potential to learn about reality anyway. Under a void of causality we can guess about what will happen next, or perhaps use our faith, though they aren’t the same as actually figuring things out. I brand this as a sort of antithesis to Pascal’s wager. I’d have scientists get more serious about causal naturalism, and thus shut down esteemed speculation regarding, for example, dualism and panpsychism.


      • @EdGibney says:

        Hi guys! I think I may have begged a question, but not the one Eric suggested. For nature (or anything really) to have quantifiable “value” you have to be able to define a unit of measure. I don’t think we have such a thing for the intrinsic worth of life in this universe. As Eric said, it could be negative. Anti-natalists (see David Benatar’s recent magazine articles) would certainly agree with that. I view life as positive for we the living, but yeah, maybe that’s just because I have a supportive yet intellectually challenging spouse. : )

        Speaking of that, she’s another example of an object whose value cannot be replaced. Traditional supply and demand curves start with the assumption that you are graphing manufactured widgets that can just be produced, or not, and in whatever quantity you want. But most things in life are not like that. As Mike said, you might be able to determine a price for an acre of land or a cord of wood that humans are willing to pay for such things, but even that doesn’t capture all the “value” that these things might otherwise have to the ongoing project of life. (That’s something we epistemically can’t know, by the way—the future.) Additionally, the only reason you might be able to price these things is because they are small units that are essentially the same as many other small things. You could give a relative price for “the Amazon rainforest” if there were 10 of them. Each one would be 1/10th of the whole lot. But as you destroyed them, each one would be worth more and more — 1/9th, 1/8th, 1/7th, etc. until you get to the undefined unity price of 1/1. And if you destroy that unique object, you get Eric’s definition of infinity – 1 over 0. No economists think this way though as far as I know. Otherwise we would price our commons this way and treat them much differently. We’re seeing this now, of course, as the commons are being destroyed and their price becomes infinite.
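Ed’s progression can be written down directly.  A toy sketch (my framing, not a claim about how economists actually price anything): with n interchangeable units remaining, each is 1/n of the whole, and the share attributed to the last unit diverges as n goes to 0.

```python
def unit_share(remaining_units):
    """Toy model of the progression above: each of n remaining
    interchangeable units is 1/n of the whole; the 'price' of the
    last unit diverges as the supply approaches zero."""
    if remaining_units == 0:
        return float("inf")  # 1/0: the unique, irreplaceable case
    return 1.0 / remaining_units

# 10 rainforests -> each is 0.1 of the whole; 1 left -> 1.0; 0 left -> inf
```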

        (Eric, I’ve been thinking of you too and feeling bad about not writing you back so I’m about to do so to continue the other discussion you raised.)


    • Ed,
      Yes in the end it gets down to a given definition for “value”, and we seem to take somewhat different approaches there. I base this entirely upon the punishing to rewarding sensations that sentient life feels. I think you’re okay with that, but are also part of a society that places ecological matters as the nexus, since without life there would be no sentience anyway. So it’s simply a different focus. But well done on the mathematical reduction to show that units of life head towards infinite value as quantities of it head towards zero! I don’t know if that tool was previously in your rhetorical toolbox, but it’s surely a keeper!

      I’m quite pleased with Douglas Hubbard’s position that even “ineffable” aspects of our existence, such as how good or bad someone feels, can be measured in one way or another. I don’t think Mike quite trusted me about that in past discussions. Actually Mike’s last post gave me an Eric Jonas and Konrad Kording position arguing that much of neuroscience’s atheoretical work isn’t providing sufficiently useful answers, and this permitted me to propose my own theory from which to potentially help. I can’t complain much around here.

      I did check out David Benatar and his anti-natalist position. I like that he’s basing value upon the same thing that I do, and certainly his observations that things seem skewed on the negative side — there are no pleasures that come anywhere close to the potential horrors of pain. But I think his position itself is quite precarious given how small it happens to be. How could an ideology stating “We shouldn’t have kids because life sucks” ever be comprehensive enough? I’m sure this professor is made out as a nutter, but to each his own. I knew that my wife and I would need a child for our own well being, and was confident that we’d be able to raise a far happier person than most. Now that our boy is 14, I’m pretty sure it was the right way to go. But given great diminishing marginal utility and my economist background, I couldn’t agree when she wanted more. I believe that I’m similarly dependent upon my wife as you are yours, but if I were to tell her that she’s irreplaceable, I suspect that she’d think “Wait a minute, what’s he been doing to feed me a line like that?”

      I have your new email and suspect that there are indeed some other productive things for us to discuss. I’ll also mention what I’ve told you in the past, since you brought it up. No worries about when (or if) you get back to me. I’m always here regardless.


      • @EdGibney says:

        I have not used that progression to infinite value before. Your comment that infinity is 1 over 0 is what spurred me to think of it, so thank you for that!


  7. SelfAwarePatterns,

    Perhaps there can not be a perfectly exact measurement of any physical object. For example, what is the exact height of a given chair? Whatever the best technological means currently employed for measuring this chair’s height, the result can be improved, and so on. Another point is that perhaps there is no exact height of that chair. I mean that the height of the chair is indeterminate. So it is not only a problem of measurement alone. It is a property of physical objects not only that they can not be measured perfectly, but that they do not have any perfectly exact measurements.

    Perhaps I have not expressed it clearly, but tell me what you think. Can you wrap your head around the idea that physical objects do not have any perfectly exact measurements? It is a question of the nature of physical reality.


    • Certainly, epistemically we can never do an absolutely accurate measurement. The best we can do is exert extra effort to narrow the range down ever tighter and increase our confidence in the range to higher and higher levels without ever being able to reach 100% confidence.

      Because of that, we can never establish with 100% confidence that there is one exact height, although I find it a productive model (theory), at least at macroscopic scales, that there is one. More precisely, the theory is that there’s one exact height at a certain point in time. The chair is always gaining and losing atoms, so its exact objective height (if there is one) is always varying over time.

      Of course, at the quantum scale, it comes down to what your preferred interpretation is.


  8. “although I find it a productive model (theory)…”

    If a theory is productive then does it necessarily mean that it is true?


  9. Michael says:

    Hi Mike,

    Before I begin, just want to say it is nice to see you back in the blogging saddle…

    I’ve thought about this piece a little and I’m not sure you’ve actually asked a question, in that the post appears to define knowledge as what is objective and measurable. It’s a perfectly reasonable definition, but I think it precludes the question asked in the post. As a quick example, if I were to say I knew something, and you didn’t think it was knowable, you might ask me to prove it. I think I would then have to produce something measurable to give you a satisfactory answer, right? Something you could verify yourself. This is why I’m not sure you’ve actually asked a question that isn’t already answered by the definitions of the terms involved.

    The post and some of the commentary got me thinking about other questions/concepts that may be interesting. One is that measurement doesn’t always lead to knowledge. We believe, for instance, that Newton was incorrect about the ultimate nature of gravity. He made a great many observations and improved considerably our predictive powers, but ultimately he was incorrect. We may not have a complete knowledge of gravity ourselves. So, when does a pretty good understanding become knowledge? Does knowledge mean the ability to predict infallibly what will happen next? Can we still say we have knowledge if we’re right most of the time? Does the sort of increase in predictive capability that is typically generated by repeated measurements actually constitute knowledge?

    This sort of begs a question on what is knowable. It strikes me that we can know a great deal about what happens, or happened, but when it comes to understanding motivations it is a lot more difficult. Using your example of A loving B, if I were from an alien civilization I may interpret A’s physical actions as meaning all sorts of things and have no clue that A loved B. So if the relation of physical actions to motivations and meanings requires the context of culture, and perhaps even of personal history, and is thus fairly relative, is it knowledge?

    Similarly, people who are in love may have a hell of a time answering the question: why do you love me? And why do I love you? There’s a point at which even trying to answer the question belies the possibility of doing so. None of the reasons we can muster feel quite complete. There’s an intrinsic vulnerability to a love that is predicated on transient parameters of love-ability, and we sense it, and resist at some level describing love in exclusively transactional terms for that very reason. And yet those are the only terms that may be measured. I think in some sense the most authentic answer we can offer is that I love you because I know you. But what does it mean to “know” a person…? Is it the ability to predict their behavior? The ability to “understand” their behavior? It seems a difficult question to answer.

    Michael


  10. Hi Michael,
    Thanks! I hope I can keep up a somewhat regular posting frequency.

    Good point about the definition of knowledge and measurement. The classic definition of knowledge is justified true belief. But what do we accept as justification? If we require measurement, then I agree that by definition the answer to the question is set. Of course, some people will insist that there are other ways of knowing, although I don’t think I’ve ever heard a satisfactory answer to what those other ways of knowing might be. My question could be seen as another facet of the question of whether there are in fact other ways.

    Your question about whether infallible prediction is needed before we can consider ourselves to know something goes to a point I made on the previous post, that it’s not really productive to consider knowledge a binary trait, but a spectrum of how reliable a particular belief might be. So we can say that Newton knew how gravity worked at a certain level of reliability, but one that failed in special cases, cases that had to be resolved, some by Laplace, but most by Einstein. But Einstein’s understanding doesn’t help us much with gravity on the quantum scale, yet it seems perverse to say that Einstein didn’t know how gravity worked.

    If we hold out for a strictly infallible definition of knowledge, then knowledge is effectively impossible, except perhaps in tautological cases, since we never have absolute access to perfect information. There is always some degree of uncertainty, some margin of error, some imprecision in our model of anything.

    Your question about love actually gets to something I was uneasy about when using it as an example. The concept of love is an ambiguous one. It can refer to erotic love, friendly love, sibling love, love of child or parent, etc. While there are commonalities, each has its own set of expected behaviors. Love is a composite concept, composed of a shifting set of dispositions, which might indeed confuse an alien if they don’t understand our biology and culture.

    On nailing down why we love someone, I think it’s difficult for a couple of reasons. One is that many of our reasons for why we feel certain ways toward people happen at an unconscious level, a level we can’t always consciously introspect to. The other, as I once learned the hard way, is that people often don’t like to even think there are reasons, preferring that they remain mysterious (and therefore possibly magical and eternal).


  11. SelfAwarePatterns, “Well, that depends on your theory of truth”

    A theory is true if the reality is as the theory says it is.


    • That’s the correspondence theory. Metaphysically it’s true, but it’s really just a restatement of the definition. But how do we know if “the reality is as the theory says it is”? Observation and measurement? We discussed the limitations of that above. What justifies calling a proposition “true”?


  12. ” But how do we know if “the reality is as the theory says it is”?”

    I was answering your question about what my definition of truth is.
    What the definition of truth is, and how one can know in any particular case what is true, are two different things. So trying to find out whether a given theory is true or untrue may work differently depending on the particular case.
    But one method which can perhaps be applied in all cases is this: if a theory is self-contradictory, then it is definitely not true.


  13. Just to be clear, are you saying that measuring is knowing? Or that the things we can know also happen to be measurable?

    I might agree that everything CAN be measured, but I’m not sure that this is always meaningful in the sense of reducing uncertainty. It’s possible to mathematize things without clarifying them, or to do so in a way that mucks things up. Just because it’s possible to assign a numeric value to something doesn’t mean that that numeric value actually captures anything important or meaningful about what it’s supposed to clarify. I worry we could end up tricking ourselves into a false sense of knowing.

    The pain-o-meter was mentioned earlier and this might give you a sense of what I’m getting at. Time and again I hear people scoff at or make fun of that request. Everyone understands it to be a form of communication, and I don’t think anyone objects to that, but what’s so goofy about the whole thing is the blatant attempt to turn something fundamentally un-mathematical into something clinical that only sounds objective. Though we understand that it helps for the doctor to know how bad we feel, can’t he simply interact with us as normal human beings? Or does he need an algorithm to figure out what ought to come naturally? We know this pain-o-meter is not really as precise as it tries to be, and its attempt to seem scientific is pitiful and laugh-worthy. Or just plain confusing. I’ve seen it put doctors in the position of explaining that the chart isn’t that meaningful, but please assign a number anyway.

    Science-y stuff sometimes stands in for real science just by virtue of being mathematized. It’s fairly easy to tell them apart when we’re looking at a ridiculous smileyface-frownyface chart, but sometimes the absurdity isn’t so transparent. Statistics, for example, are notoriously misleading in this regard. They give the impression of factual knowledge even when they’re being improperly used. More often I hear about studies that don’t differentiate between causation and correlation, or they’re related in such a way that they make for stupendous headlines: “Women who wear clothing or footwear increase their risk of getting raped by 4%.”

    In other words, knowledge is not the same as the mere assignment of numerical value. There has to be something guiding that mathematization, some knowing that stands outside of it, such as an understanding of relevance.

    On non-scientific knowledge, I think you gave an answer here:

    “Granted, it’s conceivable for A to love B without showing any external signs of it. We can never know for sure what’s in A’s mind. But remember that we’re talking about knowable things. If A loves B and never gives any behavioral indication of it (including discussing it), is their love for B knowable by anybody but A?”

    So, A knows he loves B. That’s knowing! It may just be one person, but it counts.
    And when B asks how much he loves her, he can stick out his arms as wide as they go and say “This much!”

    Inherent in the joke is the idea that love is not really something that makes sense to quantify, even if it can be quantified. There’s even a sense that quantifying it will detract from a true knowledge of it or mislead us by presenting it as something it is not, in this case, a mere commodity.

    Typical philosopher’s answer for you, huh? 😉

    Liked by 1 person

    • “Just to be clear, are you saying that measuring is knowing? Or that the things we can know also happened to be measurable?”

      You could say I’m asking if there’s a distinction. Or looking for examples that might show they’re not.

      The pain-o-meter is interesting. We all know that the number I give my doctor is utterly subjective. My 5 might be your 3, etc. Still, if we recorded everyone’s answers and correlated the averages with specific ailments, we’d almost certainly see reliable trends, trends that could be replicated in multiple studies.
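      If it helps to make that concrete, here’s a toy simulation of the averaging idea (the ailments and all the numbers are invented purely for illustration, not drawn from any real study):

```python
# A toy sketch: individual pain ratings are noisy and subjective,
# but per-ailment averages can still show reliable trends.
# All severity levels and noise values here are made up.
import random
import statistics

random.seed(0)

# Hypothetical "true" average severity for each ailment (0-10 scale).
true_severity = {"sprained ankle": 4, "kidney stone": 8, "headache": 3}

def rating(true_level):
    """One patient's subjective report: true level plus personal bias/noise."""
    personal_bias = random.gauss(0, 2)   # my 5 might be your 3
    return min(10, max(0, true_level + personal_bias))

# Collect 500 simulated reports per ailment and average them.
averages = {
    ailment: statistics.mean(rating(level) for _ in range(500))
    for ailment, level in true_severity.items()
}

# Despite the subjective noise, the ordering of the averages recovers
# the ordering of the underlying severities.
assert averages["kidney stone"] > averages["sprained ankle"] > averages["headache"]
```

      The point of the sketch: even though my 5 might be your 3, the individual noise tends to wash out of the group averages, which is the sense in which utterly subjective ratings can still carry replicable, measurable information.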

      (The recent news stories about man-flu seem like an interesting case in point. Are most men just babies? Or do we really suffer more than women when we have viral infections?
      My own suspicion is that women in general suffer a lot more than men, and so are more psychologically prepared to soldier through the flu or whatever.)

      Is this scientific? Strictly speaking, it’s not objective, but could be considered inter-subjective maybe? A lot of social science is actually conducted using similar techniques. A lot of people say it isn’t scientific, and see it as proof that the social sciences aren’t really science. For me, the key is whether it produces reliable information in the sense of being rigorously replicable, and whether we keep in mind that the results aren’t going to necessarily pertain to any one individual case. (Although they will in most cases.)

      “More often I hear about studies that don’t differentiate between causation and correlation”

      The problem here is that the only way we ever know about causation is to observe consistent correlation isolated to a single variable. We never actually observe causation, ever. We can only infer it, and any inferred causation might actually only be correlation. That said, I agree that the media often does a terrible job of calling attention to studies that haven’t really even attempted to infer causation.
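      To illustrate the point about isolating a single variable, here’s a toy simulation (all the variables and numbers are invented) in which a confounder produces a strong correlation with no causation behind it, while randomizing the variable of interest makes the spurious “effect” disappear:

```python
# A toy illustration: we never observe causation directly; we infer it by
# varying one thing while holding everything else fixed. Here "exercise"
# has NO causal effect on "health"; a confounder (age) drives both, so the
# observational correlation is spurious -- but randomly assigning exercise
# washes the confounder out. All quantities are made up.
import random

random.seed(1)

def observe():
    age = random.random()                               # 0 = young, 1 = old
    exercise = 1 if random.random() < (1 - age) else 0  # younger people exercise more
    health = (1 - age) + random.gauss(0, 0.1)           # health depends only on age
    return exercise, health

def randomize():
    age = random.random()
    exercise = random.randint(0, 1)                     # assigned by coin flip
    health = (1 - age) + random.gauss(0, 0.1)           # same causal structure
    return exercise, health

def gap(samples):
    """Mean health of exercisers minus non-exercisers."""
    yes = [h for e, h in samples if e == 1]
    no = [h for e, h in samples if e == 0]
    return sum(yes) / len(yes) - sum(no) / len(no)

observational = gap([observe() for _ in range(5000)])
experimental = gap([randomize() for _ in range(5000)])

# Observational data shows a large spurious "effect" of exercise;
# the randomized version shows roughly none.
assert observational > 0.1
assert abs(experimental) < 0.05
```

      The design choice here is the whole point: the consistent correlation only licenses a causal inference once the variable has been isolated from everything else that could be driving it.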

      “There’s even a sense that quantifying it will detract from a true knowledge of it or mislead us by presenting it as something it is not, in this case, a mere commodity.”

      I think my response here is similar to the one I gave Michael above. Our own sense of love is often assembled unconsciously, and the details of that assembly can’t be introspected, no matter how hard we try. On top of that, our culture has memes about love that discourage analysis of it. Since I typically analyze everything :-), and deeply believe there’s value in that stance, I’m deeply suspicious of those memes.

      No worries on doing a philosopher’s response? I pretty much asked a philosophical question, and those are usually the most interesting interactions!

      Liked by 1 person

      • Sorry if I went over the same ground you’ve already covered in your responses. I have to confess, I didn’t read them all, but I did just go back to read your exchange with Michael.

        I’ll try to frame this another way. Maybe it’s possible to measure all things, but measuring them doesn’t always give us complete knowledge of them. In other words, maybe some things don’t warrant measurement, or the measuring of them gives an incomplete picture that masquerades as the whole story.

        William James does a good analysis of this subject in respect to religious experience. Even if it’s true that religious experience grows out of or has as its origin some sort of physical malady or imbalanced constitution (and he thinks it often does), that doesn’t negate the meaning of the experience or reduce its value or make it less real. We can’t say that evangelicalism is “nothing but” a neurosis or low IQ. Genius often originates in mental illness, he says, but identifying genius as “nothing but” illness doesn’t make sense. We see the genius and the illness as two separate things in the same person. For him, this distinction makes perfect sense from a pragmatic view, but I won’t get into pragmatism. 🙂
        He chooses the example of genius because we inherently see the value in it. But it’s not so clear with other kinds of examples, such as love or religion, which he says are analogous.

        So what I’m trying to say is, if we try to mathematize such experiences then what we’re really studying in most cases is the origins or physical causes of them. This is not complete knowledge of them, and that’s fine so long as we don’t try to claim it is.

        Are there other kinds of knowing? Is quantitative knowledge the only one that…um…counts? 😉

        I think it’s too narrow to say that there exists no other kind of knowledge besides the quantitative kind, but there wouldn’t be any example I could point to that would change your mind, assuming your epistemological position is set. I think this is what Michael was getting at in calling this inquiry ‘begging the question’, maybe? After all, I could give you example after example of non-quantitative knowledge, and you could always say, “But that’s not knowledge.” Of course, you made it clear in your reply to him that your position isn’t set, which is why you wrote the post. But in any case the only thing I can do is appeal to your intuition to make a strictly quantitative view of knowledge seem too restrictive. (‘Intuition’ in the philosophical sense, not mere gut feeling…I know you know what I mean, but just in case someone else is reading this…)

        So here are some of the usual appeals. Do any of these count as knowledge?:
        Basic logic? Other kinds of analysis? Literary, for instance? Artistic knowledge? The connoisseur’s? Aesthetics? Morality? Law? Philosophical knowledge—Philosophy of math? Philosophy of science? Of logic? What about savoir-faire? Emotional understanding?

        And does boiling these down to mathematical versions of them really account for them satisfactorily?

        You made a good point about causality. What are the implications of causality being itself unobservable? Can math tell us whether Hume or Kant was right? In other words, can quantification solve the problem of whether causality is really nothing at all or whether it’s a category of the mind that glues together two events in such a way that makes experience, such as it is, possible?

        Liked by 1 person

        • No worries at all on not reading the rest of the thread. Sorry if I implied I thought you should. I rarely read entire threads myself. Who has the time?

          Definitely we have to be careful about what we’re measuring. One thing Hubbard warns against is measuring something because it happens to be easy to measure, rather than because there is a high value to the information. (Although that assumes that we know what information has value and what doesn’t. Often measuring lots of stuff because it’s easy reveals unforeseen relationships.)

          On religious experiences, love, etc, I understand where you’re coming from. But I suspect people make more of these experiences than what is actually there. Our reaction to those things is often very culturally specific. And the notions associated with those reactions often seem to lack coherence. So, maybe there’s something there that can’t be measured, or maybe there’s less there than we think. (Or maybe I’m just a hopeless nihilist 🙂 )

          I think logic does count as knowledge. But I’ve made a career out of converting logic into computation. For the other examples (thank you for listing them), I think it would depend on what specifically about them we were discussing. I don’t think aesthetics in and of itself is knowledge, although I could see understanding what others will find aesthetically pleasing to be knowledge. I think savoir-faire is knowledge, and the amount of it in a particular person does seem measurable, at least coarsely.

          I do think the knowable aspects of these things could, in principle, be reduced to mathematics, to quantities. That said, I’m not arguing that measuring many of these things in any precise manner is practical. And I do think any practical measurements would likely have the limitations you describe.

          On causality, Hume, and Kant, good question. I’m not sure. I have to admit that I haven’t read either at length. To the extent their views are consistent with or contradict observations, I think they can be measured. But remember that something must be knowable to be measurable. Ultimately the metaphysics of causality may be unknowable. (Kant’s notion that we have innate assumptions about causes is actually backed up by research on babies, who do have expectations about causal effects, although that appears to be modifiable through learning.)

          Like

          • “On religious experiences, love, etc, …I suspect people make more of these experiences than what is actually there.”

            I agree. I think what people make of their experience is often much more than what was really there, which is why I think phenomenology is useful. That said, what people make of their own experience has to be taken into account too.

            “Our reaction to those things is often very culturally specific. And the notions associated with those reactions often seem to lack coherence. So, maybe there’s something there that can’t be measured, or maybe there’s less there than we think. (Or maybe I’m just a hopeless nihilist🙂)”

            On cultural relativism, I think you’re right. Some of these notions about such experiences might even be relative to the individual. But that only makes me want to get beyond our reactions (theology, memes, etc.) to get at what underlies them. There’s something fascinating about the power of religious experience in that it strikes the person experiencing it as Truth, regardless of how logical or rational their notions about the experience are. Maybe irrationality plays a fundamental role. What compounds this is that what people say about their own experience can be incoherent too, as you say. But I still think that these are possible to know, at least to some degree, and a distinction should be made that allows for different ways of knowing. After all, you can’t measure something like that without first knowing what it is that you’re measuring. Unless you mean such things are not worth looking into at all?

            On logic, there you have a kind of knowledge that’s so fundamental it can’t be done away with if you want to have knowledge as measurement, but it isn’t itself measurement.

            And in quantitative knowledge there might need to be more than logic at play since we need to know what’s relevant in order to interpret the numbers, as well as knowledge of what deserves to be measured.

            “But remember that something must be knowable to be measurable. Ultimately the metaphysics of causality may be unknowable.”

            I’m confused here. Are measurement and knowledge the same? Measuring is knowing, knowing is measuring?

            On Hume and Kant, I think from a pragmatic POV we’d have to go with Kant (assuming we have to choose between the two). Hume really just threw a giant monkey wrench into things by making causality a mere human habit of thought. If we buy into Hume’s take on it, causality isn’t inherent in things it’s supposed to link, and therefore science is basically useless. More than that…we don’t really know if the sun will rise tomorrow. Kant’s the one who tried to fix the problem by showing that causality isn’t just a habit. Though it is for him something “in the mind”, it’s nevertheless fundamental to experience—necessary. Science is possible once again. On the other hand, what science discovers is never knowledge of the world in itself, since that is forever inaccessible to us, according to him. So we can’t know the world in itself, but we shouldn’t say we can’t know anything beyond our own individual experience of it (in terms of perceptions, sensations) or that reason is useless. Extreme skepticism is unwarranted since causality is common to all human experience.

            In other words, Kant believed that causality IS knowable. That’s the main thesis of his Critique of Pure Reason. And knowing that causality is the condition that makes experience possible gives credibility to the necessity that we find in causality. Without this knowledge, causality is an ineffective glue and science is nothing more than an elaborate habit of thought.

            Like

    • ” Unless you mean such things are not worth looking into at all?”

      I definitely think the psychology of those things (religious experiences, etc.) is worth looking into. And I think, to the degree they have effects we care about, that psychology is measurable. Although again, from a practical perspective, it might only be at a very coarse-grained level. What’s not measurable, and what I question classifying as knowledge, are the metaphysical assertions that arise from those experiences.

      “On logic, there you have a kind of knowledge that’s so fundamental it can’t be done away with if you want to have knowledge as measurement, but it isn’t itself measurement.”

      I can see this point. I guess I think of logic as more of a capability than knowledge per se. Innate logic seems like the way we think. (George Boole actually referred to his development of boolean logic as “the laws of thought”.) I know we can learn logic, but the very act of learning it is experiential, something grounded in observation.

      “I’m confused here. Is measurement and knowledge the same? Measuring is knowing, knowing is measuring?”

      Maybe? Hubbard’s definition of measurement is the change in observable effects. Can we know something without ever observing things about it? We can certainly have innate impulses about certain things, but given that many of those mislead us, it feels like a stretch to call those instinctive intuitions knowledge. Tautologies might be something we can know without observation, but even there, for us to know about the tautology, don’t we still have to have experiential knowledge of its constituents?

      I guess I’m looking for that incontrovertible example of knowledge that can’t be measured.

      Thanks for the Hume and Kant info. I didn’t realize that Hume had gone that reductionist. (I have heard about the not knowing the sun will rise thing, but I think that was him making a point about the limitations of induction, the black swans thing.) I think Kant was right that causality is an inherent aspect of nature, not just something in our minds. Although interestingly, I recall tests showing that crows, one of the more intelligent animals, have no notion of causality. If causality is something that’s only in our minds, it seems a very useful thing to have, and the very fact of its usefulness seems to imply that there’s something “out there” it’s related to.

      In any case, if we define knowledge as reliable belief, then I agree that Kant was right, causes are knowable, since believing in them leads to reliable predictions.

      Liked by 1 person

      • Agreed on religious metaphysical assertions. I guess that’s what I meant by “theology”—I think it gets in the way of our understanding of what’s really going on in religious experience. We (you and I) might want to dismiss religion on that basis, but I’ve heard that the experience is not necessarily related to those metaphysical claims. Those claims sometimes come in the aftermath of it, as a way of making sense of it.

        “I guess I think of logic as more of a capability than knowledge per se.”

        That makes sense. In a way, it is a capability, but as you point out, it’s also learnable in some sense of the word. But because of the way it’s learned, I’d say that logic is an intuited form of knowledge. It’s true that we rely on visual aids and charts like the square of opposition to organize our thoughts (if this is the sort of thing you were talking about in the learning of logic being “grounded in observation”), but I’m not sure that visual aids are all there is to it. All I can say is that in my experience of learning basic logic in a class, there was a sense of truth that came as I grasped these formal relationships. I didn’t feel the need to look anything up, and that wouldn’t have done any good anyway. It was all right there in my head, waiting to be elucidated.

        I guess some people feel this way about math? I wouldn’t know about that. 🙂

        “Hubbard’s definition of measurement is the change in observable effects. Can we know something without ever observing things about it?”

        This sounds pretty broad. I can’t imagine not observing something and calling that knowledge. But I’m assuming a broad sense of “observe”, not limited to physical objects.

        “Tautologies might be something we can know without observation, but even there, for us to know about the tautology, don’t we still have to have experiential knowledge of its constituents?”

        I’m not sure what you mean. Does knowing A=A require experience? Maybe it requires knowledge of symbols, but the idea isn’t tied to those.

        “I guess I’m looking for that incontrovertible example of knowledge that can’t be measured.”

        “Knowledge is measurement” can’t be measured. 🙂 (You had to see this one coming.) But really, it’s philosophy. It’s a judgement about measurement that’s not itself measurement.

        On Hume and Kant, don’t take me too seriously on my little pontification there. That was off the top of my head, and it’s been a very long time since I read either. Even longer since I read Hume. I remember him as a serious skeptic, but maybe I misunderstood him.

        That’s a very strange thing to say about crows! How did they do the experiment? I’m a bit incredulous. 🙂

        I’m working on what I call “bored games” with Geordie. There are little hidden compartments accessed by levers that he has to push and pull to get at the treats inside. Sliding drawers that only slide after he pulls out a cone. It’s pretty amazing to watch him learn. The funny thing is that I usually have to get down on all fours and use my teeth to demonstrate how to do things. He gets the idea better when I mimic his anatomy. This latest game is level 3, a doozy for him, but I imagine the crows would figure out the order of operation in no time. How can they have no notion of causality?

        Liked by 1 person

        • “Does knowing A=A require experience? Maybe it requires knowledge of symbols, but the idea isn’t tied to those.”

          Does the idea that something is equal to itself require experience? Interesting. I don’t know. It isn’t anything I remember thinking about as a child (to the extent I can remember my childhood at this point). But it does seem like the idea that bachelors are single requires exposure to the concept of “bachelor” and “single”.

          “”Knowledge is measurement” can’t be measured. 🙂 (You had to see this one coming.) But really, it’s philosophy. It’s a judgement about measurement that’s not itself measurement.”

          Have to say I didn’t see it coming 🙂 But is “knowledge is measurement”…knowledge? I agree it’s philosophy. But philosophy to me is often more about working assumptions than knowledge. A naturalist may feel that they know naturalism is true, but in my mind it’s a working assumption.

          Of course, you could argue that all knowledge consists of working assumptions, but I guess it’s a matter of where these things sit on the reliability spectrum. Philosophical positions seem far less certain than E=mc^2. But if the working assumption is more or less reliable, doesn’t that degree of reliability establish a measurement toehold? If no reliability can be established for the working assumption (e.g., a position on the afterlife), then no measurement seems possible, but again calling that knowledge seems dubious.

          “That’s a very strange thing to say about crows! How did they do the experiment? I’m a bit incredulous. :)”

          Here’s a link to an article on the study: http://phenomena.nationalgeographic.com/2014/06/10/intelligent-crows-flunk-causality-test-but-babies-pass/
          Interestingly, while trying to find that link, I came across numerous other studies showing that crows do understand cause and effect, so not sure what’s going on here. It might just be particular types of causation that they don’t understand.

          Interesting that Geordie picks things up quicker when you demonstrate like a dog. That reminds me of the studies that show that humans learn some things, such as faces, much easier than others. Maybe each species’ brain is optimized to learn from other members of that species.

          Liked by 1 person

          • On A=A, bachelors… I’m not sure we need to remember childhood to answer the question of whether knowledge of this sort derives from experience, though I may be presupposing my own point of view. I think of both examples as a priori analytic knowledge since the bachelor example is a definition in which the concept of single is necessarily tied to that of bachelor. The predicate is “contained” in the subject:
            https://www.britannica.com/topic/synthetic-a-priori-proposition

            On philosophy and knowledge…I get your point. Philosophy seems to be about ripping the rug out from under everyone’s feet (sometimes even your own), whereas science is about working within a community and building upon ages of knowledge. Of course, this is just a common view of philosophy—philosophers are also riding on the coattails of history, but this doesn’t seem as evident…sometimes philosophers don’t want it to be. My husband and I were watching a local show on the Phoenix Mars Mission and he noted that each person working on it was an expert in one thing (maybe one of a handful in the world). On the whole it was like a symphony. This kind of community is not the case in philosophy, at least not to the same extent, and I suspect that has something to do with the nature of the kind of knowledge philosophers seek—often it’s the broad, foundational sort. Even when it tries to establish the impossibility of foundational knowledge, that itself is foundational in a negative sense. I can see why people think that philosophy deals in working assumptions, but—here’s the response you anticipated—so does science. I’d go further and say that science necessarily deals in working assumptions, by the nature of its scope and project. And if so, science requires a philosophical foundation in the grand scheme. But you’re probably already aware of that idea, maybe from debates surrounding Neil deGrasse Tyson’s comments about philosophy, which some said were nothing more than thoughtless philosophizing (and I agree with them). But anyway…

            “…if the working assumption is more or less reliable, doesn’t that degree of reliability establish a measurement toehold? If no reliability can be established for the working assumption (e.g., a position on the afterlife), then no measurement seems possible, but again calling that knowledge seems dubious.”

            First of all it seems to me that “reliability” is a philosophical position, but assuming reliability as the criterion, I don’t see why measurement is necessary. I can see how measurement might be considered a way of establishing reliability, but not the other way around. Maybe I didn’t get your point?

            On crow causality…a fascinating topic. The study was interesting and I’m glad they made the distinction between wild crows and those who understand the concept of puzzles. I’ve given that distinction some thought because I’m almost 100% sure that Geordie understands the concept of a game or puzzle. What I don’t know is whether that concept came from being taught what a game is (and how does that come about?) or whether it came from a natural inclination to play. Maybe both?

            I wonder if the wild crows simply weren’t able to see the cause and effect of the cylinder trick, since the first time around they didn’t actually figure anything out; they simply ate the meat attached to the block, then found some more meat and ate that too. In other words, maybe they were distracted by eating and weren’t paying attention. I know I get that way when I’m eating.

            “Maybe each species’ brain is optimized to learn from other members of that species.”

            That, plus anatomy. He knows I’m capable of doing a lot more than he is and is used to relying on me, so I think he gets frustrated when I use my fingers and expect him to accomplish the same thing. And when he’s frustrated, I don’t think he feels very creative, so he doesn’t realize there are other ways of accomplishing the same task. But when I get on all fours, I think he sees that I’m mimicking him and takes my behavior as an example of what he should do. Pointing at my nose and then at his (or whatever body part) does seem to factor in…at least, he watches me and behaves as if he knows what I mean when I point at my nose and say “use your nose.” It sounds a lot more sophisticated than it seems at the time. All of this takes place in a matter of seconds.

            But what I find interesting about all this is his desire for instruction. He actually stands back and watches when he doesn’t understand something. Plus I see him watching other dogs very carefully, and he picks up on their behavior. He never used to scratch his hind legs in the dirt, but after seeing another dog do it, now he does it all the time. I actually saw him watching the other dog and predicted that he was trying to learn from her. He gets a certain look about him, a heightened level of attention that’s unmistakable. His girlfriend, Bean, apparently learned from another dog to be terrified of flies. But I don’t think they learn just any sort of behavior from each other…I suspect they have to like it for some reason.

            I think you’re right that it’s easier for members of the same species to learn from each other. My guess (at least with dogs) is that they know they’re the same and so they find grounds for comparison. Geordie’s always sizing up other dogs, and not just by smell. He gets competitive with them and he’ll do things that they do even if he wouldn’t care to do those things by himself. I wonder whether it might be easier to train a dog by using an already-trained dog to demonstrate?

            Anyway, I could go on and on about Geordie and what I think he knows, so I’ll stop now. 🙂

            Liked by 1 person

          • “Of course, this is just a common view of philosophy—”

            LOLS, I think one of the defining aspects of philosophy is that there isn’t a common view on most things. In the survey on what philosophers believe, the only question that had near consensus was that the external world exists (and even that had a substantial minority in disagreement).

            “Even when it tries to establish the impossibility of foundational knowledge, that itself is foundational in a negative sense.”

            A while back, someone pointed out to me that most of the philosophers I like: Epicurus, Hume, James, Russell, Dennett, could be considered “anti-philosophers” since they spend so much of their time shooting down the ideas of other philosophers. But I think you’re right that even this stance is itself philosophy.

            “I can see why people think that philosophy deals in working assumptions, but—here’s the response you anticipated—so does science. ”

            Certainly, although science’s working assumptions are more likely to be tested. People often assume that the scientific method is some philosophically worked out thing. While philosophers like Bacon certainly recognized and described what was happening at various milestones, the scientific method is itself the result of science. Science iterates on itself recursively and pragmatically, sticking with what works in pursuit of reliable predictive beliefs and dispensing with what doesn’t.

            “First of all it seems to me that “reliability” is a philosophical position, but assuming reliability as the criterion, I don’t see why measurement is necessary. I can see how measurement might be considered a way of establishing reliability, but not the other way around. Maybe I didn’t get your point?”

            And I’m not sure if I’m getting yours here. Are you making a distinction between establishing reliability (which I do think requires measurement) and valuing it? If so, are you saying that choosing to value reliability is a philosophy? I suppose that would be true, although I’d be skeptical of the philosophical field attempting to take credit for people holding that value. In other words, it’s not like it had to be argued for at some point. (Although maybe you’ll point out a history where it did have to be argued for. If so, I’ll learn something new!)

            Good questions on the crows. I’m not sure. Particularly after seeing all those conflicting results.

            “Pointing at my nose and then at his (or whatever body part) does seem to factor in…at least, he watches me and behaves as if he knows what I mean when I point at my nose and say “use your nose.””

            That’s interesting with Geordie. It seemed to me, with my dogs, that they never understood pointing. I remember pointing at something, and them just looking at my finger, never understanding that their gaze should follow the line of my finger to some other point in space. But maybe the fact that you’re putting your finger right at the object of interest makes a difference?

            Dogs seem to be very eager to learn patterns. I remember when I used to train my dogs, that establishing the pattern was the secret. For example, if we wanted them to only mess outside, the trick was to take them outside after eating and keep them out there until right after they had done their business. After a few times, they picked up how it worked and knew when they needed to be let out.

            Geordie learning from other dogs seems like an exhibit of dog culture. It seems all social species have some level of culture, shared practices they pick up from each other.

            Geordie has a girlfriend? Is it serious? Are we going to have little Geordies (or Georgettes) running around? 🙂

            Liked by 1 person

          • “If so, are you saying that choosing to value reliability is a philosophy?”

            Yes, in a sense. But I totally agree with you that philosophy is not responsible for making reliability something that people value. Common sense plays a much larger role here. On the other hand, the search for first principles was a search for reliability (certain truth, foundational, deductive) and that search doesn’t seem as popular anymore.

            That’s interesting that your dogs didn’t seem to get pointing. I think it depends on their attention level in that moment as well as the proximity of the object, as you suggested. It’s hard to say, though. I assumed all dogs understood pointing, at least when they’re motivated. The only time Geordie has a problem with it is when the object is far away (which does make the pointing ambiguous) or when he’s distracted. He’s usually pretty motivated to understand me when I’m talking to him, so that’s not a big factor for him. Also, it’s worth noting that I try to make sure he can see what’s going on. He’s very low to the ground, so if I’m standing up and pointing, it’s understandable that he might not get it. But when I’m on the ground with him, he does.

            So true about patterns. For the longest time I couldn’t get him to make a pee pee in the backyard, but lately I figured out that if I put on his harness, he’ll do it. I’m really happy about this. It’s a pain in the butt to take him in the front at night, especially since I have to worry about wildlife.

            Isn’t dog culture fascinating? I wonder what all the pee mail means for them. It seems very important. What information could they possibly be getting?

            Oh yes, Geordie has a girlfriend. It’s dead serious, at least for him. He does seem to tolerate her mischievous behavior in a way he wouldn’t with other dogs…it’s like she’s the only one who’s allowed to tease him. She gets right in his face and licks his lips! He wouldn’t tolerate that with other dogs, not even the cute ones he gets on with okay. 🙂 But no, there won’t be any Georgettes or Geordies, which is too bad. I’d love to populate the world with Geordies.


          • Pee mail? Territory marking?


          • Basically. There are a lot of theories about what’s going on and what they’re learning about each other, but it’s clearly very important to them. At the park, I’ve seen dogs get peed on through the big dog/little dog fence because they’re so eager to catch a whiff.


          • That reminds me of something I read a while back about dogs and their perception of the world through smell. Most of our models of the world are visual and auditory. The smell faculties of primates are pretty atrophied. But the smell sense in most mammals is far more developed.

            The dog/wolf model of the world is heavily smell oriented. Just as we have complex visual maps, they have complex smell maps, which we may be able to know about intellectually, but can never appreciate from a visceral primal first person perspective. It’s a reminder of just how different their experience is from ours, and how different experience can be for other types of minds.


14. If reality functions by means of causality, then yes, all aspects of existence must be measurable in some conceivable manner. Note that if it doesn’t, then trying to figure out how reality functions is futile anyway — a wasted enterprise. (This is my single principle of metaphysics, which argues that naturalism is our only option if we want to figure things out.) But causality doesn’t mean that our human measurements must always be effective. We could take aberrant measurements that throw us well off course, such as measuring rain on the only rainy day of the year. Furthermore, note that correlation does not always provide us with causation. Even though the time displayed on my mobile phone may suggest how much light there is outside my house, we know quite well that my phone doesn’t cause the earth to move as it does around the sun. So our measurements can indeed trick us. Still, under a causal setting, the way to improve our understanding of reality should be to search for solutions which continually make sense of our various measurements. Here we presume an extremely complex jigsaw puzzle that does fit together in the end.

    I’m thrilled that Hubbard has presented us with such a naturalistic position — good stuff!


• I don’t think we have any reason to doubt causality in and of itself, at least not at this point. (Although if someone ever figures out faster-than-light or time travel, the picture might become complicated.) I’m open to the possibility that some of those causes might not be deterministic (which I think you see as violating causality), that determinism may be an emergent phenomenon arising from vast numbers of quantum events. If so, I wouldn’t see it as outside of naturalism, at least as I conceive it, but then I consider myself an evidentialist more than a naturalist.

      Correlation certainly does not always provide causation. In fact, we never ever observe causation, only correlation. We commonly agree that we’ve established causation when we’ve narrowed the correlations down to consistent isolated time sequenced variables, but in the end, causation is always a theory. Like any theory, it’s always subject to being falsified by new observations.

      All that said, I do think anything knowable is measurable, in principle, but totally agree that it might not be practical.


  15. Pingback: What is knowledge? | SelfAwarePatterns

  16. Mike,
    I wonder if you could explain to me, conceptually rather than practically anyway, how fundamentally random events at the most basic level (and so they will not be influenced by other events), could create the apparently causal dynamics that we perceive at macro levels? It just doesn’t make conceptual sense to me right now that non causal dynamics could nevertheless spawn causal dynamics. Apparently this is what most physicists today believe, though I don’t yet understand the conceptual justification for it. I have asked several physicists online over the past four years however. Some have seemed perturbed by my inquiries, though most have exercised their rights to not respond at all.

    While it may be difficult to conceptualize how non causal dynamics could give rise to causal dynamics, notice that the contrary position remains quite simple and plausible. If everything ultimately functions causally at the quantum level, and thus remains perfectly determined ontologically, then of course we’d expect the same at higher levels. Here the only concession to make would be that modern physicists do not comprehend quantum causal dynamics (which shouldn’t actually be all that surprising).

    Perhaps it’s too much to ask physicists to not only do their jobs, but also to straighten out metaphysics (not to mention epistemology). Observe that physicists provide philosophers with consensus physical understandings, though philosophers do not yet return the favor.

    I offer “causality” as my single principle of metaphysics, though not from the perspective that it’s true. Instead I offer it because to the extent that it fails, there aren’t things for us to figure out anyway. No analogy is perfect, though to me this seems somewhat like an antithesis to Pascal’s wager.


    • Eric,
As I understand it, quantum events aren’t completely random in the sense of all possible outcomes being equally probable. While we can’t determine the outcome of an individual quantum event, we can calculate probabilities of each possible outcome (resulting in the wave function). But when we think in terms of large quantities of quantum events, the aggregate shape of the outcomes, as we pull back to larger and larger numbers, becomes increasingly set, increasingly deterministic. By the time we get to something like the operation of complex molecules, the outcome is thought to be essentially deterministic.
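The aggregation idea above can be sketched numerically. Below is a toy simulation (my own illustration, not anything from the thread) in which each “quantum event” is just a random binary outcome: individual events stay unpredictable, but the relative spread of the aggregate shrinks roughly as 1/√N, which is the sense in which large collections behave almost deterministically.

```python
import random

def relative_spread(n_events, trials=200, p=0.5):
    # Run `trials` independent aggregates of n_events random binary events,
    # then return the standard deviation of the aggregate mean expressed
    # as a fraction of that mean.
    means = []
    for _ in range(trials):
        hits = sum(1 for _ in range(n_events) if random.random() < p)
        means.append(hits / n_events)
    avg = sum(means) / trials
    var = sum((m - avg) ** 2 for m in means) / trials
    return (var ** 0.5) / avg

# Individual events remain random, but the aggregate tightens as N grows.
for n in (10, 1000, 10_000):
    print(f"N={n:>6}: relative spread ~ {relative_spread(n):.4f}")
```

The shrinking spread is just the law of large numbers at work; nothing quantum-specific is being modeled here, only the statistical point that randomness at the micro level is compatible with near-determinism in the aggregate.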

      It’s possible for quantum randomness to “bleed” into the macroscopic world. In the Schrodinger’s cat thought experiment, a macroscopic event, whether the cat is poisoned, is dependent on a random quantum event, whether an atom radioactively decays. In general, this could also happen anytime someone makes a decision based on a single photon placement in a double slit experiment. This opens the possibility that there may be natural scenarios where quantum randomness might bleed into the macroscopic world, potentially making it non-deterministic, although I don’t know of any that have been conclusively observed yet outside of a lab.

The thing about interpreting quantum mechanics is, as I understand it, you can’t keep all of the fundamental properties of the macroscopic world. You must give something up, whether it be determinism, locality, a single timeline, or objective reality. As bizarre as that sounds, it’s how our universe appears to work. Richard Feynman said that if you think you understand quantum mechanics, you don’t understand quantum mechanics. I think by this he meant that if QM isn’t making you question the nature of reality, then you don’t yet understand how bizarre it is.


  17. Mike,
    I suppose that I was ignoring the probability distribution associated with our QM observations. Thus complete randomness is not observed in this regard. Furthermore given these extremely micro distributions our world could still seem fully causal to us in a practical sense, even if nature isn’t in the end. Still that doesn’t quite answer my question (not that you’ve implied it does). If the photon, for example, doesn’t have a fully determined causal fate, then how might causal dynamics still emerge from this sort of function? While that remains unclear to me, human ignorance is anything but.

    Regarding what I demand of reality in a conceptual sense, maybe that’s where this can get straightened out. As for objective reality, I could sacrifice that only under supernatural circumstances. (In fact that one seems extra god-like.) As for determinism, I could sacrifice that if there is a fundamental void in causality (though I wouldn’t expect anything causal to emerge from such a realm). Regarding locality and a single timeline, I could easily sacrifice them (if I get your meaning). Doesn’t quantum entanglement subvert locality? Of course time seems quite strange as well — no worries there. I also consider it possible for more dimensions to exist than the four that we perceive. The only thing that I truly struggle with conceptually, is causal dynamics that emerge from non causal dynamics. Just how might that work?

I suppose that I don’t run afoul of Richard Feynman’s proclamation, since I do not claim to understand quantum mechanics, and certainly consider the stuff bizarre. I presume that this was the perspective of Einstein as well. But then what about Einstein’s critics? So assured have they been that causality ultimately fails, that they’ve felt the need to adopt a separate metaphysics in just this one regard.


  18. Fizan says:

    Hi Mike,

Interesting post, got me thinking (again). I think it ultimately does depend on how one defines things. From my perspective, I see knowledge as something which reduces uncertainty or at least gives the illusion of doing so. In that sense, it is akin to belief. Measurement, from your definition, seems to do the same thing, so they are essentially the same.

    But I’m going to take on the challenge anyway :), perhaps using some absurd scenarios.

Firstly, I know uncertainty exists; can I measure it?
Let’s take gravity as an example. We’ve quantified it and refined our definitions/understanding of it to get more and more useful estimations. But as with all things, there is some uncertainty. I know this exists, but can I measure how much? You may be tempted to say yes we can, because we have effectively been reducing that uncertainty all this time. However (here’s the absurd scenario), suppose over the next 1,000,000 years our measurements of gravity start to increase slowly, so that by then, rather than 9.8 m/s² we are getting 20 m/s². This trend would make us realize in due course that there was something missing in our understanding (the uncertainty) which we hadn’t accounted for. By that time (a million years from now) we would have a coherent understanding and predictability for this trend, by revising and modifying our scientific laws and theories (as good science would do) to account for it.
My target here is the real uncertainty which we know exists. How do we know it exists? Because we know we don’t have 100% certain knowledge. In fact, we don’t know how certain our knowledge is, because that would imply knowing what the certain knowledge is to measure it against. Our current measurements of uncertainty are based on other measurements which are themselves uncertain.
So my question is: can we measure how uncertain we are in our knowledge? We do know that we are uncertain to some extent or another.
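Fizan’s gravity hypothetical can be made concrete with a toy model (all numbers invented for illustration): a slow systematic drift in g hides inside ordinary measurement noise on human timescales, but dominates over the million-year span of the thought experiment — which is exactly the kind of uncertainty our short-run error bars never capture.

```python
import random
import statistics

def g_measurement(year, drift_per_year=1e-5, noise=0.05):
    # Hypothetical: the "true" value of g creeps upward over time,
    # while each individual measurement carries random scatter.
    return 9.8 + drift_per_year * year + random.gauss(0, noise)

# Over a 50-year span, the drift is buried in the measurement noise...
recent = [g_measurement(y) for y in range(50)]
# ...but a million years out it dominates, pushing g toward 20 m/s².
distant = [g_measurement(1_000_000 + y) for y in range(50)]

print(f"mean g, years 0-49:         {statistics.mean(recent):.3f}")
print(f"mean g, a million years on: {statistics.mean(distant):.3f}")
```

With these invented parameters, the drift contributes only about 0.0005 m/s² over a career, far below the scatter, so a scientist computing a standard error from the recent data would report confident agreement with 9.8 while the unmodeled trend quietly accumulates.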

Secondly, I know I exist; can I measure this?
    So in this one from my perspective, I am 100% certain that I exist and there is no uncertainty in it for me. Since measurement was going to reduce uncertainty how can it be applied to this scenario? And I’m not talking about other people’s existence or other people being able to know of my existence.


    • Hi Fizan,
      Thanks!

On how we measure uncertainty, I think it depends on uncertainty for which goal. For example, we can’t measure everything we don’t know about gravity. But we can measure what we don’t know, say, about how it will affect the trajectory of a particular body (very small uncertainty if we know all the bodies involved), or how it might behave in a black hole (lots of uncertainty). In the case of business, we can’t measure everything we don’t know about how the market will behave, but we can measure it in relation to a business decision, such as whether there will be enough demand for an expansion in manufacturing capacity.

      Good question on your existence. Normally the fallback position is what difference in observations the proposition being true or false might make. In this case, if you exist, you observe your existence, but if you didn’t exist, it’s not like you would be able to observe your non-existence. Unless we want to count the observation of your existence as a measurement (Hubbard might), I can’t see any way.


      • Fizan says:

        I’m taking your original question as a challenge. So whether we can KNOW something exists yet not be able to MEASURE it. I totally agree with your example and measuring uncertainty in terms of predefined goals. That would still be limited because any tools we employ to calculate the uncertainty are themselves uncertain, their certainty being defined by other tools which are again uncertain etc. However, as you suspect I’m going for the general uncertainty rather than a specific one.

So I propose that we do ‘know’ that there are things we don’t know. So we know of the existence of uncertainty which is beyond our current understanding. Then the question, in terms of your challenge, becomes: can we measure how much we don’t know? From another angle, it becomes: can we measure how much we do know? I presume not, because that would imply the contradiction that we already know what the ultimate 100% truth is to compare our current level of understanding against.

To give a different perspective to it, say in a group of shamanic exorcists: when in their view a person becomes possessed by a demon, they may have some degree of inclination (measurement) as to whether it is Demon A as opposed to Demon B, C, or D that has possessed the body. They may not be 100% sure, and hence there is some uncertainty, and yes, they could give a crude estimate of that uncertainty. That’s all good. But then what if they come to learn that demons didn’t even exist, and it was something else entirely which was causing those behaviors?

With regards to the existence measurement issue: are we going to substitute measurement for all instances of knowledge? I would expect measurement to include something like degrees or inclination or gradations etc., i.e. something to quantify. Can I say I can measure that I exist? The question naturally would be: against what? Non-existence? But I can never know of non-existence, so how can I measure anything against it? Can I say that I can measure that I exist to a certain level/degree, etc.?
        If I say that I know non-existence exists it sounds like a contradiction.

Does my measuring my own existence reduce some uncertainty? I would say no, because none existed to begin with.


        • I think Hubbard would argue that any reduction in uncertainty is a measurement. (And yes, he gets that from the Shannon definition of information.) The problem with the demon example is that the shamans never knew what they were dealing with in the first place.

          Maybe a better one is if I mention a historical person who you’ve never heard of and ask you to guess how long that person lived. Your initial reaction is you have no idea, but if they were famous, the probability that they died in infancy or even early childhood seems slight, and the probability that they lived more than 108 years also seems remote. So by eliminating the absurdities, we can reduce our uncertainty, which by Hubbard’s definition is measurement. That said, I’m not sure myself how convinced I am by that argument.
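Since Hubbard borrows the Shannon framing, the lifespan example can be put in information-theoretic terms. A rough sketch (the age ranges are my own invented numbers, and uniform distributions are assumed purely for simplicity): eliminating the absurd lifespans removes a quantifiable number of bits of uncertainty, which is exactly what Hubbard’s definition counts as measurement.

```python
import math

def entropy_bits(n_outcomes):
    # Shannon entropy of a uniform distribution over n equally likely outcomes.
    return math.log2(n_outcomes)

# Before: the person could have died at any age from 0 to 120 (121 outcomes).
before = entropy_bits(121)
# After eliminating absurdities: a famous person plausibly lived 15 to 108
# (94 remaining outcomes).
after = entropy_bits(94)

print(f"uncertainty before: {before:.2f} bits")
print(f"uncertainty after:  {after:.2f} bits")
print(f"reduction:          {before - after:.2f} bits")
```

The reduction is small, a fraction of a bit, which arguably matches the intuition in the thread: ruling out only the absurd extremes is a real but very weak measurement.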

          On the existence question, I think I’m going to concede on that one and agree it’s something we know that can’t be measured. Hubbard probably wouldn’t, but he isn’t here, and I agree that it does seem like if you can’t compare your existence with something else, observe a ratio between it and some other observable phenomena, then it seems perverse to call it “measurement.”


          • Fizan says:

            I’m still going to press on the first one a little more. This may be a philosophical difference actually.
            “..The problem with the demon example is that the shamans never knew what they were dealing with in the first place.”
            I think if you asked the shaman at the time they wouldn’t have agreed. They might have imagined that there were finer explanations to the phenomenon but I suspect they would have imagined something like more categories of demons etc. I’m making all this up I know, but my point is what would humans of a million years from now think about our explanations (given they continued to progress intellectually). They may also say that we didn’t know what we were talking about when describing mental and behavioural problems. Or even other things which are seemingly well established like gravity.
If I was a shaman and was reflective enough, I might have realised that there are things that I don’t know yet (i.e. the role of the brain, how the brain is structured and communicates, which may have a say in how we behave, or on those demons).
At the time, identifying certain behaviours as demonic possession and giving it some context reduced the uncertainty and helped healing measures to be taken vs, for example, criminal prosecution etc. In that sense I feel they were measuring what they thought they knew and reducing uncertainty by categorising. They may also have known that there were things they didn’t know.
In modern times it’s similar, in my opinion. For example, the unusual orbit of Mercury or the photoelectric effect or the double slit may have seemed like minor uncertainties, but they opened up a whole new era. Before discovering relativity and quantum mechanics, most of physics seemed almost complete, yet there were things they did not know (and may have suspected). At the time they couldn’t say: I can measure how much we are uncertain in our world view/physics etc.
            Right now it may be the dark matter and dark energy issue which is what we know we don’t know much about.
            So although we may be able to humbly acknowledge that we know there are things we don’t know, can we say we can measure how much that is? I feel not.
In this case it is open ended; there is 100% uncertainty, as we don’t know what the end goal is (or even if there is any); the possibilities could be infinite. Any measurement will not reduce this uncertainty unless perhaps we eventually did reach that 100% certainty (where we know everything there is to know). But right now we can’t even be sure such a thing as ultimate 100% certain knowledge even exists and/or if we are headed towards it or not. So our uncertainty remains 100%.
            So in that sense I think there is a slight difference between knowledge and measurement again. I can say “I know that there are things I don’t know. How much and to what extent I don’t know”
            But can I say “I can measure that there are things I don’t know/ can’t measure. How much and to what extent I can’t measure”.
            In a sense measurement appears more presumptuous and contextual than knowledge.


          • Fizan says:

            Just to add “..if I mention a historical person who you’ve never heard of and ask you to guess how long that person lived.”

In this case, although I don’t have knowledge of the person, I do have knowledge of human life in general, which can guide my measurement. I agree with Hubbard that this should count as measurement, and the measurement was of the thing I had knowledge of.


    • Fizan,
Thinking about your points, I think this gets to the problem with knowledge as a concept. Yes, the shamans probably thought they knew things. Today we have more information about the phenomena that led them to conclude there were demons, and we say we know they don’t exist. (Although I suppose you could redefine “demon” to mental diseases or what have you, but doing so doesn’t lead to productive solutions.) Our additional information prevents us from considering the shamans’ beliefs to be knowledge.

      Will people centuries from now look back at many of the things we think we know and say we didn’t actually know it? Undoubtedly. Scientists are very careful about not adding assumptions to what we can establish with observations, but often we add assumptions we’re not even conscious of, and those will probably bite us. This is the relativity of knowledge.

But this interaction reminds me of that Isaac Asimov quote:
      “when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

      This also makes me think of Rumsfeld’s famous (infamous?) distinctions between “known unknowns” and “unknown unknowns”. The first is measurable, the second isn’t. But unknown unknowns are…unknown.

      Now, you might argue that the fact that we know there are unknown unknowns is itself unmeasurable. Maybe. But it seems like we could measure how aware someone is of that fact by observing how they behave. For example, a military leader with that awareness probably follows different strategies than if they don’t have it. Having the awareness is desirable precisely so that it will affect our humility and decisions. I’m not saying this would be an easy measurement to make, or one that could be made with any precision, but I can see ways to reduce uncertainty about it.

Revisiting the existence knowledge, one thing that occurred to me is that we didn’t really specify what we mean by existence. For example, what if we’re characters in a simulation? Do we “exist”? From our perspective we might, but from the perspective of someone outside of the simulation, we’re just code that thinks it exists, that is, code that has a model of a self that isn’t really there. Maybe you’re a Boltzmann brain, a configuration of atoms that just randomly assembled, complete with memories of an entire life up to now, that will dissipate in the next few seconds…


  19. Fizan says:

Liked the quote by Asimov. But taking your last thought of the simulated universe, if it were true, does that quote still hold the same ground?
The earth would be neither spherical nor oval etc. Perhaps in code form it is more 2-dimensional (holographic principle), and in that sense more like being flat?

In the unknown unknowns there may be a difference between what you and I are saying. To put it simply, I’m saying “can we measure how much we do know?” I think it’s unlikely.

Would scientists in the 1800s or anyone else (e.g. shamans) have been able to answer that? Unlikely.

You are right, it does get to what knowledge is. Whatever we answer would be in relation to other knowledge, which is in relation to other knowledge. Not very dissimilar to the shamans. And yes, I know scientists are extremely mindful of adding assumptions, yet they are as fallible as any human. The problem might be unconscious, yes. Or perhaps even with the assumptions we started off with. Can we ever avoid assumptions? Probably not. (And what do you make of Gödel’s Incompleteness Theorem in this regard?)

    In order to measure ‘how much’ we know we have to know how much we don’t know. Yet we can still claim we have knowledge of what we do know. We just can’t contrast it against any background/ standard etc. to be able to quantify it.

    With another example: Do I have knowledge of science? Yes.
    Can I measure how much knowledge I have of science? No.

    You could only measure it by a close ended definition of science and then using that as the standard. But science as I understand is an open ended endeavour.

You can always measure how much knowledge I have of science in ‘comparison’ to all the verified knowledge that has been published/documented etc. But 10 years from now that background would have expanded; in fact it is expanding every nanosecond, making it a poor standard to compare against. And even if you did take a cross-sectional view, you still can’t claim to have measured my knowledge of science. You could claim to have measured my knowledge of ‘known science so far’.

With regards to existence knowledge, that’s why I was clear to say it’s about me knowing of my own existence, which I know with 100% certainty. I’m not concerned about other people’s existence or others knowing of my own existence. What occurred to you with regards to the simulation hypothesis and Boltzmann brains occurred to you because you exist. Those things in comparison are just hypotheses and conjectures within your existence.


    • On Gödel’s Incompleteness Theorem, I did a post on it a while back.
      https://selfawarepatterns.com/2015/12/28/godels-incompleteness-theorems-dont-rule-out-artificial-intelligence/
      It’s mostly concerned with the purported consequences for AI, computational theory of mind, etc, but it does touch lightly on the epistemic aspects. That said, I can’t claim to have given Gödel’s theorem a lot of thought.

      I’m not quite sure I’m catching your meaning when you say we can’t measure knowledge, such as someone’s knowledge of science. It seems like that’s precisely what we do when we make students take tests. If you’re saying we can’t measure their knowledge in relation to what might someday be discovered, well, yes, but then again I think we’re back to the unknowable.

      On knowledge of your own existence, you seem to be equating thought with existence. (You wouldn’t be alone of course. That’s what Descartes did.) My point was that if you’re within a simulation, that existence isn’t what you take it to be. You could argue that you still know you exist in some manner. But then what exactly do we mean by “existence”? It seems like we’re uttering the same word for multiple possible ontological states (physical instantiation, a defined simulation character, a piece of software with a model of a mind that doesn’t exist, etc). (Just so you know, I’m just pointing out conceivable issues. In general, I don’t find idealism or the simulation hypothesis productive concepts.)


      • Fizan says:

        Yes I think Gödel’s Incompleteness Theorem is going to lead us down another rabbit hole and I’ll leave it for now (I will read your post later as it seems interesting).

With regards to testing students’ knowledge of science: I think in a more literal sense you haven’t measured their knowledge of science but their knowledge of the test instead. So that’s where I see a difference between knowledge and measurement. Measurement is against a comparator, knowledge doesn’t have to be. In order to measure you have to set conditions and then you can interpret that measurement in the context of those conditions. Measurement doesn’t exist outside the setting of those conditions but knowledge doesn’t need those conditions to exist.

Going back to the Asimov quote, he was in a way measuring wrongness in people who thought the earth was flat vs people who thought it was spherical. And also in people who think the difference between the above two people is equivalent. He was able to do that with the condition that the earth was egg shaped (or some other variation of this). The condition for his last sentence is the difference between mistaking an egg for a sphere as opposed to a flat plane, which again rests on the condition of it being egg shaped in the first place. That measurement is all good in that context. But now suppose the latest physics comes to hold with a good degree of certainty that the whole universe is a holographic representation of a 2-dimensional code (which is possible). The context and conditions have now changed.
        A future Asimov equivalent may say something similar and add “…But the wrongest of all is Asimov who even after saying that was under the impression he himself wasn’t wrong” etc. and so on.
        See how the context and conditions are so important.

Back to Hubbard’s definition: “A quantitatively expressed reduction of uncertainty based on one or more observations.”

Suppose we saw 5 very large and long watery colored sticks moving around in the air for some time. That would count as an observation, but has it reduced any uncertainty yet? No.

Uncertainty about what?
        Because there is no context here.
        But do we get to know about these? Yes.

It is only when we add context to these that the uncertainty reduces and, in effect, measurement has taken place. And I suspect the reason why people don’t like the idea of measuring some things has more to do with the context in which they are measured. In the above example the context (which you add) could take many forms, e.g. religion, science, supernatural, political (conspiracy) etc. All of them will serve to reduce your uncertainty and count as measurements.

The existence question is very straightforward. I ‘know’ I exist and no matter what I believe, know, conjecture, experience, do, imagine or whatever you tell me etc., I still know I exist with 100% certainty. I don’t have to be able to explain what I mean by it (existence) to you; I only have to know it myself.


• Sorry to intrude on your conversation. I’ll just pop in to say that these points seem similar to the ones I was trying to make, albeit in my own mealy-mouthed way. As I see it, context makes measurement possible (or worthwhile) and yet the context itself is not knowledge derived from measurement. So Mike, this is similar to what I was trying to get at. But forget philosophy. I like Fizan’s way of putting it:

          “Measurement is against a comparator, knowledge doesn’t have to be. In order to measure you have to set conditions and then you can interpret that measurement in the context of those conditions. Measurement doesn’t exist outside the setting of those conditions but knowledge doesn’t need those conditions to exist.”

          On “my own” existence, I think Fizan is talking about the Cartesian version here…? Descartes considered it an indubitable truth that survives even the most extreme skepticism. Brain in the vat scenarios are basically the same as an evil demon warping my mind, and these only strengthen Descartes’ position. The Cogito is practically defined as that which doesn’t depend on measurement. It’s like a crystalline truth carved out by doubting everything, especially the existence of the world. So any position that assumes the existence of the world (or non-existence) can’t be used as an objection to it. Fizan, this is a really good one! I wish I’d thought of it.

          Mike, you could object to Descartes’ extreme doubt as methodology and say the Cogito isn’t really knowledge or isn’t really certain. But I think what’s at issue here is the methodology. Once you grant Descartes’ method, it seems to me impossible to argue against.

          • No worries at all on jumping in. Everyone’s always welcome. To me, that’s the benefit of having these conversations on public threads rather than by email.

            On existence, I did concede above that this may be a case of unmeasurable knowledge. But my follow-up was more to point out that this isn’t the simple, straightforward thing many people assume. Descartes’ assumption was that his access to his own mind was privileged and unequivocal. But the results of psychological and neuroscientific studies show that we don’t have that kind of unequivocal access. Our consciousness is a version of what’s in our mind, a representation, a model. We never have access to the actual thing, only a representation of it built outside of consciousness.

            What actually exists may be far less than the “I” that I perceive. I might be a piece of computer code somewhere that holds a model for the rest of a mind that doesn’t really exist. You might reply that, yes, but something exists that holds that model. But now the words “something” and “exist” are placeholders for a lot of concepts, and we’re in pretty ambiguous territory. In that case, what is Descartes’ method really establishing?

            All that said, I don’t generally find this kind of extreme skepticism productive. I only point out its conceivability when people take cogito ergo sum as absolute proof that their mind as they perceive it exists.

          • I agree that theories about whether or not we have access to our minds are definitely not straightforward.

            On Descartes, he was looking for certain knowledge, that which cannot be doubted. The brain is a part of the world, but the cogito is not. The cogito doesn’t necessarily attach itself to a physical reality. This establishes a different sort of ontology than what we’re accustomed to, one that doesn’t take perception as a reliable means to ontological knowledge. The thinking mind knows itself as existing, but this existence is not the same as the sort that comes from science. In other words, he might agree that scientific evidence shows that we don’t have unequivocal access to our minds (as brains), but that entire enterprise is not certain knowledge. In the context of his method of doubt, we don’t really know that the sciences hold any truth.

            That said, someone could argue that Descartes didn’t establish existence at all, but instead established only that he experienced himself thinking. (This is really the thrust of his argument anyway—I can’t doubt that I’m thinking because doubting is a form of thinking.) In that case, we would concede “I think” but not “therefore I am.” And then ontology would remain to be determined, though scientific truth would still need justification if we’re starting from scratch.

            I agree about the skepticism. In fact my undergrad thesis was against Descartes’ method of doubt. I don’t think it’s the proper method for obtaining knowledge, though I do appreciate Descartes’ cogito and its contribution to philosophy.

          • Fizan says:

            “But my follow up was more to point out that this isn’t the simple straight forward thing many people assume.”

            Here we disagree. I think it is actually very straightforward for me. I don’t know; it might not be for others. And I don’t want to appeal to Descartes here either, because I’ve never really read him (though I am acquainted with his famous statements). I’m talking of my existence to myself.
            Taking what the latest scientific research tells me is also totally irrelevant here, because that science (or anything else) wasn’t done by me. As a related example, if I know I’m in pain then no matter what someone else tells me differently with regards to it, I still know I’m in pain. I don’t have to find this out (e.g. using science) either, because I already know it. Even if I did investigate using some method, how could I trust the results if they were in any way contrary to what I already know?
            Neuroscience could tell me that it knows me better than even what I know of myself, but that’s a different issue. Here we are describing the characteristics of me rather than questioning my existence. Even on that, I’m a little skeptical (although malleable), because I feel there is a divide between population sample studies and their interpretation/generalization in individual contexts.

        • Like Tina I’ll also pat Fizan on the back for that one. It isn’t useful to classify “observation” itself under the heading of “measurement”. Instead it seems best to reserve this term for perceptions that have context. Perceiving those long watery colored sticks in the air may be “known” (in a soft sense), but won’t be measurement unless relations are made about them. Is this a benevolent god providing some kind of instructions? Or perhaps an indication of rain? Regardless, with context such perceptions may be usefully considered “measurement”. I’m sure that Doug would approve!

          Speaking of context (such as measuring the value of a given theorist’s ideas 🙂 ), I’ll leave my EP2:

          There is only one process by which anything conscious, consciously figures anything out. It takes what it thinks it knows (evidence) and uses this to assess what it’s not so sure about (a model). As a given model seems to remain consistent with evidence, it tends to become progressively more believed.

          (Then of course Fizan also earns points with me by acknowledging profundity of my favorite Catholic!)

        • “I think in a more literal sense you haven’t measured their knowledge of science but their knowledge of the test instead.”

          It’s a common sentiment that tests don’t measure knowledge, but in most of my experience, good test takers tend to be knowledgeable, at least in the area being tested, and poor test takers tend to be less knowledgeable. Certainly there are crappy tests that don’t do a good job at measuring what they purport to measure, but I think implying they’re all like that is inaccurate.

          (It reminds me of the philosophers who told Galileo that he didn’t actually see celestial objects in his telescope, only what the telescope showed him. While technically true, it ignored the other ways to establish the telescope’s reliability, such as using it to observe things on the ground that could then be approached to verify the magnified image.)

          Indeed, the effectiveness of a test can itself be measured with other factors such as long term student success, job success, etc. At work, we use a logic proficiency test to gauge candidates for programming jobs. I was initially very skeptical, but I eventually had to admit that people who did well on it tended to be good programmers, while people who didn’t were more likely to struggle.

          “Suppose we saw 5 very large and long watery colored sticks moving around in the air for some time. That would count as an observation, but has it reduced any uncertainty yet? No.”

          At the risk of seeming persnickety, we actually did reduce uncertainty: 1) the person saw some phenomena in the sky, 2) there were five of them, 3) they were “watery colored” and 4) stick shaped, and 5) they moved around for some time. I think you’re measuring uncertainty starting after the observations, but the observations themselves reduced uncertainty. Of course, the danger here is that the person jumps to conclusions. (That seems to be what happens when people see unusual things in the sky.) Certainly more observations may reduce uncertainty much further.
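          (A toy way to make Hubbard’s definition concrete: model the pre-observation uncertainty as a probability distribution, and an observation “measures” exactly to the degree that it reduces the distribution’s entropy. This is my own illustrative sketch, not anything from Hubbard’s book; the numbers are invented purely for illustration.)

```python
# Toy illustration of measurement as a quantitatively expressed
# reduction of uncertainty. All numbers here are made up.
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before looking: we think there could be anywhere from 1 to 8
# objects in the sky, all equally likely.
prior = [1 / 8] * 8

# After the observation "there were five of them": most of the
# probability mass collapses onto 5 (allowing for miscounting).
posterior = [0.01, 0.01, 0.02, 0.10, 0.70, 0.10, 0.04, 0.02]

reduction = entropy(prior) - entropy(posterior)
print(f"Uncertainty before: {entropy(prior):.2f} bits")
print(f"Uncertainty after:  {entropy(posterior):.2f} bits")
print(f"Reduction:          {reduction:.2f} bits")
```

          On this toy picture, the observation alone already counts as a (small) measurement, and adding context would be modeled as further observations that shrink the entropy even more.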

          I do agree that context matters a great deal, but context is basically more knowledge, about the history of the phenomena, or the surrounding environment, or other facts in relation to it. (This does agree with Eric’s epistemic point.) I mentioned the information management DIKW pyramid somewhere else on this thread. Context is a crucial property to climb the pyramid from data to knowledge and wisdom. http://www.theitsmreview.com/2016/04/dikw-model-knowledge-management/

          So, can a measurement happen without context? No. I agree completely with that. But the question is, how much knowledge can exist without context? How much actual knowledge did the person seeing the sticks in the sky have? They remembered the phenomenology of their experience, but without more context, had little or no knowledge about what was actually there, other than the apparent characteristics noted above.

          • Fizan says:

            “It’s a common sentiment that...”
            Although I totally agree with your sentiment that tests are generally good predictors, I am not talking about sentiments here but rather about the literal observation we are making with them. And I think the difference from the telescope example is the indirect nature of tests vs the direct nature of a telescope measurement.

            “At the risk of seeming persnickety, we actually did reduce uncertainty...”
            I would still like to ask: what uncertainty did we reduce?
            1) The person saw a phenomenon in the sky – as opposed to what else? Seeing a phenomenon at sea or in the home?
            2) There were five of them – as opposed to what, 10 or 12?
            etc.
            In each of those instances, the possible alternatives are infinite. So I don’t feel any uncertainty was reduced. It doesn’t make sense to say uncertainty was reduced without saying what the uncertainty was about to begin with.

            “But the question is, how much knowledge can exist without context?”
            OK, I’ll agree with you on this to some extent. But I think we are looking at ‘if’ rather than ‘how much’. I think when a context is applied, what we get is ‘meaning’ within that context. I do think that raw observations exist. You could argue that a raw observation isn’t really knowledge. Furthermore, I think knowledge about something can exist in multiple contexts, but its measurement is relevant and limited to each context separately. For example, in the sticks example, you could say it was an undefined aerial phenomenon, or a hallucination, or an illusion, or a miracle, or advanced technology, or a malfunction of the code that runs the universe, etc. If your context tells you it was an illusion, then it does nothing for someone whose context is that it was a miracle, and vice versa. Both have measured and given meaning to it separately. And still, you might also be able to appreciate it in separate contexts simultaneously.

          • On the tests, I think we’ll have to agree to disagree here. Which is fine. I start worrying when my conversation partners and I agree too much 🙂

            “I do think that raw observations exist.”

            I’m actually not sure that they do. Oh, the raw sensory data exists, but we never get conscious access to it. Which is probably good, because neuroscience shows that it’s far more limited than we imagine. By the time we perceive it, it’s already gone through a vast pattern matching hierarchy. Again, this is good. First we see the oncoming truck to avoid, not the details of its shape, color, etc. (This is also why two different people with vastly different backgrounds can perceive radically different things at the same event.)

            In other words, we get a default context delivered to us (the conscious us). Now, we can choose to focus on those details and, using them, consciously reconstruct what we think a raw observation is. (Although technically this is yet another context, another model of the surrounding facts.) But it’s an after-the-fact thing. This is a useful thing to do, since often our pattern matching engine is wrong. But it takes extra effort. (Which is why many people never do it.)

            That said, I think we agree about the differing contexts. The only point I’d add is that it’s often possible to measure the effectiveness of particular contexts. Still, the context of seeing, say, a computer process at the hardware level, the software level, or a broader business process level, or a natural event at the physics level, the chemistry level, or a biological level, may all be effective for varying purposes depending on what level of abstraction we’re currently working at.

          • Fizan says:

            I do agree with your neuroscience perspective on this.
            I think one thing to consider here is: can we know something but be consciously unaware of it?
            I think neuroscience does show that can be true. One may argue that the time we can say we know it is when we actually become consciously aware of it. This would be a complicated discussion, because I know from forensic psychiatry that, in the legal context, not being aware of something consciously is not a valid defence when you have gone through the experience of something.

            Another thing which I’m starting to get from our discussion is the feeling that knowledge is a first-person endeavour whilst measurement is a third-person endeavour. Because what I know is all there is for me, and I can’t measure it against a larger context (because even if that does exist, I don’t know it yet). Someone else might attempt to do so if they feel they have a larger context in which to measure a person’s knowledge (though that seems narcissistic). It’s just a feeling at this point; I may be wrong.
            E.g. if I were the only person in the world, would I be uncertain?
            Taking your example from the post about life expectancy: I wouldn’t be uncertain as to what my life expectancy is or when I’m going to die, because I wouldn’t even know that I was going to die.

            “The only point I’d add is that it’s often possible to measure the effectiveness of particular contexts.”

            I think if you were to do that then you’ve done it in another context and it’s an endless cycle. I could measure the effectiveness of a religious context in terms of human progress etc. A religious person on the other hand may measure the effectiveness of my scientific context in terms of reaching heaven etc.

          • Good point about unconscious knowledge. I definitely think unconscious beliefs exist, and some portion of those beliefs would be knowledge. My post a while back on politics being about self interest had that fact at the center of it. We are often motivated by things we know unconsciously.

            I’m also reminded of the phenomenon of blindsight, where someone with damage to their occipital lobe can’t consciously see something, but can make far better than random guesses about what’s in front of their eyes. The sub-cortical regions of their brain can perceive it, but the damage to the visual processing regions of their cortex make it inaccessible to their conscious selves.

            I guess the question is, at what point in sensory processing does the information become belief? It seems like another name for the pattern matching hierarchy I mentioned above is worldview, or belief system. But sensory information alters that belief system over time. Similar to the spectrum between beliefs and knowledge, I suspect there’s no bright line between sensory impressions and belief.

            On measuring contexts, whether someone makes it into heaven seems unmeasurable, at least in this world, but of course it’s also unknowable. On the other hand, whether religious belief or a scientific viewpoint increases our success at predicting phenomena is measurable. Of course, if you value psychological comfort over increased reliability of prediction, your preferences for the contexts will be different.

          • Fizan says:

            I am curious to know how you differentiate between belief and knowledge?

            And unconscious knowledge also points towards what I was considering earlier. You could potentially measure another person’s unconscious knowledge (psychoanalysts attempt to do that all the time). From your perspective that person knows something but is unaware of it. Yet from the person’s own perspective they don’t have that knowledge at all.

            You also seem to hint at the end that what you like can influence which context you choose. If you asked a religious person, they would say it is measurable and predictable whether someone will go to heaven. Furthermore, they may also say it is knowable.
            A religious person could similarly contend that you could measure whether a religious belief or a scientific viewpoint increases our success in attaining spiritual enlightenment.

          • “I am curious to know how you differentiate between belief and knowledge?”

            I actually address that in the next post. Turns out it’s far from a simple matter.

            “If you asked a religious person they would say it is measurable and predictable whether someone will go to heaven. Furthermore they may also say it is knowable.”

            I would ask them to describe how they’d propose to measure it, or how they would know it. They might say “faith”. I doubt most would claim that faith propositions can be measured, but many might insist that it provides knowledge. Since that assertion is untestable, my only response is that I’m a skeptic because most untested assertions, when they finally are tested, turn out to be wrong, and so I find it productive to assume they’re wrong.

            I fully realize this might leave me with some explaining to do at the pearly gates, should there turn out to be pearly gates.

          • Fizan says:

            I’ll look at your next post again then; I sure do find it difficult to differentiate them.

            “I would ask them to describe how they’d propose to measure it, or how they would know it.”
            I think they may say something like a virtuous person is more likely to go to heaven than a sinful one. Going a step further they may say (depending on the religion) that they have observed a person living in heaven, in a vision or dream etc. But even if they didn’t make the latter claim, in my opinion, it is still a measurement from their perspective (it does not necessarily have to be from someone else’s perspective).

  20. From what I gather Mike does actually agree with Fizan, though as mentioned he’s also being a bit persnickety. His 1 had a context of “seeing something in the sky rather than not”. His 2 had a context of “quantity”. His 3 had a context of “color”. His 4 had a context of “shape”. His 5 had a context of “motion”. Take away context and you should take away any reduction in uncertainty — meaningless input. The dead gopher in my trap should have had a context to me, and perhaps a similar one to its mate. Unfortunately for my lawn and me, apparently this second gopher knew far too well! If neither it nor I had put our perceptions into context, however, then we shouldn’t have gleaned anything from them.

    To help resolve never-ending philosophical discussions I like to emphasize the need for agreements to be reached. For example, might we four agree upon the DIKW model that Mike linked to above? I must say that with the whole context-versus-understanding dynamic graphed out, it does seem like a pretty neat model to me. (Then there are points that progress from data to information to knowledge to wisdom.) A demand curve, for example, pits price versus quantity produced — at a high price a great deal will be produced, while at a low price very little will be produced. Similarly, with great understanding there should be great context, and vice versa. So on a graph we get the same rising sort of line.

    Agreed?

    • Thanks Eric. I appreciate the effort to drive agreement.

      I’m a little worried about overselling the DIKW hierarchy. It has its uses within IT management, making a distinction between data (which is usually what’s in databases, data files, etc) and information, which typically requires functionality to produce. But for data to be data, it must at some point have been encoded by someone or by some process interpreting events. The problem is that efficiently storing it often means separating it from much of its context.

      Of course, the human brain must also encode its information into data stored in synapses. (At least according to the prevailing theory of memory.) A synapse, or collection of synapses, considered in isolation would, I guess, be just data. It would require collation with data in many areas of the brain and processing to be information. So maybe the DIKW does have value for this discussion.

      BTW, a minor correction on my part: I mentioned above that I had already brought up the DIKW earlier in this thread, but that actually happened on the next post’s thread.

    • Mike,
      From that short discussion that you linked to above (http://www.theitsmreview.com/2016/04/dikw-model-knowledge-management/), it seems to me that the DIKW model does not reference the non-conscious computers that we build, or even the vast majority of what’s in our heads. It seems to me that it’s about the conscious form of computer (and specifically IT workers). So let me break this down a bit more by means of my own understanding of conscious and non-conscious dynamics.

      “Data” here would not be one or more synapses in isolation, but rather the non-conscious computer producing at least a moment of input for a conscious computer to have. According to my model these inputs come in three different flavors — senses (such as smell), valence (punishment/reward), and memory of past conscious processing. For example a brief moment of pain without context/understanding, would merely provide “data”. With context/understanding however pain would graduate to “information”, or further to “knowledge”, or further still to “wisdom”.

      Would it be useful to call a momentary pain that has enough context/understanding, “wisdom”? Though not a standard application for the term, it should at least work under the provided progression. A worker without context/understanding should experience “data”, while a worker with the most context/understanding should experience “wisdom”. Given the state of mental studies today I’m a bit surprised that Dick Whittington was able to work this out. I suppose that people with real world jobs to do sometimes just make things happen anyway.

      • Eric,
        On pain, in terms of context, we might be talking about another spectrum. To experience any kind of pain requires that the brain have some context, otherwise it’s just an electrochemical impulse. Of course, when we consciously become aware of it, that context is already baked in.

        On the worker discussion, even data in a computer system requires some context to be data rather than structured noise. One of the biggest issues where I work is that much of the data in databases is gobbledygook to anyone not familiar with the business context of what it represents. I do agree that the more context you have, the more the meaning of the data becomes evident.

    • Mike,
      When you say, “To experience any kind of pain requires that the brain have some context, otherwise it’s just an electrochemical impulse…”, I agree, but I also like to use terms that are a bit more specific than “the brain”. What I presume you mean here is that there is a non-conscious computer that requires context in order to create the pain that a separate conscious computer experiences. Similarly there is context by which the buttons that I’m now pressing create letters on my screen. If we leave things as open as “the brain” (as most everyone today does), then we disregard which of the two varieties of computer is being referenced. Regardless yes, there will be non-conscious context baked into an experience of pain. But what about pain without any conscious context?

      Let’s say that you experience something that feels bad to you, but you can’t come up with any conscious context for it. Perhaps it doesn’t feel like pain in a given part of your body, or even like pain at all (which could be at least some conscious context I suppose). The DIKW model says that this sensation is essentially useless data — not information, or certainly not knowledge, or certainly not wisdom. I can see how the model could be useful.

      I think I chose valence to consider DIKW here, because it most challenges my own models. To exist without punishment/reward is, from my models, to exist without “self”. Here existence has no personal significance. But even useless data, or punishment/reward without context, creates self. This is the stuff that I theorize to drive the function of the conscious form of computer (not that it could do so without context).

      Anyway, what do you think about not referencing “the brain” so much in future discussions, when one of the two forms of computer that constitutes the brain might present what you mean more effectively?

      • Eric,
        On not referencing the brain, I think the issue might be that my understanding of the divides doesn’t neatly align with yours. I’m fine with saying that pain is determined outside of consciousness, but referring to the unconscious vs conscious computer isn’t how I understand the divide. For me, it’s much messier than the clean division you describe.

        Indeed, in my current view, you could say that all processing is unconscious, with the only conscious part being what the metacognitive functionality reflects back (with distortions, gaps, and simplifications). We can talk about what cognition gets noticed by these metacognitive functions, such as imaginative simulations, but not what cognition it itself does (aside from the meta part).

    • Mike,
      In your response, I wonder if you consciously decided that you’d reference my conscious / non-conscious computer distinction instead as “conscious / unconscious”? Otherwise it would seem that you must have done so, yes, “unconsciously” (or as I prefer to call it “quasi-consciously”). Whether calculated or unconscious however, you’ll need to personally acknowledge what I’m referring to in order to effectively understand how my dual computer model functions, let alone demonstrate that it’s too simple to be effective.

      You wouldn’t call the computer that you’re now looking at “unconscious”, of course, but rather “not conscious”. Well I call the computers in our heads “not conscious” as well, except that they also create tiny second computers (that do less than a thousandth of a percent of the processing), which are conscious.

      As always I look forward to your testing. Whenever you come up with those horribly messy mental quandaries that appear impossible for anyone to parsimoniously model, just pass them over to me!

      • Eric,
        “I wonder if you consciously decided that you’d reference my conscious / non-conscious computer distinction instead as “conscious / unconscious”?”

        It definitely was not a conscious choice. Sorry, I forgot about the distinctions you made on these terms. I was treating “unconscious” and “non-conscious” or “not conscious” synonymously.

        I think we’ve talked before about the difficulty of referring to these categories. I agree that intuitively “unconscious” refers to a system that has the potential to be conscious, or at least some portion of it does. I do often consciously make the effort to avoid the word “subconscious” due to the misconceptions associated with it, although at times I mourn the loss of a distinct term to refer to cognition outside the scope of introspection that happens while awake.

        On your dual computer model, I’m trying to think of how to illustrate the difference in our views here. Let’s try this. Consider a couple of commercial computer systems.

        System A is actually two computers, a larger one and a smaller one. The larger one does a lot of work, but delegates some portions of that work to the smaller one, feeding its inputs (say, punishments and rewards) and receiving its outputs. That’s how I understand your model.

        Now consider system B. System B also has two computers, a large one and a small one. But the large one does all the actual work. The only thing the small computer does is monitor aspects of the large computer’s work, and provide selective, summarized, and simplified data back to the large computer on that work. The scope of what the smaller computer provides feedback on is a bit uneven, but it’s the only insight the larger computer has into its own processing. That processing is affected by the feedback it receives, but the larger computer never has the second, smaller computer actually do any of its work. That’s closer to my understanding.

        Of course, I often get sloppy with the language. I refer to processing in the large computer that the small computer is able to capture as being “in consciousness.” But the actual mechanism is a metacognitive one, an introspection engine, and the contents of consciousness are what the mechanism supplies to the overall cognitive system. A minority but substantial portion of that cognition is potentially accessible to the metacognitive mechanism, but most of the time it misses most of it. When metacognition captures a piece of cognition, that piece is within consciousness, but when metacognition misses it, it falls outside of consciousness. The result is an uneven messy border between what we call consciousness and what we call the unconscious, non-conscious, or not-conscious.

        Hope that makes sense.

    • Mike,
      The last thing that I want to do is come up with “gotcha moments” for you. It wasn’t me that duped you about the “unconscious” thing, but rather unhelpful terminology that has unfortunately been surviving for quite a while. We can hardly blame Freud when he needed to reference a theorized sexual attraction of boys for their mothers — this was in a far more primitive age of science. But in English “un” means “not”, as in “unable”, “unwise”, and so on. Freud however was obviously talking about a conscious subject that thinks and functions in a not entirely conscious manner. So his “unconscious” is not “not conscious”, as the English language otherwise suggests, or as we presume my computer to be, but rather “below conscious”, or “subconscious”. Of course if that term isn’t clean enough today given past usage, then an entirely new term may be appropriate. If so I propose “quasi-conscious”.

      (Here’s another wrinkle. What do scientists today call degraded states of consciousness encompassing sleep, drunkenness, being stoned, and so on? Note that “inebriated” isn’t quite broad enough, and I’m not aware of anything broader. My own suggestion would be not to call this “subconscious”, but rather “sub-conscious”, spoken aloud with a slight pause at the hyphen.)

      I’m quite pleased that you’re proposing a computer system A (mine), as well as B (yours). This way I may be able to productively adjust your conceptions of my models, as well as explore the ones that you’ve developed.

      My model is not quite as simple as having a large computer that delegates a few things to the small computer (such that the small computer does less than a thousandth of a percent as much processing). It’s more like the small conscious computer only exists by means of the large non-conscious computer, and this massive non-conscious computer exists in order to actually serve the conscious one (as well as promote survival in general). Thus it doesn’t take much conscious work to consciously wiggle your fingers; the vast non-conscious computer should effortlessly take care of such function. Furthermore it must be mentioned that the vast computer provides the small one with three forms of input: senses such as vision, punishment/reward for motivation, and access to past consciousness (memory). The small computer then interprets these inputs and constructs scenarios about how to promote its happiness (or utility, or valence, or whatever). Finally, muscle operation is its only (non-thought) form of output.

      Your system B seems to have a large non-conscious computer that does essentially everything, except that it also has a small conscious computer that monitors what it does as a form of input to it. As I understand what you’ve said, that seems somewhat like being in a space ship that’s essentially going to do what it does, though you (consciousness) are able to tell it some things that may or may not alter its course. My interpretation is that you have the large computer in charge aided by the small one, while I have the small computer in charge that’s aided by the large one.

      On metacognition, it’s difficult for me to go there. To me, thinking about thinking seems like something that requires language. How might one think about thought, unlike say a tree or rain, without a term with which to represent the concept? To be clear, my own consciousness model addresses the function of conscious life whether or not it has a capacity for language, let alone a capacity for something like metacognition. I consider language to be a highly advanced second mode of thought, and the first great human revolution (with the second being “specialization”, the third “written language”, the fourth “hard science”, and a coming fifth revolution that should help us effectively use our power by teaching us what’s ultimately valuable).


      • Eric,
        To give Freud his due, from what I’ve read, he didn’t buy the idea of a subconscious either. The idea of a second subterranean consciousness doing things the main consciousness is not aware of seems to be a creation of later pop psychology. Most of the careful stuff I’ve read stays away from the term “subconscious,” principally to avoid implying support for the second consciousness version. Freud apparently wrote about the unconscious mind, not the subconscious one.

        On inebriation, being stoned, high, etc., I think it’s mostly referred to as “altered states of consciousness,” although strictly speaking, both the conscious and the unconscious are being altered when drugs inhibit or supercharge postsynaptic receptors.

        “My interpretation is that you have the large computer in charge aided by the small one,”

        I did a post a long time ago that described this relationship in another way that you might find instructive: https://selfawarepatterns.com/2014/03/24/consciousness-the-interpreter-the-lexicographer-the-reporter/

        On metacognition and language, I agree that they’re related, but for me, the causal relationship is reversed. Language, as far as I can tell, is a specific type of symbolic thought, and symbolic thought requires metacognition. How else can we substitute a symbol for another conscious perception or action, such as linking the word “red” to the sensory perception of redness? To learn and use that association requires the ability to monitor our own perceptions, to introspect them, to have metacognitive access to them.

        Again, I think metacognition without cognition has no power. It’s why I describe the five-layer model so often. Layer 5 (metacognition) is meaningless without the other four layers: reflex, perception, attention, and imagination. But the content from those lower four layers that gets noticed, that falls within our consciousness, amounts to what is captured by the metacognitive functionality.


    • Mike,
      On a thread this long it might be best to let complex misconceptions between us wait until later. Still it’s difficult for me to stop when you provide such interesting responses.

      On Freud, my point (with the benefit of extended hindsight) is that I believe he’d have served humanity far better by choosing the term “subconscious” or even “quasi-conscious” for what he was referring to (such as below-the-surface sexual desires). Instead he used the “un” modifier, which literally means “not” in the English language. Back then he couldn’t have had any conception of what a “not conscious computer” was, given that computers weren’t yet invented. Today, however, we do understand, and so we use the “unconscious” term both as “not conscious”, as English suggests, and in the way Freud intended, i.e. “not entirely conscious”. I probably wouldn’t complain so much about this inherited relic, but mental science has enough problems already. Straightening out these terms might even help scientists grasp that we’re dealing with a vast supercomputer that creates a tiny conscious computer which does less than a thousandth of a percent as much processing.

      Speaking of more precise terminology, I’d forgotten about the “altered states of consciousness” term, or ASC. This not only addresses being drunk, stoned, and so on, but more natural states such as sleep and hypnosis. Thus I’ll retract my “sub-conscious” suggestion in favor of ASC. (By the way, I hate when scientists refer to “dreamless sleep” as not being conscious. Under my models they’d be more careful. Being perfectly anaesthetised, or dead, or even being my mobile phone, may appropriately be termed “not conscious”. But being asleep? Attempting surgery on someone in that state should quickly illustrate the flaw there! Sleep is more like being in an altered state of consciousness.)

      I hadn’t previously read your “interpreter, lexicographer, reporter” post. Well done. Here I can see why it would be popular to consider the big computer to be in charge, though with aid from the little one. When the corpus callosum is cut we humorously find ourselves explaining why our bodies have done things that we weren’t given access to. The implication is that the big computer is able to get the whole job done by itself, with the little computer simply observing and attempting to explain after the fact. Furthermore there are all those studies which demonstrate that the big computer knows what’s going to be decided before the little one that supposedly makes these decisions.

      I consider such observations to fit my own model quite well however. In order for the vast supercomputer to most effectively service the tiny conscious computer, the idea is that it will need to predict conscious function before that function is actually complete. Furthermore of course it’s the little computer that has the agency. Obviously people without consciousness don’t get much done, beyond autonomic bodily function, which should suggest how important consciousness happens to be in the human.

      I consider language to be a specific kind of symbolic thought as well. But is “thinking about thinking” required for it? Is this what babies do while they learn English? And as puppies change into obedient dogs, do they do no metacognition while they fail to learn English?

      I’d say that one must have a conception of thought in order to think about it. But then how does a baby get a conception of something as abstract as this? Sure, there should be conceptions of various people and such, but not of thought itself. Thus in a literal sense I’d say that there is no metacognition in the human baby while English is being learned. I suppose that you use a less literal definition of “metacognition”, and thus make it work that way. But once again, do human babies do that sort of metacognition, while puppies that become loved dogs do not? I say that human babies and puppies are about the same, though a few hundred thousand years ago (or whatever) the human became better set up mentally for language.


      • Eric,
        Describing someone in dreamless sleep as in an altered state of consciousness, rather than unconscious, seems like we’ve veered off the common meaning of these words. It illustrates how difficult it is to define “consciousness” in a way that meets all our intuitions about it.

        What are the minimum and essential attributes of consciousness? If we say awareness of the environment or of our current state, then being asleep, dreamless or not, doesn’t seem to meet that criterion. If we say a state where we think we’re aware of those things, then dreams count. But then what’s going on in dreams? I suspect dreaming is the imaginative simulation engine divorced from, or with compromised connections to, the perception centers of the brain.

        “Furthermore of course it’s the little computer that has the agency.”
        I’m not sure I agree. I think the lessons of the split-brain patients, as well as all the research showing that we don’t know our own minds, indicate that the “little computer” has causal influence, but not control.

        “And as puppies change into obedient dogs, do they do no metacognitian while they fail to learn English?”
        We actually have no evidence that puppies or dogs have metacognition. Outside of primates, the evidence for it is scant. We do have evidence that they have the other layers, albeit in lower capacities compared with humans.

        “I’d say that one must have a conception of thought in order to think about it.”
        What leads you to that conclusion? Does a baby have to have a conception of anxiety or jealousy before it can feel them, or one of red before seeing it? Some capabilities are innate, and those capabilities give rise to a primal, first-person, initially non-verbal version of the associated conception.

        And we do know that eventually humans develop metacognition. We don’t know that for non-primates. For humans, the only question is when. I think it’s possible the youngest babies, particularly newborns, don’t have it. But I think language acquisition requires at least an incipient version of it.

        Also, humans appear to be the only species with symbolic thought. That capability must be built on something that humans have that those other species don’t, or something we have to a much greater extent. If not metacognition, then what?


    • Mike,
      I have a request for you. When we discuss the models that I’ve developed, can we refrain from using the “unconscious” term? I’m not suggesting that it not be used if it comes up in one of your blog posts, or regarding the ideas of anyone beyond myself. I consider it my obligation to accept the definitions of any person who wants to convey their ideas, and the term “unconscious” may certainly apply. But if it’s specifically me attempting to show you how my models work, would you mind us no longer using this term? I realize that this is a strange request, but I consider the term to be associated with far too many ideas to be useful in general. In order for my own ideas to be conveyed, precise instruments are instead required. So here’s what I’m proposing:

      As you know I theorize a vast supercomputer in our heads, and when referenced this would either be “not conscious” or “non-conscious”, though never “unconscious”. Then regarding the tiny computer that it presumably creates, which I theorize to do less than a thousandth of a percent as much processing, this would exclusively be called “conscious”. There are a couple of other distinctions to make in this regard as well however.

      One concerns a melding of these two computers. For example, a person may have a bias against a given race of people without consciously perceiving this to be the case. Thus it wouldn’t be right to call the associated behavior either “conscious” or “non-conscious”, but rather something in between. Given that you’ve mentioned that the “subconscious” term may not be very useful today (since certain pop psychologists have used it to theorize a second form of consciousness), a different term is required. Unless there is a reasonably standard appropriate term (and once again I believe that “unconscious” has come to mean far too many things to be effective), I’ll use “quasi-conscious” here. This will represent situations in which conscious behavior is influenced by the non-conscious computer in an unperceived way.

      Secondly, there are what may be considered “impaired” or at least “different” states of conscious function. I’ll call these “altered states of consciousness”, or ASC (a term I’m pleased that you’ve suggested). I’m not going to provide a specific demarcation between the altered and non-altered forms, but rather leave that for specialists to decide. I will however say that I don’t mean for this to be relative to what’s “normal” for a given subject. For a person with Down syndrome, for example, normal would be an altered conscious state as I define the term. Furthermore, there are all sorts of recreational substances which alter consciousness, such as alcohol, and they seem to do so in a gradual rather than absolute manner. Observe that the body even produces some such substances, such as adrenaline, in order to promote survival under specific circumstances. Also note that some substances can hinder conscious function to the point of a complete loss of it, and thus here we go from ASC to no C at all.

      ASC seems to occur through normal circumstances as well. From daydreaming to full hypnosis, consciousness changes. Exhaustion would be another example. I realize that you’re pushing back regarding sleep here, though to me ASC seems like an appropriate way to reference how we recuperate in this regard. Observe that we’re able to easily wake people from natural sleep, specifically because there is a diminished rather than a full loss of consciousness. Furthermore, consider the scenarios that run as we dream. When we wake up and review what we remember of them, we find that a good bit of those scenarios doesn’t make sense. I suppose this is due to those compromised connections that you’ve mentioned. Thus here we have ASC rather than no C at all.

      Regarding consciousness not being in complete control, the models that I’ve developed do account for this. Note that the conscious computer is theorized to be entirely produced by the non-conscious computer, and also to do less than a thousandth of a percent as much processing. The theory is that the big computer, without a lick of agency, “farms” the little computer to determine how it would attempt to promote its own happiness. As a hard determinist I certainly don’t want to imply that this computer has agency in an ultimate sense. I rather mean that it has agency from its incredibly tiny perspective. Regardless, I theorize the conscious form of computer to be needed because it’s apparently not possible to program a non-conscious computer well enough to deal with diverse circumstances (unlike the environment of, say, a tree). Consciousness seems to help promote an autonomous, and thus generic, variety of function. Consider how a computer might otherwise be programmed to write the sorts of things that we do on your blog. Without agency it can’t, though from its own sentient perspective the conscious form of computer does have agency, and thus can potentially think about the sorts of things that we do.

      The definition that I use for metacognition actually seems more restrictive than the one that you use, so I certainly don’t consider dogs and human babies to have it. As I define “thinking about thought”, this should require a subject with a reasonable mastery of a reasonably advanced natural language, or at least a term for “thought” itself that can potentially be pondered. That I’m aware of, no animals, and no primates beyond a somewhat educated human, have any such conception.

      In order to think about something, I believe that a conception of it must consciously exist. For example, if something feels anxiety or jealousy (standard sorts of conscious inputs for many forms of conscious life, I presume), it should have the potential to think about these feelings, even without a language to use. Furthermore, various objects might be seen, heard, and so on, and thus could potentially be thought about by something conscious that has no language. But how does a conscious entity think about something as abstract as thought itself, without a term with which to represent the idea? Thought isn’t a potential input to the processor, such as a sound or a vision, but rather conscious processing associated with perceptions. So thinking about thinking should require a pretty advanced language that has a “thinking” term. You and I think about thought all the time; it’s a hobby, I suppose. But my perception is that most people today rarely if ever think about thought, and still seem to do reasonably well in their own lives. Going back further, my guess is that thousands and thousands of years ago very few people thought about thought. Even with such a term to use, why think about thought outside of an academic setting?

      To your question of “If not metacognition, then what?”, my answer is “The tool of language itself”. Apparently this tool began to emerge in the human a few hundred thousand years ago (or whatever), and I mark this as the first of four great human revolutions, with a fifth to come. I consider language to be a powerful second mode of thought. Of course dogs are famous for learning names and various individual terms. But they don’t yet seem to have the evolved capacity to put such terms together into meaningful sentences. Thus I don’t consider them to yet have this powerful second mode of thought. Conversely, thinking about thought, at least in a literal sense, does not seem like a very practical tool. Regardless, I consider language to be required for metacognition, rather than metacognition required for language. (Still, it may be that you define metacognition more broadly than the literal way that I do, in which case it might be a more effective tool in general.)


      • Eric,
        I’ll do my best to remember not to use the “unconscious” term when discussing your model. Please don’t take it personally if I slip up from time to time.

        I do think agency exists (in the weak compatibilist sense), but I think its association with consciousness is questionable. And I see a module similar to what you’re describing for handling complex situations, but it’s the broader imaginative simulation engine, and much of what it does lies outside of our introspective reach (see what I did there 🙂 ). From my readings, a substantial portion of the prefrontal cortex is involved in this ability, and it relies heavily on functionality from throughout the neocortex, so my understanding of it would be much broader than your thousandth-of-a-percent computer.

        On metacognition, I think I am using a much more primal conception of it than you are. Your version seems to be the ability of someone to articulate their own mental state. Perhaps a bit more broadly, you might mean the ability to have a sophisticated understanding of their own mental state. I agree that that ability probably requires language, and likely at least a minimal amount of intellectual training.

        But consider what is necessary in terms of symbolic thought for someone to have that ability. In order to understand the concept of “red” and be able to articulate it, I must have an ineffable first person appreciation of the sensation of redness. That sensation can’t be described other than by labeling it with a symbol and associating the symbol with the sensation, by pointing to something that reflects the right wavelength of electromagnetic radiation, hence providing the sensation, and saying “red” (or whatever the sound is in someone’s language).

        This ability, to see red and to know at a primal pre-language level that we are seeing red so that we have the ability to assign a symbol to it, is the metacognitive capability I’m talking about. This ability is very fundamental to our cognition. It is so fundamental, that we tend to intuitively project it onto anything that shows awareness of its surroundings. But careful scientific studies have failed to find it anywhere but in humans and a few other primates.

        Now, maybe non-human animals do have this ability and some other limitation prevents them from using symbols. It may be that the primal metacognitive capability I’m describing here is necessary but not sufficient. But I do think it is necessary. And our inability to isolate it in anything other than primates (and then only to a limited degree) is what makes me think it might be the special sauce of symbolic thought, which is itself the special sauce for human capabilities.


    • Mike,
      Well regarding my problem with the “unconscious” term, I’m not trying to catch you napping or anything like that. Rather I believe that our soft sciences in general could be improved by deleting it in favor of at least three terms that are more specific. This way you should not only be able to better grasp my models, but better assess the ideas of others in general. For example the unconscious term might come up while you’re listening to Ginger Campbell’s show. From there you might use its context to interpret a meaning of “non-conscious”, or “quasi-conscious”, or an “altered state of consciousness”, or perhaps even something else. Here you should be able to more effectively grasp and assess what’s being said. Conversely someone who hasn’t quite noticed an assortment of implicit meanings associated with this term (which I consider standard) may not have your proficiency.

      Like you, I’m also fine going “compatibilist” regarding agency / freewill / volition or whatever they call it tomorrow. It seems to me that the key here is perspective. Given my single principle of metaphysics, that everything functions causally, I consider all that I do to be predestined based upon the structure of reality. But I can’t have anything close to a perfect perspective, and so from this ignorance, sure, choice will exist. Here it’s quite appropriate to label me “good” for some of the things that I do, as well as “evil” for others. But with a progressively greater perspective, theoretically any given subject’s function progressively reduces back to perfectly determined causal dynamics.

      You might want to rethink disassociating agency from consciousness, since you’d then be ascribing it to a non-conscious form of computer. Causality mandates that there be no agency in the end, but if we are going to identify a limited perspective that does facilitate it, consciousness seems like the only effective option to me. I realize that you’re concerned about experiments which suggest that once we consciously decide to do something, there is evidence that the non-conscious mind has already been working in that direction. Thus it could be that it’s actually the non-conscious computer deciding. But perhaps the non-conscious prediction engine simply has reasonable evidence of what the conscious prediction engine will end up deciding, and so gets working on this before the conscious decision is consciously realized? This would still be a conscious decision, just one that the non-conscious computer predicts beforehand. (And at least from a limited perspective, the decision should be free as well.)

      On consciousness being a computer that does less than one thousandth of a percent as much processing as the non-conscious computer that facilitates it, I don’t think I’ve been clear about what I mean. Perhaps a thought experiment will help.

      Let’s imagine that you have a brain that is not biologically connected with your body, though full brain function is still transferred to your body by means of magic. It should be clear to you that the brain which exists outside of your body is not conscious, as well as that you personally do happen to be conscious. Here you’re able to interpret inputs, like images, worries, memories, and so on, construct scenarios regarding such inputs, and come to decisions based upon what feels best for you to decide at any given moment. This specifically is what I’m referring to as “the conscious computer”, rather than what facilitates such processing, or “the brain”. And again, you personally are conscious, and so are able to do conscious computation such as deciding what to eat for dinner, though the thing which facilitates this thinking is not. And it should go without saying that the conscious processing which you personally do while you’re interpreting my words, or perhaps feeling an itch, should involve less than one thousandth of a percent as much processing as what occurs in the non-conscious brain that facilitates both consciousness and more.

      Hopefully this is a sufficient description of how restrictive I’m being with my consciousness definition. I suppose that I shouldn’t even call it a computer, since that term is commonly taken to mean hardware, though I don’t know what else to say. The stuff that I’m talking about here does still compute.

      On metacognition, yes, it’s surely the case that I’m using a very strict definition while you’re using one that’s more broad. I’ll do my best to give you my own definition, and perhaps you’ll have some things to say about clarifying yours.

      As I see it, dogs probably have conceptions of all sorts of things associated with their senses. They might think about trees for example, or plants in general. Or cars. Or people. Not only should they be able to get a sense of specific people through vision and smell, but even spoken names. They even seem able to learn their own names, and regardless should be able to think about themselves. If a dog can see red, I don’t see why it couldn’t then assign some significance to this color for conscious function. Is this the sort of thing which studies suggest non-humans can’t do?

      Anyway, how might a dog get a conception of the act of conscious processing itself, or thought, so as to think about it? I just don’t know. If dogs can get a conception of thought, then they should be able to think about it, which would be metacognition, though otherwise they shouldn’t. And I’m not sure what good this would do them anyway.

      Once humans began developing oral languages, they should have been able to get far more abstract. For example one might believe that it’s about to rain and thus tell others “Rain”. But what about rain? Then with a “belief” modifier the statement gets put more into context. Furthermore “belief” essentially means “thought”. So with people thus talking about what they believe/think will happen with the weather and so on, they’d also have the potential to grasp the idea of, as well as think about, their thoughts. But beyond academics, what’s so great about that? (This is at least from my literal form of metacognition.) Language itself however — to me that seems like some pretty special sauce!


      • Eric,
        “Causality mandates that there be no agency in the end, but if we are going to identify a limited perspective that does facilitate it, consciousness seems like the only effective option to me.”

        What makes you think that? Certainly the feeling of conscious agency is very powerful, but psychological evidence doesn’t seem to support it. Yes, there are the experiments you mention, but there’s also a vast array of evidence that we often aren’t privy to our decision making. https://aeon.co/ideas/whatever-you-think-you-don-t-necessarily-know-your-own-mind
        I think all of the data is best explained by regarding consciousness as having what I call causal influence, much like the news media of a city influences the city’s policy decisions, but ultimately the media isn’t the mayor or city council.

        “If a dog can see red, I don’t see why it couldn’t then assign some significance to this color for conscious function.”

        Not sure what you mean here by “assign some significance to this color for conscious function”. If you mean that the dog, in addition to seeing red and reacting to it, can know that it itself is seeing red, well, that’s extremely difficult to determine experimentally since dogs can’t talk. But when scientists try to have animals make decisions about how to get a treat based on their assessment of their own memory (such as maybe what color they remember seeing), no animal outside of primate species can do it. It’s possible non-primate animals do have metacognition to one degree or another, but if so, it seems very limited by human standards.

        “Anyway how might a dog get a conception of the act of conscious processing itself, or thought, to thus think about it? ”

        How did the first human who thought about this get that conception? There needed to be an underlying framework for them to do so, a primal pre-language ability to know that they themselves were thinking. How does a dog have a conception of tracking a smell to find food? To do that, they need a dog’s natural underlying ability to do that, the instincts and neural programming. If they were to think about thinking, they’d need a fundamental ability to do that, an ability we have no evidence that they possess.

        “Language itself however — to me that seems like some pretty special sauce!”

        I would agree that language is a special sauce, but what enables that special sauce? What is different about human minds that enables it, the thing that non-human animals lack?


    • Mike,
      I base agency / freewill / volition upon consciousness because that’s how the term is effectively used. So my perspective here is simply definitional. We wouldn’t say that a non-conscious computer has any freedom, or thus, for example, charge a robot with crimes for killing people. Instead we’d charge the people “responsible” for this robot, specifically given that they “chose” to unleash something so dangerous. But the freedom that we see should only exist from the small perspectives that we have, or be a function of ignorance given causal determination. Removing the ignorance should effectively remove the illusion of freedom (not that we can do so very far).

      That Aeon piece is a great one for people to keep in mind, and I consider it further support for the models that I’ve developed. We believe what is convenient for us to believe, given that the little computer which we have access to is responsible for less than a thousandth of a percent as much processing as the big one that we don’t. I call a melding of these computers, as displayed when a person consciously believes that racial stereotypes are false while behaving as if they are true, “quasi-conscious” rather than the other term (the one that I consider to mean too many things to be effective).

      Anyway, I consider you entirely correct that consciousness has causal influence upon behavior. My theory is that this tiny second form of computer, which functions through the first, evolved to bring effective autonomy. Here evolution didn’t need to directly program for the virtually unlimited array of contingencies that the human is able to handle, but instead put an agent in charge by means of a punishment / reward dynamic. An amazing supercomputer seems to service the tiny one by which I experience existence. So I’d place consciousness (or me) as the mayor that has some potential to figure things out, though I’m being serviced by a vast horde of workers. Here I don’t really understand too much about what is effectively happening regarding my function, or know all that much about why I believe and do what I believe and do.

      Regarding metacognition, I suspect that I’ve finally been able to grasp what’s been going on with this term. Hopefully I can communicate it well enough for your assessment.

      For there to be any thinking about thought, one must naturally first have a conceptual grasp of what’s meant by the “thought” term. Thus a normal human does not gain an inherent capacity for metacognition, but rather can potentially do so after becoming familiar with the “thought” term. Here a person might think about thought, or think about thinking about thought tomorrow, and so on to whatever level of recursion one might grasp. Conversely, a person doesn’t need a term for “tree” to think about trees, because trees can actually be seen and so might be grasped conceptually without a term. Similarly, a dog should be able to think about a tree, or red, though not think about thought, since there shouldn’t be a conceptual grasp of this idea itself. As language evolved, people must surely have needed a “thought” term early, so that personal thoughts could be expressed that way, as in “I think [this]”.

      From here I’d like to box metacognition up and label it “settled”, though it’s not to be. It’s a fancy term that scientists have apparently decided to use in a different way, while continuing to claim that they’re still using it as “thinking about thought”. From what I gather, they’ve decided that being able to use memory of past consciousness effectively enough qualifies. Thus they’re saying that they can measure metacognition through a test where one is able to notice that an apparently arbitrary color somewhere signifies a reward for the subject. (Actually this situation isn’t surprising, since they don’t yet have an effective consciousness model at their disposal. My own such model places memory as a standard form of input to the conscious processor, though one that involves past consciousness.)

      There is no doubt that dogs are able to remember all sorts of things that are relevant to them, such as the spoken term “vet”. But there will be a point for any animal, including the human, where meaningful correlations won’t be noticed. Thus I’m saying that these tests simply demonstrate an arbitrarily greater level of cognition in primates than in non-primates, though it sounds far better to call this “metacognition”, given the memory element, rather than just plain old “cognition”. Of course I don’t believe that they consciously understand the mistake that seems to have been made here. That should instead be a quasi-conscious circumstance.

      Regarding a “primal pre-language ability to know”, I consider this nothing less than sentience itself, or the motivation which drives the conscious form of computer. So I consider this universal for effective conscious function. If something is punished / rewarded, it should know from those experiences that it exists. In the end I refer to this stuff as “self”.

      It’s not clear to me that there is anything inordinately different between the human brain and the dog brain. It should just be that, long ago, the human happened to gain enough cognitive capacity to evolve natural languages, and then you know the rest.
