The implications of embodied cognition

On his podcast, Sean Carroll interviewed Lisa Aziz-Zadeh about embodied cognition:

Brains are important things; they’re where thinking happens. Or are they? The theory of “embodied cognition” posits that it’s better to think of thinking as something that takes place in the body as a whole, not just in the cells of the brain. In some sense this is trivially true; our brains interact with the rest of our bodies, taking in signals and giving back instructions. But it seems bold to situate important elements of cognition itself in the actual non-brain parts of the body. Lisa Aziz-Zadeh is a psychologist and neuroscientist who uses imaging technologies to study how different parts of the brain and body are involved in different cognitive tasks.

As Carroll notes in his description, the idea of embodied cognition could almost be considered trivially true.  The body is the brain’s chief object of interest; the brain is hardwired to monitor and control it.  Cognition in a brain is relentlessly oriented toward this relationship, to the extent that when we think about abstract things, we typically do so in metaphors drawn from sensory or action experience, the experiences of a primate body.

A recent study showed that our memories and imagination are actually organized using the brain’s internal spatial maps, which primordially evolved for tracking physical locations.  In light of the brain’s body focus and orientation, this makes complete sense.  (I often think of various web sites, including this one, as occupying locations in an overall physical space, which completely fits with these findings.)

It’s fair to say that the body is what gives the information processing that happens in a brain its meaning.  That said, I do think some of the embodied cognition advocates get a little carried away, asserting that thinking is impossible without a body.

It may be that a human consciousness can’t develop without a body.  If we could somehow grow a human brain without a body, it’s hard to imagine what kind of consciousness might be able to form.  It seems like it would be an utterly desolate one by our standards.  But once it has developed with a body, I think we have plenty of evidence that the human mind is far more resilient than many people assume.

Patients with their spinal cord severed at the neck are cut off from most of their body.  Without the interoceptive feedback, their emotions are reportedly less intense than healthy people’s, but they retain their mind and consciousness.  Likewise, someone can be blind, deaf, lose their sense of smell, or apparently even have their vagus nerve cut, and still be conscious (albeit perhaps on life support).

It seems like the only essential component that must be present for a mind is a working brain, and not even the entire brain.  Someone can have their cerebellum destroyed and remain mentally complete.  (They’ll be clumsy, but their mind will be intact.)  The necessary and sufficient components appear to be the brainstem and overall cerebrum.  (We can lose small parts of the cortex and still retain most of our awareness, although each loss in these regions comes with a cost to our mental abilities.)

Embodied cognition is also sometimes invoked to make the case that mind uploading is impossible, even in principle.  I think it does make the case that a copied human mind would need a body, even if a virtual one.  And it definitely further illuminates just how difficult such an endeavor would be.  But “impossible” is a very strong word, and I don’t think this line of reasoning really establishes it.

Unless of course I’m missing something?

This entry was posted in Zeitgeist.

36 Responses to The implications of embodied cognition

  1. James Cross says:

    I found that Quanta Magazine article fascinating; it suggests that a key part of what the brain does is maintain mental maps of the rest of the body and the external world.

    When we say the “essential component that must be present for a mind is a working brain”, we do need to consider three things.

    1- The individual consciousness of someone with a severed spinal cord in most cases (all cases?) developed before the spinal cord was severed. So the whole body was involved in its formation.
    2- The brain itself as a biological unit probably has key parts prepped for mapping the body from birth.
    3- There certainly are feedback mechanisms other than the spinal cord from the body to the brain, among them certainly blood and the lymphatic system.

    Just found this article while verifying that point about the lymphatic system.

    https://multiplesclerosisnewstoday.com/2018/09/20/brains-lymphatic-vessels-carry-messages-that-may-promote-ms-study-reports/

    • James Cross says:

      Oh, one other thing. Even in the case of the severed spinal cord, the head remains and provides all five senses.

    • On 1, I don’t know of any congenital spinal cord trauma, although there are cases of congenital paralysis due to various types of stroke.

      On 2, that’s what I meant by “hardwired”. Antonio Damasio refers to the initial network of body mapping in the brainstem as the “protoself”. And there’s the better-known homunculus mapping in the somatosensory and primary motor cortices. The overall body maps topographically to these neural networks.

      On 3, yes the overall circulatory system does remain so lymph, hormones, and other mechanisms remain. It’s difficult to discuss what might happen without these since their absence means death. But I think we can say that the amount of information involved is a fraction of what comes in from the peripheral nervous system.

      I mentioned the senses in the post. Of course, people lose one or more senses all the time. I don’t know of anyone who’s lost all their senses and been paralyzed. There may be locked-in patients who are close. Even establishing that these people are still conscious is a challenge, although in some cases it’s been demonstrated with brain scans.

  2. Embodied cognition is fascinating. Do we need a body to think? I don’t think so, but I do think that our thinking is shaped by our body. Depending on how you define thought, software has no body, yet it thinks (it’s harder and harder to claim it doesn’t, given how it’s rapidly outclassing people in a variety of tasks, long considered the marks of intelligence). What about artificial neural networks? They lack a body…

    I think we shouldn’t underestimate the importance or power of embodied cognition, or fall into the trap of imagining that our bodies don’t vitally affect our cognitive process, but this doesn’t mean we should fall into the opposite trap of assuming one can’t have cognition without a body.

    • Well said.

      Equating computation with thought remains a controversial proposition. It seems glaringly evident to me that thought is computation, but whether all computation is thought is a much less obvious conclusion. But it’s worth remembering that computers were developed to do what, up to that point, could only be done with thought.

  3. john zande says:

    If we could somehow grow a human brain without a body, it’s hard to imagine what kind of consciousness might be able to form.

    Now THAT got my imagination going.

    • One of the things that impresses me from some sci-fi writers is the ability to tell stories from utterly alien perspectives. But telling a story from a human brain that never had a body would be a major challenge. Although if it had some kind of interaction with the outside world, it might be feasible.

  4. Wyrd Smythe says:

    Aren’t Boltzmann Brains and the whole idea of “brain in a jar” counters to the idea we need a body to think? (A lot of Greg Egan’s work, admittedly SF, but based on solid physics principles, is all about virtual minds.)

    Clearly a mind needs a substrate, and clearly the mind-body ensemble is a vast part of the whole, but even humans are all over the map in terms of how body conscious they are.

    Isn’t input (and maybe output) the only thing needed?

    • I think you’re right that input is the real divide, or maybe that the expected relationship between inputs and outputs holds.

      For Boltzmann Brains, I think the idea is that they come into existence thinking they have a body (and of course a whole life history), even if they don’t. Someone pointed out to me that if Boltzmann Brains could actually happen, there would be one for every instance of someone’s life, meaning a particular entity could have a life stretched out in fragments across infinite time and space. Indeed, each of us would have a doppelganger stretched out like that.

      • Stephen Wysong says:

        As an aside, why stop with a Boltzmann brain? Perhaps ours is a Boltzmann universe. The one is every bit as conceivable as the other.

      • Wyrd Smythe says:

        “Someone pointed out to me that if Boltzmann Brains could actually happen, there would be one for every instance of someone’s life,”

        It would take unimaginable time and infinite universes, but yep.

        Greg Egan’s novel, Permutation City, gets into this big time. Not the Boltzmann brains, but the idea of mathematical “dust” (the Wiki article explains a bit).

        One of his ideas is that, if a numerical simulation of a mind exists, it exists even if each snapshot of the system is separate and out-of-order. Each “frame” of that simulation has an ontological reality, the collection of which is the consciousness of the simulated mind.

        Which suggests that in any random data (say the digits of pi), frames of some consciousness exist. In fact, infinite minds would exist in a variety of formats.

        It’s essentially the Boltzmann brain idea, except purely mathematically.

        If one believes mind is a numerical simulation, then one may have to accept an infinite number of “minds” existing in static.

        • I need to read Permutation City one of these days!

          Disagreeable Me and I had an epic conversation along these lines a while back. It came down to the conclusion that whether a particular physical process is implementing a particular algorithm is always a matter of interpretation, including the processes that implement a mind, meaning that whether a particular mind exists is a matter of interpretation.

          DM was bothered by this and felt mathematical platonism was necessary to deal with it. I found it interesting and stark, but ultimately an aspect of any functional understanding of the mind. It led to this blog entry: https://selfawarepatterns.com/2016/03/02/are-rocks-conscious/

          • Wyrd Smythe says:

            I remember that post. I’m surprised I didn’t comment on it, but I was focused on the election around then. (The series of posts I’d written about my views on consciousness were months prior, and we may have exhausted the topic at that point.)

            “It came down to the conclusion that whether a particular physical process is implementing a particular algorithm is always a matter of interpretation, including the processes that implement a mind, meaning that whether a particular mind exists is a matter of interpretation.”

            I think some of that comes from having too broad a definition of “algorithm.”

            There is a formal definition, either using a TM or lambda calculus, that eliminates a lot of possibilities. We talked recently, IIRC, about differing views of what calculates (or not). For me, any calculus necessarily has an associated TM or lambda expression.

            I feel that “A” being interpreted as “B” doesn’t mean “A” and “B” are equals, especially when so much necessarily rests in the interpretation.

            For example, a file of (genuinely) random bytes can be interpreted as anything you want, so long as the actual desired bytes somehow exist in the interpretation. The information obviously has to come from somewhere.

            I think what we’re talking about is structure and patterns, and those either exist in the source or the interpretation. The mathematical dust theory is interesting, but I don’t know that I take it too seriously.

          • As I recall, the issue is that no physical system ever perfectly implements those things. There are always variances and other possible interpretations of what’s happening. So whether that physical system implements a Turing Machine, Lambda calculus, any other model of computation, and/or a mind, becomes a matter of interpretation. The reductio ad absurdum argument was that this made unlimited-pancomputationalism inevitable, with every rock implementing every conceivable algorithm including every conceivable mind.

            There are lots of philosophical arguments to avoid unlimited-pancomputationalism, but mine was simply to observe that it matters how much energy and effort the interpretation takes. When the interpretation is larger and more complex than what is being interpreted, that interpretation is unproductive. We don’t use rocks as computers because it would take another computer to interpret the rock as doing computation, which seems silly.
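            The unproductive-interpretation point can be made concrete with a toy sketch (my own hypothetical example, not from the original conversation): an arbitrary sequence of “rock states” can be read as any computation you like, but only because the interpretation table itself already contains the computation’s entire trace.

```python
# Toy version of the "rock as computer" argument (a hypothetical sketch,
# not anyone's actual proposal). Any sequence of distinct physical states
# can be "interpreted" as any computation, but only because the
# interpretation table already contains the computation's whole trace:
# the rock itself contributes nothing.
import random

random.seed(0)

# 1. The "rock": an arbitrary sequence of distinct physical states.
rock_states = random.sample(range(10_000), k=6)

# 2. A target computation: the successive states of a running sum.
def running_sum_trace(xs):
    trace, total = [], 0
    for x in xs:
        total += x
        trace.append(total)
    return trace

computation_trace = running_sum_trace([1, 2, 3, 4, 5, 6])

# 3. The "interpretation": pair each rock state with the corresponding
#    computational state. All the work lives in this lookup table.
interpretation = dict(zip(rock_states, computation_trace))

decoded = [interpretation[s] for s in rock_states]
assert decoded == computation_trace
```

            The lookup table here is exactly as large as the trace it “recovers”, which is the sense in which such an interpretation costs more than it delivers.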

            Dust theory doesn’t appear to rate a Wikipedia page. Just about every reference I can find to it is also a reference to Egan’s novel. But I agree that it sounds very Boltzmann Brain-ish, and about as whimsical.

          • Wyrd Smythe says:

            “There are always variances and other possible interpretations of what’s happening.”

            Is that also wrt systems designed as computers, or does it only apply to natural systems being viewed as putative computers? More concretely, what other valid interpretations exist about, say, my new laptop?

            “When the interpretation is larger and more complex than what is being interpreted, that interpretation is unproductive.”

            That’s a good way to look at it. Do you only go so far as “unproductive” or is it possible to say “invalid” in cases?

            I seem to recall at least starting to reply to your rocks post. (Did you write an earlier post that was similar? Maybe I replied to that one?) As I said, politics was really distracting me at that point, and I may have viewed us as having covered the territory fairly thoroughly by then.

            Thinking back, one thing that struck me was that the “rock as clock” isn’t about the rock at all. It’s about the solar system, because that’s the source of the heating pattern observed in the rock. One could construct a nearly identical clock using anything the sun warms as the timing source.

            I seem to recall actually writing that the same rock in a deep cave would make a non-clock. The rock is just a (terribly crude and inaccurate) sensing mechanism for time passing.

            So the computer in this case would be the sun, the Earth, the rock, and a whole bunch of other sensors and software running on some hardware.

            All of which is to say that calling it unproductive is putting it mildly! 😀

            “Just about every reference I can find to [dust theory] is also a reference to Egan’s novel.”

            Sounds like you have a handle on the theory. You may have seen Egan’s Dust FAQ (it does have plot spoilers). There’s a blog post from 2009 that outlines the theory pretty well.

            “Is that also wrt systems designed as computers, or does it only apply to natural systems being viewed as putative computers? More concretely, what other valid interpretations exist about, say, my new laptop?”
            It is meant to apply both to natural systems and designed computers. But it’s a good question on what another valid interpretation of your laptop might be. It’s easier if we just look at your computer chips in isolation, since we could in principle change what the voltage ranges mean, or find alternate patterns in the atomic structure of the silicon, although mapping that to a coherent account may require an absurd amount of work.

            But it gets progressively more difficult as we widen our scope to include the laptop’s I/O systems: keyboard, screen, etc. The wider the scope, the more a certain interpretation becomes enmeshed with the world. Indeed, the I/O systems become the processor’s version of “embodied”, where a particular interpretation of what the processor is doing becomes reified by the environment, including us.

            (An aside: It’s interesting how much embodiment and I/O derived meaning resonate with the description of quantum decoherence.)

            “Do you only go so far as “unproductive” or is it possible to say “invalid” in cases?”
            I think we concluded that saying it was logically invalid ultimately wasn’t possible, unless we wanted to define “invalid” as ridiculously unproductive.

            Thanks for the dust links!

          • Wyrd Smythe says:

            “But it gets progressively more difficult as we widen our scope to include the laptop’s I/O systems: keyboard, screen, etc.”

            I think a strong argument can be made that a computer’s architecture is very much repeated within the CPU (and other intelligent sub-systems). Modern CPUs, as you know, have internal memory, IO, and microcode.

            It’s an interesting question, how one interprets any system. I’ve been pondering it a bit since you raised the topic here.

            With computers and other designed and made objects, I think the intention and design is clear enough to invalidate (yes, invalidate) other interpretations. That doesn’t mean one can’t interpret a design as something else, but it would clearly be an alternate interpretation.

            It seems harder with natural (living) objects, but evolution selects for successful mechanisms, which ends up looking a lot like intentional design. (Enough to blind watchmakers.) Perhaps enough to make the obvious purpose of the organism the clearly correct interpretation of its function?

            As far as things like rocks, I think they’re just rocks. 😀

            “Thanks for the dust links!”

            No prob. In Egan’s FAQ, he says he doesn’t take the dust theory seriously but has never heard a good logical refutation. Most attempts run afoul of the full consequences of believing minds can be simulated. (As that blog post points out, one of the few ways to deny the conclusion is to deny that premise.)

            I wondered if one could argue along similar statistical lines as used in the virtual reality argument. That is: IF dust minds exist, then there must be an infinite (or at least vastly huge) number of them. Statistically speaking, the odds are overwhelming you are a dust mind, not a “real” one (whatever that means).

            It almost ends up being solipsistic: The odds are you’re a dust mind, but living a productive life seems to require accepting all this as real, so… might as well?

            Personally, I take dust theory as a reductio ad absurdum argument that minds cannot be simulated. I do, indeed, deny that blogger’s first premise. (As you know. 😀 )

          • Wyrd Smythe says:

            “Wyrd might be fine with this exclusion,…”

            Nope. 🙂

            Nor with the idea of redefining “computation.” I think it’s defined just fine.

          • Just wanted to throw two cents in here. I kinda think the better way to approach the issue is to redefine “computation” so that it fits better with general understanding. I think that’s where Chalmers was with counterfactuals.

            I think you can get where you, as a computationalist, want to be by requiring that a computation be a process performed by a mechanism such that, after the process is completed, the mechanism is capable of repeating the process. There may be further constraints on the process, but that one should eliminate the “wall as computer” problem.
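            As a toy illustration of that repeatability constraint (hypothetical classes of my own, just a sketch): a resettable mechanism passes the test, while a habituating one, like a synapse that weakens with each use, fails it.

```python
# A minimal sketch of the proposed repeatability constraint (hypothetical
# classes; an illustration, not a definitive account of the proposal).
class Adder:
    """Resettable mechanism: returns to its ready state after each run."""
    def run(self, a, b):
        return a + b  # no persistent state is altered


class HabituatingUnit:
    """Mechanism that, like a synapse, is changed by every use."""
    def __init__(self):
        self.strength = 1.0

    def run(self, signal):
        out = signal * self.strength
        self.strength *= 0.9  # each firing weakens the next response
        return out


adder = Adder()
repeatable = adder.run(2, 3) == adder.run(2, 3)  # True: identical runs

unit = HabituatingUnit()
habituating = unit.run(1.0) == unit.run(1.0)  # False: 1.0 then 0.9
```

            Notably, the second class mirrors the synaptic habituation raised later in the thread, which is exactly the kind of system the constraint would exclude.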

            *

          • I guess I would add the constraint that the mechanism be responding to its environment. This is what Mike referred to as being “enmeshed with the world”.

            *

          • James,
            The constraint that the system be able to repeat the process would rule out nervous systems. Any activity in a nervous system alters the system. There are refractory periods where neurons won’t fire another action potential, time required for sodium and potassium ions to be pumped back into place, short term and then long term potentiation or depression of synaptic strength, habituation, etc.

            Wyrd might be fine with this exclusion, but I suspect you wouldn’t.

            My stance is that if we do make the exclusion, it means we have to come up with another term to refer to what nervous systems are doing. Artificial neural networks, machine learning AI, would also need to be relabeled, although not the underlying hardware unless it’s neuromorphic.

          • Mike, are you saying CPUs don’t have refractory periods? In any case, I include the refractory process (getting ready for the next round) as part of the process, and a part that walls and rocks don’t have.

            *

          • James, we can focus on the synapses if that helps. They’re constantly either increasing or decreasing in strength due to transmission and modulation activities. A neural circuit is never exactly the same on subsequent stimuli coming in. That, I can say, has no correlate in regular commercial processor chips. The issue is that nervous systems don’t have the hardware / software / storage divides. It’s all an integrated system.

            For a computer, if we expand our scope to the overall system including its hardware, software, and storage, then a complex task that modifies the system is also unlikely to be precisely repeatable (think about how a PC changes as software is installed, removed, and updates applied over its lifetime), and we’re back to the issues with the constraint.

          • Mike, are you saying the number of synapses changes after each firing of the neuron? I’m pretty sure this is not the case for the vast majority of neuron firings. Even so, I think the requirement of being able to repeat a process within a window of variability is not unreasonable. Note: I’m not talking about complex tasks. I’m talking about the simplest tasks. I’m talking about single neurons firing in response to incoming signals.

            *

          • The number of synapses doesn’t change that often, but the strength of the synapse (the number of post-synaptic receptors and the reliability of pre-synaptic neurotransmitter release) does vary from firing to firing. The short term variations are called short term potentiation (strengthening) and short term depression (weakening). Even sensory neurons in the peripheral nervous system, which do nothing but fire when triggered, undergo habituation, firing less often on repeated stimuli or increasing firing under other conditions.

            The simplest operation an interneuron might face is simply being triggered by a pace-setting pulse, which may come from electrical synapses. But that amounts to the recharging waves that transistors receive in a chip, which is really more setting conditions for computations than performing them.

            If you want simple repeatability, you have to drop to the individual proteins, and even they degrade over time. Sorry. Not trying to be difficult. Biology is just complicated.

  5. Wyrd Smythe says:

    “Patients with their spinal cord severed at the neck are cut off from most of their body. Without the interoceptive feedback, their emotions are reportedly less intense than healthy people’s, but they retain their mind and consciousness.”

    I meant to comment there could be psychological reasons for this. For instance, if all life offered was paralysis, you might be faced with adopting a calm attitude with reduced expectations or going crazy over your loss. It’s an attitude — pervading calm — sometimes encountered with people who’ve suffered a grievous loss.

    And obviously the world is far more vivid if you can move around freely in it.

  6. Stephen Wysong says:

    I strongly recommend the book Philosophy in the Flesh: “The Embodied Mind and Its Challenge to Western Thought” (1999 … sheesh! 20 years ago!) by Lakoff and Johnson. You can score a used paperback version from Amazon for as little as $8.

    It’s the book that informed a lot of my early thinking about consciousness, particularly their observation that upwards of 95% of the brain’s cognitive operations are unconscious. (I don’t have a source at hand and I’m too busy today for research, but I believe they’ve since upped that to 98%.) Their definition of “cognitive”:

    “As is the practice in cognitive science, we will use the term cognitive in the richest possible sense, to describe any mental operations and structures that are involved in language, meaning, perception, conceptual systems and reason.”

    I think that Aziz-Zadeh’s view of “thinking as something that takes place in the body as a whole” is something of a stretch (although not the kind of stretch that does take place in the body as a whole … that’s a joke, folks). But Lakoff and Johnson make an excellent case that embodied metaphor is at the root of all of our cognition and much of our language. A very important book; highly recommended.

    Their thesis strongly suggests that if we ever attempt the creation of Artificial Consciousness (AC) (science fiction which no one is working on or even considering) or the uploading of a developed human consciousness (probably ditto), we’d better at the same time create a complete sensory surround with Artificial Embodiment (AE) in an Artificial World (AW) if we’re to avoid the most likely outcome, another AI—Artificial Insanity.

    Mike, we should note that the minds of the severely injured patients you describe are fully functional minds that developed with embodied cognition. A freshly developed disembodied animal consciousness devoid of sensory input seems impossible though and, if its ongoing functionality is at all possible, I’d suggest that taking extraordinary measures to keep such a “brain in a vat” alive seems morally hazardous. Also, regarding your remark that, “We can lose small parts of the cortex and still retain most of our awareness”—we can actually lose an entire hemisphere (as in a radical hemispherectomy to relieve seizures) and our consciousness (as awareness) remains complete and fully functional. The contents of consciousness are significantly reduced, to be sure, but consciousness itself is undisturbed.

    • Thanks for the book recommendation!

      Definitely it’s not at all obvious that an animal consciousness could develop without a body.

      Last year there was a story about scientists keeping a pig’s brain alive outside of a body for something like 36 hours. There was no evidence of it being conscious, although at one point they thought they might have detected activity consistent with consciousness via EEG. They reacted with “both alarm and excitement” before determining it wasn’t actually there. Talk about your ethical landmines.

      From what I’ve read about people who’ve lost an entire cortical hemisphere, a lot depends on when it happens. It always comes with dramatic perceptual, movement, and cognitive deficits, but younger people have a better ability to adapt. For example, an adult who loses their left hemisphere probably loses language, but in a young child the right hemisphere can still take over language. (I say “probably” because there are people whose language centers are in their right hemisphere.)

    • James Cross says:

      That book does look good and I just ordered it.

      I’ve thought for a time that the foundations of consciousness began when life started to create a center (maybe multiple locations in the octopus) of neurons designed to control the whole organism and its interaction with the environment. To do this, neurons needed to be able to map the body and the external world. This probably began with the first bilaterians, which were worms with a concentration of neurons near the mouth. So the initial goal was control of the head, the mouth, and the swallowing of food. This expanded through evolution with additional sensory capabilities and new goals: predator avoidance, hunting in the case of predators, finding mates, etc.

      In my view, the primary sense is the sense of being in a body.

      I have thought that consciousness itself may have arisen directly, or as a side effect, from the need to coordinate the senses and the actions of the body as these capabilities expanded.

      That so much of this capability is unconscious to us, however, suggests consciousness is required to handle the leftover part of coordination between the self and world that is not automated and unconscious. In some ways, this may sound like a definition of consciousness, but it presents another way of looking at human consciousness. Human consciousness may be possible in part because we have so much of our control and interaction with the environment automated and unconscious. This allows humans to engage in novel solutions and interactions with the natural and social environment.

      For the worm, consciousness may be moving the head, opening the mouth, swallowing. We have pushed that kind of stuff into the unconscious so we can think about other things.

  7. tienzengong says:

    There is no chance for you to understand Sean’s explanation of gravity/dark energy, as he does not understand it either; he is just talking.

    I will give you a very, very simple understandable explanation.

    The expansion (dark energy) is positive; that is, positive energy comes out of nowhere all the time, and it just does not make sense (although at the cosmic level, energy is not required to be conserved).

    (Not required to be conserved) does not mean it is not conserved.

    The gravity is NEGATIVE energy which cancels out that positive energy.

    In a sense, the gravity is the result of the expansion.
    Of course, it can be viewed as the CAUSE of the expansion.
    Either way is correct. They are mutually in a cause/result entanglement.

    • Thanks Tienzen. I’m not sure if I follow, but the accountant in me understands that positive and negative energy would balance the energy books.

      Is it just repulsive gravity that is negative? Or all gravity?

      • tienzengong says:

        There is only ONE gravity although having two different gravity-theories (Newton and Einstein, excluding the quantum-gravity which fails on observation level).
        Yes, the bookkeeping is the key.
        For details, see below.

        For the Big-bang cosmology, the universe is finite, that is, it has a boundary (surface). Then,
        iii. Where is that boundary?
        iv. What is outside of that boundary?
        v. How to move that boundary to its outside?

        When we answer the question iii, iv and v, we get the answers for the i. and ii automatically. Let me give some short answers here.

        For Q iii — the answer is Here.
        For Q iv — the answer is Next.
        For Q v — the answer is with ħ (Planck constant).

        With these three answers, we derive the force equation: moving from {[Here (now), Now] to [Here (next), Next]} = {Delta S, Delta T} = (Delta S x Delta T);

                              F =  ħ/(delta S x delta T)
        

        This is the force driving the universe accelerating outward.

        See, http://prebabel.blogspot.com/2013/11/why-does-dark-energy-make-universe.html

        Yet, this expanding acceleration force is a unified force which derives all other forces, including the gravity.

        F (unified) = K ħ / (delta T * delta S)

        K is a coupling constant, ħ Planck constant, T time, S space. From this unified force equation, the uncertainty principle can be derived very easily.

        Delta P = F * Delta T = K ħ/ Delta S

        So, delta P * delta S = Kħ
        Thus;
        1. When, K >=1, then delta P * delta S >= ħ
        2. When K ~ 1, the uncertainty principle remains significant.
        3. When K << 1, then the uncertainty principle is no longer important.
        Now, FU physics has reproduced the traditional Uncertainty Principle, that is, making contact with Quantum physics.

        Furthermore, the Super Unification can be easily done at the unification scale when r (distance between two masses) is written r = delta r, and delta r = C delta t. C is the light speed.

        a. The Newtonian gravity equation can be rewritten as follow:
        F (Newton gravity) = G mM/r^2 = (ħC/Mp^2) mM/r^2
        = (mM/Mp^2)( ħc/r^2) = f1 (ħc/r^2)
        = f1 (ħc / (delta r)^2) = f1 (ħ/(delta r * delta t))
        (Mp is the Planck’s mass)

        b. The Coulomb law can be rewritten as:
        F (electric) = k* q1 * q2/r^2 = k * f * e^2/r^2
        = f2 (ħ c/r^2)
        = f2 (ħc / (delta r)^2) = f2 (ħ/(delta r * delta t))

        Obviously, both of them (gravity and electric forces) are unified with the Super Unified Force equation.

                                                    F (unified) = K ħ / (delta T * delta S)
        
