Late last week, there was a clash between philosophers on Twitter over panpsychism. This was followed by Philip Goff, an outspoken proponent of panpsychism, authoring a blog post arguing that we shouldn’t require evidence for it. This week, Susan Schneider did a (somewhat confusing) Big Think video arguing that panpsychism isn’t compatible with physics, and Annaka Harris did an interview with Singularity Hub on her new book, which argues for panpsychism.
Panpsychism, the view that consciousness pervades the universe, seems to be in the air. Everyone is talking about it. Christof Koch, another panpsychist, has a new book coming out later this year, which I don’t doubt will expand on his views. And we’ve discussed David Chalmers’ fascination with it.
Panpsychism, in the dualist sense that most of these people are conceiving of it, seems to come from two conclusions. First, that conscious experience cannot be explained in terms of physics, that no explanation will ever bridge the gap between mechanism and subjective experience. As a result, experience must be something irreducible and fundamental.
And second, that there is no evidence that the physics in the brain is fundamentally different from the physics anywhere else.
If you accept these two precepts, then panpsychism seems like a reasonable conclusion. Experience is seen as a fundamental force, latent in all matter, with concentrations of it higher in some systems, such as brains.
It’s a view that’s extremely easy to strawman, to derisively talk about conscious rocks, protons, or thermostats, as though the view implies that these objects have the same kind of experience that humans or animals have. Most panpsychists would say that they’re not saying that. What they describe is an incipient level of experience, a low level quantity in most matter that exists in much higher levels in brains.
This common view seems to fit more with what Chalmers calls panprotopsychism, the view, not that consciousness pervades the universe, but that proto-consciousness does. Panprotopsychism seems in danger of just being reductionist physicalism by another name, but panprotopsychists point out that they’re not saying that experience reduces to physics, but to proto-experience, which itself remains irreducible to physics.
I personally don’t buy the first precept above, about experience not being explainable in physical terms. In my view, as I’ve explained before, the conviction arises from failing to appreciate that introspection is unreliable. Just as our senses can be adaptive but inaccurate, our inner senses can as well. Explaining why we have an inaccurate intuition of a non-reductive essence is much easier than explaining the non-reductive essence.
But if I were convinced of the first precept, I could see the appeal in panpsychism (or panprotopsychism). And I do sometimes wonder if attacking panpsychism is warranted, since if panpsychism gets people out of looking for magic in the brain, that’s a good thing. Optimistically, a functionalist and a panpsychist could bracket their metaphysical differences and then assess scientific theories about the brain together.
Except that panpsychists and functionalists often assess theories in a different manner. If you think consciousness is unexplainable and irreducible, then you’re not going to really expect scientific theories to provide a full explanation. That might be fine if by “experience” you mean something ineffable and separate from any of the contents and functionality of consciousness. But based on several conversations I’ve had, there tends to be disagreement over exactly what is and isn’t function.
I think that’s why IIT (Integrated Information Theory), which doesn’t really attempt to explain functionality, seems plausible to many panpsychists. But for a functionalist, an identity theory like the strong metaphysical version of IIT is utterly unsatisfying. A functionalist not only believes that a functional account is possible, they won’t be satisfied with anything less.
That’s aside from the fact that there’s simply no evidence for dualistic panpsychism. Goff points out that we can never observe consciousness, not even in brains; therefore, he contends, it’s unreasonable to require evidence for it anywhere else. I’m tempted to invoke Hitchens’s razor here: what can be asserted without evidence can be dismissed without it. But it’s better to just note that consciousness is only a concept for us because we can infer it, Turing style, in some systems, and not in others.
I’ve sometimes been accused of panpsychism for noting how subjective this inference is. But I’m closer to illusionism than panpsychism, although I’ve noted before that the line between illusionism and naturalistic panpsychism may only amount to terminology preferences. (I’m also not a fan of the “illusion” label, preferring instead to say that consciousness only exists subjectively.)
Another big issue for panpsychism is that it seems to require epiphenomenalism, the idea that consciousness has no causal effects on behavior. Harris in her book seems to largely bite this bullet, although she does admit that our ability to talk about conscious experience is a problem for this view.
But she also describes what appears to be an increasingly common move from panpsychists, to point out that we don’t really know what matter intrinsically is. Maybe its intrinsic nature includes consciousness, and maybe this affects its causal properties. If so, it might allow panpsychists to evade the epiphenomenal trap.
Except this doesn’t really work. To begin with, what exactly do we mean by “intrinsic nature” when referring to matter? Matter at what level? Something’s “intrinsic” nature seems like the extrinsic nature of its components.
And physics has managed to reduce matter down to elementary particles and quantum fields. At that level, its behavior appears to rigidly follow physical laws. There’s no room for any conscious volition. Even quantum randomness smooths out to complete determinism with large numbers of events. I think this was the point Schneider was making. (Although physicist Sabine Hossenfelder handled it more clearly a few months ago.)
So panpsychism is built on a questionable intuition (albeit one everyone troubled by the hard problem shares), lacks evidence, can skew evaluation of scientific theories, and either requires epiphenomenalism or runs into problems with physics.
From my point of view, its main virtue is in getting people out of the mindset that there’s something spooky happening in the brain. But I’m not sure if that’s enough.
What do you think? Are there arguments for dualistic panpsychism I’m missing? Or panpsychism overall?
120 thoughts on “The problems with panpsychism”
“I personally don’t buy the first precept above, about experience not being explainable in physical terms.”
The problem is there is a subjective aspect to experience. You can’t explain subjectivity from the outside because it is inside the experience.
No matter how many correlations you make you can’t get there. It is a philosophical problem.
You can narrow the correlations until the difference amounts to one of perspective. But there’s admittedly no way to gradually shift between the perspectives. It’s like being in a house versus seeing it from the outside. The views will always be different, even if we can look into the house from the outside. But in the case of a house, we can stand in the doorway threshold to reassure ourselves that they are one and the same. In the case of experience, we can only narrow the correlations to the point that the theory is only that change in perspective.
The perspective difference is profound, and I don’t doubt people will insist that there’s something more than perspective involved, long after the brain has been completely mapped.
“You can narrow the correlations until the difference amounts to one of perspective. ”
That difference is the problem. So you still have the problem of “experience not being explainable in physical terms”.
How is a difference in perspective still a problem of it not being explainable in physical terms? Connect the dots for me. What about that difference defies physics?
As you stated: “The perspective difference is profound.”
So are you saying it isn’t profound?
I am the one needing the dots connected about how something external to subjective experience can explain subjective experience. How do you propose to ever make “red” from neurons, even if you can identify the exact set of neurons and how they work to give the subjective impression of “red”?
It seems to me that “red” and neurons are in completely different domains. Or, if the same domain, then there would be no ironclad way of knowing whether the domain is physical or mental.
Not all profundity requires explanation. There’s nothing particularly mysterious about looking at a mountaintop. We understand the geology and meteorology involved. Yet often its appearance feels profound to us. (Some people have had religious experiences from it, although I think that amounts to serious over-interpretation, and the effect seems very culturally specific.)
Above, you said that the difference in perspective was the issue. I asked why it was an issue. Now you’ve switched back to the more loaded term “subjective experience”, implying that more than perspective is at stake. But once we’ve narrowed the correlations and reduced the gap to one of perspective, it’s a mistake to forget what’s already been explained. You might argue that we’ll never be able to narrow the correlations to that extent, but then it just becomes a long term empirical question.
I expect red to eventually be narrowed down to one of those correlations. We already have some of it with the L cones and opponency ganglion cells in the retina. We know the perception of red is eventually generated from those signal patterns. Perhaps it eventually gets boiled down to a protocol signal, a mechanism to transmit information from the visual cortex to the motor planning parts of the brain, one that is associated with a number of affective reactions. Together that raw protocol signal and its associated affects make up the experience of redness.
Why does red have redness? Why do feelings have feelingness? At some point we have to ask what the alternative might be. It’s not like the brain could transmit wavelength numbers or some other symbology. The communication needs to be pre-linguistic and pre-symbolic. The simulation engine needs just enough to factor it into its processing. I do suspect we have introspective meta-representations of redness and feelingness, concepts that get looped in, enabling these discussions. I expect something like this to be part of the overall correlations.
Eventually we’re just left with a difference in perspective, the communication vs the phenomenal quality. Insisting on explanation beyond that might be like insisting that we explain why 1+1=2, or why triangles have three sides. What would such an explanation even look like? We reach a point of subjective irreducibility, a limitation in one of the perspectives that we can only overcome by switching to the other perspective, sort of like a software bit mapping to a transistor.
You started using the term “perspective”.
I don’t know what is loaded about “subjective experience”, which was the term I started with. That is the core of Chalmers’ hard problem.
Ultimately, you either think it is something that needs to be accounted for or you don’t. From a scientific standpoint, maybe it doesn’t matter. That’s a fine position to take, and I wouldn’t argue if you just said we can’t account for it but it doesn’t matter, that we don’t care. But then you go on to say we can explain it in physical terms.
I don’t think you can make the jump from physical to mental. It isn’t in the realm of science to solve.
So. Many. Topics.
Let’s start with the intuition that consciousness is ineffable and possibly fundamental. John Searle’s paper “The Phenomenological Illusion”, an analysis of the phenomenology of Husserl, Heidegger, etc., explains it pretty well. From the perspective of a subject, i.e., the phenomenological perspective, i.e., if you bracket off empirical facts and consider only how you know things via experience, then “experience” is fundamental, and empirical facts are just derivative. This would naturally lead to the intuition that experience could be fundamental to anything and everything. But Searle, and I expect you, would say that it’s simply an error to draw conclusions while bracketing off the empirical facts.
I think that’s definitely right. Much of the mystery comes from declaring that things are separate, which typically involves separating the actual explanation from what needs to be explained, then lamenting the unexplainable nature of what needs to be explained. It’s separating puzzle pieces that fit together, then declaring there’s no solution to the puzzle.
On empirical facts, I suspect many would point out that such facts are themselves conscious experience, that maybe we’re letting the secondary reality take precedence over the primary one. But it’s the nature of reality that we’re forced to develop theories, predictive models. We can only judge those models by how accurately they predict future experiences. But often those predictive models tell us that our initial intuitive models are wrong.
Ugh! What a load of equivocation in those arguments, between explanatory and theoretical reduction.
In its most careful exposition, panpsychism seems to claim: If the world is intelligible, then it has an intelligibility property. The world is intelligible. Therefore it has an intelligibility property.
I am blown away by the banality….
I guess it depends on what one’s definition of “intelligible” is.
Yup. I mean amenable to explanatory reduction, i.e. “the cat on the mat” is that cat on my mat, which is fat and covered with scat, etc.
“And physics has managed to reduce matter down to elementary particles and quantum fields. At that level, its behavior appears to rigidly follow physical laws. There’s no room for any conscious volition. Even quantum randomness smooths out to complete determinism with large numbers of events.”
I am not sure what you are saying but I think almost exactly opposite is the case.
“One more interesting thing of note is that the authors make a fascinating calculation which shows how quantum uncertainty in colliding things transforms from tiny quantum fluctuations to large-scale, obvious things. What’s amazing is that they calculate how many times different things need to bounce together before the whole situation is dominated by quantum uncertainty. The results might surprise you!
System: Number of Collisions
Air in the room: -0.2
Water in your body: 0.6
Bumper cars: 25
The above table shows you how many collisions you’d need in each system for quantum uncertainty to take hold. You can see that in the top two (air and water), the number of collisions is less than one, and therefore the systems are quantum-uncertain all the time.”
I’m skeptical of those calculations. The paper the post refers to is titled “Origin of probabilities and their application to the multiverse”. And I think if those results were widely accepted, other physicists would have been blogging about it. (Or has it been and I’ve just missed it?)
But in any case, quantum randomness still obeys mathematical laws. There still doesn’t seem to be any room for conscious volition. Of course, I won’t be surprised if someone somewhere has tortured the definition of “volition” until they could declare that it is in fact there.
Looks like the authors may be colleagues or students of Sean Carroll. He’s mentioned in the paper and given thanks at the end.
How do you have a negative number of collisions? (Air in the room)
While I’m tempted to just reject panpsychism, I do think that the ‘strange loop’ type of architecture that seems to give rise to consciousness does not rely on complexity, as such. Therefore we could look for the same architecture in simpler and simpler examples and may find consciousness in some surprising places.
Something keeps fundamental particles in shape – that might just be first order controlling interactions, or it might possibly include the extra twist, loop and optimisation criterion that seem to give rise to consciousness.
…and at the large scale I do see human organisations, such as companies, as conscious at the whole organisation level, because they represent themselves to themselves, sense their situation, measure outcomes and select actions, irrespective of the individual humans and roles that make them up. Indeed we may be able to improve human organisations by recognising and actively developing the processes that make them conscious.
I think there is a difference between naturalistic and dualistic panpsychism. One fits within the physical or causal framework as we understand it. The other requires either an expansion of physics or going beyond it. Much of what you discuss strikes me as fitting within the naturalistic version.
Naturalistic panpsychism is simply a way of looking at other systems. Since I believe consciousness is in the eye of the beholder, a judgment about how similar to us other systems process information, I can’t really say it’s wrong. But I’m not sure how productive it is.
For example, would particle mechanics ever tempt us to treat them as moral subjects? That said, we actually already treat organizations as moral subjects in many ways. Corporations can be sued and punished. But I don’t know that anyone really thinks of a corporation as sentient.
Ultimately, it comes down to what we’re prepared to call “conscious”. It’s also a stark reminder that many simple definitions of consciousness inevitably seem to lead to results that don’t meet the intuitions most of us have about it. But then our intuitions don’t appear to be consistent.
I think Sabine Hossenfelder was exactly right. Panpsychism is science fiction.
“But based on several conversations I’ve had, there tends to be disagreement over exactly what is and isn’t function.”
Now figuring that out sounds interesting. Do you have a definition of what you mean by a function?
I tend to think of a function from the math/CS perspective: Something that takes input(s), processes them, and returns a result. To me, a function is therefore primarily an abstraction — a description of a process.
I rarely think of the physical world in terms of functions, but I do think of it in terms of functionality — the capabilities of a physical system. One might also say that the “function” of the elbow is to allow articulated arm movement, but the use of “function” here relates to the functionality of the elbow.
So I struggle when you ask about functions in the physical world, since I find it something of a category error — applying abstractions to the physical world. I suspect we might be talking about slightly different things.
Wyrd, as you state, there are two ideas of “function” which we distinguish as the mathematical and the purposeful/teleological/teleonomic.
You said “To me, a function is therefore primarily an abstraction — a description of a process“, referring to the mathematical idea. And your idea is correct, but I’m wondering if you understand that every physical process has such a description. You say you rarely think of the world in terms of functions, but I think you should start.
Understanding that every physical process can be described as a function lets you understand that every physical process is potentially multiply realizable. FWIW, it can also be used to explain Kant’s noumenon, Russell’s(?) process philosophy, and panprotopsychism.
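Since the claim here turns on describing a process purely by its input–output mapping, a toy sketch may help (my own illustration, not the commenter’s; the function and names are hypothetical). Two mechanisms with different internals compute the same mapping, so from the outside they are interchangeable realizations:

```python
# Multiple realizability, in miniature: the same input -> output
# mapping realized by two different "substrates".

def double_arithmetic(x: int) -> int:
    """Realization 1: doubling via multiplication."""
    return x * 2

def double_bitshift(x: int) -> int:
    """Realization 2: doubling via a left bit shift."""
    return x << 1

# From the outside, the two realizations are indistinguishable:
# every probe (input) yields the same observation (output).
for x in range(10):
    assert double_arithmetic(x) == double_bitshift(x)
```

Anything that only observes inputs and outputs has no way to tell which realization it is dealing with, which is the point being made about physical processes generally.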
“I’m wondering if you understand that every physical process has such a description.”
I’ve been designing software since 1977 and have created many software models of physical processes, so I’m gonna go with “yes.”
“You say you rarely think of the world in terms of functions, but I think you should start.”
You’re ignoring the totality of what I said: “I rarely think of the physical world in terms of functions, but I do think of it in terms of functionality — the capabilities of a physical system.” Combine that with the bit about having designed software since 1977…
“Understanding that every physical process can be described as a function lets you understand that every physical process is potentially multiply realizable.”
I agree the first clause is probably true in theory, but some cases may be too complex to do that effectively. (And of course we’re a long way off from being able to write functions for everything we know about. The behavior of quarks in a proton, for instance, is currently computationally intractable.)
I have major issues with the second clause (which you will know by now if you’ve paid any attention to my point of view). Essentially, in my view, the only thing effectively realizable on different substrates are things that are abstractions to begin with.
For instance, the abstraction of simple math (plus, minus, etc) is realized in every tiny calculator and in various software calculator apps and in computer hardware itself.
But the only way you can realize a pine tree is by planting one and having it grow.
(This is essentially the “simulated water isn’t wet” thing.)
That’s the point of Kant’s noumenon. Everything is an abstraction. An electron is an abstraction. We only know about electrons because we can see what they do. Input—>[electron]—>Output. As far as we know there may be seven different kinds of things which all appear as electrons to us because all the inputs match the outputs we expect. We cannot know unless we can find some input which will have different outputs. But then we will just have more things that could be multiple things in reality, more abstractions.
“An electron is an abstraction.”
Our understanding of them may be, but I believe in actual electrons. Our goal is for our abstraction to match that reality as closely as possible.
“We only know about electrons because we can see what they do.”
Are you including measuring their basic properties as something “they do”?
“As far as we know there may be seven different kinds of things which all appear as electrons to us because all the inputs match the outputs we expect.”
Any single particle with an electric charge of -1, a spin of 1/2, a mass of 9.109×10^-31 kg, and a handful of other quantum numbers, is an electron by definition.
Is it possible there are multiple different processes that give rise to an apparent particle with those properties? It can’t be ruled out, but it’s an extraordinary (very hand-wavy) claim that needs some strong evidence to support it.
Things to keep in mind: Every electron is 100% identical. And their defining properties never change. I would think that shows that any “inner life” an electron could have is sheer fantasy.
Sauron’s electrons maybe have consciousness, but the electrons in this world, not at all.
In a word, yep. All of those values were determined via processes that could be expressed as Input—>[electron]—>Output.
That’s the entire point. It can’t be ruled out. Ever. It could possibly be ruled in, the way its counterpart, the proton, was shown to be made of quarks, but never ruled out. Because everything we know about anything necessarily comes from processes that look like Input—>[mechanism]—>Output. That’s what measurements look like.
I’m not saying electrons have consciousness. I’m saying that electrons have panprotoconsciousness, which is simply the ability to interact with the environment, which is the ability to be the mechanism in Input—>[mechanism]—>Output. It’s not a very exciting statement, and I doubt those promoting panprotoconsciousness would agree, but it’s the one thing that everything has which is necessary for Consciousness. Necessary but not sufficient.
“That’s the entire point. It can’t be ruled out. Ever.”
If the best support you’ve got is “it can’t be ruled out,” in my view you’ve got exactly nothing. Given what we know — and we know a lot — about electrons, I see no indication they are anything other than what they appear to be: ripples in the electron field. (Likewise all the other “particles.”)
“Necessary but not sufficient.”
To me that’s the key point. It picks out a part of the picture and says it’s the source of the picture.
Everything is made of particles, so there’s nothing special about the brain being made of them. And it is only in this one very special configuration that particles participate in something that is conscious.
Every living organism is made of cells, some of which are neurons, but only in the brain configuration do cells cognate, and only in the human brain does our form of cognition occur.
Doesn’t that alone make it pretty clear consciousness isn’t in the parts, but in the whole? (Not to mention there simply isn’t any “room” for particles to be other than the simple building blocks they appear to be.)
Another problem is that this jumps from “interacting with the environment” all the way to cognition and consciousness in one jump. If this notion were true, we’d expect to see low levels of conscious behavior in other complex systems.
And we don’t. Not one shred of it.
“Do you have a definition of what you mean by a function?”
You’re probably reading more into the omission of “ality” at the end of that word than I intended. In this context, I just meant that it’s part of the causal framework that can lead to behavior. Our experience of the vividness of red probably comes from an adaptation to find ripe fruit. Our feelings of satisfaction at learning a new fact probably comes from things like an instinct to learn about our environment, because it was efficacious later for hunting, foraging, finding mates, etc.
But it’s also a view that says that our ability to do this kind of processing and learn from it requires what we call “experience.” Experience isn’t an add-on, but a vital part of the functional structure. We don’t experience pain to tell us to move our hand off a stove. We have reflexes for that. We experience pain to tell us next time we’re near a stove to inhibit any impulse to put our hand there, or if it’s related to an ongoing injury, to spur us to run simulations on how that injury might be protected and ameliorated.
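The idea that pain works as a learning signal, rather than a mere reflex trigger, can be sketched in a toy way (my own framing, not the author’s; all names and numbers are made up for illustration). A painful outcome lowers the stored value of the action taken, inhibiting that impulse the next time around:

```python
# Toy sketch: pain as punishment that reshapes future behavior.
# The learning rate and pain magnitude are arbitrary illustrative values.

action_values = {"touch_stove": 1.0, "avoid_stove": 0.0}

def learn_from_pain(action: str, pain: float, rate: float = 0.5) -> None:
    """A painful outcome reduces the learned value of the action taken."""
    action_values[action] -= rate * pain

# Touching the stove hurts once...
learn_from_pain("touch_stove", pain=4.0)

# ...and afterwards the impulse is inhibited: avoidance now scores higher.
preferred = max(action_values, key=action_values.get)
```

After the painful episode, `touch_stove` drops below `avoid_stove`, so the system’s future choice changes without any reflex being involved, which is the functional role being claimed for experienced pain.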
When I say there’s disagreement over function (I probably should have just said “functionality”), I only meant whether any of this is superfluous to the operations of an organism.
“When I say there’s disagreement over function (I probably should have just said ‘functionality’), I only meant whether any of this is superfluous to the operations of an organism.”
By “this” I’m assuming you mean the experience part.
There is the (separate?) question of why there is something it is like, because a general view is that (zombie fashion) experience doesn’t seem absolutely required. I do take your point, though, about its value.
“We experience pain to tell us next time we’re near a stove to inhibit any impulse to put our hand there…”
But that heat damage could be registered (machine-like) as a datum. It would be a more logical, more effective way for the organism to function. Our experience, especially of pain, can mess up our psyche in all sorts of ways. (Think of all the people mentally scarred for life due to a traumatic experience.)
On some level, that we experience is a real pain in the ass that gets us into all sorts of trouble (wars, road rage, jealous lovers). On the other hand, great music and stories.
I mean, I’ll take the trade-off, but human experience sure is messy and problematic.
“But that heat damage could be registered (machine-like) as a datum.”
I think that’s basically what’s happening. In our case, it’s an evolved mechanism, so it won’t have the “machine-like” feel of something we might design. But I think our experience of pain is the registering of damage (or potential damage) as data for making predictions.
Pain certainly has its problematic psychological side effects, but its utility can be seen in patients who, due to brain lesions in the connections between the insula and anterior cingulate cortices, lose the ability to feel pain. Their life expectancy is reportedly not high. The loss of the experience of pain has starkly detrimental effects on our survivability.
“The loss of the experience of pain has starkly detrimental effects on our survivability.”
Well, sure, because experience is part of our existence. Those who lose pain experience lose all indications of physical damage — there’s no data at all.
Consider a “zombie” patient that loses the experience of pain but not the data connected with it. If that seems at all coherent, we’re smack dab in Chalmers’ hard problem.
I personally don’t think it is coherent. In my mind, the reception of damage signal data for planning is the experience of pain. But admittedly, our model of the experience and our model of the mechanism are radically different, which is why zombies intuitively feel coherent to many people.
So a robot, for which damage would just be numbers outside a defined range, is experiencing pain?
For a robot to feel pain, a number of things would have to be true. (Things that are generally true for land vertebrates.)
It would need to have self concern, or at least a concern that the signal was relevant toward.
It would be unable to turn off the reception or processing of the signal.
Processing the signal would result in an automatic reaction that it either had to engage in or constantly override.
The processing and overriding would require significant ongoing resources, which would stress the system.
If it had those things, then I’d say it was feeling pain, or something similar enough that I’d still be fine with the label.
These attributes exist in evolved systems because of how they evolved. I hope no one ever develops a machine that way.
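The four conditions listed above can be made concrete with a minimal sketch (entirely my construction, not the commenter’s; the class and its numbers are hypothetical):

```python
# A toy model of the conditions for a robot to count as feeling pain:
# 1. self-concern (a state the signal is relevant toward),
# 2. inability to turn off reception of the signal,
# 3. an automatic reaction it must engage in or override,
# 4. overriding consumes ongoing resources, stressing the system.

class PainCapableRobot:
    def __init__(self) -> None:
        self.integrity = 100    # condition 1: a state it works to protect
        self.resources = 100    # condition 4: budget drained by overrides
        self.withdrawing = False

    def damage_signal(self, severity: int) -> None:
        # Condition 2: there is no flag to ignore this signal;
        # it always registers against the state it cares about.
        self.integrity -= severity
        # Condition 3: processing triggers an automatic reaction...
        self.withdrawing = True

    def override_withdrawal(self) -> None:
        # ...which can only be suppressed at an ongoing cost.
        self.resources -= 10
        self.withdrawing = False
```

A robot built this way can’t simply log the damage and move on: it either withdraws or pays to suppress withdrawing, which is what makes the signal pain-like rather than a mere datum.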
“I hope no one ever develops a machine that way.”
It would be a pretty strange machine indeed!
“I think Sabine Hossenfelder was exactly right. Panpsychism is science fiction.”
The problem is that electrons do think when we assemble enough of them together with other particles in the right configuration. At least, that is the physicalist view.
So how many electrons does it take to think?
Your question jumps over too many layers of abstraction. How many electrons does it take to run the game Angry Birds? Or to brew a pot of coffee? Whatever the answers, it doesn’t seem like they would be particularly useful for game writers or baristas.
Still it must be electrons and other particles that are thinking. We could probably estimate a number – a big number for sure – of how many it might take for a thought. At some point, mind has to come from matter. So it must have been there all along in some form or it emerged. Is there another option?
By that same line of reasoning, there must be some minuscule form of Angry Birds in matter. The Angry Birds we see on our phones is just a concentrated form of it. There must also be a minuscule form of tornadoes in particles, since tornadoes are ultimately made of matter.
Of course, it’s not productive to say electrons run Angry Birds, blow down houses, or think. They are a fundamental element of reality, a lower level component of all those things.
The other option is that thought is like the other things, a composite phenomenon, a set of information processing capabilities in animal brains.
We can more or less explain tornadoes by reference to matter itself. In that sense tornadoes are somewhat implicit in the properties of the gases in air, temperature, and fluid dynamics that are properties of matter. The same with other non-conscious things in nature. In a sense they would be implicit in matter that composed them.
So you think subjectivity and consciousness can emerge directly from the properties of matter in the same way?
Maybe you could come up with a Fujita scale for consciousness based on something?
C-1 Snails – C-5 Humans
Unfortunately I am not sure what physical property (like wind speed) you would be measuring with the C-Meter.
I think consciousness is more like Angry Birds than like a tornado: a system of information processing. I’m not sure trying to measure it with a single number would be productive, although I know people are trying.
“Is there another option?”
Yes. The assembly. The whole thing. The brain.
What you’re suggesting is that “bridge-ness” exists in bolts and girders. But so does “skyscraper-ness” and “baseball stadium-ness” and “airplane-ness” and “dump truck-ness” and many more.
Likewise, electrons participate in many things without being any of those things.
Don’t mistake the part for the whole.
The problem with the Angry Birds analogy is that it was created by conscious beings. It is not just electrons moving through a processor and on a display. It has a symbolic meaning that is only understood by other conscious beings that understand the symbolic context. So you are trying to use a creation of conscious beings as an analogy for consciousness.
It would be better if you could give an example of something with symbolic meaning arising spontaneously in nature (other than consciousness, of course).
I missed above that you used the phrase “in the right configuration”, which actually does the work of crossing the layers I said you were jumping. Sorry, my bad. I would just note that that phrase is crucial.
On Angry Birds being the creation of conscious beings, it’s worth noting that every conscious being is the creation of other conscious beings. You could say that sexual reproduction is natural and building Angry Birds isn’t. But then we’re in essence excluding something we understand and insisting that we only look at evolved things that we may not understand nearly as well. It seems artificial, an epistemic constraint that ensures the main topic will remain mysterious and unexplainable.
“…every conscious being is the creation of other conscious beings.”
But there’s a key difference between making babies and making computer programs: the former is accidental and unpredictable, due to genetics and chance; the latter is planned by intelligent minds.
Two very different categories of being.
In answer to an earlier question: No, I can’t think of anything in nature (other than brains) that uses symbolic processing. Everything other than brains was designed by brains.
“Yes. The assembly. The whole thing. The brain.
What you’re suggesting is that “bridge-ness” exists in bolts and girders.”
Are you suggesting the mind is the brain?
“Are you suggesting the mind is the brain?”
I’m clearly saying mind is not due to a conglomeration of proto-conscious particles.
In terms of the question you just asked: Yes, “bridge-ness” comes from bridges, “skyscraper-ness” comes from skyscrapers, etc. The “-ness” comes from the whole, not the little pieces.
“The problem is that electrons do think when we assemble enough of them together with other particles in the right configuration.”
No, they really don’t. They are just an attribute of a system that does think.
The electrons are an attribute?
I would think they would be a physical part of the system and the thinking would be the attribute.
If you rotate a coil of wire in a magnetic field, you generate electricity. You output something physical that can be measured. Consciousness is not like that. You can measure blood flow in the brain but you can’t measure the output.
So on what basis is it physical?
“The electrons are an attribute?”
Heh. Just as I was hitting send I thought: “Oh, shit! Someone is gonna focus on the word ‘attribute’ — I should have said “part.”
Yes, part. 😉
“So on what basis is it physical?”
Consciousness? We don’t know that it is. This thousands-year-long debate is about that very thing. Materialists say yes, various stripes of dualists say no. (Personally, I’m some stripe of dualist.)
What I reject utterly is the notion of proto-consciousness in particles. That’s a non-starter with me. It’s not like, say materialism, where I can at least see the other side’s point and agree that they might be right.
I never was proposing proto-consciousness, except maybe as one of a laundry list of possibilities. My thinking was that electrons think when they’re in a configuration with other particles.
If we agree the configuration is what matters (and the substrate doesn’t) then we do agree.
Maybe but depends if the configuration requires meat. In other words, maybe only a configuration of biological brain material works.
I don’t believe meat is required. I think any sufficiently large, sufficiently interconnected, parallel processing network (possibly restricted in space and possibly interacting at an EMF level) would be conscious.
As I’ve long said, Asimov’s Positronic Brain ought to work.
Science fiction for sure. Sounds like IIT.
What is science fiction about the idea that two things identical in form and function accomplish the same thing? Why would the specific chemistry matter?
To the extent IIT specifies a sufficiently connected network, I absolutely agree it’s a required aspect of cognition. But IIT, as I understand it, ignores the capability of the network nodes, so it’s not a sufficient explanation.
Let’s say hypothetically, that consciousness is created in neural microtubules in living organisms. This is actually a theory.
I suppose if you could put living microtubules into some other hardware and could sustain them then they would be identical in form and function and could produce consciousness. But that would actually be the same chemistry.
If you came up with a substitute, made of metal for example, that behaved exactly like microtubules, then maybe that could produce consciousness, and it would be identical in function. I am not clear how it would be identical in form, since living microtubules aren’t made of metal.
And what if it is exactly the chemistry and form of living microtubules that is required to create consciousness?
Let’s say we have proprietary hardware that will only run the WS/OS because it has the proprietary drivers for the hardware. Obviously we can’t install Linux on the hardware and have it work. The substrates in this case are almost the same in function and even in form, yet you could not install the different OS on it and have it work.
“…you could not install the different OS on it and have it work.”
You could with a software emulator. (I’ve run various mobile device OSes on Windows with an emulator while developing app software.)
“Let’s say hypothetically, that consciousness is created in neural microtubules in living organisms.”
This is that Hameroff-Penrose thing? The one that posits quantum effects in those microtubules?
Is it possible there are low level chemical effects, or even quantum effects, that cannot be replicated in any other way than with meat? Sure it’s possible.
Which would then confine consciousness to biological brains only, and I’d be a little surprised by that, but also fine with it. I’ve always believed consciousness arises from the entire physical system.
But if, as many believe, consciousness doesn’t supervene on such low-level effects (and we have no current evidence that it does), but lies in the operation of neurons and wiring, then I see no reason such a network needs to be made of meat.
I’m not sure my example was all that great. Let me try again.
To accomplish X, we need a liquid solvent with a pH of 7 that freezes at 32 degrees F and boils at 212 degrees F. So what would be identical in form and function to water that is not water?
We can find a lot of liquid solvents but they would fail to match the other required attributes.
The only thing that is going to be absolutely identical in form and function is the thing itself. So the question would be whether a liquid solvent that freezes at a higher temperature or has a different pH would be close enough to accomplish X. Or whether some of the criteria we thought important actually weren’t.
You seem to be thinking that consciousness is something like portable software that will run on any hardware as long as it is compiled for the platform. But even a simple text editor from today might not be able to run on the early Amiga chips because of memory or other incompatibilities. It probably couldn’t even be compiled. And in that case we are using an analogy to substrates that are substantially similar.
So it boils down to how portable is consciousness. It might be something that is tied quite directly to biological material and won’t port to silicon and circuits.
It always comes down to what you think consciousness is. If you think it’s a fluid, a field, a form of ectoplasm, a generated ghost of some type, then the concerns you’re listing seem relevant. (It’s worth noting that there’s no evidence for anything like this. Arguably there’s counter-evidence.)
But if you see it as a suite of capabilities, well, there are always other ways to generate the same capability, although the other ways might have different trade offs in combinations of performance, efficiency, and compactness.
A stronger argument might be that it’s intimately integrated with organic homeostasis, the embodied cognition argument that a lot of people now make. Nothing about this in principle prevents something similar in a machine, but it might put constraints on what such a machine would need to have.
The burden of proof is on anyone who thinks consciousness – whether suite of capabilities or whatever – can occur outside of biological organisms. I haven’t seen you present anything convincing. It seems mostly a matter of faith. Or maybe you’ve read too much science fiction.
This isn’t an argument for it being a fluid, field, ectoplasm, or anything like that. It would more likely be a product of ion flows and organic molecules, which we know can produce consciousness since they already do. The task would be to show how you can do exactly what the ion flows and organic molecules do in silicon and electronic circuits, for example, or whatever substrate you think can support it.
“The burden of proof is on anyone who thinks consciousness – whether suite of capabilities or whatever – can occur outside of biological organisms.”
The burden of proof is on whoever is making the assertion. No one can even prove conclusively that you or I are conscious. So proof of machine consciousness seems like a red herring. Personally, I think it’s a meaningless question.
All we can observe are the capabilities of a system. Along those lines, AI continues to make steady progress, although it definitely has a long way to go. You can say that progress will hit some future obstacle, but then you’d be the one making the assertion, and would need to provide either evidence or compelling logic.
“We can find a lot of liquid solvents but they would fail to match the other required attributes.”
Sure, but I wouldn’t compare a simple chemical process to consciousness. (And what I just said to your previous comment stands. It’s absolutely possible there is some as yet undetected low-level chemical or quantum effect required for consciousness.)
“You seem to be thinking that consciousness is something like portable software that will run on any hardware as long as it is compiled for the platform.”
Ha! If you’d paid attention to what I’ve written dozens of posts about, and even more comments about, you’d know I think exactly the opposite.
The correct analogy would be to compare running Microsoft Windows on IBM hardware versus clone hardware. I’m explicitly saying the hardware has to function identically. If there are low-level effects that can only be replicated in meat, then an “IBM clone” wouldn’t be possible, but I consider that long odds.
Plus, I’ve also said repeatedly that I think consciousness, whatever it is, arises from the processes of the brain. (Remember my laser light analogy?) So I don’t see it as being “software” at all and I deny computationalism.
“I’m explicitly saying the hardware has to function identically. If there are low-level effects that can only be replicated in meat, then an “IBM clone” wouldn’t be possible, but I consider that long odds.”
I am not convinced that you can make something function “identically” to a neuron that is not a neuron. You would have a better argument if you could describe the things a neuron does that are critical for consciousness, then tell how those same things can be done on different hardware.
“I am not convinced that you can make something function ‘identically’ to a neuron that is not a neuron.”
It depends on whether a neuron is just its inputs and outputs (in which case, there’s no reason it couldn’t be duplicated mechanically) or whether there is some secret sauce beyond those inputs and outputs.
Given that a neuron receives inputs, integrates them, and responds with an output, what do you think can’t be replicated?
As I’ve agreed, low-level effects we don’t recognize may well be part of the picture, but so far there isn’t any evidence of such.
“You would have a better argument if you could describe the things a neuron does that are critical for consciousness, then tell how those same things can be done on different hardware.”
Neurons are “logic gates” that receive inputs, integrate them, and produce an output in response. (Treating them as special risks confusing the part with the whole.)
Wire tons of them together in the right way, and we have cognition.
Materialists believe “consciousness” arises from this system in some way. It’s what it feels like to be such a system. Others think there’s more to it, but that’s the hard problem.
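The “logic gate” picture of neurons can be sketched in a few lines of code. This is a toy model with made-up weights and thresholds, not a claim about real neurons, but it shows how units that merely integrate inputs and fire at a threshold can, when wired together, compute things no single unit can:

```python
# Toy threshold unit: sum weighted inputs, fire if the sum reaches a
# threshold. Weights and thresholds here are illustrative, not biological.

def neuron(inputs, weights, threshold):
    """Integrate weighted inputs; output 1 if the sum reaches threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights, a single unit behaves like a familiar logic gate.
def AND(a, b):
    return neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return neuron([a, b], [1, 1], threshold=1)

# Wiring units together computes functions no single unit can, e.g. XOR.
def XOR(a, b):
    return AND(OR(a, b), 1 - AND(a, b))
```

XOR is the classic case: no single threshold unit can compute it, but a small network of them can, which is one sense in which connecting things “in the right way” matters.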
“It depends on whether a neuron is just its inputs and outputs (in which case, there’s no reason it couldn’t be duplicated mechanically) or whether there is some secret sauce beyond those inputs and outputs.”
That’s mostly what I just said. Obviously a neuron is not just inputs and outputs but it might be that for the purpose of consciousness only inputs and outputs matter. But I haven’t seen something resembling a proof of that.
“Wire tons of them together in the right way, and we have cognition.”
That’s the leap of faith. Let’s break it apart.
“Wire” – things are interconnected – yeah, probably makes sense
“tons of them” – how many are needed? – are 100 enough if connected in the right way? – critical to whether fleas might be conscious
“right way” – that hides a myriad of problems
If this can be done with logic gates, then it would seem we could just hook each gate to every other gate and keep adding gates until it became conscious. That would encompass all the possible ways the connections could be made, so the “right way” would have to be a subset of all possible ways. Somehow, I don’t think that will work, but it seems to follow logically (gately).
Let me anticipate your objection that the number of connections will become too large to be practical.
Even if you can’t fully connect every gate to every other, you could certainly approximate it by connecting increasing numbers and different permutations of connections until a little bit of consciousness arose.
Once you had a little bit of consciousness, then each additional connection would either increase or decrease the amount of consciousness and by deselecting connections that decreased it you could incrementally build up into full consciousness.
Oops! We don’t have any way of measuring it, so we can’t tell when we have a little bit, a lot, or none at all.
It’s hard to be a materialist when you can’t measure the material.
Not if you don’t assume there’s something there that can’t be measured.
We can sort of see this in the animal kingdom — a range of brains from small worms all the way to us. And consciousness does seem on a spectrum across the animal kingdom.
The striking thing, as I’ve mentioned before, is the prominent gap between all the animals and us. That could use some explaining.
I’ve been meaning to ask: I take it you reject Chalmers’ dancing and fading qualia arguments?
You would consider replacing a single neuron impossible in some way or perhaps identical to killing that neuron?
That’s an excellent paper.
At least until the last section. 🙂
Heh, yes, I can see that you’d get off the bus at the “Nonreductive functionalism” stop. 😉
“Obviously a neuron is not just inputs and outputs…”
No, that’s not obvious at all, and many don’t believe it to be true.
“But I haven’t seen something resembling a proof of that.”
Given that the only evidence we do have says exactly that, what more do you think neurons do besides processing their inputs and generating an output?
“‘tons of them’ – how many are needed?”
Given the only instances we know, it appears to be in the billions range. Which is one reason we don’t really know: We don’t yet have the technical capability to investigate.
“‘right way’ – that hides a myriad of problems”
Mostly it hides our current ignorance.
“If this can be done with logic gates, then it would seem we could just hook each gate to every other gate and keep adding gates until it became conscious.”
Some believe exactly that. What, other than hand-waving, do you have to offer that they’re wrong?
Maybe I’ve posted this before, but to me panpsychism is very spooky. I don’t see how it could be that eating ice cream feels nice just by accident. We can easily imagine all the same behavior but with opposite feelings. We would be really unlucky: everything that is good for us feels bad, and everything that will destroy us feels good. We would still show the same behavior, because everything is deterministic, but it would all feel wrong. I call that the correlation problem of panpsychism. Why do things beneficial for successful reproduction so often feel good? Or am I missing something?
The correlation problem of panpsychism. Interesting. Definitely how we feel about something needs to matter. It’s adaptive for us to feel bad when something threatens us and good when it’s good for us in terms of homeostasis or reproduction. If experience were totally divorced from biological needs, I agree it could lead to the nonsense scenarios you lay out.
And it seems obvious that sensory perception and motor impulses need to be biologically useful. So the question is, if sensory processing, affective feelings, and motor actions are biological, what does that leave for a non-physical, non-functional experiential essence?
I think you hit the nail on the head when you said, “Much of the mystery comes from declaring that things are separate, which typically involves separating the actual explanation from what needs to be explained, then lamenting the unexplainable nature of what needs to be explained.”
IMHO it’s easy to formulate questions that can’t be answered but are not required for us to reproduce or decompose the function or entity under scrutiny. We need to decide what questions need to be answered in order to break consciousness down to its component parts, until we are able to observe it, predict it, or reproduce it in living and/or non-living systems. I’m not suggesting that other questions should wait on the sidelines until we get answers to the necessary questions. The “necessary” questions are technical issues. Maybe we can and should discuss the possible consequences (moral, ethical, etc.) before, or at least in parallel to, the technical issues.
Thanks Mike. Good points. There are practical questions we can be asking, and then there are metaphysical ones that may trouble people, but may never have an answer. It might be that people will still be discussing the hard problem as uploaded entities in virtual environments.
The ethical issues seem particularly perplexing. When exactly do we have an entity who is a subject of moral concern? What does it mean for a system to suffer? We have visceral intuitions about this for us and animals. Ultimately, I fear there is no fact of the matter on this. We’ll have to depend on our intuitions, as inconsistent and problematic as they are. But we are going to do that anyway.
“…it’s unreasonable to require evidence for it anywhere else.”
That seems somewhat defeatist, and I think Tegmark would have quite a bit to say about it.
I don’t know if it’s defeatist so much as setting the bar low to preserve a particular proposition.
It’s been a while since I read Tegmark. I don’t recall his take on consciousness. What would he say?
He believes that like a solid, a liquid or a gas, consciousness is a state of matter; a fourth state that has until now eluded scientific investigation but is material, measurable and mathematically verifiable.
(Max Tegmark, Consciousness as a State of Matter, New Scientist, April 12 2014, p 28-31)
Thanks! I’d forgotten about his perceptronium conjecture, although I didn’t understand him to be saying that everything was perceptronium. Some matter is computronium and some of that is perceptronium.
Of course, Tegmark is a pretty radical mathematical platonist, so his conception of perceptronium seems tangled up with that: that consciousness is a mathematical pattern. (Although to him, everything is a mathematical pattern.)
Aren’t functionalists in agreement, at least in the limited domain of consciousness, that it is a mathematical function? So Tegmark takes the underlying idea behind functionalism, but applies it universally. He’s the uber-functionalist.
I suppose so, but the path he takes to get there, talking about states of matter, doesn’t fit with typical functionalist views, which usually see it happening at a higher level of organization. He only gets to a type of functionalism through his absolute platonism.
Is there any unobtainium? 😛
Only on Pandora, which wants to kill you.
Panprotopsychism seems in danger of just being reductionist physicalism by another name
That’s putting it mildly. James Cross (Aug 1 at 3:06pm) nails it. Electrons and other standard model particles, in the right configuration, do think. The “panprotopsychist” has no way to distinguish their view from (Type B) physicalism. We both agree that there is no conceptual reduction from statements couched in mental language to statements couched in physical language. We both agree that it is metaphysically necessary that, given the right configuration of electrons etc., consciousness will occur. There is no intelligible thesis left to disagree about.
An argument can be made that panpsychism is just a romantic description of physicalism. But that assumes that when dualistic panpsychists are speaking about experience being non-physical, that they mean it in a platonic sense. If so, that’s what I call naturalistic panpsychism, which I think definitely is just a romantic description of physicalism.
Perhaps Angry Birds is a great analogy for consciousness but it illustrates the problem of explaining it in purely physical terms.
Angry Birds is not just electrons moving through a processor and on a display. It has a symbolic meaning that is only understood by other conscious beings that understand the symbolic context. This symbolic meaning/context is non-physical even though it depends on the physical to exist. It is not derivable from measuring the electrons or the processor it runs on.
This is fundamentally different from a tornado. The tornado may be a system that arises from interactions of multiple physical components but it has no symbolic component.
So, for the moment, put aside neurons, brain structures, etc which are the equivalents of the processors running Angry Birds.
The problem is to show from fundamental physics how abstract symbolic systems can arise from matter.
If a theory could be shown to do that then the leap from matter to mind would only be a question of how the fundamental physics gets implemented in brains and nervous systems.
I think the answer you’re looking for is evolution through natural selection. It’s what provides the symbolic meaning/context for a natural system. Nature provides the interpretation of what happens in brains by the way the bodies they’re embedded in react to them.
The meaning of the signals is honed over thousands of generations through variation and selection. As James of Seattle and I have discussed, this “interpretation” arises through a shared causal history.
This is one of the reasons I spend so much time reading about the evolution of the brain. Many of the mysteries philosophers obsess over have answers sitting in plain sight in evolutionary neurobiology. Admittedly, the books on that stuff are expensive, technical, and not easy reading.
I see you can’t separate the particular implementation from the fundamental problem.
Can only living organisms be conscious?
I don’t think you would answer “yes” but maybe I have misunderstood things you have said previously.
I’m not seeing the logical connection that leads to your question. My response above was specifically in terms of how natural systems get their meaning.
Ultimately whether a non-living machine is conscious comes down to our definition of “consciousness”. But I don’t think there’s anything that can evolve that can’t also be engineered, at least in principle.
“The problem is to show from fundamental physics how abstract symbolic systems can arise from matter.”
Again, you’re artificially constraining the epistemology. If you exclude higher level organization from the explanation, then you’re excluding the explanation.
They don’t. They only arise in intelligent minds. Abstract symbols are just descriptions of something. How can a description arise in physical nature?
Not descriptions. Symbols have no direct or obvious relation to what they symbolized. Think of pointers.
Same thing. Pointers are “descriptions” of where other data is.
They are just a memory address in programming. I wouldn’t call that a description.
Okay, whatever word works for you is fine. The point is, abstract symbols (and pointers) refer to something else. More to what I believe was my original point, they are only created by intelligent minds.
If symbols only arise in mind and cannot in physical nature, then they are definitely not physical and neither would be mind.
Abstract symbols arise from intelligent thought and are a part of whatever mind is. That doesn’t require that mind be abstract (although it may well be).
I would say, tentatively, that the entire basis of mind is symbolic, from qualia to abstract thought. “Red” is a symbol of a wavelength; it has an arbitrary relationship with the light wave. “Table” in my mind is a cluster of pointers: to the word “table”, a table in my kitchen, various tables I’ve seen in my life, various word associations to other concepts of table like a database table, a memory of banging my knee on a table, etc.
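That “cluster of pointers” picture can be made concrete with a toy sketch. All the names and fields below are invented for illustration; Python references stand in for the pointers:

```python
# Toy sketch: a concept as a bundle of references to other
# representations, not a copy of the things themselves.

word = "table"
kitchen_table = {"kind": "percept", "shape": "rectangular"}
database_table = {"kind": "concept", "domain": "software"}
knee_memory = {"kind": "episodic", "event": "banged knee on table"}

# The concept stores no copies, only references (Python names are
# references, playing the role of pointers here).
table_concept = {
    "word": word,
    "exemplars": [kitchen_table],
    "associations": [database_table, knee_memory],
}

# Following a "pointer" retrieves the associated representation itself.
assert table_concept["exemplars"][0] is kitchen_table
```

The point of the sketch is only that the concept and the things it points to are different objects, related arbitrarily, which is the symbolic relationship being described.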
It’s all in your intelligent mind. Literally.
Not clear to me why you keep emphasizing “intelligent” unless you are making a solipsism joke. When my cat sees a table, it doesn’t have an actual table in its mind/brain. It has a symbolic representation of a table, which may itself be a composite of other symbolic representations. This is how all mind works. It is all abstract. Language, mathematics, and the things we normally think of as “abstract” are just more so.
Yes. And your cat is intelligent. It has cat intelligence, but it still has a brain that functions as you just said.
Ok. We can agree if you are willing to extend the bounds of intelligence to some degree. We could debate about how much of course.
Well, we’re talking about mental models and abstraction, which I see as different.
The mental models, found in any brain that’s processing the world, which some call abstractions, I see more as “resonating with” or “reflecting” reality. To me the mind is more akin to how a guitar string resonates from notes it “hears” or a reflection in a mirror — an imperfect version of the real thing. The brain is, in some sense, directly connected to reality through the senses. The brain is an extension of that reality.
Just because a mental model is a poor reflection doesn’t make it “abstract” to me. Many characterize it that way, but I don’t.
An abstraction, as I’ve said, is an intentional description of reality created by an intelligent mind for a purpose. To me, abstractions always have a teleology. They are deliberate and purposeful.
Oh, that’s right. You have the faithful depiction view of mental models.
Do you have any evidence that it’s right? What about “red” is a faithful depiction of the wavelengths that encompass the color red?
Didn’t I just say it was a “poor reflection”? I’m not sure how you jumped to “faithful.”
That said, when I look at something red, I see something red. Seems pretty faithful of a system to me.
“The problem is to show from fundamental physics how abstract symbolic systems can arise from matter.”
It cannot be done. So, what we have left is a paradox, a paradox which cannot be resolved using our current reference point. The paradigm of both matter and mind as a grounding architecture is fundamentally flawed because neither one of those models are real in the context of (R). Mind and matter are only real within their own context of appearances (A), and that context is one of expression. Matter is the synthetic a priori expression of the ultimate reality (R). Since our phenomenal realm is not the true nature of reality, fundamental particles are nothing more than conditions on a possibility, and one of those possibilities culminates in our own experience of consciousness.
In conclusion: The phenomenal realm of appearances is conscious from the bottom all the way up, simply because the expression itself is the synthetic a priori expression of the true nature of reality. This is what is known as the reality/appearance distinction in Parmenides and Kant’s ontology.
“Again, you’re artificially constraining the epistemology.”
Not really. If you are arguing for a physical explanation, there ought to be some physics that shows theoretically at least how abstract symbolic systems can arise from matter. From that you can go on to describe the specifics of how living organisms do it. And you could also go on to describe how it could be done with an inanimate object.
Absent that, it is a leap of faith to say there could be a physical explanation.
I don’t know whether it can be done or not.
It seems to me that tackling the question from the standpoint of basic physics about what exactly is a symbolic system, how do we describe it, and how can it arise is the more fundamental question.
“there ought to be some physics that shows theoretically at least how abstract symbolic systems can arise from matter”
There are. Electricity, chemistry, biology, neuroscience, and computer engineering, among others. Anything can be a mystery if you ignore the available answers.
Of course, you omitted physics and instead of a single reference you provided a list of sciences.
This doesn’t seem like it is too far beyond the realm of possibility. And maybe there is already something like this.
Take some large configuration of matter. Within that configuration are sub-configurations that represent objects, and even smaller sub-configurations that represent pointers to the objects. There could also be pointers to pointers. I’m thinking somewhat here of old-style C memory management. The task would be to define how this could arise naturally from basic principles.
There’s probably a lot more to it than this but you get the idea.
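A toy version of that C-style picture, with a dictionary standing in for memory (the addresses and values below are made up for illustration):

```python
# Toy "memory" in the spirit of old-style C pointer handling: integer
# addresses map to stored values. A pointer is just an address, and a
# pointer-to-pointer is an address whose stored value is another address.

memory = {}

def store(addr, value):
    memory[addr] = value

def deref(addr):
    """Follow an address to whatever is stored there."""
    return memory[addr]

store(100, "object")   # the object itself
store(200, 100)        # a pointer to the object
store(300, 200)        # a pointer to the pointer

assert deref(200) == 100                      # one hop
assert deref(deref(300)) == 100               # two hops
assert deref(deref(deref(300))) == "object"   # all the way down
```

Nothing in the address 200 resembles the object at 100; the relation is purely conventional, which is what makes it a useful analogy for symbolic reference.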
One area that James of Seattle has been investigating along these lines is semiotics.
But for me, in terms of fundamentals, it’s a matter of shared causal history, particularly in the case of animals, shared causal evolutionary history.
Bees seek out bright flowers because that’s where the nectar is. Flowers display bright colors because insects are attracted to them, and they get cross-pollination as a result. How does a symbiotic relationship like this get established? At some point, either a bee or a flower had a variation that caused the other species to change its behavior. The benefit at first might have been slight, barely measurable. But then future variations that enhance the benefit would be selected for, and those that reduce it selected against. We gradually get the symbiotic relationship.
Likewise, the ability of an early primate to see red would have started out as a random mutation. But it would have had benefit since the primate could then find ripe fruit more easily. Higher success meant that primate passed on its genes. Again, variations that enhance that benefit would be selected for. So an emotional reaction to red, in essence seeing it as vivid, would add to the benefit. And so the meaning of that percept would develop.
On the pointers, I don’t know if you saw the next post yet. It highlights another paper on HOT (higher order thought) theories. One way of thinking about a higher order representation is as a cluster of pointers to a lower order representation. Of course, a nervous system doesn’t have memory addresses, just connections, so the analogy is a bit strained.
“It seems to me that tackling the question from the standpoint of basic physics about what exactly is a symbolic system, how do we describe it, and how can it arise is the more fundamental question.”
Absolutely. But taking into consideration the prevailing limitations and/or current paradoxes intrinsic to physics, Kant’s model of transcendental idealism is the only ontology which provides an answer for those fundamental questions: “while we are prohibited from absolute knowledge of the thing-in-itself, we can impute to it a cause beyond ourselves as a source of representations within us.”
The only problem is that nobody is happy with that answer, simply because it does not conform to our paradigm of control. Just the opposite is true: transcendental idealism limits our control by setting clear boundaries, and those boundaries are defined by the reality/appearance distinction. Not to mention that, psychologically, the notion inadvertently leads one to the God deference, which gets people all twisted up inside. Go figure… By nature I’m a pragmatist, so I’m not going to waste my time beating the dead horse of either materialism or idealism looking for answers; people have been doing that for thousands of years.
I love it when people start a discussion with a statement “Science can’t explain …” Just how would such a claim be verified? Clearly there are myriad things that science has not explained, most of which are because scientists have not studied them as of yet, but there is a big difference between “science has not yet explained …” and “science cannot explain …”.
Just the use of the phrase triggers my Bullshit Alert System. Starting with an unverifiable claim is not a good start to any conversation.
I think we are in somewhat of an agreement that consciousness will be able to be explained as an emergent property of physical brains. I think that the distaste for the possibility of this is the all too common desire on the part of human beings to be “special.” Any time someone starts a comment with a “we are special” or “we are exceptional” statement, my Bullshit Detector starts blaring away.
And we can’t be “special,” (maybe that is Special™) if we are subject to basic physical laws. Such people claim that that would make us “robots.” As if robots with all of the powers of human beings and more are not possible.
That mostly matches my own sentiments. I do think there are things science can’t explain, but it amounts to incoherent questions (what color is 5), questions of fundamental reality (why are there three sides to a triangle), or questions where there simply is no fact of the matter (butter side up or down).
I read a study once about bread falling butter-side down. It turns out the height of the average table and the falling rate of bread leave just enough time for the bread to rotate 180 degrees, assuming it’s knocked off the table with any spin at all.
There are things we know are beyond the reach of any science: the Turing halting problem, Gödelian incompleteness, Heisenberg uncertainty. OTOH, science does explain why we are limited, so perhaps that’s not the same thing.
Steve Ruis says: “And we can’t be “special,” (maybe that is Special™) if we are subject to basic physical laws.”
I agree that we are not special. But the greater and more compelling question is: What is the basic, physical law which governs a meaningful relationship between two consenting adults? All comments are welcome, because based upon a paradigm of law, I would like to know what that law is exactly.
I’m curious. Why would you think there is one basic, physical law which governs something as complex as a personal relationship?
It doesn’t have to be just one law; it can be several, or any combination thereof. But the point is cogent: are complex personal relationships “governed” by basic physical laws, and if so, what would those laws be?