It’s a common sentiment, even among many staunch materialists, that we will never understand consciousness. It’s one I held to some degree until a few years ago. But the more I’ve read about neuroscience, the more convinced I’ve become that we will eventually understand it, at least at an objective level.
That’s actually an important distinction to make here. Many discussions of consciousness inevitably include pondering of the hard problem, the problem of understanding how subjective experience, what it’s like to be a conscious being, arises from physical systems. I suspect we’ll never solve the hard problem, at least not to the satisfaction of those troubled by it. It will remain a conundrum for philosophers, no matter what kind of progress is eventually made in neuroscience or artificial intelligence.
But I don’t think it’s reasonable to require that science solve it. Science gave up looking for ultimate understandings centuries ago in favor of settling for pragmatic ones. It was one of the first steps in the evolution from natural philosophy to modern science. The approach, that it is better to settle for what can be understood rather than hold out for a perfect and perhaps unattainable understanding, has been an amazingly fruitful one.
Consider also that the entire history of science has been a demonstration that reality doesn’t match our subjective experience. Subjectively, the earth is stationary and the center of the universe, but we’ve known for centuries that, objectively, it very much isn’t. Subjectively, humanity is very different from animals, but we’ve known since Darwin that humanity is just another animal species, albeit the alpha of alpha predators.
The objective facts in these areas have taken us farther from our subjective experience. We have nothing to indicate that the mind will be different. Any expectation that a scientific understanding of the mind will explain our subjective experience, why red is red, etc., is doomed, I fear, to be a frustrated one.
Often accompanying that is the expectation that an understanding of the mind will somehow show our subjective experience to be more real than objective reality. It’s an expectation that there is still something different about us, something that makes us special, that separates us from the rest of nature, that vitalism in some form or another is still true. It’s a sentiment that ignores the lessons of Copernicus and Darwin.
I fear that this is what motivates a lot of very intelligent people to speculate that consciousness operates using some form of unknown and unknowable physics. One of the most common is to posit exotic quantum mechanics. Of course, the mind depends on quantum mechanics, just as every other physical system in the universe. But proponents of quantum consciousness often make an assertion that it uses an exotic and unknown aspect of quantum physics.
The problem is that there is zero scientific evidence for anything like this exotic physics. Speculation in this area continues because we don’t yet understand consciousness, and some people conclude that this means there must be some new aspect of reality that we’re not seeing yet. While this lack of understanding remains true, there’s nothing in mainstream neuroscience to indicate that we will need a new physics to understand the mind.
This isn’t to say that the mind may not use certain quantum phenomena such as entanglement. After all, plants appear to use it in photosynthesis, and birds in detecting the earth’s magnetic field. But while biology uses these phenomena, the phenomena themselves still operate according to the scientific understanding of how they work. The mind using them would add, at most, complications to our understanding, not an insurmountable barrier.
But the data most strongly indicate that consciousness arises from neural circuitry. This circuitry is profoundly complicated, but it operates according to well known physical laws involving chemistry and electricity. Understanding how the mind arises from it is largely understanding how information is processed in the brain’s neural network.
Because of this, while there are lots of theories of consciousness, I think it’s the ones by neuroscientists, the people actually studying the brain, which are likely to be closest to the truth. (A quick note here: neurosurgeons, such as Ben Carson or Eben Alexander, are generally not neuroscientists.) These theories seem to agree that information integration is crucial. Some stop at integration and declare any integrated information system to have some level of consciousness, leading to a form of philosophical panpsychism. But I think the better theories see integration as necessary but not sufficient.
My long-time readers know that I’m a fan of Michael Graziano’s Attention Schema Theory, but there are other similar theories out there. Many of them posit that consciousness is basically a feedback system, allowing the brain to perceive some aspects of its own internal state. These theories give a data processing explanation for our sense of internal experience, one that doesn’t require anything mystical. These explanations are far less extraordinary than those requiring exotic physics. We shouldn’t accept extraordinary claims without extraordinary evidence, particularly when far less extraordinary theories explain the facts.
Of course, there are still huge gaps in our knowledge. Until those gaps are closed, we can’t completely rule out exotic physics or magic, just as we can’t completely rule out that UFOs are extraterrestrials, that bigfoot is roaming the forests of North America, or that ghosts are haunting old decrepit houses. But we can note that there is zero actual evidence for any of these things.
None of this is to say that there aren’t aspects of reality that we may never understand. It’s possible that we’ll never figure out a way to understand singularities at the center of black holes, whether there are other universes, or what actually happens during quantum decoherence. But unlike these problems, which exist in realms we may never be able to observe, the brain shows no sign of being fundamentally beyond careful observation.
Yes, understanding the brain will be hard, very hard. But even though there are many people who don’t want the mind to be understood, neuroscientists will continue making progress, year by year, decade by decade. The gaps will shrink, eventually closing off the notions that depend on them.
I suspect that even when science does achieve an understanding of how consciousness and the mind arise from the brain, there will be many people who refuse to accept it. It will be the same fights that heliocentrism and evolution once endured. Many will look at the explanations and insist that their consciousness, their inner experience, simply can’t come from that. But as I said above, we shouldn’t judge a scientific theory on whether it solves this hard problem.
60 thoughts on “Why I think we will eventually have a scientific understanding of consciousness”
I’ve never quite understood the “Hard Problem.” To me it’s simply a matter of complexity. Pile enough neuron fields on top of one another, add a delayed feedback, and voila! Sentience.
For a long time, my reaction to the Hard Problem was, “What problem?” And then, when I finally grasped what the issue was, the reaction became, “Why is that a problem?” It’s still my personal reaction, but a great many people remain bothered by it. Now I see it as intractable, but irrelevant.
Is the “problem” in “the hard problem” a “problem” in the same sense as a math problem? i.e. An unanswered question.
Is using the word “problem” four times in one sentence a problem?
Good question. Myself, I’m not sure that it is, but maybe someone who does perceive the Hard Problem as a problem can weigh in.
“If you can keep your head when all about you are losing theirs, perhaps it’s because you don’t understand the problem.”
What is it to have a scientific understanding of consciousness, though, if not to have a scientific explanation of the Hard Problem?
To me, it seems that without the Hard Problem, then understanding consciousness is just understanding how the brain processes information, and there doesn’t seem to be much resistance to the idea that we will achieve this some day. Or at least that we’ll have the gist of it (the full detail may be too complex to ever really grasp).
The Hard Problem cannot ever be solved scientifically, it has to be dissolved philosophically, and this is why it will always persist (because philosophical answers are rarely conclusive or universally accepted). It is a pseudoproblem resting on a category error. To understand what it is like to be a bat (a related problem) is just to be a bat. A first person perspective can only be appreciated from the first person, a fact which seems to me to be a simple tautology. You can never get to a first person understanding from the third person facts and this ought not to surprise anyone.
I think a scientific understanding of consciousness might allow us to reproduce it, to have an objective way of recognizing when it is and isn’t there, and measure to what extent it’s there, among other things.
There are a lot of people who cite the Hard Problem, then fold their arms and consider any endeavor at understanding consciousness doomed until it is dealt with. It sounds like we agree that that position isn’t logical.
Totally agree with your last paragraph. Well said.
You and I agree that all you need to reproduce consciousness is a scientific grasp of the easy problems, which it is not especially controversial (pace Penrose and Hameroff) to hold to be within our grasp eventually.
But to have an objective way of recognizing when it is and isn’t there would mean having a solution to the Hard Problem. And this, being a philosophical pseudoproblem, cannot be solved with science, any more than science can answer the question of whether submarines can actually swim.
The only way that issue might ever go away is, I suspect, with sociological factors. Perhaps with the advent of AGI, once the lingering generations of Searlians and Chalmerites die out, we will be left with a population of people who grew up interacting with intelligent machines and who naturally accept them as conscious entities just as they do other people. The argument will not be won by a scientific discovery or clever syllogism or thought experiment, but simply by exposure and familiarity.
Which, I’m sure Searlians and Chalmerites would say, is not really a win at all. Just because everyone believes something doesn’t make it true. And I have to say they’re right about that. It’s a pity that the question will never really be satisfactorily settled scientifically or philosophically, but I think that’s the situation we’re stuck with.
“But to have an objective way of recognizing when it is and isn’t there would mean having a solution to the Hard Problem.”
I think if we observed that information flowed in a certain way in the brain, and discovered that every system where information flowed that way appeared to be conscious, we’d have an idea of the data processing architecture for consciousness. We’d be able to detect it, and with observation, measure it.
But we still wouldn’t have explained why red is red and all the rest. There will still be people wondering how these systems acquire subjective experience. Although I agree that they’ll fade in number as it becomes obvious that the question is hopelessly metaphysical.
“Red” is (something like) the label we apply to our experience of observing light with a frequency within a certain band. I thought the question (hard problem) was whether our respective experiences of “red” are qualitatively similar..? i.e. Does red feel/appear to you as it does to me? Or does red look to you like blue does to me, but we both call it red as we look at the same strawberry?
The most common ways I hear the hard problem articulated go something like, “Why (or how) do we have subjective experience?” Again, for me, it’s an unanswerable question, and not one that I find particularly troubling.
It is interesting to ponder our subjective existence, and how grounded we are in it. But it’s not hard for me, at an intellectual level, to simply accept it for what it is.
Ah, but “appears to be conscious” doesn’t mean actually conscious! A scientific understanding of consciousness would surely mean telling the difference between the two!
Also, I think you run into a bit of a problem when you think that we can just detect that a system implements a certain algorithm or processes information in a certain way. There’s an interpretational problem there, as Searle will tell you. There’s more than one way to interpret a physical system as implementing an algorithm. In fact, there are more or less infinite ways, and particularly in natural systems it’s not at all clear that one interpretation is going to be more justifiable than another. A physical system can be seen as implementing any algorithm you want it to.
And, of course, that being the case, it can’t be that something externally projected onto something (such as a pattern of information flow) can really be responsible for consciousness. Unless of course we accept Platonism and that even abstract patterns can be conscious!
This gets into the problem of other minds, itself an intractable problem. But like all intractable problems, similar to the problem of induction, we have to muddle through it as best we can. To be more precise, we might observe the way information is processed in the brain, and then observe that systems with similar structures are able to pass a rigorous version of something like the Turing test.
Of course, we would never know beyond all doubt that there was a consciousness there, just as you can never know beyond all doubt that I’m conscious. All we can ever do is be mostly sure.
On detecting information processing, all observation is theory-laden. You can’t escape it. We often say that astronomers observe the universe to be expanding, but what they actually observe is the red shift of galaxies, which increases as their brightness and size decrease. From this (and the GR equations) they deduce that the universe is expanding. Anything we observe about the brain will almost certainly involve that kind of interpretation. For better or worse, that’s science.
I think you’re misstating the “hard problem.” What you detail in your second paragraph is what Chalmers (who coined the term) calls the “easy problems.” As you say, we’ll surely solve those. They’re (comparatively) easy.
What he calls the hard problem is simply why we experience. How does that happen?
How in the hell does a physical process give rise to a personal narrative?
It’s a mystery. Science doesn’t like mysteries. Some scientists will try to solve the mystery.
But… they expect it to be really hard. 🙂
I don’t think so, Wyrd. I understand that the Hard Problem is why we have a first person phenomenal experience, but for reasons of illustration I just focused on what is to me just a restatement of that problem, the difficulty of understanding another’s first person experience from a third person perspective.
The easy problems are all to do with behaviour and how the brain processes information. I get the difference! I just think that the Hard Problem rests on a category error and so is a pseudoproblem. Either we don’t have first person experience (!) but just think we do, or equivalently first person experience is just believing that you have first person experience, where belief is cashed out in a consciousness-agnostic sense applicable to computers and so on (e.g. from the intentional stance).
There really isn’t anything to explain, but there is an illusion that there is — the illusion is not consciousness itself but the idea that consciousness is something more than what it is like to be a system which processes information in a certain way (or perhaps the illusion that it shouldn’t feel like anything to be such a process).
A system which is functionally identical to a human behaves as if it believes itself to have a conscious experience, including asking questions about the Hard Problem and so on. If that system has no magic spark of consciousness it still thinks it does, and so is operating under some kind of illusion. Since we can see that there is nothing to explain for such a system except why it has that belief, there are no meaningful questions that cannot be answered from a third person perspective for such a system. Which means that for such a system, the Hard Problem is a pseudoproblem.
And so it is with us, I believe, which is not to say that we are not conscious but rather to say that consciousness reduces to the functional correlates of consciousness and nothing more. What we have to explain is the easy problem of why we believe ourselves to have phenomenal experience and not why we actually do.
You’re a Tegmarkian, so I can see why you believe what you do. It follows.
The category-mistake argument, which opposes the idea of a “magic spark,” doesn’t carry the weight with me it does you. (I freely admit some of this is my own bias and belief, but I’ve thought about this, too. FWIW, I debate with myself as aggressively as I do others! 🙂 )
You say my experience of qualia is an illusion. I’m not quite sure what the difference here is supposed to be between an illusion of consciousness and the real thing. What does that even mean? Is there a real form of consciousness I could experience instead?
If not, then what is an illusion about what I (seem to) experience?
I do know this. Whatever it is, it’s breath-takingly rich in content, texture, and memory, and almost impossible to describe fully. (All of art is an attempt to do so.)
So they say all this is the result of processing information, but that’s just a phrase.
And the thing is, I’ve worked with computers extensively on all levels for nearly 40 years, and I’ve never seen anything, zip, nada, FA, that suggests data processing is capable of experiencing — or being under the illusion it experiences — anything.
I’m not denying that may be the answer. It may be. Our views are equally rational given our beliefs.
I am saying, based on everything I know about computers, saying a computer can have the indescribably rich (illusion of) consciousness that I do…
…is gonna take some ‘splainin’, Lucy!
And the fact that data processing, that algorithms, show little sign of that rich behavior (so far), and the fact that some computer science argues against it, is the hard problem.
Maybe, as you say, it’s mislabeled. Could be. It’s still a huge, difficult question. It’s still possible it could turn out to be like FTL — just not possible.
Exactly! It is meaningless. But, if we take Chalmers seriously and allow the concept of p-zombies, then these must have the illusion of qualia. They claim to have qualia and behave as if they have qualia and they can certainly tell the difference between green and red. If you try to explain how they do so from a third person, you will tell a narrative of this neuron exciting that neuron and so on, but if you ask them to give an account from their own perspective, they will just say they look different — that the difference in qualia is how they make the distinction.
I’m the one saying that the idea of an illusion of qualia is nonsense. The qualia just are what you would otherwise call the illusion — that which is made available to our conscious minds regarding the differences between sensory inputs. Since we don’t have direct access to the nuts and bolts of what is happening, it seems mysterious how we can distinguish red from green, but it really isn’t.
You are under the illusion that what you are experiencing is more than just what it feels like to be a certain kind of information processing system. I’m not saying your qualia or consciousness are illusions, I’m saying that an illusion is causing you to mistake their nature.
Have you ever seen data processing that approaches human intelligence, either qualitatively or quantitatively? In any case, how would you know? It seems quite plausible to me that there could be a conscious computer system built one day and even its creators would not believe it to be conscious.
We haven’t solved the easy problems yet, so it’s not surprising that there are as yet only faint hints of computer consciousness.
Besides, there are plenty of people who have spent their lives working on computer systems who have intuitions diametrically opposed to yours.
“Since we don’t have direct access to the nuts and bolts of what is happening, it seems mysterious how we can distinguish red from green, but it really isn’t.”
Agreed. There’s nothing mysterious in our ability to distinguish physically different things. What’s mysterious is that there actually ‘is something it is like’ to be us (regardless, for the moment, of what “us” actually is).
That there can possibly be ‘something it is like’ to be a data processing system beggars my imagination. I think that is a phrase very close to being on par with, “God watches over me.” It is, at this point, wishful thinking.
I realize that, as a Tegmarkian, it’s easy for you to imagine. 🙂 It still does require facts not in evidence.
“I’m saying that an illusion is causing you to mistake their nature.”
Okay, I understand. And we agree they do have a nature.
As I said before, “Whatever it is, it’s breath-takingly rich in content, texture, and memory, and almost impossible to describe fully.”
But I would think that even a Tegmarkian might appreciate the implications of chaos theory and the real number realm (let alone the transcendentals). Perhaps I can invite you to read my recent post, Transcendental Territory, which addresses this?
It’s one thing to acknowledge a universe of mathematics, but not all math is calculable!
“Have you ever seen data processing that approaches human intelligence, either qualitatively or quantitatively?”
No, obviously not, but I know the science behind data processing, and I know that, at its root, it’s just ones and zeros and binary logic. Nothing more. Systems that process discrete symbols are actually somewhat limited in what they can accomplish.
“Besides, there are plenty of people who have spent their lives working on computer systems who have intuitions diametrically opposed to yours.”
Indeed. And that’s kind of my deeper point. This is all a matter of belief for now. I’m just saying my belief is that data processing will never have (what I see as) magical properties.
If our minds are based on mathematics,… why aren’t we better at it?
We can’t even do large sums (most of us), let alone complex math. Some people barely grasp it. If it’s all based on math, at what point do we lose the facility with it?
(Are mathematicians superior beings? 🙂 )
I don’t think it’s mysterious. There is even (if only fictionally) something it is like to be a p-zombie, in that a p-zombie will insist that this is so and act as if it believes this to be the case. It can answer questions about what it is like to be it just as well as you can.
So we might all be such p-zombies, or rather the idea of a being like us which doesn’t have a subjective “what it is like” doesn’t make sense. We can explain why we feel moved to ask the question and why we feel it to be mysterious. That explanation is sufficient to solve the mystery.
That just means that the model you have of your environment is very detailed. The same could be said of your p-zombie twin. That’s really not a problem for my view.
OK, I’ve read your argument and seen the video. If there is an argument there against human intelligence being algorithmic or against mathematical Platonism, then I didn’t find one. It seems you’re jumping from amazement at transcendental numbers to the conclusion that human intelligence is not algorithmic. There also seems to be a certain amount of conflation between transcendental numbers and indeterminacy, as well as other meanings of the term “transcendental”.
I didn’t say it was. I am aware of uncomputable numbers, for instance (which do not include pi and e, which are perfectly computable).
You say that as if there is a proof that other kinds of systems are less limited. Yes, there are proofs of the limitations of discrete systems (the halting problem, for instance), but there is no proof that analogue or other systems are not similarly limited in practice (so I’m excluding fantasy hypercomputation scenarios where some natural magnitude happens to perfectly match some uncomputable number).
Since discrete systems can simulate non-discrete systems to whatever precision you require, your thesis depends upon the dubious assumption that infinite precision is needed to reproduce the qualitative capabilities of analogue systems. Seems a little too delicate to me. There needs to be a margin for error in any natural process or it would never work in practice.
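(A toy illustration of that point, nothing more — the decay equation and step sizes here are my own arbitrary choices. Euler integration is a discrete update rule approximating a continuous system, and shrinking the step buys whatever accuracy you’re willing to pay for.)

```python
import math

# Discrete simulation of the continuous system dx/dt = -x, x(0) = 1,
# whose exact value at t = 1 is exp(-1). Euler's method just applies
# the discrete update x += h * (-x) over n = 1/h steps.
def euler_decay(h):
    n = round(1.0 / h)
    x = 1.0
    for _ in range(n):
        x += h * (-x)
    return x

exact = math.exp(-1.0)
for h in (0.1, 0.01, 0.001):
    print(h, abs(euler_decay(h) - exact))  # error shrinks as the step shrinks
```

Each tenfold reduction in the step size cuts the error by roughly a factor of ten: pick a small enough step and you meet any finite error budget.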
And if our minds are based on biology, why aren’t we better at it? There are many people who don’t even believe in evolution!
We already have neural networks we can train to do certain kinds of tasks. If you trained a neural network to add numbers, it would have similar weaknesses to a human brain. It would make mistakes and there would be a point when you get to sufficiently large numbers where it just wouldn’t be able to cope.
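For what it’s worth, you don’t even need to train a network to see where that breaks down. Here’s a hand-built toy (the weights are picked by me purely for illustration, not learned): with a bounded activation like tanh, the output is capped by the readout weight, so the “adder” tracks small sums and simply cannot follow large ones.

```python
import math

# A minimal, hand-made "adder" network: one tanh hidden unit, linear readout.
# Because tanh is bounded in (-1, 1), the output can never exceed |w_out|,
# so the network cannot track a + b once the true sum passes that bound.
def tiny_adder(a, b, scale=0.01, w_out=100.0):
    h = math.tanh(scale * (a + b))  # hidden unit; ~linear near zero, saturates far from it
    return w_out * h                # linear readout

print(tiny_adder(2, 3))        # close to 5: small sums work
print(tiny_adder(5000, 5000))  # saturates near 100, nowhere near 10000
```

A trained network has messier weights, but the same ceiling: its behavior is shaped by the range it was trained on, much as our own arithmetic is shaped by the magnitudes we actually handle.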
Your argument is a non sequitur. Our brains are chaotic messy networks of neurons and neurotransmitters and so on. Whatever mathematical ability we have is emergent and supervenes on many layers of abstraction. There is no reason to suppose that we should have a perfect innate grasp of mathematics.
“I don’t think it’s mysterious.”
This, I think, is the crux. We see the world differently. We’ll never agree on this.
“… or rather the idea of a being like us which doesn’t have a subjective “what it is like” doesn’t make sense.”
I believe that’s Chalmers’ point. It’s a litmus test. If you agree the concept is at least coherent, you should be open to his argument. If you don’t, then you won’t.
I disagree the concept is coherent. I don’t believe such a world is conceivable. Ours is the way it is in large part because we do experience and feel.
(I do generally like Chalmers, but I’m not sure I buy his p-zombies.)
“That explanation is sufficient to solve the mystery.”
Not to me. I want to know how from data processing emerges self-awareness.
While I agree that no system extant is anywhere near complex enough, the extrapolation that complex enough data processing gives rise to consciousness seems a magical explanation to me. An explanation with no facts in evidence.
“That just means that the model you have of your environment is very detailed.”
This is more than details as I know them.
Again, there is an apparent extrapolation here that consciousness magically appears if there’s enough stuff in one place. Critical mass consciousness?
“It seems you’re jumping from amazement at transcendental numbers to the conclusion that human intelligence is not algorithmic.”
No, that’s really not the point or a fair way to put it. Further discussion should be under that post, not here. Just one point:
“Since discrete systems can simulate non-discrete systems to whatever precision you require,…”
Chaos mathematics has shown that discrete calculation cannot model certain analog systems, even in principle. (At least until you get down to the quantum level, and maybe not then.) It’s not a lack of precision; it’s that there’s a need for precision at all.
“And if our minds are based on biology, why aren’t we better at it?”
Every person on Earth can use biology to make new people! They don’t even need training!
But, yes, this is a non-argument. It’s a slogan. I figured since the Hard AI people have “Information Processing!” as a slogan, my side should have one, too! XD
I’m surprised you don’t think p-zombies are conceivable. What about p-zombies that are not necessarily physically identical to us? Say we encounter a very intelligent alien species that seems to have as rich a mental life as us. Would you entertain the possibility that there’s nothing going on on the inside? Or is that inconceivable?
Nobody is saying that if you just keep adding complexity that consciousness will emerge by magic. Instead, consciousness is just what it feels like to be an information processing system with an organisation similar to that of a human being. You can’t get that level of functionality without complexity, so complexity is a necessary but not sufficient condition for consciousness. But any system with functional capabilities similar to humans will have consciousness, systems with lesser though analogous capabilities (animals for instance) can also be said to be conscious, more (dogs) or less (tapeworms). So to me, consciousness is largely a synonym for “human-like information processing”.
I don’t think so. Or at least you’ll need to expand on that a little for me. Here’s my understanding of what I am trying to say.
A discrete model of a chaotic analog system will perfectly mirror its behaviour for a while, and then the two will diverge. The more precise the modelling of initial conditions, the longer the two will be in sync, though you get rapidly diminishing returns from increasing that precision.
But, again, digital systems can model analog systems to any desired degree of precision, but for chaotic systems it is impractical to achieve levels of precision that will keep the two in step for very long.
However I would say it doesn’t even matter if the simulation and the real system keep in lockstep. I don’t think that perfection is needed for something to be a good model, because even when the two diverge, the simulation will still behave in a manner characteristic of the natural system. It may no longer be identical to the original, but it is still behaving just as a real one does. So if you try to model a real hurricane with a digital simulation, after a while the digital simulation will diverge from the original but it is still behaving in a manner indistinguishable from a natural hurricane (just not indistinguishable from that specific natural hurricane).
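A quick toy version of that, using the logistic map (a standard chaotic system — my choice of example, nothing specific to hurricanes): two runs differing by 10^-12 in their starting point, a stand-in for limited precision, stay in sync for dozens of steps, then diverge completely, yet both remain perfectly ordinary orbits of the map.

```python
# Two runs of the chaotic logistic map x -> 4x(1-x), identical except for
# a 1e-12 difference in initial conditions. The trajectories track each
# other for a while, then decorrelate, but BOTH stay bounded in [0, 1]
# and keep behaving like typical logistic-map orbits: the model stops
# matching the original without ceasing to behave like one.
def logistic(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic(0.2, 80)
b = logistic(0.2 + 1e-12, 80)

print(abs(a[5] - b[5]))                                  # still tiny: in sync
print(max(abs(x - y) for x, y in zip(a[50:], b[50:])))   # order 1: diverged
print(all(0.0 <= x <= 1.0 for x in a + b))               # both still well-behaved
```

The divergence is the chaos; the continued hurricane-like (here, logistic-like) behavior is what survives it.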
“Say we encounter a very intelligent alien species that seems to have as rich a mental life as us. Would you entertain the possibility that there’s nothing going on on the inside?”
Not for one second. 🙂
“Instead, consciousness is just what it feels like to be an information processing system with an organisation similar to that of a human being.”
A belief without supporting evidence. Every information processing system currently in existence denies that belief.
“You can’t get that level of functionality without complexity, so complexity is a necessary but not sufficient condition for consciousness.”
So a complex enough data processing network will not give rise to consciousness? What is sufficient?
“So to me, consciousness is largely a synonym for ‘human-like information processing’.”
That’s just a label and the fact remains: no evidence. Not one single shred supports this view, so it is a belief in the mathematical “transcendence” (or whatever we want to call it) of such a system.
And all such systems we can point to (humans, dogs, etc.) are physical systems in which the complexity (which we agree is necessary) is clear and present and — most importantly — physical.
A belief that calculation of numbers will have the same effect is a belief without any facts in evidence. (It might turn out to be a true belief, but right now nothing supports the view except wishful thinking.)
“A discrete model of a chaotic analog system will perfectly mirror its behaviour for a while, and then the two will diverge.”
That they do diverge means the modeling was not, in fact, perfect, but with that caveat, I agree.
“The more precise the modelling of initial conditions, the longer the two will be in sync, though you get rapidly diminishing returns from increasing that precision.”
“[F]or chaotic systems it is impractical to achieve levels of precision that will keep the two in step for very long.”
Correct. (Which means the model isn’t accurate.)
“I don’t think that perfection is needed for something to be a good model, because even when the two diverge, the simulation will still behave in a manner characteristic of the natural system.”
Yes, that is a crucial point. It could go either way. A model that can support some form of chaos or indeterminacy or mathematical transcendence (or even just introducing error or noise into the system) might demonstrate behavior from which consciousness will emerge.
The question is whether that divergence breaks the process or just shoves it in another direction. A more important question might be whether uploaded minds (which so many hope for) would remain anything like the original. For all we know, such minds might become insane or just not work.
There still remains a critical difference between a software model of a physical process and that physical process. Modeling a process is not the same as being part of that process. Hurricanes are a fine example. No model ever blew anyone’s shutters off.
OK, like me, you seem to think that if anything functions just like us (e.g. an alien), then it must be assumed to have consciousness. I think this attitude implies one of two ways to look at it. Either it is impossible to realise these functions without having realised consciousness, or consciousness is just what it is to realise these functions. I tend towards the latter interpretation, so I think that if we can get a computer to function like a human, it will be conscious. I’m guessing you lean towards the former interpretation, so you think we can never make a computer which functions like a human without figuring out how to give it consciousness first (presumably requiring some additional hardware and not just clever software).
If you are right, there are hard theoretical limits that mean we could never make a computer pass a maximally robust version of the Turing Test. This seems unlikely to me, as I can’t see why doing a whole brain simulation (infeasible in practice) would not lead to it behaving just like a human. Since our current understanding of the physics at play within a brain seems to be compatible with computer simulation, it seems to me that your view requires new exotic uncomputable physics to work. That’s not to say you’re wrong, but it does seem to make your view less plausible.
It’s not just complexity, it’s organisation. I don’t think consciousness is really a well-defined thing, so it’s not possible to give criteria of sufficiency for consciousness. Consciousness is just a word we use for how it feels to be an information processing entity somewhat like us (especially an entity that can perform information processing tasks on the topic of what it feels like to be that entity!). Anything that is so like us that it can actually report on what it is feeling (and especially if what it is doing is quite analogous to what we do when we do the same) is feeling, in my view.
So, the Internet is complex. It may be as complex as a human brain. I don’t know. Let’s say it is. That doesn’t mean I think it is conscious, because it is not organised anything like a human brain and does not process information in anything like the way a human brain does.
I stipulated that it continues to behave in a way characteristic of that process, and you agreed. So it just diverges and goes in a different direction over time. It doesn’t break. From behaviour alone, it is indistinguishable from a natural instance of the process: given details about a natural instance and details about a simulation, no expert would be able to tell which is real and which is virtual.
Sure, you lose some precision when you represent analog systems discretely. But our identities cannot be tied into the absolutely precise values of the states of particles in our brains, because these states are being messed with all the time as we are bombarded by cosmic rays or other interactions from our environment. We are generally pretty robust, stable creatures, not fragile houses of cards that crumble into madness every time an acoustic vibration traverses our skulls, so there has to be some tolerance for error. As long as your simulation is precise enough to be within that tolerance, there’s no problem.
So, what I think is that it will retain the personality of the original and behave in a way characteristic of the individual. It will evolve differently over time, just as my twin would in a different branch of the Many Worlds Interpretation. Not just because of chaos theory and the imprecision of representation, either. It will also have had a different life experience, so of course it cannot be expected to remain in lock-step. But it will have just as much claim to my identity as the physical me. It will be as much a continuation of my past self.
That argument, really? Sorry, but to me that’s practically the definition of cringeworthy! SAP has addressed it here many times. I would also point you to my blog on the subject here: http://disagreeableme.blogspot.co.uk/2013/08/consciousness-is-not-like-photosynthesis.html
In addition to the arguments there, I would point out that a virtual hurricane could indeed blow the shutters off the home of a virtual person, so to say that a virtual hurricane can’t hurt anybody is to beg the question by assuming that the viewpoint of a virtual person is not really a viewpoint at all. If the viewpoint of a virtual person is allowed, then in principle we could all be living in a simulation and you can’t actually know for a fact that the hurricane that blew your shutters off is physical at all. So it’s not really an argument at all, it’s just a clever-sounding assertion with no real argumentative weight.
I’ve been arguing the hard AI topic now for weeks, and I’m bored with it.
We’re never going to agree, and I imagine we both have better things to do than craft long arguments at each other on someone else’s blog.
We can continue the discussion already started on my blog if you want, but I’ve said enough here.
Um… I hope that didn’t come across as dismissive. No offense is meant. It’s just that in the last three or four weeks, I’ve written nearly 20 posts and then tons of comments on the topic. I need a break! (I just can’t stay on one topic for long. 🙂 )
If you really want to continue this, I’ll bookmark the comment and come back. (But we should then carry on on your blog or mine.)
Just jumping in here to note that I have no objection if you guys want to continue the discussion here. I think I’ve said just about everything I have to say on this topic (at least for now) but I don’t mind seeing what others are saying.
Yeah, I am, likewise, pretty much talked out on the topic for the moment.
That’s fine, Wyrd. No problem.
I think a lot of the issue is that people think “explain subjective experience” and don’t realize that what they mean is “share subjective experience.” Even if we could transfer neural patterns directly from one person to another, there’s not really any reason to think that “red” for one would be “red” for the other, but we don’t need to do that to explain subjective experience. Red is red because that person’s neurons associated with red are lighting up.
Another issue is that “consciousness” is a loosely defined idea. For one person it is a sense of identity, for another it is the ability to react to your environment, for another it is the ability to imagine one’s place in the future, and yet another thinks it is libertarian agency. These different conceptions can apply to a wide variety of different entities, including plants. It seems like all we really know is that we have it. We don’t know where it cuts off.
Good points. Personally, I think any definition of consciousness that leads to us regarding plants or the tax code as “conscious” is problematic. I think most people would regard only something that has an inner experience as conscious.
The discovery of what builds a consciousness is the one thing I wish for the most before I die. I personally have quite a few examples of experiences that may hold some vital answers if they could be repeated and understood. I will express them matter-of-factly for you guys to get the general idea, and will hold any speculation unless asked.
What occurred while leaning against a light pole holding a transformer that blew out. My consciousness experienced a light so bright it was removed from my physical senses. I, or my consciousness, sensed floating in a void of white, and a thought was formed: “Where am I?”, followed by another thought, “What am I?”, and then the thought “What is an I?” While pondering that, my vision slowly returned, immediately followed by an extreme panic coupled with a racing heart. It seemed that I was racked by panic while I was detached from physical input. Noticing a smoking squirrel and smoking clear oil on the ground brought me back to reality. The thought passed: “Wow, he (the squirrel) has it worse than me.”
What my body uncontrollably did after a man pointed a 12 gauge at my chest. As soon as it occurred, I consciously allowed something inside of me to become active. After that I was unable to find the correct way to control my physical functions. I was aware I had grabbed a stick and that the words “I will stick this up your ass” came from me, but I was not happy with those choices and was trying to stop any and all actions. I had the thought in my head that this was stupid and I needed to stop myself. But the person who sits and ponders the world, the one writing this comment, was unable to affect what my physical body was doing. I could sense an emotion of utter glee, and when I got control again I had a lingering concept fading away: “If he shoots me, then I will be able to kill him.” He didn’t, thankfully.
Ok, that is good enough to ponder for now. I was a very proficient conscious self-maintenance guy, which I learned by way of boredom while lying in a hospital bed for 3 years: diagnosed at 17 with osteosarcoma, followed by cisplatin and adriamycin chemo. It’s funny the things we use to entertain ourselves. =)
Interesting hair raising stories. Hope the chemo is or was successful.
Thank you, and yes, I am doing great. That was in 1987 when I was first diagnosed. The chemo was supposed to make me sterile as well, but I had a son a few years later. It came back in my lungs two times and I was down to a 3% chance to live LOL. Also I would like to note that I am not a mean person; I am very humble and kind. I do not like it when the thing takes control of me, which has happened about 3 times.
I continue to think we’re largely on the same page here. I’m curious about a couple of points:
“Science gave up looking for ultimate understandings centuries ago in favor of settling for pragmatic ones.”
I’m guessing you mean metaphysical ultimate understandings? String theorists and quantum gravity physicists can be said to be seeking a pretty ultimate understanding. Even just what goes on at CERN is very fundamental fabric stuff.
I don’t know that science ever really gives up nibbling at trying to understand it all.
“One of the most common is to posit exotic quantum mechanics.”
🙂 As opposed to the more ordinary quantum physics! 🙂
That made me smile, but who is positing exotic physics? If you mean Penrose, I think his theory is that quantum effects, which would be present in the microtubules because they’re that small, affect how the neurons work.
Not that there’s any evidence (at all) that happens, but it’s one of those things that isn’t ruled out, yet. [shrug] Could be. Could be something quantum happening in the synapses, too; they’re pretty tiny.
“…some people conclude that this means there must be some new aspect of reality…”
That doesn’t have to imply new physics, though. It can just mean a deeper, better, understanding of existing physics. (Here’s a question for you: Would you consider Special Relativity to be new physics? I can see arguing it either way, but I’m leaning towards no, myself.)
“[T]he mind may […] use certain quantum phenomena such as entanglement. […] But the data most strongly indicate that consciousness arises from neural circuitry.”
Which may include that quantum phenomena. I agree, entanglement is an interesting candidate!
“Understanding how the mind arises from it is largely understanding how information is processed in the brain’s neural network.”
It may involve a great deal more. Not saying it’s not doable, but I think “information processing” is too vague a term to actually mean anything. That means it’s a black box we have yet to really understand.
As you say, very complex network made of very complex parts. And parts tiny enough it’s possible quantum behavior is involved. Also massively parallel… like 500 trillion tiny computers networked together.
And something struck me today thinking about how neurons fire. They’re binary in the on-off sense, but they turn on-off in a pulse train while firing. The timing between the pulses is entirely analog down to pretty much the quantum level.
Which means the dendrite system is processing analog inputs and making threshold decisions (as opposed to polling or count decisions). Neurons are integrating all that! Pretty awesome!
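That analog-integration-plus-threshold idea is often caricatured with the textbook “leaky integrate-and-fire” abstraction. A toy Python sketch (a gross simplification of real neurons, just to illustrate analog inputs in, discrete threshold decision out):

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Integrate analog inputs with decay; emit a spike on threshold crossing.

    Returns the list of time steps at which the neuron fires.
    """
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leaky analog integration
        if potential >= threshold:              # threshold decision
            spikes.append(t)
            potential = 0.0                     # reset after firing
    return spikes

print(integrate_and_fire([0.2] * 10))  # weak steady input: fires late, rarely
print(integrate_and_fire([1.5, 0.0]))  # strong burst: fires immediately
```

The output is binary (spike or no spike), but exactly when a spike happens depends continuously on the analog history of inputs, which is the analog-timing point.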
I agree we’re on the same page. (Based on our previous discussions, I didn’t expect us not to be.)
On science, I was referring to looking for Aristotelian final causes as opposed to settling for efficient causes. We largely now regard final causes, aka teleology, to be unjustified metaphysical baggage, but in the 16th century it bothered people when Galileo and others stopped worrying about it.
On exotic physics, Penrose is one of the culprits I had in mind. Others include Deepak Chopra and his ilk. Of course, Penrose is an actual scientist, but a theoretical physicist, not a neuroscientist.
“Would you consider Special Relativity to be new physics?”
Not anymore. I certainly would have seen it as new in 1905. It’s worth noting that it was driven by late 19th century experimental results that begged for an explanation, not by idle speculation.
“Also massively parallel… like 500 trillion tiny computers networked together.”
Synapses are more complicated than transistors, but I think calling them computers is an overstatement. I have heard the label applied to neurons which, as you discuss, are far more complex.
“Others include Deepak Chopra”
Oh, okay. I don’t pay any attention to him. Is he positing exotic physics? Yikes!
(Was he in that What the [bleep] do we know? movie? That was a silly movie.)
“Penrose is an actual scientist, but a theoretical physicist, not a neuroscientist.”
Isn’t that judging the source and not the content? Thing is, we three are all on the same page: the mind is not algorithmic. He’s just trying to explain why that might be so.
“…not by idle speculation.”
True. There was a key element of speculation involved, though. Einstein famously was pondering what would happen if you went as fast as light and looked at a light wave. Would it be like looking at a frozen sea wave? That seemed preposterous, so something else must be true.
That’s when he began thinking about what Maxwell’s equations implied. The speed of light, the number, turns up in those equations, but no one really thought about what it implied. (And then he remembered what he’d learned from Minkowski about 4D spacetime.)
Newton, as you know, speculated about why apples fell and the moon didn’t.
Even quarks at one time were basically a math model dreamed up by Murray Gell-Mann to explain hadron interactions. Turned out — ha! — quarks is real! Another math unicorn turns horse.
Don’t go knocking idle speculation! The ghosts of Newton and Einstein will haunt you! 😀
“Synapses are more complicated than transistors, but I think calling them computers is an overstatement.”
This is from theoretical neuroscientist Kenneth Miller’s article (the one you linked to a few posts ago; emphasis mine):
Integration of past histories sounds pretty computer-ish to me!
I’m not sure neurons are necessarily more complex other than the mind-boggling complexity of their interconnections.
Chopra is…well best ignored. Truth be told I haven’t read anything by him since I stopped commenting at HuffPost, and I’m happier for it.
“Isn’t that judging the source and not the content?”
Yep. We’ve discussed this before. Not being an expert in neuroscience, I’m going to weigh what actual neuroscientists say about neuroscience much more heavily than what others say, even when that other is an expert in something else. I’d also weigh what physicists say about physics much more heavily than what neuroscientists say about it, should the hubris run the other way.
All the cases of speculation you describe were driven to explain observed phenomena. Speculation on quantum consciousness and the like…aren’t.
On synapses, I’m not arguing that their molecular machinery isn’t enormously complex, but if that qualifies them to be a computer, then all cellular machinery are computers.
“I’d also weigh what physicists say about physics much more heavily than what neuroscientists say about it, should the hubris run the other way.”
Fair enough. That’s up to you, of course. It bothers me to be dismissive about work I don’t know. Some scientists are quite competent in multiple disciplines. Even assuming that it’s hubris… Maybe it’s not.
“Speculation on quantum consciousness and the like…aren’t.”
Isn’t consciousness an observed phenomenon? At least as observed as “the moon doesn’t fall down!” Why is speculating about invisible forces right in one case but not the other?
You know, it was Planck speculating about why ovens don’t become infinitely hot and melt that led to quantum physics. He made a leap of imagination that opened one hell of an astonishing door.
“On synapses, I’m not arguing that their molecular machinery isn’t enormously complex, but if that qualifies them to be a computer, then all cellular machinery are computers.”
Oh, I think that’s probably true! The nucleus alone is a complex little machine. (DNA is pure data!) Even the little mitochondria are pretty complicated!
“It bothers me to be dismissive about work I don’t know.”
We all have limited time, and have no choice but to use our prior experience to decide what’s worthy of it. My prior experience is that non-experts making extraordinary claims about a subject, claims that the experts in that subject don’t take seriously, are virtually never right.
Part of the hard problem boils down to the limits of analysis. With due respect to Kripke, there are no real rigid designators (OK, maybe the ice climb in Vail, but that is regarding an aesthetic standard).
We get all tingly about quantum weirdness – we can’t have the complete description of an electron in terms of its location and velocity, for instance, even though we have a very good idea of where and how fast the average electron will be going in any particular situation.
However, do we have a complete description of any macroscopic object either? No, we have the same sort of averaged-over account, which sacrifices specificity for overall predictive power.
In the end, only the bat cares about what it is like to be itself – not what it’s like to be a bat; Nagel got that part wrong – and that is something which is simply not amenable to analysis.
And yes, neurosurgeons are surgeons who specialize in surgeries on nerve tissue, which means that they are just as likely to be bat shit crazy as any other surgeon (i.e. quite likely), cases in point.
“But unlike these problems, which exist in realms we may never be able to observe, the brain shows no sign of being fundamentally beyond careful observation.
Yes, understanding the brain will be hard, very hard. But even though there are many people who don’t want the mind to be understood, neuroscientists will continue making progress, year by year, decade by decade. The gaps will shrink, eventually closing off the notions that depend on them.”
You switched from saying “brain” to “mind”, thereby equating them. Just saying. 🙂
Yes, the hard problem may never be answered. And yes, we shouldn’t judge a scientific theory on whether or not it solves the hard problem (I’m inclined to think that, by definition, it can’t). On the other hand, scientists do try to answer it without fully understanding it. They do philosophy unwittingly, without being experts in it. If they want to dismiss the problem as irrelevant to what they’re doing—studying the brain—then fine, but they should acknowledge that they are stepping outside the realm of their expertise when they say things like “consciousness is an illusion.” That’s a philosophical assumption that they rarely attempt to argue for in a way that would satisfy experts in philosophy.
For my part, I’d be very happy if they could figure out the brain, specifically for medical purposes. Whatever knowledge has already been acquired by scientists seems to take a very long time to trickle down into the medical arena, though. As a non-expert, I hear of advances in neuroscience (from media) and get a general sense that things are moving rapidly there. But when I go into a neurologist’s office, I get a lot of shrugging and pills—the effects of which they readily admit they don’t really understand—and advice to try yoga. (The yoga thing came from both a neurologist and a cardiologist). This is not to say that the brain is an impossible thing to figure out, but just that the general population may be getting a slightly overly-optimistic view of what is known.
“We shouldn’t accept extraordinary claims without extraordinary evidence, particularly when far less extraordinary theories explain the facts.” – that sounds just like Chris Hitchens on atheism, Mike! The thing is, as regards consciousness, the theories never do explain the self-evident fact of (what we call) subjectivity. From what I’ve read, a multi-disciplinary approach seems more likely to yield fruit than any specialist field alone, such as neurophysics, psychology, quantum theory, biology, philosophy of mind, or whatever. I like V.S. Ramachandran on consciousness and the self:
Whoops – that found its way into Tina’s comment – sorry!
There are a lot of things that I disagreed with Hitchens on, but that isn’t one of them.
On multidisciplinary approaches, nothing exists in isolation, but in understanding how consciousness arises in the brain, I think neuroscience will be the main show. Philosophy of mind and psychology might help in generating hypotheses and interpreting results, and of course neuroscience is applied biology, which is applied chemistry, which applies physics, etc. But neuroscience will be where the rubber meets the road.
Thanks for the video. I’ll have to watch it later. I’ve seen Ramachandran talk before, but I’m not sure if it was ever specifically on consciousness.
“You switched from saying “brain” to “mind”, thereby equating them. Just saying. 🙂 ”
You caught me 🙂
I actually don’t read too many neuroscientists who use the “consciousness is an illusion” line. I’ve mostly seen it used by philosophers like Daniel Dennett. From what I recall, even they don’t say it doesn’t exist, only that it isn’t what we think it is from introspection.
I personally think the phrase “is an illusion” is overused, and is hyperbole in most places where it does get used. Its main utility is that it gets people’s attention. Its main detriment is that it causes confusion.
Most neuroscientists also strongly emphasize how much there remains to learn.
How are you doing? Any change at all? (Feel totally free to ignore or answer privately if you prefer.)
I’m glad to hear that not too many neuroscientists use the line. They’d have to explain why their experience of the scientific experiments that discredit consciousness isn’t also an illusion. 🙂
On the health, thanks for asking! Still the same. I stopped taking SSRIs, SNRIs, etc. None of that seemed worth it. Now I’m on a more powerful upper, and I still take naps in the morning. And I’m starting to think all these pills have fried my brain. Seriously, I can’t seem to get anything done anymore. I’m missing appointments, showing up to things at the wrong time, forgetting things a few seconds after thinking of them. Where was I going with this? 😉
I’ve come to the conclusion that no one knows what’s going on. I am trying yoga, but it’s gonna take some time to get used to. I really don’t like being told to “direct my breath to the spaces that need attention” when I can no longer feel my legs and I’m about to do a face plant. All this thinking about breathing…isn’t that what the autonomic system is for?
How are things with you? The shoulder still improving?
Sorry to hear that. Hope the yoga helps. I’ve known many people who felt that it helped them. I’ve never felt the appeal myself, but then I’ve never had the challenges of many of the people who are drawn to it. I did have a friend who repeatedly tried to get me to do yoga stretches for my shoulder, but at the time, they were simply too aggressive.
The shoulder is doing mostly well. I’ve been uneven lately with the exercises, and my shoulder occasionally emits a low grade ache to remind me of it. But it’s nothing like what it was in December and January. I’m functional which, as you get older, tends to be what satisfies you.
Everyone in the whole wide world seems to think yoga is the cure for everything. Okay, I’m being hyperbolic. 🙂 But I swear, if I hear it one more time…
My husband tried it and it hurt his back. I think it’s one of those things that works well for some, not so much for others. You have to be really careful with it.
I went to a class designed for older folks with bad backs, and that one put me to sleep. Literally. When the woman started talking, I felt that jolt to reality that reminded me of nodding off in grade school and slipping off the desk. That yoga class I didn’t mind too much, but I didn’t feel any benefits beyond the nice nap. 🙂
Right now I’m working with videos and learning that yoga can be pretty damned hard. Most of it I can’t do. Even downward dog requires quite a bit of upper body strength (for me anyways) and I usually have to stop early.
I’m considering developing my own routine, minus the bullshit woo woo talk, minus the contortionist stretches that leave me in pain. I have a nice CD from my musician friends (Quantum Calm…you could check it out on Spotify or Pandora if you’re interested) that is specifically designed for yoga or relaxation. I might just listen to that and do my own thing after I get enough of an idea of what to do.
Glad to hear your shoulder is in that satisfying functional mode at least. It’s amazing that you managed to avoid surgery. Good deal!
My yoga friend had this gigantic book by some yoga guru with every yoga stretch ever, including beginner and intermediate versions to get you to the advanced ones. It was a fascinating book, but even the beginning shoulder stretches required raising your hands above your head, which for me at the time was a no-go. My friend was absolutely sure yoga would solve my problems. She said it had done wonders for her.
But yeah, I can definitely see someone hurting themselves with it if they’re not careful. Hope it ends up working for you.
Mike, I want to drag your attention to a paper by David J. Chalmers, Facing Up to the Problem of Consciousness. If you have not read it yet, I fully encourage you to do so.
Thanks J.F. I’ve actually read articles by Chalmers on the hard problem, but I don’t know that I’ve ever read the original paper where he formulated it. Does he cover things there he doesn’t cover in his later writing?
No, he does not, I just thought that it complements your piece rather nicely. But it is a great read if you want to make your own thinking clearer, particularly on the way the hard problem is traditionally construed.
The movie Revolver with Ray Liotta and Jason Statham touched on a subject regarding the subconscious as a self-serving evil force. Ignoring the evil religious aspect of the message, my experiences somewhat validate this belief. It may just be that my subconscious is geared more to suggest actions that cause regrets, but the message of the movie had a few doctors of psychiatry at the end who tried explaining where the facts came from. Basically the process is that your subconscious makes attempts to convince you that it is you. After it gains your trust, the subconscious can alter your actions with thoughts of insecurity, paranoia, jealousy and the like.
I can’t know what goes on in anyone else’s mind, but I have, through meticulous backtracking, traced every major regret to concepts derived from suggested ideas. Not the ideas that are at my core already, but ones that were being inserted into my beliefs. Over the years I have mostly forgotten how I traced the logic back to its source, but I still remember the moment I found the source.
The movie Revolver sounds interesting. I might have to check it out.
I think the main thing to be leery of when trying to understand consciousness, is that introspection, surveying our subjective experience, is of limited value. It has great value psychologically, but not in fundamentally understanding awareness itself. The problem is that we’re not… conscious of the limitations of consciousness.
If you haven’t seen it before, the TED talk by Dan Dennett gives some good demonstrations on the limits of our ability to know our own consciousness.
An interesting aspect of the “hard problem” is that we know about it. Somehow, the knowledge that there are qualia and that this is strange and difficult to understand, reaches the level of being content of our thoughts (and of our blog articles and comments to them). It cannot be something that exists separate from the cognitive processes because we are observing the existence of qualia and we are thinking about this.
We might gain an understanding of what is going on here, and ultimately an explanation, by trying to understand how it happens that the qualia lead to explicit thought about them. The resulting theory might solve the hard problem, but might seem unsatisfactory because it only yields words, not the qualia themselves. It should, however, be possible to describe inside such a theory itself why it must appear unsatisfactory in that way.
Just an intuitive thought at the moment. Don’t know if I managed to formulate this in an understandable way. I hope I don’t produce too hard a problem with this 🙂
If I’m understanding correctly, I think that’s a good summation. An explanation of qualia will not be in terms of qualia. It will be like explaining what’s on stage in terms of backstage mechanics, but it won’t explain why there is a stage, why the show exists.