Consider a rock outside somewhere. It sits there, starting off in the morning in a certain state. The sun comes out and proceeds to warm it up. Its temperature climbs through the day until the sun sets, whereupon it cools through the night. The cycle starts again the next morning. The rock is going through a series of states throughout the day.
We can model the changing states of the rock with a computational model, which we’ll call R. However, if we can model the rock with that computation, then we can regard the rock to be implementing that computation. In other words, the rock can be seen as a computational system implementing the algorithm R.
But what if we want to consider the rock to be implementing something other than R? In truth, there are probably numerous computational models that would describe what is happening in the rock, depending on the level of detail we want to work at and the perspective we want to take. But suppose we want to interpret the rock to be doing something non-rockish.
Well, we can create a new model, which we’ll call R+M (rock + mapping). Let’s implement a clock algorithm with R+M. Naively, this might seem straightforward. The rock’s temperature will vary throughout the day, so all we need to do is map each temperature to a specific time. That’s what M adds to the model. Voilà, we’ve interpreted a rock to be implementing a clock.
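The idea can be sketched in code. This is a purely hypothetical illustration: the temperature curve, the function names, and all the numbers are invented for the example, not taken from any real model.

```python
# Hypothetical sketch of R+M: interpreting a rock's temperature as a clock.
# The rock's "states" are temperatures; M maps a temperature back to a
# time of day. All names and numbers here are illustrative only.

def rock_temperature(hour):
    """A toy model R of the rock: warms toward mid-afternoon, cools after."""
    # Coolest at 10 degrees, peaking at 25 degrees at hour 15 (3 PM).
    return 10 + 15 * max(0.0, 1 - abs(hour - 15) / 9)

def mapped_time(temperature):
    """The mapping M: invert the warming half of the curve to read the hour."""
    return 15 - 9 * (1 - (temperature - 10) / 15)

# R+M: the rock "tells time" only through the mapping M.
print(mapped_time(rock_temperature(10)))  # recovers hour 10 on the warming half
```

Note that M here is a pure lookup: it adds an interpretation but does no work of its own, which is what makes the naive version seem innocent.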
But is the rock really implementing a clock algorithm? Before you answer, consider that if you ran a clock algorithm on your computer, the actual sequence of states inside your computer could be modeled at a much more primal physical level involving transistor voltage states, which might bear limited resemblance to a high level clock model. We’ll call this primal model C. Your computer has an I/O (input/output) system, which maps C into presenting all the things a clock would present. We’ll call the overall model of this C+M (computer+mapping).
It’s not C by itself which provides the clock, but C+M. What’s the functional difference between R+M and C+M? Certainly R and C are radically different, but aren’t we compensating for those differences by their respective versions of M?
If we can do this to consider the rock to be implementing a clock, can’t we do it with more sophisticated algorithms? Suppose we want to consider the rock to be running Microsoft Word. So we implement a new R+M model, but this time M adds everything to map the rock temperature states to the computational states of Word.
But if we can do that, is there then any computation, any algorithm we can’t consider the rock to be implementing?
If the mind is computation, couldn’t we then extend R+M to be a conscious mind? In other words, with the right perspective, isn’t every rock implementing a conscious mind? Not just one mind, but every conceivable mind? In other words, if the computational theory of mind is correct, and we can map the sequence of states for any complex object, isn’t the universe teeming with consciousness?
Before you start having any concern about the way you might have treated the last rock you encountered, let’s back up a bit. In the first scenario of R+M above, we mapped the states of the rock to a clock. But this is problematic because the rock has variances in its inputs that the clock model doesn’t. For example, cloud cover and other weather conditions may affect exactly how warm the rock becomes and there may be other environmental factors. When these come into play, we may find our R+M clock being a bit unreliable.
No problem. We’ll just put in some adjustments into the mapping M. When the weather is overcast, or if it’s raining, or windy, we’ll adjust which temperature maps to which time to take into account the weather.
Except now, is our mapping still just a mapping? It’s taking in its own inputs, performing its own logic, and essentially dynamically adjusting as needed to ensure that the states of the rock map to the states of a clock. Of course, to have implemented Word or a mind, we would have to take similar albeit much more aggressive steps in the mappings for those algorithms. Does it still make sense to say the rock is implementing a clock, or Word, or a mind?
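The worry can be made concrete with a sketch. Once M takes its own inputs and applies its own logic, it starts to look like part of the implementation. Again, everything here is hypothetical: the weather offsets and the temperature curve are invented for illustration.

```python
# Hypothetical sketch: a mapping M that is no longer a mere lookup.
# It takes its own inputs (the weather) and performs its own corrective
# logic, which is exactly the worry raised above. Values are illustrative.

WEATHER_OFFSET = {"clear": 0.0, "overcast": -3.0, "rain": -5.0, "windy": -2.0}

def adjusted_mapped_time(temperature, weather):
    """M with its own inputs: correct the rock's reading for weather,
    then map the corrected value to an hour on the warming half of the day."""
    corrected = temperature - WEATHER_OFFSET[weather]  # M's own computation
    # Toy inversion of a curve running from 10 degrees at dawn to 25 at hour 15.
    return 15 - 9 * (1 - (corrected - 10) / 15)

# On an overcast day the rock reads about 3 degrees cooler than it would on
# a clear day, yet M quietly compensates, doing work the rock isn't doing.
print(adjusted_mapped_time(13.667, "overcast"))
```

The more environmental factors M has to compensate for, the more of the clock’s behavior lives inside M rather than inside the rock.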
If two implementations of a piece of functionality are not physically identical, then isn’t it a judgment call whether they are functionally the same? Algorithm X executing on a desktop PC is not physically identical to algorithm X executing on an iPhone. Both implementations may have originated from the same source code, the same action plan in essence, but we would still need a mapping between the physical processes to consider them to be the same. Run algorithm X on a quantum processor, and the mapping might become extremely complex.
So our rock is implementing every algorithm? There have been a number of proposals to explain why this isn’t true.
First, it’s been pointed out that, in the case of something like a rock, the mappings are created after the fact, observing what happens in the rock and creating a framework to map it to a meaningful algorithm. This certainly makes the rock unusable as a computing tool. But does that mean it isn’t actually implementing the algorithm?
Another criticism, related to the first, is that the mapped algorithms are fragile, falling into incoherence if the rock’s state transitions don’t flow in just the right way. In other words, the algorithm isn’t just one of many the rock could be implementing, but the only one it could be implementing with the given interpretation.
That leads to challenges involving causality and dispositional states. The reasons why the rock moves from one state to another have no relation to why the (reliable) clock, or Word, or a mind move from one state to another. Each state has a dispositional nature, that together with inputs into the system, causally lead to the next state. If the physical state’s dispositions don’t have some logical resemblance to the disposition of the computational model’s state, then the mappings are arbitrary.
There are other criticisms. Is the rock really a computational system? We spend a lot of money to purchase computational devices which are only created with a lot of engineering. If the mind is a computational system, evolution invested a lot of resources developing its computational substrate in the brain. We don’t run Word on just any matter. Minds can’t exist in just any piece of biological matter. In both cases, it takes a very specialized structure. Regarding the rock as equivalent to a brain or silicon chip seems to ignore these important facts.
So, how complex can the mapping become before it is invalid? What specifically makes it invalid? Do the objections laid out above resolve the issue?
My own answer is that I think the mapping becomes invalid when it crosses into becoming part of the implementation. Due to the absurd work required of the mappings, I’m not inclined to view the rock as implementing Word or conscious minds. It seems to me that the mapping is implementing these things and blaming it on the rock.
But as my friend and fellow blogger Disagreeable Me pointed out in our discussions, attempting to objectively nail down exactly where it crosses this line may be a lost cause. Every mapping, including the ones we use for our computational devices, has some degree of implementation. Ultimately, it may be that whether a particular physical system is implementing a particular algorithm is subjective, which implies that whether a system is conscious is also subjective.
Now, I think the idea of a rock being conscious requires a perverse interpretation, a ridiculous mapping, so I’m not going to be worried about the next rock I break apart or throw. But this becomes a more difficult matter for simulated beings, whose internals might be just close enough to those of a conventional conscious being to give us moral quandaries.
Many people see this is as a problem with the computational theory of mind. While the consequence is profound, I don’t see it as a problem, but simply a stark fact of reality. To understand why, let’s back up and consider exactly what the computational theory of mind is. It’s the belief that the mind is what the brain does, as opposed to some separate substance. In other words, the mind is the function of the brain, and that function can be mathematically modeled.
But isn’t the purpose of any functional system always open to interpretation? To say otherwise is to veer into natural teleology, the belief that natural things have intrinsic purpose. Pursuit of teleology was abandoned by scientists centuries ago, because it could never be objectively established. I fear we might be discovering that the existence of a mind might lie on the far side of that divide.
There’s a strong sentiment that consciousness must be a fundamental aspect of reality. While it certainly is a fundamental aspect of the subjective reality of any conscious being, reality doesn’t appear to be telling us that consciousness is objectively fundamental, at least not in the way of a fundamental force like gravitation, electromagnetism, etc.
Unless, of course, I’m missing something?
This post was inspired by a lengthy conversation with Disagreeable Me (and others). He has posted a much more rigorous and philosophically termed entry on it. If you’re feeling particularly energetic, there is a Stanford encyclopedia of philosophy article that covers this issue in pretty good depth. If you’re feeling truly masochistic, check out the papers cited by Disagreeable Me or the Stanford article.
21 thoughts on “Are rocks conscious?”
The history of consciousness speculations (they have yet to reach the dignity of being called consciousness studies … they are close, though) is that our minds have always been linked to the latest technological innovations. Pre-evidential thinking had our minds being created by a metaphysical “spirit” called a soul. (Metaphysical means imaginary, by definition.) Then our brain processes were likened to telegraph systems, then telephone networks, then they were considered to be computers; today it is in vogue to liken them to computer networks and/or computational algorithms.
Eventually, I suspect, we will reverse engineer our mental processes and discover how they function (we are just now getting evidence regarding unconscious processing) and then we will “know.” I just wish that people would stop grasping at “answers” when we aren’t even sure what the questions are. Jumping to conclusions is not a very effective mode of traveling anywhere.
I think it’s human nature to try to explain things we don’t yet understand in terms of what we do understand. People have speculated about consciousness for millennia, but until the last century or so, it’s pretty much been speculation floating in mid-air. Rene Descartes couldn’t conceive of how the mind could work physically, which is why he formulated substance dualism, although in reality what he formulated matched intuitions people have had since the dawn of humanity.
But I think implying modern theories are just as wrong as ancient ones is itself wrong.
That said, I’m a computationalist, but only in the broadest sense, not in the sense of subscribing to any particular theory, most of which I find too speculative. The models I think have the most credibility are the ones from computational neuroscience, which start with the actual biology and build from there.
Based on your description, it seems to me that the conscious rock idea just adds new imaginary inputs to counteract the imposed imaginary inputs that go to other entities, and to compensate for the lack of processing inside, so as to wind up with the same actual results. Seems absurd, and I think your discussion of computational theory of mind stands fine on its own (or better?) without it.
That’s my sentiment as well. The problem is finding exactly where the mapping objectively crosses that boundary.
We could imagine a null algorithm, R, that literally does nothing. If we apply an appropriate mapping, M, our system can become a clock, or Word, or a linux server. That’s the extreme limit of the conscious rock argument.
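The extreme case described here can be sketched directly: if R does nothing, then M must do everything. This is a hypothetical illustration, with the "clock" living entirely inside the mapping.

```python
# Hypothetical sketch of the null-algorithm limit: R contributes nothing,
# so the mapping M must perform the entire computation itself.
import datetime

def null_algorithm(state):
    """R: the null algorithm; the 'rock' does literally nothing."""
    return state

def mapping_M(state):
    """M: since R tells us nothing, M computes the time entirely on its own."""
    _ = null_algorithm(state)  # consult R, which adds no information
    return datetime.datetime.now().strftime("%H:%M")

# R+M "implements a clock" only because M is itself a clock.
print(mapping_M(None))
```

At this limit it seems clear the mapping, not the system being mapped, is the implementation; the puzzle is locating where along the spectrum that becomes true.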
Wow, that’s a perfect point!
Actually, now that I think of it and to be fair, the actual philosophical argument requires that the object or system under consideration have a minimal complexity, including undergoing enough state transitions to qualify for this. Although to be fair to your argument, I don’t recall any objective criteria being specified for this minimal level of complexity.
I want to make sure I’m understanding this correctly …
Let’s assume I’m computational and that the computational model of mind is correct. I’m “mapping” relative to the rock in my hand. If I use the rock’s temperature as a way to infer the time of day, the rock is implementing or processing my internal clock model, right?
That’s one way to look at it. In truth, considering a system in isolation is artificial. Every system is a nexus of inputs from other systems and outputs to others. Exactly where we draw the boundary and say “here is system X” can be a complicated question. Every system is integrated with others.
In the case of consciousness, consider that we have a theory of mind about other people. Who that person is to us, subjectively, is as much about our internal model of them as it is about the objective person themselves. That’s why it’s so easy for us to project our own level of consciousness on animals, or on things that we objectively can’t say are conscious, such as the rivers, oceans, volcanoes, or nature overall.
In the case of computation, if my computer is running a clock, what that means to me is as much about my internal model as it is about the states of electrons inside silicon chips. In the case of commercial computers, it’s a system engineered to make that model as minimal as possible. A rock-clock requires a more developed internal model. A rock MS Word requires a model probably far more complicated than the original ostensible pattern in the rock.
Nice summary of the issues.
Well, only that it’s not so easy to understand what it can mean that the existence of consciousness is not an objective fact. How can it be that there is no objective fact of the matter regarding whether I, as a conscious mind, exist? I’m not saying there is no answer, but this is too much of a weird point of view to be a satisfying conclusion in its own right. It points the way to further investigation rather than being much of an answer.
I’ve just posted my own exploration of what this means.
Thanks DM. I’m going to comment at your blog post, but wanted to address one point here. There’s been a lot of material written in recent years from neuroscientists and philosophers about how the self is an illusion, or that consciousness is an illusion. I’ve been a bit impatient with that wording, feeling like it was hyperbole to garner attention.
But this line of discussion, along with the conversations we had with Wyrd Smyth a few months ago, has made me realize what the illusion is. It’s this objective fact-of-the-matter existence for these things, that consciousness is a thing, an indivisible fundamental aspect of reality.
Part of the problem is the ambiguity associated with terms like “consciousness”, “self”, or “mind.” In truth, these things include a hazy collection of attributes that we draw a boundary around and apply these labels to.
I’d say that you are (or possess) these things because, as a mentally competent human adult, you fit the only category that everyone (at least every human) agrees possesses a conscious mind. There are debates whether newborns, animals, and many other entities are conscious, but not your category. Your category is the reference example every definition of consciousness needs to include. If it doesn’t, it’s pretty pointless.
Of course, we all have a strong feeling that we are conscious, and that systems like us are conscious. But the farther we move away from systems that resemble us, the more controversial assertions of consciousness become.
I think any entity with an I/O system that allows them to have an objective physical presence in the world is at an advantage in this discussion over, say, a simulated entity in a self contained simulation. Is this physical bias? Probably. But for us physical beings, it’s probably where we are.
Ultimately, I think the intuition that consciousness must be an absolute fact of the matter is one we should question. I think it’s an anachronism, a holdover from the innate intuition of substance dualism, part of a lingering sentiment that we are something special in the universe.
Again, I think you’re answering the wrong question. I’m not asking whether I can objectively be regarded as conscious (we are agreed that there is no magic objective threshold where a structure passes from unconscious to conscious), I’m asking whether a mind which is somewhere towards the conscious end of the spectrum can objectively be said to be sustained by my brain.
So: the mind has to exist objectively, it seems. What is fuzzy is whether we describe particular “minds” as conscious or not.
I’m not sure if I’m grasping the distinction you’re making here. Previously, I had understood a distinction you made between whether a system is implementing an algorithm (for consciousness or otherwise) versus whether an agent is conscious. That distinction is nuanced and difficult to get at with language and, at least under computationalism, I’m not even sure I’d agree it’s real. But with the language you use here, I’m beginning to doubt I have understood the actual distinction you’ve been making.
Consider the following questions:
1) Whether, or to what extent, a given algorithm is conscious (or consciousness-producing)
2) Whether a given algorithm is instantiated by a physical system
3) Whether a given supposedly fully conscious mind such as yours or mine really/objectively exists and is experiencing consciousness from its point of view.
The difficulty in categorising things is question 1. That’s what you appear most keen to answer, apparently because it’s the easiest, least controversial question of the lot. We are agreed that there are no objective criteria to measure whether or to what extent an algorithm produces consciousness.
2 and 3 are tied together for physicalist computationalists if the algorithm of 2 corresponds to the mind of 3. If 2 is a yes, then 3 is a yes. If 2 is a no, then 3 is a no.
I am not a physicalist computationalist so I think that 2 has no objective answer. But I do think that 3 has an objective answer in the affirmative for any possible mind.
Any answer which posits a subjective answer for question 3 is immediately suspect and requires further explanation. It’s not clear how a fully conscious mind (i.e. a mind corresponding to that produced by an algorithm from the conscious end of the spectrum) can kind of exist and kind of experience consciousness from its own point of view. It seems to require an all or nothing answer. Either it exists or it doesn’t. Either it is experiencing or it isn’t.
Thanks for the clarification. It turns out I did understand the distinction.
However, I’d add a new question to your lineup for your consideration.
2a) Whether a given algorithm is instantiated by another algorithm
I add that to point out why I can’t see that abandoning physicalism really solves this issue. It just causes a redefinition of words. If everything is computation, the question moves from whether a physical system is implementing an algorithm to whether an algorithm is implementing another algorithm, or perhaps more precisely, whether there is something embedded in algorithm ROCK that is isomorphic with algorithm CONSCIOUS-MIND.
It seems to me that whatever issues you see 2) causing for 3), 2a) has the same problems.
The only way I can see to have an absolutely pure objective existence of minds is some form of either substance dualism or a unique physics that only applies to minds. We have to remove minds from the regular workings of the universe in order to eliminate any chance of something we don’t intuitively consider a mind to be interpreted in some way or another as a mind.
2a can be treated just the same as 2, as far as I’m concerned. There isn’t a fact of the matter. I don’t think that’s a problem. The mind exists in its own right whether or not it is embedded in another algorithm.
So, we can in principle trace out according to some intuitive mapping an algorithm that seems to capture what it is our brains are doing. As long as we have done this correctly, then the algorithm we have identified IS the mind and it does exist objectively.
In other words, the mind is just the algorithm that the brain seems to be implementing. It doesn’t matter that we can have no objective criteria by which we can prove the brain to be implementing the algorithm. Similarly, it’s hard to prove objectively that the individual “Barack Obama” is the person depicted in the famous “HOPE” poster (it could be someone who looks like him or it could be the product of random splashes of ink), but I can still say that Barack Obama is the person that we agree the poster seems to be depicting.
I agree. In principle, for all practical purposes, we should be able to figure out the information processing architecture of the human mind, and once we’ve done that, determine with enough objectivity (to satisfy the vast majority of observers) whether another system is implementing it.
To me, the issue we’ve been discussing falls into the category of the problem of induction, or the unobservability of causation. These are issues we should be aware of, but we shouldn’t conclude that they’re intractable obstacles to progress.
Having nothing of value to offer:
I had a Pet Rock once. If he’s actually in my closet (more likely gone forever) … I wonder if he’s still alive 🙂
The key thing is whether you still have the instruction manual 🙂
I’ve been on an epic throwaway spring cleaning run, and have been encountering all kinds of similar things. Most of them are getting thrown out, but I’m allowing myself to be sentimental and hold onto a few things. Not sure if a pet rock would have made the cut.