Philip Ball has an article up at Aeon: Life with purpose, which resonates in theme with the one a few weeks ago by Michael Levin and Dan Dennett on purpose in nature. Like Levin and Dennett, Ball argues that we shouldn’t be shy about discussing purpose in biology, or feel obliged to put quotes around words indicating goals, such as discussing a plant’s “desire” to reach sunlight.
But Ball’s subject is agency, both in biological and technological systems. He starts off by defining agency as:
agency, the ability of living entities to alter their environment (and themselves) with purpose to suit an agenda
He goes on to elaborate:
Agency stems from two ingredients: first, an ability to produce different responses to identical (or equivalent) stimuli, and second, to select between them in a goal-directed way. Neither of these capacities is unique to humans, nor to brains in general.
That last sentence is a major theme of the piece. Agency does not require the full cognitive repertoire that humans and many other animals possess, such as modeling the environment and simulating possible courses of action, although human agency generally does involve those things. But a system can have agency with no awareness of its agency. So we can see agency in unicellular organisms, plants, and technological systems.
Ball notes that, in simulations, relatively simple optimization rules can lead to fairly complex behavior. The necessary ingredient appears to be having a goal, or goals. Having some form of memory helps in ensuring that a sequence of actions leads toward the goal, although this doesn’t have to be anything as sophisticated as what happens in a brain.
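To make that more concrete, here is a minimal toy sketch of the kind of thing I have in mind. It’s my own illustration, not anything from Ball’s article: the one-dimensional “environment,” the GOAL constant, and the candidate_moves and run_agent functions are all invented for the example. The point is just that both of Ball’s ingredients show up: the agent produces different possible responses to the same stimulus, selects among them in a goal-directed way, and uses a small memory so its sequence of moves keeps heading toward the goal.

```python
import random

# Toy sketch, not Ball's simulation: a 1-D "organism" trying to reach a
# light source at position GOAL. It has (1) varied responses to the same
# stimulus (several candidate moves, one partly random) and (2) goal-directed
# selection among them, plus a crude memory of recent positions.

GOAL = 10

def candidate_moves(position):
    """Different possible responses to the identical stimulus (the current position)."""
    return [position + 1, position - 1, position + random.choice([-2, 2])]

def run_agent(start=0, steps=30, memory_size=3):
    position = start
    memory = []  # short memory of recent positions, to avoid undoing progress
    for _ in range(steps):
        options = candidate_moves(position)
        # Goal-directed selection: prefer moves that haven't been visited
        # recently, then moves that close the distance to GOAL.
        options.sort(key=lambda p: (p in memory, abs(GOAL - p)))
        position = options[0]
        memory = (memory + [position])[-memory_size:]
        if position == GOAL:
            break
    return position

if __name__ == "__main__":
    print("final position:", run_agent())
```

Nothing in this loop models the world or simulates possible futures; the apparent purposefulness comes entirely from variation, goal-directed selection, and a little memory, which is roughly how cheap agency can be on this view.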
I like this concept of agency, often referred to as “autonomy” in other venues. It gives us a way to discuss what happens in relatively simple systems without having to get into arguments about whether they’re conscious in any fashion. Indeed, the nice thing about agency as a concept is that it doesn’t have the c-word’s historical entanglement with Cartesian dualism and other baggage. As a result, no one seems to object to seeing different levels of it, reducing it, or the idea that some machines have it.
My own view is that consciousness is a particularly advanced form of agency, one which adds model-based decision making into the mix, but if someone disagrees, we could still discuss things in terms of agency and simply eschew c-word talk. We can agree that a self-driving car or other autonomous robot has as much agency as many simple animals.
I think this also gives us a way to maybe split the baby when talking about purpose. For purpose in simple systems, systems that have their purpose from evolution or engineering, we could talk about agency-purpose as distinct from cognitive-purpose. So a fruit plant’s purpose in having sweet fruit is to have animals eat it and then defecate its seeds across a wide area. In this case, the purpose would be an agency-purpose. Of course, we’ve also always had phrases like “evolutionary purpose” or “adaptive purpose” to provide the same idea, but it does seem to sharpen the distinction.
What do you think? Is agency, as distinct from consciousness, a useful concept? Does it successfully dodge some of the conceptual baggage of consciousness? And how does the concept of intelligence factor into this?
Yeah, it sounds roughly equivalent to my third level of consciousness: “intent.” Life can act with intent before it gets to prediction or self awareness. I think we see eye to eye on this. But agency might have a little more baggage elsewhere about what an agent is and whether free will is involved.
Given how simple agency can be, in my own hierarchy, I’m not sure where it starts. If we accept that there can be stochastic components in reflexive reactions, then it probably starts at that level.
Ball actually discusses free will toward the end, linking it to actual consciousness. Prior to that, he discusses “free choice” as an aspect of agency. I think the idea is that free will requires being able to imagine the outcome of choices, while free choice is simply having the capability to make decisions. I thought he could have been a little clearer on that point.
Re “agency, the ability of living entities to alter their environment (and themselves) with purpose to suit an agenda” I would accept this … if the organism itself states or acts out its purpose and states its agenda. Having these projected from the outside is the worst kind of wishful thinking.
As far as simple animals go, their purpose seems to be ‘Stayin’ alive, stayin’ alive and mate, mate, mate. And this definition of agency basically says that to have it, these “motivations” cannot be programmed in.
And “Ball notes that, in simulations, relatively simple optimization rules can lead to fairly complex behavior.” And the programmer of those rules? Evolution, a mindless process. So …
“Having these projected from the outside is the worst kind of wishful thinking.”
I’d agree if we’re thinking that these systems are imagining their purpose in the same way we would. But the main argument is that these systems can have purpose, a purpose we humans can recognize even though the systems themselves, or the evolutionary processes that produced them, cannot. Ultimately all we can do is try to clarify these two meanings of “purpose”, which is what I tried to get at in the post.
“Modeling the environment and simulating possible courses of action” is roughly what I would have said if you had asked me to define “agency” before reading about Ball’s essay. But if Ball succeeds in getting biologists, or cyberneticists, or whoever, to use the word his way, I’m fine with that.
I know what you mean. I have to admit that my conception of agency was hazy prior to this article, and I can’t say it’s particularly sharp now. But if it gives us a way to discuss systems with an agenda, without having to drag in all the consciousness baggage, I think it has value.
I’m gratified that these authors (Ball, Dennett, Levin) are working towards “naturalizing” purpose. To your list of phrases describing the lower-level version, namely “evolutionary” and “adaptive”, I would add teleonomic (as opposed to teleologic) and archeo-purpose (via Dawkins).
I was a little disappointed that Ball’s definition of “agent” seems to require the use of memory. I mean, everyone has a right to their own definition, but that just means he has to come up with another word for a system that acts to achieve a purpose but without using memory. An example that comes to mind is the Watt governor. Proto-agent?
Finally, I’m looking forward to the time that people realize that Aristotle was correct regarding his 4 causes, in that “final” cause can be naturalized.
*
This has been one of your convictions for a while. Biologists may still be gun shy after the creationist and intelligent design wars. But we’ll see what happens.
I thought about the teleonomy vs teleology distinction, but “teleonomy” generally means the appearance of purpose, rather than real purpose. It actually fits closer to the current biological stance that there aren’t real purposes in something like a plant. But it is true that teleonomy does refer to many of the same things that the non-cognitive purpose terms would refer to.
Given that Aristotle’s four cause paradigm was meant to apply to everything in nature in a somewhat theological framework, not just biological systems, I wouldn’t hold my breath on it making a comeback. What is the final cause of a hurricane? COVID-19? Or a supernova?
2nd law of thermodynamics, natural selection, 2nd law, in that order.
*
Thanks, but it’s hard to see those as final causes in the Aristotelian tradition, in the sense of an end purpose, as opposed to just a final result. And you could say “entropy” for anything, so it’s not clear how productive it is.
The translation as “final” cause is probably unfortunate, because the final cause is not so much an “end purpose” as an explanation of why the efficient cause came about. I would be interested to hear what the actual translation from the Greek was. I kinda assumed it was telos, which I understood to be “distant”, as in telescope.
So again, final cause is what causes the efficient cause to come into being. The final cause is necessarily prior to the efficient cause. Any purpose comes not from the efficient cause, but from that which generated the efficient cause.
Consider a screwdriver. What is the purpose of a screwdriver? The answer depends on how the screwdriver is being used. If a person is using it to drive in a screw, that purpose is generated by the person who pushes it into the slot of the screw and applies torque. If it’s used to scratch a person’s back, again it’s that person’s purpose. If it’s used to provide a source of electrons to oxygen atoms via rusting, that’s the universe’s purpose of increasing entropy.
Admittedly I was being a bit flippant with my answer, but I think there is some truth there. In general, a naturalized purpose applies to a system which tends to move the environment in a particular direction. So a rock rolling down a hill seems to have a purpose, but it’s not the rock’s purpose, it’s the system’s purpose, and in this case the system is pretty much “the universe”. The second law of thermodynamics is the main driver that tends to move the environment in a particular direction – more entropy.
But within the universe you will find some subsystems which tend to move the environment in a specific direction which seems counter to entropy. Vortexes and Bénard cells and such. But it turns out that these systems actually increase global entropy faster by decreasing local entropy. So vortexes have a purpose, which is the same purpose as Bénard cells, and which is probably the same purpose as life/natural selection. My main source of confirmation of the latter part of that statement comes from Jeremy England’s work. If you need another book to read, I highly recommend his latest: “Every Life Is on Fire”. It’s an easy read, and the first I’ve been able to read all the way through in a long while.
So once you get life, you not only have a system which moves the environment in a given direction (toward a non-equilibrium steady state, i.e., homeostasis) for a purpose (increased global entropy). You get a system that can create new mechanisms (via mutation/selection), and these mechanisms will have the purpose of maintaining homeostasis. Such mechanisms might include gathering and storing information about the environment. Some of these new mechanisms, furthermore, will be able to generate yet newer mechanisms, such as action plans based on acquired information. These mechanisms may generate further mechanisms like those which generate the actual actions.
The point is, there’s a hierarchy of mechanisms creating mechanisms, with each mechanism inheriting purpose (final cause) from the next higher (lower?) level in the hierarchy. At one end of the hierarchy we talk about intentions, goals, teleology. At the other we use the other terms. But it’s the same thing happening throughout the hierarchy.
*
[whew]
Thanks for your extended thoughts. Looking at the Wikipedia articles, the translation for “telos” seems to be “end”, “aim”, “goal”. The translation for “ology” is “reason” or “explanation”. So we get something implying a final ultimate purpose.
The issue, I think, is that there’s never just one final answer for this question. Your screwdriver example exemplifies this. So it really is a matter of philosophy rather than science, that is, something we can assume for one reason or another, but which doesn’t amount to reliable knowledge that’s true whether or not we believe it.
Based on the science history books I’ve read, that’s the main reason people like Galileo didn’t bother with it, and Francis Bacon argued against it. It was probably a stance Galileo learned from his engineering training. Renaissance engineers weren’t interested in final or ultimate explanations. They just needed to understand material and efficient causes so they could do their job. Abandoning ultimate causes or explanations was one of the steps in transforming natural philosophy into modern science.
It is true that a purpose for a particular mechanism is an easy and convenient narrative to slip into. But it requires caution. It can lock us into a particular mode of thinking. Intelligent design proponents often argue that certain traits are irreducibly complex and therefore could not have evolved through random mutation and natural selection. But this overlooks that the adaptive value of a trait can shift throughout its evolutionary history. The earliest feathers probably served an insulating function similar to fur, only evolving into their flight role later.
So I think we can talk about evolutionary purpose or adaptive value in biology, but I don’t think it’s the same thing as the old final purpose of Aristotle.
All that said, you’re totally free to engage in the philosophy of teleology. But I think it will always be an interpretation rather than objectively established fact.
I think the question is not so much whether it’s an objectively established fact as much as whether it’s a useful explanation. Newton’s laws of gravity were not objectively established facts, but they were useful explanations.
So to get back to the question in the OP, I think agency is a useful concept, especially because it is one of the concepts which are necessary to understand consciousness (others including mutual information, computation, and representation). I think agency is best described as any system with one or more goals. I think explaining how systems are associated with goals necessarily requires understanding how those systems came to be in their relevant states. (I note in passing that explanations in this form perfectly correlate with Aristotle’s 4 causes.)
*
[taking questions at this time]
[hmmm, didn’t mean to compare myself to Newton]
I view Newton’s laws as a reliable model, just not the most reliable one since general relativity was developed. In that sense, I do see them as objective.
I guess we’ll just have to agree to disagree about Aristotle’s framework. I do agree that agency is useful for understanding consciousness.
If you define agency this way, then what is to say that my big toe doesn’t have agency? It alters its environment to suit its purpose – being a big toe. An electron has agency. It repels positive forces clearing a path for its orbit of the nucleus.
I don’t see the difference between living and non-living entities under Ball’s definition. Almost anything we describe with language will appear to have agency.
Agency to me is a product of the human mind: Linguistic consciousness.
I’m not sure those examples fit Ball’s definition. He requires action selection in service of an independent agenda. A big toe’s agenda seems subsidiary to the overall organism’s agenda. And an electron is going to do what it’s going to do. Its actions seem too rigid to meet his definition. (Some may inject quantum randomness into this, but quantum randomness still follows rigid rules.)
You could define agency that way, as a product of the human mind, but it seems to largely make it synonymous with your definition of consciousness. It still leaves us looking for a label to describe activity that seems more purposeful than a rock falling, but without an awareness of purpose.