Philip Ball has an article up at Aeon: Life with purpose, which resonates in theme with the one a few weeks ago by Michael Levin and Dan Dennett on purpose in nature. Like Levin and Dennett, Ball argues that we shouldn’t be shy about discussing purpose in biology, or feel obliged to put scare quotes around goal words, as when discussing a plant’s “desire” to reach sunlight.
But Ball’s subject is agency, both in biological and technological systems. He starts off by defining agency as:
agency, the ability of living entities to alter their environment (and themselves) with purpose to suit an agenda
He goes on to elaborate:
Agency stems from two ingredients: first, an ability to produce different responses to identical (or equivalent) stimuli, and second, to select between them in a goal-directed way. Neither of these capacities is unique to humans, nor to brains in general.
That last sentence is a major theme of the piece. Agency does not require the full cognitive repertoire that humans and many other animals possess, such as modeling the environment and simulating possible courses of action, although human agency generally does involve those things. But a system can have agency without any awareness of that agency. So we can see agency in unicellular organisms, plants, and technological systems.
Ball notes that, in simulations, relatively simple optimization rules can lead to fairly complex behavior. The necessary ingredient appears to be having a goal, or goals. Having some form of memory helps in ensuring that a sequence of actions leads toward the goal, although this doesn’t have to be anything as sophisticated as what happens in a brain.
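To make that concrete, here’s a minimal sketch (mine, not Ball’s) of how a simple rule, a goal, and a single step of memory can produce goal-directed behavior. It’s loosely modeled on bacterial run-and-tumble chemotaxis; the nutrient field and the tumble rule are illustrative assumptions:

```python
import random

def nutrient(x):
    """Concentration field the agent 'wants' to climb; it peaks at x = 0."""
    return -abs(x)

def seek(steps=200, start=50.0, seed=1):
    rng = random.Random(seed)
    x = start
    direction = rng.choice([-1, 1])
    previous = nutrient(x)          # one-step memory of the last reading
    for _ in range(steps):
        x += direction              # "run": keep moving the same way
        current = nutrient(x)
        # Goal-directed selection: if things got worse, "tumble" to a
        # new random direction; otherwise keep running.
        if current < previous:
            direction = rng.choice([-1, 1])
        previous = current
    return x

print(seek())  # typically ends near the peak at x = 0
```

Note that the same stimulus (a given concentration reading) can trigger either a run or a tumble depending on the remembered previous reading. That’s Ball’s two ingredients in miniature: different responses to equivalent stimuli, selected between in a goal-directed way.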
I like this concept of agency, often referred to as “autonomy” in other venues. It gives us a way to discuss what happens in relatively simple systems without having to get into arguments about whether they’re conscious in any fashion. Indeed, the nice thing about agency as a concept is that it doesn’t have the c-word’s historical entanglement with Cartesian dualism and other baggage. As a result, no one seems to object to seeing different levels of it, to reducing it, or to the idea that some machines have it.
My own view is that consciousness is a particularly advanced form of agency, one which adds model-based decision making into the mix. But if someone disagrees, we could still discuss things in terms of agency and simply eschew c-word talk. We can agree that a self-driving car or other autonomous robot has as much agency as many simple animals.
I think this also gives us a way to maybe split the baby when talking about purpose. For simple systems, ones whose purpose comes from evolution or engineering, we could talk about agency-purpose as distinct from cognitive-purpose. So a plant’s purpose in producing sweet fruit is to have animals eat the fruit and then defecate its seeds across a wide area; that purpose would be an agency-purpose. Of course, we’ve always had phrases like “evolutionary purpose” or “adaptive purpose” to convey the same idea, but the agency-purpose framing does seem to sharpen the distinction.
What do you think? Is agency, as distinct from consciousness, a useful concept? Does it successfully dodge some of the conceptual baggage of consciousness? And how does the concept of intelligence factor into this?