The Edge question for this year was, “What do you think about machines that think?” There are a lot of good responses, and some predictably inane ones. Daniel Dennett gives a good write-up on why the Singularity is overblown, and points out something I’ve said myself: the real danger isn’t artificial intelligence, but artificial stupidity.
Steven Pinker gives another excellent response, but I think the best one was given by theoretical physicist Sean Carroll. I had some serious issues last year with Carroll’s response to the 2014 question of “What scientific idea is ready for retirement?”, where Carroll advocated ditching falsifiability. My post taking issue with his response has been one of the most heavily visited ones that I’ve done.
Since I’m a fan of Carroll’s, I was pleased this year to see that, not only am I (almost) completely in agreement with his response, I find it the best of the ones I’ve read so far: We Are All Machines That Think.
“Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.”
(My only real beef with Carroll’s response is the verbiage immediately preceding this quote where he asserts that science has a complete understanding of the physics involved in everyday life, an assertion I find a bit hubristic, since we often don’t know what it is that we don’t know, the “unknown unknowns” in Rumsfeldian terminology.)
Although I agree with Carroll’s main contention, that we are evolved machines, I can see two objections people might make to the “thinking machines” concept, aside from the semantic quibble that a “machine” is, by definition, something humans build.
The first is to assert that there is a non-physical aspect to humans that machines will never be able to duplicate. I’ve already done a post on why I think the mind is the brain. The TL;DR is that the well-known effects of brain damage and mind-altering drugs, which can affect not only our physical coordination, but our memories, inclinations, and our most profound moral and intellectual decisions, leave little room for a non-physical aspect of mind. As I admit in that earlier post, there is still logical space for a non-physical aspect to the mind, but it is rapidly shrinking as neuroscience advances, and it already excludes many things that make us, us.
The second is to admit that the mind is the brain, but assert that the brain’s mechanisms are too complicated to ever be reproduced. Perhaps mental processing happens at the base layer of reality, say the quantum layer, or perhaps some unknown lower layer. While conceivable, there’s no real reason to think so at this point, except to find a way to cling to human exceptionalism. While we have reasons to suspect that the brain uses quantum effects, we have no good reason to suppose that it uses them in any way outside how quantum physics is standardly understood to work.
My personal view is that the “secret sauce” of mental processing probably happens at the level of neurons and synapses, with perhaps nuances coming from the molecular level, which might indeed be difficult to reproduce technologically, but far from impossible. In any case, this is only a problem if someone is attempting to reproduce the exact way a human mind works, not if they are attempting to build something else with the same capabilities and capacities.
Will we ever have engineered machines that think? Depending on how you define “thinking”, we already do. But even if you use a definition that includes consciousness or some other mental capability that machines don’t currently have, I don’t see any fundamental aspect of reality that would prevent it. (Unlike, say, faster-than-light travel, which our understanding of physics currently makes unlikely.) We might eventually discover some such fundamental limitation, but until we do, saying it’s impossible strikes me as overly pessimistic (or, depending on your point of view, unrealistically optimistic).
The Edge question also mentions the “dangers” of AI that people periodically express anxiety over. I’ve done numerous posts on this. All I’ll say here is that we, as evolved survival machines, fear creating a superior survival machine, but most AIs will have primary purposes other than survival, such as navigation, analysis, or construction. They’re as unlikely to accidentally become survival machines as my Sony PlayStation is to accidentally become a Garmin GPS.
What do you think of the Edge question? Or the responses?