Eric Siegel at Big Think, in a new “Dr. Data Show” on the site, explains “Why A.I. is a big fat lie”:
1) Unlike AI, machine learning’s totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, “Hooha!”. However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which there exist many labeled or historical examples in the data from which the computer can learn. This inherently limits machine learning to only a very particular subset of what humans can do – plus also a limited range of things humans can’t do.
2) AI is BS. And for the record, this naysayer taught the Columbia University graduate-level “Artificial Intelligence” course, as well as other related courses there.
AI is nothing but a brand. A powerful brand, but an empty promise. The concept of “intelligence” is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.
The term artificial intelligence has no place in science or engineering. “AI” is valid only for philosophy and science fiction – and, by the way, I totally love the exploration of AI in those areas.
3) AI isn’t gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will uprise on their own volition and eradicate humanity holds no merit.
He goes into detail in a long post at Big Think, or you can watch him discuss it in the video.
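His first point is easy to make concrete: supervised learning is just fitting a model to labeled historical examples, and its reach ends where the labels end. Here’s a minimal sketch of that idea, assuming scikit-learn is available; the churn scenario and every number in it are made up for illustration:

```python
# A minimal sketch of supervised machine learning (assumes scikit-learn).
# The labels are the whole ballgame: the model can only learn to predict
# outcomes for which we already have many labeled historical examples.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [emails_sent, links_clicked] per user,
# with a label saying whether that user later churned. Entirely made up.
X = [
    [120, 30],
    [90, 25],
    [10, 2],
    [15, 1],
    [200, 60],
    [5, 0],
]
y = [0, 0, 1, 1, 0, 1]  # 1 = churned, 0 = stayed

model = LogisticRegression()
model.fit(X, y)  # "learning" here means fitting a boundary to labeled examples

# The model generalizes to new points of the same kind it was trained on,
# but it has no notion of anything outside this narrow, labeled microcosm.
print(model.predict([[12, 1]]))  # likely [1]: resembles the past churners
```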
His presentation is over the top, but I have to agree with much of what he says. AI is hopelessly overhyped. From singularities to Frankensteinian worries that it will turn on us, it’s become a new mythology, supplying updated versions of the old deities and demons: superhuman powers that rule the world, promising to solve our problems or threatening our destruction, but now with a thin veneer of technological sophistication. (Not that I don’t enjoy science fiction with these elements too.)
That’s not to say that there isn’t some danger from these technologies, but the danger lies more in how humans might use them than in the technologies in and of themselves. For instance, it’s not hard to imagine systems that closely and tirelessly monitor our activity, using machine learning to figure out when we’re doing things a government or employer might not like. Or an intelligent bomb smart enough to wait until it recognizes an enemy close by before exploding.
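A toy version of that monitoring scenario shows how little machinery it would take. This is a hypothetical sketch, again assuming scikit-learn, with fabricated activity logs standing in for real surveillance data:

```python
# Hypothetical sketch: a supervised classifier trained to flag activity
# an employer dislikes. The logs and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up labeled activity logs: 1 = "flag for review", 0 = "ignore".
logs = [
    "updated quarterly sales report",
    "emailed client about contract renewal",
    "browsed job listings during work hours",
    "messaged coworker about unionizing",
]
labels = [0, 0, 1, 1]

monitor = make_pipeline(TfidfVectorizer(), MultinomialNB())
monitor.fit(logs, labels)

# The tirelessness is the point: this can run on every log line, forever.
print(monitor.predict(["browsed job listings again"]))  # likely [1]
```

Nothing here is intelligent in any meaningful sense; it’s the same narrow pattern matching as before, just pointed at us.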
And I think he has a point with the overall term “artificial intelligence” or “AI”. It’s amorphous, essentially meaning computer systems that are smarter than normal, lumping everything from heuristic systems to Skynet under one label. We sometimes talk about the “AI winter”, the period when AI research fell on hard times, one it eventually pulled out of. It could be argued that the endeavor to build a mind never really escaped that winter; we just lumped newer, more focused efforts under the same name. (Not that I expect the term to die anytime soon.)
To be clear, I do think it will eventually be possible to build an engineered mind. (I wouldn’t use the adjective “artificial” because if we succeed, it will be a mind, not an artificial one.) Minds exist in nature with modest energy requirements, so saying it’s impossible to build one technologically is essentially asserting substance dualism, the position that there is something about the mind above and beyond physics.
But we’re unlikely to succeed with it until we understand animal and human minds much better than we currently do. We’re about as likely to create one accidentally as a web developer is to accidentally create an aerospace navigation system.