Someone asked for my thoughts on an argument by Sean Dorrance Kelly at MIT Technology Review that AI (artificial intelligence) cannot be creative, that creativity will always be a human endeavor. Kelly’s main contention appears to be that creativity lies in the eye of the beholder and that humans are unlikely to recognize AI accomplishments as creative.
Now, I think it’s true that AI suffers from a major disadvantage when it comes to artistic creativity. Art’s value lies in the emotions it engenders in its audience. Generating those emotions often requires an insight from the artist into the human condition, an insight that draws heavily on our shared experiences as human beings. This is one reason why young artists often struggle: their experience is still too limited to have produced those insights, or at least too limited to impress older consumers of their art.
Of course, an AI has none of these experiences, nor the human drives that make such experience meaningful in the way it is to us. AI may be able to exploit correlations between elements of existing works and those works’ popularity, but it is simply not equipped to find a genuine insight into the human condition, at least not for a long time. In that sense, I agree with Kelly, although his use of the word “always” has an absolutist ring to it that I can’t endorse.
But it’s in the realm of games and mathematics that I think Kelly oversells his thesis. These are areas where insights into the human condition are not necessarily an advantage, although in the case of games they can be.
Much has been written about the achievements of deep-learning systems that are now the best Go players in the world. AlphaGo and its variants have strong claims to having created a whole new way of playing the game. They have taught human experts that opening moves long thought to be ill-conceived can lead to victory. The program plays in a style that experts describe as strange and alien. “They’re how I imagine games from far in the future,” Shi Yue, a top Go player, said of AlphaGo’s play. The algorithm seems to be genuinely creative.
In some important sense it is. Game-playing, though, is different from composing music or writing a novel: in games there is an objective measure of success. We know we have something to learn from AlphaGo because we see it win.
I can’t say I understand this point. Because AlphaGo’s success is objective, we can’t count what it did to achieve that success as creative? The fact is that AlphaGo found strategies humans had missed. In some ways, this reminds me of the way evolution often finds solutions to problems that, in retrospect, look awfully creative.
In the realm of mathematics, Kelly asserts that, so far, mathematical proofs by AI have not been particularly creative. Fair enough, although by his own standard that’s a subjective judgment. But he then focuses on proofs an AI might come up with that humans couldn’t understand, noting that a proof isn’t a proof if you can’t convince a community of mathematicians that it’s correct.
Kelly doesn’t seem to consider the possibility that an AI might develop a proof incomprehensible to humans that nevertheless convinces a community of other AIs, which could demonstrate its correctness by using it to solve problems. Or the possibility that today’s “not particularly creative” AIs might advance considerably in years to come and produce groundbreaking proofs that human mathematicians can understand and appreciate. Mathematics is one area where I could see AI eventually having insights a human might never have.
But I think the biggest weakness in Kelly’s thesis lies at its heart: his admission that creativity, like beauty, lies in the eye of the beholder, that it exists only subjectively. In other words, it’s culturally specific, and our conception of what counts as creative might change in the future, particularly as we become more accustomed to intelligent machines.
This leads him to this line of reasoning:
This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.
In other words, machines can’t be creative because we humans won’t recognize them as such, and if humans do start to consider them creative, then we will have denigrated ourselves. This is just a rationalized bias toward human exceptionalism, a self-reinforcing loop that closes off any possibility of considering counter-evidence.
So, in sum, will AI ever be creative? I think that’s a meaningless question (similar to the question of whether it will ever be conscious). The real question is whether we will ever regard AIs as creative. The answer is that we already do in some contexts (see the AlphaGo quote above), but in others, notably artistic achievement, it may be a long time before we do. Asserting that we never will seems more like a statement of faith than a reasoned conclusion. Who knows what AIs in the 22nd century will be capable of?
What do you think? Is creativity something only humans are capable of? Is there any fact of the matter on this question?