The ASSC (Association for the Scientific Study of Consciousness) held its annual conference this week, which culminated in a debate on whether AI can be conscious.
Note: the debate doesn’t actually start until the 28:30 mark of the video. From there it runs about 99 minutes.
I was delighted to see the discussion immediately focus on the importance of definitions, since I think the question is otherwise meaningless. In my humble and totally unbiased opinion, the first speaker, Blake Richards, hit it out of the park with his answer that it depends on which definition of consciousness we’re using, and with his point about the problems with folk definitions such as subjective experience, phenomenality, etc.
In fact, I would go on to say that just about all of Richards’ positions in this discussion struck me as right. The only place I think he might have misplaced faith is in our ability to come together on one definition of consciousness that is scientifically measurable. (And to be fair, it was more an aspiration than a faith.) I strongly suspect that we’ll always have to qualify which specific version we’re talking about (e.g. access consciousness, exteroceptive consciousness, etc.). But overall I found his hardcore functionalism refreshing.
It’s inevitable that this type of conversation turns toward ethics. Indeed, I think when it comes to folk conceptions of consciousness, the questions are inextricably linked. Arguably what is conscious is what is a subject of moral worth, and what is a subject of moral worth is conscious.
I got a real kick out of Hakwan Lau’s personality. As a reminder, he was one of the authors of the paper I shared last week on empirical vs fundamental IIT.
I was also happy to see all the participants reject the zombie concept in the later part of the discussion.
Generally speaking, this was an intelligent, nuanced, and fairly well-grounded discussion of the possibilities.
As I noted above, my own view is similar to Richards’. If we can design a system that reproduces the functional capabilities of an animal, human or otherwise, that we consider conscious, then by whatever standard we’re using, that system will be conscious. The interesting question to me is what is required to do that.
What do you think? Is AI consciousness possible? Why or why not? And if it is, what would be required to make you conclude there is a consciousness there?