This is an interesting video from Big Think. It features discussion from a variety of thinkers like Richard Dawkins, Peter Singer, Susan Schneider, and others, including a lot of intelligent remarks from someone I wasn't familiar with until now, Joanna Bryson. https://www.youtube.com/watch?v=ETVr_lpIMO0 Consciousness lies in the eye of the beholder. There is no universally agreed … Continue reading Does conscious AI deserve rights?
Recently, there was a debate on Twitter between neuroscientists Hakwan Lau and Victor Lamme, both of whose work I've highlighted here before. Lau is a proponent of higher order theories of consciousness, and Lamme of local recurrent processing theory. The debate began when Lau made a statement about panpsychism, the idea that everything is conscious … Continue reading The issues with biopsychism
In the post on the Chinese room, while concluding that Searle's overall thesis isn't demonstrated, I noted that if he had restricted himself to a more limited assertion, he might have had a point, that the Turing test doesn't guarantee a system actually understands its subject matter. Although the probability of humans being fooled plummets … Continue reading The barrier of meaning
Neuroscientists Kingston Man and Antonio Damasio have a paper out arguing that the way to get artificial intelligence (AI) to the next level is to add in feelings. “Today’s robots lack feelings,” Man and Damasio write in a new paper (subscription required) in Nature Machine Intelligence. “They are not designed to represent the internal state of their operations … Continue reading Add feelings to AI to achieve general intelligence?
This interesting Nature article by Anthony M. Zador came up in my Twitter feed: A critique of pure learning and what artificial neural networks can learn from animal brains: Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks … Continue reading Machine learning and the need for innate foundations
An interesting paper came up in my feeds this weekend: Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. The authors put forth a definition of consciousness, and then criteria to test for it, although they emphasize that these can't be "hard" criteria, just indicators. None of them individually definitely establish … Continue reading Detecting consciousness in animals and machines, inside-out
The ASSC (Association for the Scientific Study of Consciousness) had its annual conference on consciousness this week, which culminated in a debate on whether AI can be conscious. Note: the event doesn't actually start until the 28:30 mark. The remainder is about 99 minutes long. http://www.youtube.com/watch?v=97z0OmpTs-Q I was delighted to see the discussion immediately … Continue reading The ASSC 23 debate on whether artificial intelligence can be conscious
John Basl and Eric Schwitzgebel have a short article at Aeon arguing that AI (artificial intelligence) should enjoy the same protection as animals do for scientific research. They make the point that while AI is a long way off from achieving human level intelligence, it may achieve animal level intelligence, such as the intelligence of … Continue reading Protecting AI welfare?
Daniel Dennett and David Chalmers sat down to "debate" the possibility of superintelligence. I put "debate" in quotes because this was a pretty congenial discussion. (Note: there's a transcript of this video on the Edge site, which might be more time efficient for some than watching a one hour video.) http://www.youtube.com/watch?v=eHN_o6RqrHY Usually for these types of discussions, … Continue reading Is superintelligence possible?
Source: Saturday Morning Breakfast Cereal (Click through for the full-sized version and the red button caption.) My own take on this is that what separates humans from machines is our survival instinct. We intensely desire to survive and procreate. Machines, by and large, don't. At least they won't unless we design them to. If we … Continue reading SMBC on what separates humans from machines