Machine learning and the need for innate foundations
This interesting Nature article by Anthony M. Zador came up in my Twitter feed: A critique of pure learning and what artificial neural networks can learn from animal brains: Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks …
Tag: Artificial intelligence
Detecting consciousness in animals and machines, inside-out
An interesting paper came up in my feeds this weekend: Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. The authors put forth a definition of consciousness, and then criteria to test for it, although they emphasize that these can't be "hard" criteria, just indicators. None of them individually definitively establishes …
The ASSC 23 debate on whether artificial intelligence can be conscious
The ASSC (Association for the Scientific Study of Consciousness) had its annual conference on consciousness this week, which culminated in a debate on whether AI can be conscious. Note: the event doesn't actually start until the 28:30 minute mark. The debate itself runs about 99 minutes. http://www.youtube.com/watch?v=97z0OmpTs-Q I was delighted to see the discussion immediately …
Protecting AI welfare?
John Basl and Eric Schwitzgebel have a short article at Aeon arguing that AI (artificial intelligence) should enjoy the same protections animals receive in scientific research. They make the point that while AI is a long way off from achieving human-level intelligence, it may achieve animal-level intelligence, such as the intelligence of …
Is superintelligence possible?
Daniel Dennett and David Chalmers sat down to "debate" the possibility of superintelligence. I put "debate" in quotes because this was a pretty congenial discussion. (Note: there's a transcript of this video on the Edge site, which might be more time efficient for some than watching a one-hour video.) http://www.youtube.com/watch?v=eHN_o6RqrHY Usually for these types of discussions, …
SMBC on what separates humans from machines
Source: Saturday Morning Breakfast Cereal (Click through for the full-sized version and the red button caption.) My own take on this is that what separates humans from machines is our survival instinct. We intensely desire to survive and procreate. Machines, by and large, don't. At least they won't unless we design them to. If we …
Why we’ll know AI is conscious before it will
At Nautilus, Joel Frohlich posits how we'll know when an AI is conscious. He starts off by accepting David Chalmers' concept of a philosophical zombie, but then makes this statement. But I have a slight problem with Chalmers' zombies. Zombies are supposed to be capable of asking any question about the nature of experience. It's worth …
AI and creativity
Someone asked for my thoughts on an argument by Sean Dorrance Kelly at MIT Technology Review that AI (artificial intelligence) cannot be creative, that creativity will always be a human endeavor. Kelly's main contention appears to be that creativity lies in the eye of the beholder and that humans are unlikely to recognize AI accomplishments …
Is it time to retire the term “artificial intelligence”?
Eric Siegel at Big Think, in a new "Dr. Data Show" on the site, explains Why A.I. is a big fat lie: 1) Unlike AI, machine learning's totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, "Hooha!". However, these advancements are almost entirely limited to supervised machine learning, …
The dangers of artificial companionship
Lux Alptraum at Undark argues against "Our Irrational Fear of Sexbots": When most people envision a world where human partners are abandoned in favor of robots, the robots they picture tend to be reasonably good approximations of flesh-and-blood humans. The sexbots of "Westworld" are effectively just humans who can be programmed and controlled by the …