You've probably heard the narrative before. At some point, we will invent an artificial intelligence that is more intelligent than we are. The superhuman intelligence will then have the capability to either build an improved version of itself, or engineer upgrades that improve its own intelligence. This will set off a process where the system … Continue reading Is the singularity right around the corner?
Could a neuroscientist understand a microprocessor? Is that a relevant question?
A while back, Julia Galef on Rationally Speaking interviewed Eric Jonas, one of the authors of a study that attempted to use neuroscience techniques on a simple computer processor. The field of neuroscience has been collecting more and more data, and developing increasingly advanced technological tools in its race to understand how the brain works. … Continue reading Could a neuroscientist understand a microprocessor? Is that a relevant question?
Adding imagination to AI
As we've discussed in recent posts on consciousness, I think imagination has a crucial role to play in animal consciousness. It's part of a hierarchy I currently use to keep the broad aspects of cognition straight in my mind. Reflexes, instinctive or conditioned responses to stimuli Perception, which increases the scope of what the reflexes … Continue reading Adding imagination to AI
The system components of pain
Peter Hankins at Conscious Entities has a post looking at the morality of consciousness, which is a commentary on a piece at Nautilus by Jim Davies on the same topic. I recommend reading both posts in their entirety, but the overall gist is that which animals or systems are conscious has moral implications, since only conscious … Continue reading The system components of pain
Why fears of an AI apocalypse are misguided
In this Big Think video, Steven Pinker makes a point I've made before, that fear of artificial intelligence comes with a deep misunderstanding about the relationship between intelligence and motivation. Human minds come with survival instincts, programmatic goals hammered out by hundreds of millions of years of evolution. Artificial intelligence isn't going to have those … Continue reading Why fears of an AI apocalypse are misguided
Daniel Wolpert: The real reason for brains
I came across this old TED talk today and decided to share it because it's relevant to the previous post on consciousness and simulations. Daniel Wolpert's talk doesn't address consciousness specifically, only the overall role of the simulations, but it's still a fascinating exploration of what we're doing when our attention is focused on a … Continue reading Daniel Wolpert: The real reason for brains
SMBC: A treatise on machine ethics
via Saturday Morning Breakfast Cereal A better question might be, if a robot has conflicting programming, what will it do? That seems to be where most human moral dilemmas arise, when our instincts are in conflict.
SMBC: Do humans have feelings?
Apropos to the previous post, albeit from a different angle. Hovertext: "This comic was posted in order to increase my social status, acquire wealth, and thus improve the reproductive fitness of my offspring." Click through for full sized version and red button caption. via SMBC I've noted many times before that emotions and other instinctual … Continue reading SMBC: Do humans have feelings?
What would it mean for a machine to suffer?
One of the dividing lines I often hear in discussions about whether we should regard an artificially intelligent machine as a fellow being is, does it have the capacity to suffer? It's an interesting criterion, since it implies that what's important is that there be something there for us to empathize with. But it raises a further question. … Continue reading What would it mean for a machine to suffer?
Let artificial intelligence evolve? Probably fruitless, possibly dangerous.
Michael Chorost has an article at Slate about artificial intelligence and any dangers it might present. I find myself in complete agreement with the early portions of his piece, as he explains why an AI (artificial intelligence) would be unlikely to be dangerous in the way many fear. To value something, an entity has to be able … Continue reading Let artificial intelligence evolve? Probably fruitless, possibly dangerous.
