Discovering the architecture of the mind

I’ve written numerous times here that I tend to think that AGI (artificial general intelligence) and mind uploading are both ultimately possible.  (Possibly centuries in the future, but possible.)  I’ve also noted that we’ll need a working understanding of the mind, of how it works and how it’s structured, before we can do either, but that gaining that understanding is achievable.

I often get push-back on this idea.  One of the arguments I commonly hear is that perhaps the mind doesn’t have a structure.  Maybe it’s just an unstructured mess from which our consciousness arises, or its structure may be so complicated that it’s forever outside of our ability to understand it.

I think this is unlikely, for two reasons, one broad and one narrow.

The first, broader reason is that everything else in biology is organized into recognizable systems and subsystems, with a systematic structure that we’ve been able to discover.  (Think organs in a body or cell machinery.)  Many of these systems are profoundly complex, but they haven’t shown themselves to be undiscoverable.  Arguing that the mind is unstructured is arguing that everything is systematic until we get to the mind, and then the rules change.  That’s possible, but it doesn’t seem likely to me.

To discuss the second, narrower reason, it’s necessary to mention a couple of facts about brains.  Before I started reading neuroscience, I thought of the brain as a computer, and the mind as the software of that computer.  This is a common conception among programmers and other people knowledgeable about computing.  While it could be broadly true, there are some crucial caveats to keep in mind.

The division between hardware and software is an innovation of modern computing.  Interestingly, the earliest electronic computers didn’t fully have that division, often requiring physical rewiring to be reprogrammed.  The ability to load and reload new software dramatically increased the flexibility and usefulness of computing systems.

This ability is why, when we speak of the architecture of an operating system or application software, such as Microsoft Windows, we generally do so without reference to the hardware that it’s running on.  To be sure, the architecture of an operating system does address the hardware, in the case of Windows with an architectural layer called the HAL (hardware abstraction layer).  The HAL and device drivers deal directly with the hardware so most of the software system doesn’t have to.
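
To make that layering concrete, here’s a minimal, hypothetical sketch of the idea (the class and function names are invented for illustration; this is not Windows or real driver code).  The point is that everything above the abstraction layer is written against an interface, not against any particular piece of hardware:

```python
# Hypothetical sketch of a hardware abstraction layer (invented names, not real OS code).
from abc import ABC, abstractmethod


class BlockDevice(ABC):
    """The abstract interface the rest of the system programs against."""

    @abstractmethod
    def read_block(self, block_number: int) -> bytes:
        ...


class SataDriver(BlockDevice):
    def read_block(self, block_number: int) -> bytes:
        # A real driver would issue SATA commands to the physical controller here.
        return b"data from a SATA disk"


class NvmeDriver(BlockDevice):
    def read_block(self, block_number: int) -> bytes:
        # A different controller and protocol, but the same interface.
        return b"data from an NVMe disk"


def load_boot_record(device: BlockDevice) -> bytes:
    # Higher layers (filesystem, applications) never care which driver is underneath.
    return device.read_block(0)


print(load_boot_record(SataDriver()))
print(load_boot_record(NvmeDriver()))
```

Swap the driver and nothing above it changes, which is the whole value of the divide.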

[Image: diagram of a computer system bus.  Credit: W Nowicki via Wikipedia]

In addition, modern hardware includes a system bus, a mechanism that allows any component of the system to talk to any other component without regard to how near or far it is on the system board.  And memory chips are random access, allowing any desired memory location to be reached directly by its address.

For these reasons, pointing to a specific part of a CPU or memory chip for a piece of Windows functionality is a meaningless exercise.  Yes, the code that implements a feature does physically exist, transiently, in those chips, but its exact location at any given moment is not particularly meaningful.  That location varies for a wide variety of reasons, from the timing of when the code was loaded to the transitory needs of the operating system.

It’s often noted that if we didn’t understand software, examining computer hardware in the same way we typically examine the brain would tell us very little about the software architecture.  And that would be right, if brains worked like modern computers.

However, brains don’t work like that.  There’s no mechanism to reload software and no system bus.  The hardware-software divide doesn’t exist.  To be clear, there’s extensive evidence that brains are information processing systems, but their architectures are very different from those of general purpose digital computers.

[Image: labeled view of the brain’s lobes.  Credit: Anatomist90 via Wikipedia]

What this means is that specific functionality in the brain exists in specific locations.  The functionality of most locations depends on what’s connected to them.  For example, vision processing happens in the occipital lobe.  It doesn’t happen there because there’s anything necessarily special about the neurons in the occipital lobe, but because that’s where the vision processing nucleus of the thalamus (the lateral geniculate nucleus) connects, and that nucleus does vision processing because that’s where the connections from the retinas arrive.
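
As a loose illustration of that “function follows wiring” idea (a toy analogy with invented names, not a model of actual cortex), imagine identical, general-purpose processing units whose role is determined entirely by what feeds into them:

```python
# Toy analogy: identical processing units acquire different "functions"
# purely from what is wired into them (invented names, not a brain model).

def generic_region(input_signals):
    """Every region runs the same generic processing routine."""
    return [f"processed({signal})" for signal in input_signals]


# The only difference between the "visual" and "auditory" regions here
# is the source of their inputs.
retina_signals = ["edge", "color", "motion"]
cochlea_signals = ["pitch", "volume", "rhythm"]

occipital_output = generic_region(retina_signals)   # ends up doing vision
temporal_output = generic_region(cochlea_signals)   # ends up doing hearing

print(occipital_output)
print(temporal_output)
```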

The same applies to the other processing centers in the brain.  The parietal lobe processes touch sensations from throughout the body, with specific parts of the parietal lobe dealing with specific body parts.  The temporal lobe processes auditory information, the frontal lobe plans and initiates movement, the cerebellum provides fine motor coordination, and so on.  The parts of the brain that process sensations from the feet are always in the same location.  The parts that recognize particular visual shapes or colors are also generally in the same location.

There is some variance between individual brains.  For instance, the language centers are usually in the left hemisphere, but a few people have them on the right side.  (The functionality of the two hemispheres, each controlling half of the body, is similar but not identical.)  But these variations are rarely major.  The location of functionality depends on where the connections from the senses come in, where the motor control connections go out, and where connections from other brain regions arrive, and all of that is similar across individual brains of the same species.

Of course, the brain has plasticity, so if someone is blind during development, many of the neurons that would typically serve vision, in the absence of any signals from the eyes, can get recruited into the processing going on in adjacent areas.  But in a healthy person, the majority of functions can be physically localized to a specific part of the brain.

Now, there are areas of the brain that do appear to have specialized hardware.  The neurons in the hippocampus are reportedly able to strengthen and weaken synapses faster than those in other regions, which no doubt aids its role in long-term memory storage.  The amygdala, given its role in generating primal emotions, probably has more hard-coded functionality than elsewhere in the brain.  Most of this specialized hardware appears to be in subcortical structures.  The neocortex regions, again, get their specialization from what connects to them.

Memories themselves are stored in the patterns of synapses (connections between neurons) throughout the brain, with visual portions of a memory stored in the visual processing centers, auditory portions in the auditory processing centers, etc.  A full memory has to be retrieved (via connections) from all these regions.
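
A rough sketch of that idea (purely illustrative; the stores and keys are invented) is a memory that exists only as links into separate modality-specific stores, reassembled at retrieval time:

```python
# Purely illustrative: a "memory" as links into modality-specific stores,
# reassembled at retrieval (invented structure, not a neuroscience model).

visual_store = {"beach_sunset": "orange sky over water"}
auditory_store = {"beach_sunset": "sound of waves"}
olfactory_store = {"beach_sunset": "smell of salt air"}


def recall(memory_key):
    # A full memory is retrieved by following connections into each region's store.
    return {
        "visual": visual_store.get(memory_key),
        "auditory": auditory_store.get(memory_key),
        "olfactory": olfactory_store.get(memory_key),
    }


print(recall("beach_sunset"))
```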

The coordination of consciousness, sleep, and attention happens in the thalamus, which also appears to be heavily involved in integrating sensory information and motor control.  The thalamus outsources most of its work through its extensive connections to the neocortex, which essentially serves as a gigantic expansion substrate for it.  (The thalamus is often described as an information hub for the neocortex, but given its many other crucial functions, I think it’s better to see the thalamus as the main system calling subroutines in the neocortex.)
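
To make that analogy concrete (and it is only an analogy; the function names are invented), the thalamus-as-main-program picture looks something like this:

```python
# Illustration of the analogy only (invented names, not a model of the brain):
# a "main" thalamic loop dispatching work to cortical "subroutines".

def occipital_subroutine(visual_input):
    return f"scene interpreted from {visual_input}"


def temporal_subroutine(auditory_input):
    return f"sounds recognized in {auditory_input}"


CORTICAL_SUBROUTINES = {
    "vision": occipital_subroutine,
    "hearing": temporal_subroutine,
}


def thalamus(sensory_inputs):
    # The "main system": route each input to the cortical region wired to
    # handle it, then integrate the results.
    return {modality: CORTICAL_SUBROUTINES[modality](signal)
            for modality, signal in sensory_inputs.items()}


print(thalamus({"vision": "retinal signals", "hearing": "cochlear signals"}))
```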

All of which is to say that studying brain modules and how they are interconnected, in other words studying the structure of the brain, is also studying the software, the architecture of the mind.  This is the second, narrower reason I mentioned above.  Without the software-hardware divide, the physical structure of the brain is also its logical structure, and science already has some understanding of it.

None of this is to say that our understanding of all these areas is anywhere near complete.  Indeed, there are still areas of the brain whose purposes are not well understood at all.  And given the enormous complexity, 86 billion neurons (most of which are in the cerebellum) and up to a quadrillion synapses, no one has yet managed to trace the neural circuits from end to end to see precisely how a particular thought or decision happens.

But new imaging techniques are constantly being developed, and detailed knowledge increases each year.  A final understanding is a long way off, but there’s no fundamental barrier to it being achievable.

Michael Graziano on building a brain

I’ve written a few times about the attention schema theory of consciousness.  It’s a theory I like because it’s scientific, eschewing any mystical steps, such as assuming that consciousness just magically arises at a certain level of complexity.  It’s almost certainly not perfect, but I think it’s a major step in the right direction.

Michael Graziano, the author of the theory, has a new article up at Aeon describing, under his theory, the essential steps in giving a computer consciousness.  Of course, the devil is in the details, as it always is.  But it’s a fascinating new way to describe the theory.  If you’ve read my previous posts on this and still didn’t feel clear about it, I recommend checking out his article.

Artificial intelligence is growing more intelligent every year, but we’ve never given our machines consciousness. People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own. But that hasn’t panned out (so far as anyone knows). Apparently, the vital spark has to be deliberately designed into the machine. And so the race is on to figure out what exactly consciousness is and how to build it.

…In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow.

Read the rest at Aeon.


Enthusiasts and Skeptics Debate Artificial Intelligence

Kurt Andersen has an interesting article at Vanity Fair that looks at the debate among technologists about the singularity: Enthusiasts and Skeptics Debate Artificial Intelligence | Vanity Fair.

Machines performing unimaginably complicated calculations unimaginably fast—that’s what computers have always done. Computers were called “electronic brains” from the beginning. But the great open question is whether a computer really will be able to do all that your brain can do, and more. Two decades from now, will artificial intelligence—A.I.—go from soft to hard, equaling and then quickly surpassing the human kind? And if the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?

The article discusses figures like Ray Kurzweil and Peter Diamandis, who strongly believe the singularity is coming and are optimistic about it, alongside skeptics like Jaron Lanier and Mitch Kapor, who doubt those claims.

Personally, I put myself somewhere in the middle.  I’m skeptical that there’s going to be a hard takeoff singularity in the next 20-30 years, an event where technological progress runs away into a technological rapture of the nerds.  But I do think many of the claims that singularitarians make may come true eventually, though “eventually” might be centuries down the road.

My skepticism comes from two broad observations.  The first is that I’m not completely convinced that Moore’s Law, Intel co-founder Gordon Moore’s observation that the number of transistors on semiconductor chips doubles roughly every two years, will continue indefinitely into the future.

No one knows exactly when we’ll hit the limits of semiconductor technology, but logic-gate sizes are getting closer to the size of atoms, often understood to be a fundamental limit.  It’s an article of faith among staunch singularitarians that some new technology, like quantum or optical computing, will step in to continue the progress, but I can’t see any guarantee of that.  Of course, there’s also no guarantee that one of those new technologies won’t soar into even higher exponential progress, but beating our chests and proclaiming trust in its eventuality is more emotion than rationality.
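
As a back-of-the-envelope illustration (the starting feature size and the atomic limit below are rough assumptions of mine, not figures from the article), a two-year doubling cadence runs into atomic scales within a few decades:

```python
# Back-of-the-envelope sketch with rough assumptions (not figures from the article):
# ~14 nm features circa 2014, and silicon atomic spacing on the order of 0.2 nm.
feature_nm = 14.0
atomic_limit_nm = 0.2
year = 2014

# Doubling transistor density every two years roughly means shrinking the
# linear feature size by a factor of sqrt(2) every two years.
while feature_nm > atomic_limit_nm:
    feature_nm /= 2 ** 0.5
    year += 2

print(f"Under these assumptions, features reach atomic scale around {year}.")
```

Which is the sense in which continued exponential progress would have to come from some new substrate rather than from shrinking silicon.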

The second observation is that the people making predictions of a technological singularity generally understand computing technology (though not all of them do), but not neuroscience.  In other words, they understand one side of the equation, but not the other.  The other day I linked to a study showing that predictions of hard AI since Turing have been consistently over-optimistic, not necessarily about the technology itself, but about where the technology would have to be to function anything like an organic brain (human or not).

Now, that being said, I do think many of the skeptics are too skeptical.  Many of them insist that we’ll never be able to build a machine that can match the human brain, that we’ll never understand it well enough to do so.  I can’t see any real basis for that level of pessimism.

In my experience, when someone claims that X will be forever unknowable, what they’re really saying, explicitly or implicitly, is that we shouldn’t ever have that knowledge.  I couldn’t disagree more with that kind of thinking.  Maybe there will be areas of reality we’ll never be able to understand, but I certainly hope the people who a priori conclude that about those areas never get the ability to prevent others from trying.

There are a lot of other things singularitarians assert, such as the whole universe being converted into “computronium” or beings able to completely defy our current understanding of physics.  I think these types of predictions are simply unhinged speculation.  Sure, we can’t rule them out, but having any level of confidence in them strikes me as silly.

None of this is to say that there won’t be amazing progress with AI in the next few years.  We’ll see computers able to do things that will surprise and delight us, and make many people nervous.  In other words, the current trends will continue.  I think we’ll eventually get there, and I’d love it if it happened in my lifetime, but I suspect it will be a much longer and harder slog than most of the singularity advocates imagine.

Elon Musk: Killer robots will be here within 5 years

Not sure what to make of this one: ELON MUSK: Killer Robots Will Be Here Within 5 Years – Business Insider.

Elon Musk has been ranting about killer robots again.

Musk posted a comment on the futurology site Edge.org, warning readers that developments in AI could bring about robots that may autonomously decide that it is sensible to start killing humans.

…Here’s Musk’s deleted comment from Edge.org:

The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast-it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…

According to the article, the comment was deleted a few minutes after it was posted.  (Which will no doubt be the source of new conspiracy theories.)

Here’s an article on DeepMind, the secretive AI group that Musk mentions.

I don’t know if Musk’s assessment of how close we might be to general artificial intelligence is accurate, but this was more than just a cautionary note.  It was outright fear-mongering.  I think he realized it, which is no doubt why he deleted it so quickly.  (Assuming his account didn’t get hacked or something, but this sounds like it’s in line with what he’s been saying in interviews.)

I’m not going to repeat everything I’ve said about AI in the last few days.  I’ll just note that Musk’s familiarity with the putative progress of AI research doesn’t mean he understands how minds work, or what it would take for an artificial one to actually be a threat.