Recommendation: Dark Intelligence

I’ve been meaning to check out Neal Asher’s books for some time.  They keep coming up as recommendations on Amazon, Goodreads, and in various other venues, and they sound enticing, like the kind of fiction I’d enjoy.  Last week, I finally read the first book of his most recent trilogy, ‘Dark Intelligence’.

The universe described in Dark Intelligence has some similarities to Iain Banks’ Culture novels.  Earth lies at the center of an interstellar society called the Polity.  The Polity isn’t nearly as utopian as Banks’ Culture, but it’s similarly ruled and run by AIs.  Humans are still around, ranging from baseline humans to ones augmented in various ways, either physically or mentally.  In this particular novel, most of the action takes place outside of the Polity itself.

The Polity has an enemy, the Prador Kingdom, composed of a brutal crab-like alien species called the prador.  The Polity and the prador fought a war about a century before the novel begins, which ended with a tentative truce.  What I’ll call the anchor protagonist, the awesomely named Thorvald Spear, was a soldier killed in the war, but at the beginning of the book he is resurrected from a recently discovered mind recording.

It turns out that Spear was killed by a rogue AI named Penny Royal, who also took out a large number of Spear’s fellow soldiers when it went berserk.  Penny Royal is still at large when Spear is revived, and he has a burning desire for revenge, so he sets out to find and destroy it.  His chief lead is Isobel Satomi, a crime boss who may know the AI’s location because she once visited it to obtain new abilities, which it provided, but at a cost.  As a result of receiving those abilities, Satomi is now slowly transforming into an alien predator.

Yeah, obviously there is a lot going on in this book, and everything I’ve just described is revealed in the opening chapters.  The book has a substantial cast of viewpoint characters: humans, AIs, and aliens.  Penny Royal is at the center of several ongoing threads, its actions affecting many lives.  It turns out it is regarded by the Polity AIs as dangerous, a “potential gigadeath weapon and paradigm-changing intelligence”.

There are a lot of references to events that I assume happened in previous books, particularly on one of the planets, Masada.  Somewhere in the book I realized that I had already read about one of the aliens in a short story by Asher: ‘Softly Spoke the Gabbleduck’.  He appears to have written a large number of books and short stories in this universe.

I found Asher’s writing style enticing but at times tedious.  Enticing because he enjoys describing technology, weapons, and space battles in detail, and a lot of it ends up being nerd candy for the mind.  Tedious because he enjoys detail all around, often describing settings and characters in more detail than I really care to know, making his books read more slowly as a result.

Asher also has a tendency to invoke things like quantum computing or fusion power as a means of describing essentially magical technologies.  Much of it is standard space opera fare, such as faster than light travel or artificial gravity.  Some of the rest involves things like thousands of human minds being recorded on a shard of leftover AI material.  This isn’t necessarily hard science fiction, although it remains far harder than typical media science fiction.

But what kept me riveted were the themes he explores.  The story often focuses on the borders between human, AI, and alien minds.  Satomi’s transformation in particular is described in gruesome detail throughout the book.  (It reminded me of the movie ‘The Fly’, particularly the 1986 version.)  But what makes her transformation interesting, along with the similar transformations other characters undergo, is how their minds change throughout the process.  Their deepest desires and instincts start to change in ways that demonstrate just how contingent our motivations are on our evolutionary background or, in the case of AIs, engineering.

Not that this book was only an intellectual exercise.  There is a lot of action, including space battles, combat scenes, and AI conflict, not to mention scenes of an alien predator hunting down humans, from the predator’s point of view.

Warning: this book has its share of gore and violence.  I think it’s all in service to the story, but if you find vividly described gore off-putting, this might not be your cup of tea.

This book is the first in a trilogy, so it ends with lots of unresolved threads.  I’ve already started the second book, and will probably be reading a lot more of Asher’s books in the coming months.


Steven Pinker: From neurons to consciousness

This lecture from Steven Pinker has been around for a while, but it seems to get at a question a few people have asked me recently: how does the information processing of neurons and synapses lead to conscious perception?  Pinker doesn’t answer this question comprehensively (that would require a vast series of lectures), but he answers facets of it to the extent that it’s possible to see how the rest of the answer might come together.

Be warned: this lecture is very dense.  If the concepts are entirely new to you, you might have to re-watch portions to fully grasp some of the points.  And the visual illusions he shows, unfortunately, don’t seem to come through, but the point they make does.

Of course, people who insist that there has to be something more than just the physical processing won’t be convinced.  But if you’re interested in what mainstream neuroscience knows about this stuff, it’s well worth a watch.


Is consciousness only in the back of the brain?

There’s an interesting debate going on among some neuroscientists about which parts of the brain are involved in subjective experience.  On the one side are Christof Koch, Giulio Tononi, and colleagues who argue that consciousness exists wholly in the back of the brain, that the frontal structures are not involved.  On the other side are neuroscientists who, while agreeing that the back part of the brain is definitely involved, argue that the role of the front part can’t be dismissed.

To understand this debate, it’s worth doing a quick review of what is known about the functionality of the various components of the brain.  (To keep things simple, I’m going to focus primarily on the neocortex, the wrinkled cover on the top of the brain.  If you’re familiar with neural anatomy, this isn’t to discount the role of sub-cortical structures such as the thalamus or basal ganglia.)

Lobes of the brain
Image credit: BruceBlaus via Wikipedia

The first thing to understand is that the back part of the brain seems to be dedicated to sensory perception, and the front part to planning and initiating movement.  The neocortex is divided into four lobes, which are separated from each other by deep fissures.

The occipital lobe in the back is dedicated to vision.  The front part of the temporal lobe on the side handles hearing.  The back part of the temporal lobe handles visual recognition of objects, faces, etc.  The back part of the parietal lobe handles visual perception of movement.  The middle part of the parietal lobe, along with surrounding regions, appears to be involved in integration of the various senses.  It’s sometimes referred to as the posterior association cortex.

A strip along the front part of the parietal lobe is the somatosensory cortex, each part of which processes touch sensations from a particular body part.  It’s somewhat mirrored by a strip just across the central sulcus fissure along the back of the frontal lobe, which is the primary motor cortex involved in controlling the movement of each body part.

In addition to controlling movement, the frontal lobe also plans movement.  More immediate planning happens in the regions just forward of the primary motor cortex, named appropriately enough, the premotor cortex.

As we move forward, the planning becomes progressively more forward looking and more abstract.  This is the prefrontal cortex, often referred to as the executive center of the brain.  Its primary role is planning, including planning to plan, driving information gathering for future planning, etc.  As part of its function, it acts as a conductor leading the other lobes in imagining various scenarios.

Okay, so back to the debate.

The back-only proponents cite various neurological case studies as evidence, talking about patients who had parts of their frontal lobes damaged or disconnected, but who still showed signs of being conscious.  They also cite cases of patients who had a frontal lobe pathology making them unresponsive, but later recovered the use of their frontal lobes enough to relay that they were conscious the whole time, but simply lacked the will to communicate.

This kind of evidence seems problematic for a number of reasons.  First, in my (admittedly inexpert) opinion, some of the cited cases in the paper seem anecdotal and based on hearsay.  Second, the other cases depend on self report, which is a problem because only patients with at least somewhat functional frontal lobes can self report anything, and the accuracy of such reports hinges on their remembering their former states of mind accurately.  Third, as the authors of the second paper point out, the data has something of a selection bias in it, and some of the cited evidence doesn’t check out.  And finally, again as pointed out in the response paper, the exact nature of frontal lobe damage or disconnect matters, making each case unique.

But I think the actual answer to this question depends on how we define “consciousness.”  If our definition only includes unfocused perception, then the back-only proponents might have a case.  The problem is that we seem to perceive a lot of stuff unconsciously.  And raw perception alone doesn’t quite seem to match most people’s intuition of consciousness.

That intuition also typically requires that the system have attention, emotions, imagination, and introspection.

Frontal lobe expert Elkhonon Goldberg, in his book ‘The New Executive Brain’, sees attention as a frontal lobe function.  He describes the back portions of the brain as creating the stage production of subjective experience, with the audience for the resulting show being in the frontal lobes.  Crucially, it’s this audience that decides what part of the show to focus on, in other words, where to direct attention.

Image credit: OpenStax College via Wikipedia

Emotions are driven by sub-cortical structures such as the amygdala, hypothalamus, anterior cingulate cortex, and others that are sometimes referred to together as the limbic system.  The signals from these structures seem to affect processing in the frontal lobe, but also the temporal lobe and the insular cortex, which lies deep in the lateral sulcus between the temporal lobe and the frontal and parietal lobes.  In other words, emotional feeling seems to happen in both the front and back of the brain.

Imagination, simulating various action-sensory scenarios, seems to require the frontal lobes, particularly the prefrontal cortex.  Not that the content of imagination takes place in the prefrontal cortex itself.  It actually farms the content generation of these simulations out to the other regions, such that the vision processing centers handle the visual parts of an imagined scenario, the hearing centers handle the auditory parts, etc.  The prefrontal cortex acts as the initiator, conductor, and audience, but not the content generator.  Still, without the prefrontal cortex driving it, it’s hard to see imagination happening in any meaningful way.

And then there’s introspection, also known as self reflection.  Without introspection, we wouldn’t even know we were conscious, so it seems vital for human level consciousness.  Again, the prefrontal cortex seems heavily involved in this feedback function, although as with imagination, it depends on processing in the back portions of the brain, most likely the regions on the border between the temporal and parietal lobes.

Perhaps another way to look at this is to ask, if we somehow completely removed the brain’s frontal regions (and associated basal ganglia and thalamic nuclei), would the remaining back half still be conscious?  It might have the ability to build predictive sensory models, in other words it would have perception, but the modeling wouldn’t be done with any purpose, and it wouldn’t have any mechanism to decide which portions of those models should be focused on.  Arguably, it would be a mindless modeling system.

But if we removed the rear portion and kept the frontal lobes, we’d have even less functionality since the frontal lobes are crucially dependent on the posterior ones for the content they need to do their work.

And neither of the above isolated systems would have emotions unless we retained the limbic system as part of their supporting structures.

All of which is to say, for what we intuitively think of as consciousness, we need all of the components discussed above.  Subjective experience is the communication between the perception and emotion centers of the brain and the action oriented centers.  Wholesale removal of any of these centers might conceivably leave us with an information processing framework, but not one most of us would recognize as conscious.

Unless of course I’m missing something?

h/t Keith Frankish and Gregg Caruso for sharing the papers on Twitter.


The success of John Scalzi’s descriptive minimalism

One of the categories here on the blog is Science Fiction, mainly because I read and watch a lot of it.  Occasionally, someone wanting to get into the literary version of the genre asks me for recommendations on good initial books to start with.  My recommendation often depends on the person, but I frequently suggest they try John Scalzi’s work.

Scalzi has a light witty writing style.  He never seems to be far from outright humor, although his stories usually have an overall serious core.  This allows him to explore issues that other authors struggle to address without alienating all but the most hardcore sci-fi nerds.  A lot of people who dislike science fiction often do like his books.

Of the writers who have explored posthuman themes, his approach is often the least threatening.  His breakout novel, Old Man’s War, features old people recruited into a future army where their minds are transferred into new combat bodies.  But he carefully avoids broaching some of the more existential issues associated with that idea.  Likewise, his novel Lock In explores minds in different bodies in a way that minimizes the angst of many of his more (small “c”) conservative readers.

Scalzi makes compromises to make his work more accessible, but it allows him to present ideas to a wide audience.  He’s been rewarded for it; he’s a bestselling author.  And he won the Hugo Award for Best Novel for Redshirts, a book with a setting very similar to Star Trek’s, but one where the ship’s crew actually notices that a lot of people other than the senior officers die on away missions, and decides to do something about it.

His most recent book is The Collapsing Empire, a far future story about an interstellar empire that is about to lose its ability to travel interstellar distances.  I read, enjoyed, and recommend it.  But it’s the first in a new series, so it ends on a cliffhanger, which some readers might find annoying.

But the reason for this post is that some reviewers are apparently finding the book to be too short a read.  As Scalzi pointed out in a recent post, the novel isn’t actually a short one by normal sci-fi standards, weighing in at about 90,000 words.  Why then does it feel short to some readers?  Scalzi himself offers an explanation.

I’m not entirely sure what makes people think The Collapsing Empire is short, but I have a couple guesses. One is that, like most books of mine, it’s heavy on dialogue and light on description, which makes it “read” faster than other books of the same length might be.

I think Scalzi’s exactly right about this.  His books do read fast, and I think a large part of it is because they’re simply easy to read.  It takes a minimal amount of effort to parse them, particularly starting with Redshirts.  I saw someone once comment that his writing makes for an “effortless” experience of story.

It seems to me that a large part of this is because of his “heavy on dialogue and light on description” style.  If you’ve never read his stuff and want to get an idea of this style, check out his novella on Tor: After the Coup.  Scalzi virtually never gives a detailed description of settings, except to note what kind of place they are, such as an office, spaceship bridge, or palace, and if there is anything unusual about them.  And I can’t recall him ever describing a character in detail.

Some readers are put off by this type of minimalism, finding it to be a bit too “white room”, too much of a bare stage.  They prefer more sensory detail to add vividness to the setting or characters.

I can understand that sentiment to some extent, but I personally find detailed descriptions too tedious.  If I’m otherwise enjoying the story, I’ll put up with detailed descriptions (to an extent), but for me it’s something I have to endure, an obstacle I have to climb over.

One of the most often cited pieces of writing advice is “show don’t tell”.  This advice seems to mean different things to different people.  To me it means that, to relay important information to the reader, the best option is with story events that reveal it, the second is with dialog or inner monologue, and the least desirable is with straight exposition.

But many writers take “show don’t tell” to mean providing detailed descriptions and letting the reader reach their own conclusions.  So instead of simply saying that a workroom is messy, the details of the messiness should be described and the reader allowed to figure out that it’s a mess.  As a reader, I personally find this kind of writing frustratingly tedious.  I tend to glaze over during the description and miss the point the author wanted me to derive.

Apparently a lot of people agree with me.  As I noted above, Scalzi is a bestselling author.  I’ll say I don’t like everything about his writing.  (His character voices could be more distinct, although he’s improving on that front, and his endings often feel a little too pat.)  But his books are always entertaining, and I think, together with the humor, the minimalist style has a lot to do with it.

In many ways, this style is reminiscent of a type of writing we used to see a lot more of.  Classic science fiction authors like Robert Heinlein (whose style Scalzi’s early Old Man’s War books emulated), Isaac Asimov, Jack Vance, and many others were all fairly minimalist on description.

Over time, styles have tended to become more verbose.  I’m not sure why this is, but I suspect technology has something to do with it.  Before the 1980s, most writers used a typewriter.  Iterative revisions, with lots of opportunities to add new descriptive details, often required retyping a lot of text (i.e. work).  Word processing software made revision much easier, and adding descriptive detail much more common.

In my view, this has led to a lot of bloated novels, often taking 500 pages to tell a 300 page story.  To be clear, I have no problem with a 500 page book if it tells a 500 page story (Dune and Fellowship of the Ring both told a lot of story with around 500 pages), but many authors today seem to need that many pages to tell the same stories that were once handled with much smaller books.

Certainly tastes vary, but I think Scalzi’s success shows that when given an option for tighter writing, a lot of readers take it.  I wish more authors would take note.


Why fears of an AI apocalypse are misguided

In this Big Think video, Steven Pinker makes a point I’ve made before, that fear of artificial intelligence comes with a deep misunderstanding about the relationship between intelligence and motivation.  Human minds come with survival instincts, programmatic goals hammered out by hundreds of millions of years of evolution.  Artificial intelligences aren’t going to have those goals, at least unless we put them there, and therefore no inherent motivation to be anything other than the tools they were designed to be.

Many people concerned about AI (artificial intelligence) quickly concede that worries about it taking over the world out of a sheer desire to dominate are silly.  What they worry about are poorly thought out goals.  What if we design an AI to make paperclips, and it attacks its task too enthusiastically and turns the whole Earth, and everyone on it, into paperclips?

The big hole in this notion is the idea that we’d create such a system, then give it carte blanche to do whatever it wanted in pursuit of its goals, without building in any safety systems or sanity checks.  We don’t give that carte blanche to our current computer systems.  Why should we do it with more intelligent ones?

Perhaps a more valid concern is what motivations some malicious human, or group of humans, might intentionally put in AIs.  If someone designs a weapons system, then giving it goals to dominate and kill the enemy might certainly make sense for them.  And such a goal could easily go awry, a combination of the two concerns above.

But even this concern has a big assumption, that there would only be one AI in the world with the capabilities of the one we’re worried about.  We already live in a world where people create malicious software.  We’ve generally solved that problem by creating more software to protect us from the bad software.  It’s hard to see why we wouldn’t have protective AIs around to keep any errant AIs in line and stop maliciously programmed ones.

None of this is to say that artificial intelligence doesn’t give us another means to potentially destroy ourselves.  It certainly does.  We can add it to the list: nuclear weapons, biological warfare, overpopulation, climate change, and now poorly thought out artificial intelligence.  The main thing to understand about this list is it all amounts to things we might do to ourselves, and that includes AIs.

There are possibilities of other problems with AI, but they’re much further down the road.  Humans might eventually become the pampered centers of vast robotic armies that do all the work, leaving the humans to live out a role as a kind of queen bee, completely isolated from work and each other, their every physical and emotional need attended to.  Such a world might be paradise for those humans, but I think most of us today would ponder it with some unease.

Charles Stross, in his science fiction novel ‘Saturn’s Children’, imagined a scenario where humans went extinct, their reproductive urge completely satisfied by sexbots indistinguishable from real humans but without the emotional needs of those humans, leaving a robotic civilization in humanity’s wake.

None of this strikes me as anything we need to worry about in the next few decades.  A bigger problem for our time is the economic disruption that will be caused by increasing levels of automation.  We’re a long way off from robots taking every job, but we can expect waves of disruption as technology progresses.

Of course, we’re already in that situation, and society’s answer so far to the affected workers has been variations of, “Gee, glad I’m not you,” and a general hope that the economy would eventually provide alternate opportunities for those people.  As automation takes over an increasingly larger share of the economy, that answer may become increasingly less viable.  How societies deal with it could turn out to be one of the defining issues of the 21st century.


Are the social sciences “real” science?

YouTube channel Crash Course is starting a new series on what is perhaps the most social of social sciences: Sociology.

The social sciences, such as sociology, but also psychology, economics, anthropology, and other similar fields get a lot of grief from people about not being “real” science.  This criticism is typically justified by noting that scientific theories are about making predictions, and the ability of the social sciences to make predictions seems far weaker than, say, particle physics.  Economists couldn’t predict when the Great Recession was coming, the argument goes, so it’s not a science.

But this ignores the fact that predictions are not always possible in the natural sciences either.  Physics is the hardest of hard sciences, but it’s married to astronomy, an observational science.  Astronomers can’t predict when the star Betelgeuse will go supernova.  But they still know a great deal about star life cycles, and can tell that Betelgeuse is in a stage where it could go any time in the next few million years.

Likewise biologists can’t predict when and how a virus will mutate.  They understand evolution well enough to know that viruses will mutate, but predicting what direction a mutation will take is impossible.  Meteorologists can’t predict the precise path of a hurricane, even though they understand how hurricanes develop and what factors lead to the path they take.

The problem is that these are matters not directly testable in controlled experiments.  Which is exactly the problem with predicting what will happen in economies.  In all of these cases, controlled experiments, where the variables are isolated until the causal link is found, are impossible.  So scientists have little choice but to do careful observation and recording, and look for patterns in the data.

Just as an astronomer knows Betelgeuse will eventually go supernova, an economist knows that tightening the money supply will send contractionary pressures through the economy.  They can’t predict that the economy will definitely shrink if the money supply is tightened because other confounding variables might affect the outcome, but they know from decades of observation that economic growth will be slower than it otherwise would have been.  This is an important insight to have.

In the same manner, many of the patterns studied in the other social sciences don’t provide precise predictive power, but they still give valuable insights into what is happening.  And again, there are many cases in the natural sciences where this same situation exists.

Why then all the criticism of the social sciences?  I think the real reason is that the results of social science studies often have socially controversial conclusions.  Many people dislike these conclusions.  Often these people are social conservatives upset that studies don’t validate their cherished notions, such as traditionally held values.  But many liberals deny science just as vigorously when it violates their ideologies.

Not that everything is ideal in these fields.  I think anthropology ethnographers often get too close to their subject matter, living among the culture they’re studying for years at a time.  While this provides deep insights not available through other methods, it taints any conclusions with the researcher’s subjective viewpoint.  Follow-up studies often don’t reach the same findings.  This seems to make ethnographies, a valuable source of cultural information, more journalism than science.

And psychology has been experiencing a notorious replication crisis for the last several years, where previously accepted psychological effects are not being reproduced in follow-up studies.  But the replication crisis was first recognized by people in the field, and the field as a whole appears to be gradually working out the issues.

When considering the replication crisis, it pays to remember the controversy over the last several years in theoretical physics.  Unable to test their theories, some theorists have called for those theories not to be held to the classic testing standard.  Many in the field are pushing back, and theoretical physics is also working through the issues.

In the end, science is always a difficult endeavor, even when controlled experiments are possible.  Looking at the world to see patterns, developing theories about those patterns, and then putting them to the test, facing possible failure, is always a hard enterprise.

It’s made more difficult when your subjects have minds of their own, with their own agendas, and can alter their behaviors when observed.  This puts the social sciences into what philosopher Alex Rosenberg calls an arms race, where science uncovers a particular pattern, people learn about it, alter their behavior based on their knowledge of it, and effectively change the pattern out from under the science.

But like all sciences, it still produces information we wouldn’t have otherwise had.  And as long as it’s based on careful rigorous observation, with theories subject to revision or refutation on those observations, I think it deserves the label “science”.


What about subjective experience implies anything non-physical?

Mary’s room is a classic philosophical thought experiment about consciousness.  The Wikipedia article on what’s called the knowledge argument quotes Frank Jackson, the originator of the argument, as follows:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?

The takeaway idea from this thought experiment is supposed to be that, since Mary knows “all the physical information there is to obtain” about seeing color, what she learns when having her first actual sensory experience of color must be non-physical.

But this assumes that it is possible for Mary to actually know everything physical about seeing color, without actually ever seeing color.  It seems clear she does get new knowledge when she leaves the room, the knowledge of what it’s like to actually experience color.  The question is what the nature of that new knowledge is.  Like so many of these types of exercises, the premise essentially assumes the conclusion, that raw subjective experience isn’t physical.  But if the raw experience actually is physical, then the premise is a contradiction, positing that she has all the information, then going on to describe what information she doesn’t have.

But the question I have is, why does this premise, that experience is not physical, seem compelling to so many people?  (At a philosophical level.  I understand why so many people find it emotionally compelling.)

One of the chief features that separate humans from other animals is the degree to which we can think symbolically.  Language is the most common example of this ability.  Other animals issue sounds which mean something to those around them, such as a monkey who issues a certain screech for a snake, and a different screech for a flying predator.  But only humans appear able to manipulate the sounds in complex sentences and frameworks, particularly with hierarchical and recursive levels of complexity.

When we use language, we utter a sound that is a symbol for something else.  That something else might itself be another symbol, a placeholder for collections of more primitive symbols.  But eventually, if we follow the hierarchy of symbols down, the most primitive ones we find will represent sensory perceptions, emotions, or actions, in other words, raw conscious experience.

Now, you might argue that some words refer to objects, such as dogs.  But a dog is itself a composite sensory experience.  When I say the word “dog” to you, it evokes certain imagery.  The dog concept generally denotes a certain type of animal with a certain type of body plan.  The imagery has colors, textures, shapes, sounds, and smells, in other words, more primitive sensory experiences.
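This reduction can be sketched as a toy lookup: composite symbols expand into other symbols until only sensory primitives remain.  The vocabulary and `resolve` helper below are hypothetical, invented purely to illustrate the hierarchy bottoming out.

```python
# Toy model: every symbol either names a sensory primitive directly,
# or expands into a collection of more primitive symbols.
SYMBOLS = {
    "dog": ["fur", "bark", "four-legged body"],
    "fur": ["texture", "color"],
    "bark": ["sound"],
    "four-legged body": ["shape"],
    # Primitives: raw experiences language can label but not further describe.
    "texture": [], "color": [], "sound": [], "shape": [],
}

def resolve(symbol):
    """Recursively expand a symbol until only primitives remain."""
    parts = SYMBOLS[symbol]
    if not parts:  # a primitive: language bottoms out here
        return {symbol}
    result = set()
    for part in parts:
        result |= resolve(part)
    return result

print(sorted(resolve("dog")))  # every path ends in a sensory primitive
```

However deep the hierarchy goes, following any symbol down terminates in the same small set of experiential primitives.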

We might also talk about the altered consciousness of meditative states some people experience.  But if you read descriptions of those states, they’re always either using a new word to label that state, or attempting to describe it in terms of the other primitives we’re all familiar with.

So, all language ultimately reduces to these primitive aspects of conscious experience: sensory perception, primal emotions, motor action, and perhaps meditative states.  Once we reach this point, however, language ends.  While we can come up with words as stand-ins for these primitives, we can’t describe them any further.

For example, consider trying to describe the color yellow to someone who was born blind.  You can’t.  The best you can do is relate it to terms the blind person might understand, such as the feel of sunshine, or the touch and smell of bananas.  But you can’t describe the raw experience of yellow to them.  It’s ineffable.

But does this ineffability, this inability to subjectively reduce the raw experience further, mean anything about the reality of such an experience?  What about this ineffability might lead us to conclude it involves something other than physics?

It’s worth noting that just because these experiences can’t be subjectively reduced doesn’t mean that their neural correlates can’t be objectively reduced.  For example, we know the experience of yellow begins with photons with wavelengths between 575 and 585 nanometers striking our retinas, exciting a mixture of red-sensitive and green-sensitive cone receptors and causing a cascade of electrochemical signals up the optic nerve to the thalamus and occipital lobe, somewhere producing what will eventually be communicated as yellow to the other brain centers.
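The first step of that objective chain can be caricatured as a function from wavelength to the cone types that dominate the response.  The boundaries below are crude simplifications chosen only to match the 575–585 nm band mentioned above, not precise physiology.

```python
def excited_cones(wavelength_nm):
    """Rough sketch: which cone types dominate the response at a given
    wavelength.  Boundaries are simplified approximations, not physiology."""
    if 575 <= wavelength_nm <= 585:
        # The band described in the text: a mixture of red-sensitive (L)
        # and green-sensitive (M) cones, eventually reported as "yellow".
        return ("L", "M")
    if wavelength_nm > 585:
        return ("L",)       # longer wavelengths skew toward L cones
    return ("M", "S")       # shorter wavelengths: green/blue-sensitive cones

print(excited_cones(580))   # the "yellow" band from the text
```

The point of the sketch is just that this part of the story is objectively traceable, step by step, even though the resulting experience can’t be subjectively decomposed.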

Of course, we are far from a full accounting of the neuroscience here.  And many seem always ready to seize on the remaining gaps as an opportunity to wedge in mystical or magical notions.  But every year, those gaps close a little more.  Taking solace in them seems like an ever-eroding stance.

A common argument is that we don’t know why these experiences exist.  Why can’t the brain go about its business without them?  This seems to assume that raw experience is superfluous to what the brain does, and perhaps that superfluousness means that it’s outside of the causal framework we call “physics”, an epiphenomenon.

But as I’ve noted before, the very fact that we can discuss primal experiences and apply symbolic labels to them means that they’re not outside of that causal framework.  It takes extreme logical contortions to conclude that they don’t influence at least the language centers of our brain.

So then, what explains experience?  As I’ve noted before, I think to have any hope of answering that question, we have to be willing to ask what experience actually is.  It seems like there are many possible answers, but the one I like best is grounded in the evolutionary reason for brains, to make movement decisions.  Experience is communication.  But communication from what to what?

I think the answer is: communication from the perception centers and emotion centers of the brain to the movement planning centers.  This communication provides information that is crucial for the movement planning centers to do their job.  What we call “experience” or “feeling” is the raw substance of that communication.  This communication includes sensory perceptions (including a sense of self) and emotional reactions.  Remove it, and it’s difficult to see how movement decisions can happen.

Of course, this remains a speculative explanation; any explanation of experience will be at this point.  The question is: does speculation of this type, built on physical functionality we already know must exist in the brain, involve fewer assumptions than speculation about non-physical phenomena?

It’s often said that subjective experience can’t be explained physically.  My question is, what am I missing?  What about experience causes people to say this?  What specific attributes are outside the purview of any such explanation?

Posted in Mind and AI | 98 Comments