Is consciousness only in the back of the brain?

There’s an interesting debate going on among some neuroscientists about which parts of the brain are involved in subjective experience.  On one side are Christof Koch, Giulio Tononi, and colleagues, who argue that consciousness exists wholly in the back of the brain and that the frontal structures are not involved.  On the other side are neuroscientists who, while agreeing that the back part of the brain is definitely involved, argue that the role of the front part can’t be dismissed.

To understand this debate, it’s worth doing a quick review of what is known about the functionality of the various components of the brain.  (To keep things simple, I’m going to focus primarily on the neocortex, the wrinkled cover on the top of the brain.  For those familiar with neuroanatomy: this isn’t to discount the role of sub-cortical structures such as the thalamus or basal ganglia.)

Lobes of the brain
Image credit: BruceBlaus via Wikipedia

The first thing to understand is that the back part of the brain seems to be dedicated to sensory perception, and the front part to planning and initiating movement.  The neocortex is divided into four lobes, which are separated from each other by deep fissures.

The occipital lobe in the back is dedicated to vision.  The upper part of the temporal lobe on the side handles hearing.  The back part of the temporal lobe handles visual recognition of objects, faces, etc.  The back part of the parietal lobe handles visual perception of movement.  The middle part of the parietal lobe, along with surrounding regions, appears to be involved in integrating the various senses.  It’s sometimes referred to as the posterior association cortex.

A strip along the front part of the parietal lobe is the somatosensory cortex, each part of which processes touch sensations from a particular body part.  It’s somewhat mirrored by a strip just across the central sulcus along the back of the frontal lobe: the primary motor cortex, which controls the movement of each body part.

In addition to controlling movement, the frontal lobe also plans movement.  More immediate planning happens in the region just forward of the primary motor cortex, named, appropriately enough, the premotor cortex.

As we move forward, the planning becomes progressively more forward looking and more abstract.  This is the prefrontal cortex, often referred to as the executive center of the brain.  Its primary role is planning, including planning to plan, driving information gathering for future planning, and so on.  As part of this function, it acts as a conductor, leading the other lobes in imagining various scenarios.

Okay, so back to the debate.

The back-only proponents cite various neurological case studies as evidence, talking about patients who had parts of their frontal lobes damaged or disconnected, but who still showed signs of being conscious.  They also cite cases of patients who had a frontal lobe pathology making them unresponsive, but later recovered the use of their frontal lobes enough to relay that they were conscious the whole time, but simply lacked the will to communicate.

This kind of evidence seems problematic for a number of reasons.  First, in my (admittedly inexpert) opinion, some of the cited cases in the paper seem anecdotal and based on hearsay.  Second, the other cases depend on self report, which is a problem because only patients with at least somewhat functional frontal lobes can self report anything, and the accuracy of such reports hinges on them remembering their former states of mind accurately.  Third, as the authors of the second paper point out, the data has something of a selection bias in it, and some of the cited evidence doesn’t check out.  And finally, again as pointed out in the response paper, the exact nature of frontal lobe damage or disconnection matters, making each case unique.

But I think the actual answer to this question depends on how we define “consciousness.”  If our definition only includes unfocused perception, then the back-only proponents might have a case.  The problem is that we seem to perceive a lot of stuff unconsciously.  And raw perception alone doesn’t quite seem to match most people’s intuition of consciousness.

That intuition also typically requires that the system have attention, emotions, imagination, and introspection.

Frontal lobe expert Elkhonon Goldberg, in his book ‘The New Executive Brain’, sees attention as a frontal lobe function.  He describes the back portions of the brain as creating the stage production of subjective experience, with the audience for the resulting show being in the frontal lobes.  Crucially, it’s this audience that decides what part of the show to focus on, in other words, where to direct attention.

Image credit: OpenStax College via Wikipedia

Emotions are driven by sub-cortical structures such as the amygdala, hypothalamus, anterior cingulate cortex, and others that are sometimes referred to together as the limbic system.  The signals from these structures seem to affect processing in the frontal lobe, but also the temporal lobe and the insular cortex, which lies deep in the lateral fissure beneath the frontal, parietal, and temporal lobes.  In other words, emotional feeling seems to happen in both the front and back of the brain.

Imagination, simulating various action-sensory scenarios, seems to require the frontal lobes, particularly the prefrontal cortex.  Not that the content of imagination takes place in the prefrontal cortex itself.  It actually farms the content generation of these simulations out to the other regions, such that the vision processing centers handle the visual parts of an imagined scenario, the hearing centers handle the auditory parts, etc.  The prefrontal cortex acts as the initiator, conductor, and audience, but not the content generator.  Still, without the prefrontal cortex driving it, it’s hard to see imagination happening in any meaningful way.

And then there’s introspection, also known as self reflection.  Without introspection, we wouldn’t even know we were conscious, so it seems vital for human level consciousness.  Again, the prefrontal cortex seems heavily involved in this feedback function, although as with imagination, it depends on processing in the back portions of the brain, most likely the regions on the border between the temporal and parietal lobes.

Perhaps another way to look at this is to ask, if we somehow completely removed the brain’s frontal regions (and associated basal ganglia and thalamic nuclei), would the remaining back half still be conscious?  It might have the ability to build predictive sensory models, in other words it would have perception, but the modeling wouldn’t be done with any purpose, and it wouldn’t have any mechanism to decide on what portions of those models should be focused on.  Arguably, it would be a mindless modeling system.

But if we removed the rear portion and kept the frontal lobes, we’d have even less functionality since the frontal lobes are crucially dependent on the posterior ones for the content they need to do their work.

And neither of the above isolated systems would have emotions unless we retained the limbic system as part of their supporting structures.

All of which is to say, for what we intuitively think of as consciousness, we need all of the components discussed above.  Subjective experience is the communication between the perception and emotion centers of the brain and the action-oriented centers.  Wholesale removal of any of these centers might conceivably leave us with an information processing framework, but not one most of us would recognize as conscious.

Unless of course I’m missing something?

h/t Keith Frankish and Gregg Caruso for sharing the papers on Twitter.


The success of John Scalzi’s descriptive minimalism

One of the categories here on the blog is Science Fiction, mainly because I read and watch a lot of it.  Occasionally, someone wanting to get into the literary version of the genre asks me for recommendations on good initial books to start with.  My recommendation often depends on the person, but I frequently suggest they try John Scalzi’s work.

Scalzi has a light, witty writing style.  He never seems to be far from outright humor, although his stories usually have a serious core.  This allows him to explore issues that other authors struggle to address without alienating all but the most hardcore sci-fi nerds.  Many people who otherwise dislike science fiction enjoy his books.

Of the writers who have explored posthuman themes, his approach is often the least threatening.  His breakout novel, Old Man’s War, features old people recruited into a future army where their minds are transferred into new combat bodies.  But he carefully avoids broaching some of the more existential issues associated with that idea.  Likewise, his novel Lock In explores minds in different bodies in a way that minimizes the angst of many of his more (small “c”) conservative readers.

Scalzi makes compromises to make his work more accessible, but those compromises let him present ideas to a wide audience.  He’s been rewarded for it; he’s a bestselling author.  And he won the Hugo Award for Best Novel for Redshirts, a book with a setting very similar to Star Trek’s, but one where the ship’s crew actually notices that a lot of people other than the senior officers die on away missions, and decides to do something about it.

His most recent book is The Collapsing Empire, a far future story about an interstellar empire that is about to lose its ability to travel interstellar distances.  I read, enjoyed, and recommend it.  But it’s the first in a new series, so it ends on a cliffhanger, which some readers might find annoying.

But the reason for this post is that some reviewers are apparently finding the book to be too short a read.  As Scalzi pointed out in a recent post, the novel isn’t actually a short one by normal sci-fi standards, weighing in at about 90,000 words.  Why then does it feel short to some readers?  Scalzi himself offers an explanation.

I’m not entirely sure what makes people think The Collapsing Empire is short, but I have a couple guesses. One is that, like most books of mine, it’s heavy on dialogue and light on description, which makes it “read” faster than other books of the same length might be.

I think Scalzi’s exactly right about this.  His books do read fast, and I think a large part of it is that they’re simply easy to read.  It takes minimal effort to parse them, particularly starting with Redshirts.  I once saw someone comment that his writing makes for an “effortless” experience of story.
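A rough back-of-envelope calculation illustrates why a dialogue-heavy 90,000-word book can feel short: effective reading speed matters as much as word count.  The words-per-minute figures below are my own illustrative assumptions, not numbers from Scalzi’s post:

```python
# Back-of-envelope reading-time estimate.  The pace figures are
# illustrative assumptions about how prose density affects speed.
def reading_hours(word_count, words_per_minute):
    """Return the estimated hours needed to read a book at a given pace."""
    return word_count / words_per_minute / 60

novel_words = 90_000  # roughly The Collapsing Empire's length

# A dialogue-heavy style might be read noticeably faster than dense prose.
for label, wpm in [("dense prose", 200), ("average", 250), ("dialogue-heavy", 350)]:
    print(f"{label:>14}: ~{reading_hours(novel_words, wpm):.1f} hours")
```

By these (assumed) numbers, the same book takes 7.5 hours as dense prose but closer to 4 as breezy dialogue, which could easily account for the “it felt short” reaction.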

It seems to me that a large part of this is because of his “heavy on dialogue and light on description” style.  If you’ve never read his stuff and want to get an idea of this style, check out his novella on Tor: After the Coup.  Scalzi virtually never gives a detailed description of settings, except to note what kind of place they are, such as an office, spaceship bridge, or palace, and if there is anything unusual about them.  And I can’t recall him ever describing a character in detail.

Some readers are put off by this type of minimalism, finding it a bit too “white room”, too much of a bare stage.  They prefer more sensory detail to add vividness to the setting or characters.

I can understand that sentiment to some extent, but I personally find detailed descriptions too tedious.  If I’m otherwise enjoying the story, I’ll put up with detailed descriptions (to an extent), but for me it’s something I have to endure, an obstacle I have to climb over.

One of the most often cited pieces of writing advice is “show don’t tell”.  This advice seems to mean different things to different people.  To me it means that, to relay important information to the reader, the best option is story events that reveal it, the second is dialogue or inner monologue, and the least desirable is straight exposition.

But many writers take “show don’t tell” to mean providing detailed descriptions and letting the reader reach their own conclusions.  So instead of simply saying that a workroom is messy, the details of the messiness should be described and the reader allowed to figure out that it’s a mess.  As a reader, I personally find this kind of writing frustratingly tedious.  I tend to glaze over during the description and miss the point the author wanted me to derive.

Apparently a lot of people agree with me.  As I noted above, Scalzi is a bestselling author.  I’ll say I don’t like everything about his writing.  (His character voices could be more distinct, although he’s improving on that front, and his endings often feel a little too pat.)  But his books are always entertaining, and I think, together with the humor, the minimalist style has a lot to do with it.

In many ways, this style is reminiscent of a type of writing we used to see a lot more of.  Classic science fiction authors like Robert Heinlein (whose style Scalzi’s early Old Man’s War books emulated), Isaac Asimov, Jack Vance, and many others were all fairly minimalist on description.

Over time, styles have tended to become more verbose.  I’m not sure why this is, but I suspect technology has something to do with it.  Before the 1980s, most writers used a typewriter.  Iterative revisions, with lots of opportunities to add new descriptive details, often required retyping a lot of text (i.e. work).  Word processing software made revision much easier, and added description much more common.

In my view, this has led to a lot of bloated novels, often taking 500 pages to tell a 300 page story.  To be clear, I have no problem with a 500 page book if it tells a 500 page story (Dune and Fellowship of the Ring both told a lot of story with around 500 pages), but many authors today seem to need that many pages to tell the same stories that were once handled with much smaller books.

Certainly tastes vary, but I think Scalzi’s success shows that when given an option for tighter writing, a lot of readers take it.  I wish more authors would take note.


Why fears of an AI apocalypse are misguided

In this Big Think video, Steven Pinker makes a point I’ve made before: fear of artificial intelligence comes from a deep misunderstanding about the relationship between intelligence and motivation.  Human minds come with survival instincts, programmatic goals hammered out by hundreds of millions of years of evolution.  Artificial intelligences aren’t going to have those goals, at least unless we put them there, and therefore will have no inherent motivation to be anything other than the tools they were designed to be.

Many people concerned about AI (artificial intelligence) quickly concede that worries about it taking over the world out of a sheer desire to dominate are silly.  What they worry about are poorly thought out goals.  What if we design an AI to make paperclips, and it attacks its task too enthusiastically and turns the whole Earth, and everyone on it, into paperclips?

The big hole in this notion is the idea that we’d create such a system, then give it carte blanche to do whatever it wanted in pursuit of its goals, without building in any safety systems or sanity checks.  We don’t give that carte blanche to our current computer systems.  Why would we do it with more intelligent ones?

Perhaps a more valid concern is what motivations some malicious human, or group of humans, might intentionally put in AIs.  If someone designs a weapons system, then giving it goals to dominate and kill the enemy might certainly make sense for them.  And such a goal could easily go awry, a combination of the two concerns above.

But even this concern has a big assumption, that there would only be one AI in the world with the capabilities of the one we’re worried about.  We already live in a world where people create malicious software.  We’ve generally solved that problem by creating more software to protect us from the bad software.  It’s hard to see why we wouldn’t have protective AIs around to keep any errant AIs in line and stop maliciously programmed ones.

None of this is to say that artificial intelligence doesn’t give us another means to potentially destroy ourselves.  It certainly does.  We can add it to the list: nuclear weapons, biological warfare, overpopulation, climate change, and now poorly thought out artificial intelligence.  The main thing to understand about this list is it all amounts to things we might do to ourselves, and that includes AIs.

There are possibilities of other problems with AI, but they’re much further down the road.  Humans might eventually become the pampered centers of vast robotic armies that do all the work, leaving the humans to live out a role as a kind of queen bee, completely isolated from work and each other, their every physical and emotional need attended to.  Such a world might be paradise for those humans, but I think most of us today would ponder it with some unease.

Charles Stross, in his science fiction novel ‘Saturn’s Children’, imagined a scenario where humans went extinct, their reproductive urge completely satisfied by sexbots indistinguishable from real humans but without the emotional needs of those humans, leaving a robotic civilization in their wake.

None of this strikes me as anything we need to worry about in the next few decades.  A bigger problem for our time is the economic disruption that will be caused by increasing levels of automation.  We’re a long way off from robots taking every job, but we can expect waves of disruption as technology progresses.

Of course, we’re already in that situation, and society’s answer so far to the affected workers has been variations of, “Gee, glad I’m not you,” and a general hope that the economy will eventually provide alternate opportunities for those people.  As automation takes over an increasingly larger share of the economy, that answer may become increasingly less viable.  How societies deal with it could turn out to be one of the defining issues of the 21st century.


Are the social sciences “real” science?

YouTube channel Crash Course is starting a new series on what is perhaps the most social of social sciences: Sociology.

The social sciences, such as sociology, but also psychology, economics, anthropology, and other similar fields get a lot of grief from people about not being “real” science.  This criticism is typically justified by noting that scientific theories are about making predictions, and the ability of the social sciences to make predictions seems far weaker than, say, particle physics.  Economists couldn’t predict when the Great Recession was coming, the argument goes, so it’s not a science.

But this ignores the fact that predictions are not always possible in the natural sciences either.  Physics is the hardest of hard sciences, but it’s married to astronomy, an observational science.  Astronomers can’t predict when the star Betelgeuse will go supernova.  But they still know a great deal about stellar life cycles, and can tell that Betelgeuse is in a stage where it could go any time in the next few million years.

Likewise, biologists can’t predict when and how a virus will mutate.  They understand evolution well enough to know that viruses will mutate, but predicting what direction a mutation will take is impossible.  Meteorologists can’t predict the precise path of a hurricane, even though they understand how hurricanes develop and what factors determine the path they take.

The problem is that these are matters not directly testable in controlled experiments.  Which is exactly the problem with predicting what will happen in economies.  In all of these cases, controlled experiments, where the variables are isolated until the causal link is found, are impossible.  So scientists have little choice but to do careful observation and recording, and look for patterns in the data.

Just as an astronomer knows Betelgeuse will eventually go supernova, an economist knows that tightening the money supply will send contractionary pressures through the economy.  They can’t predict that the economy will definitely shrink if the money supply is tightened, because other confounding variables might affect the outcome, but they know from decades of observation that economic growth will be slower than it otherwise would have been.  This is an important insight to have.

In the same manner, many of the patterns studied in the other social sciences don’t provide precise predictive power, but they still give valuable insights into what is happening.  And again, there are many cases in the natural sciences where this same situation exists.

Why then all the criticism of the social sciences?  I think the real reason is that the results of social science studies often have socially controversial conclusions.  Many people dislike these conclusions.  Often these people are social conservatives upset that studies don’t validate their cherished notions, such as traditionally held values.  But many liberals deny science just as vigorously when it violates their ideologies.

Not that everything is ideal in these fields.  I think ethnographers in anthropology often get too close to their subject matter, living among the culture they’re studying for years at a time.  While this provides deep insights not available through other methods, it taints any conclusions with the researcher’s subjective viewpoint.  Often follow-up studies don’t reach the same findings.  This seems to make ethnographies, a valuable source of cultural information, more journalism than science.

And psychology has been experiencing a notorious replication crisis for the last several years, where previously accepted psychological effects are not being reproduced in follow-up studies.  But the replication crisis was first recognized by people in the field, and the field as a whole appears to be gradually working out the issues.

When considering the replication crisis, it pays to remember the controversy over the last several years in theoretical physics.  Unable to test their theories, some theorists have called for those theories not to be held to the classic testing standard.  Many in the field are pushing back, and theoretical physics is also working through the issues.

In the end, science is always a difficult endeavor, even when controlled experiments are possible.  Looking at the world to see patterns, developing theories about those patterns, and then putting them to the test, facing possible failure, is always a hard enterprise.

It’s made more difficult when your subjects have minds of their own, with their own agendas, and can alter their behaviors when observed.  This puts the social sciences into what philosopher Alex Rosenberg calls an arms race: science uncovers a particular pattern, people learn about it, alter their behavior based on their knowledge of it, and effectively change the pattern out from under the science.

But like all sciences, it still produces information we wouldn’t have otherwise had.  And as long as it’s based on careful rigorous observation, with theories subject to revision or refutation on those observations, I think it deserves the label “science”.


What about subjective experience implies anything non-physical?

Mary’s room is a classic philosophical thought experiment about consciousness.  The Wikipedia article on what’s called the knowledge argument quotes Frank Jackson, the originator of the argument, as follows:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?

The takeaway idea from this thought experiment is supposed to be that, since Mary knows “all the physical information there is to obtain” about seeing color, what she learns when having her first actual sensory experience of color must be non-physical.

But this assumes that it is possible for Mary to actually know everything physical about seeing color, without actually ever seeing color.  It seems clear she does get new knowledge when she leaves the room, the knowledge of what it’s like to actually experience color.  The question is what the nature of that new knowledge is.  Like so many of these types of exercises, the premise essentially assumes the conclusion, that raw subjective experience isn’t physical.  But if the raw experience actually is physical, then the premise is a contradiction, positing that she has all the information, then going on to describe what information she doesn’t have.

But the question I have is, why does this premise, that experience is not physical, seem compelling to so many people?  (At a philosophical level.  I understand why so many people find it emotionally compelling.)

One of the chief features that separate humans from other animals is the degree to which we can think symbolically.  Language is the most common example of this ability.  Other animals issue sounds which mean something to those around them, such as a monkey who issues a certain screech for a snake, and a different screech for a flying predator.  But only humans appear able to manipulate the sounds in complex sentences and frameworks, particularly with hierarchical and recursive levels of complexity.

When we use language, we utter a sound that is a symbol for something else.  That something else might be another symbol acting as another placeholder for collections of more primitive symbols.  But eventually, if we follow through the hierarchy of symbols, the most primitive ones we can find will represent sensory perceptions, emotions, or actions, in other words, raw conscious experience.

Now, you might argue that some words refer to objects, such as dogs.  But dogs are themselves a composite sensory experience.  When I say the word “dog” to you, it evokes certain imagery.  But the dog concept generally denotes a certain type of animal with a certain type of body plan.  The imagery has colors, textures, shapes, sounds, and smells, in other words, more primitive sensory experiences.
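The hierarchy described above can be sketched as a recursive walk through a symbol graph.  The concepts and primitives below are invented purely for illustration, not a claim about how the brain actually stores concepts:

```python
# Toy sketch of the idea that composite symbols ultimately reduce to
# raw sensory primitives.  The graph contents are made up for illustration.
SENSORY_PRIMITIVES = {"fur-texture", "bark-sound", "tail-shape", "animal-smell"}

CONCEPTS = {
    "dog": ["mammal", "bark-sound", "tail-shape"],
    "mammal": ["fur-texture", "animal-smell"],
}

def reduce_to_primitives(symbol):
    """Recursively expand a symbol until only sensory primitives remain."""
    if symbol in SENSORY_PRIMITIVES:
        return {symbol}
    primitives = set()
    for part in CONCEPTS.get(symbol, []):
        primitives |= reduce_to_primitives(part)
    return primitives

# Every path through the hierarchy bottoms out in raw experience.
print(reduce_to_primitives("dog"))
```

The point of the sketch is simply that the recursion has to bottom out somewhere, and in language that somewhere is raw conscious experience.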

We might also talk about the altered consciousness of meditative states some people experience.  But if you read descriptions of those states, they’re always either using a new word to label that state, or attempting to describe it in terms of the other primitives we’re all familiar with.

So, all language ultimately reduces to these primitive aspects of conscious experience: sensory perception, primal emotions, motor action, and perhaps meditative states.  Once we reach this point however, language ends.  While we can come up with words as stand-ins for these primitives, we can’t further describe them.

For example, consider trying to describe the color yellow to someone who had been born blind.  You can’t.  The best you can do is attempt to relate it into terms the blind person might understand, such as the feel of sunshine, the touch and smell of bananas, etc.  But you can’t describe the raw experience of yellow to them.  It’s ineffable.

But does this ineffability, this inability to subjectively reduce the raw experience further, mean anything about the reality of such an experience?  What about this ineffability might lead us to conclude it involves something other than physics?

It’s worth noting that just because these experiences can’t be subjectively reduced, it doesn’t mean that their neural correlates can’t be objectively reduced.  For example, we know the experience of yellow begins with photons with wavelengths between 575 and 585 nanometers striking our retina, exciting a mixture of red-sensitive and green-sensitive cone receptors and causing a cascade of electrochemical signals up the optic nerve to the thalamus and occipital lobe, somewhere producing what will eventually be communicated as yellow to the other brain centers.
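The objective side of this reduction is mundane enough to sketch in code.  The yellow band comes from the 575–585 nm range mentioned above; the other band edges are rough textbook approximations of the visible spectrum, not precise colorimetry:

```python
# Illustrative mapping from light wavelength (in nanometers) to named hue.
# Band edges are approximate; real color perception also depends on
# intensity, context, and the mix of cone responses.
def hue_for_wavelength(nm):
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),   # contains the 575-585 nm range in the text
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for low, high, name in bands:
        if low <= nm < high:
            return name
    return "outside visible range"

print(hue_for_wavelength(580))  # -> yellow
```

The mapping from wavelength to label is straightforwardly physical; the philosophical question is about the experience that accompanies the labeled signal, not about this lookup.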

Of course, we are far from a full accounting of the neuroscience here.  And many seem always ready to seize on the remaining gaps as an opportunity to wedge in mystical or magical notions.  But every year, those gaps close a little more.  Taking solace in them seems like an ever eroding stance.

A common argument is that we don’t know why these experiences exist.  Why can’t the brain go about its business without them?  This seems to assume that raw experience is superfluous to what the brain does, and perhaps that superfluousness means that it’s outside of the causal framework we call “physics”, an epiphenomenon.

But as I’ve noted before, the very fact that we can discuss primal experiences and apply symbolic labels to them means that they’re not outside of that causal framework.  It takes extreme logical contortions to conclude that they don’t influence at least the language centers of our brain.

So then, what explains experience?  As I’ve noted before, I think to have any hope of answering that question, we have to be willing to ask what experience actually is.  It seems like there are many possible answers, but the one I like best is grounded in the evolutionary reason for brains, to make movement decisions.  Experience is communication.  But communication from what to what?

I think the answer is: communication from the perception centers and emotion centers of the brain to the movement planning centers.  This communication provides information that is crucial for the movement planning centers to do their job.  What we call “experience” or “feeling” is the raw substance of that communication.  This communication includes sensory perceptions (including a sense of self) and emotional reactions.  Remove it, and it’s difficult to see how movement decisions can happen.

Of course, this remains a speculative explanation.  Any explanation of experience will be at this point.  The question is, does speculation of this type, built on physical functionality we already know has to exist in the brain, involve fewer assumptions than speculation about non-physical phenomena?

It’s often said that subjective experience can’t be explained physically.  My question is, what am I missing?  What about experience causes people to say this?  What specific attributes are outside the purview of any such explanation?


Kindle Oasis: a quick review

I read a lot of books, and as I’ve posted about before, the lion’s share of those books these days are Kindle e-books.

E-books aren’t for everyone, but for the last several years they’ve been my preferred way to consume a book.  I love the way I can buy a book and immediately start reading it, the fact that I can quickly search the book for specific words or phrases, that my large library of e-books is accessible from anywhere, and that it doesn’t take up any space in the house.  (A house that, despite an epic cleanup last year, still has a lot of space taken up with shelves and mounds of traditional books.)

I started with a Kindle 2 in 2009, and after a year or so of tentative experimentation, pretty much went all digital.  After a while, I discovered the iOS Kindle app and started reading on my phone and iPad.  Within a few months, I almost never used the old Kindle device and eventually retired it.

The nice thing about reading books on iOS devices (and occasionally Android ones) was that I could see the color version of the book cover and the user interface was much more responsive.  But they’ve always had a couple of drawbacks: a display that’s unreadable outside in the sun, and eyestrain caused by the screen backlight.  Seeing my phone screen in the sun is frequently an issue, though I rarely attempt to read books outside; the eyestrain, on the other hand, has been a recurring problem.  I’ve always handled it by minimizing the screen brightness and taking frequent breaks.

But given the improvement I’ve seen in my friends’ new Kindle Paperwhite devices, I decided it was time to try a dedicated Kindle again.  And as a voracious reader, I felt justified in splurging for the top-of-the-line model: the Kindle Oasis.  (In reality, the price of this model is in the neighborhood of what I paid for the old device years ago.)

My first impression of this thing was how small it is.  It’s not much bigger than my iPhone 7 and seems to be just as light.  It’s definitely smaller and lighter than the iPad I often read on.  But its battery life is far longer.  The included cover comes with an additional battery, which Amazon promises will last for months.  (Although that promise is based on 30 minutes of reading a day.  Yeah right.  I might get a week or two out of it, but that will be a lot more than I get out of the phone or tablet.)

The user interface on these new models is much more responsive than what I recall from my old one.  It’s still not as responsive as iOS devices, but then it costs a lot less, so pluses and minuses.  And the display is much sharper and clearer than the old model.  It really does look like printed text.  With the backlight off (it’s only needed in the dark), I was able to read from the device for hours with no more eyestrain than I would have gotten from reading a paper book.  For reading straight text, it’s working like a charm.

The loss of color is still noticeable when perusing the book library or catalog, but the amount of time I spend doing that is fleeting compared to the time actually spent in the books themselves.  I’m still waiting to see how well this device does for books with diagrams and tables, an area where I think Kindle on all devices has struggled somewhat, sometimes due to shoddy formatting from the publisher, but often simply due to limitations in the platform.

So, all in all, I’m pretty happy with it after a week of usage.  I had told a couple of friends I was picking one up, and they wanted to know my impressions, hence this post.  I’m definitely not going to stop reading on my phone when waiting for an appointment or in the grocery check-out line, and the iPad or laptop may still get some action for books with lots of tables and illustrations, but the Oasis seems poised to get the lion’s share of my home reading.

Posted in Zeitgeist | 10 Comments

Recommendation: The Stars Are Legion

Occasionally on this blog, when pondering the far future, I’ve pushed back on the idea that the long-term fate of civilization is to be robotic machine life, instead noting that a truly advanced civilization would be engineered life, that it would make a lot more sense for its “machines” to be biological systems.  Admittedly, at some point, the distinction between engineered biology and very advanced machinery starts to become blurred.

Kameron Hurley’s “The Stars Are Legion” appears to take this idea very much to heart.  From one point of view, this is a classic sci-fi tale of an interstellar generation ship where things have deteriorated and everyone has forgotten the original purpose of the voyage.  But in this tale, the interstellar ark appears to be an artificial miniature solar system, with a miniature sun in the center orbited by innumerable world ships, collectively called “the Legion”, with each world ship a living entity with its own homeostasis system.

The story’s characters live in these world ships.  They have the ability to travel between them on sentient single-person rider ships.  Naturally, there is warfare between the worlds, with certain worlds conquering others and raiding their resources.  Things are not well in the Legion.  Many, perhaps most, worlds appear to be dying, rotting.  The warfare is often about extracting resources to survive.

The interiors of the world ships are very strange; being biological systems, they are…gooey, with spongy walls and floors absorbing any spilled liquids (including blood), large arteries and veins running through the structures, and many other hallmarks of a living organism, such as rooms that come across more like organelle compartments than traditional rooms.

Just about everything in this story is gooey, including the spray-on spacesuits.  And the characters often have a comfort level with the integrated biology of their environment that will leave many readers queasy.

But the strangeness doesn’t end there.  It quickly becomes apparent that the characters in the book are all female.  No males are mentioned.  Although as the story continues, it also becomes evident that the engineered biology doesn’t stop with the environment, but also applies to the characters themselves, and everything is not how it seems.

The story here is more than just an exploration of engineered biology.  It’s a searing story of two characters working to save their world, with these characters providing the two narrative viewpoints.  Here, Hurley takes a technique used in George R.R. Martin’s A Song of Ice and Fire series and James S.A. Corey’s Expanse books, with each chapter named for that chapter’s viewpoint character.  But it’s taken to a new level, with both viewpoints being first person and in present tense, providing an intimate and immediate feel to the writing.

Shifting viewpoints is something that has historically happened in third person accounts, but it’s fairly rare in first person books, mainly due, I think, to the fact that it can be very easy to get confused about whose viewpoint we’re getting at any one point.  (Although there has been a trend in recent years pairing one first person protagonist with other third person narratives.)  But here we have two first person accounts.  It works because of the chapter title trick telling us upfront whose viewpoint we’re getting.  It’s a technique I wonder whether we’ll see more of.

Hurley’s world in this book is gooey, gory, violent, and often surreal.  In many ways, it reminds me of early Orson Scott Card stories from the 1980s.  I found it mind bending in ways that few books manage to pull off.  If you’re looking for something bizarre and thought provoking, and can tolerate violence and a lot of fairly gross description, I highly recommend it.

Posted in Science Fiction | 22 Comments