Predicting far future technologies

Prediction is very difficult, especially about the future.
Niels Bohr

If you’re a science fiction writer, one of the things you do is try to predict what future technologies will come along.  If you’re not writing hard science fiction, this is relatively easy.  You just come up with a cool capability and throw in some plausible-sounding technical jargon.  It’s like adding a magical ability in a fantasy story.  As long as you make sure the rules of magic are consistent, you’re in business.

But if you’re aiming for harder science fiction, or you’re a futurist, what guiding principles can you use to make grounded predictions?  Near-term predictions are actually much harder, partly because they’ll be assessed for accuracy in your lifetime, but also because they require a pretty thorough immersion in current technologies: how they work, what the trend lines are, and what room exists for future improvement.

It gets a little easier with longer-term predictions, because instead of trying to figure out which technical breakthroughs will happen in the next few years, you’re focusing on what might eventually be possible, where the laws of physics may be the deciding factor.

Of course, we could well discover new laws of physics down the road, and who knows what capabilities that new knowledge might enable?  As Arthur C. Clarke once observed, any sufficiently advanced technology becomes indistinguishable from magic for an observer from a less developed society.  We don’t know most of what we don’t know, and attempting to make predictions about future knowledge is basically just wild guessing.

But if we’re trying to be somewhat grounded, keeping our predictions to things that have a reasonable chance of being true, then it might pay to stick to known science, or at least science that isn’t too speculative.  When thinking about this, it pays to remember what technology actually is, which is the manipulation of natural forces for our benefit.  If the future technology you’re imagining isn’t based on some natural force, or a combination of natural forces, then you’re essentially positing magic.

This might be a little clearer if we think about the earliest technologies.  Many animals use sticks as tools to get food out of tight places, what Douglas Adams called “stick technology”.  Early humans developed a technology no other animal had by taming fire and using it for cooking, protection, and many other purposes.  And starting with breeding dogs from wolves, humans began domesticating a number of animals for a variety of purposes and controlling how plants grow for food, again making use of existing natural resources.

If you don’t think these things count as technology, then consider plumbing.  Developed by ancient societies, plumbing makes use of the natural tendencies of water (hydraulics) for human convenience.  Or consider electricity, which Larry Niven and Jerry Pournelle in their novel, Lucifer’s Hammer, referred to as tamed lightning.

A modern car is built to harness natural forces: electricity, air flow, the combustive reaction of gasoline (refined oil, which is stored concentrated solar energy), and mechanical force.  Without these natural forces, there can be no car.  Or any other kind of technology.

For future technologies, this means we need to find plausible natural forces that could be used to construct them.  It’s easy to imagine something like, say, a Star Trek style teleporter, until we try to envision what kind of system could deconstruct, track, transmit, and reconstruct the 7 × 10^27 atoms in a human body along with their precise physical configuration.  Even if we could come up with a computational system that could store that much information and get around the quantum no-cloning theorem, transmitting it using any variant of electromagnetism might take orders of magnitude longer than the age of the universe.  It seems difficult to imagine such a system without resorting to new physics, in other words without reaching for magic.
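To get a feel for the scale problem, here’s a rough back-of-the-envelope calculation.  The bits-per-atom and link-speed figures below are pure assumptions on my part, chosen to be fairly generous; even so, the transmission time lands well beyond the age of the universe, and stingier assumptions push it out by further orders of magnitude.

```python
# Back-of-the-envelope: how long to transmit a human body's worth of data?
# Every figure here is an assumption for illustration, not a measurement.

ATOMS_IN_BODY = 7e27        # rough atom count for a human body
BITS_PER_ATOM = 1e3         # assumed: species, position, bonds, quantum state
LINK_RATE_BPS = 1e12        # assumed: a very optimistic 1 terabit/s link

SECONDS_PER_YEAR = 3.156e7
AGE_OF_UNIVERSE_YEARS = 1.38e10

total_bits = ATOMS_IN_BODY * BITS_PER_ATOM
transmit_years = total_bits / LINK_RATE_BPS / SECONDS_PER_YEAR

print(f"Total data: {total_bits:.1e} bits")
print(f"Transmission time: {transmit_years:.1e} years")
print(f"Ratio to age of universe: {transmit_years / AGE_OF_UNIVERSE_YEARS:.0f}x")
```

With these numbers the transfer takes around 2 × 10^11 years, and that’s before worrying about how the receiving end reassembles anything.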

I think a good starting point when evaluating potential future technologies is to ask: does it happen in nature?  Or do all of its components happen in nature?  Once we observe it in nature, the question ceases to be whether it’s possible and becomes whether humans can do it.  Historically, assuming that the answer to that last question is “no” hasn’t been a winning bet.

By this simple metric, it should have been obvious to people in the 19th century that eventually flight would be possible, since birds were doing it everywhere.  And the fact that meteors and other natural phenomena routinely exceed the speed of sound should have clued in early 20th century pilots that aircraft would eventually be able to do so.

So, what does this mean for individual future technologies?  Well, organic life is built on molecular nanomachinery, which indicates that nanomachines will eventually be possible.  (Nanomachines that operate in attack swarms?  Not so much.  What would be the means of propulsion and levitation for such swarms?)  An interesting question is whether manufactured nanomachines could avoid mutations creeping in, something evolved nanomachines have never managed.

Intelligent machines?  If you accept that we are intelligent machines, just evolved ones, then the question is how long it will take for us to build engineered intelligent machines.  Another question is what the potential might be for those engineered intelligences.  Many people seem to assume that they could have the capacities of human brains paired with the speed of silicon processors, making them super god-like entities.

But nothing like this yet exists in nature, and we may well find that achieving the necessary capacities requires inescapable trade-offs in performance.  (For instance, maybe that much information density requires water-cooled operation.)  While human minds are highly unlikely to be the most intelligent systems that can exist, we shouldn’t assume that AI (artificially intelligent) minds will automatically be thousands of times more powerful than the human variety, particularly a human brain that has itself been integrated with technology.

The criterion becomes more problematic when we consider things like warp drives, hyperspace, or other putative FTL (faster than light) technologies.  We have no evidence of anything in nature that travels faster than light, except for a couple of apparent exceptions that don’t seem like much help.

One exception is quantum entanglement, but whether it counts as FTL seems to depend on which interpretation of quantum mechanics you favor, and it allows for no actual communication.  (If you try to manipulate the quantum state of one of the entangled particles, you don’t affect its partner, you only destroy the entanglement.)

Another exception is galaxies beyond our cosmological horizon.  Due to the expansion of the universe, they’re receding from us faster than light (from our vantage point).  But those galaxies are causally disconnected from us.  Once they’ve moved over the horizon, they can have no effect on us, nor us on them.  In other words, they’re now effectively in a different universe.  No objects that can causally interact have ever been observed to move faster than light relative to each other.
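For a rough sense of where that kicks in, Hubble’s law (v = H0 × d) gives the distance at which the recession speed reaches the speed of light.  This is a simplified sketch; the true causal horizon depends on the universe’s full expansion history, and the Hubble constant value below is an approximation.

```python
# Sketch: the distance at which cosmic expansion carries galaxies away
# faster than light, using Hubble's law (v = H0 * d). The H0 value is an
# approximation, and this simplified figure ignores the expansion history.

C_KM_S = 299_792         # speed of light in km/s
H0 = 70.0                # Hubble constant, km/s per megaparsec (approximate)
LY_PER_MPC = 3.262e6     # light-years per megaparsec

hubble_distance_mpc = C_KM_S / H0
hubble_distance_gly = hubble_distance_mpc * LY_PER_MPC / 1e9

print(f"Recession reaches light speed at ~{hubble_distance_mpc:,.0f} Mpc")
print(f"  = ~{hubble_distance_gly:.1f} billion light-years")
```

That works out to roughly 14 billion light-years.  Crucially, nothing is moving faster than light through its local space; it’s the space between us and those galaxies that is expanding.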

People sometimes talk about concepts such as Alcubierre drives or wormholes.  But these concepts require speculative phenomena such as negative energy or imaginary mass, which puts us back in the realm of new physics, in other words, speculative guessing.

And under special and general relativity, any FTL capability, by whatever means, effectively allows for time travel.  Is time travel possible (that is, aside from our normal forward progression)?  Again, we have no observable phenomena in nature that seem to do it, nor any identifiable method that could be built on to do it.  And the absence of tourists from the future seems to hint that time travel to arbitrary destinations isn’t possible.  (People sometimes imagine a code of ethics that prevents time travelers from making their presence known, but the idea that such a code would hold for all time travelers from all future societies seems improbable.)

Dyson Sphere
Image credit: Bibi Saint-Pol via Wikipedia

But the criterion does allow for some pretty mind-bending concepts such as artificial planets, stars, even black holes, not to mention megastructures such as Dyson swarms.  All of which are rarely seen in science fiction.

What do you think?  Do you agree that looking for natural phenomena is a good criterion for evaluating the possibility of future technologies?  If not, what additional or alternative criteria would you use?


Why embodiment does not make mind copying impossible

A while back, I highlighted a TEDx talk by Anil Seth in which he discussed the idea that the brain is largely a prediction machine.  Apparently Seth more recently gave another talk at the full TED conference, which is receiving rave reviews.  Unfortunately, that talk doesn’t appear to be online yet.

But one article reviewing the talk focuses on something Seth purportedly said in it, that uploading minds is impossible because the mind and body are tightly bound together.

Seth’s work has shown compelling evidence that consciousness doesn’t just consist of information about the world traveling via our senses as signals into our brains. Instead, he’s found that consciousness is a two-way street, in which the brain constantly uses those incoming signals to make guesses about what is actually out there. The end result of that interplay between reality and the brain, he says, is the conscious experience of perception.

“What we perceive is [the brain’s] best guess of what’s out there in the world,” he said, explaining that these guesses are constantly in flux.

…“We don’t passively see the world,” he said, “we actively generate it.” And because our bodies are complicit in the generation of our conscious experience, it’s impossible to upload consciousness to some external place without somehow taking the body with it.
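To make the “controlled guessing” framing concrete, here’s a toy loop of my own; a bare-bones sketch of the general idea, not Seth’s actual model.  The “brain” holds a guess about a hidden quantity and repeatedly nudges it by a fraction of the prediction error from noisy sensory input.

```python
# Toy illustration of perception as prediction: a belief is repeatedly
# revised by a fraction of the prediction error from noisy sensory input.
# A bare-bones sketch of the general idea, not a model from Seth's work.
import random

TRUE_BRIGHTNESS = 0.8   # the actual state of the world (hidden from the "brain")
LEARNING_RATE = 0.3     # how strongly prediction errors revise the belief

random.seed(42)
belief = 0.2            # the brain's initial guess
for step in range(10):
    sensation = TRUE_BRIGHTNESS + random.gauss(0, 0.05)  # noisy incoming signal
    prediction_error = sensation - belief                # guess vs. input mismatch
    belief += LEARNING_RATE * prediction_error           # revise the guess
    print(f"step {step}: belief = {belief:.3f}")
# The belief converges toward the true value: the "percept" is the brain's
# running best guess, not the raw signal itself.
```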

Everything Seth describes conforms with most of the neuroscience I’ve read.  To be clear, the brain is indeed tightly bound to the body.  Most of it is focused on interpreting signals sent to it from throughout the peripheral nervous system, and much of the rest on generating movement or hormonal changes.  The portion involved in what we like to think of as brainy stuff (mathematics, art, culture, etc.) is relatively tiny.

And the brain appears to have very strong expectations about the body it’s supposed to be in.  That expected body image may actually be genetic.  In his book, The Tell-Tale Brain, neuroscientist V.S. Ramachandran describes a neurological condition called apotemnophilia, where patients want part of their body removed because they don’t feel like it should be there (to the extent that 50% of people with this condition go on to actually have the body part amputated).  It’s as though their expected body image has become damaged in some way, missing a part of their actual physical body.

If apotemnophilia is a standard brain mechanism gone awry, then a normal human mind is going to have very strong expectations of what kind of body it will be in.  This makes science fiction scenarios of removing someone’s brain and installing it as the control center of a machine an unlikely prospect, at least without dealing with far more complex issues than successfully wiring the brain into a machine.

But ultimately, I don’t think this makes copying a mind impossible, although it does put constraints on the type of environment the copied mind might function well in.  If the mind has strong expectations about its body, then a copied mind will need to have a body.  A mind uploaded into a virtual environment would need a virtual body, and a mind installed in a robotic body would need that body to be similar to its original body.  (At least initially.  For this discussion, I’m ignoring the possibility of later altering the mind to be compatible with alternative bodies.)

But doesn’t the tight integration require that we take the entire body, as Seth implies?  We could insist that copying a mind requires that the person’s entire nervous system be copied.  This would raise the difficulty, since instead of just copying the brain, the entire body would have to be copied.

Alternatively, a new nervous system could be provided, one that sends signals similar to the original one.  This requires that we have an extremely good understanding of the pathways and signalling going to and from the brain.  But if we’ve developed enough knowledge and technology to plausibly copy the contents of a human brain, understanding those pathways seems achievable.

The question is, what exactly is needed to copy a person’s mind?  If we omit the peripheral nervous system, are we perhaps leaving out something crucial?  What about the spinal cord?  When pondering this question, it’s worth noting that patients who’ve suffered a complete severing of their upper spinal cord remain, mentally, the same person they were before.

Such patients still have a functioning vagus nerve, the connection between the brain and internal organs.  But in the past, patients with severe peptic ulcer conditions would sometimes have vagotomies, where the vagus nerve to the stomach was partially or completely severed, without compromising their mental abilities.

Certainly the severing of these various nerve connections might have an effect on a person’s cognition, but none of them seem to make that cognition impossible.  Every body part except the brain has been lost by somebody who continued to mentally be the same person.  The human mind appears to be far more resilient than some scientists give it credit for.

Indeed, the fact that a person can remain partially functional despite damage to various regions of the brain demonstrates that this resilience doesn’t stop at the spinal cord.  Which raises an interesting question: does the entire brain have to be copied to copy a human mind?

The short answer appears to be no.  The lower brain stem seems to be well below the level of consciousness and is very tightly involved in running the autonomic functions of the body.  In a new body, it could probably be replaced.

The same could be said for the cerebellum, the compact region at the lower back of the brain involved in fine motor coordination.  Replace the body, and there’s no reason this particular region would need to be preserved.  In fact, patients who have suffered catastrophic damage to their cerebellum are clumsy, but appear to remain mentally complete.

That leaves the mid-brain region and everything above, including the overall cerebrum.  Strangely enough, of the 86 billion neurons in the brain, these regions appear to have fewer than 25 billion of them.  (Most of the brain’s neurons are actually in the cerebellum.  Apparently fine motor coordination takes a lot of processing capacity.)  It’s even conceivable that lower levels of the cerebral sensory processing regions could be replaced to match the new sensory hardware in a new body without destroying human cognition.

Obviously all of this is very speculative, but then people are often content to entertain concepts like faster-than-light spaceships, which would require a new physics, as merely a matter of can-do spirit.  All indications are that mind copying wouldn’t require a new physics, only an ability to continue studying the physics of the brain.

Unlike the singularity enthusiasts, I doubt this capability will happen in the next twenty years.  It seems more likely to be something much farther in the future, although it’s unlikely to be developed by those who’ve already concluded it’s impossible.

But is there an aspect of this I’m missing?  Something (aside from incredulity) that does in fact make mind copying impossible?


Recommendation: Dark Intelligence

I’ve been meaning to check out Neal Asher’s books for some time.  They keep coming up as recommendations on Amazon, Goodreads, and in various other venues, and they sound enticing, like the kind of fiction I’d enjoy.  Last week, I finally read the first book of his most recent trilogy, ‘Dark Intelligence’.

The universe described in Dark Intelligence has some similarities to Iain Banks’ Culture novels.  Earth lies at the center of an interstellar society called the Polity.  The Polity isn’t nearly as utopian as Banks’ Culture, but it’s similarly ruled and run by AIs.  Humans are still around, in every combination from baseline to heavily augmented, either physically or mentally.  In this particular novel, most of the action takes place outside of the Polity itself.

The Polity has an enemy, the Prador Kingdom, composed of a brutal crab-like alien species called the prador.  The Polity and the prador fought a war about a century before the novel begins, which ended with a tentative truce.  What I’ll call the anchor protagonist, the awesomely named Thorvald Spear, was a soldier killed in the war, but at the beginning of the book he is resurrected from a recently discovered mind recording.

It turns out that Spear was killed by a rogue AI named Penny Royal, who also took out a large number of Spear’s fellow soldiers when it went berserk.  Penny Royal is still at large when Spear is revived, and he has a burning desire for revenge, so he sets out to find and destroy it.  His chief lead for finding Penny Royal is a crime boss named Isobel Satomi, who may know the AI’s location because she once visited it to obtain new abilities, which it provided, but at a cost.  As a result of receiving those abilities, Satomi is now slowly transforming into an alien predator.

Yeah, obviously there is a lot going on in this book, and everything I’ve just described is revealed in the opening chapters.  The book has a substantial cast of viewpoint characters: humans, AIs, and aliens.  Penny Royal is at the center of several ongoing threads, its actions affecting many lives.  It turns out it is regarded by the Polity AIs as dangerous, a “potential gigadeath weapon and paradigm-changing intelligence”.

There are a lot of references to events that I assume happened in previous books, particularly on one of the planets, Masada.  Somewhere in the book I realized that I had already read about one of the aliens in a short story by Asher: Softly Spoke the Gabbleduck.  He appears to have written a large number of books and short stories in this universe.

I found Asher’s writing style enticing but at times tedious.  Enticing because he enjoys describing technology, weapons, and space battles in detail, and a lot of it ends up being nerd candy for the mind.  Tedious because he enjoys detail all around, often describing settings and characters in more detail than I really care to know, making his books read more slowly as a result.

Asher also has a tendency to invoke things like quantum computing or fusion power as a means of describing essentially magical technologies.  Much of it is standard space opera fare, such as faster than light travel or artificial gravity.  Some of the rest involves things like thousands of human minds being recorded on a shard of leftover AI material.  This isn’t necessarily hard science fiction, although it remains far harder than typical media science fiction.

But what kept me riveted were the themes he explores.  The story often focuses on the borders between human, AI, and alien minds.  Satomi’s transformation in particular is described in gruesome detail throughout the book.  (It reminded me of the movie, ‘The Fly’, particularly the 1986 version.)  But most of what makes her transformation interesting, along with the similar transformations other characters are going through in the book, is how their minds change throughout the process.  Their deepest desires and instincts start to change in ways that really demonstrate just how contingent our motivations are on our evolutionary background or, in the case of AIs, their engineering.

Not that this book was only an intellectual exercise.  There is a lot of action, including space battles, combat scenes, and AI conflict, not to mention scenes of an alien predator hunting down humans, from the predator’s point of view.

Warning: this book has its share of gore and violence.  I think it’s all in service to the story, but if you find vividly described gore off-putting, this might not be your cup of tea.

This book is the first in a trilogy, so it ended with lots of loose unresolved threads.  I’ve already started the second book, and will probably be reading a lot more of Asher’s books in the coming months.


Steven Pinker: From neurons to consciousness

This lecture from Steven Pinker has been around for a while, but it seems to get at a question a few people have asked me recently: how does the information processing of neurons and synapses lead to conscious perception?  Pinker doesn’t answer this question comprehensively (that would require a vast series of lectures), but he answers facets of it to the extent that it’s possible to see how the rest of the answer might come together.

Be warned: this lecture is very dense.  If the concepts are entirely new to you, you might have to re-watch portions to fully grasp some of the points.  And the visual illusions he shows, unfortunately, don’t seem to come through, but the point they make does.

Of course, people who insist that there has to be something more than just the physical processing won’t be convinced.  But if you’re interested in what mainstream neuroscience knows about this stuff, it’s well worth a watch.


Is consciousness only in the back of the brain?

There’s an interesting debate going on among some neuroscientists about which parts of the brain are involved in subjective experience.  On the one side are Christof Koch, Giulio Tononi, and colleagues, who argue that consciousness exists wholly in the back of the brain, that the frontal structures are not involved.  On the other side are neuroscientists who, while agreeing that the back part of the brain is definitely involved, argue that the role of the front part can’t be dismissed.

To understand this debate, it’s worth doing a quick review of what is known about the functionality of the various components of the brain.  (To keep things simple, I’m going to focus primarily on the neocortex, the wrinkled cover on the top of the brain.  If you’re familiar with neural anatomy, this isn’t to discount the role of sub-cortical structures such as the thalamus or basal ganglia.)

Lobes of the brain
Image credit: BruceBlaus via Wikipedia

The first thing to understand is that the back part of the brain seems to be dedicated to sensory perception, and the front part to planning and initiating movement.  The neocortex is divided into four lobes, which are separated from each other by deep fissures.

The occipital lobe in the back is dedicated to vision.  The front part of the temporal lobe on the side handles hearing.  The back part of the temporal lobe handles visual recognition of objects, faces, etc.  The back part of the parietal lobe handles visual perception of movement.  The middle part of the parietal lobe, along with surrounding regions, appears to be involved in integration of the various senses.  It’s sometimes referred to as the posterior association cortex.

A strip along the front part of the parietal lobe is the somatosensory cortex, each part of which processes touch sensations from a particular body part.  It’s somewhat mirrored by a strip just across the central sulcus fissure along the back of the frontal lobe, which is the primary motor cortex involved in controlling the movement of each body part.

In addition to controlling movement, the frontal lobe also plans movement.  More immediate planning happens in the regions just forward of the primary motor cortex, named appropriately enough, the premotor cortex.

As we move forward, the planning becomes progressively more forward looking and more abstract.  This is the prefrontal cortex, often referred to as the executive center of the brain.  Its primary role is planning, including planning to plan, driving information gathering for future planning, etc.  As part of its function, it acts as a conductor leading the other lobes in imagining various scenarios.

Okay, so back to the debate.

The back-only proponents cite various neurological case studies as evidence, talking about patients who had parts of their frontal lobes damaged or disconnected, but who still showed signs of being conscious.  They also cite cases of patients who had a frontal lobe pathology making them unresponsive, but later recovered the use of their frontal lobes enough to relay that they were conscious the whole time, but simply lacked the will to communicate.

This kind of evidence seems problematic for a number of reasons.  First, in my (admittedly inexpert) opinion, some of the cited cases in the paper seem anecdotal and based on hearsay.  Second, the other cases depend on self report, which is a problem because only patients with at least somewhat functional frontal lobes can self report anything, and the accuracy of such reports hinges on them remembering their former states of mind accurately.  Third, as the authors of the second paper point out, the data has something of a selection bias in it, and some of the cited evidence doesn’t check out.  And finally, again as pointed out in the response paper, the exact nature of frontal lobe damage or disconnect matters, making each case unique.

But I think the actual answer to this question depends on how we define “consciousness.”  If our definition only includes unfocused perception, then the back-only proponents might have a case.  The problem is that we seem to perceive a lot of stuff unconsciously.  And raw perception alone doesn’t quite seem to match most people’s intuition of consciousness.

That intuition also typically requires that the system have attention, emotions, imagination, and introspection.

Frontal lobe expert Elkhonon Goldberg, in his book ‘The New Executive Brain’, sees attention as a frontal lobe function.  He describes the back portions of the brain as creating the stage production of subjective experience, with the audience for the resulting show being in the frontal lobes.  Crucially, it’s this audience that decides what part of the show to focus on, in other words, where to direct attention.

Image credit: OpenStax College via Wikipedia

Emotions are driven by sub-cortical structures such as the amygdala, hypothalamus, anterior cingulate cortex, and others that are sometimes referred to together as the limbic system.  The signals from these structures seem to affect processing in the frontal lobe, but also the temporal lobe and the insular cortex, which exists in the fissure between the temporal and parietal lobes.  In other words, emotional feeling seems to happen in both the front and back of the brain.

Imagination, simulating various action-sensory scenarios, seems to require the frontal lobes, particularly the prefrontal cortex.  Not that the content of imagination takes place in the prefrontal cortex itself.  It actually farms the content generation of these simulations out to the other regions, such that the vision processing centers handle the visual parts of an imagined scenario, the hearing centers handle the auditory parts, etc.  The prefrontal cortex acts as the initiator, conductor, and audience, but not the content generator.  Still, without the prefrontal cortex driving it, it’s hard to see imagination happening in any meaningful way.

And then there’s introspection, also known as self reflection.  Without introspection, we wouldn’t even know we were conscious, so it seems vital for human level consciousness.  Again, the prefrontal cortex seems heavily involved in this feedback function, although as with imagination, it depends on processing in the back portions of the brain, most likely the regions on the border between the temporal and parietal lobes.

Perhaps another way to look at this is to ask, if we somehow completely removed the brain’s frontal regions (and associated basal ganglia and thalamic nuclei), would the remaining back half still be conscious?  It might have the ability to build predictive sensory models, in other words it would have perception, but the modeling wouldn’t be done with any purpose, and it wouldn’t have any mechanism to decide on what portions of those models should be focused on.  Arguably, it would be a mindless modeling system.

But if we removed the rear portion and kept the frontal lobes, we’d have even less functionality since the frontal lobes are crucially dependent on the posterior ones for the content they need to do their work.

And neither of the above isolated systems would have emotions unless we retained the limbic system as part of their supporting structures.

All of which is to say, for what we intuitively think of as consciousness, we need all of the components discussed above.  Subjective experience is the communication between the perception and emotion centers of the brain and the action-oriented centers.  Wholesale removal of any of these centers might conceivably leave us with an information processing framework, but not one most of us would recognize as conscious.

Unless of course I’m missing something?

h/t Keith Frankish and Gregg Caruso for sharing the papers on Twitter.


The success of John Scalzi’s descriptive minimalism

One of the categories here on the blog is Science Fiction, mainly because I read and watch a lot of it.  Occasionally, someone wanting to get into the literary version of the genre asks me for recommendations on good initial books to start with.  My recommendation often depends on the person, but I frequently suggest they try John Scalzi’s work.

Scalzi has a light, witty writing style.  He never seems to be far from outright humor, although his stories usually have an overall serious core.  This allows him to explore issues that other authors struggle to address without alienating all but the most hardcore sci-fi nerds.  Many people who otherwise dislike science fiction do like his books.

Of the writers who have explored posthuman themes, he often takes the least threatening approach.  His breakout novel, Old Man’s War, features old people recruited into a future army where their minds are transferred into new combat bodies.  But he carefully avoids broaching some of the more existential issues associated with that idea.  Likewise, his novel Lock In explores minds in different bodies in a way that minimizes the angst of many of his more (small “c”) conservative readers.

Scalzi makes compromises for accessibility, but they allow him to present ideas to a wide audience.  He’s been rewarded for it: he’s a bestselling author.  And he won the Hugo Award for Best Novel for Redshirts, a book with a setting very similar to Star Trek’s, but one where the ship’s crew actually notices that a lot of people other than the senior officers die on away missions, and decides to do something about it.

His most recent book is The Collapsing Empire, a far future story about an interstellar empire that is about to lose its ability to travel interstellar distances.  I read, enjoyed, and recommend it.  But it’s the first in a new series, so it ends on a cliffhanger, which some readers might find annoying.

But the reason for this post is that some reviewers are apparently finding the book to be too short a read.  As Scalzi pointed out in a recent post, the novel isn’t actually a short one by normal sci-fi standards, weighing in at about 90,000 words.  Why then does it feel short to some readers?  Scalzi himself offers an explanation.

I’m not entirely sure what makes people think The Collapsing Empire is short, but I have a couple guesses. One is that, like most books of mine, it’s heavy on dialogue and light on description, which makes it “read” faster than other books of the same length might be.

I think Scalzi’s exactly right about this.  His books do read fast, and I think a large part of it is because they’re simply easy to read.  It takes a minimal amount of effort to parse them, particularly starting with Redshirts.  I saw someone once comment that his writing makes for an “effortless” experience of story.

It seems to me that a large part of this is because of his “heavy on dialogue and light on description” style.  If you’ve never read his stuff and want to get an idea of this style, check out his novella on Tor: After the Coup.  Scalzi virtually never gives a detailed description of settings, except to note what kind of place they are, such as an office, spaceship bridge, or palace, and if there is anything unusual about them.  And I can’t recall him ever describing a character in detail.

Some readers are put off by this type of minimalism, finding it to be a bit too “white room”, too much of a bare stage.  They prefer more sensory detail to add vividness to the setting or characters.

I can understand that sentiment to some extent, but I personally find detailed descriptions too tedious.  If I’m otherwise enjoying the story, I’ll put up with detailed descriptions (to an extent), but for me it’s something I have to endure, an obstacle I have to climb over.

One of the most often cited pieces of writing advice is “show don’t tell”.  This advice seems to mean different things to different people.  To me it means that, to relay important information to the reader, the best option is story events that reveal it, the second is dialogue or inner monologue, and the least desirable is straight exposition.

But many writers take “show don’t tell” to mean providing detailed descriptions and letting the reader reach their own conclusions.  So instead of simply saying that a workroom is messy, the details of the messiness should be described and the reader allowed to figure out that it’s a mess.  As a reader, I personally find this kind of writing frustratingly tedious.  I tend to glaze over during the description and miss the point the author wanted me to derive.

Apparently a lot of people agree with me.  As I noted above, Scalzi is a bestselling author.  I’ll say I don’t like everything about his writing.  (His character voices could be more distinct, although he’s improving on that front, and his endings often feel a little too pat.)  But his books are always entertaining, and I think, together with the humor, the minimalist style has a lot to do with it.

In many ways, this style is reminiscent of a type of writing we used to see a lot more of.  Classic science fiction authors like Robert Heinlein (whose style Scalzi’s early Old Man’s War books emulated), Isaac Asimov, Jack Vance, and many others were all fairly minimalist on description.

Over time, styles have tended to become more verbose.  I’m not sure why this is, but I suspect technology has something to do with it.  Before the 1980s, most writers used a typewriter.  Iterative revisions, with lots of opportunities to add new descriptive details, often required retyping a lot of text (i.e. work).  Word processing software made revision much easier, and verbose description much more common.

In my view, this has led to a lot of bloated novels, often taking 500 pages to tell a 300 page story.  To be clear, I have no problem with a 500 page book if it tells a 500 page story (Dune and Fellowship of the Ring both told a lot of story with around 500 pages), but many authors today seem to need that many pages to tell the same stories that were once handled with much smaller books.

Certainly tastes vary, but I think Scalzi’s success shows that when given an option for tighter writing, a lot of readers take it.  I wish more authors would take note.


Why fears of an AI apocalypse are misguided

In this Big Think video, Steven Pinker makes a point I’ve made before, that fear of artificial intelligence comes with a deep misunderstanding about the relationship between intelligence and motivation.  Human minds come with survival instincts, programmatic goals hammered out by hundreds of millions of years of evolution.  Artificial intelligences aren’t going to have those goals, at least unless we put them there, and therefore will have no inherent motivation to be anything other than the tools they were designed to be.

Many people concerned about AI (artificial intelligence) quickly concede that worries about it taking over the world out of a sheer desire to dominate are silly.  What they worry about are poorly thought out goals.  What if we design an AI to make paperclips, and it attacks its task too enthusiastically and turns the whole Earth, and everyone on it, into paperclips?

The big hole in this notion is the idea that we’d create such a system, then give it carte blanche to do whatever it wanted in pursuit of its goals, without building in any safety systems or sanity checks.  We don’t give that carte blanche to our current computer systems.  Why would we do it with more intelligent ones?

Perhaps a more valid concern is what motivations some malicious human, or group of humans, might intentionally put in AIs.  If someone designs a weapons system, then giving it goals to dominate and kill the enemy might certainly make sense for them.  And such a goal could easily go awry, a combination of the two concerns above.

But even this concern has a big assumption, that there would only be one AI in the world with the capabilities of the one we’re worried about.  We already live in a world where people create malicious software.  We’ve generally solved that problem by creating more software to protect us from the bad software.  It’s hard to see why we wouldn’t have protective AIs around to keep any errant AIs in line and stop maliciously programmed ones.

None of this is to say that artificial intelligence doesn’t give us another means to potentially destroy ourselves.  It certainly does.  We can add it to the list: nuclear weapons, biological warfare, overpopulation, climate change, and now poorly thought out artificial intelligence.  The main thing to understand about this list is it all amounts to things we might do to ourselves, and that includes AIs.

There are possibilities of other problems with AI, but they’re much further down the road.  Humans might eventually become the pampered centers of vast robotic armies that do all the work, leaving the humans to live out a role as a kind of queen bee, completely isolated from work and each other, their every physical and emotional need attended to.  Such a world might be paradise for those humans, but I think most of us today would ponder it with some unease.

Charles Stross, in his science fiction novel ‘Saturn’s Children’, imagined a scenario where humans went extinct, their reproductive urges completely satisfied by sexbots indistinguishable from real humans but without their emotional needs, leaving a robotic civilization in humanity’s wake.

None of this strikes me as anything we need to worry about in the next few decades.  A bigger problem for our time is the economic disruption that will be caused by increasing levels of automation.  We’re a long way off from robots taking every job, but we can expect waves of disruption as technology progresses.

Of course, we’re already in that situation, and society’s answer so far to the affected workers has been variations of, “Gee, glad I’m not you,” and a general hope that the economy would eventually provide alternate opportunities for those people.  As automation takes over an increasingly larger share of the economy, that answer may become increasingly less viable.  How societies deal with it could turn out to be one of the defining issues of the 21st century.
