Is cosmic inflation settled science?

Ethan Siegel at Starts With a Bang has a post up arguing that the multiverse must exist.  His reasoning has to do with cosmic inflation.  Inflation is the theory that the universe expanded at an exponential rate in the first billionth of a trillionth of a trillionth of a second of the big bang timeline.  After that, space started expanding at a slower rate along the lines we see today.
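For a sense of scale, the standard textbook version of “exponential rate” (my back-of-the-envelope numbers, not Siegel’s) has the scale factor of space growing as:

```latex
a(t) \propto e^{Ht}, \qquad
N = H\,\Delta t \gtrsim 60
\;\Rightarrow\;
\frac{a_{\text{end}}}{a_{\text{start}}} = e^{N} \approx e^{60} \sim 10^{26}
```

In other words, at least sixty e-folds of growth packed into that sliver of a second.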

Inflation was originally developed to explain why the universe appears flat (in terms of spacetime geometry), why the temperature of the early universe was so uniform in every direction, and a few other puzzles such as the absence of magnetic monopoles.  These motivations seem important, and I think they should be remembered when evidence for the theory is being discussed.  In other words, can the problems a theory was invented to solve also be cited as evidence for it?

Anyway, Siegel notes that, due to quantum uncertainty, the particle physics that led to inflation ending would not have been a strictly deterministic event.  Therefore inflation may not have ended everywhere in space at the same time.  Our universe may be a bubble of non-inflation in a sea of inflating space.   And if there is one bubble, there are likely others, other universes, the multiverse.

But all of this seems to hinge on the idea of cosmic inflation, specifically a variant of it called eternal inflation.  And here’s my issue.  The majority of physicists do seem to accept that the theory of inflation is true, but not all of them.  And the evidence for it seems to be a combination of the original problems the theory was developed to solve and circumstantial evidence.  Some notable physicists, including Paul Steinhardt, one of the theory’s original architects, think it actually creates more questions than it answers.

So, should we consider inflation settled science?  This seems like an example of an unsettling trend in physics in recent years of accepting theories that can’t be empirically tested.  Given that a huge swath of similar theories in particle physics were reportedly invalidated by the LHC results (or rather, non-results), it seems like a very questionable strategy, an abandonment of a key aspect of scientific investigation that has been successful for centuries.

Science has credibility for a reason.  That reason is the value it puts on testing every proposition.  Talking about untestable theories as though they’ve been validated seems to put that credibility in jeopardy.  There’s a danger that the public will start to see theoretical physics as metaphysical navel gazing.

There is also a danger, identified by Jim Baggott some years ago, that many scientists may simply not look at alternative theories because they think inflation has solved the issue, or that they may eschew some speculative theories just because they’re not compatible with inflation.  But if inflation is still really just a speculative theory, then they’re giving up on one speculative theory because it’s not compatible with another speculative theory, perhaps cutting off a fruitful line of inquiry.

It may turn out that inflation does eventually pass some test we simply haven’t thought of yet.  Or we may eventually figure out a way to test the idea of bubble universes.  But until we do, talking as though these are settled issues makes my skeptic meter jump through the roof.

Physics has a reputation for being a very hard science.  Sometimes I wonder how warranted that reputation really is, at least in the theoretical branches.

Unless of course I’m missing something?


Recommendation: Prador Moon

I’ve recommended Neal Asher’s books before.  This one is pretty much cut from the same cloth: superhuman AIs, fearsome aliens, exotic future technologies, and epic space battles covered in detail.  In terms of the chronology of his Polity future universe, Prador Moon is the earliest story, although it was written after several other books and stories in that universe.

This book covers the first encounter between humans and the prador, the fierce crab-like species that is featured in so many other stories.  This first encounter immediately leads to war.

The prador have no empathy, either with each other or with other intelligent aliens.  They have no problem eating each other, and discover in the story that they like the taste of human flesh.  They also have no problem experimenting on their own children and, of course, on humans.  They’re pretty much the epitome of the evil space opera aliens.

(Their depiction in later books I’ve read and reviewed is a bit more sympathetic, but only because the specific prador in these books are “modified” from the natural versions portrayed in this book.)

My only real disappointment with the book is that it’s short in comparison with Asher’s other books.  (Although as of this post, it’s priced lower than most of his stuff.)  It felt like the story of one of the chief protagonists, Jebel Krong, could have been explored in far more detail.  He is referenced in other books, many of which I haven’t read, so readers more familiar with Asher’s work may already have an impression of his overall biography.

So if you like space opera, this one might be worth checking out.


A neuroscience showdown on consciousness?

Apparently the Templeton Foundation is interested in seeing progress on consciousness science, and so is contemplating funding studies to test various theories.  The stated idea is to at least winnow the field through “structured adversarial collaborations”.  The first two theories proposed to be tested are Global Workspace Theory (GWT) and Integrated Information Theory (IIT).

GWT posits that consciousness functions as a global workspace, a store of information that numerous specialized brain centers can access.  I’ve seen various versions of this theory, but the one that the study organizers propose to focus on holds that the workspace is held by, or at least coordinated in, the prefrontal cortex at the front of the brain, although consciousness overall is associated with activation of both the prefrontal cortex and the superior parietal lobe at the back of the brain.

IIT posits that consciousness is the integration of differentiated information.  It’s notable for its mathematical nature, including the value Φ (phi); higher values denote more consciousness and lower values less consciousness.  According to the theory, to have experience is to integrate information.  Christof Koch, one of the proponents of this theory, is quoted in the linked article as saying he thinks this integration primarily happens in the back of the brain, and that he would bet that the front of the brain has a low Φ.
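For a concrete feel for what “integration” means, here’s a toy sketch in Python.  The caveats: it uses an early measure from Tononi’s lineage, total correlation (the sum of the parts’ entropies minus the joint entropy), not the vastly more expensive modern Φ algorithm, and the distributions and function names are mine:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def integration(joint):
    """Total correlation: sum of the single-unit entropies minus the
    joint entropy.  An early integration measure, not modern Phi."""
    joint = joint / joint.sum()
    whole = entropy(joint.ravel())
    parts = 0.0
    for axis in range(joint.ndim):
        others = tuple(a for a in range(joint.ndim) if a != axis)
        parts += entropy(joint.sum(axis=others))  # one unit's marginal
    return parts - whole

# Three binary units, joint distribution of shape (2, 2, 2).
independent = np.ones((2, 2, 2)) / 8               # units know nothing of each other
correlated = np.zeros((2, 2, 2))
correlated[0, 0, 0] = correlated[1, 1, 1] = 0.5    # all-or-nothing units

print(integration(independent))  # 0.0 bits
print(integration(correlated))   # 2.0 bits
```

Independent units score zero bits of integration; all-or-nothing correlated units score high.  A real Φ calculation searches over partitions of the system, which is part of why IIT is so hard to test at brain scale.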

While I think this is going to be an interesting process to watch, I doubt it’s going to provide definitive conclusions one way or the other.  Of course, the purpose of the collaboration isn’t to find a definitive solution to consciousness, but to winnow the field.  But even that I think is going to be problematic.

The issue is that, as some of the scientists quoted in the article note, GWT and IIT make different fundamental assumptions about what consciousness is.  Given those different assumptions, I suspect there will be empirical support for both theories.  (They are both supposed to be based on empirical observations in the first place.)

From the article, it sounds like the tests are going to focus on the differences in brain locations, testing whether the prefrontal cortex in the front of the brain is really necessary for consciousness, or whether, as Koch proposes, the parietal lobe in the back is sufficient.  However, even here, philosophical distinctions matter.

By “consciousness” do we only mean sensory consciousness, that is awareness of the information provided by the senses, both exteroceptive (of the outer world) and interoceptive (of the insides of the body)?  If so, then the parietal lobe probably would be sufficient, provided subcortical structures like the brainstem, thalamus, basal ganglia, and others are functional and providing their support roles.

Or do we mean both sensory and motor consciousness?  Motor consciousness here refers to being aware of what can be done and having preferences about various outcomes, that is, having volition and affective feelings (sentience).

If by “consciousness” we only mean the sensory variety, then Koch will likely be right that only the back of the brain is needed.  But for anyone who considers both sensory and motor consciousness essential, an empirical accounting of the sensorium will not be satisfying.

What complicates a discussion like this is that our intuitions of consciousness are not consistent.  We have intuitions about what subjective experience entails, and that usually includes feelings and volition.  But if we see a patient who’s had a prefrontal lobotomy, who is still able to navigate around their world, and respond reflexively and habitually to stimuli, even if they’ve lost the ability to emotionally feel or plan their actions, we’ll still tend to think they’re at least somewhat conscious.

Which brings me to my own personal attitude toward these theories.  I find GWT more grounded and plausible, but as I’ve progressively learned more about the brain, I’ve increasingly come to see most of these theories as fundamentally giving too much credence to the idea of consciousness as some kind of objective force.

Many of these theories seem focused on a concept like the old vital force that biologists once hunted for to explain what animates life.  Today we know there is no vital force.  Vitalism is false.  There is only organic chemistry in motion.  The only “vital force” is the structural organization of molecular systems and the associated processes, and the extremely complex interactions between them.

I currently suspect that we’re eventually going to come to the same conclusion for consciousness, that our subjective experience arises through the complex interactions of cognitive systems in the brain.  Cognitive neuroscience is making steady progress on identifying and describing these systems.  Like molecular biology, we may find that there’s no one simple theory that explains it all, that we have little choice but to get down to the hard work of understanding all of these interactions.

Still, maybe I’m wrong and these “structured adversarial collaborations” will show compelling results.  As Giulio Tononi mentions in a quote in the article, the tests may well teach us useful things about the brain.

What do you think?  Am I too hasty in dismissing consciousness as some kind of objective force?  If so, why?  Are there things about GWT or IIT that make one more likely than the other, or more likely than other theories such as HOT (Higher Order Theory)?


Captain Marvel

Last night I did something I rarely do anymore, saw a movie in the theater right when it was released: Captain Marvel.  As Marvel movies go, it was typical: lots of action, special effects, heartwarming moments, and laughs.  Marvel / Disney really seems to have the formula for general entertainment down.  You don’t come out of these movies with any deep insight into the human condition, but you usually do come out emotionally satisfied (at least as long as you don’t scrutinize the details too carefully).

One of the things I found interesting is how much the Captain Marvel story has changed from when I was a kid reading comics.  In my day, Mar-Vell was an alien guy, a Kree military officer who ended up exiled on Earth.  This original character died in a much publicized story in the early 80s.  I had heard about the character “Ms. Marvel” but I think she came along after my comic reading days.

The movie retains elements of this original story, although the title character is, of course, now a woman, and we learn almost immediately that she is in fact originally from Earth.  But her overall history remains muddled for a good part of the movie, although eventually the pieces fall into place in a satisfying manner.  A version of the original Mar-Vell character even ends up being worked in.

As with all Marvel movies, this one has tie-ins with the others.  Nick Fury, played by a digitally de-aged Samuel L. Jackson, is a major supporting character.  It even has a young Agent Coulson.  We also have a villain show up from another series, but discussing that connection crosses into spoilers.  And, of course, in the now thoroughly expected end-credit scenes, it ties in heavily with the upcoming Avengers: Endgame, as anyone who’s seen the end of Avengers: Infinity War would expect.

So if you enjoy Marvel movies in general, I think you’ll enjoy this one.  I could quibble endlessly with the scientific inaccuracies, but that’s really not the frame of mind to be in to enjoy these movies.  They’re fantasy pure and simple and should be taken in as such.

As a side note: it’s interesting that DC is about to come out with a Shazam movie, featuring the character who, back in the 1940s, was the first superhero named “Captain Marvel.”  You have to wonder about the timing of this release.


Why we’ll know AI is conscious before it will

At Nautilus, Joel Frohlich posits how we’ll know when an AI is conscious.  He starts off by accepting David Chalmers’ concept of a philosophical zombie, but then makes this statement.

But I have a slight problem with Chalmers’ zombies. Zombies are supposed to be capable of asking any question about the nature of experience. It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have.

He then goes on to describe what I’d call a Turing test for consciousness.

This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else. It’s possible, for example, that an AI might learn to ask questions about consciousness simply by reading papers about consciousness. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out from random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. Because we won’t know if it’s ethical to unplug such an AI without knowing if it’s conscious, we better start listening for such questions now.

This seems to include a couple of  major assumptions.

First is the idea that we’ll accidentally make an AI conscious.  I think that is profoundly unlikely.  We’re having a hard enough time making AIs that can successfully navigate around houses or road systems, not to mention ones that can simulate the consequences of real world physical actions.  None of these capabilities are coming without a lot of engineering involved.

The second assumption is that consciousness, like some kind of soul, is a quality a system either has or doesn’t have.  We already have systems that, to some degree, take in information about the world and navigate around in it (self-driving cars, Mars rovers, etc.).  This amounts to a basic form of exteroceptive awareness.  To the extent such systems have internal sensors, they have a primitive form of interoceptive awareness.  In the language of the previous post, these systems already have a sensorium more sophisticated than that of many organisms.

But their motorium, their ability to perform actions, remains largely rule-based, that is, reflexive.  They don’t yet have the capability to simulate multiple courses of action (imagination) and assess the desirability of those courses, although the DeepMind people are working on this capability.
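To make the reflexive / planning distinction concrete, here’s a minimal sketch.  Everything in it, the grid world, the scoring, the names, is invented for illustration; it isn’t real robotics or DeepMind code:

```python
# Toy world: an agent on a grid trying to reach a goal.
GOAL = (3, 4)
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def distance(pos):
    """How far a position is from the goal (Manhattan distance)."""
    return abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1])

def reflexive_act(pos):
    # Rule-based motorium: a fixed stimulus-response rule,
    # applied regardless of consequences.
    return "E"

def planning_act(pos):
    # Planner: simulate each candidate action (imagination),
    # assess the outcome, and pick the most desirable one.
    def simulate(action):
        dx, dy = ACTIONS[action]
        return (pos[0] + dx, pos[1] + dy)
    return min(ACTIONS, key=lambda a: distance(simulate(a)))

print(reflexive_act((0, 0)))  # 'E', no matter the situation
print(planning_act((0, 0)))   # whichever move closes on the goal
```

The reflexive agent applies the same rule in every situation; the planner imagines each move before committing.  Scaling that inner simulation up to rich real-world models is where the engineering difficulty lies.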

The abilities above provide a level of functionality that some might consider conscious, although it’s still missing aspects that others will insist are crucial.  So it might be better described as “proto-conscious.”

For a system to be conscious in the way animals are, it would also have to have a model of self, and care about that self.  This self concern comes naturally to us because having such a concern increases our chances of survival and reproduction.  Organisms that don’t have that instinctive concern tend to quickly be selected out of the gene pool.

But for the AI to ask about its own consciousness, its model of self would need to include another model to monitor aspects of its own internal processing.  In other words, it would need metacognition, introspection, self reflection.  Only once that is in place will it be capable of pondering its own consciousness, and be motivated to do so.

These are not capabilities that are going to come easily or by accident.  There will likely be numerous prototype failures that are near but not quite there.  This means that we’re likely to see more and more sophisticated systems over time that increasingly trigger our intuition of consciousness.  We’ll suspect these systems of being conscious long before they have the capability to wonder about their own consciousness, and we’ll be watching for signs of this kind of self awareness as we try to instill it, like a parent watching for their child’s first successful utterance of a word (or depending on your attitude, Frankenstein looking for the first signs of life in his creation).

Although it’s also worth wondering how prevalent systems with a sense of self will be.  Certainly they will be created in labs, but most of us won’t want cars or robots that care about themselves, at least beyond their usefulness to their owners.  And given all the ethical concerns with full consciousness and the difficulties in accomplishing it, I think the proto-conscious stage is as far as we’ll bring common everyday AI systems, a stage that makes them powerful tools, but keeps them as tools, rather than slaves.

Unless of course I’m missing something?


The sensorium, the motorium, and the planner

I’ve been reading Gerhard Roth’s The Long Evolution of Brains and Minds.  This is a technical and, unfortunately, expensive book, not one aimed at general audiences, but it has a lot of interesting concepts.  A couple that Roth mentions are the terms “sensorium” and “motorium.”

The sensorium refers to the sum total of an organism’s perceptions, to its ability to take in information about its environment and itself.  The motorium, on the other hand, is the sum total of an organism’s abilities to produce action and behavior, to affect both itself and its environment.

What’s interesting about this is that sensoriums and motoriums are ancient, very ancient.  They predate nervous systems, and exist in unicellular organisms.  Often these organisms, such as bacteria, have motoriums that include movement via flagella, whip-like appendages driven by tiny molecular motors that propel them through their environment.

Their sensoriums often include mechanoreception, the ability to sense when an obstacle has been encountered.  This triggers the flagella to briefly reverse direction, followed by a programmed change in direction, and then forward motion again.  This is typically repeated until the organism has cleared the obstacle.

These organisms also often have chemoreception, the ability to sense whether the environment contains noxious or nutritious chemicals, which again can cause a change in motion until the noxious chemicals are decreasing or the nutritious ones increasing.  Some unicellular organisms even have light sensors, which can cause them to turn toward or away from light, depending on which is more adaptive for that species.
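The chemotaxis loop just described is simple enough to sketch in a few lines of Python.  This is a cartoon with invented parameters, not a model of any real organism:

```python
import math
import random

def nutrient(x, y):
    """Nutrient concentration, peaking at the source (50, 50)."""
    return -math.hypot(x - 50, y - 50)

random.seed(1)
x = y = 0.0
heading = random.uniform(0, 2 * math.pi)
last = nutrient(x, y)

for _ in range(2000):
    # Run: swim straight along the current heading.
    x += math.cos(heading)
    y += math.sin(heading)
    now = nutrient(x, y)
    if now < last:
        # Chemoreception says conditions worsened: tumble to a
        # random new heading and try again.
        heading = random.uniform(0, 2 * math.pi)
    last = now

# The biased random walk typically ends far closer to the
# source than where it started.
print(round(math.hypot(x - 50, y - 50), 1))
```

No memory, no map, no nervous system, just sense, compare, and react, yet the behavior looks purposeful.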

Reading about these abilities, which in many ways seem as sophisticated as those from simple multicellular animals, and given the evolutionary success of these organisms, you have to wonder why complex life evolved.  (There are many theories on why it did.)  But it’s interesting that the earliest multicellular organisms, such as sponges, actually seem less responsive to their environment overall than these individual unicellular life forms.

It’s also interesting to consider what is different between the sensoriums of these unicellular and simpler multicellular organisms, and those of more complex animals such as amphibians, mammals, birds, arthropods, and the like.  Obviously the addition of distance senses dramatically increases the size of the sensorium, allowing an organism to react not just to what it directly encounters, but also to what it can see, hear, or smell.

When we remember that some unicellular organisms have light sensors, and that the evolution in animals from light sensor to camera-like eye is a very gradual thing, with no sharp break between merely detecting light, having multiple light sensors to detect the direction of light, and forming visual images, then the addition of distance senses starts to look like a quantitative rather than qualitative difference.

Of course, more complex animals also have far more complex motoriums enabling a larger repertoire of behavior.  A fish can do more than a worm, a lizard more than a fish, a rat more than a lizard, and a primate more than a rat.  This increased repertoire requires more sophisticated motor machinery in the brain.

But that leads to what is probably the most significant difference, the communication between the motorium and the sensorium.  In unicellular organisms, the communication between them seems to be one way.  The sensorium senses and sends signals to the motorium which acts.  This also seems like the pattern for simple animals.

But distance senses and complex behaviors require interaction between the motorium and sensorium.  In essence, this involves higher order functionality in the motorium interrogating the sensorium for both past perceptions and future scenarios.  A good name for this higher order functionality could be “the planner”.  (I considered “imaginarium”, but that sounds too much like an amusement park attraction.)

The motorium planner interacts with both the sensorium and the lower level motorium.  It constantly queries the sensorium for perceptual information related to possible movement scenarios, and the lower level motorium for reflexive responses to each scenario.  Sometimes it does this while directing the lower level motorium in real time, but often it is considering alternatives while the lower motorium engages in habitual action.

[Image: Lobes of the brain.  Credit: BruceBlaus via Wikipedia]

In the human brain, the sensorium seems to primarily exist at the back of the brain.  It includes the temporal, occipital, and parietal lobes.  The top part of the parietal lobe is a region sometimes called the posterior association cortex.  This is the center of the human sensorium.  (There is also a simple sensorium in the midbrain, but it appears to operate somewhat separately from the thalamo-cortical one.)

The motorium exists at multiple levels.  The lower levels are in the brainstem and basal ganglia.  Together these handle reflexive and habitual movement respectively.  The higher order motorium, the planner, is in the frontal lobe cortices, including the prefrontal cortex and premotor cortex.

Neuroscientists often say that consciousness requires activation of both the frontal lobes and posterior association cortex.  (This is sometimes referred to as the “fronto-parietal network.”)  The reason for this is the communication between the motorium planner and the sensorium.  It may be that full phenomenal consciousness requires this interaction, with qualia in effect being the flow of information from the sensorium to the motorium planner.

But there is some controversy over whether the motorium is required for phenomenal awareness.  Many neuroscientists argue that the sensorium by itself constitutes consciousness.  The problem is that a patient with pathologies in their motorium usually can’t communicate their conscious state to anyone, making determining whether they’re conscious somewhat like attempting to see whether the refrigerator light stays on when the door is closed.

Neuroscientist Christof Koch points out that patients with frontal lobe pathologies who later recovered, reported having awareness and perception when their frontal lobes were non-functional but simply not having any will to respond.  But even this leads to the question, were they fully conscious when they laid down those memories?  Or is “consciousness” just a post hoc categorization we’re attempting to apply to a complex state on the border?

So we have the sensorium and the motorium, which predate nervous systems, going back to unicellular life.  What seems to distinguish more advanced animals is the communication between the sensorium and motorium, particularly the higher level motorium planner.  And this might converge with the view that full consciousness requires both the frontal lobes and parietal regions.

Unless of course, I’m missing something?


Probability is relative

At Aeon, Nevin Climenhaga makes some interesting points about probability.  After describing different interpretations of probability, one involving the frequency with which an event will occur, another involving its propensity to occur, and a third involving our confidence it will occur, he describes how, given a set of identical facts, each of these interpretations can lead to different numbers for the probability.  He also describes how each interpretation has its problems.

He then proposes what he calls the “degree of support” interpretation.  This recognizes that probabilities are relative to the information we consider.  That is, when we express a probability of X, we are expressing that probability in relation to some set of data.  If we take away or add new data, the probability will change.

This largely matches my own intuition of probability, that it is always (or almost always) relative to a certain perspective, to a particular vantage point.  If I ask what is the probability of it raining tomorrow, you can give an answer before looking up the weather report based on what you know at that moment.  It might not be a particularly precise probability, but it can still be made based on where you live and  your experience of how often it typically rains there.  Of course, once you look at the weather report, you’ll likely adopt the probabilities it provides (unless the forecast where you live has historically been unreliable).
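As a minimal sketch of that rain example (with made-up numbers), the “relative to the data” point is just Bayes’ rule in action:

```python
# Probability of rain relative to different bodies of data.
prior_rain = 0.20  # base rate where you live (invented)

# Likelihoods for the new data "the forecast says rain" (invented):
p_forecast_given_rain = 0.90  # forecast says rain when it will rain
p_forecast_given_dry = 0.30   # forecast says rain when it won't

# Bayes' rule: P(rain | forecast) =
#   P(forecast | rain) * P(rain) / P(forecast)
p_forecast = (p_forecast_given_rain * prior_rain +
              p_forecast_given_dry * (1 - prior_rain))
posterior = p_forecast_given_rain * prior_rain / p_forecast

print(prior_rain)           # 0.2  -- relative to the base rate alone
print(round(posterior, 2))  # 0.43 -- relative to base rate + forecast
```

Both numbers are correct answers to “what is the probability of rain?”  They just answer it relative to different data.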

(One possible exception to probabilities being relative is quantum physics.  Depending on which interpretation you favor, quantum probabilities may be objective or they may be relative.  In non-deterministic interpretations, they might be objective (although that depends on your interpretation of the interpretation 🙂 ).  But in the deterministic interpretations, it would still be relative to our perspective.)

Every so often I do a post discussing the probability of something, such as the probability of other intelligent aliens in our galaxy.  It’s not unusual for someone to comment that we don’t know enough to estimate any probabilities and that the whole exercise is then pointless.  But if probabilities are relative, this position is wrong.

Of course, my estimated probabilities may be wrong, but if so the correct way to address it is in relation to the data that is being considered.  Or to offer additional data that may change the probability.  Or point out why some (or all) of the data should not be considered when making the estimate.

But if we have a perspective, then we have the ability to estimate probabilities from that perspective.  If our perspective is one of complete ignorance, the probability should reflect it.  Maybe we can only say the probability of something being true is 50%, that is, it has an equal chance of being true or false.  Or if the proposition is one of ten possible outcomes, then it might be more along the lines of 10% probable.

But it doesn’t take much knowledge to shift a probability.  In 1600, a natural philosopher could probably rationally argue that, based on what was then known, the probability of the heliocentric model of the solar system being true was only 50%.  But after Galileo’s blurry telescopic observations a few years later, along with confirmations by other observers, the probability shifted dramatically, so much so that by Newton’s time in the latter part of that century, the probability had shot up much higher.

Does that mean the natural philosopher in 1600 was wrong in his probabilities?  No, because relative to his perspective at the time, those were the probabilities.  He would only have been wrong if he hadn’t used the data available to him in making his estimate, or had used it incorrectly, or had insisted due to ideological commitments that the probability was zero.

So we’re always in a position to estimate probabilities.  We may not be in a position to do so precisely, since that usually requires a lot of data, but the argument that we should never try strikes me as invalid.  The only valid argument is whether or not we’re doing it correctly based on what is then known.

Unless of course I’m missing something?


AI and creativity

Someone asked for my thoughts on an argument by Sean Dorrance Kelly at MIT Technology Review that AI (artificial intelligence) cannot be creative, that creativity will always be a  human endeavor.  Kelly’s main contention appears to be that creativity lies in the eye of the beholder and that humans are unlikely to recognize AI accomplishments as creative.

Now, I think it’s true that AI suffers from a major disadvantage when it comes to artistic creativity.  Art’s value amounts to the emotions it can engender in its audience.  Often generating those emotions requires an insight from the artist into the human condition, an insight that draws heavily on our shared experiences as human beings.  This is one reason why young artists often struggle: their experiences are as yet too limited to produce those insights, or at least too limited to impress older consumers of their art.

Of course an AI has none of these experiences, nor the human drives that make that experience meaningful in the way it is to us.  AI may be able to exploit correlations between elements of other works and how popular those works are, but it is simply not equipped to find a genuine insight into the human condition, at least not for a long time.  In that sense, I agree with Kelly, although his use of the word “always” has an absolutist ring to it I can’t endorse.

But it’s in the realm of games and mathematics that I think Kelly oversells his thesis.  These are areas where insights into the human condition are not necessarily an advantage, although in the case of games they can be.

Much has been written about the achievements of deep-learning systems that are now the best Go players in the world. AlphaGo and its variants have strong claims to having created a whole new way of playing the game. They have taught human experts that opening moves long thought to be ill-conceived can lead to victory. The program plays in a style that experts describe as strange and alien. “They’re how I imagine games from far in the future,” Shi Yue, a top Go player, said of AlphaGo’s play. The algorithm seems to be genuinely creative.

In some important sense it is. Game-playing, though, is different from composing music or writing a novel: in games there is an objective measure of success. We know we have something to learn from AlphaGo because we see it win.

I can’t say I understand this point.  Because AlphaGo’s success is objective, we can’t count what it does in achieving that win as creative?  The fact is AlphaGo found strategies that humans missed.  In some ways, this reminds me of the way evolution often finds solutions to problems, solutions that in retrospect look awfully creative.

In the realm of mathematics, Kelly asserts that, so far, mathematical proofs by AI have not been particularly creative.  Fair enough, although by his own standard that’s a subjective judgment.  But he then focuses on proofs an AI might come up with that humans couldn’t understand, noting that a proof isn’t a proof if you can’t convince a community of mathematicians that it’s correct.

Kelly doesn’t seem to consider the possibility that an AI might develop a proof incomprehensible to humans that nevertheless convinces a community of other AIs, which could demonstrate its correctness by using it to solve problems.  Or the possibility that the “not particularly creative” AIs of today might advance considerably in years to come and produce groundbreaking proofs that human mathematicians can understand and appreciate.  Mathematics is one area where I could see AI eventually having insights a human might never have.

But I think the biggest weakness in Kelly’s thesis is at its heart, his admission that creativity, like beauty, lies in the eye of the beholder, that it only exists subjectively.  In other words, it’s culturally specific, and our conception of what is creative might change in the future, particularly as we become more accustomed to intelligent machines.

This leads him to this line of reasoning:

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.

In other words, machines can’t be creative because we humans won’t recognize them as such, and if humans do start to consider them creative, then we will have denigrated ourselves.  This is just a rationalized bias for human exceptionalism, a self-reinforcing loop that closes off any possibility of considering counter-evidence.

So, in sum, will AI ever be creative?  I think that’s a meaningless question (similar to the question of whether it will ever be conscious).  The real question is will we ever regard them as creative?  The answer is we already do in some contexts (see the AlphaGo quote above), but in others, notably in artistic achievement, it may be a long time before we do.  But asserting we never will seems more like a statement of faith than a reasoned conclusion.  Who knows what AIs in the 22nd century will be capable of?

What do you think?  Is creativity something only humans are capable of?  Is there any fact of the matter on this question?


Synthetic DNA and the necessity of biological mechanisms

Scientists have created synthetic DNA with four extra “letters”:

A couple billion years ago, four molecules danced into the elegant double-helix structure of DNA, which provides the codes for life on our planet. But were these four players really fundamental to the appearance of life — or could others have also given rise to our genetic code?

A new study, published today (Feb. 20) in the journal Science, supports the latter proposition: Scientists have recently molded a new kind of DNA into its elegant double-helix structure and found it had properties that could support life.

But if natural DNA is a short story, this synthetic DNA is a Tolstoy novel.

The researchers crafted the synthetic DNA using four additional molecules, so that the resulting product had a code made up from eight letters rather than four. With the increase in letters, this DNA had a much greater capacity to store information. Scientists called the new DNA “hachimoji” — meaning “eight letters” in Japanese — expanding on the previous work from different groups that had created similar DNA using six letters.
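A rough way to quantify “much greater capacity”: each position in a sequence carries log2(alphabet size) bits, so going from four letters to eight adds a bit per position, and that compounds quickly over a strand.  A back-of-the-envelope sketch:

```python
import math

# Information per position is log2(alphabet size).
print(math.log2(4))  # 2.0 bits per letter for natural DNA
print(math.log2(8))  # 3.0 bits per letter for hachimoji DNA

# Possible sequences for a 100-letter strand:
print(4 ** 100)  # ~1.6e60
print(8 ** 100)  # ~2.0e90, i.e. 2**100 (~1e30) times as many
```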

The team was able to confirm that the synthetic DNA could be transcribed into RNA, and could form the double helix structure.  However, they stopped short of confirming that it could replicate itself:

Still, in order for the Hachimoji DNA to support life, there’s a fifth requirement, Benner said. That is, it needs to be self-sustaining or have the ability to survive on its own. However, the researchers stopped short of investigating this step, in order to prevent the molecule from becoming a biohazard that could one day work its way into the genomes of organisms on Earth.

The article takes Hachimoji DNA as evidence that extraterrestrial DNA could be made of different components than the ones found on Earth.  However, since they stopped short of replication, I’m not sure we have that evidence yet.

One question that often comes up in biology is, how necessary or arbitrary is a particular solution in evolution?  In other words, were there other solutions that life could have taken to solve a particular problem?  This is always a difficult question, because we don’t know whether those alternate solutions arose at some point in the past, but were subsequently selected against, or never arose because the right mutation just never happened.

Biological traits arise because of mutations.  Mutations can be beneficial, in which case they’ll tend to be selected for, or they can be detrimental, leading to them being selected against.  Or they can be neutral, in which case whether they propagate may come down to the random fluctuations of genetic drift.
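To see how selection and drift play out, here’s a toy Wright-Fisher-style simulation.  The population size, selective advantage, and trial counts are all invented for illustration:

```python
import random

def fixation_rate(N=100, s=0.0, trials=2000, seed=42):
    """Fraction of new mutants that spread to the whole population
    of size N, given a selective advantage s (toy model)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        count = 1  # one fresh mutant
        while 0 < count < N:
            # Selection biases each offspring draw toward the mutant...
            p = count * (1 + s) / (count * (1 + s) + (N - count))
            # ...but the finite random draw itself is genetic drift.
            count = sum(rng.random() < p for _ in range(N))
        fixed += (count == N)
    return fixed / trials

print(fixation_rate(s=0.0))   # neutral: ~1/N = 0.01, drift alone
print(fixation_rate(s=0.05))  # beneficial: ~2s = 0.1, far from certain
```

Even a solidly beneficial mutation fixes only around 2s of the time; most new copies are lost to drift before selection can get traction.  Which is part of why “never arose” and “arose but was lost” are so hard to distinguish in retrospect.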

This is complicated by the fact that phenotypic (observed) traits typically arise from the complex interactions of proteins produced by individual genes.  So a beneficial trait might be paired with a detrimental one, with whether the combination propagates depending on how the mix of benefit and detriment works out.

An aspect of any trait or mechanism is how much energy it needs.  The trait or mechanism might be neutral or perhaps even mildly beneficial, but if it’s costly in terms of energy, it’s likely to end up falling on the detrimental side of the ledger.  Although if it’s very beneficial, then even being costly in terms of energy might not matter.  (The brain is a prime example of this latter case, an energy hungry organ that nonetheless earns its keep.)

Energy is what I’m not sure about with these additional letters.  How much chemical energy do they require to be incorporated into the DNA structure?  The answer might not matter for any artificial applications we come up with, such as DNA storage, but if they require more energy to form, that might be why we don’t see them in nature.

Along those lines, I’d be very interested if anyone has seen information on this aspect of the development.  Or, as usual, if I’m missing anything here.


The real issues with colonizing space

At Nautilus, Phil Torres argues that we should think twice about colonizing space.  His reasoning appears to be that as we spread throughout the universe, we will undoubtedly diversify into different species, and that those species may come to distrust each other, and eventually try to destroy each other.

Now, I’ve argued before that most of the urge to colonize space rests on problematic assumptions, at least in the short term.  Setting up an independent self-sustaining ecosystem in a space colony is going to be far more difficult than most colonization advocates realize.  Any colony in the short term is likely to have a crucial supply lifeline back to Earth for the many vital things unavailable in its small ecosystem.  Such colonies wouldn’t last long if human civilization destroyed itself.

And even if we did figure out how to set up an independent ecosystem, it would be far cheaper and less dangerous to colonize Antarctica, the sea floor, or underground.  Yes, living in those locations would be difficult and expensive, but the difficulty and expense are a fraction of what any conceivable space colony might involve.

Of course, eventually the sun will force us to migrate somewhere else, but “eventually” is hundreds of millions of years from now.  And even then, we might find it easier to alter Earth’s orbit than to relocate to another solar system.

But when we start talking on longer time scales, other possibilities improve the chances that some sort of interstellar colonies might be feasible.  We should eventually be able to modify our biology to make the lifeline to Earth’s ecosystem unnecessary, transform ourselves into machine intelligences, or achieve some combination of the two, with machines and biology merging into engineered life.  Any of these could make space a far less inhospitable place.

But worrying that eventually we might turn on each other?  That does seem inevitable, but it also seems inevitable if we just stay on Earth.  The difference is that a distributed humanity (or post-humanity, or whatever) seems far more resilient to stupid wars or movements than a humanity with all its eggs in one basket.

The issue isn’t that we shouldn’t leave that one basket.  It’s finding a way to become independent of it, and taking care of it until we do.

Unless of course I’m missing something?
