Fruit fly fear and AI sentience

I found this study interesting: Do flies have fear (or something like it)? — ScienceDaily.

A fruit fly starts buzzing around food at a picnic, so you wave your hand over the insect and shoo it away. But when the insect flees the scene, is it doing so because it is actually afraid? Using fruit flies to study the basic components of emotion, a new Caltech study reports that a fly’s response to a shadowy overhead stimulus might be analogous to a negative emotional state such as fear — a finding that could one day help us understand the neural circuitry involved in human emotion.

It might seem obvious that, since a fly avoids the fly swatter, it must have some kind of fear.  However, as one of the researchers put it:

“There are two difficulties with taking your own experiences and then saying that maybe these are happening in a fly. First, a fly’s brain is very different from yours, and second, a fly’s evolutionary history is so different from yours that even if you could prove beyond any doubt that flies have emotions, those emotions probably wouldn’t be the same ones that you have,” he says. “For these reasons, in our study, we wanted to take an objective approach.”

It’s a fair point.  Fly fear is probably very different from human fear.  Still, it’s hard not to conclude that flies have some type of fear.  The only way to conclude otherwise would be to narrow the definition of fear so that it only applies to mammalian brains, but that seems excessively speciesist, and anyway, this study did find what appears to be evidence for fly emotions.

“These experiments provide objective evidence that visual stimuli designed to mimic an overhead predator can induce a persistent and scalable internal state of defensive arousal in flies, which can influence their subsequent behavior for minutes after the threat has passed,” Anderson says. “For us, that’s a big step beyond just casually intuiting that a fly fleeing a visual threat must be ‘afraid,’ based on our anthropomorphic assumptions. It suggests that the flies’ response to the threat is richer and more complicated than a robotic-like avoidance reflex.”

What’s interesting about this evidence is that it seems to mean that, to at least some extent, flies are sentient beings.  What makes that interesting is that their brains are relatively simple systems, with about 100,000 neurons and 10 million synapses (compared to the 100 billion neurons and 100 trillion synapses in humans).

Mapping brains to computing capacity is fraught with problems, but unless you assume the resolution of mental processing is smaller than neurons and synapses, the device you’re using to read this post has far more storage and processing complexity than a fruit fly’s brain.  Yet I doubt you see your device as a sentient being.  (You may not see a fly as a sentient being either, but you have to admit it seems more sentient than a smartphone.)

All of which brings me back to the realization that sentience is a matter of having the right software, the right data processing architecture.  We don’t understand that architecture yet.  As simple as a fly brain is, we have little understanding of how it generates the fly’s emotions, although the researchers do hope to change that.

In the future, the researchers say that they plan to combine the new technique with genetically based techniques and imaging of brain activity to identify the neural circuitry that underlies these defensive behaviors. Their end goal is to identify specific populations of neurons in the fruit fly brain that are necessary for emotion primitives — and whether these functions are conserved in higher organisms, such as mice or even humans.

I have to wonder how they plan to do brain imaging on flies.

Anyway, one of the things that is becoming sharper in my mind is the distinction between intelligence and sentience.  Fly sentience is almost certainly not as rich as mouse sentience, much less human sentience.  But while we have computing systems that are intelligent enough to beat humans in narrow domains like chess or Jeopardy, we don’t yet have a system with even the limited sentience of a fly.  (At least none that I know of.  I’m sure the fly biologists and neuroscientists of all types would like to know if we did.)

A lot of sci-fi scenarios have sentience creeping in by accident as machines progressively become more intelligent.  Personally, I doubt we’re going to get it by accident.  We’re probably going to have to understand how it arises in creatures such as flies well before we have much of a chance of generating it in machines.  Fortunately, sentience won’t be required for most of what we want from artificial intelligence.

Posted in Zeitgeist | 45 Comments

A Democrat in a two party system

John Scalzi, as he periodically does, is responding to reader questions, and one was on his attitude toward Republicans.  If you’re familiar with Scalzi, then you can probably guess that his attitude toward Republican politicians isn’t generally positive.  I found a lot to agree with in his post, notably on his social positions such as being pro-choice and supporting same sex marriage.

Although unlike Scalzi, I’m interested in having the social safety net in the US more than “slightly” better.  I personally wouldn’t mind a European “cradle to grave” type of welfare system, with universal healthcare, free (or at least low cost) higher education, and a robust public pension system, among other things.  Yes, it would mean higher taxes, but in evaluating that, we have to take into account how much we each already spend personally on healthcare, education, retirement, etc., and how well that’s currently working for us.

Anyway, the thing that caught my attention with Scalzi’s post was all the effort he seemingly engaged in to avoid labels like “progressive”, “liberal”, or “Democrat”.  Of course, Scalzi isn’t at all unusual in this sentiment, so I’m not picking on him in particular with this post.  It’s a very common impulse.  People will espouse many positions, while making sure everyone knows that they’re not actually a member of the party that most aligns with those positions.

Well, I am a Democrat and a progressive liberal.  I accept those labels.  And, as I described in my midterm election post, I pretty consistently vote Democrat.  I don’t do this because I agree with the Democrats all of the time (for instance, I thought their opposition this week to free trade was misguided), but because I disagree with them far less than I disagree with the Republicans.

I recognize a simple fact.  America is a two party system.  It’s been a two party system for virtually its entire history since the Constitution was ratified.  The only exceptions were a brief period after the War of 1812 (when the Federalist party disintegrated under allegations of disloyalty), and in the tumultuous years leading up to the Civil War (when the Whigs disintegrated as the slavery issue convulsed existing factional lines).  Within a decade or two after each of these events, the US had settled back into its two party structure.

The ideological stances of the two parties change over time.  (In the 19th century, the Republicans were often the progressives and the Democrats the traditionalists.)  In our political system, with power separated among different branches of government, long term political alliances are necessary to accomplish anything.  Governing coalitions are built within the two parties, instead of among multiple parties in the legislature, as often happens in democracies with proportional representation.  This makes these coalitions long term, with the rare generational changes in them referred to as “realigning elections.”

So, you can bemoan the reality of the two party system, but it’s unlikely to change unless we overhaul the Constitution.

If you vote for a third party candidate, you’re almost always giving aid and comfort to the major political party on the other end of the ideological spectrum from that third party.  In other words, from a pure game theory strategic point of view, you’re causing the country to effectively move further away from where you’d like it to go.

Now maybe you’re a centrist, with your disagreements more or less evenly balanced between the two major parties.  If so, then avoiding either label makes sense.  But maybe you simply don’t follow the issues closely enough to know which party you are more aligned with.  If so, I suggest googling around; there are sites that will help you with that.

Most people, if they look at their long held political positions, will find themselves aligning more with one of the major parties.  Once you’ve identified that party, it makes sense to support it.  Yes, the other party will occasionally have more attractive candidates, and the party you align with will occasionally have less attractive candidates, but the reality is that all politicians, once in power, have to take care of their political allies, since they need them to accomplish things.  To ignore this is simply to ignore history.

So again, I’m a progressive liberal and a Democrat.  I’m not so partisan that I don’t think Republicans occasionally have good ideas, or that Democrats occasionally have awful ones, but on balance, with our current politics, the Democrats are by far, at least for me, the lesser evil.

I should note that I’m a progressive within the political spectrum of the United States.  If you transplanted me into another country where the value of science, social equality, reproductive freedoms, and a universal safety net were already part of the bipartisan consensus, then as a capitalist, I might find myself aligning with that country’s fiscal conservatives.  But, as Scalzi noted, viewed from an international perspective, US politics are currently shifted so far to the right, it will probably be a long time before there’s much chance of that happening here.

Posted in Society | 17 Comments

Freedom regained

Originally posted on Scientia Salon:

by Julian Baggini

[This is an edited extract from Freedom Regained: The Possibility of Free Will, University of Chicago Press. Not to be reproduced without permission of the publisher.]

We’ve heard a lot in recent years about how scientists — neuroscientists in particular — have “discovered” that actions in the body and thoughts in the mind can be traced back to events in the brain. In many ways it is puzzling why so many are worried by this. Given what we believe about the brain’s role in consciousness, wouldn’t it be more surprising if nothing was going on in your brain before you made a decision? As the scientist Colin Blakemore asks, “What else could it be that’s making our muscles move if it’s not our brains?” And what else could be making thoughts possible other than neurons firing? No one should pretend that we understand exactly how it…

View original 3,074 more words

Posted in Zeitgeist | 4 Comments

Emotional versus intellectual attributions of consciousness

Click through for full sized version and the red button caption.

via Saturday Morning Breakfast Cereal.

This SMBC reminds me of a concept that I’ve been debating on ways to express, but a brief comment here seems like the opportunity to do so.  We’ve had a lot of discussions about exactly when we might start to consider an AI (artificial intelligence) a fellow being.  This is a philosophical question with no right or wrong answer.

One of the things that’s become apparent to me over time, is that there are two answers to it.  The first is the emotional one, which this strip satirizes.  We come pre-wired to see things as fellow conscious beings.  Many see this anthropomorphizing tendency as the basis for beliefs in ghosts, spirits, demons, gods, and other supernatural entities.  We often intuitively extend it to things like storms, cars, and existing computer systems.  In experiments, people have been reluctant to destroy cute robots after they had played with them for a while, obviously intuitively feeling that they were conscious entities.

I recently listened to an interview with the director of Ex Machina, the new AI movie, who stated that he knew he wouldn’t have a problem convincing audiences that the AI in the movie was sentient.  He knew that emotionally, they’d be predisposed to accepting it as such, at least within the make-believe framework of the movie.  (Having actress Alicia Vikander‘s lovely face on the AI probably helped tremendously.)

Of course, intellectually we know that things like storms, cars, and cute robots aren’t conscious systems.  Even though we feel at times emotionally that they are, we don’t intellectually give ourselves permission to regard them as such.  (At least most of us in the modern developed world don’t.)  I think this intellectual threshold where we give permission is the second answer.  And, as before, it remains a philosophical threshold.

The other thing this strip brilliantly points out, is that we have to be careful of being too guarded with that intellectual permission, too skeptical.  It’s the same intellectual skepticism that once allowed people to consider animals as not being conscious, and to then feel okay with mistreating them.

I think we’re still a long way from having a sentient conscious machine, but as we get closer, we have to be on guard against setting the standard too high.  We don’t want to find ourselves making statements like the one in the last caption.

Posted in Zeitgeist | 19 Comments

NASA has never accidentally sent a probe into the Sun.

Last week, I was having lunch with some friends, which included a number of programmers.  One of them mentioned an old urban myth that I hadn’t heard in several years, which claims that, due to a programming bug (involving a misplaced semicolon), NASA once accidentally sent a probe into the Sun.  I pointed out to my friend how implausible this was.  He didn’t believe me, and we ended up having a conversation about the logistics of solar system navigation, some of which I’m reproducing here.

So, how can I say that NASA accidentally sending a probe into the Sun is implausible?  After all, the Sun is a giant ball of fusion heated plasma, over a million kilometers in diameter, sitting in the center of the solar system.  Why isn’t something like that a major navigation hazard?  And why is the idea of accidentally sending anything into it unlikely?

The answer is that orbital mechanics actually make the Sun the most difficult location in the solar system to reach, even on purpose.  It’s more difficult to send something to the Sun than it is to send it completely out of the solar system, as we’ve done with the Voyager probes.

To understand why, let’s start by remembering that everything in the solar system orbits the Sun: Earth, the other planets, asteroids, comets, etc.  Note that almost all of it is orbiting in the same direction.  And that an orbit is essentially an object moving fast enough to avoid falling into a gravity source, but not fast enough to break away from it.  Slow the orbiting object down, and gravity brings it closer to the gravitational source; speed it up, and the additional speed brings it further from the gravitational source.  Slow it down enough, and it falls into the gravity well; speed it up enough, and it breaks free.
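That last point can be sketched with the vis-viva relation, v² = GM(2/r − 1/a).  As a rough illustration (approximate constants, a circular starting orbit assumed), shaving just a few km/s off an object’s speed at Earth’s distance drops the low point of its orbit well inside Earth’s:

```python
import math

GM_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11              # Earth's average distance from the Sun, m

v_circular = math.sqrt(GM_SUN / AU)   # ~29.8 km/s, circular orbit at 1 AU

# Slow the object by 3 km/s; it now falls onto an ellipse whose high
# point stays at 1 AU but whose low point is much closer to the Sun.
v_new = v_circular - 3000.0
a_new = 1.0 / (2.0 / AU - v_new**2 / GM_SUN)   # new semi-major axis
perihelion = 2.0 * a_new - AU                  # low point, ~0.68 AU
```

A mere 10% reduction in speed brings the orbit’s low point roughly a third of the way in toward the Sun, which is the intuition behind everything that follows.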

Image credit: NASA via Wikicommons

When NASA, or anyone else, sends a probe to another planet, such as Mars, what they’re actually doing is putting the probe into a transfer orbit that intersects the orbit of Mars, hopefully when Mars is in that position.

The way this works is that an interplanetary probe is first launched with enough velocity, or delta-v, to escape Earth’s gravity.  This is a little over 11 kilometers per second.  If the probe is being sent to Mars, it’s launched in the direction that Earth moves in its orbit around the Sun, with enough extra velocity (the exact amount varies) to put it in its own orbit that will take it further away from the Sun and intercept the Martian orbit.  If launched at the correct time, it will meet Mars at that intersection.
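To put a rough number on that extra velocity: treating Earth’s and Mars’s orbits as circular and coplanar, and setting aside the climb out of Earth’s gravity well, the heliocentric speed boost for a minimum-energy Hohmann transfer to Mars works out to about 3 km/s.  A quick sketch:

```python
import math

GM_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11              # Earth's average distance from the Sun, m
R_MARS = 2.279e11          # Mars's average distance from the Sun, m

def vis_viva(r, a):
    """Orbital speed (m/s) at distance r on an orbit with semi-major axis a."""
    return math.sqrt(GM_SUN * (2.0 / r - 1.0 / a))

a_transfer = (AU + R_MARS) / 2.0      # semi-major axis of the transfer ellipse
v_earth = vis_viva(AU, AU)            # Earth's circular speed, ~29.8 km/s
v_depart = vis_viva(AU, a_transfer)   # speed at the transfer orbit's low point
dv = v_depart - v_earth               # extra heliocentric speed, ~2.9 km/s
```

The real mission design is more involved (launch windows, plane changes, the arrival burn), but the scale of the numbers is right.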

Image credit: NASA

When a probe is sent to Venus, it is actually launched in the direction opposite Earth’s orbital direction, with enough delta-v to put it in orbit around the Sun at a slower velocity than Earth, which will bring it in closer to the Sun.  Again, hopefully if all the calculations are correct, this new orbit will intersect Venus’s orbit at the right time to arrive at Venus.

Mars and Venus are the easiest planets to reach, mainly because they are the ones with the closest orbits and with the smallest differences in orbital speed from Earth’s.  The delta-v to get into a transfer orbit to them isn’t too severe, and neither is the delta-v to match speeds with the planet at the intersection.

Image credit: NASA via Wikipedia

Getting out to the outer planets requires considerably more energy, although when sending probes into the outer solar system, Jupiter’s gravity can be used to slingshot the probe to higher speeds.  The Voyager probes used gravity assists from the gas giants to build up enough velocity to escape the solar system.  Voyager 2 used all four of them (Jupiter, Saturn, Uranus, and Neptune) for successive assists, while Voyager 1 used only Jupiter and Saturn, but Voyager 1’s trajectory left it moving faster, and it is currently the furthest human made object.

Mercury is actually a pretty difficult planet to reach because of its orbit.  Considerable delta-v is required to slow a probe’s orbital velocity around the Sun enough to put it in a transfer orbit to Mercury.  In addition, objects move fastest at the lowest point in their orbit.  A probe in an elongated elliptical orbit, at its low point, will be moving much faster than an object in a more circular orbit at the same distance from the Sun, such as Mercury.  This speed difference made getting a probe into orbit around Mercury difficult.
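The same vis-viva arithmetic makes the mismatch concrete.  On a minimum-energy transfer ellipse from Earth (circular, coplanar orbits assumed for this sketch), a probe arrives at Mercury’s distance moving nearly 10 km/s faster than Mercury itself:

```python
import math

GM_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11              # Earth's average distance from the Sun, m
R_MERCURY = 5.79e10        # Mercury's average distance from the Sun, m

def vis_viva(r, a):
    """Orbital speed (m/s) at distance r on an orbit with semi-major axis a."""
    return math.sqrt(GM_SUN * (2.0 / r - 1.0 / a))

a_transfer = (AU + R_MERCURY) / 2.0
v_probe = vis_viva(R_MERCURY, a_transfer)    # probe at perihelion, ~57.5 km/s
v_mercury = vis_viva(R_MERCURY, R_MERCURY)   # Mercury's circular speed, ~47.9 km/s
speed_gap = v_probe - v_mercury              # ~9.6 km/s that must somehow be shed
```

Shedding that much speed with rockets alone is prohibitively expensive, which is why repeated gravity assists were needed to get into orbit there.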

In the mid-70s, NASA sent Mariner 10 to both Venus and Mercury, using Venus’s gravity to slow the probe enough to approach Mercury.  But all it could manage was a periodic close pass as its solar orbit swung by Mercury’s.  It wasn’t until MESSENGER, which took an elaborate multi-orbit path around the Sun, using flybys of Earth, Venus, and Mercury itself to repeatedly slow it down, that we were able to get a probe into orbit around Mercury.

Okay, so what does this all mean for sending a probe to the Sun?  Well, it means you can’t get there by just naively pointing a rocket in the Sun’s direction.  Without enough delta-v, you’ll just end up putting the spacecraft into a different orbit around the Sun.

Earth’s orbital velocity around the Sun is about 30 kilometers per second.  The most straightforward way to send a probe to the Sun would be to launch with enough velocity to escape Earth’s gravity (11 km/s), plus enough in the direction opposite Earth’s orbital motion to kill all of the probe’s solar orbital velocity (30 km/s), allowing it to fall into the Sun.  That would require a total delta-v of over 41 kilometers per second.  Currently no rocket can provide that much velocity.  (Although it might be doable with help from long running electric propulsion systems such as VASIMR, or with solar sails.)
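The comparison with escaping the solar system can be sketched the same way.  Counting heliocentric speeds only (Earth’s own gravity well set aside), killing the ~30 km/s of orbital velocity costs well over twice the delta-v of boosting up to solar escape speed:

```python
import math

GM_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11              # Earth's average distance from the Sun, m

v_earth = math.sqrt(GM_SUN / AU)   # circular orbital speed at 1 AU, ~29.8 km/s

# Falling into the Sun: cancel essentially all of that orbital speed.
dv_to_sun = v_earth                              # ~29.8 km/s

# Escaping the solar system: boost to sqrt(2) times circular speed.
dv_to_escape = (math.sqrt(2.0) - 1.0) * v_earth  # ~12.3 km/s
```

Which is the counterintuitive bottom line: from Earth’s orbit, leaving the solar system entirely is the cheaper trip.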

Of course, similar to the MESSENGER probe, we could probably use various gravity assists to lessen the delta-v requirement.  But the point is that doing so is very complicated.  Huge delta-v requirements plus complexity means that this is not something anyone is going to do accidentally.  At least not until our propulsion technologies get a lot better than they currently are.  Which is why you really don’t have to hit the NASA archives to know that stories like this are myth.

Incidentally, this urban legend demonstrates something about the way that oral myths evolve, even over a few decades.  It likely began with the Mariner 1 launch failure in the early 60s, which reportedly involved a software bug with a misplaced hyphen.  (Although even that isn’t certain.)  Somehow, by the early 80s (which is when I can first recall hearing or reading it), it had mutated through various embellishments into the Sun version.

If you’re interested in more details on transfer orbits and the like, I highly recommend NASA’s write up on it.

Posted in Space | 17 Comments

A darker vision of the post-singularity: The Quantum Thief trilogy

I just finished reading Hannu Rajaniemi’s Quantum Thief trilogy: ‘The Quantum Thief‘, ‘The Fractal Prince‘, and ‘The Causal Angel‘.  (The official name of the trilogy is the Jean le Flambeur series, named after one of the chief protagonists, but everyone seems to call it the Quantum Thief trilogy instead.)

Most visions of society after the singularity (or something like the singularity) tend to be utopias, or near utopias.  Rajaniemi’s vision is far darker and more mixed, with some aspects being nightmarish.  Of course, from a story perspective, that actually makes for more fertile ground.

In the Quantum Thief universe, a posthuman civilization exists throughout the solar system, spread between numerous societies.  Mind uploading developed prior to AGI (artificial general intelligence).  Raw AGI itself has proven to be extremely dangerous, and AGIs are referred to as “dragons.”  They are rarely released due to their ravenous and uncontrollable nature.  (Rajaniemi, in an interview, stated that this is not necessarily how he expects things would be, but that it was a useful plot device to restrain the story to being between human-like agents.)

The most powerful society in the inner solar system is the Sobornost, a series of collectives with billions of minds, referred to as “gogols” in the story.  Gogols are either uploaded human minds or minds created in the template of human minds.  Just about every piece of Sobornost technology involves the use of legions of gogol slaves, implying that for much of humanity, existence has been reduced to the role of mind slaves.  The Sobornost is divided between physical substrates called guberniyas, with each one ruled by a “founder”, presumably one of the people who developed the Sobornost system.

The outer solar system is ruled by a series of collectives referred to as “Zokus”, which are initially presented as gaming circles.  But as the story progresses, it becomes apparent that Zoku society refers to just about any endeavor as a “game”, with an overarching governing authority referred to as the “Great Game” Zoku.   Zoku society appears to be far more appealing than the Sobornost, and they are essentially the Sobornost’s primary power rival.

But there are several other societies.  One is the Oubliette, a city on Mars that is on huge mechanical legs so that it can keep moving to avoid nanobot infections infesting the Martian surface.  The Oubliette society is a posthuman one, but one where everyone lives in human form, except when doing a tour of duty running the city’s machinery.  Every citizen also has control of just how much information they share in social interactions with others.

Another society is the city of Sirr, the last outpost of humanity on Earth, situated somewhere in the Middle East region, and constantly having to protect itself from Earth’s version of nanobot infections, referred to as wildcode.  Another is Oort, a society of humans living in the Oort cloud, the vast region of cometary bodies extending up to two light years from the Sun.  There are also references to societies of bear like creatures (presumably posthumans who adopted that shape for some reason) in the asteroid belt, as well as various mercenary clans.

The story begins with the main protagonist, a thief named Jean le Flambeur, serving time in the “Dilemma Prison”, a virtual prison where prisoners are forced to repeatedly play the prisoner’s dilemma, a game theory exercise that, presumably, is meant to teach them that tit-for-tat is the most successful strategy, reforming them for society.  le Flambeur is rescued from the prison by the story’s other protagonist, Mieli, a warrior from Oort working for the Sobornost.  le Flambeur’s thief skills are needed for a mysterious mission.

The stories go on to explore a number of concepts, such as how reliable our memories can be, the concept of self, whether or not and to what degree we have free will, and what it means to be human.  There are a lot of interesting ideas in these books, and plenty of action to keep things exciting.  And much of what is presented at the beginning is not how it appears.  If you like posthuman science fiction, I highly recommend them, with a qualification.

That qualification is that the prose is very dense.  Rajaniemi introduces new concepts without explanation and counts on the reader picking up their meaning through context, which I for one wasn’t always able to do.  I actually found the first and last books manageable in this regard, with an occasional Wikipedia break helping, but the middle book was a tough slog, with many concepts given names from Islamic spirituality and/or Arabian mythology, whose meanings often weren’t readily available from quick Google searches.

In some cases, I suspected the dense prose masked scientific or plot weaknesses.  And some concepts never seem to get an adequate explanation, with several interpretations of the narrative possible.  I’m generally not a fan of this type of writing, and probably wouldn’t have tolerated it if Rajaniemi’s story and universe hadn’t been so compelling.

But they are, and that’s why, despite its flaws (which some might see as strengths), I still recommend these books.

Posted in Science Fiction | 4 Comments

The definition of the science fiction genre

Charlie Stross has an interesting post up on the distinction between science fiction and fantasy.  He looks at a question I haven’t thought about in a while:

Not too long ago, someone in the twittersphere asked, “Whatever happened to psi? It used to be all the rage in science fiction.”

The answer, essentially, was that John Campbell died and nobody believes in that crap any more. And anyway, it’s fantasy.

Now here’s the thing. If you accept Clarke’s Third Law, which boils down in the common wisdom to “Any sufficiently advanced technology is indistinguishable from magic,” you kind of have to ask, “Do we believe psi is crap because it really is crap, or do we just not have the technology to detect or manipulate it?”

Yes, of course, that way lies madness. But with quantum physicists messing around with teleportation, and computer engineers inching toward a technological form of telepathy, are we really that far off from making at least part of the Campbellian weirdness a reality?

And if that’s the case, where did the psi go? It’s no more improbable than the ftl drive that’s a staple of the space-opera canon. Why is ftl still a thing, but psi is now subsumed under “Magic, Fantasy, Tropes of”?

I’ve written before that I think the definition of science fiction proper is speculative fiction that can’t be ruled out as impossible, but I admitted that there isn’t much actually labeled as science fiction that strictly meets that definition.  Most science fiction is actually a blend of fantasy and actual scientific speculation.

But reading Charlie’s post, something “clicked” for me.  I still think the definition of “hard” science fiction is what I said above, but the definition of the science fiction genre is different.

It seems to me that the science fiction genre is fiction that the general population, or at least the population of science fiction fans, thinks can’t be ruled out as impossible.

I like this definition, because it recognizes that the genre will change.  (Strictly speaking, of course, even hard science fiction changes as science progresses.)  What counted as science fiction in the 70s, such as stories involving psi or ESP (extrasensory perception), doesn’t count today, because most of the population, at least the population that reads science fiction literature, doesn’t see that stuff as scientific any longer.

It also explains why FTL (faster than light) travel remains a staple of science fiction space opera, despite the fact that most scientists see it, or at least most conceptions of it, as fantasy.  The general population hasn’t come around yet on FTL, so it remains science fiction.

Of course, it depends on exactly which audience we’re talking about, and there is a spectrum between diamond hard science fiction (Clarke, Egan, etc) and Star Wars novels.  But all of it comes down to what that audience accepts as still possible.  Most Star Wars readers probably still see what they’re reading as science fiction, even though most hard science fiction fans see it as complete fantasy.  Meanwhile, few readers of Tolkien, Martin, or Howard think they’re reading anything plausible.

Many young readers start out with things like Star Wars or Star Trek novels.  In many ways, these types of books serve as a type of “gateway drug.”  Some graduate to harder science fiction literature.  As they do, their conception of what is scientific probably becomes more rigorous.

This may seem like a minor realization, but for some reason I’m inordinately pleased with it right now.  Of course, I may decide that this definition is wrong later.  I’d be interested in seeing what you guys think.

Posted in Science Fiction | 50 Comments