Robot masters new skills through trial and error

Related to our various AI discussions, I noticed this news: Robot masters new skills through trial and error — ScienceDaily.

Researchers at the University of California, Berkeley, have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence.

They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks — putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more — without pre-programmed details about its surroundings.

…In the world of artificial intelligence, deep learning programs create “neural nets” in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels. This helps the robot recognize patterns and categories among the data it is receiving. People who use Siri on their iPhones, Google’s speech-to-text program or Google Street View might already have benefited from the significant advances deep learning has provided in speech and vision recognition.

If the robot learning concerns you, if you’re concerned that it, or more likely one of its successors, might bootstrap itself into Ultron or Skynet, then consider this part:

The algorithm controlling BRETT’s learning included a reward function that provided a score based upon how well the robot was doing with the task.

BRETT takes in the scene, including the position of its own arms and hands, as viewed by the camera. The algorithm provides real-time feedback via the score based upon the robot’s movements. Movements that bring the robot closer to completing the task will score higher than those that do not. The score feeds back through the neural net, so the robot can learn which movements are better for the task at hand.
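The score-driven loop described above can be caricatured with a toy hill-climbing sketch. To be clear, this is a deliberate simplification and not BRETT's actual algorithm (which trains deep neural network policies); it only illustrates the feedback structure: try a movement, score it, keep what scores better.

```python
import random

def reward(position, target):
    """Score: higher as the simulated 'arm' gets closer to finishing the task."""
    return -abs(target - position)

def trial_and_error(target, start=0.0, steps=2000, noise=0.5, seed=0):
    """Learn by random perturbation, keeping only movements that score better."""
    rng = random.Random(seed)
    position = start
    best = reward(position, target)
    for _ in range(steps):
        candidate = position + rng.uniform(-noise, noise)  # try a movement
        score = reward(candidate, target)                  # real-time feedback
        if score > best:                                   # keep better movements
            position, best = candidate, score
    return position

print(trial_and_error(target=3.0))  # converges near 3.0
```

Real systems back-propagate the score through a neural net rather than hill-climbing a single number, but the shape of the feedback is the same: movements that score higher get reinforced.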

Obviously, what direction the robot learns in, and what it will do with what it’s learned, will be heavily influenced by its reward system, in other words, by its programming.  (Just as what direction we learn in is heavily influenced by the gene-propagating reward system evolution programmed into us.)

Elon Musk: Killer robots will be here within 5 years

Not sure what to make of this one: ELON MUSK: Killer Robots Will Be Here Within 5 Years – Business Insider.

Elon Musk has been ranting about killer robots again.

Musk posted a comment on the futurology site, warning readers that developments in AI could bring about robots that may autonomously decide that it is sensible to start killing humans.

…Here’s Musk’s deleted comment:

The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast; it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…

According to the article, the comment was deleted a few minutes after it was posted.  (Which will no doubt be the source of new conspiracy theories.)

Here’s an article on the secretive Deepmind initiative that Musk mentions.

I don’t know if Musk’s assessment of how close we might be to general artificial intelligence is accurate, but this was more than just a cautionary note.  It was outright fear mongering.  I think he realized it, which no doubt is why he deleted it so quickly.  (Assuming his account didn’t get hacked or something, but this sounds like it is in line with what he’s been saying in interviews.)

I’m not going to repeat everything I’ve said about AI in the last few days.  I’ll just note that Musk’s familiarity with the putative progress of AI research doesn’t mean he understands how minds work, or what it would take for an artificial one to actually be a threat.

Reaching the stars will require serious out-of-the-box thinking

Sten Odenwald, an astronomer with the National Institute of Aerospace, has an article up at HuffPost that many will find disheartening: The Dismal Future of Interstellar Travel | Dr. Sten Odenwald.

I have been an avid science fiction reader all my life, but as an astronomer for over half my life, the essential paradox of my fantasy world can no longer be maintained. Basically, science tells us that traveling fast enough to make interstellar travel possible requires more money than society will ever be able to invest in the attempt.

Einstein’s theory of special relativity works phenomenally well, with no obvious errors in the domain relevant to space travel. His more comprehensive theory of general relativity also works exceptionally well and offers no workable opportunity to “warp” space in a way that can be technologically applied to space travel without killing the traveler or incinerating the universe. Interstellar travel will be constrained by the reality of special relativity and general relativity, and there is no monkeying with Mother Nature to make science fiction a reality.

…Andreas Hein, an engineer with the Icarus Interstellar Project, developed a rigorous method for forecasting the economics of interstellar travel, only to find that most economically plausible scenarios for a “Daedalus-type” mission would cost upwards of $174 trillion and require nearly 40 years of development and 0.4 percent of the world GDP. This would be for an unmanned, 50-year journey to Barnard’s Star using “fusion drive” technology. It consists of 50,000 tons of fuel and 500 tons of scientific equipment. Top speed: 12 percent of the speed of light.

There’s a fair amount of chest-thumping in the comments decrying Odenwald’s pessimism, comparing him to people in history who claimed we’d never fly, exceed the sound barrier, etc.  Most of these commenters don’t understand how fundamentally different the challenges of interstellar travel really are.  Many of the historical thresholds they reference were engineering challenges, but there was never any serious doubt among scientists that they were fundamentally possible.

The speed of light limit is based on Einstein’s theory of special relativity.  It basically says that nothing with mass can reach the speed of light, much less exceed it.  The reason is that as your speed increases, so does your mass, albeit infinitesimally at normal speeds.  As you get closer to the speed of light, more and more of the energy you’re using to increase your speed actually goes into increasing your mass.  At 99.9999% of the speed of light, almost all of the energy goes to increasing mass.  To actually reach the speed of light would require an infinite amount of energy.  All the energy in the observable universe wouldn’t be enough to push a single proton up to the speed of light.
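The runaway energy cost can be made concrete with the relativistic kinetic energy formula, E = (γ − 1)mc², where γ = 1/√(1 − v²/c²).  Here's a quick sketch for a single proton (the constants are standard physical values; the speed fractions are just illustrative sample points):

```python
import math

C = 299_792_458.0            # speed of light, m/s
PROTON_MASS = 1.67262e-27    # proton rest mass, kg

def kinetic_energy(v_fraction, mass):
    """Relativistic kinetic energy E = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - v_fraction ** 2)
    return (gamma - 1.0) * mass * C ** 2

# Each step closer to c costs wildly more than the last
for frac in (0.5, 0.9, 0.999, 0.999999):
    print(f"{frac:>9} c: {kinetic_energy(frac, PROTON_MASS):.3e} J")
```

Note how the cost of going from 0.999c to 0.999999c dwarfs everything before it; γ, and with it the energy, diverges to infinity as v approaches c.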

To be clear, nothing in nature has been observed to travel faster than light.  Lots of people have tried to find loopholes in the laws of physics.  They have speculated about things like wormholes, Alcubierre drives, quantum entanglement communication, and many other notions.  But these are all profoundly speculative concepts with zero evidence and major theoretical problems.  Many people know about some of the proposed solutions to these problems, but the solutions themselves are also profoundly speculative.  The majority of physicists are far from optimistic that there is any feasible way to travel, or even communicate, faster than light.

Even achieving a reasonable percentage of the speed of light is going to require major breakthroughs in physics if we want to send biological humans.  It’s trivial to espouse confidence that those breakthroughs will come, but counting on them is simply engaging in fantasy rather than scientific speculation.

Paul Gilster at Centauri Dreams, a blog I enthusiastically recommend for anyone interested in interstellar travel, has provided a couple of much more intelligent responses, here and here.  Gilster’s best argument against the economics issue that Odenwald raises is to point out how much of a difference centuries of economic growth might make, which I think is an excellent point.  But it only gets us to robotic missions, with manned missions being orders of magnitude more complicated.  And although his attitude is far more optimistic, his actual final conclusions really aren’t that different from Odenwald’s.

So, can humanity make it to the stars?  I think the answer is yes, but it’s going to require profound out-of-the-box thinking.  Forget Star Wars or Star Trek type universes unless you just want to fantasize.  We need to look at possibilities actually allowed by the laws of physics.  No one alive today really knows what interstellar exploration will look like.  But here is plausible speculation that doesn’t violate the laws of physics and recognizes that economic limitations would be important.

  1. Biological humans will likely never go to the stars, or if they do, it will be as symbolic vanity projects of a society orders of magnitude richer than we are today, and they will be going places pioneered long before by robots.
  2. Interstellar probes will likely be small, possibly microscopic, so that they can be economically accelerated to a significant percentage of the speed of light.  Even launching these small probes will be staggeringly expensive, but only one will be needed per destination.  (Or possibly two, in case one malfunctions.)
  3. Once at a destination, the probe may be programmed to find resources (asteroids, etc.) and bootstrap an infrastructure in order to communicate with home, to create local probes to explore the destination solar system, and possibly to create daughter probes to be sent on to farther stars.
  4. Once a communication link is established with home, information on the destination can be transmitted back.  Depending on the initial communications, new AIs might be transmitted to the destination to enhance the exploration.
  5. As speculated by Odenwald, biological humans will be able to experience the remote locations in virtual reality built using the information transmitted back.
  6. Is there any hope of humans ever routinely going to the stars in person?  Well, that depends on what we mean by “in person”, and our attitude toward the possibility of mind uploading, the plausibility some of us have been debating on another thread.  In the absence of that, it’s hard to see humans having much of a presence in other solar systems.

Learning to work in the universe we have, rather than the one we wish we had, isn’t always easy.  But once you get used to it, the possibilities are exciting.  Odenwald talked about how much more democratic the experience of these locations would be, with all of us essentially experiencing them virtually rather than a select few elite explorers.  There’s a lot to like in that vision.

The future will be strange.  No doubt it will be stranger than we can imagine.  I’m convinced interstellar exploration can happen, but it will likely require us giving up preconceived notions of how we wish it could work.

Termite inspired robots | Machines Like Us

Inspired by termites and their building activities, the TERMES project is working toward developing a swarm construction system in which robots cooperate to build 3D structures much larger than themselves. The current system consists of simple but autonomous mobile robots and specialized passive blocks; the robot is able to manipulate blocks to build tall structures, as well as maneuver over and around the structures it creates. A multi-robot control would allow many simultaneously active robots to cooperate in building structures.

via Termite inspired robots | Machines Like Us.

Little robots given simple rules, designed to result in the construction of complex structures.  I wonder if a good name for this would be ‘planned emergence’.
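That "planned emergence" idea can be illustrated with a toy simulation.  This is not the actual TERMES algorithm, just a hypothetical one-dimensional analogue: each robot follows one purely local rule on each trip (walk the path, drop your block at the first site still below the blueprint, never climb or descend more than one block), and the planned structure emerges from the accumulated trips with no central controller.

```python
TARGET = [1, 2, 3, 2, 1]  # hypothetical blueprint: a small climbable ramp

def robot_trip(heights, target):
    """One robot enters at the left carrying one block, walks the path,
    and drops its block on the first site still below the blueprint."""
    cur = 0  # ground height at the entrance
    for i, h in enumerate(heights):
        if h < target[i]:
            heights[i] += 1   # drop the block from the adjacent site
            return True
        if abs(h - cur) > 1:  # can only climb or descend one block at a time
            return False      # path blocked; leave without placing
        cur = h               # step onto this site and continue
    return False

def build(target):
    heights = [0] * len(target)
    trips = 0
    while heights != target and robot_trip(heights, target):
        trips += 1
    return heights, trips

print(build(TARGET))  # the ramp gets built, one local decision per trip
```

As long as the blueprint itself is climbable (adjacent heights differ by at most one), each robot's single local rule is enough; no robot ever needs a view of the whole structure.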

BBC – Future – Technology – Is it OK to torture or murder a robot?

In the discussion on my post on computer consciousness from the other day, my friend amanimal just provided the following link:

BBC – Future – Technology – Is it OK to torture or murder a robot?.

I think this powerfully corroborates my thesis in that post, but it also illustrates that I might have been too conservative in estimating when people would start believing that the machines were conscious.  You’re generally not going to be concerned about the rights of an entity that you don’t see as being conscious to at least some degree.