Friday, February 28, 2014

Why robots will not be smarter than humans by 2029

In the last few days we've seen a spate of headlines like 2029: the year when robots will have the power to outsmart their makers, all occasioned by an Observer interview with Google's newest director of engineering Ray Kurzweil.

Much as I respect Kurzweil's achievements as an inventor, I think he is profoundly wrong. Of course I can understand why he would like it to be so - he would like to live long enough to see this particular prediction come to pass. But optimism doesn't make for sound predictions. Here are several reasons that robots will not be smarter than humans by 2029.

  • What exactly does as-smart-as-humans mean? Intelligence is very hard to pin down. One thing we do know about intelligence is that it is not one thing that humans or animals have more or less of. Humans have several different kinds of intelligence - all of which combine to make us human. Analytical or logical intelligence of course - the sort that makes you good at IQ tests. But emotional intelligence is just as important, especially (and oddly) for decision making. So is social intelligence - the ability to intuit others' beliefs, and to empathise. 
  • Human intelligence is embodied. As Rolf Pfeifer and Josh Bongard explain in their outstanding book you can't have one without the other. The old Cartesian dualism - the dogma that robot bodies (the hardware) and mind (the software) are distinct and separable - is wrong and deeply unhelpful. We now understand that the hardware and software have to be co-designed. But we really don't understand how to do this - none of our engineering paradigms fit. A whole new approach needs to be invented.
  • As-smart-as-humans probably doesn't mean as-smart-as newborn babies, or even two-year-old infants. Those making the prediction probably mean somehow-comparable-in-intelligence-to adult humans. But an awful lot happens between birth and adulthood. And the Kurzweilians probably also mean as-smart-as-well-educated-humans. But of course this requires both development - a lot of which somehow happens automatically - and a great deal of nurture. Again we are only just beginning to understand the problem, and developmental robotics - if you'll forgive the pun - is still in its infancy.
  • Moore's Law will not help. Building human-equivalent robot intelligence needs far more than just lots of computing power. It will certainly need computing power, but that's not all. It's like saying that all you need to build a cathedral is loads of marble. You certainly do need large quantities of marble - the raw material - but without at least two other things - the design for the cathedral, and the knowhow to realise that design - there will be no cathedral. The same is true for human-equivalent robot intelligence.
  • The hard problem of learning and the even harder problem of consciousness. (I'll concede that a robot as smart as a human doesn't have to be conscious - a philosophers-zombie-bot would do just fine.) But the human ability to learn, then generalise that learning and apply it to completely different problems, is fundamental, and remains an elusive goal for robotics and AI. This capability is known as Artificial General Intelligence, which remains as controversial as it is unsolved.

These are the reasons I can be confident in asserting that robots will not be smarter than humans within 15 years. It's not just that building robots as smart as humans is a very hard problem. We have only recently started to understand how hard it is well enough to know that whole new theories (of intelligence, emergence, embodied cognition and development, for instance) will be needed, as well as new engineering paradigms. Even if we had solved these problems and a present day Noonian Soong had already built a robot with the potential for human equivalent intelligence - there still might not be enough time for it to develop adult-equivalent intelligence by 2029.

That thought leads me to another reason that it's unlikely to happen so soon. There is - to the best of my knowledge - no very-large-scale multidisciplinary research project addressing, in a coordinated way, all of the difficult problems I have outlined here. The irony is that there might have been. The project was called Robot Companions, it made it to the EU FET 10-year Flagship project shortlist but was not funded.

Saturday, February 22, 2014

What does it mean to have giants like Google, Apple and Amazon investing in robotics?

This was the latest question posed to the Robotics by Invitation panel on Robohub. Here, reposted, is my answer.

Judging by the levels of media coverage and frenzied speculation that have followed each acquisition, the short answer to what does it mean is: endless press exposure. I almost wrote ‘priceless exposure’ but then these are companies with very deep pockets; nevertheless the advertising value equivalent must be very high indeed. The coverage really illustrates the fact that these companies have achieved celebrity status. They are the Justin Biebers of the corporate world. Whatever they do, whether it is truly significant or not, is met with punditry and analysis about what it means. A good example is Google’s recent acquisition of British company DeepMind. In other words: large AI company buys small AI company. Large companies buy small companies all the time but mostly they don’t make prime time news. It’s the Bieberisation of the corporate world.
But the question is about robotics, and to address it in more detail I think we need to think about the giants separately.
Take Amazon. We think of Amazon as an Internet company, but the web is just its shop window. Behind that shop window is a huge logistics operation with giant warehouses – Amazon’s distribution centres – so no one should be at all surprised by their acquisition of brilliant warehouse automation company Kiva Systems. Amazon’s recent stunt with the ‘delivery drone’ was, I think, just that – a stunt. Great press. But I wouldn’t be at all surprised to see more acquisitions toward further automation of Amazon’s distribution and delivery chain.
Apple is equally unsurprising. They are a manufacturing company with a justifiable reputation for super high quality products. As an electronics engineer who started his career by taking wirelesses and gramophones apart as a boy, I’m fascinated by the tear-downs that invariably follow each new product release. It’s obvious that each new generation of Apple devices is harder to manufacture than the last. Precision products need precision manufacture and it is no surprise that Apple is investing heavily in the machines needed to make its products.
Google is perhaps the least obvious candidate to invest in robotics. You could of course take the view that a company with more money than God can make whatever acquisitions it likes without needing a reason – that these are vanity acquisitions. But I don’t think that’s the case. I think Google has its eyes on the long term. It is an Internet company and the undisputed ruler of the Internet of Information. But computers are no longer the only things connected to the Internet. Real world devices are increasingly networked – the so-called Internet of Things. I think Google doesn’t want to be usurped by a new super company that emerges as the Google of real-world stuff. It’s not quite sure how the transition to the future Internet of Everything will pan out, but figures that mobile robots – as well as smart environments – will feature heavily in that future. I think Google is right. I think it’s buying into robotics because it wants to be a leader and shape the future of the Internet of Everything.


Do please read the other panelists' answers - all interesting, and different! 

Saturday, December 07, 2013

Soft Robotics in Space

Space robotics is understandably conservative. When the cost of putting a robot on a planet, moon or asteroid runs into billions we need to be sure the technology will work. And with very long project lifetimes - spanning decades from engineering design to on-planet robot exploration - it's a long hard road from the research lab to the real off-world use for new advances in robotics.

This context was very much in mind when I gave a talk on Advanced Robotics for Space at the Appleton Space Conference last week. I used this great opportunity to outline a few examples of new research directions in robotics for the European space community, and suggest how these could benefit future planetary robots. I had just 20 minutes, so I couldn't do much more than show a few video clips. The four new directions I highlighted are:
  1. Soft Robotics: soft actuation and soft sensing
  2. Robots with Internal Models, for self-repair
  3. Self-assembling swarm robots, for adaptive/evolvable morphology
  4. Autonomous 3D collective robot construction
In this post I want to talk about just the first of these: soft robotics, and why I think we should seriously think about soft robotics in space. Soft robotics - as the name implies - is concerned with making robots soft and compliant. It's a new discipline which already has its own journal, but not yet a Wikipedia page. Soft robots would be soft on the inside as well as the outside - so even the fur-covered Paro robot is not a soft robot. Soft robotics research is about developing new soft, smart materials for both actuation and sensing (ideally within the same material). Soft robots would have the huge advantage over conventional stiff metal and plastic robots of being light and, well, soft. For robots designed to interact with humans that's obviously a huge advantage because it makes the robot intrinsically much safer.

Soft robotics research is still at the exploratory stage, so there are not yet preferred materials and approaches. In our lab we are exploring several avenues: one is electroactive polymers (EAPs) for artificial muscles; another is the bio-mimetic 3D printed flexible artificial whisker. Another approach makes use of shape memory alloys to actuate octopus-like limbs: here is a very nice YouTube movie from the EU OCTOPUS project. And perhaps one of the most unlikely but very promising approaches: exploiting fluid-solid phase changes in ground coffee to make a soft gripper: the Jaeger-Lipson coffee balloon gripper.

Let me elaborate a little more on the coffee balloon gripper. It is based on the simple observation that when you buy vacuum-packed ground coffee the pack is completely solid, yet as soon as you cut open the pack and release the vacuum the ground coffee returns to its flowing fluid state. Heinrich Jaeger, Hod Lipson and co-workers put ground coffee into a latex balloon then, by controlling the vacuum via a pump, demonstrated a gripper able to safely pick up and hold more or less any object. Here is a YouTube video showing this remarkable ability.

Almost any planetary exploration robot is likely to need a gripper to pick up or collect rock samples for analysis or collection (for return to Earth). Conventional robot grippers are complex mechanical devices that need very precise control in order to reliably pick up irregularly shaped and sized objects. That control is mechanically and computationally expensive, and problematical because of time delays if it has to be performed remotely from Earth. Something like the Jaeger-Lipson coffee balloon gripper would - I think - provide a much better solution. This soft gripper avoids the hard control and computation because the soft material adapts itself to the thing it is gripping; it's a great example of what we call morphological computation.

The second example I suggested is inspired by work in our lab on bio-inspired touch sensing. Colleagues have developed a device called TACTIP - a soft flexible touch sensor which provides robots (or robot fingers) with very sensitive touch sensing capable of sensing both shape and texture. Importantly the sensing is done inside TACTIP, so the outside surface of the sensor can sustain damage without loss of sensing. Here is a very nice YouTube report on the TACTIP project.

It's easy to see that giving planetary robots touch sensing could be useful, but there's another possibility I outlined: the potential to allow Earth scientists to feel what the robot's sensor is feeling. PhD student Callum Roke and his co-workers developed a system based on TACTIP for what we call remote tele-haptics. Here is a video clip demonstrating the idea:



Imagine being able to run your fingers across the surface of Mars, or directly feel the texture of a piece of asteroid rock without actually being there.

Tuesday, November 26, 2013

Noisy imitation speeds up group learning

Broadly speaking there are two kinds of learning: individual learning and social learning. Individual learning means learning something entirely on your own, without reference to anyone else who might have learned the same thing before. The flip side of individual learning is social learning, which means learning from someone else. We humans are pretty good at both individual and social learning although we very rarely have to truly work something out from first principles. Most of what we learn, we learn from teachers, parents, grandparents and countless others. We learn everything from how to make chicken soup to flying an aeroplane from watching others who already know the recipe (or wrote it down), or have mastered the skill. For modern humans I reckon it’s pretty hard to think of anything we have truly learned, on our own; maybe learning to control our own bodies as babies, leading to crawling and walking are candidates for individual learning (although as babies we are surrounded by others who already know how to walk – would we walk at all if everyone else got around on all fours?). Learning to ride a bicycle is perhaps also one of those things no-one can really teach you – although it would be interesting to compare someone who has never seen a bicycle, or anyone riding one, in their lives with those (most of us) who see others riding bicycles long before climbing on one ourselves.

In robotics we are very interested in both kinds of learning, and methods for programming robots that can learn are well known. A method for individual learning is called reinforcement learning (RL). It’s a laborious process in which the robot tries out lots and lots of actions and gets feedback on whether each action helps or hinders the robot in getting closer to its goal – actions that help/hinder are re/de-inforced so the robot is more/less likely to try them again; it’s a bit like shouting “warm, hot, cold, colder…” in a hide-and-seek game. It’s fair to say that RL in robotics is pretty slow; robots are not good individual learners, but that's because, in general, they have no prior knowledge. As a fair comparison think of how long it would take you to learn how to make fire from first principles if you had no idea that getting something hot may, if you have the right materials and are persistent, create fire, or that rubbing things together can make them hot. Roboticists are also very interested in developing robots that can learn socially, especially by imitation. Robots that you can program by showing them what to do (called programming by demonstration) clearly have a big advantage over robots that have to be explicitly programmed for each new skill.
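To make the "warm, hot, cold" idea concrete, here is a minimal tabular Q-learning sketch - a standard textbook form of RL, not the algorithm used in any work described here. The 4x4 grid, the reward values and all the parameters are invented for illustration:

```python
import random

def q_learning(n_states=16, goal=15, actions=(-4, 4, -1, 1),
               episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning on a toy 4x4 grid (states 0..15, goal at 15).
    The reward is the 'warm/cold' feedback: actions that move the robot
    toward the goal are reinforced, the rest are slightly penalised."""
    random.seed(0)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        for _ in range(100):
            # epsilon-greedy: mostly exploit what's been learned, sometimes explore
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)  # crude grid move, clamped
            r = 1.0 if s2 == goal else -0.01       # 'hot' at the goal, 'cold' elsewhere
            # standard Q-learning update rule
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2
            if s == goal:
                break
    return Q

Q = q_learning()
# after learning, the greedy action from the start state heads toward the goal
best = max((-4, 4, -1, 1), key=lambda a: Q[(0, a)])
```

Even on this tiny problem the robot needs hundreds of episodes of trial and error, which is exactly why RL on real robots is so slow.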

Within the artificial culture project PhD student (now Dr) Mehmet Erbas developed a new way of combining social learning by imitation and individual reinforcement learning, and the paper setting out the method together with results from simulation and real robots has been published in the journal Adaptive Behavior. Let me explain the experiments with real robots, and what we have learned from them.

Here's our experiment. We have two robots - called e-pucks. The inset shows a closeup. Each robot has its own compartment and must - using individual (reinforcement) learning - learn how to navigate from the top right hand corner, to the bottom left hand corner of its compartment. Learning this way is slow, taking hours. But in this experiment the robots also have the ability to learn socially, by watching each other. Every so often one of the robots will stop its individual learning and drive itself out of its own compartment, to the small opening at the bottom left of the other compartment. There it will stop and simply watch the other robot while it is learning, for a few minutes. Using a movement imitation algorithm the watching robot will (socially) learn a fragment of what the other robot is doing, then combine this knowledge into what it is individually learning. The robot then runs back to its own compartment and resumes its individual learning. We call the combination of social and individual learning 'imitation enhanced learning'.
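As a toy illustration of the idea - emphatically not Erbas's actual algorithm - here is a sketch in which individual learning is random trial-and-error on a sequence of moves, interleaved with occasionally copying a fragment of a partner's sequence and keeping it only if it helps. Every name and parameter here is invented:

```python
import random

def fitness(seq, target):
    """Score a move sequence by how many positions match the target path."""
    return sum(a == b for a, b in zip(seq, target))

def learn(target, partner=None, steps=3000, watch_every=10,
          fragment=5, seed=0):
    """Toy 'imitation enhanced learning': individual learning mutates one
    random move at a time; every watch_every steps the learner copies a
    fragment of its partner's sequence, accepting any trial that does
    not make things worse."""
    rng = random.Random(seed)
    moves = "NESW"
    seq = [rng.choice(moves) for _ in target]
    for t in range(steps):
        if partner is not None and t % watch_every == 0:
            i = rng.randrange(len(target) - fragment)  # social learning: copy a fragment
            trial = seq[:i] + list(partner[i:i + fragment]) + seq[i + fragment:]
        else:
            trial = list(seq)                          # individual learning: mutate one move
            j = rng.randrange(len(target))
            trial[j] = rng.choice(moves)
        if fitness(trial, target) >= fitness(seq, target):
            seq = trial
    return seq

target = list("NNEESSEENNEE")
alone = learn(target)                        # individual learning only
with_expert = learn(target, partner=target)  # plus watching an 'expert'
```

The expert's fragments are always correct, so the social trials tend to be accepted and the learner converges with far fewer wasted mutations - the toy analogue of the six-fold speedup in the real experiment.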

In order to test the effectiveness of our new imitation enhanced learning algorithm we first run the experiment with the imitation turned off, so the robots learn only individually. This gives us a baseline for comparison. We then run two experiments with imitation enhanced learning. In the first we wait until one robot has completed its individual learning, so it is an 'expert'; the other robot then learns - using its combination of individual learning and social learning from the expert. Not surprisingly learning this way is faster.

This graph shows individual learning only as the solid black line, and imitation-enhanced learning from an expert as the dashed line. In both cases learning is more or less complete when the graphs transition from vertical to horizontal. We see that individual learning takes around 360 minutes (6 hours). With the benefit of an expert to watch, learning time drops to around 60 minutes.




The second experiment is even more interesting. Here we start the two robots at the same time, so that both are equally inexpert. Now you might think it wouldn't help at all, but remarkably each robot learns faster when it can observe, from time to time, the other inexpert robot, than when learning entirely on its own. As the graph below shows, the speedup isn't as dramatic - but imitation enhanced learning is still faster.

Think of it this way. It's like two novice cooks, neither of whom knows how to make chicken soup. Each is trying to figure it out by trial and error but, from time to time, they can watch each other. Even though it's pretty likely that each will copy some things that lead to worse chicken soup, on average and over time, each hapless cook will learn how to make chicken soup a bit faster than if they were learning entirely alone.



In the paper we analyse what's going on when one robot imitates part of the semi-learned sequence of moves by the other. And here we see something completely unexpected. Because the robots imitate each other imperfectly - when one robot watches another and then tries to copy what it saw, the copy will not be perfect - from time to time, one inexpert robot will miscopy the other inexpert robot and the miscopy, by chance, helps it to learn. To use the chicken soup analogy: it's as if you are spying on the other cook - you try to copy what they're doing but get it wrong and, by accident, end up with better chicken soup.

This is deeply interesting because it suggests that when we learn in groups making mistakes - noisy social learning - can actually speed up learning for each individual and for the group as a whole.
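A tiny simulation makes the mechanism plausible. Suppose an inexpert demonstrator's move sequence is right in most places but wrong at two known positions; a copy made with random errors will occasionally, by luck, fix one of those mistakes and so score better than a perfect copy would. This is an illustrative toy, not the experiment's code, and all numbers are invented:

```python
import random

rng = random.Random(42)
moves = "NESW"
target = [rng.choice(moves) for _ in range(10)]   # the 'right' move sequence

# an inexpert demonstrator: correct except at two known positions
partner = list(target)
for i in (0, 5):
    partner[i] = next(m for m in moves if m != target[i])

def score(seq):
    """Number of positions where a sequence matches the target."""
    return sum(a == b for a, b in zip(seq, target))

def noisy_copy(seq, p=0.2):
    """Copy a sequence, miscopying each symbol with probability p."""
    return [rng.choice(moves) if rng.random() < p else m for m in seq]

# Count how often a *miscopy* scores better than a perfect copy would.
# That only happens when a copying error accidentally fixes one of the
# demonstrator's own mistakes.
perfect_score = score(partner)
lucky = sum(score(noisy_copy(partner)) > perfect_score
            for _ in range(1000))
```

In this toy setup a few percent of noisy copies beat a faithful copy - rare, but over a long run of imitation events those lucky errors add up, which is the intuition behind noisy social learning speeding things up.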

Full reference:
Mehmet D Erbas, Alan FT Winfield, and Larry Bull (2013), Embodied imitation-enhanced reinforcement learning in multi-agent systems, Adaptive Behavior. Published online 29 August 2013. Download pdf (final draft)

Wednesday, October 30, 2013

Ethical Robots: some technical and ethical challenges

Here are the slides of my keynote at last week's excellent EUCog meeting: Social and Ethical Aspects of Cognitive Systems. And the talk itself is here, on YouTube.

I've been talking about robot ethics for several years now, but that's mostly been about how we roboticists must be responsible and mindful of the societal impact of our creations. Two years ago I wrote - in my Very Short Introduction to Robotics - that robots cannot be ethical. Since then I've completely changed my mind*. I now think there is a way of making a robot that is at least minimally ethical. It's a huge technical challenge which, in turn, raises new ethical questions. For instance: if we can build ethical robots, should we? Must we..? Would we have an ethical duty to do so? After all, the alternative would be to build amoral robots. Or, would building ethical robots create a new set of ethical problems? An ethical Pandora's box.




The talk was in three parts.

Part 1: here I outline why and how roboticists must be ethical. This is essentially a recap of previous talks. I start with the societal context: the frustrating reality that even when we meet to discuss robot ethics this can be misinterpreted as scientists fearing a revolt of killer robots. This kind of media reaction is just one part of three linked expectation gaps, in what I characterise as a crisis of expectations. I then outline a few ethical problems in robotics - just as examples. Here I argue it's important to link safe and ethical behaviour - something that I return to later. Then I recap the five draft principles of robotics.

Part 2: here I ask the question: what if we could make ethical robots? I outline new thinking which brings together the idea of robots with internal models, with Dennett's Tower of Generate and Test, as a way of making robots that can predict the consequences of their own actions. I then outline a generic control architecture for robot safety, even in unpredictable environments. The important thing about this approach is that the robot can generate next possible actions, test them in its internal model, and evaluate the safety consequences of each possible action. The unsafe actions are then inhibited - and the robot controller determines which of the remaining safe actions is chosen, using its usual action-selection mechanism. Then I argue that it is surprisingly easy to extend this architecture for ethical behaviour, to allow the robot to predict which of its actions would minimise harm to a human in its environment. This appears to represent an implementation of Asimov's 1st and 3rd laws. I outline the significant technical challenges that would need to be overcome to make this work.
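The generate, test, inhibit loop can be sketched in a few lines. This is my own toy rendering of the idea, not an implementation of the architecture - the one-dimensional world, the hazard and all the function names are invented:

```python
def safe_action_selection(state, actions, simulate, is_unsafe, prefer):
    """Generate candidate actions, test each one in the internal model,
    inhibit those whose predicted consequences are unsafe, then let the
    robot's usual action-selection mechanism choose among the rest."""
    safe = [a for a in actions if not is_unsafe(simulate(state, a))]
    if not safe:
        return 0                     # nothing is safe: fall back to 'stop'
    return prefer(safe)

# Toy one-dimensional world: robot at position x, hazard at x >= 5
simulate = lambda x, a: x + a        # internal model: predict the next state
is_unsafe = lambda x: x >= 5         # evaluate the predicted consequence
prefer = max                         # usual selection policy: move right if allowed

chosen_far = safe_action_selection(3, [-1, 0, 1], simulate, is_unsafe, prefer)
chosen_near = safe_action_selection(4, [-1, 0, 1], simulate, is_unsafe, prefer)
# chosen_far is 1 (x=4 is still safe); chosen_near is 0 (the +1 move is inhibited)
```

The point of the structure is that the safety (or ethics) layer only vetoes predicted-bad actions; it never has to know how the underlying controller chooses among the remaining ones.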

But, assuming such a robot could be built, how ethical would it be? I suggest that with a subset of Asimovian ethics it probably wouldn't satisfy an ethicist or moral philosopher. But, nevertheless - I argue there's a good chance that such a minimally ethical robot could help to increase trust, in the robot, from its users.

Part 3: in the final part of the talk I conclude with some ethical questions. The first is: if we could build an ethical robot, are we ethically compelled to do so? Some argue that we have an ethical duty to try and build moral machines. I agree. But the counter argument, my second ethical question, is are there ethical hazards? Are we opening a kind of ethical Pandora's box, by building robots that might have an implicit claim to rights, or responsibilities? I don't mean that such a robot would ask for rights, but instead that, because it has some moral agency, we might think it should be accorded rights. I conclude that we should try and build ethical robots. The benefits I think far outweigh any ethical hazards, which in any event can, I think, be minimised.


*It was not so much an epiphany, as a slow conversion from sceptic to believer. I have long term collaborator Michael Fisher to thank for doggedly arguing with me that it was worth thinking deeply about how to build ethical robots.

Sunday, October 20, 2013

A Close(ish) Encounter with Voyager 2

It is summer 1985. I'm visiting Caltech with colleague and PhD supervisor Rod Goodman. Rod has just been appointed in the Electrical Engineering Department at Caltech, and I'm still on a high from finishing my PhD in Information Theory. Exciting times.

Rod and I are invited to visit the Jet Propulsion Labs (JPL). It's my second visit to JPL. But it turned into probably the most inspirational afternoon of my life. Let me explain.

After the tour the good folks who were showing us round asked if I would like to meet some of the post-docs in the lab. As one of them put it: the fancy control room with the big wall screens is really for the senators and congressmen - this is where the real work gets done. So, while Rod went off to discuss stuff with his new Faculty colleagues I spent a couple of hours in a back room lab, with a Caltech post-doc working on - as he put it - a summer project. I'm ashamed to say I don't recall his name so I'll call him Josh. Very nice guy, a real southern Californian dude.

Now, at this point, I should explain that there was a real buzz at JPL. Voyager 2, which had already more than met its mission objectives was now on course to Uranus and due to arrive in January 1986. It was clear that there was a significant amount of work in planning for that event. The first ever opportunity to take a close look at the seventh planet.

So, Josh is sitting at a bench and in front of him is a well-used Apple II computer. And behind the Apple II is a small display screen so old that the phosphor is burned. This used to happen with CRT computer screens - it's the reason screen savers were invented. Beside the computer are notebooks and manuals, including prominently a piece of graph paper with a half-completed plot. Josh then starts to explain: one of the cameras on Voyager 2 has (they think) a tiny piece of grit* in the camera turntable - the mechanism that allows the camera to be panned. This space grit means that the turntable is not moving as freely as it should. It's obviously extremely important that when Voyager gets to Uranus they need to be able to point the cameras accurately, so Josh's project is to figure out how much torque is (now) needed to move the camera turntable to any desired position. In other words: re-calibrate the camera's controller.

At this point I stop Josh. Let me get this straight: there's a spacecraft further from Earth, and flying faster, than any manmade object ever, and your summer project is to do experiments with one of its cameras, using your Apple II computer. Josh: yea, that's right.

Josh then explains the process. He constructs a data packet on his Apple II, containing the control commands to address the camera's turntable motor and to instruct the motor to drive the turntable. As soon as he's happy that the data packet is correct, he then sends it - via the RS232 connection at the back of his Apple II - to a JPL computer (which, I guess would be a mainframe). That computer then, in turn, puts Josh's data packet together with others, from other engineers and scientists also working on Voyager 2, after - I assume - carefully validating the correctness of these commands. Then the composite data packet is sent to the Deep Space Network (DSN) to be transmitted, via one of the DSNs big radio telescopes, to Voyager 2.

Then, some time later, the same data packet is received by Voyager 2, decoded and de-constructed and said camera turntable moves a little bit. The camera then sends back to Earth, again via a composite data packet, some feedback from the camera - the number of degrees the turntable moved. So a day or two later, via a mind-bogglingly complex process involving several radio telescopes and some very heavy duty error-correcting codes, the camera-turntable feedback arrives back at Josh's desktop Apple II with the burned-phosphor screen. This is where the graph paper comes in. Josh picks up his pencil and plots another point on his camera-turntable calibration graph. He then repeats the process until the graph is complete. It clearly worked because six months later Voyager 2 produced remarkable images of Uranus and its moons.

This was, without doubt, the most fantastic lab experiment I'd ever seen. From his humble Apple II in Pasadena Josh was doing tests on a camera rig, on a spacecraft, about 1.7 billion miles away. For a Thunderbirds kid, I really was living in the future. And being a space-nerd I already had some idea of the engineering involved in NASA's deep space missions, but that afternoon in 1985 really brought home to me the extraordinary systems engineering that made these missions possible. Given the very long project lifetimes - Voyager 2 was designed in the early 1970s, launched in 1977, and is still returning valuable science today - its engineers had to design for the long haul; missions that would extend over several generations. Systems design like this requires genius, farsightedness and technical risk taking. Engineering that still inspires me today.

*it later transpired that the problem was depleted lubricant, not space grit.

Monday, September 30, 2013

Don't build robots, build robot systems

Why aren't there more intelligent mobile robots in real world applications? It's a good question, and one I'm often asked. The answer I give most often is that it's because we're still looking for that game changing killer app - the robotics equivalent of the spreadsheet for PCs. Sometimes I place the blame on a not-quite-yet-solved technical deficit - like poor sensing, or sensor fusion, or embedded AI; in other words, our intelligent robots are not yet smart enough. Or I might cite a not-fully-developed-capability, like robots not able to cope with unpredictable (i.e. human) environments, or we can't yet assure that our robots are safe, and dependable.

Last week at euRathlon 2013 I realised that these answers are all wrong. Actually that would be giving myself credit where none is due. The answer to the question: why aren't there more intelligent mobile robots in real world applications was pointed out by several of the presenters at the euRathlon 2013 workshop, but most notably by our keynote speaker Shinji Kawatsuma, from the Japan Atomic Energy Agency (JAEA). In an outstanding talk Kawatsuma explained, with disarming frankness, that, although his team had robots, they were poorly prepared to use those robots in the Fukushima Daiichi NPP, because the systems for deployment were not in place. The robots are not enough. Just as important are procedures and protocols for robot deployment in an emergency; mobile infrastructure, including vehicles to bring the robots to the emergency, which are capable - as he vividly explained - of negotiating a road system choked with debris (from the Tsunami) and strained with other traffic (rescue workers and evacuees); integration with other emergency services; and, above all, robot operators trained, practised and confident to guide the robots through whatever hazards they will face in the epicentre of the disaster.

In summing up the lessons learned from robots at Fukushima, Shinji Kawatsuma offered this advice - actually it was more of a heartfelt plea: don't build robots, build robot systems. And, he stressed, those systems must include operator training programmes. It was a powerful message for all of us at the workshop. Intelligent robots are endlessly fascinating machines, with all kinds of difficult design challenges, so it's not surprising that our attention is focussed on the robots themselves. But we need to understand that real world robots are like movie stars - who (despite what they might think) wouldn't be movie stars at all without the supporting cast, camera and sound crews, writers, composers, special effects people and countless other departments that make the film industry. Take the Mars rover Curiosity - an A-list movie star of robotics. Curiosity could not do its job without an extraordinary supporting infrastructure that, firstly, delivered her safely to the surface of Mars and, secondly, allows Curiosity's operators to direct her planetary science exploration.

Curiosity: an A-list movie star robot (NASA/JPL-Caltech/MSSS), with a huge supporting cast of science and technology.
So, to return to my question: why aren't there more intelligent mobile robots in real-world applications? The answer is plain. Without supporting systems - infrastructure and skilled operators, integrated and designed to meet the real-world need - a robot, regardless of how innovative and intelligent it is, will never make the transition from the lab to the real world. Without those systems that robot will remain no more than a talented but undiscovered actor.


Hans-Arthur Marsiske, The use of robots in Fukushima: Shinji Kawatsuma Interview, Heise online, 25 September 2013 (in German).
K Nagatani, S Kiribayashi, Y Okada, K Otake, K Yoshida, S Tadokoro, T Nishimura, T Yoshida, E Koyanagi, M Fukushima and S Kawatsuma, Emergency response to the nuclear accident at the Fukushima Daiichi Nuclear Power Plants using mobile rescue robots, Journal of Field Robotics, 30 (1), 44-63, 2013.

Friday, September 20, 2013

The Triangle of Life: Evolving Robots in Real-time and Real-space

At the excellent European Conference on Artificial Life (ECAL) a couple of weeks ago we presented a paper called The Triangle of Life: Evolving Robots in Real-time and Real-space (this links to the paper in the online proceedings).

As the presenting co-author I gave a one-slide one-minute pitch for the work, and here is that slide.



The paper proposes a new conceptual framework for evolving robots, which we call the Triangle of Life. Let me outline what this means. But first, a quick intro to evolutionary robotics. In my very short introduction to Robotics I wrote:
One of the most fascinating developments in robotics research in the last 20 years is evolutionary robotics. Evolutionary robotics is a new way of designing robots. It uses an automated process based on Darwinian artificial selection to create new robot designs. Selective breeding, as practised in human agriculture to create new improved varieties of crops, or farm animals, is (at least for now) impossible for real robots. Instead, evolutionary robotics makes use of an abstract version of artificial selection in which most of the process occurs within a computer. This abstract process is called a genetic algorithm. In evolutionary robotics we represent the robot that we want to evolve, with an artificial genome. Rather like DNA, our artificial genome contains a sequence of symbols but, unlike DNA, each symbol represents (or ‘codes for’) some part of the robot. In evolutionary robotics we rarely evolve every single part of a robot.

A robot consists of a physical body with an embedded control system - normally a microprocessor running control software. Without that control software the robot just wouldn't do anything - it would be the robot equivalent of a physical body without a mind. In biological evolution bodies and minds co-evolved (although the dynamics of that co-evolutionary process are complex and interesting). But in 20 years or so of evolutionary robotics the vast majority of work has focussed only on evolving the robot's controller. In other words we take a pre-designed robot body, then use the genetic algorithm to discover a good controller for that particular body. There has been little work on body-brain co-evolution, and even less work on evolving real robot bodies. In fact, we can count the number of projects that have evolved new physical robot bodies on the fingers of one hand*. Here is one of those very rare projects: the remarkable Golem project of Hod Lipson and Jordan Pollack.
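To make the fixed-body, evolved-controller idea concrete, here is a toy sketch of that loop: a minimal genetic algorithm evolving the weights of a simple linear controller for a simulated point robot that must reach a light source. Everything here - the simulated body, the sensors, the fitness function and all the parameters - is invented purely for illustration; it is not the method of any particular project.

```python
import math
import random

GENOME_LEN = 6        # weights and biases of a tiny 2-input, 2-output linear controller
POP_SIZE = 30
GENERATIONS = 40
MUTATION_STD = 0.2

def simulate(genome, steps=50):
    """Run the fixed robot body with this controller; fitness is the
    negative final distance to a light source at the origin."""
    x, y, heading = 5.0, 5.0, 0.0
    for _ in range(steps):
        # Two toy 'light sensor' inputs: unit-vector components toward the light.
        dx, dy = -x, -y
        dist = math.hypot(dx, dy) + 1e-9
        s1, s2 = dx / dist, dy / dist
        # Linear controller: the genome codes the weight matrix and biases.
        turn = genome[0] * s1 + genome[1] * s2 + genome[2]
        speed = genome[3] * s1 + genome[4] * s2 + genome[5]
        heading += max(-0.5, min(0.5, turn))      # clamp turn rate
        v = max(0.0, min(0.2, speed))             # clamp forward speed
        x += v * math.cos(heading)
        y += v * math.sin(heading)
    return -math.hypot(x, y)  # closer to the light = higher fitness

def evolve(seed=1):
    """Genetic algorithm: selection, one-point crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.gauss(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(pop, key=simulate, reverse=True)
        parents = ranked[:POP_SIZE // 3]          # truncation selection
        pop = list(parents)                       # parents survive unmodified
        while len(pop) < POP_SIZE:
            mum, dad = rng.sample(parents, 2)
            cut = rng.randrange(GENOME_LEN)       # one-point crossover
            child = mum[:cut] + dad[cut:]
            pop.append([g + rng.gauss(0, MUTATION_STD) for g in child])
    return max(pop, key=simulate)

best = evolve()
# best now holds the highest-fitness genome found; simulate(best) is its fitness.
```

Note that only the controller evolves here: the body (its speed limit, turn rate and sensors) is fixed by hand, which is exactly the limitation the paper argues against.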

This is surprising. When we think of biological evolution and the origin of species, our first thoughts are of the evolution and diversity of body shapes and structures. In the same way, the thing about a robot that immediately captures our attention is its physical body. And bodies are not just vessels for minds. As Rolf Pfeifer and Josh Bongard explain in their terrific book How the Body Shapes the Way We Think, minds depend crucially on bodies. The old dogma of Artificial Intelligence, that we can simply design an artificial brain without any regard to its embodiment, is wrong. True artificial intelligence will only be achieved by co-evolving physical bodies with their artificial minds.

In this paper we are arguing for a radical new approach in which the whole process of co-evolving robot bodies and their controllers takes place in real space and real time. And, as the title makes clear, we are also advocating an open-ended cycle of artificial life, in which every part of the robots' artificial life cycle takes place in real space and real time, from artificial conception, through to artificial birth, artificial infancy and development, then artificial maturity and mating. Of course these words are metaphors: the artificial processes are at best a crude analogue. And let me stress that no-one has yet demonstrated this complete cycle. The examples that we give in the paper, from the EU Symbrion project, are just fragments of the process - not joined up in reality. And the Symbrion example is very constrained by its modular robotics approach, which means that the building blocks of these 'multi-cellular' robot organisms - the 'cells' - are themselves quite chunky robots; we have only 3 cell types and only a handful of cells for evolution to work with. Evolving robots in real space and real time is ferociously hard but, as the paper concludes: Our proposed artificial life system could be used to investigate novel evolutionary processes, not so much to model biological evolution - life as it is - but instead to study life as it could be.
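To give a feel for the shape of such a joined-up life cycle, here is a deliberately crude sketch of the loop - mating, conception, birth, infancy, maturity - as a toy simulation. None of this is taken from the Triangle of Life paper or from Symbrion: the viability test, population cap and every parameter are invented purely for illustration, and in the real proposal each stage would happen in physical robots, not in a few lines of arithmetic.

```python
import random

rng = random.Random(0)
GENOME_LEN = 8
POP_CAP = 12

def conceive(mum, dad):
    """Artificial conception: recombine two parent genomes, then mutate."""
    cut = rng.randrange(GENOME_LEN)
    child = mum[:cut] + dad[cut:]
    return [g + rng.gauss(0, 0.1) for g in child]

def infancy(genome):
    """Artificial infancy: a viability test the 'newborn' must pass before
    joining the mature population. Here it is a toy threshold on the
    genome's mean value - a stand-in for learning to walk, say."""
    return sum(genome) / GENOME_LEN > 0.0

def life_cycle(matings=20):
    # Seed a small population of mature robots (random genomes that pass infancy).
    mature = []
    while len(mature) < POP_CAP // 2:
        g = [rng.gauss(0.2, 1.0) for _ in range(GENOME_LEN)]
        if infancy(g):
            mature.append(g)
    for _ in range(matings):
        mum, dad = rng.sample(mature, 2)   # artificial mating
        child = conceive(mum, dad)         # artificial conception and birth
        if infancy(child):                 # artificial infancy and development
            mature.append(child)           # artificial maturity: may now mate
        if len(mature) > POP_CAP:          # finite arena: the oldest robot 'dies'
            mature.pop(0)
    return mature

population = life_cycle()
```

The point of the sketch is the loop's open-endedness: there is no fitness function and no stopping criterion, only ongoing birth, selection-by-viability and death - which is what distinguishes this artificial-life framing from the conventional genetic algorithm described earlier.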

Full reference:

Eiben AE, Bredeche N, Hoogendoorn M, Stradner J, Timmis J, Tyrrell A, and Winfield A (2013), The Triangle of Life: Evolving Robots in Real-time and Real-space, pp 1056-1063 in Advances in Artificial Life, ECAL 2013, proc. Twelfth European Conference on the Synthesis and Simulation of Living Systems, eds. Liò P, Miglino O, Nicosia G, Nolfi S and Pavone M, MIT Press.


*I was surprised to discover this when searching the literature for a new book chapter I'm co-authoring with Jon Timmis on Evolvable Robot Hardware.

Related blog posts:
New experiments in embodied evolutionary swarm robotics
New video of 20 evolving e-pucks

Monday, August 26, 2013

Memories of Skyrim

From time to time I like to visit Skyrim. I've been going there for about 2 years - I completed the main quest a year or so ago, and since then go back to undertake a side quest or, more often than not, just wander around admiring the scenery. Of course my character is now reasonably levelled-up, so wandering around is not quite as perilous as it used to be; interesting rather than terrifying.

But the thing I've noticed in recent months is that I have memories of being in places in Skyrim that, subjectively, feel completely indistinguishable from memories of being in real places in the real world. In other words the quality and character of the memory - the sense of having really been there - is no different when I recall, say, standing on the porch of Dragonsreach Hall looking south toward the Throat of the World, than when I remember looking down toward the Cumberland basin from the south footpath of the Clifton suspension bridge.

For me this is a new experience. I've been playing (and sometimes coding) video games since that meant batting a pixelated ball from one side of a low-res monochrome screen to the other on home-built 8-bit micros in the 1980s. I have fond memories of playing the first-generation Alone in the Dark (~1993) on a 386 PC with my 5-year-old son - but my memories are of the experience of playing the game with Tom, not of actually being in that haunted house. More recent games on the Xbox 360, with graphics I would have had difficulty imagining 20 years ago, have not had the same effect of creating such compellingly real memories for me.

So what is it about Skyrim that is making these memories feel so real? I think there are several factors. The first is that the scenery is so breathtakingly beautiful, which means you really do want to just stop and stare for a while. Second, and equally important I think, is that this is not some imagined alien landscape. It is decidedly Earth - a cold northerly Earth certainly, but the fells and mountains, the lakes and forests, the grasses and especially the trees, are realised so accurately you can identify whether a tree is a birch or an oak. Third, the landscape is in constant motion - the grasses sway in the wind, the brooks gurgle and splash, and insects flutter. Wait a little longer and you realise the day is passing from afternoon to dusk; the sky turns golden in the sunset, then to night. A star-spangled and moon-crossed night sky then rewards the patient (and the brave - this is a wild and dangerous place), followed by a glorious sunrise. And there is weather too. Rain, which splashes delightfully on the lake, occasional thunderstorms (learn the power and you can call them up!), and snow blizzarding in the mountains. In case you can't go there yourself, watch this YouTube movie: Skyrim - landscapes and scenery.

All of these are, I think, important cues in making the experience, and hence the memory, feel so real. But I think there is another factor, which is that my journeying through Skyrim is part of a narrative which connects places with events, quests and discoveries. So the memorable places are those I have arrived at following some perilous and occasionally epic trail, with multiple trials on the journey. Or they are places I have discovered that offer safety and refuge; places to return to after days questing in the wilderness. Perhaps the depth and intensity of the experienced narrative somehow makes up for the limitations of the sensory experience? With only 2D vision on a flat TV screen and stereo audio, and nothing at all to stimulate the rest of the considerable human sensorium, it seems incredible that such a weakly immersive experience - compared to being in real places - can create subjectively comparable memories.

Since thinking about these memories of Skyrim, I've wondered if they are technically false memories. Am I experiencing so-called False Memory Syndrome? Although FMS is controversial, what is beyond doubt is the extraordinary suggestibility and unreliability of human memory. I was astonished by the unreliability of memory I witnessed at first hand two years ago while on jury service. But false memories are memories of events that never happened yet are strongly believed. On reflection I think my memories of Skyrim do not fall into this category. The events and places occurred in the virtual rather than the real, but they, and my experience of them, really happened.

So if immersive video-game technology has reached the point that it can create, for the gamer, memories of places and events which feel no different to memories of places and events in the real world, is this a bad thing? And, as the technology improves to make the experience more immersive, involving more senses, will we find ourselves unable to distinguish between the virtual and the real - confusing memories of one with the other? I think the answer is no. After all, we each create a personal narrative - the remembered story of our lives - and I don't think it matters whether the events that make up that story happen in the virtual or the real. I think we're just as able to recall the difference between a trip to Rhyl and Rome as between Skegness and Skyrim. And if, with advancing years or just because that's the way we are, we start to confuse these memories, I don't think we're any more likely to confuse the virtual and the real than we are the real and the real. Nor do I think the degree of immersion in the virtual matters as far as memories of being there are concerned*. After all, being in the real world is a fully immersive experience. Even if, and when, we can climb into a full-body immersive gaming rig, like those of Ernest Cline's brilliant Ready Player One, we will still only have an experience equal to that of the real world. So why should those experiences be remembered any differently to those in the real?

Ok. I've persuaded myself there isn't a problem. Time for another trip to Skyrim.


*There is of course another, quite different, concern - to do with how much more addictive the experience will become. Will we neglect the real - and ourselves - like Larry Niven's wireheads?

Thursday, August 22, 2013

The scourge of the RoboTroll is already upon us

When a robot ethics working group met nearly three years ago, one of the things we fretted about was privacy. We were concerned especially about personal companion robots. Whatever their purpose - be it healthcare for the elderly or disabled, or childcare, or simply Robot and Frank style companionship - we debated the inevitable privacy issues of this kind of robot. Two aspects directly impact privacy, since personal companion robots are likely to (1) collect and store data and (2) be networked. We attempted to cover both privacy and security when we drafted our Principles of Robotics.

The second of our principles states: Robots should be designed and operated to comply with existing law, including privacy. Yes, sometimes the obvious does have to be stated.

We also worried about hacking. Our third principle is: Robots are products: as with other products, they should be designed to be safe and secure. And the commentary for this principle has this to say about hacking:
We are aware that the public knows that software and computers can be “hacked” by outsiders, and processes need to be developed to show that robots are secure as far as possible from such attacks.
It seems our concerns were well founded. Last week a report appeared of a networked baby monitor that had apparently been hacked. It was pretty distressing. The hacker was shouting abuse at the baby, chillingly using her name - it seems that he (let's assume it was a he) was able to gain access to the baby monitor's video feed and read the baby's name displayed above her bed. Even more chilling (adding to the parents' horror), the execrable hacker then turned the camera to look at them when they entered their child's room to find out what was going on.

A WiFi IP camera, aka baby monitor, is, I contend, a teleoperated robot. The thing that makes it a robot is the motorised pan-and-tilt mechanism for steering the camera. So, despite password protection, Mr Gilbert's networked robot was hacked. The hack was a clear violation of both privacy and security. This particular robot, and I hazard hundreds of thousands like it, is absolutely not secure from attack. It fails our 2nd and 3rd principles.

The consequences of this particular attack were, fortunately, not much more serious than giving the Gilberts a fright they surely won't forget for some time. But for me one particularly egregious aspect of this robot hack - something that the robot ethics working group did not anticipate - was the verbal abuse hurled at baby Allyson and her parents. It is with profound dismay that I ask the question: is this the first case of RoboTrolling?