Meet the robots that can reproduce, learn and evolve all by themselves

[Illustration: Ruby Fresson]

ROBOTS have come a long way in the century since Czech writer Karel Čapek used the word to describe artificial automata. Once largely confined to factories, they are now found everywhere from the military and medicine to education and underground rescue. People have created robots that can make art, plant trees, ride skateboards and explore the ocean’s depths. There seems no end to the variety of tasks we can design a machine to do.

But what if we don’t know exactly what our robot needs to be capable of? We might want it to clean up a nuclear accident where it is unsafe to send humans, explore an unmapped asteroid or terraform a distant planet, for example. We could simply design it to meet any challenges we think it might face and then keep our fingers crossed. There is a better alternative, though: take a lesson from evolution and create robots that can adapt to their environment. It sounds far-fetched, but that is exactly what my colleagues and I are working on in a project called Autonomous Robot Evolution (ARE).

We aren’t there yet, but we have already created robots that can “mate” and “reproduce” to generate new designs that are built autonomously. What’s more, using the evolutionary mechanisms of variation and survival of the fittest, over generations, these robots can optimise their design. If successful, this would be a way to produce robots that can adapt to difficult, dynamic environments without direct human oversight. It is a project with huge potential – but not without major challenges and ethical implications.

The notion of using evolutionary principles to design objects can be traced back to the early 1960s and the origins of evolutionary computation, when a group of German engineering students invented the first “evolution strategy”. Their novel algorithm generated a range of designs and then selected a set of them, biased towards high-performing ones, to build upon in subsequent iterations. When applied to a real-world engineering problem, this not only optimised the design of a nozzle but also generated a final product that was so unintuitive that the process could be described as creative – one of the most prized properties of biological evolution.
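To see how such a strategy works in miniature, here is a sketch in Python of a simple evolution loop: generate variants of a design, score them, and keep the best as the parent for the next iteration. The objective function and parameters are invented for illustration; they stand in for a real engineering evaluation, such as measuring a nozzle's performance, and are not the students' original method.

```python
import random

def evaluate(design):
    """Stand-in objective: score a design (a list of numbers).
    A real application would measure, say, nozzle performance."""
    return -sum((x - 0.5) ** 2 for x in design)

def evolve(n_params=8, offspring=10, generations=50, step=0.1):
    # Start from a random parent design.
    parent = [random.random() for _ in range(n_params)]
    best_score = evaluate(parent)
    for _ in range(generations):
        # Generate a range of designs by mutating the parent.
        children = [[x + random.gauss(0, step) for x in parent]
                    for _ in range(offspring)]
        # Selection is biased towards high performers: the best
        # design found so far seeds the next iteration.
        for child in children:
            score = evaluate(child)
            if score > best_score:
                parent, best_score = child, score
    return parent, best_score

print(evolve())
```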

Since then, there has been a step change in our ability to apply artificial evolution to designing objects. The enormous increase in computational power allows computers to churn through generations of designs in short order and to generate high-fidelity simulations of real environments in which to test them. Meanwhile, advances in evolutionary computation theory have resulted in better ways to represent the information from which designs are built – their virtual DNA – and to manipulate this when generating “offspring” so that it mirrors processes found in nature. These include mutation and DNA recombination, which creates genetic diversity by breaking stretches of DNA and recombining them in novel ways. Examples of evolutionary design in practice now range from tables to new molecules with desired functions. As far back as 2006, NASA sent a satellite into space with a communication antenna created via artificial evolution.
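As a rough illustration of those two operators, the sketch below applies one-point recombination and random mutation to "genomes" represented as plain lists of numbers. Real encodings of virtual DNA are far richer; the representation and rates here are assumptions made for clarity.

```python
import random

def recombine(genome_a, genome_b):
    """One-point crossover: break both genomes at a random point
    and splice the pieces together in a novel combination."""
    point = random.randint(1, len(genome_a) - 1)
    return genome_a[:point] + genome_b[point:]

def mutate(genome, rate=0.1, scale=0.05):
    """Perturb each gene with a small probability."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

offspring = mutate(recombine([0.1] * 6, [0.9] * 6))
print(offspring)
```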

Yet designing robots brings a challenging new dimension to the field: as well as bodies, they require brains to interpret information from their environments and to translate this into a desired behaviour. Much of the early work in evolutionary robotics addressed this problem by simply adapting a brain to a newly evolved body design. But intelligence isn’t simply a property of the brain; it also lies in the body. And in the 21st century, there has been a shift to simultaneously evolve both the robot’s body and brain. Although this complicates the evolutionary process, there is a large pay-off: devolving some intelligence to the body can reduce the need for complexity in the brain.

In 2000, Hod Lipson and Jordan Pollack at Brandeis University in Massachusetts used this approach to evolve small robots capable of forward motion, which self-built using automated assembly techniques. Since then, rapid advances in materials, simulation and 3D-printing methods have vastly increased the potential range of robot designs. A decade later, Lipson and Jonathan Hiller, then both at Cornell University, New York, used the same principles to evolve self-building “soft robots”, machines made from compliant materials rather than rigid ones. Another milestone came in 2020, when Josh Bongard at the University of Vermont and his colleagues used a similar method to design living robots, or xenobots, made from frog cells.

Although each of these examples represents a noteworthy landmark in evolutionary robotics, they all have two shortcomings. First, none of these robots have sensors, so although they are capable of directed motion, they lack the ability to acquire information from their environment and use it to adapt their behaviour. Second, the robots are evolved in simulation and then manufactured post-evolution. This introduces a “reality gap”, a phenomenon infamous in robotics that results from inevitable differences between a simulation and reality. In other words, regardless of the fidelity of the simulator, the behaviour of physical robots is different from that of their simulated counterparts.

An obvious way around this second shortcoming is to skip the simulation stage and build and evaluate new evolved designs directly in hardware. This was first demonstrated by researchers at ETH Zurich, Switzerland, in 2015. They used a “mother robot” equipped with an evolutionary algorithm to autonomously design and fabricate offspring. These were then tested, with only those achieving the best results being selected as designs to feed into the next generation. In 2016, Guszti Eiben at Free University Amsterdam, the Netherlands, and his team described a different approach. They used physical robots programmed with rules allowing them to “meet and mate”, triggering a production process to create a new “robot baby”.

The triangle of life

These developments laid the ground for ARE, a project that envisions a fully autonomous system through which robots equipped with sensors can be manufactured, adapt and evolve in the real world. Launched in 2018 and funded by the UK’s Engineering and Physical Sciences Research Council, it is a collaboration between Edinburgh Napier University – where I lead the Nature-Inspired Intelligent Systems group, which develops algorithms based on biological evolution to discover novel solutions to challenging problems – the University of York, the University of the West of England, Bristol, the University of Sunderland and Free University Amsterdam.

In ARE, we use an artificial genetic code to define a robot’s body and brain. Evolution takes place in a facility dubbed the EvoSphere, by putting each robot through a three-phase cycle – fabrication, learning and testing – that we call the “triangle of life”. In the first phase, new evolved designs are built autonomously. A 3D printer initially creates a plastic skeleton. Then, an automated assembly arm selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, a Raspberry Pi computer is added to act as a brain. It is wired up to the sensors and motors, and software representing the evolved brain is downloaded.
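The details of the genetic code are beyond the scope of this article, but a hypothetical sketch of the idea might look like the Python structure below, with one part of the genome describing the printable skeleton, another listing the components to attach and a third holding the brain software's parameters. The field names and values are illustrative assumptions, not ARE's actual encoding.

```python
from dataclasses import dataclass, field

@dataclass
class RobotGenome:
    """Hypothetical 'virtual DNA' for a robot: body plus brain.
    Illustrative only; ARE's real genetic code is richer."""
    skeleton: str = "lattice-v1"   # shape sent to the 3D printer
    components: list = field(default_factory=list)   # sensors and locomotion
    brain_weights: list = field(default_factory=list)  # controller on the Pi

genome = RobotGenome(
    components=["wheel", "wheel", "infrared_sensor"],
    brain_weights=[0.2, -0.7, 1.1],
)
print(genome)
```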

Next comes the all-important learning phase. In most animal species, newborns undergo some kind of learning to fine-tune their motor control. This is even more pressing for our robots because breeding can occur between different “species”. For example, one with wheels might reproduce with another that has jointed legs, resulting in an offspring with both types of locomotion. In such situations, the inherited brain is unlikely to provide good control over the new body. The learning phase runs an algorithm to refine the brain over a small number of trials in a simplified environment. The process is analogous to a child learning new skills in a kindergarten. Only those robots deemed viable proceed to the third stage: testing.
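To give a flavour of the learning phase, here is a minimal sketch of one possible scheme: trial-and-error refinement of the brain's parameters within a fixed budget of trials. The run_trial function is a hypothetical stand-in for evaluating the robot in its kindergarten, and this is not necessarily the algorithm ARE uses.

```python
import random

def run_trial(weights):
    """Hypothetical stand-in: test the robot's motor control in
    the simplified 'kindergarten' and return a score."""
    return -sum(w * w for w in weights)  # toy objective for illustration

def learn(inherited_weights, trials=20, step=0.1):
    """Refine an inherited brain over a small number of trials."""
    weights, best = inherited_weights, run_trial(inherited_weights)
    for _ in range(trials):
        candidate = [w + random.gauss(0, step) for w in weights]
        score = run_trial(candidate)
        if score > best:              # keep changes that improve control
            weights, best = candidate, score
    return weights, best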

For testing, we currently use a mock-up of the inside of a nuclear reactor, in which each robot must clear radioactive waste. This requires it to avoid various obstacles and correctly identify the waste. Each robot is scored according to its success, and these scores are fed back to a computer. A selection process uses the scores to determine which robots are permitted to reproduce. Then, software that mimics reproduction performs DNA recombination and mutation operations on the genetic blueprints of two parents to create a new robot for fabrication, completing the triangle of life. Parent robots can either remain in the population, where they can take part in further reproduction events, or be broken down into their constituent parts and recycled into new robots.
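As a sketch of how scores can drive selection, the snippet below uses tournament selection, one common scheme: sample a few robots at random and let the highest scorer become a parent. The population records are invented for illustration, and the recombination and mutation steps would be those sketched earlier.

```python
import random

def select_parent(population, k=3):
    """Tournament selection: sample k robots at random and keep
    the best, so higher scorers are more likely to reproduce."""
    return max(random.sample(population, k), key=lambda r: r["score"])

# Invented records: each robot has a genome and a test score.
population = [{"genome": [random.random() for _ in range(6)],
               "score": random.random()} for _ in range(10)]

mum, dad = select_parent(population), select_parent(population)
# Their genomes would now be recombined and mutated, as sketched
# earlier, to produce a new robot for fabrication.
```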

[Image: A 3D-printed skeleton (top left) has limbs and a brain added (top right), before the new robot is put through its paces. Photo: Matthew Hale]

By working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, which limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world. This entails creating a digital version of every robot baby in a simulator once mating has occurred, then training and testing them in virtual kindergartens and test sites. Although these environments are unlikely to be totally accurate representations of their real-world counterparts, they do allow us to build and test new designs within seconds and identify those that look particularly promising. Their genomes can then be prioritised for fabrication into real-world robots. What’s more, we have a novel breeding process that permits reproduction between a physical robot and one of its virtual cousins, which enables useful traits discovered in simulation to quickly spread into the real-world population, where they can be further refined.
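One hedged sketch of how the virtual population can feed the physical one: rank simulated offspring by their virtual test scores and queue only the most promising genomes for real-world fabrication. The data and budget below are invented for illustration.

```python
import heapq

def prioritise_for_fabrication(virtual_results, budget=2):
    """Pick the highest-scoring simulated designs to print for real.
    virtual_results: invented (score, genome_id) pairs."""
    return heapq.nlargest(budget, virtual_results)

virtual_results = [(0.4, "genome-a"), (0.9, "genome-b"), (0.7, "genome-c")]
for score, genome_id in prioritise_for_fabrication(virtual_results):
    print(f"fabricate {genome_id} (simulated score {score})")
```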

In principle, the system we are developing could operate completely autonomously in an inaccessible environment or distant location. The potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviours that could cause damage or even harm humans. We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard. We could also anticipate unwanted behaviours by continually monitoring the evolutionary process and the evolved robots, then using that information to build analytical models to predict future problems. Ultimately, we need the ability to shut down the whole process. The most obvious and effective solution is to use a centralised reproduction system with a human overseer equipped with a kill switch.

Some of the applications of ARE, such as terraforming, may seem quite futuristic, but our research could also bring more immediate benefits. As climate change gathers pace, it is clear that robot designers need to radically rethink their approach to reduce their ecological footprint. They may, for example, want to create new types of robots that are built from sustainable materials, operate at low energy levels and are easily repaired and recycled. These probably won’t look anything like the robots we see around us today, but that is exactly why artificial evolution can help. Unfettered by the constraints that our own understanding of engineering science imposes on our designs, evolution can generate creative solutions we cannot even imagine.

Insights into evolution

“So far, we have been able to study only one evolving system [life on Earth] and we cannot wait for interstellar flight to provide us with a second. If we want to discover generalizations about evolving systems, we will have to look at artificial ones.” It is 30 years since evolutionary biologist John Maynard Smith wrote these words. Today, the Autonomous Robot Evolution (ARE) project is taking up that challenge. Although conceived to create robots that can reproduce and adapt (see main story), it also has the potential to shed light on evolution itself.

“Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms,” says evolutionary roboticist and ARE team member Guszti Eiben at Free University Amsterdam in the Netherlands. Evolution in robots is also much faster than in many biological systems, so ideas can be tested more rapidly. But the real advantage is that robots allow researchers to do things that life can’t. You can give a robot two brains, for example, and you can manipulate the “genetic language”, the code that describes how a robot should be formed. When two robots “mate”, for instance, you can control the rules governing how their “genomes” recombine to produce offspring.

Studying robotic evolution could give new insights into the processes that drive – or perhaps limit – evolution. Take interspecies breeding, once viewed by biologists as an evolutionary dead end. ARE provides an ideal way to investigate it because, unlike in nature, very different “species” can breed: legged robots can reproduce with wheeled ones, for example. Biologists are only just beginning to uncover the importance of hybridisation to evolution, and robot studies could prove invaluable in accelerating our understanding, with practical implications for biodiversity and conservation.

ARE is also hoping to shine new light on another fundamental property of evolution: natural selection. Biological evolution is driven purely by the need to survive and reproduce, with mate selection informed by observable physical or behavioural properties. Artificial evolution, by contrast, can be driven by goals defined by researchers, such as the need for robots to be energy efficient or to have a low-carbon footprint. Studies can then explore how such guided selection affects the efficiency of the evolutionary process – or whether imposing specific goals limits the essential creativity of evolution.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist Jacintha Ellers at Free University Amsterdam. “We can come up with novel types of creatures and see how they perform under different selection pressures.” It offers a way to use evolutionary principles to explore a rich set of “what if” questions.
