How Google’s helium balloon surprised its creators

The gaggle of Google employees peered at their computer screens in bewilderment. They had spent many months honing an algorithm designed to steer an unmanned helium balloon all the way from Puerto Rico to Peru. But something was wrong. The balloon, controlled by its machine mind, kept veering off course.

Salvatore Candido of Google’s now-defunct Project Loon venture, which aimed to bring internet access to remote areas via the balloons, couldn’t explain the craft’s trajectory. His colleagues manually took control of the system and put it back on track.

It was only later that they realised what was happening. Unexpectedly, the artificial intelligence (AI) on board the balloon had learned to recreate an ancient sailing technique first developed by humans centuries, if not millennia, ago. “Tacking” involves steering a vessel at an angle into the wind and then turning back across it, so that the craft can still make progress in a zig-zag, roughly in the desired direction.
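The geometry behind tacking is simple enough to sketch. The snippet below is an illustration only, not anything from Loon’s codebase: a hypothetical Python helper that computes the “velocity made good”, the share of a craft’s speed that actually counts toward the destination, for a few zig-zag angles.

```python
import math

def velocity_made_good(speed: float, off_axis_deg: float) -> float:
    """Speed toward the goal when heading off_axis_deg away from it.

    A craft that cannot point straight at its destination (because
    the wind blows from that direction) still makes headway equal to
    the component of its speed along the direct line to the goal.
    """
    return speed * math.cos(math.radians(off_axis_deg))

# Alternating legs at these angles form the classic zig-zag of a tack.
for angle in (30, 45, 60):
    vmg = velocity_made_good(1.0, angle)
    print(f"{angle:>2} degrees off-axis -> {vmg:.2f} of full speed made good")
```

Even at 60 degrees off-axis, half of the craft’s speed still counts toward the goal, which is why a zig-zag course works at all.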

Under unfavourable weather conditions, the self-flying balloons had learned to tack all by themselves. The fact they had done this, unprompted, surprised everyone, not least the researchers working on the project.

“We quickly realised we’d been outsmarted when the first balloon allowed to fully execute this technique set a flight time record from Puerto Rico to Peru,” wrote Candido in a blog post about the project. “I had never simultaneously felt smarter and dumber at the same time.”

This is just the sort of thing that can happen when AI is left to its own devices. Unlike traditional computer programs, which follow instructions spelled out step by step, AI systems of this kind are designed to explore, and to develop approaches to a task that their human engineers never explicitly specified.
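Reinforcement learning makes this concrete: the agent is given a reward signal, not a route. Loon’s team described its real flight controller as a deep reinforcement-learning system; the toy Q-learning sketch below is entirely hypothetical and far simpler, but it shows the same effect, with an agent rewarded only for reaching the top of a small grid discovering by itself that zig-zagging beats fighting a headwind.

```python
import random

# Toy gridworld, invented for illustration: the goal is due "north",
# but a headwind cancels the direct move. Diagonal moves (the analogue
# of tacking) still gain ground. The agent is never told any of this;
# it only receives a reward for reaching the top row.
WIDTH, HEIGHT = 3, 6
ACTIONS = ["N", "NE", "NW"]
MOVES = {"N": (0, 1), "NE": (1, 1), "NW": (-1, 1)}

def step(x, y, action):
    dx, dy = MOVES[action]
    nx = x + dx
    # The "wind" cancels direct northward moves; the grid edge cancels
    # moves that would leave the map.
    if action == "N" or not 0 <= nx < WIDTH:
        nx, dy = x, 0
    ny = y + dy
    done = ny >= HEIGHT - 1
    return nx, ny, (1.0 if done else -0.01), done  # small cost per move

q = {}  # (x, y, action) -> value, learned from experience alone
alpha, gamma, epsilon = 0.5, 0.95, 0.2

for _ in range(2000):
    x, y, done = WIDTH // 2, 0, False
    while not done:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                             # explore
        else:
            a = max(ACTIONS, key=lambda b: q.get((x, y, b), 0.0))  # exploit
        nx, ny, r, done = step(x, y, a)
        target = r if done else r + gamma * max(
            q.get((nx, ny, b), 0.0) for b in ACTIONS)
        old = q.get((x, y, a), 0.0)
        q[(x, y, a)] = old + alpha * (target - old)
        x, y = nx, ny

# Greedy rollout: the learned policy heads NE and NW in alternation,
# never wasting a move on the blocked direct heading.
x, y, done = WIDTH // 2, 0, False
for _ in range(20):
    if done:
        break
    a = max(ACTIONS, key=lambda b: q.get((x, y, b), 0.0))
    print(f"at (x={x}, y={y}) -> head {a}")
    x, y, _, done = step(x, y, a)
```

Nothing in the reward mentions tacking; the zig-zag emerges purely from trial and error, which is exactly the kind of unanticipated but effective behaviour that caught the Loon engineers off guard.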

But while learning how to do these tasks, AIs sometimes come up with an approach so inventive that it astonishes even the people who work with such systems all the time. That can be a good thing, but it can also make AI-controlled systems dangerously unpredictable: robots and self-driving cars could end up making decisions that put humans in harm’s way.