Consider the many challenges we must solve when moving on foot: If we’re walking or running uphill, we lean forward; if we’re walking on a slippery surface, we take our steps carefully to avoid sliding; if we’re running barefoot on gravel, we try to step lightly to avoid bruising the soles of our feet; if one of our feet is bruised, we shift our weight to remove pressure from it.
Interestingly, we perform all of these complicated functions with little thought.
Every one of us struggles with our baby steps, but through practice we learn to maintain our balance while walking and running. Eventually, we become so adept at walking that we do it subconsciously while focusing on other tasks.
Drawing lessons from nature, a group of researchers at Boston University has created a controller technology that helps drones maintain stability and balance as their environment and hardware change. Called Neuroflight, the system uses machine learning to automatically adjust the drone’s flight when, say, wind conditions change or the drone loses a propeller.
Old controllers have limited capacity—AI controllers are flexible
Before artificial intelligence, most drones and other remote-controlled vehicles used linear controllers to maintain flight stability. These controllers use mathematical control equations to respond to changes in flight conditions. But these equations respond to a limited set of parameters and quickly break under conditions they haven’t been designed for.
“We are still using the PID controller, which was invented in the 1920s. Current controllers are extremely difficult to tune, because every quadcopter drone is unique, and you have to modify them to obtain stable flight,” says William Koch, a PhD candidate at Boston University.
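For context, the classic PID loop the researchers are replacing can be sketched in a few lines. This is a generic textbook PID controller, not Neuroflight’s actual code; the gains kp, ki, and kd are the hand-tuned constants Koch describes.

```python
# A generic textbook PID controller, sketched for context; not
# Neuroflight's code. The gains kp, ki, and kd are the hand-tuned
# constants that make classic controllers so difficult to tune.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0      # accumulated error over time
        self.prev_error = 0.0    # last error, for the derivative term

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Sum the proportional, integral, and derivative corrections.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One control step: the drone should be level (0.0) but is tilted 0.2.
pid = PID(kp=1.0, ki=0.1, kd=0.05)
correction = pid.update(setpoint=0.0, measured=0.2, dt=0.01)
```

Because the gains are fixed constants, a set tuned for one airframe can behave badly on another, which is exactly the limitation Koch points to.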
Koch became interested in drone flight after a friend introduced him to it, and he soon started thinking about ways to improve it. This put him on the path to creating Neuroflight with Azer Bestavros, Professor and former Chair of the BU Computer Science Department, and Renato Mancuso, Assistant Professor in the BU Computer Science Department.
In a nutshell, Neuroflight replaces classic control mechanisms with machine learning and neural networks, a kind of AI construction that develops behavior through the evaluation of data samples. Instead of manually creating thousands of complicated behavior rules, you provide a neural network with many examples of situations and their corresponding responses, and the AI develops its own internal representations that enable it to react properly to different conditions.
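As a rough illustration (with made-up layer sizes, not Neuroflight’s actual architecture), such a neural-network controller is simply a function that maps the drone’s state to motor outputs:

```python
import numpy as np

# Illustrative only: a tiny network mapping a 6-value state vector
# (e.g. angular rates plus the pilot's setpoints) to 4 motor commands.
# The weights here are random; training would set them from examples.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((6, 32)) * 0.1   # input layer -> 32 hidden units
W2 = rng.standard_normal((32, 4)) * 0.1   # hidden units -> 4 motors

def control(state):
    hidden = np.tanh(state @ W1)              # learned internal representation
    return 1 / (1 + np.exp(-(hidden @ W2)))   # sigmoid keeps outputs in [0, 1]

state = np.array([0.1, -0.2, 0.0, 0.0, 0.0, 0.0])
motors = control(state)   # four thrust values, one per propeller
```

The “internal representations” the network develops live in those weight matrices, in place of the hand-written rules or fixed equations of a classic controller.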
“Machine learning removes the need for any sort of manual system,” Koch says.
In fact, the AI-based controller closely resembles the learning process our brains undergo when we learn to walk. We don’t develop mathematical equations; we learn through experience and trial and error.
“When I’m walking around, I’m not thinking about balancing myself. It just happens without me thinking about it. Our body has that controller. How did we come up with that? When we started walking, our brain got wired to do that,” says Bestavros.
So basically, the idea behind Neuroflight was to create the AI equivalent of our own bodies’ controllers for drones.
Training the AI controller in a simulated environment
“Flight is subject to many disturbances. It could be wind changes or loss of efficiency in your motor. If one propeller is less efficient, it should put less pressure on it,” says Mancuso. “It’s very hard to model flight conditions. But they can be learned. And they can be learned in a simulated environment.”
The Neuroflight team developed a simulation of physical flight conditions, which they used to train their AI model on the different conditions that can arise during flight. With a simulated environment in place, the team was able to apply reinforcement learning, a form of AI in which the model is provided with basic actions (e.g. adjusting the power of each propeller) and intended goals (e.g. maintaining the drone’s balance), and is rewarded when its actions move it toward those goals. The AI agent then explores the results that different actions produce in various states. Gradually, the AI becomes better and better at what it does.
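The loop described above can be sketched with a toy problem, under heavy simplifying assumptions: the “drone” is reduced to a single tilt angle, the actions are coarse power adjustments, and the reward is higher the closer the drone stays to level. Real training uses a full flight simulator and a neural-network policy, but the explore-and-improve cycle is the same.

```python
import random

# Toy reinforcement-learning sketch: keep a simulated "drone" level.
ACTIONS = [-0.1, 0.0, 0.1]   # decrease, hold, or increase motor power
q_values = {}                # value estimate per (state, action) pair

def bucket(angle):
    return round(angle, 1)   # discretize the tilt angle into states

def choose(angle, eps=0.2):
    if random.random() < eps:          # explore: try a random action
        return random.choice(ACTIONS)
    s = bucket(angle)                  # exploit: pick best known action
    return max(ACTIONS, key=lambda a: q_values.get((s, a), 0.0))

random.seed(1)
for episode in range(500):
    angle = random.uniform(-1, 1)      # start each episode tilted
    for step in range(50):
        action = choose(angle)
        next_angle = angle + action + random.gauss(0, 0.02)  # simulated wind
        reward = -abs(next_angle)      # goal: stay level (angle near zero)
        s, a = bucket(angle), action
        old = q_values.get((s, a), 0.0)
        q_values[(s, a)] = old + 0.1 * (reward - old)  # update the estimate
        angle = next_angle
```

Early on the agent acts almost at random; over many simulated episodes the value estimates improve and its choices become steadier, which is why this phase is best done in simulation rather than on real hardware.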
Doing reinforcement learning in simulation lets the AI train in fast-forward, much faster than it could on a real physical drone. You can also simulate conditions that would be hard to replicate in the real world, such as quickly changing wind speeds or varying levels of wear and tear on the motors. You can even simulate and train large flocks of drones moving together and affecting each other’s air flow.
Simulated learning is also much safer, especially as reinforcement learning AI agents tend to make erratic decisions in the beginning of their training, when they have no previous model of the environment.
Once the AI agent gets enough training, it can then be deployed on real drones to replace the static control unit. However, the researchers also acknowledge that training the AI in the simulated environment has some tradeoffs.
“You can’t possibly simulate the real physical world in this environment,” Mancuso says. “There’s no way to predict all the possible variables that can happen. But the digital twin provides a good environment to learn in, a simplified simulation of the physical world. The AI learns enough to sustain itself in flight; it would be the equivalent of a human learning to walk on an even floor.”
Running the AI controller on real drones
After training the AI model, the next stage is to integrate it into the drone’s hardware and replace the classical PID controller with the neural network.
One of the challenges of neural networks is that they consume a lot of power and require significant compute resources. Usually, this means the AI model must run on a cloud server and communicate with the device over a stable internet connection.
There are now vast efforts to develop lighter AI models and specialized hardware that can run neural network computations on-device, an effort collectively known as edge AI.
The interesting thing about Neuroflight was that the Boston University researchers were able to deploy it on existing hardware without the need for specialized AI chips.
“To our knowledge, this is the first situation where an artificial neural network–based flight controller is able to run in such a resource-constrained environment, and what I mean by that is, we’re actually running this neural network on a microcontroller,” Koch says.
The Neuroflight team deployed their neural network on microcontrollers with 1MB of flash memory and a 512MHz processor. Other AI models usually require resource-intensive hardware or full-fledged computers, which limits the type of quadcopters on which they can be used.
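A back-of-the-envelope calculation shows why such a small device can suffice. Assuming a hypothetical three-layer network (these sizes are illustrative, not Neuroflight’s actual architecture), the weights occupy only a few kilobytes of the 1MB flash:

```python
# Hypothetical layer sizes: 6 inputs, two hidden layers of 32, 4 outputs.
layers = [6, 32, 32, 4]

# Each layer contributes (inputs * outputs) weights plus one bias per output.
params = sum(a * b + b for a, b in zip(layers, layers[1:]))
flash_bytes = params * 4   # stored as 32-bit floats

print(params, flash_bytes)   # 1412 parameters, 5648 bytes
```

Inference is then just a handful of small matrix multiplications per control cycle, well within reach of a microcontroller.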
“We’re able to deploy this neural network to a quadcopter that can fit in the palm of your hand,” Koch says.
Another benefit of Neuroflight is that, unlike static controllers, it doesn’t need to be tuned to a specific model before deployment. The neural network only needs to be trained with minimal information about the kind of drone it will control.
“We don’t have to have any assumptions or knowledge of the actual aircraft we’re training in simulation,” Koch says. “All the artificial neural network needs to know is how many motors it is able to control. It doesn’t need to know anything else and can develop an optimal strategy to control any single aircraft. That is a very powerful technique.”
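Koch’s point can be sketched as a hypothetical factory function: the only aircraft-specific input is the motor count, which sizes the network’s output layer. Everything else here (layer sizes, function names) is illustrative, not Neuroflight’s API.

```python
import numpy as np

# Hypothetical sketch: the controller is parameterized only by the
# number of motors; everything else about the aircraft is learned.
def make_controller(num_motors, state_dim=6, hidden=32, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((state_dim, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, num_motors)) * 0.1
    def control(state):
        h = np.tanh(state @ W1)
        return 1 / (1 + np.exp(-(h @ W2)))   # one command per motor
    return control

quad = make_controller(4)   # quadcopter: four outputs
hexa = make_controller(6)   # hexacopter: six outputs, same code
```

The same training procedure then shapes the weights to the aircraft it is simulated on, with no per-model equations to derive.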
Dynamically adapting the AI to the drone
An ideal neural network should get its initial training in the simulated environment, and then gradually retrain and adapt itself to the specific drone it’s installed on as the engines go through wear and tear or get replaced. It should also be able to learn from new weather and wind conditions it hasn’t seen during training.
But training neural networks still requires vast amounts of computing power that is usually not available at the edge, and especially not on a small drone. So the next phase of the Neuroflight project will be to figure out how to perform this dynamic training on-device. One solution that the researchers are working on is to enable the controller to gather data about the device and the environment and to send it to the cloud to further train and enhance the AI model.
“We will eventually push everything onto the drone and do the training and adapting locally. This will allow us to scale to very large formations of hundreds of thousands of drones, which will be able to achieve stable flight without relying on the cloud, or use the cloud only when needed,” Koch says.