Ramin Hasani looked at a tiny, crawling worm through a microscope. He found it fascinating. Later, he said, “It can move better than any robotic system we have.” This tiny worm has an even smaller brain, yet it can move around and explore with ease.
Its brain has now inspired Hasani and his team to develop a new type of artificial intelligence. They call their approach liquid neural networks. In both the worm’s brain and your larger human brain, cells called neurons connect to each other through connections called synapses.
These interconnected neurons form networks that process thoughts and sensations. ChatGPT and most of today’s popular AI models run on artificial neural networks, or ANNs. However, despite their name, these networks have almost nothing to do with a real brain.
Liquid neural networks are a new type of ANN. They model our brain more closely. “I think Ramin Hasani’s approach is an important step toward more realistic AI,” says Kanak Rajan, who wasn’t involved in the new work.
But she knows about such things. She’s a computational neuroscientist at Harvard Medical School and the Kempner Institute for the Study of Natural and Artificial Intelligence in Boston, Massachusetts. She uses AI to better understand the brain.
Compared to a standard ANN, a liquid neural network requires less energy and computer power to run. Yet it can solve some problems more quickly.
Hasani’s team at the Massachusetts Institute of Technology (MIT) in Cambridge showed this in tests with self-driving cars and drones. And in December 2023, his group launched a company to bring the technology to the mainstream. Hasani is now the CEO of Liquid AI in Cambridge, Massachusetts.
Thinking small and smart
The best ideas come when you’re in the shower or out for a run, says Daniela Rus. She directs MIT’s Computer Science and Artificial Intelligence Laboratory. The idea for liquid neural networks first came up on a hot summer day several years ago. The Liquid AI co-founder was at a conference. Ramin Hasani’s PhD advisor, Radu Grosu, was also there. The two went for a run and talked about their work.
Grosu is a computer scientist at the Technical University of Vienna in Austria. At the time of the conference, he and Hasani were building models of the brain of C. elegans. This tiny worm has just 302 neurons. About 8,000 connections link them. (For comparison, the human brain has about 100 billion neurons and 100 trillion connections.)
Rus was working on self-driving cars. To train the car, her team was using ANNs with tens of thousands of artificial neurons and half a million connections.
Rus realized that if a worm doesn’t need that many neurons to move around, maybe AI models could get by with fewer. She recruited Hasani and another student of Grosu’s to join her at MIT. In 2020, they started a new project: giving a car a more worm-like brain.
At the time, most AI researchers were building bigger and bigger ANNs. Today’s largest ANNs have hundreds of billions of artificial neurons with trillions of connections! Making these models bigger has made them smarter. But they’re also getting more expensive to build and run.
Simplifying the math
Brains, even worm brains, are surprisingly complex. Scientists are still figuring out how they do what they do.
Hasani focused on how worm neurons influence each other.
In C. elegans, neurons don’t always respond the same way to the same input. There is some probability, or likelihood, of different outputs. Timing matters, too. And neurons pass information both forward and backward through the network. (In contrast, most ANNs model neither probability nor time, and their information flows only forward.)
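To see why keeping track of time and past activity matters, here is a minimal sketch (not Hasani’s actual model, and all names here are made up for illustration). It contrasts a stateless unit, which always gives the same answer to the same input, with a stateful one whose response also depends on what it saw before.

```python
import math

def stateless_unit(x, w=1.0):
    """A feedforward unit: the same input always gives the same output."""
    return math.tanh(w * x)

class StatefulUnit:
    """A recurrent unit: its output depends on the input AND on its own
    past activity, so the same input can get different responses
    depending on when it arrives."""
    def __init__(self, w_in=1.0, w_rec=0.5):
        self.w_in = w_in
        self.w_rec = w_rec
        self.state = 0.0

    def step(self, x):
        # Mix the new input with the unit's own previous state.
        self.state = math.tanh(self.w_in * x + self.w_rec * self.state)
        return self.state

# The stateless unit repeats itself exactly; the stateful one does not.
print(stateless_unit(1.0), stateless_unit(1.0))  # two identical values
u = StatefulUnit()
print(u.step(1.0), u.step(1.0))  # the second response differs: history matters
```

Real liquid networks add much more on top of this (probabilistic responses, continuous time), but the stateful unit captures the basic contrast with a plain feedforward ANN.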
Modeling brain-like traits in neurons requires some tricky math, called differential equations. Solving these equations means doing a series of complex calculations, step by step. Normally, the solution of each step feeds into an equation for use in the next step.
But Hasani figured out a way to solve the equations in a single step. Rajan says it’s “amazing.” The achievement makes it possible to run liquid neural networks in real time on a car, drone or other device.
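The difference between step-by-step solving and a single-step solution can be illustrated with a much simpler “leaky” neuron equation than the ones liquid networks actually use: dx/dt = (I − x)/τ, which happens to have a known exact answer. The step-by-step version below loops through many small updates, each feeding the next; the closed form gives the same result in one evaluation.

```python
import math

def euler_solve(x0, I, tau, t, steps=1000):
    """Solve dx/dt = (I - x)/tau step by step: each step's result
    feeds into the next step's calculation."""
    x, dt = x0, t / steps
    for _ in range(steps):
        x += dt * (I - x) / tau
    return x

def closed_form(x0, I, tau, t):
    """The exact solution, evaluated in a single step:
    x(t) = I + (x0 - I) * exp(-t / tau)."""
    return I + (x0 - I) * math.exp(-t / tau)

# Both approaches agree; the closed form skips the loop entirely.
print(euler_solve(0.0, 1.0, 0.5, 2.0))   # ~0.98 after 1,000 small steps
print(closed_form(0.0, 1.0, 0.5, 2.0))   # ~0.98 in one calculation
```

Hasani’s actual equations are more complicated than this toy one, but the payoff is the same in spirit: replacing a long loop of updates with one formula is what makes real-time use on a car or drone feasible.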
An ANN learns a task during a period called training. It uses examples of the task to adjust the connections between its neurons. For most ANNs, once training is over, “the model remains static,” Rus says. Liquid neural networks are different. Rus notes that even after training, they “can learn and adapt based on the inputs seen.”
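One hedged way to picture this adaptability: in liquid networks, how quickly a neuron responds (its “time constant”) depends on the input it is currently receiving. The sketch below is a simplified illustration of that idea, not the published equations; the function name and parameters are invented for the example.

```python
def effective_tau(I, tau_base=1.0, w=2.0):
    """An input-dependent time constant: stronger inputs make the
    neuron respond faster. The trained parameters (tau_base, w) are
    fixed, yet the network's dynamics still shift with the data it
    sees -- the 'liquid' behavior the article describes."""
    return tau_base / (1.0 + w * abs(I))

# Same trained parameters, different dynamics for different inputs.
print(effective_tau(0.1))  # slower response to a weak input
print(effective_tau(2.0))  # faster response to a strong input
```

Nothing about the model’s weights changes here; what adapts is the behavior those weights produce, because the equations themselves take the current input into account.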
In a self-driving car, a liquid neural network with 19 neurons did a better job of staying in its lane than the larger model Rus was using before.