Brain activity is too complicated for humans to decipher. Machines can decode it for us.

Why we need artificial intelligence to study our natural intelligence. 


Over the past several years, Jack Gallant’s neuroscience lab has produced a string of papers that sound absurd.

In 2011, the lab showed it was possible to recreate movie clips just from observing the brain activity of people watching movies. Using a computer to regenerate the images of a film just by scanning the brain of a person watching one is, in a sense, mind reading. Similarly, in 2015, Gallant’s team of scientists predicted which famous paintings people were picturing in their minds by observing the activity of their brains.

This year, the team announced in the journal Nature that they had created an “atlas” of where 10,000-plus individual words reside in the brain — just by having study participants listen to podcasts.

How did they do all this? By using machine learning tools — a type of artificial intelligence — to mine huge troves of brain data and find the patterns of brain activity that predict our perception.

The goal here isn’t to build a mind-reading machine (although it’s often confused for that). Neuroscientists aren’t interested in stealing your passwords right out of your head. Nor are they interested in your darkest secrets. The real goal is a lot bigger. By turning neuroscience into a “big data” science, and using machine learning to mine that data, Gallant and others in the field have the potential to revolutionize our understanding of the brain.

The human brain, after all, is the most complicated object we know of in the universe, and we barely understand it. The wild idea of Gallant’s lab — an idea that could lift the field of neuroscience out of its infancy — is this: Maybe we have to build machines to figure out the brain for us. The hope is that if we can decipher the intensely intricate patterns of the brain, we can figure out how to fix it when it’s suffering from disease.


Functional MRI — the main tool scientists use to peer into the living brain and watch it work — has only been around since the 1990s, and it gives us only a cruddy view.

To put that view into perspective, the smallest unit of brain activity an fMRI can detect is called a voxel. Usually these voxels are no smaller than a cubic millimeter — and a single voxel can contain some 100,000 neurons. As University of Texas neuroscientist Tal Yarkoni put it to me, fMRI is “like flying over a city and seeing where the lights are on.”

A basic fMRI scan. (Wikipedia)

Traditional fMRI images can show broad areas crucial to a behavior — for instance, where we process negative emotions, or which regions light up when we see a familiar face.

But you don’t know exactly what role that area plays in the behavior or whether other, less active areas play a crucial role as well. The brain isn’t like a Lego set, with each brick serving a specific function. It’s a mesh of activity. “Every area of the brain will have a 50 percent chance to be connected to every other area in the brain,” Gallant says.

That’s why simple experiments — to identify a “hunger” area or a “vigilance” area of the brain — can’t really yield satisfying conclusions.

“For the last 15 years, we’ve been looking at these blobs of activity and thinking that’s all the information that’s there — just these blobs,” Peter Bandettini, chief of the Section on Functional Imaging Methods at the National Institute of Mental Health, told me in July. “And it turns out every nuance of the blob, every little nuance of fluctuation, contains information about what the brain is doing that we haven’t really tapped into completely yet. That’s why we need these machine learning techniques. Our eyes see the blobs, but we don’t see the patterns. The patterns are too complicated.”

Here’s an example. The traditional view of how the brain processes language is that it takes place in the left hemisphere, and that two specific areas — Broca’s area and Wernicke’s area — are the centers of language activity. If those areas are damaged, the ability to produce or comprehend language suffers.

But Alex Huth, a postdoc in Gallant’s lab, recently showed that’s too simplistic an understanding. Huth wanted to know whether the whole brain is involved in language comprehension.

In an experiment, he had several participants listen to up to two hours of the storytelling podcast The Moth while he and his colleagues recorded their brain activity in fMRI scanners. The goal was to correlate distinct areas of brain activity with hearing individual words.

That produces an enormous amount of data, more than any human can possibly deal with, Gallant says. But a computer program trained to look for patterns can find them. And the program Huth designed was able to reveal an “atlas” of where individual words “live” in the brain.

“Alex’s study showed huge parts of the brain are involved in semantic comprehension,” Gallant says. He also showed that words with similar meanings — like “poodle” and “dog” — are located near one another in the brain.
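To make the approach concrete, here’s a minimal sketch of an encoding model in Python — regularized linear regression that learns, for each voxel, how strongly each feature of the heard words drives that voxel’s response. The data is randomly generated for illustration; this shows the general technique, not the Gallant lab’s actual pipeline.

```python
# Minimal encoding-model sketch: predict each voxel's response from
# features of the words a listener heard. Synthetic data throughout;
# this illustrates the general technique, not the lab's actual code.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 600, 50, 1000

word_features = rng.standard_normal((n_timepoints, n_features))   # e.g., semantic features of the story
voxel_responses = rng.standard_normal((n_timepoints, n_voxels))   # fMRI signal, one column per voxel

# One regularized linear model per voxel (Ridge fits them all at once)
model = Ridge(alpha=1.0).fit(word_features, voxel_responses)

# The learned weights say how strongly each word feature drives each
# voxel -- the raw material for an "atlas" of meaning across cortex.
print(model.coef_.shape)  # (n_voxels, n_features)
```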

So what’s the significance of a project like this? In science, prediction is power. If scientists can predict how a dizzying flurry of brain activity translates to language comprehension, they can build a better model of how the brain works. And if they can build a working model, they can better understand what’s happening when the variables change — when the brain is sick.

What is machine learning?

“Machine learning” is a broad term that encompasses a huge array of software. In consumer tech, machine learning technology is accelerating by leaps and bounds — learning how to “see” objects in photos, for instance, at near-human levels. Using a machine learning technique called “deep learning,” Google’s Translate service has gone from a rudimentary (often humorously so) translation tool to a machine that can translate Hemingway into a dozen languages with a style that rivals the pros.

But on the most basic level, machine learning programs look for patterns: What’s the likelihood that variable X will correlate with variable Y?

Typically, machine learning programs need to be “trained” on a data set first. In training, these programs look for patterns in the data. The more training data, typically, the “smarter” and more accurate these programs become. After training, the machine learning programs are given brand new sets of data they’ve never seen before. And with those new sets of data, they can start to make predictions.

A good simple example is your email’s spam filter. Machine learning programs have scanned enough pieces of junk mail — learning the patterns of language contained within them — to recognize spam when a new email arrives.
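That train-then-predict loop fits in a few lines of Python. Here’s a toy spam filter built with scikit-learn — invented emails, not any real filter’s code:

```python
# Toy spam filter: learn word patterns from labeled emails, then
# classify a message the program has never seen. Made-up data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_emails = ["win a free prize now", "claim your free money",
                "meeting notes attached", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()                # turn emails into word counts
X = vectorizer.fit_transform(train_emails)
classifier = MultinomialNB().fit(X, labels)   # learn which words signal spam

new_email = vectorizer.transform(["free prize inside"])
print(classifier.predict(new_email))          # ['spam']
```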

Machine learning programs can be very simple, just calculating mathematical regressions. (You remember those from middle school math, right? Hint: It’s about finding the slope of a line that explains the pattern in a smattering of data points.) Or they can be like Google DeepMind’s Go-playing system, which feeds off millions of data points and is the reason Google was able to build a computer that beat a human champion at Go, a game so complicated that its board and pieces have more possible configurations than there are atoms in the universe.
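The regression case really is middle school math. A sketch, with made-up points:

```python
# The simplest "machine learning": find the slope of the line that
# best explains a smattering of points. Invented data.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])      # roughly y = 2x + 1, plus noise

slope, intercept = np.polyfit(x, y, deg=1)   # least-squares line fit
print(f"y ≈ {slope:.2f}x + {intercept:.2f}") # the fitted "model"
```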

Neuroscientists are using machine learning for several different ends. Here are the two basic ones: encoding and decoding.

With “encoding,” machine learning tries to predict the pattern of brain activity a stimulus will produce.

“Decoding” is the opposite: examining patterns of brain activity and predicting what the participant was looking at.

(Note: Neuroscientists can use machine learning on other forms of brain scans — like EEGs and MEGs — in addition to fMRI.)
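A bare-bones decoder might look like the sketch below — synthetic numbers standing in for fMRI data, and a plain logistic-regression classifier standing in for the fancier models labs actually use:

```python
# Decoding sketch: given voxel activity patterns, predict which of two
# stimuli the subject saw. Synthetic data; real studies use fMRI scans.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_trials, n_voxels = 200, 500
stimulus = rng.integers(0, 2, n_trials)             # 0 = face, 1 = house
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[:, :10] += stimulus[:, None] * 0.8         # plant a weak signal in a few voxels

X_train, X_test, y_train, y_test = train_test_split(patterns, stimulus, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(decoder.score(X_test, y_test))                # accuracy well above the 0.5 chance level
```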

Brice Kuhl, a neuroscientist at the University of Oregon, recently used decoding to reconstruct faces that participants were looking at from fMRI data alone.

The brain regions Kuhl targeted in the MRI have long been known to be related to vivid memories. “Is that region representing details of what you saw — or just [lighting up] because you were just confident in the memory?” Kuhl says. That the machine learning program could predict features of the face from the brain activity in that region suggests that’s where the information on the “details of what you saw” lives.

The top row shows the original faces in Kuhl’s study; the two rows below are reconstructions based on the activity in two different regions of the brain. The reconstructions are far from perfect, but they do convey basic details of the original faces — gender, skin tone, and smile come through. (The Journal of Neuroscience)

Similarly, Gallant’s experiment to predict what works of art participants were thinking about unveiled a small secret about the mind: We activate the same brain areas to remember visual features as we do when we’re seeing them.

The neuroscientists I spoke to all said that machine learning isn’t revolutionizing their field drastically yet. The big reason why is that they don’t have enough data. Brain scans take a lot of time and are very costly, and studies typically use a few dozen participants, not a few thousand.

“In the ’90s, when neuroimaging was just taking off, people were looking at category-level representation — what part of the brain is looking at faces versus words versus houses versus tools, large-scale questions,” says Avniel Ghuman, a neurodynamics researcher at the University of Pittsburgh. “Now we’re able to ask more refined questions. Like, ‘Is this memory someone is recalling right now the same thing they were thinking about 10 minutes ago?’”

This progress “is more evolutionary than revolutionary,” he says.

Neuroscientists hope machine learning could help diagnose and treat mental disorders

To this day, psychiatrists can’t put a patient in an fMRI machine and determine from brain activity alone whether she has a mental disorder like schizophrenia. They have to rely on clinical conversations with the patient (which have great value, no doubt). But a more machine-driven diagnostic approach could differentiate one form of the disease from another, which could have implications for treatment. To get there, Bandettini at NIMH says, neuroscientists will need access to huge databases of fMRI scans — on the order of 10,000 subjects.

Machine learning programs could mine those data sets looking for telltale patterns of mental disorders. “You can then go back and start using this more clinically — put a person in a scanner and say, ‘Based on this biomarker that was generated by this 10,000-person database, we can now make the diagnosis of, let’s say, schizophrenia,’” he says. Efforts here are still preliminary, and have not yet yielded blockbuster results.

But with enough understanding of how the networks of the brain work with one another, it could be possible “to design more and more sophisticated kinds of interventions to fix things in the brain when they go wrong,” Dan Yamins, a computational neuroscientist at MIT, says. “It might be something like you put an implant into the brain that corrects Alzheimer’s in some way, or corrects Parkinson’s in some way.”

Machine learning could also help psychiatrists predict how an individual patient’s brain will respond to a drug treatment for depression. “Right now psychiatrists have to guess which medication is likely to be effective from a diagnostic point of view,” Yamins says. “Because what presents symptomatically is not a strong enough picture of what’s happening in the brain.”

He stressed that this may not happen until well into the future. But scientists are starting to think through these problems now. The journal NeuroImage just devoted a whole issue to papers on predicting individual brain differences and diagnoses from neuroimaging data alone.

It’s important work. Because when it comes to health care, prediction offers new paths for treatment and prevention.

Machine learning could predict epileptic seizures

With epilepsy, patients never know when a debilitating seizure will strike. “It’s a huge impairment in your life — you can’t drive a car, it’s a burden, you don’t participate in everyday life as you would,” Christian Meisel, a neuroscientist at the National Institutes of Health, tells me on a recent afternoon. “Ideally you would have a warning system.”

Treatment options for epilepsy aren’t perfect either. Some patients are medicated with anticonvulsants 24/7, but those drugs have serious side effects. And for some 20 to 30 percent of epileptics, no drug works at all.

Prediction could change the game.

If epileptics knew a seizure was imminent, they could at least get themselves to a safe place. Prediction could also change treatment options: A warning could cue a device to give a patient a fast-acting epileptic drug, or send an electrical signal to stop the seizure in its tracks.

Meisel shared an EEG — an electroencephalogram — of an epileptic patient. “There’s no seizure there,” Meisel says. “The question is, though, is this activity one hour away from a seizure, or is it more than four hours away?”

It would be very hard for a clinician to predict, “if not impossible,” he says.

But information about a coming seizure may be hidden in that flurry. To test this possibility, Meisel’s lab recently took part in a contest hosted by Kaggle, a data science community hub on the web. Kaggle provided months’ to years’ worth of EEG recordings from three epilepsy patients. Meisel used deep learning to analyze the data and look for patterns.

How good is the system at predicting seizures from the EEG in the lead-up to one? “If you have a perfect system that predicts everything, you get a score of 1,” Meisel says. “If you have a random system that flips a coin, you get a 0.5. We get a 0.8 now. This means we’re not at perfect prediction. But we’re much better than random.” (That sounds great, but this approach is more theoretical than practical for now. These patients were monitored with intracranial EEG, an invasive procedure.)
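A score with exactly those properties — 1 for perfection, 0.5 for a coin flip — is the area under the ROC curve, the standard yardstick in contests like Kaggle’s. A toy computation with invented numbers, not Meisel’s data:

```python
# Area under the ROC curve: 1.0 = perfect ranking of risk, 0.5 = chance.
# Labels and risk scores below are invented for illustration.
from sklearn.metrics import roc_auc_score

seizure_followed = [0, 0, 0, 1, 1, 1]              # 1 = a seizure followed this EEG window
predicted_risk = [0.1, 0.4, 0.35, 0.8, 0.3, 0.9]   # the model's warning level per window

print(roc_auc_score(seizure_followed, predicted_risk))  # ~0.78: better than chance, shy of perfect
```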

Meisel is a neurological theorist, drafting models of how epileptic seizures grow from small pockets of neural activity into total, debilitating storms. He says machine learning is becoming a useful tool to help him refine that theory: He can build it into the machine learning model and see whether that makes the system more predictive or less. “If it works, then my theory is right,” he says.

For machine learning to really make a difference, neuroscience will need to become a big data science


Machine learning won’t solve all the big problems in neuroscience. It may be limited by the quality of data coming in from fMRI and other brain scanning techniques. (Recall that fMRI paints a fuzzy picture of the brain, at best.) “If we had an infinite amount of imaging data, you would not get perfect prediction because those imaging procedures are very imperfect,” Gael Varoquaux, a computational scientist who has developed machine learning toolkits for neuroscientists, says.

But at the very least, the neuroscientists I spoke to were excited about machine learning because it makes for cleaner science. Machine learning combats the “multiple comparisons problem,” wherein researchers essentially go fishing for a statistically significant result in their data (with enough brain scans, some region is bound to “light up” somewhere). With machine learning, either your prediction about brain behavior is correct or it’s not. “Prediction,” Varoquaux says, “is something you can control.”
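A quick synthetic demonstration of why prediction is harder to fool: with far more “voxels” than subjects, a flexible model can “explain” pure noise on the data it was fit to, but cross-validated prediction on held-out data stays at chance. (A sketch assuming scikit-learn, with made-up data.)

```python
# Fishing vs. prediction: pure-noise "brain data" looks impressive
# in-sample but falls to chance under cross-validation. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 2000))   # 100 subjects, 2,000 noise "voxels"
y = rng.integers(0, 2, 100)            # labels with no real relationship to X

# Fit and score on the same data: near-perfect, and meaningless.
print(LogisticRegression(max_iter=1000).fit(X, y).score(X, y))   # ~1.0

# Score on held-out folds: back to coin-flip territory.
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())  # ~0.5
```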

A big data approach also means neuroscientists may be able to start studying behavior outside the confines of the lab. “All of our traditional models of how brain activity works are based on these very artificial experimental settings,” Ghuman says. “We’re not entirely clear that it works the same way when you get to the real-world setting.” If you have enough data on brain activity (perhaps from a wearable EEG monitor) and behavior, machine learning could start to find the patterns that connect the two without the need for contrived experiments.

And there’s one last possibility for the use of machine learning in neuroscience, and it sounds like science fiction: We can use machine learning on the brain to build better machine learning programs. The biggest advance in machine learning in the past 10 years is an idea called “convolutional neural networks.” This is what Google uses to recognize objects in photos. These neural nets are based on theories in neuroscience. So as machine learning gets better at understanding the brain, it may pick up new tricks and grow smarter. Those improved machine learning programs can then be turned back on the brain, and we can learn even more about neuroscience.
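For a sense of what “convolutional” means, here is the core operation in miniature: a tiny filter slid across an image, loosely analogous to the edge-detecting cells discovered in visual cortex. A toy numpy/scipy sketch, not Google’s system:

```python
# A convolution slides a small filter across an image -- the building
# block of convolutional neural nets. Toy example: detect a vertical edge.
import numpy as np
from scipy.signal import convolve2d

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # dark on the left, bright on the right

edge_filter = np.array([[1.0, -1.0]])    # responds where brightness changes

response = convolve2d(image, edge_filter, mode="valid")
print(response)                          # nonzero only along the dark-to-bright edge
```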

(Researchers can also pull insights from machine learning programs trained to reproduce a human behavior like vision. Perhaps in learning the behavior, the program will reproduce the way the brain actually does it.)

“I don’t want people to think we’re suddenly going to get brain-reading devices; that’s not true,” Varoquaux says. “The promises are to get richer computational models to better understand the brain. I think we’re going to get there.”
