Artificial intelligence is changing the way we look at the world by providing meaningful access to data that makes people’s lives more efficient, and by powering programs and services that help with everyday tasks, from social networking with friends to getting around via ride-sharing apps. The most common examples of AI in everyday life are travel navigation, smart home devices, smartphones, drones, and smart cars.
But what if AI could provide deeper insight into the way people’s emotions are perceived and expressed? We usually read emotions through facial expressions, and most of us can tell how someone might feel from the look on their face. For autistic people, however, reading facial expressions can be difficult, and it’s not entirely clear why.
Recently, Kohitij Kar, a researcher in the lab of MIT Professor James DiCarlo, published his work in The Journal of Neuroscience, and it may provide more insight into how the brain works. The researchers believe that machine learning could be an effective tool for modeling the computational capabilities of the human brain, opening up new avenues of study.
The answer may lie in two distinct brain regions across which most of the differences can be traced: the inferior temporal cortex of the primate (including human) brain, which is associated with facial recognition, and the amygdala, which receives input in part from the inferior temporal cortex and processes emotions.
Mimicking the function of the human brain
Kar’s work began by examining data from Ralph Adolphs of Caltech and Shuo Wang of Washington University in St. Louis. The data came from an experiment in which Wang and Adolphs presented images of faces to two groups of participants: one made up of neurotypical adults and one made up of autistic adults.
A software program generated the images, which ranged along a spectrum from anxious to happy. Participants in each group had to judge whether each face depicted happiness or not. Wang and Adolphs found that, compared to the neurotypical group, the autistic adults reported more faces as happy.
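The group-level pattern above can be pictured as two psychometric curves over the anxious-to-happy morph spectrum. The sketch below is a hypothetical illustration, not the study’s data: each group’s probability of calling a face “happy” is modeled as a logistic function of the morph level, with parameter values invented only to mimic the reported tendency of the autistic group to label more faces as happy.

```python
import numpy as np

# Hypothetical sketch (not the study's data): model the probability of a
# "happy" judgment as a logistic function of the morph level, where
# 0 = anxious and 1 = happy. The thresholds below are invented to mimic
# the reported pattern: a lower threshold means more faces judged happy.

def p_happy(morph_level, threshold, slope=10.0):
    """Probability of a 'happy' judgment at a given morph level."""
    return 1.0 / (1.0 + np.exp(-slope * (morph_level - threshold)))

morphs = np.linspace(0.0, 1.0, 11)             # anxious -> happy spectrum
neurotypical = p_happy(morphs, threshold=0.55)
autistic = p_happy(morphs, threshold=0.40)     # hypothetical lower threshold

# The autistic curve sits above the neurotypical one at every
# intermediate morph level: more faces are reported as happy overall.
print(autistic.mean() > neurotypical.mean())
```

With a lower decision threshold, the same stimuli cross into the “happy” category earlier along the spectrum, which is one simple way to summarize the behavioral difference the experiment measured.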
Kar then took the data from Wang and Adolphs’ study and trained an artificial neural network to perform the same task. The network is made up of units that, like biological neurons, process visual information; it learns to recognize certain signals and decide which faces appear happy.
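The general recipe here, fitting a simple classifier on top of learned visual features so it reproduces human happy/not-happy judgments, can be sketched in a few lines. This is a minimal illustration under assumptions of my own, not the study’s actual model: the features, dimensions, and training loop below are synthetic stand-ins for the deep-network representations and fitting procedure the paper used.

```python
import numpy as np

# Minimal sketch, not the study's model: generate a synthetic feature
# space (two informative dimensions plus noise stand in for visual
# features) and fit a logistic-regression "readout" by gradient descent
# to predict happy vs. not-happy labels.

rng = np.random.default_rng(0)

n, d = 400, 10
X = rng.normal(size=(n, d))                   # synthetic face features
true_w = np.zeros(d)
true_w[:2] = [2.0, -1.5]                      # only 2 dims carry signal
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
lr = 0.5
for _ in range(500):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted P(happy)
    w -= lr * X.T @ (p - y) / n              # logistic-loss gradient

pred = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = (pred == y.astype(bool)).mean()
```

The readout’s accuracy on its training judgments ends up high because the synthetic labels are mostly determined by the two informative feature dimensions; in the study, the analogous readout was fit to match each group’s actual judgments.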
The findings showed that Kar’s network distinguished happy from unhappy faces with an accuracy closer to that of the neurotypical group than to that of the autistic adults. Using these results, Kar was then able to dissect the network to see how and where its judgments diverged from those of the autistic adults.
Interestingly, Kar found that the inferior temporal cortex was, in part, responsible for processing the visual cues that are able to differentiate and identify emotions in facial expressions.
“These are promising results,” says Kar. “Better models of the brain will come, but often we don’t have to wait in the clinic for the absolute best product.”
In fact, in a clinical setting, artificial intelligence could be used for a more efficient and effective diagnosis of autism and to detect certain features of autism at an earlier stage. This was demonstrated when individual neural networks were taught to match the judgments of autistic adults and neurotypical controls.
The readout “weights” in the network matched to the neurotypical controls were much stronger than those in the network matched to the autistic adults; this was evident in both the negative or “inhibitory” and the positive or “excitatory” weights. This suggests that sensory neural connections in the brains of autistic adults may be noisier or less efficient.
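The link between noisy responses and weaker learned weights can be demonstrated in miniature. In this hypothetical sketch (my own construction, not the study’s analysis), the same logistic readout is fit twice: once to clean judgments and once to judgments corrupted by random label noise. The noisy fit converges to systematically smaller weight magnitudes, analogous to the weaker excitatory and inhibitory weights in the network matched to the autistic adults’ responses.

```python
import numpy as np

# Hypothetical illustration of "noisier responses -> weaker weights":
# fit the same logistic readout to clean labels and to labels with 30%
# random flips, then compare the learned weight magnitudes.

rng = np.random.default_rng(1)

n, d = 500, 8
X = rng.normal(size=(n, d))                  # synthetic features
true_w = rng.normal(size=d)
clean = (X @ true_w > 0).astype(float)       # noise-free judgments
flip = rng.random(n) < 0.3                   # 30% of labels flipped
noisy = np.where(flip, 1.0 - clean, clean)   # noisy judgments

def fit_readout(X, y, steps=500, lr=0.5):
    """Fit a logistic-regression readout by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_clean = fit_readout(X, clean)
w_noisy = fit_readout(X, noisy)

# Label noise caps the readout's confidence, shrinking the learned
# weights toward zero relative to the clean fit.
print(np.abs(w_noisy).mean() < np.abs(w_clean).mean())
```

The intuition is that with random flips, no weight setting can be very confident on any stimulus, so the optimal weights stay small; with clean labels, the readout can commit to large weights.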
To test how effectively the neural network pinpointed the inferior temporal cortex as the part of the human brain primarily responsible for recognizing facial expressions, Kar used the data from Wang and Adolphs to evaluate the role of the amygdala. He concluded that the inferior temporal cortex was the main driver of amygdala function in this task.
Ultimately, the work helps demonstrate how useful computational models, especially image-processing neural networks, can be in the clinical setting for better diagnoses of autism and perhaps other cognitive conditions.
“Even if these models are far from brains, they are falsifiable, rather than people just making up stories… To me, that’s a more powerful version of science.”
Kohitij Kar, a researcher in MIT Professor James DiCarlo’s lab
References and further reading
Kar, K. (2022) A computational study of the behavioral and neural markers of atypical facial emotion processing in autism. The Journal of Neuroscience [online] pp. JN-RM-2229-21. Available at: https://www.jneurosci.org/content/early/2022/05/23/JNEUROSCI.2229-21.2022
Hutson, M. (2022) Artificial neural networks model facial processing in autism [online] MIT News | Massachusetts Institute of Technology. Available at: https://news.mit.edu/2022/artificial-neural-networks-model-face-processing-in-autism-0616