How to catch a liar

Speed read
  • Involuntary facial expressions may indicate a speaker’s true feelings
  • Deep neural networks (DNN) can learn to detect deceit by analyzing expressions
  • Behavioral models narrow down sequences of suspicious events and could help humans respond to security threats

You’ve heard that a picture is worth a thousand words, but the tiniest movements of our facial muscles may be worth even more.

Microexpressions are involuntary expressions that flash across our faces in less than 1/25th of a second. All people make them, and they are very difficult to disguise or suppress.

<strong>When we practice to deceive.</strong> Tiny movements of the facial muscles give away our true feelings. Computers can be trained to recognize these microexpressions and contrast them with speech to detect deception. Courtesy Unsplash/Aatik Tasneem.

For example, in a normal, honest laugh, a person’s mouth spreads wider and their eyes crinkle and become smaller. But someone who fakes a laugh moves only their mouth and the eyes remain unaffected.

We learn nonverbal communication in early childhood, imitating our parents as babies. But once verbal language develops, our awareness of microexpressions dims, even though the physical movements remain.

Scientists at The University of Texas at San Antonio (UTSA) believe that a computer can be trained to focus on those remnants of physical expression in order to uncover not just what we say, but what we actually mean.  

“Our body language speaks faster than our verbal language. And in our body language we have less control,” says Paul Rad, associate professor of Information Systems and Cyber Security at UTSA. “If a question impacts our emotions, the face expresses that before you start constructing the sentences to respond.”

Rad has taught a deep neural network to analyze video images and spot when someone is lying. The model combines detailed observation of facial microexpressions with text analysis of the speaker’s words to identify deceit.
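The details of Rad’s network aren’t given in the article, but the idea of combining two modalities can be illustrated with a minimal late-fusion sketch: features from the face and features from the speech are concatenated and fed to a classifier. All names, dimensions, and the single logistic unit below are illustrative assumptions, not the actual model.

```python
import numpy as np

# Hypothetical late-fusion sketch. The feature sizes and the final
# logistic unit are stand-ins for a real multi-modal deep network.

rng = np.random.default_rng(0)

def fuse(face_features, text_features):
    """Concatenate the two modalities into one feature vector."""
    return np.concatenate([face_features, text_features])

def predict_deception(fused, weights, bias):
    """One logistic unit standing in for the network's final layer."""
    score = fused @ weights + bias
    return 1.0 / (1.0 + np.exp(-score))  # probability of deception

face = rng.normal(size=16)   # e.g. microexpression intensities per frame
text = rng.normal(size=8)    # e.g. an embedding of the spoken words
w = rng.normal(size=24)      # learned weights (random here)
p = predict_deception(fuse(face, text), w, 0.0)
print(float(p))              # a value between 0 and 1
```

In a real system each modality would pass through its own learned encoder before fusion; the point of the sketch is only that the face and the words contribute jointly to one decision.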

We wanted to see if with machine learning we could predict or discover human deception. The answer was yes. ~ Paul Rad

Practice to deceive

Rad’s team trained their deep neural network on prior examples of truthful and deceptive behavior, teaching it to generalize the concept of deception. They then applied it to cases it hadn’t previously seen.

<strong>Pants on fire.</strong> By analyzing facial microexpressions, Rad's deep neural network can spot when Armstrong tells a whopper.

They tested the model with Lance Armstrong’s 2013 interview with Oprah Winfrey, in which she asked him about allegations of doping. Though Armstrong admitted to past drug use, he also, it was later revealed, lied about more recent doping. The model was able to detect those lies.

The same learning model can also be trained to detect other states, such as pain. Doctors and caregivers need to know the level of pain a patient is experiencing. But for patients who can’t communicate, because of sedation, cognitive impairment, or young age, a trained neural network could evaluate pain intensity by observing facial microexpressions.

These sophisticated multi-modal machine-learning models are extremely compute- and storage-intensive, so Rad turns to the Jetstream research cloud for a computing environment that can be scaled up on demand.

“One of the most attractive things about Jetstream is that my students don’t need to be computationally savvy from day one,” says Rad. “They can come to the environment with the goal of focusing on their algorithm without worrying about the compute or storage aspects—they just use the platform.”

<strong>Crime signature.</strong> Deception detection models may help combat cybercrime by recognizing the unique identity of hackers when they attempt to deceive. Courtesy Unsplash/Jefferson Santos.

Rad is also interested in applying his deception detection models in cybersecurity. This may seem unlikely at first, given that most cyberattacks are deployed over networks by unknown—and unseen—actors.

But the person at the keyboard on the other side of a cyberattack follows a similar thought process, and that behavior can be modeled. He or she still intends to deceive.

“We won’t see his face. We won’t hear his voice. But we will see the way he touches the network,” says Rad. “We can discover the signature of the way he touches the file. There is the potential to capture his behavior at target environments and define it as a unique identifier.”

By any other name

Another use for Rad’s model is re-identification: recognizing the same person again across different footage.

In the aftermath of the Boston Marathon bombing in 2013, it took federal agencies several days to comb through all the video footage of the crowds to identify possible suspects.

<strong>Where's Waldo?</strong> Facial analysis models could assist with re-identification, i.e., tracking the same individual across multiple cameras, backgrounds, and expressions. Courtesy Unsplash/Anna Dziubinska.

But if a computer could recognize a person’s identity across various cameras, backgrounds, and scenarios, it would save authorities valuable time. Police could quickly assemble a timeline of the suspect’s movements surrounding an incident, and begin to bring knowledge to a chaotic situation.

Rad envisions using Jetstream to build the re-identification model and then encapsulate the learning algorithm in a smaller footprint and push it to a stand-alone device. A small ‘smart’ chip embedded in surveillance cameras could perform detection and identification in real-time as events unfold.
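The article doesn’t say how the model would be shrunk to fit a small chip, but one common technique for reducing a trained model’s footprint is weight quantization: storing 32-bit float weights as 8-bit integers plus a scale factor. The sketch below illustrates that generic idea and is an assumption, not Rad’s actual deployment pipeline.

```python
import numpy as np

# Illustrative sketch: post-training 8-bit quantization of model weights,
# one common way to shrink a model for an embedded device.

def quantize(weights):
    """Map float32 weights onto int8 with a per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the device at inference time."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)
print(q.nbytes, w.nbytes)                # int8 payload is 4x smaller
print(float(np.abs(w - w_hat).max()))    # reconstruction error, at most one step
```

The trade-off is a small, bounded loss of precision (at most half a quantization step per weight) in exchange for a four-fold reduction in storage, which is what makes on-camera inference plausible.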

“We’re using Jetstream to build little brains for the Internet of Things,” says Rad. “Their individual computation is small, but in a group or social swarm way, they could discover things for us that are really significant.”


Copyright © 2023 Science Node ™

