Can you tell if someone is lying?
Close your eyes. You’re already almost twice as good as you were before.
Our voices change in an instant. When you’re hit by a surge of adrenaline, your fight-or-flight response triggers muscles around your larynx, making your voice high-pitched and wobbly. When you answer the phone to someone you love, your voice softens and deepens. When someone lies, the rhythm and intonation of their speech change. And, weirdly, you are almost twice as good at spotting that distortion if you only hear – not see – them speak.
Our voices give away a huge amount of information with every sentence, and human beings are remarkably good at interpreting these subtleties. But what exactly are our voices revealing, and how do our brains process that information?
I volunteered as a Samaritan at university. After the initial training, I spent hundreds of hours listening to callers as they talked about everything from unreciprocated crushes to financial crises to the death of someone dear. The listening role was vital – Samaritans helps thousands of people a year – but as I continued, I found myself getting more and more fascinated by voices and how we process the information they provide.

For starters, human beings are brilliant at deducing information from just a few words, partly because our physique dictates many aspects of our voice. “Voices are an instrument and they reflect our physical nature,” says Prof Sophie Scott, the director of the Institute of Cognitive Neuroscience at University College London. “If you think about a ukulele, a guitar and a violin, their sound is defined by the material they are made out of, the number of strings and how you play them. The voice is the same.”
We are good at telling height because taller people have longer vocal tracts and therefore produce lower vocal tract resonances. A man’s voice is usually roughly one octave lower than a woman’s. As we age, the cartilage of the larynx may harden, making a voice hoarser or weaker. Interestingly, a woman’s voice may become lower because of this effect, while a man’s may become higher.
Research has even shown that women’s voices get higher in the days leading up to and during ovulation, because the larynx reacts to the amount of oestrogen in their bodies. Your voice also reveals if you are smiling or not, because your smile changes the shape of your mouth and the acoustic characteristics of your voice, producing a warmer, brighter and slightly higher-pitched tone.
This vast range of information is often received subconsciously. “We’re very good at telling if someone is ill from their voice, for example,” says Scott. “The vocal folds get inflamed and vibrate differently.”
We also make other calculations. “We can tell where someone comes from by their accent and we often assess their socioeconomic status,” says Scott – though these aspects of our voices change, too. If you hear a lot of vocal fry in someone’s voice – the low-frequency Kardashian-style “whateverrrr” – you might guess at their TV viewing habits. Even the late queen’s voice changed significantly over her lifetime. “Voices are aspirational,” says Scott. “We had a charismatic senior person working here and everyone suddenly started talking like her. You change your voice depending on who you’re talking to.”
I went to a French school until I was 13 and I can still tell immediately if someone mostly speaks French. Different languages use different facial muscles, creating specific movements of the jaw, cheeks and tongue. French speakers don’t use the muscles at the top of their cheeks in the same way as a typical English speaker, and you can usually tell from their voice, no matter how perfect the English accent. My father, on the other hand, grew up just outside Glasgow and his party trick was to tell someone which area of Scotland they came from. He would then inform them which town. But it was when he told canny old Glaswegians which street they had grown up on that jaws would drop.
Of course, that was a few decades ago. Accents used to change roughly every 25 miles across the UK. Nowadays, the distinctions are much less marked and Scott warns that we should not set too much store by them. “People project a lot on to voices. Your reaction will often tell you more about your bias than about the other person.”

We make these assessments astonishingly fast. “When we hear someone speak, our brain starts evaluating voice cues within an eyeblink, or 200 milliseconds,” says Prof Silke Paulmann, the executive dean of the Faculty of Science and Health at the University of Essex. “Before we’ve fully processed the words or meaning, the brain has already started [analysis]. A wide variety of studies have shown that listeners pick up cues about emotions, motivations, engagement or attitude. I call this the ‘social intention’ of the speaker. Within an eyeblink, we can hear if someone is warm or cold, calm or stressed, positive or negative.”
These characteristics have evolved over millions of years. The superficially simple process of speaking and listening – one of the key elements of the transition from ape to Homo sapiens – is in fact enormously complex. As listening evolved from a defensive mechanism for detecting danger to a vital communication tool with complex language, our vocal structures, ears and brains all had to evolve: vocal structures to make sounds, ears to hear them, and the brain to form and interpret those sounds.
This process probably began around 27m years ago, when our ancestors began to understand the difference between vowel sounds. Progress, however, was not fast. In the same way that your coccyx is the vestigial remains of a tail, humans retain auricular muscles – allowing ears to move, as seen in cats and dogs. Perhaps sadly, we seem to have lost our ability to swivel our ears around 25m years ago. Meanwhile, the hyoid bone in the throat – crucial to more sophisticated vocalisations – appeared “only” about half a million years ago.
This evolution created idiosyncrasies, and one of them is to make us less effective at identifying liars. Dora Giorgianni at the University of Portsmouth’s International Centre for Research – who discovered that people are better at identifying lies when they can only hear them – says that this is because humans have a limited capacity to process information, meaning that both attention and memory can become overloaded when individuals have to follow audio and visual information at the same time. While I was listening at Samaritans, I found I could read people better by talking to them over the phone because all my attention was focused on their voice alone; from Giorgianni’s analysis, this seems to be correct.
In Giorgianni’s tests, some participants watched a video with audio of a mock suspect being interviewed, while others only listened to the audio. “Participants who only listened to the audio achieved substantially higher overall accuracy [in assessing lies] – 61.7% – than those who watched the video with sound – 35%,” says Giorgianni. “When too much information is presented at once – for example, visual details, facial expressions, body movements, tone of voice and the actual content of what is being said – the cognitive system must continually select what to focus on and what to ignore, which increases the risk of making inaccurate judgments.” Other research by the University of Portsmouth into juries during the pandemic concluded that the wearing of face masks actually improved a jury’s ability to differentiate between truth and lies.
“From an intuitive or evolutionary perspective, one might assume that seeing facial expressions, gestures and posture should help humans detect deception,” says Giorgianni. “However, modern investigative settings differ from ancestral environments. The cues that matter for survival are not the same as those that distinguish a practised liar from a truthful witness in an investigative interview.”
It is also the case that some of the clues we have been taught to expect – talking faster, voice rising – appear in some people but not others. Those clues are also an indicator of stress – and you can be stressed without lying. “There is no single verbal cue that ‘gives away’ lying in a strong or reliable way,” says Giorgianni. “Common beliefs about nonverbal indicators of deception are frequently inaccurate and a clear, reliable ‘Pinocchio’s nose’ simply does not exist.”

The difficulty involved in spotting a liar is familiar territory to Harriet Tyce, a novelist and a recent contestant on The Traitors. “What’s most surprising about the difficulties of spotting a liar on The Traitors is that one goes into it knowing that everyone could be – and in fact pretty much is – lying about something, which means that it should in theory be almost impossible not to spot it,” says Tyce. “But I think we are hardwired as humans to trust, and trying to override that instinct is nearly impossible.”
This doesn’t stop us trying. Several companies promise a variety of AI-driven analyses to identify lying, which track voice, along with facial muscle movements, eye tracking and brain activity. But Dr Frederika Holmes, a consultant specialising in the forensic analysis of speech and language samples who frequently acts as an expert witness, says limitations remain in voice analysis.
“Voices aren’t like DNA, which doesn’t change over the course of your life and can be directly compared from one sample to the next,” says Holmes. “Voices are plastic and they change depending on circumstances, so we can’t say with absolute certainty. We assess the points of similarity and difference and reach a conclusion regarding the strength of the evidence.”
Ultimately, if you listen closely enough to a voice, it will tell you some of its secrets. But it still won’t tell you everything.
The Good Listener by Holly Watt is published by Raven Books (£18.99).
