
The Future Of Lie Detection (Theories; Techniques; Etc.)

Anonymous

Guest
This is for those who are missing this topic in the Sniper thread.
------------------------------------------------------------------------------

http://www.brainwavescience.com/

Farwell Brain Fingerprinting is a revolutionary new technology for investigating crimes and exonerating innocent suspects, with a record of 100% accuracy in research on FBI agents, research with US government agencies, and field applications.

The technology is proprietary and patented. Brain Fingerprinting fulfills an urgent need for government, law enforcement agencies, corporations, and individuals. Over a trillion dollars are spent annually on crime fighting worldwide.

Brain Fingerprinting solves the central problem by determining scientifically whether a suspect has the details of a crime stored in his brain. It has received extensive media coverage around the world. The technology is fully developed and available for application.

Brain Fingerprinting is a powerful tool for the investigation of suspected terrorists. Measuring brain wave activity while suspects are shown words or pictures related to specifics of the September 11, 2001 attacks can help determine if they are members of terrorist cells. Brain Fingerprinting can identify trained terrorists before they strike.
 
It looks like someone's actually trying to use brain fingerprinting; let's hope it's a bit more reliable than polygraphs at getting to the truth.

At: http://news.bbc.co.uk/1/hi/sci/tech/3495433.stm

Brain fingerprints under scrutiny

By Becky McCall
in Seattle

The technique relies on electrical signals in the brain
A controversial technique for identifying a criminal mind using involuntary brainwaves that could reveal guilt or innocence is about to take centre stage in a last-chance court appeal against a death-row conviction in the US.

The technique, called "brain fingerprinting", has already been tested by the FBI and has now become part of the key evidence to overturn the murder conviction of Jimmy Ray Slaughter who is facing execution in Oklahoma.

Brain Fingerprinting, developed by Dr Larry Farwell, chief scientist and founder of Brain Fingerprinting Laboratories, is a method of reading the brain's involuntary electrical activity in response to a subject being shown certain images relating to a crime.

Unlike the polygraph, or lie detector, to which it is often compared, the technique's claimed accuracy lies in its ability to pick up an electrical signal, known as a P300 wave, before the suspect has time to affect the output.
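For readers curious what "picking up the P300" involves in practice, here is a minimal sketch of the standard ERP-averaging idea (not Farwell's proprietary method): a single EEG trial is far too noisy to show the P300, so many stimulus-locked trials are averaged, and the time-locked response survives while random background activity cancels. All signals below are synthetic, and the window and amplitudes are illustrative.

```python
# Sketch of event-related potential (ERP) averaging; synthetic data only.
import numpy as np

FS = 250                      # sampling rate, Hz
EPOCH = int(0.8 * FS)         # 800 ms window after each stimulus
N_TRIALS = 100

rng = np.random.default_rng(0)
t = np.arange(EPOCH) / FS

# Synthetic "recognition" response: a positive bump peaking ~300 ms.
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def trial(recognised: bool) -> np.ndarray:
    """One trial = background EEG noise, plus a P300 if the probe is recognised."""
    noise = rng.normal(0, 20e-6, EPOCH)   # ~20 uV RMS background
    return noise + (p300 if recognised else 0)

def average_erp(recognised: bool) -> np.ndarray:
    return np.mean([trial(recognised) for _ in range(N_TRIALS)], axis=0)

erp_probe = average_erp(True)        # probe stimuli (crime details)
erp_irrelevant = average_erp(False)  # irrelevant stimuli

# Crude "information present" test: mean amplitude in the 250-450 ms
# window, where the P300 is expected, probe vs. irrelevant.
win = slice(int(0.25 * FS), int(0.45 * FS))
diff = erp_probe[win].mean() - erp_irrelevant[win].mean()
print(f"probe - irrelevant amplitude: {diff * 1e6:.2f} uV")
```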

"It is highly scientific, brain fingerprinting doesn't have anything to do with the emotions, whether a person is sweating or not; it simply detects scientifically if that information is stored in the brain," says Dr Farwell.

"It doesn't depend upon the subjective interpretation of the person conducting the test. The computer monitors the information and comes up with information present or information absent."

Dr Larry Farwell
Brain fingerprinting is admissible in court for use in identifying or exonerating individuals in the US.

Maximum security

A few days ago Dr Farwell ran the test on Jimmy Ray Slaughter at the maximum security state prison in Oklahoma.

A jury convicted Slaughter of shooting, stabbing and mutilating his former girlfriend, Melody Wuertz, and of shooting to death their eleven-month-old daughter, Jessica.

The crimes for which he was sentenced to death took place in a house he was very familiar with. The results were revealing.

"Jimmy Ray Slaughter did not know where in the house the murder took place; he didn't know where the mother's body was lying or what was on her clothing at the time of death - a salient fact in the case," says Dr Farwell.

During the test, the suspect wears a headband equipped with sensors to measure activity in response to recognition of an image relating to the crime - for example, a murder weapon or possibly a code word in the case of a spy.


Dr Farwell claims some tests were 100% accurate
"In research with the FBI, we presented words and phrases that only an FBI agent would know and we could tell by the brain responses who was an FBI agent and who was not; we could do that with 100% accuracy," says Dr Farwell.

Brain Fingerprinting has profound implications for the criminal justice system.

Any decision relies on more than just the outcome of a forensic test such as brain fingerprinting. In the light of these findings, however, the appeal team hopes that Slaughter will be granted a pardon, clemency or a retrial.

Critics of brain fingerprinting believe it needs far more refinement before its use becomes widespread and cases are won and lost on its evidence.

Needless to say, Dr Farwell disagrees.

"What I can say definitively from a scientific standpoint, is that Jimmy Ray Slaughter's brain does not contain a record of some of the most salient details about the murder for which he's been convicted and sentenced to death," says Dr Farwell.
 
This is terrifying. We know far too little about how the brain works to be making life-or-death decisions based on crude measurements of its activity. It's just polygraphs all over again :(
 
BBC News Online: The future of lying
By Chris Summers, Friday, 14 January, 2005

...
Things to come: Just imagine having to pass through the brain reader at the airport, while customs asks you a few questions.
 
The future of lying

As the British government unveils plans to make lie detector tests mandatory for convicted paedophiles, some scientists in the US are working on more advanced technology which might be better equipped at detecting deception.

Imagine the Pentagon equipped with a machine which can read minds. Sound like the plot of a Hollywood thriller?

Well, it might not be that far away.


How conventional lie detectors work
The US Department of Defense has given Dr Jennifer Vendemia a $5m grant to work on her theory that by monitoring brainwaves she can detect whether someone is lying.

She claims the system has an accuracy of between 94% and 100% and is an improvement on the existing polygraph tests, which rely on heart rate and blood pressure, respiratory rate and sweatiness.

Her system involves placing 128 electrodes on the face and scalp, which translate brainwaves in under a second. Subjects only have to hear interrogators' questions to give a response.

But the system has a long way to go before it replaces polygraphs, which were invented almost a century ago and remain a tried and tested system of deception detection.

Paedophile tests

On Thursday the UK government unveiled its Management of Offenders and Sentencing Bill.

POLYGRAPH PILOT AREAS
West Midlands
Thames Valley
Northumbria
Northamptonshire
Greater Manchester
London
Leicestershire and Rutland
Lancashire
Devon and Cornwall
Bedfordshire, Hertfordshire and Cambridgeshire
A key plank of the bill is increasing the use of polygraph tests for convicted paedophiles who have been released on licence.

A voluntary scheme has been running in 10 pilot areas in England since September 2003.

But under the new bill the tests will become compulsory for paedophiles in the 10 pilot areas.

They are asked whether they have had contact with children, while having their anxiety levels measured.

But some critics believe the polygraph is flawed.

"The idea with polygraphs is that there is a tell-tale physical response associated with deception and I just don't accept that is true.

"Even if it were true for the normal person then I don't think it's true for psychopaths, or others with mental abnormalities," says Steven Aftergood, of the Federation American Sciences.


"The mouth may lie, but the face it makes nonetheless tells the truth "

Friedrich Nietzsche
Philosopher

Mr Aftergood says he doesn't know about Dr Vendemia's invention but "if there was a machine which was able to read people's minds, it would give greater urgency to questions of people's privacy.

"In the United States it could even be unconstitutional because, under the Fifth Amendment, citizens have a right not to self-incriminate themselves."

In the US a specific piece of legislation, the Employee Polygraph Protection Act, forbids firms from using lie detectors to vet workers.

The one exception is the intelligence community, where polygraphs are a ubiquitous form of checking on existing and potential employees.

Dr Vendemia says her system would be an improvement on polygraphs.

"If you are examined by a good interrogator a polygraph will be 85 to 90% accurate," she says. "But others have less than 50% accuracy. My technology has levels of accuracy around 94 to 100%."

Dr Vendemia says her research has found that it takes longer for the brain to process lies than to process the truth, and this, she says, can be tested by monitoring the brainwaves.


The new system relies on brainwaves
Her work is funded by US government grants, but she says there are ethical questions arising from it.

Could it be used, for example, to help in the interrogation of innocent people accused of being al-Qaeda terrorists?

"Anything can be misused. As a researcher working with technology which has huge implications you have a responsibility to make sure that what you are doing is ethical and make sure there is someone more objective than you looking at what you do," says Dr Vendemia.

Professor Paul Matthews, a neuroscientist at Oxford University, says a mind-reading machine is pure science fiction. "There is no technology which can tell somebody what you are thinking. But you can see what sort of areas of the brain are active. It is the same sort of technology which is used in hospitals with MRI and EEG scanners."

Tor Butler-Cole, a philosopher and ethicist from King's College, London, thinks we should be wary of allowing this technology to be used if it is not 100% accurate.

"The recent controversy with cot deaths has taught us that we should be aware of relying on science which may turn out to be wrong," she says.

Ms Butler-Cole believes there is also the danger jurors would give it a lot of credibility simply because it was "scientific evidence".

Dr Vendemia was one of a number of experts discussing the subject of "Criminal Memories" in a special debate at the Dana Centre in London on Thursday. The event will be shown on a webcast next week.

HOW A LIE DETECTOR WORKS

A polygraph works on the principle that a person who is lying will show signs of stress:
Pneumographs (1) measure breathing rate
Galvanometers (2) test how much the subject is sweating by measuring the skin's electrical resistance
A cuff (3) measures heart rate and blood pressure, which increase under stress
The results from each instrument appear as wave patterns. By comparing these patterns with those recorded when the subject was definitely telling the truth, the examiner can spot a potential lie.
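As a rough numerical illustration of that comparison step, the toy sketch below scores each channel's reading during a relevant question against the subject's own truthful baseline. The channel names, values and threshold are invented for illustration; this is not an operational polygraph protocol.

```python
# Toy baseline comparison: flag channels that sit unusually far above
# the readings taken while the subject was definitely telling the truth.
from statistics import mean, stdev

def channel_zscore(baseline: list[float], reading: float) -> float:
    """How many baseline standard deviations the reading sits above."""
    return (reading - mean(baseline)) / stdev(baseline)

def score_question(baseline: dict, reading: dict, threshold: float = 2.0) -> dict:
    flags = {}
    for channel, values in baseline.items():
        z = channel_zscore(values, reading[channel])
        flags[channel] = z > threshold     # unusually elevated response?
    return flags

# Baselines gathered while the subject answered neutral questions.
baseline = {
    "breathing_rate": [14, 15, 14, 16, 15],         # breaths/min (pneumograph)
    "skin_conductance": [2.1, 2.0, 2.2, 2.1, 2.0],  # microsiemens (galvanometer)
    "heart_rate": [72, 74, 71, 73, 75],             # beats/min (cuff)
}
reading = {"breathing_rate": 21, "skin_conductance": 3.4, "heart_rate": 95}
print(score_question(baseline, reading))
# -> {'breathing_rate': True, 'skin_conductance': True, 'heart_rate': True}
```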

http://news.bbc.co.uk/2/hi/uk_news/magazine/4169313.stm
 
Scientist: MRIs Can Serve As Lie Detectors
Tue Sep 27, 10:35 PM ET



A scientist at the Medical University of South Carolina has found that magnetic resonance imaging machines also can serve as lie detectors.

The study found MRI machines, which are used to take images of the brain, are more than 90 percent accurate at detecting deception, said Dr. Mark George, a distinguished professor of psychiatry, radiology and neurosciences.

That compares with polygraphs that range from 80 percent to "no better than chance" at finding the truth, George said.

His results are to be published this week in the journal Biological Psychiatry.

Software expected to be on the market next year could make it easier to tell if someone is a liar, which has implications for law enforcement.

Researchers at MUSC conducted the study using 60 healthy men. Some were offered extra money if they could manage to trick the machine, but none could.

"We had some of our study group try to dupe us, and they were unable," George said.

The MRI images show that more blood flows to parts of the brain associated with anxiety and impulse control when people lie. More blood also flows to the part of the brain handling multitasking because it is hard for people to keep track of lies they have told.

In the study, researchers had participants commit a mock theft. Then questions about the theft were projected onto a screen while they were inside the MRI machine. Participants pressed a button to respond to the yes or no questions.

The test won't work if people don't remain still in the MRI machine, since a clear image of the brain can't otherwise be recorded. And some people's brains don't seem to show the same changes while lying.

It's also not clear whether certain psychiatric conditions might change the test results.

Source
___

Information from: The Post and Courier, http://www.charleston.net

(Url tidied up - stu)
 
I was under the impression that the polygraph wasn't recognised as a legitimate source of evidence under British law.
 
I think you are correct - results rest on interpretation... From what I remember, police etc. are actually no better than lay people at detecting lies - indeed, I think Carlson (2004) maintains that one of the only physical indicators with any accuracy is someone's voice going up in pitch when talking.
 
If someone accuses you of being a terrorist or a paedophile, then straps you into a brain scanner - I think you are going to be apprehensive, innocent or not.

There is no way to reliably separate the nervous innocent from the nervous guilty; conversely, certain types of psychopathic individuals might give aberrant readings which don't show up as lies, or might not even realise they are lying at all.
 
Don't Even Think About Lying
How brain scans are reinventing the science of lie detection.
By Steve Silberman



I'm flat on my back in a very loud machine, trying to keep my mind quiet. It's not easy. The inside of an fMRI scanner is narrow and dark, with only a sliver of the world visible in a tilted mirror above my eyes. Despite a set of earplugs, I'm bathed in a dull roar punctuated by a racket like a dryer full of sneakers.

Functional magnetic resonance imaging - fMRI for short - enables researchers to create maps of the brain's networks in action as they process thoughts, sensations, memories, and motor commands. Since its debut in experimental medicine 10 years ago, functional imaging has opened a window onto the cognitive operations behind such complex and subtle behavior as feeling transported by a piece of music or recognizing the face of a loved one in a crowd. As it migrates into clinical practice, fMRI is making it possible for neurologists to detect early signs of Alzheimer's disease and other disorders, evaluate drug treatments, and pinpoint tissue housing critical abilities like speech before venturing into a patient's brain with a scalpel.

Now fMRI is also poised to transform the security industry, the judicial system, and our fundamental notions of privacy. I'm in a lab at Columbia University, where scientists are using the technology to analyze the cognitive differences between truth and lies. By mapping the neural circuits behind deception, researchers are turning fMRI into a new kind of lie detector that's more probing and accurate than the polygraph, the standard lie-detection tool employed by law enforcement and intelligence agencies for nearly a century.

The polygraph is widely considered unreliable in scientific circles, partly because its effectiveness depends heavily on the intimidation skills of the interrogator. What a polygraph actually measures is the stress of telling a lie, as reflected in accelerated heart rate, rapid breathing, rising blood pressure, and increased sweating. Sociopaths who don't feel guilt and people who learn to inhibit their reactions to stress can slip through a polygrapher's net. Gary Ridgway, known as the Green River Killer, and CIA double agent Aldrich Ames passed polygraph tests and resumed their criminal activities. While evidence based on polygraph tests is barred from most US trials, the device is being used more frequently in parole and child-custody hearings and as a counterintelligence tool in the war on terrorism. Researchers believe that fMRI should be tougher to outwit because it detects something much harder to suppress: neurological evidence of the decision to lie.

My host for the morning's experiment is Joy Hirsch, a neuroscientist and founder of Columbia's fMRI Research Center, who has offered me time in the scanner as a preview of the near future. Later this year, two startups will launch commercial fMRI lie-detection services, marketed initially to individuals who believe they've been unjustly charged with a crime. The first phase of today's procedure is a baseline interval that maps the activity of my brain at rest. Then the "truth" phase begins. Prompted by a signal in the mirror, I launch into an internal monologue about the intimate details of my personal life. I don't speak aloud, because even little movements of my head would disrupt the scan. I focus instead on forming the words clearly and calmly in my mind, as if to a telepathic inquisitor.

Then, after another signal, I start to lie: I've never been married. I had a girlfriend named Linda in high school back in Texas. I remember standing at the door of her parents' house the night she broke up with me. In fact, I grew up in New Jersey, didn't have my first relationship until I went to college, and have been happily married since 2003. I plunge deeper and deeper into confabulation, recalling incidents that never happened, while trying to make the events seem utterly plausible.

I'm relieved when the experiment is over and I'm alone again in the privacy of my thoughts. After an hour of data crunching, Hirsch announces, "I've got a brain for you." She lays out two sets of images, one labeled truth and the other deception, and gives me a guided tour of my own neural networks, complete with circles and Post-it arrows.

"This is a very, very clear single-case experiment," she says. In both sets of images, the areas of my cortex devoted to language lit up during my inner monologues. But there is more activity on the deception scans, as if my mind had to work harder to generate the fictitious narrative. Crucially, the areas of my brain associated with emotion, conflict, and cognitive control - the amygdala, rostral cingulate, caudate, and thalamus - were "hot" when I was lying but "cold" when I was telling the truth.

"The caudate is your inner editor, helping you manage the conflict between telling the truth and creating the lie," Hirsch explains. "Look here - when you're telling the truth, this area is asleep. But when you're trying to deceive, the signals are loud and clear."

I not only failed to fool the invisible inquisitor, I managed to incriminate myself without even opening my mouth.

The science behind fMRI lie detection has matured with astonishing speed. The notion of mapping regions of the brain that become active during deception first appeared in obscure radiology journals less than five years ago. The purpose of these studies was not to create a better lie detector but simply to understand how the brain works.

One of the pioneers in the field is Daniel Langleben, a psychiatrist at the University of Pennsylvania. Back in 1999, he was at Stanford, examining the effects of a drug on the brains of boys diagnosed with attention deficit hyperactivity disorder. He had read a paper theorizing that kids with ADHD have difficulty lying. In Langleben's experience, however, they were fully capable of lying. But they would often make socially awkward statements because "they had a problem inhibiting the truth," he says. "They would just blurt things out."

Langleben developed a hypothesis that in order to formulate a lie, the brain first had to stop itself from telling the truth, then generate the deception - a process that could be mapped with a scanner. Functional imaging makes cognitive operations visible by using a powerful magnetic field to track fluctuations in blood flow to groups of neurons as they fire. It reveals the pathways that thoughts have taken through the brain, like footprints in wet sand.
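A minimal sketch of that "footprints" idea, assuming the commonly used double-gamma haemodynamic response model: neural events don't appear in the scanner directly; what fMRI sees is a delayed, smeared blood-flow (BOLD) response, so a predicted signal is built by convolving the stimulus timing with that response shape. Timings and parameters below are illustrative.

```python
# Predicted BOLD signal = stimulus timing convolved with a canonical
# haemodynamic response function (HRF). Synthetic example.
import numpy as np
from scipy.stats import gamma

TR = 1.0                                # one scan per second
t = np.arange(0, 30, TR)                # 30 s of HRF support

# Double-gamma HRF: a peak near 5 s minus a small undershoot near 15 s.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 15)
hrf /= hrf.sum()

# Stimulus boxcar: "deception blocks" at scans 20-30 and 60-70.
n_scans = 100
stim = np.zeros(n_scans)
stim[20:30] = 1
stim[60:70] = 1

# The scanner's view: a delayed, smeared version of the stimulus.
bold = np.convolve(stim, hrf)[:n_scans]
print("predicted BOLD is largest at scans:", sorted(np.argsort(bold)[-3:]))
```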


When Langleben ran an online search for studies of deception using fMRI, however, he found nothing. He was surprised to find "such a low-hanging fruit," as he puts it, still untouched in the hothouse of researchers hungry to find applications for functional imaging.

After taking a job at the University of Pennsylvania School of Medicine later that year, he mapped the brains of undergraduates who had been instructed to lie about whether a playing card displayed on a computer screen was the same one they'd been given in an envelope along with $20. The volunteers - who responded by pressing a button on a handheld device so they wouldn't have to speak - were told that if they "fooled" the computer, they could keep the money. Langleben concluded in 2002 in a journal called NeuroImage that there is "a neurophysiological difference between deception and truth" that can be detected with fMRI.

As it turned out, other researchers in labs across the globe were already reaching for the same fruit. Around the same time, a UK psychiatrist named Sean Spence reported that areas of the prefrontal cortex lit up on fMRI when his subjects lied in response to questions about what they had done that day. Researchers from the University of Hong Kong provided additional confirmation of a distinctive set of neurocircuits involved in deception.

For fMRI early adopters, these breakthroughs validated the practical value of functional imaging itself. "I felt this was one of the first fMRI applications with real value and global interest," Langleben says. "It had implications in crime and society at large, in defense, and even for the insurance industry."

The subject took on a new urgency after 9/11 as security shot to the top of the national agenda. Despite questions about reliability, the use of polygraph machines grew rapidly, both domestically - where the device is employed to evaluate government workers for security clearances - and in places like Iraq and Afghanistan, where Defense Department polygraphers are deployed to extract confessions, check claims about weapons of mass destruction, confirm the loyalty of coalition officers, and grill spies.

The need for a better way to assess credibility was underscored by a 2002 report, The Polygraph and Lie Detection, by the National Research Council. After analyzing decades of polygraph use by the Pentagon and the FBI, the council concluded that the device was still too unreliable to be used for personnel screening at national labs. Stephen Fienberg, the scientist who led the evaluation committee, warned: "Either too many loyal employees may be falsely judged as deceptive, or too many major security threats could go undetected. National security is too important to be left to such a blunt instrument." The committee recommended the vigorous pursuit of other methods of lie detection, including fMRI.

"The whole area of research around deception and credibility assessment had been minimal, to say the least, over the last half-century," says Andrew Ryan, head of research at the Department of Defense Polygraph Institute. DoDPI put out a call for funding requests to scientists investigating lie detection, noting that "central nervous system activity related to deception may prove to be a viable area of research." Grants from DoDPI, the Department of Homeland Security, Darpa, and other agencies triggered a wave of research into new lie-detection technologies. "When I took this job in 1999, we could count the labs dedicated to the detection of deception on one hand," Ryan says. "Post-2001, there are 50 labs in the US alone doing this kind of work."

Through their grants, federal agencies began to influence the direction of the research. The early studies focused on discovering "underlying principles," as Columbia's Hirsch puts it - the basic neuromechanisms shared by all acts of deception - by averaging data obtained from scanning many subjects. But once government agencies like DoDPI started looking into fMRI, what began as an exploration of the brain became a race to build a better lie detector.

Paul Root Wolpe, a senior fellow at the Center for Bioethics at the University of Pennsylvania, tracks the development of lie-detection technologies. He calls the accelerated advances in fMRI "a textbook example of how something can be pushed forward by the convergence of basic science, the government directing research through funding, and special interests who desire a particular technology."

Langleben's team, whose work was funded partially by Darpa, began focusing more on detecting individual liars and less on broader psychological issues raised by the discovery of deception networks in the brain. "I wanted to take the research in that direction, but I was hell-bent on building a lie detector, because that's where our funders wanted us to go," he says.

To eliminate one major source of polygraph error - the subjectivity of the human examiner - Langleben and his colleagues developed pattern-recognition algorithms that identify deception in individual subjects by comparing their brain scans with those in a database of known liars. In 2005, both Langleben's lab and a DoDPI-funded team led by Andrew Kozel at the Medical University of South Carolina announced that their algorithms had been able to reliably identify lies.
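The published algorithms are proprietary, so the following is only a shape-of-the-idea sketch: represent each scan as a feature vector (for example, mean activation per region of interest), train on scans with known truth/lie labels, then score a new scan. The data are synthetic, and scikit-learn's logistic regression stands in for whatever pattern-recognition method the labs actually used.

```python
# Toy truth/lie scan classifier on synthetic region-of-interest features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
N_SCANS, N_REGIONS = 200, 12

# Synthetic training set: "lie" scans show slightly elevated activity
# in a few conflict/control-related regions (indices 0-3).
X = rng.normal(0, 1, (N_SCANS, N_REGIONS))
y = rng.integers(0, 2, N_SCANS)             # 0 = truth, 1 = lie
X[y == 1, :4] += 0.8                        # the invented deception signature

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
new_scan = rng.normal(0, 1, (1, N_REGIONS))
print("lie probability for new scan:", clf.predict_proba(new_scan)[0, 1])
```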

By the end of 2006, two companies, No Lie MRI and Cephos, will bring fMRI's ability to detect deception to market. Both startups originated in the world of medical diagnostics. Cephos founder Steven Laken helped develop the first commercial DNA test for colorectal cancer. "FMRI lie detection is where DNA diagnostics were 10 or 15 years ago," he says. "The biggest challenge is that this is new to a lot of different groups of people. You have to get lawyers and district attorneys to understand this isn't a polygraph. I view it as no different than developing a diagnostic test."

Laken got interested in marketing a new technology for lie detection when he heard about the number of prisoners being held without charges at the US base in Guantánamo Bay, Cuba. "If these detainees have information we haven't been able to extract that could prevent another 9/11, I think most Americans would agree that we should be doing whatever it takes to extract it," he says. "On the other hand, if they have no information, detaining them is a gross violation of human rights. My idea was that there has to be a better way of determining whether someone has useful information than torture or the polygraph."


Cephos' lie-detection technology will employ the patents and algorithms developed by Kozel's team in South Carolina. Laken and Kozel recently launched another DoDPI-funded study designed to mimic as closely as possible the emotions experienced while committing a crime. In the spring, after this research is complete, Laken will start looking for Cephos' first clients - ideally "people who are trying to show that they're being truthful and who want to use our technology to help support their cases."

No Lie MRI will debut its services this July in Philadelphia, where it will demonstrate the technology to be used in a planned network of facilities the company is calling VeraCenters. Each facility will house a scanner connected to a central computer in California. As the client responds to questions using a handheld device, the imaging data will be fed to the computer, which will classify each answer as truthful or deceptive using software developed by Langleben's team. For No Lie MRI founder Joel Huizenga, scanner-based lie detection represents a significant upgrade in "the arms race between truth-tellers and deceivers."

Both Laken and Huizenga play up the potential power of their technologies to exonerate the innocent and downplay the potential for aiding prosecution of the guilty. "What this is really all about is individuals who come forward willingly and pay their own money to declare that they're telling the truth," Huizenga says. (Neither company has set a price yet.) Still, No Lie MRI plans to market its services to law enforcement and immigration agencies, the military, counterintelligence groups, foreign governments, and even big companies that want to give prospective CEOs the ultimate vetting. "We're really pushing the positive side of this," Huizenga says. "But this is a company - we're here to make money."

Scott Faro, a radiologist at Temple University Hospital who conducted experiments using fMRI in tandem with the polygraph, predicts that the invention of a more accurate lie detector "is going to change the entire judicial system. First it will be used for high-profile crimes like terrorism and Enron. You could have centers across the country built close to airports, staffed with cognitive neuroscientists, MRI physicists, and interrogation experts. Eventually you could have 20 centers in each major city, and the process will start to become more streamlined and cost-effective.

"People say fMRI is expensive," Faro continues, "but what's the cost of a six-month jury trial? And what's the cost to America for missing a terrorist? If this is a more accurate test, I don't see any moral issues at all. People who can afford it and believe they are telling the truth are going to love this test."

The guardians of another Philadelphia innovation that changed the judicial system - the US Constitution - are already sounding the alarm. In September, the Cornell Law Review weighed the legal implications of the use of brain imaging in courtrooms and federal detention centers, calling fMRI "one of the few technologies to which the now clichéd moniker of 'Orwellian' legitimately applies."

When lawyers representing Cephos' and No Lie MRI's clients come to court, the first legal obstacles they'll have to overcome are the precedents barring so-called junk science. Polygraph evidence was excluded from most US courtrooms by a 1923 circuit court decision that became known as the Frye test. The ruling set a high bar for the admission of new types of scientific evidence, requiring that a technology have "general acceptance" and "scientific recognition among physiological and psychological authorities" to be considered. When the polygraph first came before the courts, it had almost no paper trail of independent verification.

FMRI lie detection, however, has evolved in the open, with each new advance subjected to peer review. The Supreme Court has already demonstrated that it is inclined to look favorably on brain imaging: A landmark 2005 decision outlawing the execution of those who commit capital crimes as juveniles was influenced by fMRI studies showing that adolescent brains are wired differently than those of adults. The acceptance of DNA profiling may be another bellwether. Highly controversial when introduced in the 1980s, it had the support of the scientific community and is now widely accepted in the courts.

The introduction of fMRI evidence at trial may have to be vetted against legal precedents designed to prevent what's called invading the province of the jury, says Carter Snead, former general counsel for the President's Council on Bioethics. In 1973, a federal appeals court ruled that "the jury is the lie detector" and that scientific evidence and expert testimony can be introduced only to help the jury reach a more informed judgment, not to be the final arbiter of truth. "The criminal justice system is not designed simply to ensure accurate truth finding," Snead says. "The human dimension of being subjected to the assessment of your peers has profound social and civic significance. If you supplant that with a biological metric, you're losing something extraordinarily important, even if you gain an incremental value in accuracy."

No Lie MRI's plans to market its services to corporations will likely run afoul of the 1988 Employee Polygraph Protection Act, which bars the use of lie-detection tests by most private companies for personnel screening. Government employers, however, are exempt from this law, which leaves a huge potential market for fMRI in local, state, and federal agencies, as well as in the military.

It is in these sectors that fMRI and other new lie-detection technologies are likely to take root, as the polygraph did. The legality of fMRI use by government agencies will probably focus on issues of consent, predicts Jim Dempsey, executive director of the Center for Democracy & Technology, a Washington, DC-based think tank. "From a constitutional standpoint, consent covers a lot of sins," he explains. "Most applications of the polygraph in the US have been in consensual circumstances, even if the consent is prompted by a statement like 'If you want this job, you must submit to a polygraph.' The police can say, 'Would you blow into this Breathalyzer? Technically you're free to say no, but if you don't consent, we're going to make life hard for you.'"


Today's fMRI scanners are bulky, cost up to $3 million each, and in effect require consent because of their sensitivity to head movement. Once Cephos and No Lie MRI make their technology commercially available, however, these limitations will seem like glitches that merely need to be fixed. If advances make it possible to perform brain scans on unwilling or even unwitting subjects, it will raise a thicket of legal issues regarding privacy, constitutional protections against self-incrimination, and the prohibitions against unlawful search and seizure.

The technological innovations that produce sweeping changes often evolve beyond their designers' original intentions - the Internet, the cloud chamber, a 19th-century doctor's cuff for measuring blood pressure that, when incorporated into the polygraph, became the unsteady foundation of the modern counterintelligence industry.

So what began as a neurological inquiry into why kids with ADHD blurt out embarrassing truths may end up forcing the legal system to define more clearly the inviolable boundaries of the self.

"My concern is precisely with the civil and commercial uses of fMRI lie detection," says ethicist Paul Root Wolpe. "When this technology is available on the market, it will be in places like Guantánamo Bay and Abu Ghraib in a heartbeat.

"Once people begin to think that police can look right into their brains and tell whether they're lying," he adds, "it's going to be 1984 in their minds, and there could be a significant backlash. The goal of detecting deception requires far more public scrutiny than it has had up until now. As a society, we need to have a very serious conversation about this."


The Cortex Cop

Your flight is now boarding. Please walk through the "mental detector."

For all the promise of fMRI lie detection, some practical obstacles stand in the way of its widespread use: The scanners are huge and therefore not portable, and a slight shake of the head - let alone outright refusal to be scanned - can disrupt the procedure. Britton Chance, a professor emeritus of biophysics at the University of Pennsylvania, has developed an instrument that records much of the same brain activity as fMRI lie detection - but fits in a briefcase and can be deployed on an unwilling subject.

Chance has spent his life chasing and quantifying elusive signals - electromagnetic, optical, chemical, and biological. During the Second World War, he led the team at the MIT Radiation Lab that helped develop military radar and incorporated analog computers into the ranging system of bombers. In the 1970s, long before the invention of fMRI, Chance began using a related technique called magnetic-resonance spectroscopy to study living tissue. The first functionally imaged brain was that of a hedgehog in one of his experiments. Now 92, Chance still rides his bike to the university six days a week to teach and work in his lab. His mind is as acute as ever. After glancing through a book to confirm a data point, he resumes the conversation by saying, "I'm back online."

He explains that his goal is to create a wearable device "that lets me know what you're thinking without you telling me. If I ask you a question, I'd like to know before you answer whether you're going to be truthful."

To map neural activity without fMRI, Chance uses beams of near-infrared light that pass harmlessly through the forehead and skull, penetrating the first few centimeters of cortical tissue. There the light bounces off the same changes in blood flow tracked by fMRI. When it reemerges from the cranium, this light can be captured by optical sensors, filtered for the "noise" of light in the room, and used to generate scans.
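The recovery step behind this kind of near-infrared sensing is the modified Beer-Lambert law, standard in fNIRS: changes in detected light intensity at two wavelengths are converted into changes in oxy- and deoxy-haemoglobin concentration. The sketch below uses illustrative extinction coefficients and path lengths, not the values from Chance's device.

```python
# Modified Beer-Lambert law, two wavelengths -> [dHbO2, dHbR]. Illustrative.
import numpy as np

# Extinction coefficients for [HbO2, HbR]; HbR dominates at 760 nm,
# HbO2 at 850 nm. Units 1/(mM*cm); values are illustrative.
E = np.array([[1.49, 3.84],    # 760 nm
              [2.53, 1.80]])   # 850 nm

D = 3.0        # source-detector separation, cm
DPF = 6.0      # differential pathlength factor (scattering correction)

def hb_changes(i_baseline: np.ndarray, i_now: np.ndarray) -> np.ndarray:
    """Return [dHbO2, dHbR] in mM from intensities at the two wavelengths."""
    d_od = -np.log10(i_now / i_baseline)      # optical density change
    # Solve d_od = (E @ [dHbO2, dHbR]) * D * DPF for the concentrations.
    return np.linalg.solve(E * D * DPF, d_od)

# Demo: intensity rises slightly at 760 nm (less HbR absorbing) and
# drops at 850 nm (more HbO2): the classic signature of activation.
print(hb_changes(np.array([1.0, 1.0]), np.array([1.01, 0.98])))
```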

Though near-infrared light doesn't penetrate the brain as deeply as magnetic resonance, some of the key signatures of deception mapped by fMRI researchers occur in the prefrontal cortex, just behind the forehead. The first iteration of Chance's lie detector consisted of a Velcro headband studded with LEDs and silicon diode sensors. Strapping these headbands on 21 subjects in a card-bluffing experiment in 2004, a neuroscientist at Drexel named Scott Bunce was able to accurately detect lying 95 percent of the time. The next step, Chance says, is to develop a system that can be used discreetly in airports and security checkpoints for "remote sensing" of brain activity. This technology could be deployed to check for deception during standard question-and-answer exchanges (for example, "Has anyone else handled your luggage?") with passengers before boarding a plane, or during interviews with those who have been singled out for individual searches.

With funding from the Office of Naval Research, Chance and his colleagues are working to replace the LED headband with an invisible laser and a hypersensitive photon collector to create a system that can pick up the neural signals of deception from across a room.

Before undertaking this project, Chance consulted with Arthur Caplan, director of Penn's Center for Bioethics. "Dr. Chance was a little uneasy about it," Caplan recalls. "But there are certain public places where we lose the right to privacy as a condition of entering the building. Airport security staff is allowed to search your bag, your possessions, and even your body. In my view, there's no blanket rule that says it's always wrong to scan someone without their consent. What we need is a set of policies to determine when you have to have consent."

Chance believes the virtues of what he calls "a network to detect malevolence" outweigh the impact on personal liberties. "It would certainly represent an invasion of privacy," he says. "I'm sure there may be people who, for very good reasons, would not want to come near this device - and they're the interesting ones. But we'll all feel a bit safer if this kind of technology is used in places like airports. If you don't want to take the test, you can turn around and fly another day." Then he smiles. "Of course, that's the biggest selector of guilt you could want." - S.S.

http://www.wired.com/wired/archive/14.01/lying_pr.html
 
The Lie Behind Lie Detectors

Commentary by Jennifer Granick | Also by this reporter
02:00 AM Mar 15, 2006 EST

If we can put a man on the moon, why can't we detect when someone is lying?

Just as the space program seemed to be just the thing for combating communism during the Cold War, lie detection looks like just what we need in the fight against terrorism. The popular press, including Wired magazine, has been pretty optimistic that a high-tech replacement for the archaic and mistrusted polygraph machine is coming soon.

Last weekend, Stanford Law School hosted a workshop called "Reading Minds: Lie Detection, Neuroscience, Law and Society," where attendees took a closer look at the technology -- a look that suggests we're still light years away.

As a criminal defense attorney, I found the polygraph test useful, and I submitted my clients to testing on several occasions. There's little evidence that the polygraph is accurate, and most courts won't admit test results as evidence. But many people in law enforcement, including the FBI, believe in lie detectors, so strapping a defendant to a polygraph can be a useful tool in convincing prosecutors to drop borderline charges.

One time, I got to sit in the room as the examiner, paid by our firm, strapped and clipped the sensors to our high-strung, jittery female client. The machine looked like something out of the 1950s, with wires and electrodes connected to needles that marked variations on a roll of paper. The test measures the subject's changes in respiration, heartbeat and perspiration -- anxiety reactions allegedly correlated with lying.

In a protocol called the "control-question test", the polygraph operator asks irrelevant questions to obtain a baseline reaction, and asks "probable-lie" questions to get a sample of a deceptive reading. My client was anxious during all of these, whether the harmless "Are you sitting down?" or the loaded "Have you ever stolen anything?" that is designed to embarrass the subject into lying.

When my client almost jumped out of the chair when asked if she'd stolen the particular watch in question, the examiner declared that she passed with flying colors.

That was a good result for her, but an example of how far from hard science the polygraph falls. Proper protocol would have required that she not move during the test. For that matter, I wasn't supposed to be allowed in the room -- it should just be the suspect alone with the intimidating examiner. She was also supposed to believe that the examiner was neutral, rather than paid by her attorneys.

The problems with the polygraph are more fundamental than in-the-field variables such as partisan experts and improper testing procedures. In 2003, the National Academy of Sciences reviewed scientific evidence on the polygraph. The study found that there is a lack of scientific evidence that the physiological reactions the polygraph measures are uniquely related to deception, as opposed to some other psychological process, like anxiety or fear.

In the lab, with a trained examiner and a cooperative subject who is not trying to game the device by pressing his feet against the floor or squeezing his fists during the control questions, a polygraph can distinguish lies from truth better than random chance. Beyond that, it's science fiction.

And that's why there's a significant push underway to develop more-reliable lie-detection devices.

Functional magnetic resonance imaging, or fMRI, and electroencephalography, or EEG, are the most promising modern techniques vying to replace the polygraph. One reason researchers think these methods might be superior is that instead of using sweat and heartbeat to tell us what's going on in the mind, these technologies map the brain itself. Another reason is that both methods are better suited than the polygraph to identifying whether the subject has guilty knowledge, and this is more useful in security screening than the highly targeted interrogation required by the control-question test.

But these modern methods are less miraculous than they might seem. The fMRI test measures oxygen in the brain, and oxygen is related to blood flow. The scientific hypothesis is that greater blood flow (oxygen) is tightly coupled with greater neural activity. If scientists can figure out which part of the brain we use to lie, the theory goes, then fMRI can tell when we are lying.

The hard part, what Georgetown Medical School associate professor of neurology Tom Zeffiro calls the "black art," is generating accurate models of the relationship between neurological activity and blood flow. The fMRI results have to account for up to 30 or 40 factors other than deception -- including heart rate, respiration, motion -- that might all cause variance in the signal. Also, the area of the brain related to deception differs a bit from individual to individual. Culture, language, personality, handedness, gender, medications and health can all affect the results.
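A toy version of the modelling problem Zeffiro describes: the measured voxel time course is fit with a design matrix that contains the deception regressor plus nuisance regressors (heart rate, respiration, motion and so on), so that variance from those sources isn't mistaken for signal. Everything below is synthetic, and only two of the "30 or 40" factors are shown.

```python
# Least-squares fit of a voxel time course with and without nuisance
# regressors, to show why omitting them distorts the task estimate.
import numpy as np

rng = np.random.default_rng(2)
n = 200                                  # scans

deception = np.zeros(n)
deception[50:60] = deception[120:130] = 1
heart = np.sin(np.linspace(0, 40 * np.pi, n))      # cardiac fluctuation
motion = np.cumsum(rng.normal(0, 0.05, n))         # slow head drift

# Simulated voxel: weak task effect buried under nuisance variance.
signal = 0.5 * deception + 1.5 * heart + 2.0 * motion + rng.normal(0, 1, n)

# Full model: intercept + task + nuisance columns.
X = np.column_stack([np.ones(n), deception, heart, motion])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
print("estimated deception effect:", round(beta[1], 2))   # close to 0.5

# Dropping the nuisance columns lets their variance leak into the estimate.
X_bad = np.column_stack([np.ones(n), deception])
beta_bad, *_ = np.linalg.lstsq(X_bad, signal, rcond=None)
print("estimate without nuisance regressors:", round(beta_bad[1], 2))
```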

Most importantly, fMRI is susceptible to simple countermeasures. Since fMRI measures oxygen in the brain, a subject can defeat the test by breathing deeply or by holding her breath.

EEG has some of the same problems as fMRI, and some unique challenges. An EEG measures electrical activity on the surface of the scalp, on the tip of the nose and around the eyes. The device then infers through skin, skull and hair what's happening with electrical waves in the brain.

Researchers have identified one wave shape, P300, as associated with deception. Research assistant professor Jennifer Vendemia from the University of South Carolina studies P300, and at the Stanford workshop she said that it's possible to see a lie by looking at this wave shape, which occurs milliseconds after a question is posed. But it's difficult to measure deception separately from other neurological phenomena like switching tasks, recalling something autobiographical or recalling something learned.

As with fMRI, the existence of wave variations can be generalized over a pool of people, but differs from person to person. Moreover, the science suffers from Zeno's paradox: As EEG measurement becomes more refined, smaller errors in the readings have larger consequences for the results. Vendemia showed the audience slides of an EEG test, and it looked to me like a child's drawing of a fleet of purposeful worms.

Under laboratory conditions, fMRI technology might be 90 percent accurate in determining whether individuals in a test group of Americans are lying about taking a watch or a ring. But it's useless for employee screening, convicting the guilty, identifying terrorists at the airport or separating innocents from enemy combatants at Guantanamo Bay -- at least at the moment.

At some point soon, these high-tech lie detectors will be cheap, accurate, portable and unobtrusive enough to replace the polygraph in incident investigations. But we are a long way from reading minds.

Lie detection raises a host of complicated ethical problems about autonomy and the privacy of one's own thoughts. But before we get there, we have to know whether the thing works, and what exactly it does. Being a smart consumer of security technology means asking about accuracy rates, validity, reproducibility, specificity and sensitivity.

Once these tools are on the market, there will be immense pressure to use, or rather misuse, them in Guantanamo Bay, on the battlefield, in the courtroom and at your workplace. We'll hear the usual argument about the need to trade some privacy for increased security. But that bargain is only equitable when you actually get some security in the exchange. With even the best technology, science says lie detection is still only a little better than a shot in the dark.

http://www.wired.com/news/columns/1,70411-0.html
Link is dead. The MIA webpage can be accessed via the Wayback Machine:
https://web.archive.org/web/20060613210720/https://www.wired.com/news/columns/1,70411-0.html
 
Who needs machines?
Liars don't blink: they keep still and concentrate hard
Roger Dobson and Ed Habershon

FORGET the fidgety liar nervously blinking, scratching his nose and stroking the back of his head. Researchers have found that liars stay motionless and control their blinking as they try not to give anything away.
When liars do use their hands, they use extravagant movements to cover up their dishonesty, stretching out their arms or rhythmically jabbing the air to emphasise a point.

The findings are likely to be of interest to police, employers and suspicious spouses, who may wrongly interpret nervousness as dishonesty but miss more reliable indicators.

“There is a popular perception that things like scratching the nose, playing with the hair, increase with people lying,” said Dr Samantha Mann, a psychologist at Portsmouth University. “People expect liars to be nervous and shifty and to fidget more, but our research shows that is not the case.

“People who are lying have to think harder, and when we think harder we tend to be a lot stiller, with fewer movements, because we are concentrating harder.”

In the research, to be reported shortly in the Journal of Nonverbal Behavior, the academics from Portsmouth and universities in Italy looked for changes in seven categories of hand movements in 130 volunteers told to make a series of honest and dishonest statements.

Metaphoric gestures — such as a heart to show love or holding the hands apart to indicate size — occurred 25% more often when lying.

Emblematic gestures that give out a direct message — such as thumbs up for okay, or palms outstretched for “calm down” — are also used slightly more often by liars.

A typical emblematic gesture was used in April 2003 by Mohammed Saeed al-Sahaf, the Iraqi information minister nicknamed Comical Ali.

As Iraqi troops ran for cover from American shellfire, Sahaf stretched out his arms, palms held forward, and told reporters: “Baghdad is safe. The infidels are committing suicide by the hundreds on the gates of Baghdad. Don’t believe those liars. As our leader Saddam Hussein said, ‘God is grilling their stomachs in hell’.”

Another liar’s trick is the rhythmic gesture, as in 1998 when Bill Clinton jabbed the air with each word: “I did not have sexual relations with that woman, Miss Lewinsky.”

Liars use self-adaptor gestures — touching the nose, hair or other parts of the body — 15%-20% less than truth-tellers. They also point at people about 20% less.

Mann has carried out separate research on the behaviour of suspects in police interviews. She found that, when lying, participants paused more in their speech and blinked less frequently — 18.5 times a minute compared with 23.6 times when telling the truth. About 81% of suspects paused longer or blinked less when telling a lie.

Debunking another myth, she said liars were just as likely as an honest person to look a questioner in the eye.
timesonline.co.uk/article/0, ... 87,00.html
Link is dead.
 
Not totally OT, but seems to fit in here better than anywhere else:

Science has designs on your brain
By Jane Elliott
Health reporter, BBC News

How could your brain be developed in the future?

Should technology be used to stimulate and improve the brain - improving grades for instance?

These are just some of the questions posed by a new exhibition at London's Science Museum: NEURObotics - the future of thinking?

It investigates how medical technology could boost our brains, read our thoughts or give us mind-control over machines.

It will also show how a shock to the brain could improve creativity, how a scan could reveal your deepest thoughts, or how your brainwaves could enable movement in a virtual world.

Visitors will be able to use some of the interactive exhibits.

One of the exhibits shows how classical pianist Cassie Yukawa significantly boosted her performance - and creativity - by undergoing EEG (electroencephalogram) neurofeedback treatment.

This monitors brainwave activity, and gives the subject instant feedback about changes they could make to reach the next level of achievement.

Professor John Gruzelier, professor of psychology at Goldsmiths College, London, studied 97 students from the Royal College of Music and found that the technique - which involves watching your brain activity, represented on a screen or as sound, and then trying to influence it - improved performance by as much as 17%.
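In outline, a neurofeedback loop of this kind estimates band power over a short window of EEG and feeds a derived number straight back to the trainee, as a sound or a bar on a screen. The band choice and mapping below are illustrative, not Gruzelier's actual protocol, and the EEG is synthetic.

```python
# Minimal neurofeedback loop: band power from one EEG window -> feedback value.
import numpy as np

FS = 256  # EEG sampling rate, Hz

def band_power(window: np.ndarray, lo: float, hi: float) -> float:
    """Power in [lo, hi) Hz from the FFT of one window."""
    freqs = np.fft.rfftfreq(len(window), 1 / FS)
    psd = np.abs(np.fft.rfft(window)) ** 2
    return float(psd[(freqs >= lo) & (freqs < hi)].sum())

def feedback(window: np.ndarray) -> float:
    """Theta/alpha ratio, the quantity fed back to the trainee."""
    theta = band_power(window, 4, 8)
    alpha = band_power(window, 8, 13)
    return theta / (alpha + 1e-12)

# Demo on one second of synthetic EEG with a strong 10 Hz alpha rhythm.
t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(3).normal(size=FS)
print("feedback value:", round(feedback(eeg), 3))
```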

Cassie, who was a student at the Royal College of Music when she took part in the research, said it had been a very interesting experiment - and had helped to enhance her awareness of the creative process.

"I was monitored for about a year and it was fantastic because it gave me invaluable time to think about performance.

"I was wired up to electrodes and they did two different types of monitoring.

"I just think it was an invaluable pursuit to explore your 'creative zones' whilst free from the physicality of playing the piano.

"It allowed me to draw on a myriad of resources, and after using it I would have a much larger palette to explore when performing and it helped make things more fluid."

Ethics

The Science Museum will also be launching a debate about how technologies like these are used.

Emma Hedderwick, exhibition manager, said: "Researchers have already been able to use today's technology to diagnose and treat many conditions that affect the brain, allowing new insight into how our brains work.

"But in the future, could it become common to use these technologies for personal enhancement?

"This new research is both exciting and fascinating, but it is important to consider the ethical issues of using it to better our brains.

"This technology is here and has the potential to radically affect what it means to be human in the 21st Century.

"We have to think about where we want the boundaries to be, both morally and in terms of legislation."

Uses

Anders Sandberg, research associate at the Future of Humanity Institute, Oxford, said that although the technology is still often crude, neurobotics is very much a reality.

But he agreed that increasing applications would necessitate ethical debate, particularly if children were using the techniques for enhancement as they are unable to give informed consent.

He added that in some cases people might be found to be negligent if they didn't use the new techniques to enable them to do their work more safely.

"If we are talking about a doctor working in a hospital, would he not be being ethical if he did not take something to improve his attention."

The exhibition, sponsored by Siemens, will also look at fMRI (functional Magnetic Resonance Imaging) scans which can show whether a person is lying, simply by scanning their brain activity.

If proved to be accurate, this has the potential to be used as evidence in court cases.

But the exhibition also asks whether this form of modern mind reading could effectively end the centuries-old tradition of a defendant's right to remain silent.

It also shows how a TMS (transcranial magnetic stimulation) machine can be used to activate, or knock out, part of the brain with magnetic pulses. This technology has been used to give ordinary people a glimpse into what it would be like to have extraordinary brain powers.

The use of brain chips and brain caps - including the highly advanced Berlin version, with more than 100 electrodes - which allow people to control objects with their brain power, will also be showcased.

Rachel Bowden, of the museum, said one of the most fun exhibits would probably be the Mindball game, which allows users to play a ball game with their brain waves.

"People can have a go and see how they can move the ball with the power of their mind," she said.

The exhibition runs for six months until April 2007 and is free.

The museum, in South Kensington, London, is open between 10am-6pm.

http://news.bbc.co.uk/1/hi/health/5410092.stm
 
Nothing but the truth with Israeli Internet lie detector

Have you ever wondered if someone you are chatting to is telling the truth? New lie detector software from high-tech powerhouse Israel says it can show you -- across the Internet.

"We tested it with the (former US president Bill) Clinton speech about his relationship with his intern Monica Lewinsky," said Zvi Marom, head of the company behind the product.

"When he says 'I have never had sexual relations with Monica Lewinsky', the lie detector's needle jumps through the roof," said the founder and CEO of BATM Advanced Communications, a high-tech firm in an industrial park on the outskirts of Netanya on Israel's northern coast.

Since signing a deal in December with Internet telephony giant Skype, the company's server has crashed five times after tens of thousands of web users rushed to download the lie detector, offered for free.

"This is a really neat application, and the kind of thing we want to see more of," said a statement from Paul Amery, director of the Skype Developer programme.

The six employees of BATM subsidiary KishKish (www.kishkish.com), in Israel and Bulgaria, developed the add-on, which has an interface resembling a real polygraph, complete with monitors and needles.

It is one of many new applications for Internet telephony, which is rapidly becoming one of the most popular methods of communication.

The lie detector monitors in real time the stress levels in a speaker's voice.

"In the end, the voice is the biggest manifesto of what we think," said Marom.

Voice stress analysis, or VSA, is a disputed technology that attempts to measure stress levels by observing the amplitude of tremors in a person's voice. The Israeli military is often cited as a major user of this technology.

A user of the Internet lie detector needs to talk for 15 seconds to calibrate his or her voice, then sound waves start to peak if stress levels are high, a light flashes from green to red and a needle jumps to the end of a scale.
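Neither the article nor KishKish discloses the actual algorithm, so purely as a hedged illustration of the behaviour described above (a 15-second calibration pass, then a red light when stress peaks), here is a minimal Python sketch of a baseline-then-threshold voice-stress monitor. Everything in it - the frame rate, the threshold, and the assumption that per-frame "tremor amplitude" values have already been extracted from the audio - is invented for illustration, not KishKish's method.

```python
import numpy as np

def stress_flags(frames, frame_rate=50, calib_seconds=15, threshold=2.0):
    """Toy voice-stress monitor in the spirit of the description above.

    `frames` is a 1-D array of per-frame tremor-amplitude estimates;
    how such values are derived from raw audio is precisely the
    disputed part of VSA. The first `calib_seconds` of frames set the
    speaker's baseline; afterwards a frame is flagged when it sits more
    than `threshold` standard deviations above that baseline.
    """
    frames = np.asarray(frames, dtype=float)
    n_calib = calib_seconds * frame_rate
    baseline = frames[:n_calib]
    mu, sigma = baseline.mean(), baseline.std() + 1e-9
    z = (frames[n_calib:] - mu) / sigma
    return z > threshold  # True -> light goes red, needle jumps

# Example: a calm calibration period followed by a brief tremor burst.
calm = np.random.default_rng(0).normal(1.0, 0.1, 15 * 50)
burst = np.concatenate([calm, [1.0, 1.05, 1.9, 2.1, 1.0]])
print(stress_flags(burst))  # flags only the two high frames
```

The contested question, of course, is whether any such amplitude measure tracks deception at all; the sketch shows only how a calibration-plus-threshold meter is wired.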

Despite little evidence to prove that lie-detecting machines work, let alone over the Internet, KishKish and Skype clients remain unfazed.

Employees spend hours responding to emails and forums from thousands of users across the world, and they brush off criticism on their web forum from people having trouble using the tool.

"If you just make something up for the sake of it, it won't work because you won't be stressed," explained Alex Rosenbaum, 35, head of development at KishKish.

The company said it tested the tool with an insurance company, which reported it was more than 90 percent accurate. It also said it had requests from police services to adapt the software, and even offers from former Russian spies to help develop it.

"I get a lot of emails about people wanting to carry out professional interviews over Skype, but we say you should check with your legal authorities," Rosenbaum said.

The lie detector warns users when they are being monitored to avoid legal problems, he added.

Working with Skype for more than a year, KishKish has developed a number of add-ons for Internet telephone services, including an answering machine, contacts book and Short Message Service (SMS), but none has been as popular as the lie detector.

KishKish sees the project as a further attempt to stretch the possibilities of Internet communications, and for now -- like many Internet start-ups -- it offers the product for free.

"We're trying to play in the right area and build the correct business model for the future," said Rosenbaum.

Workers at the small office, in a landscape of palm tree-lined roads, shopping malls and beachside apartments, spend many hours playing with ideas for a range of new products.

The next one they plan to release is a "Love-o-meter", designed to detect emotional interest levels across the web.

"They'll like it in France," Marom predicted.

http://www.physorg.com/news87465415.html
 
--------------------------------------------------------------------------------
Watching the Brain Lie
Can fMRI replace the polygraph?
By Ishani Ganguli
--------------------------------------------------------------------------------

Amanda lies flat on her back, clad in a steel blue hospital gown and an air of anticipation, as she is rolled headfirst into a beeping, 10-ton functional magnetic resonance imaging (fMRI) unit. Once inside, the 20-something blonde uses a handheld device to respond to questions about the playing cards appearing on the screen at the foot of the machine. With each click of the button, she is either lying or telling the truth about whether a card presented to her matches the one in her pocket, and the white-coated technician who watches her brain image morph into patterns on his computer screen seems to know the difference.

It's unlikely anyone would shell out $10,000 to exonerate herself in a dispute over gin rummy. But Amanda, the model in a demo video for Tarzana, Calif.-based No Lie MRI, is helping to make a point: lie-detection is going high-tech. No Lie MRI claims it can identify lies with 90% accuracy. The service is meant for "anybody who wants to demonstrate that they are telling truth to others," says founder and CEO Joel Huizenga. "Everyone should be allowed to use whatever method they can to defend themselves."

No Lie MRI isn't the only company hawking fMRI scans as lie detection tests. A competitor, Cephos, based in Pepperell, Mass., makes similar claims, though the company has yet to unveil its test. And some government and law enforcement officials are bullish on the technology, as suggested by the federal research dollars being poured into the field.

But at a symposium hosted by the American Academy of Arts and Sciences this past February, several neuroscientists and legal experts said they're not quite ready to save a place for fMRI lie detection in the courtroom or elsewhere. "No published studies come even close to demonstrating the kind of lie detection that would be useful in a real world situation," says Nancy Kanwisher, a professor of cognitive neuroscience at MIT, who spoke at the symposium. "Scientists are endlessly clever, so I'm not saying that it can't be done. But I can't see how."

Humans aren't particularly good at knowing when they're being deceived. In studies, subjects can only correctly identify 47% of lies on average, according to a review by Bella DePaulo at the University of California, Santa Barbara. So those who detect lies for a living have turned to science. The polygraph test - used in the United States since the 1920s to root out liars by measuring physiological responses to stress - has largely been discredited as a scientific tool (see "A History in Deception"). Researchers are now homing in on the brain itself, turning to imaging techniques including fMRI, which measures blood oxygen concentrations across the brain every few seconds, in an attempt to map neural activity in real time.

There are many types of lies (omissions, white lies, exaggerations, denials) that likely involve differing neural processes that scientists are just beginning to parse (see "Anatomy of Lying"). But in comparing fMRI images in such studies, it's clear that the brain generally works harder at lying than at telling the truth. As Marcus Raichle, professor at the Washington University in St. Louis School of Medicine, puts it, "You slow down, that's not what you're used to doing. In your brain, a whole new set of areas come online as you try to abort this learned response [to tell the truth] and institute something new and novel."

"If I were asked to be involved with the company and if they made the claims they do on the website, I would be horrified. There are some decent scientists on the board for No-Lie. I don't understand their motives."
-Elizabeth Phelps


Steve Kosslyn, a psychologist at Harvard University, is studying how fMRI results differ for spontaneous versus rehearsed lies, for which the work of concocting the new story has already been done. Nearby, John Gabrieli's group at the Harvard-MIT Division of Health Sciences and Technology hopes to find a characteristic brain response associated with preparing to lie or tell the truth. Since September 11, 2001, grants from US agencies including the Departments of Defense and Homeland Security have burst open the field (Gabrieli is partially funded by the Central Intelligence Agency), and pushed many of its practitioners to seek practical applications.

Daniel Langleben at the University of Pennsylvania, who has spent nearly a decade studying deception, has recently been trying to apply fMRI to lie detection on the premise that a scanner can detect the suppression of truth, or "guilty knowledge." No Lie MRI's technology is based on the results of this research, partially funded by the Department of Defense (DoD). In one study, published in NeuroImage in 2002, Langleben gave each of his 18 subjects a playing card (a five of clubs) and a $20 bill before they entered the fMRI machine. They looked at a string of cards on a screen and manually responded yes or no when asked about the identity of that card (their guilty knowledge) among a series of questions. They could keep the cash if they successfully fooled the tester. Using this approach, Langleben and colleagues have found increased activity associated with lying in cortical regions associated with conflict and suppression of a truthful response. They report they can distinguish lies from truths with up to 88% accuracy.

In 2005, the researchers who are now behind Cephos, and are also partially funded by the DoD, published results of another experimental approach in Biological Psychiatry. Mark George, director of the Brain Stimulation Laboratory at the Medical University of South Carolina, and Andy Kozel, a professor of psychiatry at the University of Texas Southwestern Medical Center, had subjects steal either a ring or a watch from a room, then deny it when they were asked a series of questions. They imaged the brains of 30 subjects while asking questions about the mock crime to establish a model for brain differences associated with lying, then applied this model to predict when another 31 subjects were lying or telling the truth. The researchers found greater activation in the anterior cingulate (thought to monitor intention) and the right middle and orbital frontal lobes (thought to carry out the lie). They say they could predict accurately for 90% of the subjects in the latter group.
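Neither paper's statistical model is spelled out here, but both studies share the same shape: fit a classifier on one group's scans, then predict lie or truth for a held-out group. As a hedged sketch of that paradigm only (not the researchers' actual pipeline), the following Python fragment does the same with invented "region of interest" activations and an off-the-shelf logistic regression; every number is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Invented per-trial mean activations in three regions of interest
# (think anterior cingulate, right middle / orbital frontal). Real
# studies would extract these from fMRI scans; these are made up.
def make_trials(n, lying):
    base = rng.normal(0.0, 1.0, size=(n, 3))
    if lying:
        base += [0.8, 0.6, 0.5]  # assumed extra activation when lying
    return base

# "Model" group (cf. the 30 subjects) and held-out group (cf. the 31).
X_train = np.vstack([make_trials(150, True), make_trials(150, False)])
y_train = np.array([1] * 150 + [0] * 150)
X_test = np.vstack([make_trials(160, True), make_trials(160, False)])
y_test = np.array([1] * 160 + [0] * 160)

clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.0%}")
```

The accuracy such a toy reports depends entirely on the invented effect sizes, which is exactly Kanwisher's point below: group-level separability says little about any one individual in the real world.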

"This isn't...a tool that is going to be 100% accurate," says Cephos CEO Steve Laken. It's "a forensic tool that should be looked at in totality of all other evidence in a case."

But MIT's Kanwisher says she is skeptical about such research. For one thing, group averages of brain patterns, which are required to make sense of the patterns in the first place, are difficult to interpret (and fraught with noise) on the level of individual prediction. And in the real world, lying is verbal and carried out in defiance of instruction, and the stakes are incomparably higher. Rather than missing out on a $20 study reward, being caught in a lie could mean life in prison. Lying under these circumstances comes with an emotional component that is poorly elicited by a playing card, she argues.

"Applied fMRI studies of the kinds done so far have similar limitations to those of typical laboratory polygraph research," according to a 2003 National Academy of Sciences report. "Real deception in real life circumstances is almost impossible to explore experimentally. You can't randomly assign people to go do crimes. I do think that's an inherent limit," says Gabrieli, a professor of cognitive neuroscience. Others worry about the level of nuance that fMRI-posed questions can accommodate.

The limitations in the research haven't stopped people from trying to take its applications to market. No Lie MRI's Huizenga was selling fMRI scans as screens for heart disease at his last company, Ischem, when he read about Langleben's work in 2001. He says he thought, "I can automate what you're doing, can make it into a product." So he acquired the technology from the University of Pennsylvania.

Though the company's product is still based on comparing brain scans to those in Langleben's preliminary studies, No Lie MRI had its first commercial customer in December 2006: Harvey Nathan, who has been trying to get compensation from his insurance company after his Charleston, South Carolina delicatessen burned down in 2003. He had been cleared of arson charges in a criminal case, but wanted to use No Lie MRI to convince his insurance company he hadn't started the blaze, for a per-session fee of $1,500 (clients get a hefty discount from the $10,000 going rate for agreeing to be televised). Nathan came out squeaky clean in the test, though his insurance company has yet to pay up, Huizenga reports.

Huizenga won't say how many people have since tried the technology, but he's clear on the philosophy behind it: "We're testing individuals that want to be tested in areas [in which] they want to be tested. If they want to be tested on the topic of taking money from the cash register, we won't test them on: are you having sex with your assistant? We deliver results to them personally. They get to use the results in the manner that they wish," he says.

Huizenga eagerly points to the high prediction rate in Langleben's study as a huge step up from the rate associated with the "nearest competing product"-the polygraph. He counts on snagging a worldwide patent for the service, administered at "Veracenters," and if the company's website is any indication, he will continue to market it for such uses as "risk reduction in dating."


Cephos CEO Steve Laken got into the business of lie detection when he met Kozel, then at the Medical University of South Carolina, at a 2003 conference on human brain mapping in New York. Laken wanted to bring Kozel's findings to bear in post-9/11 counterterrorism efforts. Since its 2004 incorporation, Cephos has had weekly calls from individuals eager to use the technology, according to Laken, and government agencies have expressed interest as well. But he expects it will be a while until he is ready to put people through the fMRI machine - he's hoping to increase what he claims is a 90% accuracy rate to 95%.

Laken says they are making strides toward this goal. In addressing concerns that the studies are poor approximations of reality, they are raising the perceived stakes in deception and imposing realistic time delays. In one, college students executed "a pretty elaborate mock crime" that involved stabbing a dummy, and they were tested days or weeks afterwards, George explains. "You can't really have people go out and break the law. [Institutional Review Boards] won't allow you to do that," George chuckles. "[Still,] they thought they were involved in something a little bit illegal. We had people's hands shaking."


One of fMRI-based lie detection's hurdles, oddly enough, is bettering the oft-questioned polygraph. Though "polygraphy isn't much of a gold standard," it still needs to be directly compared to these new methods before they can be widely adopted, Gabrieli says. Laken and Huizenga tout fMRI as the anti-polygraph, but the new technology may not be as different as people would like to think. fMRI "involves many of the same presumptions and interpretative leaps and gamesmanship," argues Ken Alder, a historian at Northwestern University and author of The Lie Detectors: The History of an American Obsession. Research on fMRI lie detection has progressed much more openly than that on polygraphs, but Alder is concerned that fMRI may turn out similarly to operate as a placebo if used at this stage, catering to what University of Pennsylvania law professor Stephen Morse calls the "lure of mechanism" in courts and otherwise.

Still, Laken sees the machine as a clear alternative to the polygraph. Unlike the older technology, he says, on-site fMRI test administrators can send out brain scans for independent analysis. Questions are presented on a screen, eliminating the human element, and the entire process is completed in under an hour.

Elizabeth Phelps, a professor of psychology at New York University, raises concerns about potential test-beating strategies such as thinking about unrelated topics or doing mental arithmetic, though Laken denies being fooled by these in preliminary studies. But in reality, a nonconsensual test-taker need only move his or her head slightly to render the results useless.

And there are other challenges. For one, individuals with psychopathologies or drug use (overrepresented in the criminal defendant population) may have very different brain responses to lying, says Phelps. They might lack the sense of conflict or guilt used to detect lying in other individuals. Laken concedes that they've tested the machine on a rather limited population: 18- to 50-year-olds with no history of drug use, psychiatric disease, or serious traumatic brain injuries. But he says he is content for his clientele to be restricted to "relatively normal people" like Martha Stewart and Lewis "Scooter" Libby - neither of whom has actually used the technology.

There's another drawback: If a person actually believes an untruth, it's not clear if a machine could ever identify it as such. Researchers including Phelps are still debating whether the brain can distinguish true from false memory in the first place. "In law, we're concerned with acting human beings [who] can intentionally falsify or unintentionally falsify," says Stephen Morse, professor of law and psychiatry at the University of Pennsylvania. "To the extent that we're trying to get at the truth, we need a valid measure to understand [the difference]."

Jed Rakoff, US District Judge for the Southern District of New York, says he doubts that fMRI tests will meet the courtroom standards for scientific evidence (reliability and acceptance within the scientific community) anytime in the near future, or that the limited information they provide will have much impact on the stand. In court, most lies are omissions or exaggerations of the truth - among the trickiest to recreate in a laboratory. In his experience, and given the polygraph's history, he says he would argue that the potential for harm outweighs the foreseeable benefits.

"Somehow a brain image seems more convincing than a squiggle on a polygraph. It is more information but is it more informative? I think the jury's really out on that one."
-Elizabeth Phelps

On the other hand, Judy Illes, who is the director of neuroethics at the Stanford Center for Biomedical Ethics, expects to see the technique enter courtrooms in the not-too-distant future. "I believe that technology like fMRI will certainly reach the point where its reliability and accuracy is sufficient to be an indicator of whether someone is lying or being forthright (i.e., the answer to the 'if' question)," writes Illes in an e-mail. "A significant challenge for the legal system, however, is that this kind of technology will unlikely be able to 'get inside someone's head' enough that it can reveal answers to the 'what' question, i.e., what is someone lying about, what is motivating them to lie, and does content and motivation interact with the concept of moral culpability or guilt."

As for Huizenga and Laken, they are both optimistic that the fMRI test will eventually be legally viable, but in the meantime, they would be content to sell their services for out-of-court settlements. According to Rakoff, the best way to get at the truth in the courtroom is still "plain old cross-examination." And in the national security sphere, there's "much more to detecting spies than the perfect gadget," Raichle agrees. "There's some plain old-fashioned footwork that needs to be done."
http://www.the-scientist.com/article/home/53137/
 
Can the suspect tell his story backwards? If not, he's lying
Michael Horsnell

Gene Hunt, the copper from the TV series Life on Mars who batters crooks into submission in the interview room, may not approve. But a cunning new method of dragging the truth from criminals may be on the horizon, thanks to research by university psychologists.

Researchers from the University of Portsmouth claim that the best way to spot a lie is to make the suspect repeat his or her version of events in reverse order.

In a £136,000 project, the researchers worked on the theory that it takes more effort to make up a story than it does to tell the truth. A subject asked to repeat a concocted series of events in reverse order would be under too much of a strain, they claimed, and would make mistakes.

Detectives use many psychological tricks to trip up liars, who betray themselves with obvious signals, from shifting uncomfortably in a seat, through stumbling over words, to failing to make eye contact.

Another interview strategy used, the baseline method, requires investigators to note the way a suspect reacts to small talk before an interview compared with how he reacts to penetrating questions.

Finally there is the behavioural analysis interview (BAI), in which interviewers compare the body language of liars and of truth-tellers in response to a set list of questions.

Researchers asked 290 police officers to examine the interviews of 255 students who were given true and false details to use in their answers.

Traditional police interview methods were used in the study, and in those that employed the reverse order tactics – described as “cognitive load interviews” – the interviewer asked the suspects to recall a series of events from the most recent backwards.

Officers were less likely to detect the liars when traditional methods were used in the interviews, but were more likely to detect lies when the subjects were asked to recall events in reverse order.

The researchers, whose study, Interviewing to Detect Deception, was funded by the Economic and Social Research Council, believe that serial criminals are so well versed in police interviews that they know how to dodge the psychological tricks. But the reverse order method imposes an additional mental stress on liars.

Professor Aldert Vrij, one of the researchers, said: “Those [police officers] paying attention to visual cues proved significantly worse at distinguishing liars from those telling the truth than those looking for speech-related cues.

“In another experiment, liars appeared less nervous and more helpful than those telling the truth, contrary to the advice of the BAI strategy.

“Certain visual behaviours are associated with lying, but this doesn’t always work. Nor is comparing a suspect’s responses during small talk, and then in a formal interview, likely to be much help.

“Whether lying or telling the truth, people are likely to behave quite differently in these two situations.

“Evidence also suggests that liars are concerned about not being believed, and so are unlikely to come across as less helpful than truthful people during interview. If anything, guilty people are probably even keener to make a positive impression. All of this makes the investigator’s job very difficult.”

Trying the reverse order tactic worked much better. “Unlike truth-tellers, liars tend to tell their stories in a strict chronological time order and diverting from this order may well be too difficult for them to do,” Professor Vrij said.

“Lying takes a lot of mental effort in some situations, and we wanted to test the idea that introducing an extra demand would induce additional cues in liars. Analysis showed significantly more nonverbal cues occurring in the stories told in this way and, tellingly, police officers shown the interviews were better able to discriminate between truthful and false accounts.”

Further research on this method is to be conducted and the full findings will be shared with constabularies, possibly to come up with a new technique for interviewing suspects.

http://www.timesonline.co.uk/tol/news/u ... 895986.ece
 
Babies not as innocent as they pretend
By Richard Gray, Science Correspondent
Last Updated: 12:01am BST 01/07/2007

Whether lying about raiding the biscuit tin or denying they broke a toy, all children try to mislead their parents at some time. Yet it now appears that babies learn to deceive from a far younger age than anyone previously suspected.

Behavioural experts have found that infants begin to lie from as young as six months. Simple fibs help to train them for more complex deceptions in later life.

Until now, psychologists had thought that developing brains were not capable of the difficult art of lying until the age of four.

Following studies of more than 50 children and interviews with parents, Dr Vasudevi Reddy, of the University of Portsmouth's psychology department, says she has identified seven categories of deception used between six months and three years old.

Infants quickly learnt that using tactics such as fake crying and pretend laughing could win them attention. By eight months, more difficult deceptions became apparent, such as concealing forbidden activities or trying to distract parents' attention.

By the age of two, toddlers could use far more devious techniques, such as bluffing when threatened with a punishment.

Dr Reddy said: "Fake crying is one of the earliest forms of deception to emerge, and infants use it to get attention even though nothing is wrong. You can tell, as they will then pause while they wait to hear if their mother is responding, before crying again.

"It demonstrates they're clearly able to distinguish that what they are doing will have an effect. This is essentially all adults do when they tell lies, except in adults it becomes more morally loaded."

She added: "Later it becomes more sophisticated by saying, 'I don't care' when threatened with a punishment - when they clearly do."

Dr Reddy thinks children use early fibs to discover what kinds of lie work in certain situations, and also learn the negative consequences of lying too much.

http://tinyurl.com/yt7cdc
 
I was certain there was a thread specifically about this particular issue, but I'll be buggered if I can find it.
Anyway, there is currently a fair bit of interest in the area of 'Voice Stress Analysis' (lie detecting to you and me) because the government, with their fixation on modernising (aka nausing stuff up by using inflexible and inappropriate technologies/databases for every bloody thing) are determined to use this shiny 'science' to catch out all those tax-dodging billionaires...sorry, my mistake, those benefits claimants that James Purnell is currently gunning for.

The truth is on the line


Charles Arthur
The Guardian, Thursday 12 March 2009

A voice analysis system is heralded as the answer to millions lost through fraud - yet two academics claim it is about as valid as astrology


It may seem contrary - even churlish - to doubt a technology claimed to have prevented millions of pounds of fraudulent insurance and benefit claims around the world. Yet that's what Francisco Lacerda, a professor of linguistics at Stockholm University, and Anders Eriksson, professor of phonetics at Gothenburg University, have done in a scientific paper.

They say the system, used to try to detect people lying in phone calls made to 25 UK councils and a number of car insurers, is no more reliable than flipping a coin - and that millions of pounds have been spent on a technology that has not been validated scientifically, and for which the claims about its function are "at the astrology end of the validity spectrum".

The claims publicly made for the voice risk analysis (VRA) software being used by trained operators at some local councils since May 2007 sound impressive. "Phone lie detector led to 160 Birmingham benefit cheat investigations", said the Birmingham Post. The Department for Work and Pensions has already spent £1.5m installing 150 "seats" of the software - plus training from its UK reseller, Amersham-based DigiLog, for each group - in councils, as part of two sets of pilot tests of the VRA system.

Insurance claim

Highway Insurance, which has used DigiLog's product since 2002, claimed in 2007 that the system has "successfully prevented more than £11m in potentially fraudulent motor insurance claims" because "Highway has screened nearly 19,000 motor claims cases since 2002, with more than 15% repudiated or withdrawn."

That suggests the system works. But perhaps the wording is important: it says they were potentially, not demonstrably, fraudulent. Scientists say telling people they are being monitored by a "lie detector" (real or not) makes them more likely to be truthful. The example cited in Lacerda and Eriksson's paper is of prison inmates interviewed about their drug use, and then tested by urinalysis and hair samples - an objective method. With "lie detection", only 14% lied; without it, 40% did.

The software is from Nemesysco, an Israeli company, which licenses DigiLog to sell it in the UK. Sales to the government are handled jointly by DigiLog (which does the staff training) and Capita. Nemesysco claims it applies "layered voice analysis" (LVA): "LVA uses a patented technology to detect 'brain activity traces' using the voice as a medium. By utilising a wide-range spectrum analysis to detect minute involuntary changes in the speech waveform itself, LVA can detect anomalies in brain activity and classify them in terms of stress, excitement, deception and varying emotional states".

In the UK, the system is known as VRA. Callers to Harrow council to make a housing benefit claim are warned their call may be subjected to voice analysis. The DigiLog software monitors the line the operator is on: if it reckons patterns in the voice indicate some form of stress, the operator hears a beep. Thus alerted, the operator is trained to begin asking questions that may uncover the truth.

Harrow visits anyone who chooses not to take part in a VRA call; it says there has been only one complaint since its introduction, "indicating that customers do not feel intimidated by the process". It claims the technology has saved it about £110,000 in benefits payments, helped identify 126 incorrectly awarded single-person council tax discounts - worth £40,000 - and prompted reviews of 304 claims. Of these, 47 were no longer valid, saving another £70,000. Birmingham city council is equivocal: no prosecutions have followed VRA's use - and in some cases, the benefits paid have even been raised.

Yet nobody testing the system seems to have tried generating the beep in the operator's ear by the electronic equivalent of a coin flip. Measuring the difference in effectiveness between random beeps and the proper system (without telling the operator) would be a scientific "blind" test: that could show whether the system is worth its cost or whether it was just the more assertive questions, allied to the "lie detector" warning, that made the difference.
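To make that proposed blind test concrete, here is a hedged toy simulation in Python: two arms of calls, one in which the operator's beep is effectively a coin flip and one in which it comes from a detector granted some assumed, modest validity. The lie rate, deterrence effect and validity figures are all invented parameters; the point is only what the comparison would measure.

```python
import random

random.seed(1)

def run_arm(n_calls, beeper, lie_rate=0.2, deterrence=0.5):
    """Count lies caught in one arm of the blind test.

    Each caller lies with probability `lie_rate`. When the operator
    hears a beep, assertive questioning uncovers the lie with
    probability `deterrence` (an assumption). `beeper(is_lying)`
    decides whether the operator hears a beep for this call.
    """
    caught = 0
    for _ in range(n_calls):
        is_lying = random.random() < lie_rate
        if is_lying and beeper(is_lying) and random.random() < deterrence:
            caught += 1
    return caught

coin_flip = lambda lying: random.random() < 0.5                     # random beeps
vra_like = lambda lying: random.random() < (0.6 if lying else 0.4)  # assumed weak validity

print("random beeps:", run_arm(10_000, coin_flip))
print("'VRA' beeps :", run_arm(10_000, vra_like))
```

If the two arms caught similar numbers of fraudulent claims, that would suggest the warning and the assertive follow-up questions, not the analyser, were doing the work.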

In the absence of such scientific investigation, the next best step is to analyse the software. In a paper titled "Charlatanry in forensic speech science: a problem to be taken seriously", published in the International Journal of Speech, Language and the Law, Eriksson and Lacerda analysed the code in the 2003 patent for Nemesysco's software. They say it comprises about 500 lines in Microsoft's simple Visual Basic programming language. That code carries out the signal analysis, they say, and then offers the multiple levels of "certainty" to operators trying to decide whether someone is being truthful.

Call their bluff

"At best, this thing is giving you an indication of how [voice] pitch is changing," Lacerda told the Guardian. "But there's so much contamination by other [noise] factors that it's a rather crude measure." In the paper - which has been withdrawn from the website of its publisher, Equinox Publishing, after complaints from Nemesysco's founder that it contains personal attacks - the scientists say the scientific provability of the Nemesysco code is akin to astrology. The deterrent effect "is no proof of validity, just a demonstration that it is possible to take advantage of a bluff".

That chimes with one specialist, who spoke on condition of remaining anonymous. "Nobody seems to have done any sensible research into this," he says. "[The clients have] all talked to salesmen rather than scientists. Study after study shows low validity, and chance level for reliability. But people won't listen. They don't try them in controlled trials; they make a public announcement they're using it, then feel happy they've got a 30% fall in claims. It's called the 'bogus pipeline effect'. People are frightened [of the threat]."

Stress at work

But Lior Koskas, the business development manager of DigiLog, says the VRA system cannot be separated from its user, because the system only picks up stress. He does not claim it spots "lies" on its own. "Only when the technology and an operator trained by us spots it, then can we say there's a risk someone is lying." Has there been a scientific "blind test" of the system? "No," Koskas says, "you can't say you're using something if you aren't."

He adds that the technology "hasn't been scientifically validated", but he rejects Lacerda and Eriksson's criticisms. "With any technology you will have opinions," he says. "But how many of these scientists have tested it properly? They talk about the technology in isolation, as though you don't need anything from the operator except turning it on or off. But the majority of the training course is about linguistic training analysis, learning to listen. Anybody using this [technology] in the UK doesn't use it in isolation."

What would Lacerda advise the government and companies considering spending money on the system to do? "Spend it on educating the people who are going to interview people, because that would be much more valid and ethically sensible."

Yossi Pinkas, Nemesysco's vice-president of sales and marketing, insists the system "can't be tested in a lab environment, because you're testing emotion". To him, Lacerda and Eriksson's analysis is flawed because "there's no scientific field of 'voice analysis', only voice recognition".

LINK

Note, "Hasn't Been Scientifically Tested" but being rolled out at enormous cost anyway.

The paper "Charlatanry in forensic speech science: a problem to be taken seriously" is available to read, of course, on tinternet and runs to around 25 pages. Really, do seek it out. The LVA stuff starts on page 11.

Also, a politics blogger going by the name of Unity, along with a couple of journalist sorts, is doing some investigating into this; after all, it is increasingly scarce public money that will be funding all this.

http://www.liberalconspiracy.org/2009/0 ... r-testing/
http://www.liberalconspiracy.org/2009/0 ... l-started/
http://www.ministryoftruth.me.uk/2009/0 ... -evidence/
http://www.ministryoftruth.me.uk/2009/0 ... t-mention/
 
A recent experiment suggests a liar's lies can be made to appear more divergent from the same informant's true statements / stories by giving the informant a secondary cognitive / memory task to perform.
Exposing Liars by Distraction – Science Reveals a New Method of Lie Detection

According to an experiment, investigators who asked a suspect to carry out an additional, secondary, task while being questioned were more likely to expose liars.

A new method of lie detection shows that lie-tellers who are made to multitask while being interviewed are easier to detect.

It has been clearly established that lying during interviews consumes more cognitive energy than telling the truth. Now, a new study by the University of Portsmouth has found that investigators who used this knowledge to their advantage by asking a suspect to carry out an additional, secondary, task while being questioned were more likely to expose liars. The extra brain power required to concentrate on a secondary task (other than lying) was particularly challenging for lie-tellers.

In this experiment, the secondary task used was to recall a seven-digit car registration number. The secondary task was only found to be effective if lie tellers were led to believe that it was important. ...
FULL STORY: https://scitechdaily.com/exposing-l...cience-reveals-a-new-method-of-lie-detection/

PUBLISHED REPORT:
The Effects of a Secondary Task on True and False Opinion Statements
Aldert Vrij, Haneen Deeb, Sharon Leal and Ronald P. Fisher
28 March 2022, International Journal of Psychology and Behaviour Analysis.
DOI: 10.15344/2455-3867/2022/185
 
Interesting (Vrij is always a good read, his book "Detecting Lies and Deceit" is really very good). It feels as if the study is doing more to support the hypothesis that lying requires a higher cognitive load than the other way around... Vrij notes that it's very subjective as a technique - cognitive load might matter when yer making it up on the spot, but if you've planned and rehearsed your story out loud, it really might not matter... and we've all met folk for whom lying is so second nature that I doubt their brains are working any harder than an honest person's... when lying is literally some people's way of operating in the world.
 

Agreed ... The experiment only demonstrates you might be able to diminish the coherence or believability of a lie while it's being created in real time. That doesn't strike me as "news". In any case, it's still up to the listener(s) to decide whether the story being told suggests falsehood. Finally, this technique represents active interference with the informant while he / she is being interviewed / interrogated, so there's a built-in basis for rebuttal (i.e., that the interviewer / interrogator disrupted the informant's performance and caused the errors suspected of representing lies).
 
As things currently stand, no lie detection technique is accurate enough to get within shouting distance of 'reasonable doubt' as evidence, but as an aid to an ongoing investigation some techniques seem to help a bit.
 
There is no single thing that is a "polygraph". It is a category of devices which measure multiple things.

A polygraph measures several involuntary physiological responses to stress, and looks for patterns. The test has to be calibrated by the subject first being asked a series of neutral questions and the operator measuring their responses. The interviewer then conducts the interview and the operator notes any unusual "spikes" in physiological responses.
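As a toy illustration of that calibrate-then-compare procedure (and emphatically not any vendor's actual implementation), the Python sketch below baselines two physiological channels on neutral questions and flags relevant-question responses that spike well above baseline. The channel names, readings and threshold are all invented.

```python
import statistics

def spikes(neutral, relevant, threshold=2.0):
    """Flag relevant-question responses that spike above the
    neutral-question baseline, per channel."""
    flagged = {}
    for channel, baseline in neutral.items():
        mu = statistics.mean(baseline)
        sd = statistics.stdev(baseline) or 1e-9  # guard against zero spread
        flagged[channel] = [i for i, r in enumerate(relevant[channel])
                            if (r - mu) / sd > threshold]
    return flagged

# Invented readings: a baseline from neutral questions, then three
# relevant questions. Question index 1 spikes on both channels.
neutral = {"skin_conductance": [2.1, 2.3, 2.0, 2.2],
           "heart_rate": [71, 73, 70, 72]}
relevant = {"skin_conductance": [2.2, 4.8, 2.1],
            "heart_rate": [72, 95, 74]}
print(spikes(neutral, relevant))
```

As the following paragraphs argue, everything after this point - deciding what a spike means - is operator judgement, and that is where the method falls down.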

It is complex and nuanced, but relies on experience, skill, judgement and opinion. If it works at all, it is more of an art than a science.

At best, this can tell the operator that the subject is having a stress reaction either to hearing the question itself, or to the answer that they are giving. It may help a skilled interviewer to direct their later questions.

The leap from observing a stress reaction to concluding that the subject must be lying is completely unreliable for three reasons:
  1. A guilty subject can use known techniques to induce stress reactions during the neutral questions, thus messing with the calibration of the test.
  2. An innocent subject can have a stress reaction during the test, for example because they fear not being believed, or because they are hiding something else unrelated to the purpose of the interview.
  3. Many people are perfectly capable of believing what they are saying at the moment that they are saying it. In my previous employment as a fraud investigator, I was very familiar with the sort of customer who did something not dissimilar to method acting. They were as guilty as Hell, but genuinely indignant at the suggestion that they would sully themselves by lying.
Most jurisdictions do not accept polygraph results as evidence in court cases because they are known to be unreliable.

This is widely known by the public and, as a fraud investigator, I treated the bold assertion, "I'll even take a lie detector test to prove it" as a risk indicator that the person may be lying.

A so-called lie detector was used in the daytime TV show, The Jeremy Kyle Show. Individuals were challenged in a hostile and aggressive manner if they dared to suggest that the lie detector results were wrong, even though Kyle himself only claimed it was "96%" accurate: an admission that it would be wrong about 1 time in 25. Most responsible polygraph operators claim a much lower accuracy percentage.

However, the type of person who went on the Jeremy Kyle Show as a guest was typically unsophisticated, and was likely to believe in the "science" of the polygraph. This may have influenced the results, and the way that the subjects interacted with the experts conducting the test.

My favourite example was the woman who said to the operator in the calibration stage, "I didn't steal the money, but if I fail the test, I'll give it back."

Famously, the show was eventually cancelled when a guest who maintained his innocence committed suicide after "failing" the lie detector test.


As a former industry professional, I have little faith in any technique or device claimed to detect when someone is lying — or which part of what they are saying is a lie — purely from their physiological responses, or body language, or voice intonation, etc. The subject is too complex, and variables such as cultural background, mental health, physical health, and life experience would add to the unreliability. All you can identify are risk indicators: things which suggest the probability that the person is being dishonest.

Intuition and experience are equally unreliable in detecting a liar by their behaviours. We all think we're good at it, and we're all pretty bad at it.

The only way to tell if someone is lying is to find discrepancies.

If the person says two things that cannot both be true at the same time, then at least one of them must be untrue. They are saying something that is a lie.

If the person says something that cannot be reconciled with facts that have been established conclusively with evidence, then you know exactly what the lie is.
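That discrepancy test is the one genuinely mechanical part of lie detection, and it can be expressed in a few lines. In the hedged toy below, statements are assumed to have already been reduced to (fact, asserted value) claims, which is of course the hard, human part of any real investigation.

```python
# Toy consistency check for the "find discrepancies" method above.
# The facts, statements and their reduction to claims are invented.
established_facts = {"was_at_home_9pm": True}

statements = [("was_at_home_9pm", False),  # "I was out all evening"
              ("owns_blue_car", True),
              ("owns_blue_car", False)]    # contradicts an earlier statement

seen = {}
for fact, value in statements:
    if fact in seen and seen[fact] != value:
        print(f"internal contradiction on '{fact}': at least one statement is a lie")
    if fact in established_facts and established_facts[fact] != value:
        print(f"'{fact}' conflicts with the evidence: that statement is the lie")
    seen[fact] = value
```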
 
On the "96% accurate" claim: polygraph accuracy is all over the place according to Vrij, 63-99% depending on polygraph type, and those are laboratory tests, so the stakes are not high.

From memory, the best results were using CBCA (Criteria-Based Content Analysis), and only if the results were scored independently by three trained people and then peer reviewed together - generally 60-70% accurate. I suspect you could train a machine learning program to do this quite well. Still not 'reasonable doubt' level. Might help with investigation.
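On that machine-learning speculation, here is a minimal sketch of what the pipeline might look like, assuming each statement has already been reduced to averaged rater scores on a handful of CBCA criteria. The dataset, effect sizes and choice of classifier are all invented, so the accuracy it prints means nothing; the point is only the shape of the idea.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Invented data: one row per statement, one column per CBCA criterion
# (logical structure, unstructured production, quantity of detail, ...),
# each the average of three raters' scores. Truthful statements are
# assumed to score slightly higher, per the CBCA hypothesis.
n, n_criteria = 200, 5
truthful = rng.normal(3.1, 0.8, size=(n, n_criteria))
fabricated = rng.normal(2.8, 0.8, size=(n, n_criteria))
X = np.vstack([truthful, fabricated])
y = np.array([1] * n + [0] * n)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.0%}")
```

Even a real version would inherit CBCA's ceiling: a model trained on human ratings can at best reproduce the 60-70% the human raters achieve.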

And on the point about discrepancies: establish ground truth, that's the best way. What people say is nowhere near as useful as finding out what they actually do or did.

PS. "Detecting Lies and Deceit" by Aldert Vriji. Well worth a read, it's written for the lay reader and reviews just about any lie detecting technique you can think of and none of them come out rose smelling. Refreshingly direct.
 