“Perceptual processes that are very easy for humans are hard for computers,” [Marian] Bartlett said. “This is one of the first examples of computers being better than people at a perceptual process.”
First, Bartlett’s team recruited 25 volunteers and recorded two videos with each. One video captured the subject’s facial expression as he or she experienced real pain from submerging one arm in a bucket of ice water for a minute. For the other video, the researchers asked subjects to fake being in pain for a minute while they dipped their arm in a bucket of warm water.
To set a benchmark for testing their computer system, the researchers first showed these videos to 170 people and asked them to distinguish fake from real pain. They did no better than chance. And they didn't improve much with practice: even after watching 24 pairs of videos and being told which were fake and which were real, human observers achieved only about 55 percent accuracy — statistically better than chance, but just barely.
The computer system, on the other hand, got it right 85 percent of the time.
The system has two main elements: computer vision and machine learning. The computer vision system can identify 20 of the 46 facial movements described in FACS, virtually in real time… The system also captures information about the timing of the movements, such as how quickly the lips part and how long they stay that way. The information gathered by the computer vision system is then fed into a machine learning system that learns to identify the patterns of features that distinguish real from fake expressions.
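The two-stage pipeline described above — extract per-clip features (facial action unit intensities plus timing), then train a classifier on them — can be sketched as follows. Everything here is illustrative: the article does not name the specific learner Bartlett's team used, the feature names (e.g. AU4, AU25) are assumed stand-ins for the 20 FACS movements the vision system detects, and the data is synthetic. A hand-rolled logistic regression serves as a minimal stand-in classifier.

```python
import math
import random

random.seed(0)

# Hypothetical feature vector per video clip: a few FACS action unit
# intensities (e.g. AU4 brow lower, AU6 cheek raise, AU25 lips part)
# plus one timing feature (how quickly the lips part). Values are
# synthetic; the separation between classes is assumed for illustration.
def synth_clip(real):
    if real:   # genuine pain: assumed smoother, more moderate movements
        x = [random.gauss(0.7, 0.1), random.gauss(0.6, 0.1),
             random.gauss(0.5, 0.1), random.gauss(0.3, 0.05)]
        return x, 1
    else:      # faked pain: assumed exaggerated, abrupt movements
        x = [random.gauss(0.9, 0.1), random.gauss(0.3, 0.1),
             random.gauss(0.8, 0.1), random.gauss(0.8, 0.05)]
        return x, 0

data = [synth_clip(i % 2 == 0) for i in range(200)]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

# Minimal logistic-regression learner trained by stochastic gradient
# descent; a stand-in for whatever classifier the team actually used.
w = [0.0] * 4
b = 0.0
lr = 0.5
for _ in range(300):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of the log loss w.r.t. the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

correct = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == (y == 1)
    for x, y in data
)
print(f"training accuracy: {correct / len(data):.2f}")
```

Because the synthetic classes are well separated, the classifier fits them almost perfectly; the point is only the shape of the pipeline — per-clip feature vectors in, a learned real-vs-fake decision out — not the 85 percent figure from the actual study.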