Call centers may soon employ AI-based lie detectors to determine whether people are being genuine. Notably, the new “audio analysis system” was developed in Germany, not in the US, where the “customer is always right” attitude generally prevails.
According to the researchers, their “AI-method for capturing and sensing emotional data” proved to have an accuracy rate of up to 98%.
When the Customer Is Full of It
The researchers suggest their AI algorithms can determine when customers are full of it:
“Great progress has been made in recent years to automatically detect basic emotions like joy, anger, etc. Complex emotions, consisting of multiple interdependent basic emotions, are more difficult to identify. One complex emotion which is of great interest to the service industry is difficult to detect: whether a customer is telling the truth or just a story…,” the study abstract states.
Notably, they suggest the AI would be most suitable for customer interactions by phone. Also, there may be other even more disturbing applications, like job interviews.
“This could, for example, lead to a reduction in doubtful insurance claims, or untruthful statements in job interviews. This would not only reduce operational losses for service companies but also encourage customers to be more truthful,” they state.
Now, what would customers use to detect if their call center is being truthful or not? Perhaps, there would also be an app for that? (Because it definitely goes both ways!) Furthermore, will we need to test our prospective employers to see if they are full of 💩?
Pre-Processing Using Methods From the 80s
First, the researchers recorded a group of volunteers, predominantly high school and undergraduate university students, during 20 debates on polarizing subjects held online (due to Covid).
Then, they analyzed the recordings using a method first proposed in the 1980s: Mel-frequency Cepstral Coefficients (MFCCs). Today, it’s widely used in speech recognition. Notably, since this method works even in suboptimal audio conditions, it was ideal for analyzing the poor-quality audio typical of online calls or phone lines.
From there, it becomes quite technical, using an algorithm to create a “spectral profile” of each audio frame. Then, the profile is mapped to the Mel Scale.
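The paper does not publish its code, but the Mel-scale mapping mentioned above can be illustrated with the standard hertz-to-mel conversion formula commonly used in MFCC pipelines (this is one widespread variant; the researchers’ exact implementation may differ):

```python
import math

def hz_to_mel(freq_hz: float) -> float:
    """Convert a frequency in hertz to the Mel scale.

    Uses the common formula mel = 2595 * log10(1 + f / 700),
    which compresses high frequencies to mirror how humans
    perceive pitch: equal steps in mels sound roughly equal.
    """
    return 2595.0 * math.log10(1.0 + freq_hz / 700.0)

def mel_to_hz(mel: float) -> float:
    """Inverse mapping, used e.g. to place Mel filter-bank edges."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

# By construction, 1000 Hz maps to approximately 1000 mel,
# while higher frequencies are increasingly compressed.
print(hz_to_mel(1000.0))   # ~1000 mel
print(hz_to_mel(8000.0))   # far less than 8000 mel
```

In a full MFCC pipeline, each audio frame’s spectrum is weighted by triangular filters spaced evenly on this Mel scale, then log-compressed and decorrelated, which is what yields the “spectral profile” the article describes.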
On paper, the results might seem convincing, but are they? Moreover, is an AI lie detector a “good” thing to employ in job interviews or for resolving a customer service issue?
When the system errs, it could significantly impact lives and unfairly damage a person’s reputation. On the other hand, that hasn’t stopped polygraph testing from being widely used today. Even so, the American Psychological Association states there is “little evidence that polygraph tests can accurately detect lies.”
“For now, although the idea of a lie detector may be comforting, the most practical advice is to remain skeptical about any conclusion wrung from a polygraph,” the association states.
Will AI Lie Detectors Be the Next Facial Recognition Fiasco?
What comes to mind is something similar: facial recognition software. Ongoing reports suggest that such software is racially biased and, more and more, infringes on personal privacy. Nevertheless, the facial recognition startup sector is thriving.
Similarly, one could easily see AI lie detectors becoming widely used one day. However, the researchers note that their algorithms are based on German communication patterns. Thus, the system may not be accurate for speakers from other cultures or languages.
Despite the obvious ethical and accuracy concerns, we might expect widespread AI lie detectors given our (screwed up) world today.
In the end, corporations may see it as a means to lower the cost of customer claims. In that case, will they genuinely care if the claim is valid or not? Will we need a lie detector to determine that answer?
“Flagging questionable parts of the conversation would enable the agent to follow up more closely on that topic and potentially prevent the customer from making false statements or committing wrongdoing,” states the research paper.
Interestingly, the paper is entitled, “Put your money where your mouth is: Using AI voice analysis to detect whether spoken arguments reflect the speaker’s true convictions.” Thus, the issue of money is front and center.
We all know the customer is most definitely not always right. But, will the use of AI to tell them so be a wise option? Or, could it be a fantastic way to lose customers and new hires? In America, one can just imagine the potential culture shock to come.