
AI-Generated Reviews Undermine Trust in Online Platforms by Deceiving Humans and Detection Systems


Recent research by Balázs Kovács, a professor at the Yale School of Management, shows that AI-generated restaurant reviews can deceive both human readers and automated AI detectors, a finding that could undermine the credibility of online reviews.

According to a report from SuchScience, online reviews play a crucial role in the decision-making process of consumers, with many relying on them to make informed choices. However, the rise of advanced AI language models poses a threat to the reliability of these reviews. Professor Kovács carried out two experiments involving 301 participants to assess the capacity of AI-generated reviews to trick both humans and AI detectors.

In the first study, participants were presented with a mix of real Yelp reviews and AI-generated reviews created by OpenAI’s GPT-4. Surprisingly, they were only able to correctly identify the source about half of the time, which is no better than random chance. The second study, where GPT-4 produced entirely fictitious reviews, yielded even more striking results, with participants mistakenly categorizing AI-generated reviews as human-written 64 percent of the time.
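To give a sense of how such fictitious reviews can be produced, the sketch below prompts GPT-4 for a made-up restaurant review using OpenAI's Python SDK. The prompt wording, parameters, and function name are illustrative assumptions; the article does not disclose the exact prompts Kovács used.

```python
# Illustrative sketch only: the study's actual prompts and settings are not published here.
# Assumes the openai Python SDK (>= 1.0) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def generate_fake_review(cuisine: str = "Italian") -> str:
    """Ask GPT-4 to invent a short, plausible restaurant review."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": (
                    f"Write a short, casual Yelp-style review of a fictional {cuisine} "
                    "restaurant. Mention one dish and one detail about the service."
                ),
            }
        ],
        temperature=0.9,  # higher temperature for more varied, natural-sounding text
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_fake_review())
```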

Kovács also tested AI detectors designed to distinguish human-written from AI-generated text. Copyleaks, a publicly available AI-text detection tool, labeled every review as human-written, failing to flag any of the AI-generated content. Even GPT-4 itself struggled to tell human-written and AI-generated reviews apart when asked to rate, on a scale from 0 to 100, how likely each review was to be AI-generated.
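As a rough illustration of that last test, the snippet below asks GPT-4 to score a review from 0 to 100 for the likelihood that it is AI-generated. This is a hedged approximation of the setup described above, not the study's actual code; the prompt text and the number-parsing logic are assumptions.

```python
# Illustrative sketch of GPT-4 "self-detection", assuming the openai Python SDK (>= 1.0).
# The exact prompt used in the study is not given in the article; this wording is an assumption.
from openai import OpenAI

client = OpenAI()

def ai_likelihood_score(review_text: str) -> int:
    """Ask GPT-4 to rate, from 0 to 100, how likely the review is to be AI-generated."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": (
                    "On a scale from 0 (certainly human-written) to 100 (certainly "
                    "AI-generated), how likely is it that the following restaurant "
                    "review was written by an AI? Reply with a single number.\n\n"
                    f"Review: {review_text}"
                ),
            }
        ],
        temperature=0,  # keep the scoring as deterministic as possible
    )
    reply = response.choices[0].message.content.strip()
    digits = "".join(ch for ch in reply if ch.isdigit())
    return int(digits) if digits else -1  # -1 signals an unparseable reply

if __name__ == "__main__":
    print(ai_likelihood_score("The pasta was incredible and the staff made us feel at home!"))
```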

These findings have significant implications for review platforms, businesses, and consumers. There is a risk that dishonest individuals could exploit AI to construct fake reviews, leading to a loss of trust in online platforms and disproportionately affecting small businesses that rely heavily on genuine reviews. This study highlights the need for review platforms to reconsider their authentication processes and for policymakers to consider implementing regulations to ensure transparency.

For more information, visit SuchScience.
