Reviews have always been an essential part of Amazon’s customer shopping experience. Amazon makes it easy for customers to leave honest reviews, which help inform the purchase decisions of millions of other customers worldwide. At the same time, the company works to make it hard for bad actors to exploit Amazon’s trusted shopping experience.
What Happens When a Customer Submits a Review?
Before a review is published online, Amazon uses artificial intelligence (AI) to analyze it for known risk indicators, such as the reviewer’s behavior, language patterns, and the relationship between the reviewer and the product, to determine whether the review is fake. Amazon’s AI also considers the reviewer’s purchase history, browsing behavior, and other contextual information to assess the review’s authenticity. The vast majority of reviews meet Amazon’s high bar for authenticity and are posted right away.
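The signal-based screening described above can be pictured as combining weighted risk indicators into a single score. The sketch below is purely illustrative: the feature names, weights, and base value are invented assumptions for this example, not details of Amazon’s actual system.

```python
# Illustrative sketch of signal-based review screening.
# Feature names and weights are hypothetical, not Amazon's real model.

def score_review(signals: dict) -> float:
    """Combine weighted risk signals into a single score in [0, 1]."""
    weights = {
        "account_age_days": -0.0002,  # older accounts are lower risk
        "reviews_last_24h": 0.15,     # bursts of reviews look suspicious
        "verified_purchase": -0.30,   # a confirmed purchase lowers risk
        "seller_relationship": 0.40,  # ties to the seller raise risk
    }
    base = 0.5  # neutral starting point before any evidence
    score = base + sum(weights[k] * signals.get(k, 0) for k in weights)
    return min(max(score, 0.0), 1.0)  # clamp to [0, 1]

# A new account posting a burst of reviews with a seller relationship
# scores as high risk:
risky = score_review({"reviews_last_24h": 3, "seller_relationship": 1})
print(risky)  # → 1.0 (clamped)
```

In a production system these weights would be learned from labeled data rather than hand-set, and the feature set would be far richer, but the structure of the decision is the same: many weak signals combined into one calibrated risk score.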
However, the company takes several steps if potential review abuse is detected. If Amazon is confident a review is fake, it moves quickly to block or remove it and takes further action, such as revoking the reviewer’s posting privileges, blocking bad-actor accounts, and even pursuing litigation against the bad actors. If a review is suspicious but additional evidence is needed, Amazon’s expert investigators, who are specially trained to identify abusive behavior, examine other signals before taking action. In 2022 alone, Amazon detected and proactively blocked more than 200 million suspected fake reviews across its stores worldwide.
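The enforcement flow just described is a confidence-threshold decision: act immediately at high confidence, escalate to human investigators in the ambiguous middle, and publish the rest. The thresholds below are made-up values for illustration only.

```python
# Hypothetical triage over a fraud-risk score in [0, 1].
# The 0.9 and 0.6 cutoffs are invented for this sketch.

def triage(score: float) -> str:
    """Route a review based on how confident the model is that it is fake."""
    if score >= 0.9:
        return "block"        # high confidence: block or remove immediately
    if score >= 0.6:
        return "investigate"  # suspicious: escalate to human investigators
    return "publish"          # meets the authenticity bar: post right away

print(triage(0.95))  # → block
print(triage(0.70))  # → investigate
print(triage(0.20))  # → publish
```

Splitting the score range this way keeps automated action limited to high-confidence cases while routing borderline cases to trained investigators, mirroring the two-tier process the paragraph describes.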
Among other measures, Amazon uses the latest advancements in AI to stop hundreds of millions of suspected fake reviews, manipulated ratings, fake customer accounts, and other abuse before customers ever see them. Machine learning (ML) models analyze a wide range of proprietary data, including whether the seller has invested in ads (which may be driving additional reviews), customer-submitted reports of abuse, risky behavioral patterns, review history, and more. Amazon leverages large language models (LLMs) alongside natural language processing (NLP) techniques to spot anomalies in this data that might indicate a review is fake or incentivized, say with a gift card, free product, or some other form of reimbursement. Amazon also uses deep graph neural networks (GNNs) to analyze and understand complex relationships and risk patterns, helping it detect and remove groups of bad actors.
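One intuition behind the graph-based detection mentioned above is that coordinated fake-review rings leave structural fingerprints: clusters of accounts that review near-identical sets of products. The sketch below is a deliberately simple stand-in, flagging heavily overlapping reviewer pairs with plain set arithmetic, whereas a real GNN learns such structural signals (and many subtler ones) from the full reviewer-product graph. All data and the overlap threshold are invented for this example.

```python
# Simplified stand-in for graph-based bad-actor detection: flag reviewer
# accounts whose reviewed-product sets overlap heavily. Data is made up.
from collections import defaultdict
from itertools import combinations

reviews = [  # (reviewer_id, product_id) pairs
    ("u1", "p1"), ("u1", "p2"), ("u1", "p3"),
    ("u2", "p1"), ("u2", "p2"), ("u2", "p3"),
    ("u3", "p4"),
]

def suspicious_pairs(reviews, min_overlap=3):
    """Return reviewer pairs whose reviewed-product sets share
    at least min_overlap products, with the overlap size."""
    products_by_user = defaultdict(set)
    for user, product in reviews:
        products_by_user[user].add(product)
    flagged = []
    for a, b in combinations(sorted(products_by_user), 2):
        overlap = products_by_user[a] & products_by_user[b]
        if len(overlap) >= min_overlap:
            flagged.append((a, b, len(overlap)))
    return flagged

print(suspicious_pairs(reviews))  # → [('u1', 'u2', 3)]
```

At Amazon’s scale, pairwise comparison like this would be intractable and far too crude; the value of a GNN is that it propagates risk through the graph, so an account with no suspicious history can still be flagged by its connections to known bad actors.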