There is a particular kind of confidence that kicks in when you see a product with 4.8 stars and 3,400 reviews. The brain does a quick calculation, decides the crowd can’t be wrong, and the item lands in the cart. It’s a shortcut that feels rational. It relies on the implicit assumption that those thousands of ratings represent thousands of real people who bought the thing, used it, and reported back honestly.
That assumption is increasingly wrong. The system underpinning consumer trust in online shopping has been quietly corrupted to the point where a five-star average on a cheap Amazon gadget may reflect almost nothing about the actual product. The manipulation runs deeper and operates through more channels than most shoppers realize. And for the product categories most people browse on impulse – inexpensive electronics, phone accessories, kitchen tools – the problem is especially severe.
This is not a fringe issue or an occasional anomaly. It is, at this point, a structural feature of how the online marketplace operates. Understanding exactly how the gaming works, and what the regulatory response looks like in 2026, is now a practical necessity for any consumer who shops online.
The Scale of the Problem
On average, 30% of online reviews across major platforms are considered fake or inauthentic. That figure alone is unsettling. But when you break it down by platform and product category, it gets worse. A Fakespot analysis found that approximately 43% of reviews on Amazon’s bestselling products were unreliable or fabricated. The problem was especially severe for clothes, shoes, and jewelry – where 88% of reviews were flagged as unreliable – and electronics, at 53%.
The financial damage is not abstract. Fake reviews cost online consumers an estimated $770.7 billion worldwide in 2025 alone, and projections put that figure at $1.07 trillion by 2030. At the individual level, the average consumer wastes $125 per year purchasing products based on fake reviews.
The incentive structure driving this fraud is straightforward and powerful. In a five-star rating system, one additional star can boost demand for a product by 38%. Fake reviews boost product sales 12.5% in the first two weeks after they appear. When those numbers are sitting in front of a seller competing in a crowded marketplace, the temptation to manufacture ratings is obvious – particularly when platform algorithms use review scores to determine search rankings, making a better rating directly equivalent to more visibility and more revenue.
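The arithmetic behind that temptation is worth making concrete. The sketch below is purely illustrative: it plugs the two figures cited above into a toy projection, under the simplifying (and unrealistic) assumption that the lifts compound linearly. The function name and the baseline numbers are inventions for the example, not data from any study.

```python
# Illustrative only – uses the figures cited in the text:
# one additional star boosts demand ~38%, and fake reviews
# lift sales ~12.5% in their first two weeks.

STAR_DEMAND_LIFT = 0.38         # demand boost per additional star
FAKE_REVIEW_SALES_LIFT = 0.125  # short-term boost from fake reviews

def projected_weekly_sales(base_sales: float, star_gain: float,
                           fake_review_campaign: bool) -> float:
    """Project weekly unit sales after a rating change, assuming
    the two cited effects simply multiply together."""
    sales = base_sales * (1 + STAR_DEMAND_LIFT * star_gain)
    if fake_review_campaign:
        sales *= 1 + FAKE_REVIEW_SALES_LIFT
    return sales

# A seller moving from 3.8 to 4.8 stars via a fake-review campaign:
print(projected_weekly_sales(1000, star_gain=1.0, fake_review_campaign=True))
# → 1552.5 units per week, up from 1000 – a 55% jump from manipulation alone
```

Even on this crude model, a seller shifting 1,000 units a week sees a 55% jump from one manufactured star plus the short-term review bump, which is why the practice persists despite the enforcement risk.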
82% of consumers encounter fake reviews at least once over a 12-month period, and 46% of identified fake reviews are five out of five stars.
Six Tactics Sellers Use to Inflate Ratings
Understanding how review manipulation actually works is the first step toward seeing through it. Sellers have developed a diverse toolkit, and the more sophisticated methods are specifically designed to avoid detection.
1. Outright Fraud: Automated Server Farms
The most aggressive form of review manipulation involves no real customers at all. Sellers pay third-party operators to use server farms – networks of automated computers or devices – to download apps or purchase products and submit five-star reviews at scale. These operations run entirely outside the platform’s visible ecosystem. Because each review is generated through a unique device or account, the pattern can be difficult to detect through standard moderation. The number of fake reviews is growing 12.1% faster than the total number of real reviews every year, in part because automated generation has reduced the cost and effort required to flood a listing.
2. Cash and Gift Card Incentivization
This tactic is lower-tech but staggeringly widespread. Sellers slip a card or note into their product packaging offering buyers a reward – an Amazon gift card, a full refund, or free merchandise – in exchange for leaving a five-star review. According to a Which? investigation, over four million people could have been offered an incentive in exchange for a five-star Amazon review in a single year. Their nationally representative survey of 1,556 people found that 10% of Amazon shoppers had received a note or card in their product packaging offering an incentive for a five-star review, equating to an estimated 4.5 million people in Great Britain.
The rewards documented in that investigation were not trivial. One customer was sent £50 in Amazon vouchers plus a full refund for leaving a positive review. The incentivization of reviews is strictly against Amazon’s terms and conditions, which is why sellers often instruct reviewers not to mention the incentive letter – making it difficult to know how many reviews have been incentivized.
Of those who had shopped on Amazon in the previous 12 months, 4% were offered a reward specifically for changing a negative review to a positive one.
3. Review Merging (Catalogue Misuse)
This tactic exploits a structural vulnerability in how Amazon manages product listings. Sellers take a dormant product listing that has accumulated genuine positive reviews for a different, unrelated item – and then merge those reviews onto their own current listing. The result is a product that appears to have hundreds of verified five-star reviews when in reality those ratings belong to an entirely different product.
The Which? investigation found more than 100 reviews for a completely different product – a worm on a string – attached to an unrelated listing. Ninety of those reviews were five stars, instantly inflating the listing's rating with praise for a product it never sold.
The UK’s Competition and Markets Authority (CMA) conducted a four-year investigation into fake reviews on Amazon. The CMA’s probe revealed issues like “catalogue abuse,” where sellers misuse reviews from unrelated products to boost their own ratings. Following that investigation, Amazon agreed to implement stricter measures, including sanctions on UK businesses that artificially inflate ratings through fraudulent reviews.
4. Coordinated Private Group Networks
Some sellers coordinate review manipulation through private Facebook groups or messaging platforms. The model works like a reciprocal exchange ring: members post links to their Amazon listings, and other members buy the product, leave a five-star review, and then receive a reimbursement. In 2022, Amazon took legal action against the administrators of more than 10,000 Facebook groups dedicated to orchestrating fake reviews. Despite that enforcement action, the networks have migrated to other platforms and continue operating.

5. AI-Generated Reviews at Scale
The newest and most difficult-to-detect form of review fraud involves large language models writing fake reviews wholesale. DoubleVerify’s Fraud Lab identified a significant increase in apps with AI-powered fake reviews in 2024, finding over three times the number compared to the same period in 2023. Some of the AI-generated reviews contain obvious phrases that point to their artificial origin (“I am a language model”), but others come across as authentic and would be difficult for users to detect.
The scale of AI-generated review fraud across non-Amazon platforms is even more striking. Temu and Shein have seen increases of over 1,300% and 1,500% respectively in AI-generated fake reviews through 2025 – not verified reviews or fake reviews from human review factories, but AI-generated content specifically designed to bury negative reviews and inflate five-star counts.
According to research from Pangram Labs, among AI-written Amazon reviews, 74% gave five-star ratings and 93% carried the “verified purchase” stamp. That last detail is particularly significant: a “verified purchase” badge has traditionally been one of the main signals consumers rely on to distinguish legitimate reviews from planted ones. AI-generated fake reviews are rendering that signal unreliable.
6. Suppression of Negative Reviews
Not all review manipulation involves adding fake positives. Sellers also work to remove or bury legitimate negative reviews. This can mean pressuring customers who left critical feedback – reaching out directly and offering incentives to change the rating, a practice documented in the Which? investigation – or the blunter tactic of drowning out negative reviews by flooding the listing with positive ones. Platforms themselves have also been caught suppressing negative reviews algorithmically or through selective moderation. The FTC has taken enforcement action over this conduct, including against Fashion Nova over allegations that the company suppressed negative reviews.
The scale of the problem has finally produced a regulatory response in both the United States and the United Kingdom, though the two approaches differ significantly.
The FTC’s Consumer Review Rule
The FTC's Final Rule took effect on October 21, 2024. It formally prohibits the sale and purchase of fake consumer reviews, bans buying positive or negative reviews, and requires appropriate disclosures for reviews written by company insiders.
The rule covers reviews and testimonials that misrepresent their author as a real person – AI-generated fake reviews included – or that misrepresent the reviewer's actual experience. Violations allow the FTC to seek civil penalties of up to $51,744 per violation, or per day for ongoing violations.
On December 22, 2025, the FTC took its first enforcement step under the new rule, issuing warning letters to 10 unidentified companies for potential violations, cautioning that continued noncompliance could lead to enforcement action and substantial civil penalties.
UK Enforcement Under the DMCC Act
In April 2025, several practices relating to online reviews became “banned practices” under the UK’s Digital Markets, Competition and Consumers Act 2024, meaning they are automatically deemed unfair and illegal. This includes obtaining and posting fake reviews, paid-for reviews that are not clearly marked as incentivised, hiding negative reviews, and presenting star ratings that give an inaccurate picture.
Investigations launched under these powers have brought the total number of businesses under CMA review to 14. Crucially, the new regime lets the CMA decide for itself whether consumer law has been broken, without having to go through the courts.
Amazon’s Own Enforcement
Amazon has committed enormous resources to the problem. In one year, Amazon spent over $500 million and hired 8,000 employees to combat fake reviews, and in 2024 the company blocked or removed over 275 million fake reviews. In 2025, Amazon’s legal actions led to the shutdown of more than 100 websites attempting to facilitate fake reviews and scams targeting its store.
The scale of those removals is genuinely impressive. But it also points to the magnitude of the underlying problem: that volume of fake content was reaching the platform in the first place, and it is reasonable to assume that not everything was caught.

Consumer Detection: Getting Harder, Not Easier
Consumer confidence in spotting fakes has been inching up – 24% of consumers in 2025 said they were confident they had spotted a fake review, up from 19% in 2024. But detection is becoming more difficult precisely as confidence grows.
A 2025 arXiv study found that both humans and large language models operate at chance level when detecting AI-generated fake reviews – no better than a coin flip. Humans are overconfident in their detection ability, while AI models tend to default to labeling reviews as “real.”
95% of consumers suspect censorship or fake reviews when a product has no negative reviews at all. That is a useful signal, but sellers have adapted to it, occasionally allowing a few three-star reviews to remain visible to create the appearance of authenticity while the five-star average stays artificially elevated.
46% of consumers would suspect a review is fake if it looks AI-generated, while 42% are suspicious of reviews that seem paid or incentivized. The problem is distinguishing AI-generated reviews that look AI-generated from those specifically engineered not to.
What to Do Now
The core takeaway is not that online reviews are worthless – it’s that five-star averages alone are nearly meaningless for certain product categories. The signal-to-noise ratio on cheap, unbranded electronics, accessories, and fast-moving gadgets sold by third-party Amazon sellers has been degraded to the point where the star rating is the least reliable number on the page.
A more defensible approach involves reading the actual text of reviews, not just the average. The full text often exposes reviews that sound dubious, overly vague, or completely unrelated to the item they supposedly endorse. Look specifically at two- and three-star reviews, which are less likely to be fake in either direction, and watch for a burst of reviews posted over a short period, which can indicate manufactured feedback.
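The "burst of reviews" check can even be automated if you can scrape or export the review dates for a listing. The sketch below is a hypothetical heuristic, not any platform's actual detection logic: the function name, window size, and threshold are all assumptions chosen for illustration.

```python
# Hypothetical "review burst" heuristic: flag a listing when an
# unusually large share of its reviews landed in one short window,
# a pattern consistent with a coordinated or purchased batch.

from datetime import date, timedelta

def has_review_burst(review_dates: list[date],
                     window_days: int = 7,
                     burst_share: float = 0.4) -> bool:
    """Return True if any window of `window_days` days contains more
    than `burst_share` of all reviews on the listing."""
    if not review_dates:
        return False
    dates = sorted(review_dates)
    window = timedelta(days=window_days)
    threshold = burst_share * len(dates)
    start = 0
    for end in range(len(dates)):
        # Slide the window start forward until it spans <= window_days.
        while dates[end] - dates[start] > window:
            start += 1
        if end - start + 1 > threshold:
            return True
    return False

# Reviews trickling in weekly look organic; ten on one day do not.
organic = [date(2025, 1, 1) + timedelta(days=7 * i) for i in range(20)]
suspect = [date(2025, 1, 1) + timedelta(days=30 * i) for i in range(5)] \
          + [date(2025, 6, 1)] * 10
print(has_review_burst(organic))  # → False
print(has_review_burst(suspect))  # → True
```

The thresholds would need tuning per category – a genuine product launch or a viral moment also produces a spike – so a flag from a heuristic like this is a reason to read the reviews closely, not proof of fraud.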
For higher-stakes purchases, cross-referencing with professional review sites and independent publications that receive no affiliate income from the sale is worth the extra time. The FTC’s consumer guidance also recommends looking at whether a reviewing website is independent or sponsored. Third-party tools that analyze listing review patterns can flag unusual clustering of positive ratings, reviewer behavior anomalies, or bursts of activity consistent with coordinated posting – though it’s worth knowing that Fakespot, one of the most widely used such tools, was shut down by Mozilla in July 2025, and the category of independent checkers is currently in flux.
There is also a broader shift worth making in how you approach the whole exercise. The five-star system was built on an honest premise: that enough people reporting back on the same experience would tell you something true. Enough of those reports have been falsified that the premise no longer holds for a significant portion of what you’re browsing. Treating the star rating as a starting point for skepticism rather than a shortcut past it isn’t paranoia – it’s just accurate. The manipulation is real, it’s widespread, and regulators on both sides of the Atlantic are only beginning to catch up. Until enforcement is robust enough to actually change seller behavior at scale, the most reliable fraud-detection tool available is your own attention.
AI Disclaimer: This article was created with the assistance of AI tools and reviewed by a human editor.