Cyber fraudsters are increasingly turning to new-age technology – artificial intelligence (AI) – to ramp up their attacks, with users now falling victim to fake app reviews that have flooded mobile and smart television app stores, a new report shows.
The report, published by digital media analytics firm DoubleVerify, shows that scammers are using AI to create fake reviews in app stores, including giving five-star ratings to targeted apps to artificially boost their credibility and lead people to download potentially dangerous or fraudulent apps.
“DoubleVerify’s Fraud Lab has seen a surge in applications with AI-powered fake reviews in 2024, identifying more than three times the number compared to the same period in 2023,” the company said.
The report shows that users who download these apps are often bombarded with out-of-context ads, much like websites created solely to display advertising.
This approach, DoubleVerify notes, disrupts the user experience and undermines app retention in the long run, as frustrated users end up uninstalling the rogue apps.
“The dangers of fake app reviews go beyond just being annoying. Some fraudulent apps can hijack your device, running ads endlessly even when your device appears to be off.
“This can lead to severe problems like battery drain, device overheating, and excessive data usage, resulting in significant costs for users,” the report said.
“These apps can be difficult to remove, and some even cause the device to malfunction or shut down unexpectedly.”
Fake reviews have also been found to trick advertisers into running marketing campaigns on seemingly popular apps, with a dramatic impact on ad budgets, especially in the streaming space where the cost per thousand impressions (CPM) often runs from $35 (Sh4,500) to $65 (Sh8,400).
DoubleVerify research shows that bot-based app fraud is costing unsuspecting advertisers millions of dollars every year, with a new onslaught of AI-generated fake reviews poised to increase losses.
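To put those figures in perspective, a rough back-of-the-envelope calculation shows how quickly fraudulent impressions add up at streaming CPMs; the impression volume below is an assumed example for illustration, not a figure from the report.

```python
# Rough illustration of how streaming CPMs translate into ad spend.
# The impression volume is an assumption, not a figure from the report.
fraudulent_impressions = 10_000_000   # assumed monthly bot-driven impressions
for cpm in (35, 65):                  # CPM range cited in the report, in US dollars
    cost = fraudulent_impressions / 1_000 * cpm
    print(f"At a ${cpm} CPM, {fraudulent_impressions:,} impressions cost ${cost:,.0f}")
```

At those assumed volumes the spend works out to roughly $350,000 to $650,000 a month, which is consistent with the report's claim that losses run into millions of dollars a year.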
Apps specifically flagged for having the most AI-generated reviews include a Fire TV app called ‘Wotcho TV’, another called ‘My AI Chatbot’, as well as ‘Brain E-Books’ on the Google Play Store.
DoubleVerify also found that malicious apps that host audio content rely heavily on AI-generated ratings.
In this case, advertisers pay a premium for audio ads, and the scheme relies on making the app appear legitimate to users and advertisers.
Once downloaded, the app installs malware that simulates audio playback or plays audio in the background of the user’s device without their knowledge, allowing the app creators to charge advertisers for fake listens.
To identify fake reviews, the report says users should look out for several key indicators, including uniform syntax and style across different reviewers, unusual formatting such as capitalization of the first letter of every word, and consistent five-star ratings even when paired with critical comments such as ‘not yet’.
The company further warns users to be wary of situations where every rating is accompanied by a written review even though this is not a requirement, as well as of reviewers who are active only within that particular ecosystem.
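As an illustration of how such signals might be checked in practice, the short Python sketch below flags reviews that match some of the indicators described in the report. The field names, keywords and thresholds are assumptions made for illustration, not DoubleVerify's actual detection methodology, which combines proprietary technology with human analysis.

```python
# Illustrative sketch only: simple heuristic checks inspired by the
# indicators described in the report. Field names, keywords and
# thresholds are assumptions, not DoubleVerify's methodology.
from collections import Counter

NEGATIVE_HINTS = {"not", "crash", "broken", "bad", "doesn't", "won't"}


def is_title_cased(text: str) -> bool:
    """Every word starts with a capital letter, an unusual style for organic reviews."""
    words = [w for w in text.split() if w[0].isalpha()]
    return bool(words) and all(w[0].isupper() for w in words)


def flag_suspicious_reviews(reviews: list[dict]) -> list[dict]:
    """Return reviews that trip at least one heuristic signal."""
    # Count duplicate phrasing across different reviewers.
    text_counts = Counter(r["text"].strip().lower() for r in reviews)
    flagged = []
    for r in reviews:
        reasons = []
        if text_counts[r["text"].strip().lower()] > 1:
            reasons.append("identical wording used by multiple reviewers")
        if is_title_cased(r["text"]):
            reasons.append("every word capitalized")
        if r["rating"] == 5 and NEGATIVE_HINTS & set(r["text"].lower().split()):
            reasons.append("five-star rating paired with critical wording")
        if reasons:
            flagged.append({**r, "reasons": reasons})
    return flagged


if __name__ == "__main__":
    sample = [
        {"user": "a", "rating": 5, "text": "Great App Works Perfectly Every Time"},
        {"user": "b", "rating": 5, "text": "does not work yet"},
        {"user": "c", "rating": 4, "text": "Great App Works Perfectly Every Time"},
    ]
    for hit in flag_suspicious_reviews(sample):
        print(hit["user"], hit["reasons"])
```

Simple heuristics like these can surface obvious patterns, but, as the report notes, reliable detection also depends on signals that individual users cannot see, such as reviewer activity across an entire app ecosystem.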
“AI-generated app reviews occur in all types of app stores, from mobile to streaming services. Our analysis of streaming apps on popular smart TV platforms, for example, shows that 50 percent of these reviews are fake.
“This conclusion was reached using a combination of proprietary technology and human analysis to identify signals of manipulation,” the report states.
To stay protected from app fraud, DoubleVerify advises all stakeholders to take sophisticated measures to counter the negative impact of AI-generated content on app stores.
This, the firm said, includes developing more advanced verification processes and maintaining strong awareness of fraudsters’ evolving tactics.
“Only through continued innovation and rigorous oversight can we ensure that the benefits of AI in content creation outweigh the potential for misuse.”