A new investigative report has revealed that Meta Platforms Inc., the parent company of Facebook, Instagram, and WhatsApp, earns an estimated $7 billion annually from advertisements linked to fraudulent or high-risk activities. The findings, based on internal Meta documents obtained by Reuters, paint a troubling picture of how scam ads have become embedded in the company’s advertising ecosystem — and how Meta’s internal policies may indirectly profit from them.
According to the documents, Meta displays as many as 15 billion “high-risk” ads per day, many of which promote fraudulent e-commerce schemes, illegal gambling platforms, and banned medical products. These ads target users globally through Facebook’s and Instagram’s algorithmic systems, often exploiting Meta’s data-driven personalization features to reach vulnerable audiences.
Internal Tolerance for Scam-Linked Revenue
The documents reviewed by Reuters indicate that Meta’s internal enforcement systems use automated detection tools to flag suspicious advertisers. However, the company reportedly bans these advertisers only when its systems are 95% certain that the ads are fraudulent. When confidence falls below that threshold, Meta does not remove the ads — instead, it imposes higher advertising fees on the suspected accounts as a penalty, effectively allowing them to continue operating while generating revenue for the company.
This practice means that Meta benefits financially from potentially fraudulent advertisers, even as users are exposed to harmful content. Once users engage with such ads, Meta’s ad-personalization algorithms often recommend similar advertisements, creating a cycle in which scam-related content continues to spread across the platform.
Scam Ads Represent Up to 10% of Meta’s Annual Revenue
Across various internal departments — including finance, safety, and government affairs — Meta employees estimated that scam and prohibited advertisements contributed roughly 10.1% of the company’s total revenue in 2024, equivalent to about $16 billion. These earnings were described internally as “violating revenue,” referring to income generated from ads that breach Meta’s internal policies or local advertising laws.
More troubling, the documents show that Meta set internal limits on how much revenue it was willing to sacrifice to combat fraudulent advertising. In early 2025, enforcement teams reportedly could not take actions that would cost the company more than 0.15% of its total revenue — about $135 million out of the $90 billion generated in the first half of the year. This restriction effectively capped the company’s willingness to remove scam ads that could hurt its financial performance.
Meta Responds: “Data Misrepresented”
In response to Reuters’ findings, Meta spokesperson Andy Stone rejected the characterization of the documents, stating that they “present a selective and misleading view” of the company’s internal processes. Stone explained that the 10.1% figure was a “rough and overly inclusive” estimate that also captured “many legitimate ads.”
“We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either,” Stone said. He added that Meta has made “substantial progress,” claiming that user reports of scam ads have fallen by 58% over the past 18 months and that the company removed 134 million scam-related ads globally in 2025 alone.
Despite these efforts, Reuters reports that Meta’s own internal assessments contradict its public statements. A May 2025 internal safety division report found that Meta platforms were implicated in one-third of all successful scams in the United States, while another analysis concluded that “it is easier to advertise scams on Meta than on Google.”
Regulators Investigate Meta’s Role in Global Scam Epidemic
Meta’s handling of scam ads has drawn scrutiny from regulators across several jurisdictions. The U.S. Securities and Exchange Commission (SEC) is reportedly investigating Meta’s advertising operations and its potential role in facilitating large-scale online financial scams.
In the United Kingdom, a 2023 report by the Financial Conduct Authority (FCA) found that Meta’s platforms accounted for 54% of all payments-related scam losses in the country — more than all other social media platforms combined.
Experts argue that Meta’s advertising model, which prioritizes engagement and personalization, has inadvertently become fertile ground for fraudsters. Once a user clicks on or interacts with a scam ad, Meta’s algorithms serve them more of the same kind of content, amplifying both exposure and risk.
Meta’s Broader Enforcement Actions
In a separate update released earlier this year, Meta announced that it had taken enforcement action against about 500,000 accounts involved in spam or fake engagement behavior during the first half of 2025. The company also removed around 10 million fake profiles, many of which were impersonating well-known content creators.
These actions were part of Meta’s broader initiative to improve “feed integrity” and promote authentic user content. However, the company’s update did not address enforcement measures specifically related to scam ads, leaving questions about how aggressively Meta intends to confront the issue moving forward.
A Growing Ethical Dilemma
The revelations add to growing criticism of Meta’s business practices and the ethical implications of its advertising algorithms. As digital fraud surges worldwide, consumer advocates and lawmakers are urging Meta to adopt stricter ad verification systems and greater transparency in its revenue sources.
While Meta insists that it is committed to protecting users and cleaning up its platforms, the internal documents suggest a fundamental tension between profit maximization and user safety — one that regulators may now seek to resolve through tighter oversight and new accountability frameworks.