Meta, the parent company of Facebook, has intensified its crackdown on misinformation and fake engagement, removing about 10 million profiles that impersonated large content producers and taking action on roughly 500,000 spam accounts in the first half of 2025.

The tech giant announced the sweeping action in a blog post on Monday, describing it as part of its ongoing strategy to promote originality, protect creators, and strengthen user trust across its platforms.
Meta said the latest purge focused on impersonation accounts, copycat content creators, and spam networks designed to manipulate engagement metrics. According to the company, these accounts distort Facebook’s algorithm, deprive genuine creators of visibility, and threaten the credibility of the platform.
“We’re making progress. In the first half of 2025, we took action on around 500,000 accounts engaged in spammy behaviour or fake engagement. We also removed about 10 million profiles impersonating large content producers,” Meta stated.
The company emphasized that accounts that recycle content without permission or make only minimal edits, such as stitching clips together or adding watermarks, will face reduced reach in Facebook feeds and could lose access to monetisation tools.
In a bid to reward originality, Meta has rolled out post-level insights on its Professional Dashboard to help creators track performance and understand how their content ranks.

The update also includes a Support Home screen, where creators can check whether they are at risk of penalties, such as reduced reach or restrictions on earning revenue from posts.
Furthermore, Meta is introducing attribution tools to link reposted content to its original creator, ensuring that authentic voices receive proper recognition and wider distribution.
“Pages and profiles that post mostly original content tend to enjoy wider distribution across Facebook. Content that provides real value and tells an authentic story is likely to perform better,” the company added.
Meta issued a stern warning to users posting watermarked content from other platforms, stating that such behaviour violates its new content guidelines. Profiles caught repeatedly sharing such material risk losing monetisation eligibility or having their reach drastically reduced.
The company says the changes are designed to encourage creativity, storytelling, and ethical content-sharing practices, which are key to maintaining user trust.
Meta’s announcement comes as other major tech platforms also move to safeguard content quality. Google’s YouTube recently updated its monetisation policies, prohibiting mass-produced or overly repetitive content from earning ad revenue.
The policy update initially sparked concerns about restrictions on AI-generated content, but YouTube clarified that creators using AI to enhance storytelling remain eligible to monetise their work.
Both Meta and YouTube stress that these measures are part of broader efforts to protect creators, combat fake engagement, and sustain a healthy digital economy.
Industry experts view Meta’s sweeping crackdown as a crucial step in restoring user confidence in social media platforms, which have been plagued by fake accounts, misinformation, and content theft.
With billions of users depending on platforms like Facebook for news, entertainment, and commerce, the crackdown is expected to level the playing field for genuine creators while discouraging unethical content practices.