
Meta, TikTok Put Engagement Ahead Of Safety, Exposing Users To Harmful Content, Say Whistleblowers

Whistleblowers say Meta and TikTok amplified harmful content and sidelined safety concerns to boost engagement and protect business interests.

More than a dozen whistleblowers and insiders have accused Meta and TikTok of making decisions that exposed users to harmful content, even as internal research showed that outrage and other toxic material fuelled engagement on their platforms.

Speaking to the BBC documentary Inside the Rage Machine, the whistleblowers described how the race to compete for users’ attention pushed both companies to take risks on safety, including around violence, sexual blackmail, terrorism, cyberbullying and extremist material.

A former engineer at Meta, identified only as Tim, said senior management instructed staff to allow more “borderline” harmful content in users’ feeds as the company battled to keep pace with TikTok.

“They sort of told us that it’s because the stock price is down,” Tim said.

He said the change marked a sharp departure from earlier efforts to reduce harmful but legal content, including misogyny, conspiracy theories and other inflammatory posts, as Meta sought to recover lost ground in the social media market.

According to Tim, the decision to stop limiting such content was taken by a senior Meta vice-president whom he believed reported directly to Chief Executive Mark Zuckerberg.

“You’re losing to TikTok and therefore your stock price must suffer. People started becoming paranoid and reactive and they were like, let’s just do whatever we can to catch up. Where can we get like 2%, 3% revenue for the next quarter?” he said.

The allegations are echoed by former senior Meta researcher Matt Motyl, who told the BBC that Instagram Reels was launched in 2020 without adequate safeguards, despite internal evidence that the product carried heightened risks.

Motyl said his work between 2019 and 2023 involved “running large-scale experiments on sometimes as many as hundreds of millions of people” who often had “no idea” such tests were taking place.

“Meta’s products are used by north of three billion people and the more time they can keep you on there, the more ads they sell, the more money they make. But it’s very important that they get this stuff right, because when they don’t, really bad things happen,” he said.

Internal research documents shared with the BBC indicated that comments on Reels had significantly higher rates of harmful content than elsewhere on Instagram, including bullying and harassment, hate speech, and violence or incitement.

Motyl said there was a “common trade-off between protecting people from harmful content and engagement” and argued that safety teams were often overruled by product teams focused on growth.

He said there was also a “power imbalance” inside the company because safety staff needed approval from Reels teams before user protection features could be launched.

“The Reels staff had incentives to not let those products launch because toxic stuff gets more engagement than non-toxic,” Motyl said.

Another former Meta insider, Brandon Silverman, said the company’s leadership, especially Zuckerberg, was deeply focused on competitive threats.

“When he feels like there are potential competitive forces there’s no amount of money that is too much,” Silverman said.

Silverman said he saw safety teams struggle to secure approval for relatively small staffing increases, even as Meta heavily expanded resources for Reels.

“There was another team that went, oh, we just got 700 for Instagram Reels. I was like, OK yeah,” he said.

The BBC also cited internal Meta documents showing the company understood that its algorithms tended to reward anger-inducing and divisive content because such posts generated disproportionate engagement.

One internal study said the platform offered creators a “path that maximizes profits at the expense of their audience’s wellbeing” and warned that “the current set of financial incentives our algorithms create does not appear to be aligned with our mission” of bringing the world closer together.

The same document added that Facebook could “choose to be idle and keep feeding users fast-food, but that only works for so long”.

Silverman said Meta initially appeared reflective about the harms caused by toxic content, but that approach later hardened.

“Nobody’s saying you’re responsible for all polarisation. We’re just saying you contribute to it, and probably in ways where you don’t have to. If you just made a few changes, you might not contribute to it as much,” he said.

At TikTok, a whistleblower identified as Nick told the BBC that company decisions were sometimes made not on the basis of user risk, but to preserve political relationships and avoid regulation.

Decisions were being taken to maintain a “strong relationship” with political figures, he said, rather than because of the actual danger posed to users.

Nick gave the BBC access to internal dashboards showing how his trust and safety team handled complaints, including the prioritisation of some cases involving politicians over reports of harm involving children and teenagers.

“If you’re feeling guilty on a daily basis because of what you’re instructed to do, at some point you can decide, should I say something?” Nick said.

He said the volume of cases facing moderators made it difficult to keep users safe, especially children, and argued that staff cuts and team restructuring, including the replacement of some roles with artificial intelligence, had weakened the company’s response.

“Material linked to terrorism, sexual violence, physical violence, abuse, trafficking appears to be increasing,” he said.

“The reality of what the app recommends, and the action taken against harmful content, is very different in a lot of aspects to what the sites are saying” publicly, he added.

Nick pointed to one example in which a complaint about a politician being mocked as a chicken was treated as a higher-priority case than reports involving a 17-year-old cyberbullying victim in France and a 16-year-old in Iraq who said sexualised images falsely claiming to depict her were being circulated on the app.

Referring to the Iraq case, he said: “If you look at the country where this report comes from, it’s very high risk because it’s a minor and it involves sexual blackmail and then you can see the priority here. The urgency is not high.”

He also said some posts encouraging people to join terror groups or commit crimes were not classified as top priority.

When trust and safety staff asked for cases involving minors to be prioritised above political complaints, Nick said management refused.

He argued that the company cared less about children’s safety than about maintaining a “strong relationship” with politicians and governments to avoid bans or regulation that could damage its business.

Nick said managers were often detached from the realities facing moderation teams.

“They are not being exposed to this content on a day-to-day basis,” he said.

His advice to parents was blunt: “Delete it, keep them as far away as possible from the app for as long as possible.”

Ruofan Ding, a former machine-learning engineer who worked on TikTok’s recommendation engine between 2020 and 2024, also described the platform’s algorithm as opaque and difficult to fully control.

“The algorithms are a black box,” Ding said, adding, “We have no control of the deep-learning algorithm in itself.”

He explained that engineers working on recommendations often treated content simply as data points.

“To us, all the content is just an ID, a different number,” he said.

Ding said his team relied on content safety units to remove harmful posts before they could be amplified by the recommendation system.

“There’s the team that are responsible for the acceleration, the engine, right? So we expect the team working on the braking system was doing a good job,” he said.

But as TikTok updated its systems almost weekly in pursuit of greater market share, he said he began to notice more “borderline” and problematic content surfacing after users had spent time browsing videos.

Teenagers interviewed said platform tools meant to signal they did not want to see certain kinds of posts were ineffective, and that they continued to receive recommendations for violent and hateful material.

One teenager, Calum, now 19, said he was “radicalised by algorithm” from the age of 14 after repeated exposure to racist and misogynistic content.

“The videos energised me, but not really in a good way,” he said. “They just made me very kind of angry. It very much reflected the way I felt internally, that I was angry at the people around me.”

UK counter-terrorism police specialists who monitor thousands of social media posts each year also told the BBC they had seen the “normalisation” of antisemitic, racist, violent and far-right content in recent months.

“People are more desensitised to real-world violence and they are not afraid to share their views,” one officer said.

In response, Meta rejected allegations that it knowingly amplified harmful material for profit.

“Any suggestion that we deliberately amplify harmful content for financial gain is wrong,” the company said.

A Meta spokesperson added: “The truth is, we have strict policies to protect users on our platforms and have made significant investments in safety and security over the last decade.”

The company also said it had “made real changes to protect teens online,” including Teen Accounts with “built-in protections and tools for parents to manage their teens’ experiences”.

TikTok also dismissed the allegations, calling them “fabricated claims”.

The company said it rejected the suggestion that political content was prioritised over the safety of young people and argued that the accusation “fundamentally misrepresents the way their moderation systems operate”.

TikTok said specialist workflows for some issues did not lead to the deprioritisation of child safety cases, which it said were handled by dedicated teams in parallel review structures.

A spokesperson added that the criticism “ignore[s] the reality of how TikTok enables millions to discover new interests, find community, and supports a thriving creator economy”.

The company also said teenage accounts are protected by more than 50 preset safety features and settings, and that it invests in technology to stop harmful content from being viewed, maintains strict recommendation policies and offers users tools to tailor their experience.

Boluwatife Enome 
