According to Neowin, newly unredacted court filings reveal that Meta allegedly conducted internal research, known as Project Mercury, roughly five years ago, which found that people who stopped using Facebook and Instagram for just one week reported lower levels of depression, anxiety, loneliness, and social comparison. The documents claim Meta shut down the research when it discovered this causal evidence, internally dismissing the findings as “tainted” by media narratives about social media harms. Staff reportedly warned leadership about the causal impact and even compared Meta’s actions to tobacco companies hiding health risks. The filings suggest Meta misled Congress by claiming it couldn’t quantify harm to teenage girls while allegedly knowing the exact impacts. They also claim Meta designed ineffective youth safety features, maintained a policy requiring 17 violations before removing sex trafficking accounts, and that CEO Mark Zuckerberg in 2021 prioritized metaverse development over child safety funding. A hearing is scheduled for January 26, 2026, and Meta has moved to strike these documents from the case.
The big tobacco parallels are hard to ignore
When internal researchers start comparing your company to cigarette manufacturers, you’ve got a serious problem. The Project Mercury situation reads like a corporate ethics nightmare, complete with the classic “kill the research when results look bad” playbook. Meta’s defense that the methodology was flawed feels convenient, doesn’t it? Especially when staff reportedly confirmed the causal relationship between platform use and declining mental health.
Business interests over user safety
Here’s the thing that really stands out: the allegations suggest Meta knew exactly how to make its platforms safer but deliberately chose not to. The claims about designing safety features to be ineffective because they might hurt growth? That’s brutal. And the reported policy of tolerating 17 violations before removing sex trafficking accounts, if true, is genuinely shocking. It paints a picture of a company that prioritized engagement and revenue above everything else, even when it came to protecting vulnerable users.
The congressional testimony question
Remember when Meta executives testified before Congress about teen safety? If these documents are accurate, those executives may have seriously misled lawmakers. Claiming you can’t quantify harm while allegedly sitting on research that does exactly that? That’s the kind of thing that could lead to perjury charges. It makes you wonder how much other internal research exists that we haven’t seen yet.
What happens now?
With the hearing not until January 2026, we’re looking at a long legal battle ahead. Meta’s motion to strike these documents suggests it’s taking the allegations very seriously. But the timing is notable: this lands just as multiple states are passing laws to protect kids online and as the broader tech backlash continues. Basically, it couldn’t come at a worse time for Meta’s reputation. Whether these specific allegations hold up in court or not, they reinforce the growing public perception that social media companies know their products can be harmful but aren’t doing enough about it.
