7 Allegations Against Meta in Newly Unsealed Filings
Charlotte Alter, November 23, 2025 at 6:23 AM
Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during a Senate Judiciary Committee hearing on Jan. 31, 2024 in Washington, D.C. Credit - Kent Nishimura/Bloomberg/Getty Images
Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs' brief filed as part of a major lawsuit against four social media companies, Instagram's former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a "17x" strike policy for accounts that reportedly engaged in the "trafficking of humans for sex."
"You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended," Jayakumar reportedly testified, adding that "by any measure across the industry, [it was] a very, very high strike threshold." The plaintiffs claim that this testimony is corroborated by internal company documentation.
The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.
"Meta has designed social media products and platforms that it is aware are addictive to kids, and they're aware that those addictions lead to a whole host of serious mental health issues," says Previn Warren, the co-lead attorney for the plaintiffs in the case. "Like tobacco, this is a situation where there are dangerous products that were marketed to kids," Warren adds. "They did it anyway, because more usage meant more profits for the company."
The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs, including children and parents, school districts, and state attorneys general, have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube "relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children's mental and physical health," according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. (TIME filed a motion to intervene in the case to ensure public access to court records; the motion was denied.)
The plaintiffs' brief, first reported by TIME, purports to be based on sworn depositions of current and former Meta executives, internal communications, and company research and presentations obtained during the lawsuit's discovery process. It includes quotes and excerpts from thousands of pages of testimony and internal company documents. TIME was not able to independently view the underlying testimony or research quoted in the brief, since those documents remain under seal.
Read More: The Lawyer Suing Social Media Companies On Behalf of Kids
But the brief still paints a damning picture of the company's internal research and deliberations about issues that have long plagued its platforms. Plaintiffs claim that since 2017, Meta has aggressively pursued young users, even as its internal research suggested its social media products could be addictive and dangerous to kids. Meta employees proposed multiple ways to mitigate these harms, according to the brief, but were repeatedly blocked by executives who feared that new safety features would hamper teen engagement or user growth.
"We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture," a Meta spokesperson said in a statement to TIME. "The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens, like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens' experiences. We're proud of the progress we've made and we stand by our record."
In the years since the lawsuit was filed, Meta has implemented new safety features designed to address some of the problems described by plaintiffs. In 2024, Meta unveiled Instagram Teen Accounts, which defaults any user between 13 and 18 into an account that is automatically private, limits sensitive content, turns off notifications at night, and doesn't allow messaging from unconnected adults. "We know parents are worried about their teens having unsafe or inappropriate experiences online, and that's why we've significantly reimagined the Instagram experience for tens of millions of teens with new Teen Accounts," a Meta spokeswoman told TIME in June. "These accounts provide teens with built-in protections to automatically limit who's contacting them and the content they're seeing, and teens under 16 need a parent's permission to change those settings. We also give parents oversight over their teens' use of Instagram, with ways to see who their teens are chatting with and block them from using the app for more than 15 minutes a day, or for certain periods of time, like during school or at night."
And yet the plaintiffs' brief suggests that Meta resisted safety changes like these for years.
The brief quotes testimony from Brian Boland, Meta's former vice president of partnerships who worked at the company for 11 years and resigned in 2020. "My feeling then and my feeling now is that they don't meaningfully care about user safety," he allegedly said. "It's not something that they spend a lot of time on. It's not something they think about. And I really think they don't care."
After the plaintiffs' brief was unsealed late Friday night, Meta did not immediately respond to TIME's requests for comment.
Here are some of the most notable allegations from the plaintiffs' omnibus brief:
Allegation: Meta had a high threshold for "sex trafficking" content, and no way to report child sexual abuse content
Despite Instagram's "zero tolerance" policy for child sexual abuse material, the platform did not offer users a simple way to report child sexual abuse content, according to the brief. Plaintiffs allege that Jayakumar raised the issue multiple times when she joined Meta in 2020, but was told it would be too difficult to address. Yet Instagram allowed users to easily report far less serious violations, like "spam," "intellectual property violation," and "promotion of firearms," according to plaintiffs.
Jayakumar was even more shocked to learn that Instagram had a disturbingly high tolerance for sex trafficking on the platform. According to the brief, she testified that Meta had a "17x" strike policy for accounts that reportedly engaged in the "trafficking of humans for sex," meaning an account could accumulate 16 violations for sex trafficking and would be suspended only on the 17th.
"Meta never told parents, the public, or the Districts that it doesn't delete accounts that have engaged over fifteen times in sex trafficking," the plaintiffs wrote.
A Meta spokesperson disputed this allegation to TIME, saying the company has for years removed accounts immediately if it suspects them of human trafficking or exploitation and has made it easier over time for users to report content that violates child-exploitation policies.
Allegation: Meta "lied to Congress" about its knowledge of harms on the platform
For years, plaintiffs allege, Meta's internal research had found that teenagers who frequently use Instagram and Facebook have higher rates of anxiety and depression.
In late 2019, according to the brief, Meta designed a "deactivation study," which found that users who stopped using Facebook and Instagram for a week showed lower rates of anxiety, depression, and loneliness. Meta halted the study and did not publicly disclose the results, stating that the research was biased by the "existing media narratives around the company." (A Meta spokesperson told TIME that the study was initially conceived as a pair of one-week pilots, and researchers declined to continue it because it found that the only reductions in feelings of depression, anxiety, and loneliness were among people who already believed Facebook was bad for them.)
At least one Meta employee was uncomfortable with the implications of this decision: "If the results are bad and we don't publish and they leak," this employee wrote, according to the brief, "is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?"
Indeed, in December 2020, when the Senate Judiciary Committee asked the company in a set of written questions whether it was "able to determine whether increased use of its platform among teenage girls has any correlation with increased signs of depression" and "increased signs of anxiety," the company offered only a one-word answer: "No."
To the plaintiffs in the case, the implication is clear: "The company never publicly disclosed the results of its deactivation study. Instead, Meta lied to Congress about what it knew."
Allegation: The company knew Instagram was letting adult strangers connect with teenagers
For years Instagram has had a well-documented problem of adults harassing teens. Around 2019, company researchers recommended making all teen accounts private by default in order to prevent adult strangers from connecting with kids, according to the plaintiffs' brief. Instead of implementing this recommendation, Meta asked its growth team to study the potential impact of making all teen accounts private. The growth team was pessimistic, according to the brief, and responded that the change would likely reduce engagement.
By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram. The plaintiffs' brief quotes an unnamed employee as saying: "taking away unwanted interactions... is likely to lead to a potentially untenable problem with engagement and growth." Over the next several months, plaintiffs allege, Meta's policy, legal, communications, privacy, and well-being teams all recommended making teen accounts private by default, arguing that the switch "will increase teen safety" and was in line with expectations from users, parents, and regulators. But Meta did not launch the feature that year.
Safety researchers were dismayed, according to excerpts of an internal conversation quoted in the filing. One allegedly grumbled: "Isn't safety the whole point of this team?"
"Meta knew that placing teens into a default-private setting would have eliminated 5.4 million unwanted interactions a day," the plaintiffs wrote. Still, Meta didn't make the fix. Instead, inappropriate interactions between adults and kids on Instagram skyrocketed to 38 times the rate seen on Facebook Messenger, according to the brief. The launch of Instagram Reels allegedly compounded the problem by allowing young teenagers to broadcast short videos to a wide audience, including adult strangers.
Read More: The AG Putting Big Tech On Trial.
An internal 2022 audit allegedly found that Instagram's Accounts You May Follow feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. By 2023, according to the plaintiffs, Meta knew that it was recommending minors to potentially suspicious adults and vice versa.
It wasn't until 2024 that Meta rolled out default privacy settings to all teen accounts. In the four years it took the company to implement its own safety recommendations, teens experienced billions of unwanted interactions with strangers online. Inappropriate encounters between teens and adults were common enough, according to the brief, that the company had an acronym for them: "IIC," or "inappropriate interactions with children."
A Meta spokesperson said the company has defaulted teens under 16 to private accounts since 2021, began defaulting teens under 18 into private accounts with the introduction of its Teen Accounts program, and has taken steps to protect users from online predators.
Allegation: Meta aggressively targeted young users
Meta feared young users would abandon Facebook and Instagram for their competitors. Acquiring and keeping young users became a central business goal. Meta CEO Mark Zuckerberg suggested that "teen time spent be our top goal of 2017," according to a company executive quoted in the brief. That has remained the case, plaintiffs allege; internal company documents from 2024 stated that "acquiring new teen users is mission critical to the success of Instagram." (A Meta spokesperson said time spent on its platforms is not currently a company goal.)
Meta launched a campaign to connect with school districts and paid organizations like the National Parent Teacher Association and Scholastic to conduct outreach to schools and families. Meanwhile, according to the brief, Meta used location data to push notifications to students in "school blasts," presumably as part of an attempt to increase youth engagement during the school day. As one employee allegedly put it: "One of the things we need to optimize for is sneaking a look at your phone under your desk in the middle of Chemistry :)".
Though Meta aggressively pursued young users, it may not have known exactly how old those new users were. Whistleblower Jason Sattizahn recently testified to Congress that Meta does not reliably know the age of its users. (Meta pushed back on Sattizahn's testimony, saying in a statement to NBC that his claims were "nonsense" and "based on selectively leaked internal documents that were picked specifically to craft a false narrative.") In 2022, according to the plaintiffs' brief, there were 216 million users on Meta platforms whose age was "unknown."
Federal law requires social media platforms to observe various data-privacy safeguards for users under 13, and Meta policy states that users under 13 are not allowed on its platforms. Yet the plaintiffs' court filing claims Meta knew that children under 13 used the company's products anyway. Internal research cited in the brief suggested there were 4 million users under 13 on Instagram in 2015; by 2018, the plaintiffs claim, Meta knew that roughly 40% of children aged 9 to 12 said they used Instagram daily.
The plaintiffs allege that this was a deliberate business strategy. The brief describes a coordinated effort to acquire young users that included studying the psychology and digital behavior of "tweens" and exploring new products designed for "users as young as 5-10."
Internally, some employees expressed disgust at the attempt to target preteens. "Oh good, we're going after <13 year olds now?" one wrote, according to the brief. "Zuck has been talking about that for a while...targeting 11 year olds feels like tobacco companies a couple decades ago (and today). Like we're seriously saying 'we have to hook them young' here."
Allegation: Meta's executives initially shelved efforts to make Instagram less toxic for teens
To combat toxic "social comparison," in 2019 Instagram head Adam Mosseri announced a new product feature that would "hide" likes on posts. Meta researchers had determined that hiding likes would make users "significantly less likely to feel worse about themselves," according to the plaintiffs' brief. The initiative was code-named Project Daisy.
But after a series of tests, Meta backtracked on Project Daisy. It determined the feature was "pretty negative to FB metrics," including ad revenue, according to the plaintiffs' brief, which quotes an unnamed employee on the growth team insisting: "It's a social comparison app, fucking get used to it."
A similar debate took place over the app's beauty filters. Plaintiffs claim that an internal review concluded beauty filters exacerbated the "risk and maintenance of several mental health concerns, including body dissatisfaction, eating disorders, and body dysmorphic disorder," and that Meta knew that "children are particularly vulnerable." Meta banned beauty filters in 2019, only to roll them back out the following year after the company realized that banning beauty filters would have a "negative growth impact," according to the plaintiffs' brief.
Other company researchers allegedly built an AI "classifier" to identify content that would lead to negative appearance comparison, so that Meta could avoid recommending it to vulnerable kids. But Mosseri allegedly killed the project, disappointing developers who "felt like they had a solution" to "a big problem."
Allegation: Meta doesn't automatically remove harmful content, including self-harm content
While Meta developed AI tools to monitor the platforms for harmful content, the company didn't automatically delete that content even when it determined with "100% confidence" that it violated Meta's policies against child sexual-abuse material or eating-disorder content. Meta's AI classifiers did not automatically delete posts that glorified self-harm unless they were 94% certain they violated platform policy, according to the plaintiffs' brief. As a result, most of that content remained on the platform, where teenage users often discovered it. In a 2021 internal company survey cited by plaintiffs, more than 8% of respondents aged 13 to 15 reported having seen someone harm themselves, or threaten to do so, on Instagram during the past week.
Read More: "Everything I Learned About Suicide, I Learned On Instagram."
A Meta spokesperson said the company reports more child sexual-abuse material than any other service and uses an array of tools to proactively find that content, including photo- and video-matching technologies as well as machine learning. The spokesperson said human reviewers assess flagged content before it is deleted, to confirm that it violates policies, prevent mistakes that could affect users, and maintain the integrity of the company's detection databases.
Allegation: Meta knew its products were addictive, but publicly downplayed the harms
The addictive nature of the company's products wasn't a secret internally. "Oh my gosh yall IG is a drug," one of the company's user-experience researchers allegedly wrote to a colleague. "We're basically pushers."
Meta does not officially study addiction to its products, plaintiffs allege; it studies "problematic use." In 2018, company researchers surveyed 20,000 Facebook users in the U.S. and found that 58% had some level of "problematic use": 55% mild, and 3.1% severe. But when Meta published an account of this research the following year, only the smaller number of users with "severe" problematic use was mentioned. "We estimate (as an upper bound) that 3.1% of Facebook users in the U.S. experience problematic use," wrote the researchers. The other 55% of users are not mentioned anywhere in the public report.
Plaintiffs allege that Meta's safety team proposed features designed to lessen addiction, only to see them set aside or watered down. One employee who helped develop a "quiet mode" feature said it was shelved because Meta was concerned that this feature would negatively impact metrics related to growth and usage.
Around the same time, another user-experience researcher at Instagram allegedly recommended that Meta inform the public about its research findings: "Because our product exploits weaknesses in the human psychology to promote product engagement and time spent," the researcher wrote, Meta needed to "alert people to the effect that the product has on their brain."
Meta did not.
This story has been updated to reflect additional comments from Meta.
Write to Charlotte Alter at [email protected].