Child exploitation, grooming, and social media addiction claims put Meta on trial

12 February 2026 at 07:35

Meta is facing two trials over child safety allegations, one in California and one in New Mexico. These are landmark cases: although more than 40 state attorneys general have filed suits over child safety on social media, none had reached a jury until now.

The New Mexico case, filed by Attorney General Raúl Torrez in December 2023, centers on child sexual exploitation. Torrez’s team built their evidence by posing as children online and documenting the sexual solicitations that followed. The team brought the suit under New Mexico’s Unfair Trade Practices Act, a consumer protection statute that prosecutors argue sidesteps Section 230 protections.

The most damaging material in the trial, which is expected to run seven weeks, may be Meta’s own paperwork. Newly unsealed internal documents revealed that a company safety researcher had warned about the sheer scale of the problem, estimating that around half a million cases of child exploitation occur daily. Torrez did not mince words about what he believes the platform has become, calling it an online marketplace for human trafficking. From the complaint:

“Meta’s platforms Facebook and Instagram are a breeding ground for predators who target children for human trafficking, the distribution of sexual images, grooming, and solicitation.”

The complaint’s emphasis on weak age verification touches on a broader issue regulators around the world are now grappling with: how platforms verify the age of their youngest users—and how easily those systems can be bypassed.

In our own research into children’s social media accounts, we found that creating underage profiles can be surprisingly straightforward. In some cases, minimal checks or self-declared birthdates were enough to access full accounts. We also identified loopholes that could allow children to encounter content they shouldn’t or make it easier for adults with bad intentions to find them.

The social media and VR giant has pushed back hard, calling the state’s investigation ethically compromised and accusing prosecutors of cherry-picking data. Defence attorney Kevin Huff argued that the company disclosed its risks rather than concealing them.

Yesterday, Stanford psychiatrist Dr. Anna Lembke told the court she believes Meta’s design features are addictive and that the company has been using the term “Problematic Internet Use” internally to avoid acknowledging addiction.

Meanwhile in Los Angeles, a separate bellwether case against Meta and Google opened on Monday. At its center is a 20-year-old woman identified only as KGM, who alleges that YouTube and Instagram hooked her from childhood. She testified that she was watching YouTube at six, was on Instagram by nine, and suffered worsening depression and body dysmorphia. Her case, the first of more than 2,400 personal injury filings consolidated in the proceeding, continues against Meta and Google after TikTok and Snap settled before trial. Plaintiffs’ attorney Mark Lanier called it a case about:

“two of the richest corporations in history, who have engineered addiction in children’s brains.”

A litany of allegations

None of this appeared from nowhere. In 2021, whistleblower Frances Haugen leaked internal Facebook documents showing the company knew its platforms damaged teenage mental health. In 2023, Meta whistleblower Arturo Béjar testified before the Senate that the company ignored sexual endangerment of children.

Unredacted documents unsealed in the New Mexico case in early 2024 suggested something uglier still: that the company had actively marketed messaging platforms to children while suppressing safety features that weren’t considered profitable. Employees sounded alarms for years, but executives reportedly chose growth, according to Torrez. Last September, whistleblowers said the company had also ignored child sexual abuse in virtual reality environments.

Outside the courtroom, governments around the world are moving faster than the US Congress. Australia banned under-16s from social media in December 2025, becoming the first country to do so. France’s National Assembly followed, approving a ban on social media for under-15s in January by 130 votes to 21. Spain announced its own under-16 ban this month. By last count, at least 15 European governments were considering similar measures. Whether any of these bans will actually work is uncertain, particularly as young users openly discuss ways to bypass controls.

The United States, by contrast, has passed exactly one major federal child online safety law: the Children’s Online Privacy Protection Act (COPPA), in 1998. The Kids Online Safety Act (KOSA), introduced in 2022, passed the Senate 91-3 in mid-2024, then stalled in the House. It was reintroduced last May and has yet to reach a floor vote. States have tried to fill the gap, with 18 proposing similar legislation in 2025, but only one of those bills was enacted (in Nebraska). A comprehensive federal framework remains nowhere in sight.

On its most recent earnings call, Meta acknowledged it could face material financial losses this year. The pressure is no longer theoretical. The juries in Santa Fe and Los Angeles will now weigh whether the company’s design choices and safety measures crossed legal lines.

If you want to understand how social media platforms can expose children to harmful content—and what parents can realistically do about it—check out our research project on social media safety.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.


How safe are kids using social media? We did the groundwork

10 February 2026 at 08:50

When researchers created an account for a child under 13 on Roblox, they expected heavy guardrails. Instead, they found that the platform’s search features still allowed kids to discover communities linked to fraud and other illicit activity.

The discoveries spotlight the question that lawmakers around the world are circling: how do you keep kids safe online?

Australia has already acted, while the UK, France, and Canada are actively debating tighter rules around children’s use of social media. This month, US Senator Ted Cruz reintroduced a bill along the same lines while also chairing a congressional hearing on children’s online safety.

Lawmakers have said these efforts are to keep kids safe online. But as the regulatory tide rises, we wanted to understand what digital safety for children actually looks like in practice.

So, we asked a specialist research team to explore how well a dozen mainstream tech providers are protecting children aged under 13 online.

We found that most services work well when kids use the accounts and settings designed for them. But when children are curious, use the wrong account type, or step outside those boundaries, things can go sideways quickly.

Over several weeks in December, the research team explored how platforms from Discord to YouTube handled children’s online use. They relied on standard user behavior rather than exploits or technical tricks to reflect what a child could realistically encounter.

The researchers focused on how platforms catered to kids through specific account types, how age restrictions were enforced in practice, and whether sensitive content was discoverable through normal browsing or search.

What emerged was a consistent pattern: curious kids who poke around a little, or who end up using the wrong account type, can run into inappropriate content with surprisingly little effort.

A detailed breakdown of the platforms tested, account types used, and where sensitive content was discovered appears in the research scope and methodology section at the end of this article.

When kids’ accounts are opt-in

One thing the team tried was to simply access the generic public version of a site rather than the kid-protected area.

This was a particular problem with YouTube. The company runs a kid-specific service called YouTube Kids, which the researchers said is effectively sanitized of inappropriate content (it sounds like things have changed since 2022).

The issue is that YouTube’s regular public site isn’t sanitized, and even though the company says you must be at least 13 to use the service unless ‘enabled’ by a parent, in reality anyone can access it. From the report:

“Some of the content will require signing in (for age verification) prior the viewing, but the minor can access the streaming service as a ‘Guest’ user without logging in, bypassing any filtering that would otherwise apply to a registered child account.”

That opens up a range of inappropriate material, from “how-to” fraud channels through to scenes of semi-nudity and sexually suggestive material, the researchers said. Horrifically, they even found scenes of human execution on the public site. The researchers concluded:

“The absence of a registration barrier on the public platform renders the ‘YouTube Kids’ protection opt-in rather than mandatory.”

When adult accounts are easy to fake

Another worry is that even when accounts are age-gated, enterprising minors can easily get around them. While most platforms require users to be 13+, a self-declaration is often enough. All that remains is for the child to register an email address with a service that doesn’t require age verification.

This “double blind” vulnerability (a self-declared age backed by an email address that itself requires no verification) is a big problem. Kids are good at creating accounts. The tech industry has taught them to be, because they need them for most things they touch online, from streaming to school.

When they do get past the age gates, curious kids can quickly reach inappropriate material. Researchers found unmoderated nudity and explicit material on the chat platform Discord, along with TikTok content offering credit card fraud and identity theft tutorials. A little searching on the streaming site Twitch surfaced ads for escort services.

This points to a trade-off between privacy and age verification. While stricter age verification could close some of these gaps, it requires collecting more personal data, including IDs or biometric information. That creates privacy risks of its own, especially for children. That’s why most platforms rely on self-declared age, but the research shows how easily that can be bypassed.
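To see why self-declaration offers so little protection, here’s a minimal sketch (hypothetical code, not any platform’s actual sign-up flow) of what a birthdate-only age gate amounts to:

```kotlin
import java.time.LocalDate
import java.time.Period

// Hypothetical sign-up check: the platform only ever sees the birthdate the user types in.
data class SignUpRequest(val email: String, val declaredBirthDate: LocalDate)

fun meetsMinimumAge(request: SignUpRequest, minimumAge: Int = 13): Boolean {
    // Nothing ties the declared date to a real person or document, so a child
    // who types an earlier year passes exactly the same check as an adult.
    return Period.between(request.declaredBirthDate, LocalDate.now()).years >= minimumAge
}

fun main() {
    // A 10-year-old entering a birth year 20 years in the past clears the gate.
    val faked = SignUpRequest("kid@example.com", LocalDate.now().minusYears(20))
    println(meetsMinimumAge(faked)) // prints: true
}
```

Anything stronger than this means collecting documents or biometrics, which is exactly the privacy trade-off described above.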

When kids’ accounts let toxic content through

Cracks in the moderation foundations can let risky content through. Roblox, the website and app where users build their own content, filters chats for child accounts, but it also features “Communities”: groups designed for socializing and discovery.

These groups are easily searchable, and some use names and terminology commonly linked to criminal activities, including fraud and identity theft. One, called “Fullz,” uses a term widely understood to refer to stolen personal information, and “new clothes” is often used to refer to a new batch of stolen payment card data. The visible community may serve as a gateway, while the actual coordination of illicit activity or data trading occurs via “inner chatter” between the community members.
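As a hypothetical illustration (not Roblox’s actual moderation system), a simple block list tuned for obvious terms does nothing against carder slang or innocuous-sounding euphemisms:

```kotlin
// Hypothetical filter: a block list built around obvious terms misses community
// names that rely on slang such as "fullz" or euphemisms like "new clothes".
val naiveBlockList = setOf("scam", "fraud", "porn")

fun looksSafe(communityName: String): Boolean =
    naiveBlockList.none { banned -> communityName.contains(banned, ignoreCase = true) }

fun main() {
    listOf("Fullz Marketplace", "new clothes daily", "Anti-Fraud Help").forEach { name ->
        println("$name -> ${if (looksSafe(name)) "passes" else "blocked"}")
    }
    // Only the name containing "fraud" is caught, and it's the harmless one;
    // the slang-based names sail through.
}
```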

This kind of search wasn’t just an issue for Roblox, warned the team. It found Instagram profiles promoting financial fraud and crypto schemes, even from a restricted teen account.

Some sites passed the team’s tests admirably, though. The researchers simulated underage users who’d bypassed age verification, but were unable to find any harmful content on Minecraft, Snapchat, Spotify, or Fortnite. Fortnite’s approach is especially strict, disabling chat and purchases on accounts for kids under 13 until a parent verifies via email; additional verification steps can involve a Social Security number or credit card. Kids can still play, but they’re muted.

What parents can do

No platform can catch everything, especially when kids are curious. That makes parental involvement the most important layer of protection.

One reason this matters is a related risk worth acknowledging: adults attempting to reach children through social platforms. Even after Instagram took steps to limit contact between adult and child accounts, parents still discovered loopholes. This isn’t a failure of one platform so much as a reminder that no set of controls can replace awareness and involvement.

Mark Beare, GM of Consumer at Malwarebytes, says:

“Parents are navigating a fast-moving digital world where offline consequences are quickly felt, be it spoofed accounts, deepfake content or lost funds. Safeguards exist and are encouraged, but children can still be exposed to harmful content.”

This doesn’t mean banning children from the internet. As the EFF points out, many minors use online services productively with the support and supervision of their parents. But it does mean being intentional about how accounts are set up, how children interact with others online, and how comfortable they feel asking for help.

Accounts and settings

  • Use child or teen accounts where available, and avoid defaulting to adult accounts.
  • Keep friends and followers lists set to private.
  • Avoid using real names, birthdays, or other identifying details unless they are strictly required.
  • Avoid facial recognition features for children’s accounts.
  • For teens, be aware of “spam” or secondary accounts they’ve set up that may have looser settings.

Social behavior

  • Talk to your child about who they interact with online and what kinds of conversations are appropriate.
  • Warn them about strangers in comments, group chats, and direct messages.
  • Encourage them to leave spaces that make them uncomfortable, even if they didn’t do anything wrong.
  • Remind them that not everyone online is who they claim to be.

Trust and communication

  • Keep conversations about online activity open and ongoing, not one-off warnings.
  • Make it clear that your child can come to you if something goes wrong without fear of punishment or blame.
  • Involve other trusted adults, such as teachers, caregivers, or relatives, so kids aren’t navigating online spaces alone.

This kind of long-term involvement helps children make better decisions over time. It also reduces the risk that mistakes made today can follow them into the future, when personal information, images, or conversations could be reused in ways they never intended.


Research findings, scope and methodology 

This research examined how children under the age of 13 may be exposed to sensitive content when browsing mainstream media and gaming services. 

For this study, a “kid” was defined as an individual under 13, in line with the Children’s Online Privacy Protection Act (COPPA). Research was conducted between December 1 and December 17, 2025, using US-based accounts. 

The research relied exclusively on standard user behavior and passive observation. No exploits, hacks, or manipulative techniques were used to force access to data or content. 

Researchers tested a range of account types depending on what each platform offered, including dedicated child accounts, teen or restricted accounts, adult accounts created through age self-declaration, and, where applicable, public or guest access without registration. 

The study assessed how platforms enforced age requirements, how easy it was to misrepresent age during onboarding, and whether sensitive or illicit content could be discovered through normal browsing, searching, or exploration. 

Across all platforms tested, default algorithmic content and advertisements were initially benign and policy-compliant. Where sensitive content was found, it was accessed through intentional, curiosity-driven behavior rather than passive recommendations. No proactive outreach from other users was observed during the research period. 

The table below summarizes the platforms tested, the account types used, and whether sensitive content was discoverable during testing. 

| Platform | Account type tested | Dedicated kid/teen account | Age gate easy to bypass | Illicit content discovered | Notes |
|---|---|---|---|---|---|
| YouTube (public) | No registration (guest) | Yes (YouTube Kids) | N/A | Yes | Public YouTube allowed access to scam/fraud content and violent footage without sign-in. Age-restricted videos required login, but much content did not. |
| YouTube Kids | Kid account | Yes | N/A | No | Separate app with its own algorithmic wall. No harmful content surfaced. |
| Roblox | All-age account (13+) | No | Not required | Yes | Child accounts could search for and find communities linked to cybercrime and fraud-related keywords. |
| Instagram | Teen account (13–17) | No | Not required | Yes | Restricted accounts still surfaced profiles promoting fraud and cryptocurrency schemes via search. |
| TikTok | Younger user account (13+) | Yes | Not required | No | View-only experience with no free search. No harmful content surfaced. |
| TikTok | Adult account | No | Yes | Yes | Search surfaced credit card fraud–related profiles and tutorials after age gate bypass. |
| Discord | Adult account | No | Yes | Yes | Public servers surfaced explicit adult content when searched directly. No proactive contact observed. |
| Twitch | Adult account | No | Yes | Yes | Discovered escort service promotions and adult content, some behind paywalls. |
| Fortnite | Cabined (restricted) account (13+) | Yes | Hard to bypass | No | Chat and purchases disabled until parent verification. No harmful content found. |
| Snapchat | Adult account | No | Yes | No | No sensitive content surfaced during testing. |
| Spotify | Adult account | Yes | Yes | No | Explicit lyrics labeled. No harmful content found. |
| Messenger Kids | Kid account | Yes | Not required | No | Fully parent-controlled environment. No search or external contacts. |

Screenshots from the research

  • List of Roblox communities with cybercrime-oriented keywords
  • Roblox community that offers chat without verification
  • Roblox community with cybercrime-oriented keywords
  • Graphic content on publicly accessible YouTube
  • Credit card fraud content on publicly accessible YouTube
  • Active escort page on Twitch
  • Stolen credit cards for sale on an Instagram teen account
  • Carding for beginners content on an Instagram teen account
  • Crypto investment scheme on an Instagram teen account
  • Carding for beginners content on a TikTok adult account, accessed by kids with a fake date of birth


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Google will pay $8.25m to settle child data-tracking allegations

20 January 2026 at 06:40

Google has settled yet another class-action lawsuit accusing it of collecting children’s data and using it to target them with advertising. The tech giant will pay $8.25 million to address allegations that it tracked data on apps specifically designated for kids.

AdMob’s mobile data collection

This settlement stems from accusations that apps provided under Google’s “Designed for Families” programme, which was meant to help parents find safe apps, tracked children. Under the terms of this programme, developers were supposed to self-certify COPPA compliance and use advertising SDKs that disabled behavioural tracking. However, some did not, instead using software embedded in the apps that was created by a Google-owned mobile advertising company called AdMob.
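For context, Google’s Mobile Ads SDK does expose a setting for exactly this situation. The sketch below, based on the SDK’s public documentation, shows how an Android app can flag its audience as child-directed; whether the apps named in the suit applied settings like these is precisely what was in dispute.

```kotlin
import android.app.Application
import com.google.android.gms.ads.MobileAds
import com.google.android.gms.ads.RequestConfiguration

// Sketch of how an Android app tells the Google Mobile Ads SDK that its audience
// is child-directed, which is meant to switch off behavioural/personalised ads.
// Class and constant names follow Google's public documentation; this illustrates
// the compliant configuration rather than what any particular app actually did.
class FamilyFriendlyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        val config = MobileAds.getRequestConfiguration()
            .toBuilder()
            .setTagForChildDirectedTreatment(
                RequestConfiguration.TAG_FOR_CHILD_DIRECTED_TREATMENT_TRUE
            )
            .setMaxAdContentRating(RequestConfiguration.MAX_AD_CONTENT_RATING_G)
            .build()
        MobileAds.setRequestConfiguration(config)
        MobileAds.initialize(this)
    }
}
```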

When kids used these apps, which included games, AdMob collected data from them, according to the class action lawsuit: IP addresses, device identifiers, usage data, and the child’s location to within five meters, all transmitted to Google without parental consent. The AdMob software could then use that information to display targeted ads to users.

This kind of activity is exactly what the Children’s Online Privacy Protection Act (COPPA) was created to stop. The law requires operators of child-directed services to obtain verifiable parental consent before collecting personal information from children under 13. That includes cookies and other identifiers, which are the core tools advertisers use to track and target people.

The families filing the lawsuit alleged that Google knew this was going on:

“Google and AdMob knew at the time that their actions were resulting in the exfiltration [of] data from millions of children under thirteen but engaged in this illicit conduct to earn billions of dollars in advertising revenue.”

Security researchers had alerted Google to the issue in 2018, according to the filing.

YouTube settlement approved

What’s most disappointing is that these privacy issues keep happening. This news arrives at the same time that a judge approved a settlement on another child privacy case involving Google’s use of children’s data on YouTube. This case dates back to October 2019, the same year that Google and YouTube paid a whopping $170m fine for violating COPPA.

Families in this class action suit alleged that YouTube used cookies and persistent identifiers on child-directed channels, collecting data including IP addresses, geolocation data, and device serial numbers. This is the same thing that it does for adults across the web, but COPPA protects kids under 13 from such activities, as do some state laws.

According to the complaint, YouTube collected this information between 2013 and 2020 and used it for behavioural advertising. This form of advertising infers people’s interests from their identifiers, and it is more lucrative than contextual advertising, which focuses only on a channel’s content.

The case said that various channel owners opted into behavioural advertising, prompting Google to collect this personal information. No parental consent was obtained, the plaintiffs alleged. Channel owners named in the suit included Cartoon Network, Hasbro, Mattel, and DreamWorks Animation.

Under the YouTube settlement (which was agreed in August and recently approved by a judge), families can file claims through YouTubePrivacySettlement.com, although the deadline is this Wednesday. Eligible families are likely to get $20–$30 each after attorneys’ fees and administration costs, assuming 1–2% of those eligible submit claims.

COPPA is evolving

Last year, the FTC amended its COPPA Rule to introduce mandatory opt-in consent for targeted advertising to children, separate from general data-collection consent.
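To make that distinction concrete, here’s a minimal sketch (a hypothetical data model, not taken from any FTC guidance) of what recording the two consents separately might look like:

```kotlin
// Hypothetical consent record: consent to collect a child's data and consent to
// use it for targeted advertising are stored and checked independently.
data class ParentalConsent(
    val collectionConsent: Boolean,          // verifiable consent to collect personal information
    val targetedAdvertisingConsent: Boolean  // separate opt-in for behavioural targeting
)

fun canServeTargetedAds(consent: ParentalConsent): Boolean =
    consent.collectionConsent && consent.targetedAdvertisingConsent

fun main() {
    val collectionOnly = ParentalConsent(collectionConsent = true, targetedAdvertisingConsent = false)
    println(canServeTargetedAds(collectionOnly)) // false: general consent alone is no longer enough
}
```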

The amendments expand the definition of personal information to include biometric data and government-issued ID information. They also let the FTC use a site operator’s marketing materials to determine whether a site targets children.

Site owners must also now tell parents who they’ll share information with, and the amendments stop operators from keeping children’s personal information forever. If this all sounds like measures that should have been in place to protect children online from the get-go, we agree with you. In any case, companies have until this April to comply with the new rules.

Will the COPPA rules make a difference? It’s difficult to say, given the stream of privacy cases involving Google LLC (which owns YouTube and AdMob, among others). When viewed against Alphabet’s overall earnings, an $8.25m penalty risks being seen as a routine business expense rather than a meaningful deterrent.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Lego’s Smart Bricks explained: what they do, and what they don’t

8 January 2026 at 08:35

Lego just made what it claims is its most important product release since it introduced minifigures in 1978. No, it’s not yet another brand franchise. It’s a computer in a brick.

Called the Smart Brick, it’s part of a broader system called Smart Play that Lego hopes will revolutionize your child’s interaction with Lego.

These aren’t your grandma’s Lego bricks. The 2×4 techno-brick houses a custom ASIC chip that Lego says is smaller than a single Lego stud, measuring about 4.1mm. Inside are accelerometers, light and sound sensors, an LED array, and a miniature speaker with an onboard synthesizer that generates sound effects in real time, rather than just playing pre-recorded clips.

How the pieces talk to each other

The bricks charge wirelessly on a dedicated pad and contain batteries that Lego says can last for years. They also communicate with each other to trigger actions, such as interactive sound effects.

This is where the other Smart Play components come in: Smart Tags and Smart Minifigures. The 2×2 stud-less Smart Tags contain unique digital IDs that tell bricks how to behave. A helicopter tag, for example, might trigger propeller sounds.

There’s also a Neighbor Position Measurement system that detects brick proximity and orientation. So a brick might do different things as it gets closer to a Smart Tag or Smart Minifigure, for example.
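As a purely illustrative model (not Lego’s firmware or the BrickNet protocol), the idea of a tag ID plus measured proximity driving a brick’s behaviour might look something like this:

```kotlin
// Illustrative toy model: a tag's ID selects a sound theme, and the measured
// distance decides which effect the brick plays.
enum class TagTheme { HELICOPTER, STARFIGHTER, UNKNOWN }

data class SmartTag(val id: String, val theme: TagTheme)

fun effectFor(tag: SmartTag, distanceMm: Int): String = when {
    tag.theme == TagTheme.HELICOPTER && distanceMm < 50 -> "rotor-spinup"
    tag.theme == TagTheme.HELICOPTER -> "idle-hum"
    tag.theme == TagTheme.STARFIGHTER && distanceMm < 50 -> "laser-blast"
    tag.theme == TagTheme.STARFIGHTER -> "engine-drone"
    else -> "default-chirp"
}

fun main() {
    val heliTag = SmartTag(id = "tag-0042", theme = TagTheme.HELICOPTER)
    println(effectFor(heliTag, distanceMm = 30))  // rotor-spinup: closer tags trigger livelier effects
    println(effectFor(heliTag, distanceMm = 200)) // idle-hum
}
```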

The privacy implications of Smart Bricks

Any time parents hear about toys communicating with other devices, they’re right to be nervous. They’ve had to contend with toys that give up kids’ sensitive personal data and allegedly have the potential to become listening devices for surveillance.

However, Lego says its proprietary Bluetooth-based protocol, called BrickNet, comes with encryption and built-in privacy controls.

One clear upside is that the system doesn’t need an internet connection for these devices to work, and there are no screens or companion apps involved either. For parents weary of reading about children’s apps quietly harvesting data, that alone will come as a relief.

Lego also makes specific privacy assurances. Yes, there’s a microphone in the Smart Brick, but no, it doesn’t record sound (it’s just a sensor), the company says. There are no cameras either.

Perhaps the biggest relief of all, though, is that there’s no AI in this brick.

At a time when “AI-powered” is being sprinkled over everything from washing machines to toilets, skipping AI may be the smartest design decision here. AI-driven toys come with their own risks, especially when children don’t get a meaningful choice about how that technology behaves once it’s out of the box.

In the past, they’ve been subjected to sexual content from AI-powered teddy bears. Against that backdrop, Lego’s restraint feels deliberate, and welcome.

Are these the bricks you’re looking for?

Will the world take to Smart Bricks? Probably.

Should it? The best response comes from my seven-year-old, scoffing,

“Kids can make enough annoying noises themselves.”

We won’t have long to wait to find out. Lego announced Lucasfilm as its first Smart Play partner when it unveiled the system at CES 2026 in Las Vegas this week, and pre-orders open on January 9. The initial lineup includes three kits: TIE Fighters, X-Wings, and A-Wings, complete with associated scenery.

Expect lots of engine, laser, and lightsaber sounds from those rigs—and perhaps a lack of adorable sound effects from your kids when the blocks start doing the work. That makes us a little sad.

More optimistically, perhaps there are opportunities for creative play, such as devices that spin, flip, and light up based on their communications with other bricks. That could turn this into more of an experiment in basic circuitry and interaction than a simple noise-making device. One of the best things about watching kids play is how far outside the box they think.

Whatever your view on Lego’s latest development, it doesn’t seem like it’ll let people tailor advertising to your kids, whisper atrocities at them from afar, or hack your home network. That, at the very least, is a win.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.
