
French Police Raid X Offices as Grok Investigations Grow

3 February 2026 at 16:25

French police raided the offices of the X social media platform today as European investigations widened into nonconsensual sexual deepfakes and potential child sexual abuse material (CSAM) generated by X’s Grok AI chatbot. A statement (in French) from the Paris prosecutor’s office suggested that Grok’s dissemination of Holocaust denial content may also be part of the investigation. X owner Elon Musk and former CEO Linda Yaccarino were issued “summonses for voluntary interviews” scheduled for April 20, with X employees summoned for the same week.

Europol, which is assisting in the investigation, said in a statement that the investigation is “in relation to the proliferation of illegal content, notably the production of deepfakes, child sexual abuse material, and content contesting crimes against humanity. ... The investigation concerns a range of suspected criminal offences linked to the functioning and use of the platform, including the dissemination of illegal content and other forms of online criminal activity.”

The French action comes amid a growing UK probe into Grok’s use of nonconsensual sexual imagery, and last month the EU launched its own investigation into the allegations. Meanwhile, a new Reuters report suggests that X’s attempts to curb Grok’s abuses are failing. “While Grok’s public X account is no longer producing the same flood of sexualized imagery, the Grok chatbot continues to do so when prompted, even after being warned that the subjects were vulnerable or would be humiliated by the pictures,” Reuters wrote in a report published today.

French Prosecutor Calls X Investigation ‘Constructive’

The French prosecutor’s statement said the investigation “is, at this stage, part of a constructive approach, with the objective of ultimately guaranteeing the X platform's compliance with French laws, insofar as it operates in French territory” (translated from the French). The investigation began in January 2025, the statement said, and “was broadened following other reports denouncing the functioning of Grok on the X platform, which led to the dissemination of Holocaust denial content and sexually explicit deepfakes.” The investigation concerns seven “criminal offenses,” according to the Paris prosecutor’s statement:
  • Complicity in the possession of images of minors of a child pornography nature
  • Complicity in the dissemination, offering, or making available of images of minors of a child pornography nature by an organized group
  • Violation of the right to image (sexual deepfakes)
  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Tampering with the operation of an automated data processing system by an organized group
  • Administration of an illicit online platform by an organized group
The Paris prosecutor’s office deleted its X account after announcing the investigation.

Grok Investigations in the UK Grow

In the UK, the Information Commissioner’s Office (ICO) announced that it was launching an investigation into Grok abuses on the same day that Ofcom, the UK communications regulator, said its own authority to investigate chatbots may be limited. William Malcolm, the ICO's Executive Director for Regulatory Risk & Innovation, said in a statement: “The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this.”

“Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people’s data rights,” Malcolm added. “Where we find obligations have not been met, we will take action to protect the public.”

Ilia Kolochenko, CEO at ImmuniWeb and a cybersecurity law attorney, said in a statement: “The patience of regulators is not infinite: similar investigations are already pending even in California, let alone the EU. Moreover, some countries have already temporarily restricted or threatened to restrict access to X’s AI chatbot and more bans are probably coming very soon.”

“Hopefully X will take these alarming signals seriously and urgently implement the necessary security guardrails to prevent misuse and abuse of its AI technology,” Kolochenko added. “Otherwise, X may simply disappear as a company under the snowballing pressure from the authorities and a looming avalanche of individual lawsuits.”

ShinyHunters Leads Surge in Vishing Attacks to Steal SaaS Data

2 February 2026 at 11:39

Several threat clusters are using vishing in extortion campaigns, employing tactics consistent with those of the high-profile threat group ShinyHunters. They are stealing SSO and MFA credentials to access companies' environments and steal data from cloud applications, according to Mandiant researchers.


Why Gen Z is Ditching Smartphones for Dumbphones

2 February 2026 at 00:00

Younger generations are increasingly ditching smartphones in favor of “dumbphones”—simpler devices with fewer apps, fewer distractions, and less tracking. But what happens when you step away from a device that now functions as your wallet, your memory, and your security key? In this episode, Tom and Scott explore the dumbphone movement through a privacy and […]



Security Researcher Finds Exposed Admin Panel for AI Toy

29 January 2026 at 15:46

A security researcher investigating an AI toy for a neighbor found an exposed admin panel that could have leaked the personal data and conversations of the children using the toy. The findings, detailed in a blog post by security researcher Joseph Thacker, outline the work he did with fellow researcher Joel Margolis, who found the exposed admin panel for the Bondu AI toy.

Margolis found an intriguing domain (console.bondu.com) in the mobile app backend’s Content Security Policy headers. There he found a button that simply said: “Login with Google.” “By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. But instead of a parent portal, it turned out to be the Bondu core admin panel. “We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.

AI Toy Admin Panel Exposed Children’s Conversations

After some investigation in the admin panel, the researchers found they had full access to “Every conversation transcript that any child has had with the toy,” which numbered in the “tens of thousands of sessions.” The panel also contained personal data about children and their families, including:
  • The child’s name and birth date
  • Family member names
  • The child’s likes and dislikes
  • Objectives for the child (defined by the parent)
  • The name given to the toy by the child
  • Previous conversations between the child and the toy (used to give the LLM context)
  • Device information, such as location via IP address, battery level, awake status, and more
  • The ability to update device firmware and reboot devices
They noticed the application is built on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

In addition to the authentication bypass, the researchers also discovered an Insecure Direct Object Reference (IDOR) vulnerability in the product’s API “that allowed us to retrieve any child’s profile data by simply guessing their ID.” “This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”
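To make the IDOR pattern concrete: the flaw is an API that returns a record for whatever ID the client supplies, without checking that the caller owns it. The sketch below is purely illustrative and assumes nothing about Bondu's actual API; every name and value is invented. It contrasts a vulnerable lookup with one that enforces ownership.

```python
# Hypothetical illustration of an IDOR flaw and its fix.
# None of these names come from Bondu's real API; they are invented for the example.

PROFILES = {
    101: {"owner": "parent-a@example.com", "child_name": "Alice"},
    102: {"owner": "parent-b@example.com", "child_name": "Bob"},
}

def get_profile_vulnerable(requested_id: int) -> dict:
    """IDOR: returns any profile to any caller who can guess an ID."""
    return PROFILES[requested_id]

def get_profile_fixed(requested_id: int, authenticated_user: str) -> dict:
    """Fix: verify the authenticated caller actually owns the record."""
    profile = PROFILES.get(requested_id)
    if profile is None or profile["owner"] != authenticated_user:
        raise PermissionError("not authorized for this profile")
    return profile

if __name__ == "__main__":
    # Any caller can enumerate IDs against the vulnerable version...
    print(get_profile_vulnerable(102))
    # ...while the fixed version refuses a caller who does not own the record.
    try:
        get_profile_fixed(102, authenticated_user="parent-a@example.com")
    except PermissionError as err:
        print("blocked:", err)
```

The fix is the ownership check, not the obscurity of the ID; randomizing identifiers alone would not have closed the hole the researchers describe.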

A (Very) Quick Response from Bondu

Margolis reached out to Bondu’s CEO on LinkedIn over the weekend, and the company took down the console “within 10 minutes.” “Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said. The company took other steps to investigate and look for additional security flaws, and also started a bug bounty program. It examined console access logs and found no unauthorized access beyond the researchers’ own activity, indicating the exposure had not turned into an actual data breach.

Despite the positive experience working with Bondu, the episode made Thacker reconsider buying AI toys for his own kids. “To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.” “AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place. I’m not sure handing that type of access to our kids is a good idea.” Aside from potential security issues, “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu's website says the AI toy was built with child safety in mind, noting that its "safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period."

EU Data Breach Notifications Surge as GDPR Changes Loom

29 January 2026 at 13:02

EU data breach notifications have surged 22% in the last year and GDPR fines remain high, according to a new report from law firm DLA Piper. The “sustained high level of data enforcement activity across Europe” noted in the report occurs amid the EU Digital Omnibus legislative process that critics say could substantially weaken the GDPR’s data privacy provisions.

Given the high number of data breach notifications, the report noted, “It is perhaps not surprising that the EU Digital Omnibus is proposing to raise the bar for incident notification to regulators, to capture only breaches which are likely to cause a high risk to the rights and freedoms of data subjects. Supervisory authorities have been inundated with notifications and understandably want to stem the flood so they can focus on the genuinely serious incidents.”

The success of the Digital Omnibus process may depend on how EU legislative bodies address the concerns of data privacy advocates, said the report, whose publication coincided with Data Privacy Week. “If simplification is perceived as undermining fundamental rights, the outcome could be legal uncertainty, increased litigation, and political backlash – the very opposite of the simplification and clarity businesses seek,” the law firm said. “The Omnibus therefore faces a delicate balancing act: simplifying rules without eroding trust or core rights. It is expected that the proposals will change as they are debated among the European Commission, the European Parliament, and the EU Council during the trialogue process in 2026.”

EU Data Breach Notifications Top 400 Per Day

The report found that for the first time since May 25, 2018 – the GDPR’s implementation date – average data breach notifications per day topped 400, “breaking the plateauing trend we have seen in recent years.” Between January 28, 2025 and January 27, 2026, the average number of breach notifications per day increased from 363 to 443, a jump of 22%.

“It is not clear what is driving this uptick in breach notifications, but the geo-political landscape driving more cyber-attacks, as well as the focus on cyber incidents in the media and the raft of new laws including incident notification requirements ... may be focusing minds on breach notifications,” the law firm said. Laws and regulations that may be driving the increase in EU data breach notifications include NIS2, the Network and Information Security Directive, and DORA, the Digital Operational Resilience Act, the firm said.

GDPR Fines Reverse Downward Trend

GDPR fines remained high, with European supervisory authorities issuing fines totaling approximately EUR1.2 billion in 2025, in line with 2024 levels. “While there is no year-on-year increase in aggregate GDPR fines, this figure marks a reversal of last year’s downward trend and underscores that European data protection supervisory authorities remain willing to impose substantial monetary penalties,” the law firm said. The aggregate total of fines since the implementation of GDPR across the jurisdictions surveyed stands at EUR7.1 billion as of January 27, 2026 – EUR4.04 billion of which were issued by the Irish Data Protection Commission. The Irish Data Protection Commission also imposed the highest fine in 2025, a EUR530 million penalty issued in April 2025 against TikTok for violating GDPR's international data transfer restrictions.

Fines resulting from breaches of the GDPR integrity and confidentiality principle, also known as the security principle, continue to be prominent, the report said. “Supply chain security and compliance is increasingly attracting the attention of data protection supervisory authorities,” the law firm said. “Supervisory authorities expect robust security controls to prevent personal data breaches and processors, as well as controllers, are directly liable for breaches of the security principle resulting in several fines being imposed directly on processors this year.”

Non-Material Damage Allowed Under GDPR Compensation Claims

Follow-on GDPR compensation claims also saw some notable developments, the law firm found. “This year has brought several notable rulings from the Court of Justice of the European Union (CJEU) and European courts on GDPR-related compensation claims – particularly regarding the criteria for pursuing claims for non-material damage.” One notable CJEU ruling found that non-material damage referred to in Article 82(1) GDPR “can include negative feelings, such as fear or annoyance, provided the data subject can demonstrate that they are experiencing such feelings,” the report said. “This was a win for claimants. However, in the same decision, the CJEU ruled that the mere assertion of negative feelings is insufficient for compensation; national courts must assess evidence of such feelings and be satisfied that they arise from the breach of GDPR. This provides some comfort for defendants as theoretical distress is insufficient to sound in compensation.”

Ross McKean, Chair of the DLA Piper UK Data, Privacy and Cybersecurity practice, said in a statement that “Most evident in this year's report is the validation that the cybersecurity threat landscape has reached an unprecedented level. ... Coupled with the slew of new cybersecurity laws impacting business, some of which impose personal liability on members of management bodies, our report underscores the urgency and need for organisations to optimise cyber defences and operational resilience.”

One privacy change I made for 2026 (Lock and Code S07E02)

26 January 2026 at 08:31

This week on the Lock and Code podcast…

When you hear the words “data privacy,” what do you first imagine?

Maybe you picture going into your social media apps and setting your profile and posts to private. Maybe you think about who you’ve shared your location with and deciding to revoke some of that access. Maybe you want to remove a few apps entirely from your smartphone, maybe you want to try a new web browser, maybe you even want to skirt the type of street-level surveillance provided by Automated License Plate Readers, which can record your car model, license plate number, and location on your morning drive to work.

Importantly, all of these are “data privacy,” but trying to do all of these things at once can feel impossible.

That’s why, this year, for Data Privacy Day, Malwarebytes Senior Privacy Advocate (and Lock and Code host) David Ruiz is sharing the one thing he’s doing differently to improve his privacy. And it’s this: He’s given up Google Search entirely.

When Ruiz requested the data that Google had collected about him last year, he saw that the company had recorded an eye-popping 8,000 searches in just the span of 18 months. And those 8,000 searches didn’t just reveal what he was thinking about on any given day—including his shopping interests, his home improvement projects, and his late-night medical concerns—they also revealed when he clicked on an ad based on the words he searched. This type of data, which connects a person’s searches to the likelihood of engaging with an online ad, is vital to Google’s revenue, and it’s the type of thing that Ruiz is seeking to finally cut off.

So, for 2026, he has switched to a new search engine, Brave Search.

Today, on the Lock and Code podcast, Ruiz explains why he made the switch, what he values about Brave Search, and why he refused to replace Google with any of the major AI platforms.

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)



Are we ready for ChatGPT Health?

9 January 2026 at 07:26

How comfortable are you with sharing your medical history with an AI?

I’m certainly not.

OpenAI’s announcement about its new ChatGPT Health program prompted discussions about data privacy and how the company plans to keep the information users submit safe.

ChatGPT Health is a dedicated “health space” inside ChatGPT that lets users connect their medical records and wellness apps so the model can answer health and wellness questions in a more personalized way.

OpenAI promises additional, layered protections designed specifically for health, “to keep health conversations protected and compartmentalized.”

First off, it’s important to understand that this is not a diagnostic or treatment system. It’s framed as a support tool to help understand health information and prepare for care.

But this is the part that raised questions and concerns:

“You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you.”

In other words, ChatGPT Health lets you link medical records and apps such as Apple Health, MyFitnessPal, and others so the system can explain lab results, track trends (e.g., cholesterol), and help you prepare questions for clinicians or compare insurance options based on your health data.

Given our reservations about the state of AI security in general and chatbots in particular, this is a line that I don’t dare cross. For now, however, I don’t even have the option, since only users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the United Kingdom can sign up for the waitlist.

OpenAI says it only admits partners and apps to ChatGPT Health that meet its privacy and security requirements, which, by design, shifts a great deal of trust onto ChatGPT Health itself.

Users should realize that health information is highly sensitive. As Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, told The Record, by sharing their electronic medical records with ChatGPT Health, users in the US could effectively strip those records of HIPAA protection, a serious consideration for anyone sharing medical data.

She added:

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time.”

OpenAI claims 230 million users already ask ChatGPT health and wellness questions each week. Should you decide to try this new feature out, we would advise you to proceed with caution and take the advice to enable 2FA for ChatGPT to heart; I’d encourage those 230 million users to do the same.



Building Trustworthy AI Agents

12 December 2025 at 07:00

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us into doubting who we are or what we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.

These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. Integrity is the third leg of data security—the old CIA triad. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.

The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.

To further development along these lines, I and others have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.

What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.

First, it would be broadly accessible as a data repository. We each want this data to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others—emails, texts, social media posts—and revealed preference data as inferred by other systems. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.

Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can’t be tied to a single AI model. Our AI future will include many different models—some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.

Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.

Fourth, it would be under the user’s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.

Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person’s data. And there are also write attacks, where adversaries add to or change a user’s data. Defending against both is critical; this all implies a complex and robust authentication system.

Sixth, and finally, it must be easy to use. If we’re envisioning digital personal assistants for everybody, it can’t require specialized security training to use properly.
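To ground requirements three and four above, here is a minimal, hypothetical sketch. It is illustrative only, not a reference design from this essay, from Solid, or from any product: an issuer signs a record so a counterparty can verify it has not been altered (an HMAC stands in for real public-key attestations), and a personal data store enforces scoped, expiring, revocable grants while logging every access attempt.

```python
# Minimal, hypothetical sketch of requirements three and four: verifiable data
# plus fine-grained, revocable grants with an audit trail. Illustrative only;
# every name here is invented.
import hashlib
import hmac
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

ISSUER_KEY = b"issuer-verification-key"  # real systems would use public-key signatures


def attest(record: dict) -> dict:
    """An issuer (e.g. a payroll provider) signs a record so others can verify it."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "attestation": tag}


def verify(bundle: dict) -> bool:
    """A counterparty (e.g. a bank) checks that the record was not altered."""
    payload = json.dumps(bundle["record"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["attestation"])


@dataclass
class Grant:
    grantee: str        # which AI system or party may read
    fields: set         # which portions of the dossier it may see
    expires: datetime   # access is time-limited and revocable


@dataclass
class PersonalDataStore:
    data: dict
    grants: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)  # (timestamp, grantee, field) tuples

    def grant(self, g: Grant) -> None:
        self.grants.append(g)

    def revoke(self, grantee: str) -> None:
        self.grants = [g for g in self.grants if g.grantee != grantee]

    def read(self, grantee: str, field_name: str):
        now = datetime.now(timezone.utc)
        self.audit_log.append((now, grantee, field_name))  # every attempt is recorded
        allowed = any(
            g.grantee == grantee and field_name in g.fields and g.expires > now
            for g in self.grants
        )
        if not allowed:
            raise PermissionError(f"{grantee} may not read {field_name}")
        return self.data[field_name]
```

In any real deployment the shared HMAC key would be replaced by public-key signatures from the issuer, and the audit log itself would need integrity protection; the sketch only shows where those controls sit relative to the data.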

I’m not the first to suggest something like this. Researchers have proposed a “Human Context Protocol” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.

The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can’t ignore one or the other.

Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.

This essay was originally published in IEEE Security & Privacy.

Air fryer app caught asking for voice data (re-air) (Lock and Code S06E24)

2 December 2025 at 11:22

This week on the Lock and Code podcast

It’s often said online that if a product is free, you’re the product, but what if that bargain was no longer true? What if, depending on the device you paid hard-earned money for, you still became a product yourself, to be measured, anonymized, collated, shared, or sold, often away from view?

In 2024, a consumer rights group out of the UK teased this new reality when it published research into whether people’s air fryers—seriously—might be spying on them.

By analyzing the associated Android apps for three separate air fryer models from three different companies, researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone.

As the researchers wrote:

“In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason.”

Bizarrely, these types of data requests are far from rare.

Today, on the Lock and Code podcast, we revisit a 2024 episode in which host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen utilities that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.

These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated.

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)



Australian Man Gets 7 Years for ‘Evil Twin’ WiFi Attacks

1 December 2025 at 12:38

An Australian man has been sentenced to more than seven years in jail on charges that he created ‘evil twin’ WiFi networks to hack into women’s online accounts to steal intimate photos and videos. The Australian Federal Police (AFP) didn’t name the man in announcing the sentencing, but several Australian news outlets identified him as Michael Clapsis, 44, of Perth, an IT professional who allegedly used his skills to carry out the attacks.

He was sentenced to seven years and four months in Perth District Court on November 28, and will be eligible for parole after serving half that time, according to the Sydney Morning Herald. The AFP said Clapsis pled guilty to 15 charges, ranging from unauthorised access or modification of restricted data to unauthorised impairment of electronic communication, failure to comply with an order, and attempted destruction of evidence, among other charges.

‘Evil Twin’ WiFi Network Detected on Australian Domestic Flight

The AFP investigation began in April 2024, when an airline reported that its employees had identified a suspicious WiFi network mimicking a legitimate access point – known as an “evil twin” – during a domestic flight. On April 19, 2024, AFP investigators searched the man’s luggage when he arrived at Perth Airport, where they seized a portable wireless access device, a laptop and a mobile phone. They later executed a search warrant “at a Palmyra home.” Forensic analysis of data and seized devices “identified thousands of intimate images and videos, personal credentials belonging to other people, and records of fraudulent WiFi pages,” the AFP said.

The day after the search warrant, the man deleted more than 1,700 items from his account on a data storage application and “unsuccessfully tried to remotely wipe his mobile phone,” the AFP said. Between April 22 and 23, 2024, the AFP said the man “used a computer software tool to gain access to his employer’s laptop to access confidential online meetings between his employer and the AFP regarding the investigation.”

The man allegedly used a portable wireless access device, called a “WiFi Pineapple,” to detect device probe requests and instantly create a network with the same name. A device would then connect to the evil twin network automatically. The network took people to a webpage and prompted them to log in using an email or social media account, where their credentials were then captured. AFP said its cybercrime investigators identified data related to use of the fraudulent WiFi pages at airports in Perth, Melbourne and Adelaide, as well as on domestic flights, “while the man also used his IT privileges to access restricted and personal data from his previous employment.” “The man unlawfully accessed social media and other online accounts linked to multiple unsuspecting women to monitor their communications and steal private and intimate images and videos,” the AFP said.
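On the defensive side, one rough heuristic for spotting an evil twin is to watch for a known SSID suddenly being advertised by more than one access point. The sketch below is illustrative only: it assumes the scapy library, root privileges, and a wireless interface already in monitor mode (the interface name wlan0mon is an assumption), and it is a teaching example rather than a hardened detector.

```python
# Hedged sketch: flag an SSID that appears under more than one BSSID, a rough
# heuristic for "evil twin" access points. Requires scapy, root privileges, and
# an interface already in monitor mode; the interface name is an assumption.
from collections import defaultdict
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt

seen = defaultdict(set)  # SSID -> set of BSSIDs observed advertising it

def handle(pkt):
    if pkt.haslayer(Dot11Beacon) and pkt.haslayer(Dot11Elt):
        ssid = pkt[Dot11Elt].info.decode(errors="ignore")
        bssid = pkt[Dot11].addr2
        if ssid and bssid not in seen[ssid]:
            seen[ssid].add(bssid)
            if len(seen[ssid]) > 1:
                print(f"WARNING: '{ssid}' advertised by multiple BSSIDs: {seen[ssid]}")

if __name__ == "__main__":
    sniff(iface="wlan0mon", prn=handle, store=False)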

Victims of Evil Twin WiFi Attack Enter Statements

At the sentencing, a prosecutor read from victim impact statements detailing the distress the man’s victims suffered and their enduring feelings of shame and loss of privacy. One said, “I feel like I have eyes on me 24/7,” according to the Morning Herald. Another said, “Thoughts of hatred, disgust and shame have impacted me severely. Even though they were only pictures, they were mine not yours.” The paper said Clapsis’ attorney told the court that “He’s sought to seek help, to seek insight, to seek understanding and address his way of thinking.”

The case highlights the importance of avoiding free public WiFi when possible – and not accessing sensitive websites or applications if one must be used. Any network that requests personal details should be avoided. “If you do want to use public WiFi, ensure your devices are equipped with a reputable virtual private network (VPN) to encrypt and secure your data,” the AFP said. “Disable file sharing, don’t use things like online banking while connected to public WiFi and, once you disconnect, change your device settings to ‘forget network’.”

French Regulator Fines Vanity Fair Publisher €750,000 for Persistent Cookie Consent Violations

28 November 2025 at 05:49

France's data protection authority discovered that when visitors clicked the button to reject cookies on Vanity Fair (vanityfair[.]fr), the website continued placing tracking technologies on their devices and reading existing cookies without consent. That violation now costs publisher Les Publications Condé Nast €750,000 in fines, six years after privacy advocate NOYB first filed complaints against the media company.

The November 20 sanction by CNIL's restricted committee marks the latest enforcement action in France's aggressive campaign to enforce cookie consent requirements under the ePrivacy Directive.

NOYB, the European privacy advocacy organization led by Max Schrems, filed the original public complaint in December 2019 concerning cookies placed on user devices by the Vanity Fair France website. After multiple investigations and discussions with CNIL, Condé Nast received a formal compliance order in September 2021, with proceedings closed in July 2022 based on assurances of corrective action.

Repeated Violations Despite Compliance Order

CNIL conducted follow-up online investigations in July and November 2023, then again in February 2025, discovering that the publisher had failed to implement compliant cookie practices despite the earlier compliance order. The restricted committee found Les Publications Condé Nast violated obligations under Article 82 of France's Data Protection Act across multiple dimensions.

Investigators discovered cookies requiring consent were placed on visitors' devices as soon as they arrived on vanityfair.fr, even before users interacted with the information banner to express a choice. This automatic placement violated fundamental consent requirements mandating that tracking technologies only be deployed after users provide explicit permission.

The website lacked clarity in information provided to users about cookie purposes. Some cookies appeared categorized as "strictly necessary" and therefore exempt from consent obligations, but useful information about their actual purposes remained unavailable to visitors. This misclassification potentially allowed the publisher to deploy tracking technologies under false pretenses.

Most significantly, consent refusal and withdrawal mechanisms proved completely ineffective. When users clicked the "Refuse All" button in the banner or attempted to withdraw previously granted consent, new cookies subject to consent requirements were nevertheless placed on their devices while existing cookies continued being read.
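In implementation terms, avoiding the failure CNIL describes means gating every non-essential cookie on a recorded consent choice and never writing trackers before, or after, a refusal. The following is a minimal, hypothetical sketch using only the Python standard library; the cookie names and the consent flag are invented for illustration and do not represent any publisher's real code.

```python
# Hypothetical sketch of consent-gated cookie setting, standard library only.
# Cookie names and the consent flag are invented; this is not any publisher's code.
from http.cookies import SimpleCookie

ESSENTIAL = {"session_id"}                          # strictly necessary, consent-exempt
NON_ESSENTIAL = {"ad_tracking_id", "analytics_id"}  # require prior, explicit consent

def build_set_cookie_headers(consent_given: bool, values: dict) -> list:
    """Emit Set-Cookie headers, writing non-essential cookies only after consent."""
    jar = SimpleCookie()
    for name, value in values.items():
        if name in ESSENTIAL or (consent_given and name in NON_ESSENTIAL):
            jar[name] = value
            jar[name]["path"] = "/"
    return [morsel.OutputString() for morsel in jar.values()]

if __name__ == "__main__":
    # Before the visitor chooses, only the strictly necessary cookie may be set.
    print(build_set_cookie_headers(False, {"session_id": "abc", "ad_tracking_id": "x1"}))
    # After "Refuse All", tracking cookies must also stay off, and any already set
    # should be expired (for example, by re-sending them with Max-Age=0).
```

A compliant banner would also need to make refusal as easy as acceptance and expire previously set trackers when consent is withdrawn, which is precisely where CNIL found vanityfair.fr falling short.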

Escalating French Enforcement Actions

The fine amount reflects the fact that Condé Nast had already been issued a formal notice in 2021 but failed to correct its practices, as well as the number of people affected and the multiple breaches of the rules protecting users with respect to cookies.

The CNIL fine represents another in a series of NOYB-related enforcement actions, with the French authority previously fining Criteo €40 million in 2023 and Google €325 million earlier in 2025. Spain's AEPD issued a €100,000 fine against Euskaltel in related NOYB litigation.

Also read: Google Slapped with $381 Million Fine in France Over Gmail Ads, Cookie Consent Missteps

According to reports, Condé Nast acknowledged violations in its defense but cited technical errors, blamed the Interactive Advertising Bureau's Transparency and Consent Framework for misleading information, and stated the cookies in question fall under the functionality category. The company claimed good faith and cooperative efforts while arguing against public disclosure of the sanction.

The Cookie Consent Conundrum

French enforcement demonstrates the ePrivacy Directive's teeth in protecting user privacy. CNIL maintains material jurisdiction to investigate and sanction cookie operations affecting French users, with the GDPR's one-stop-shop mechanism not applying since cookie enforcement falls under separate ePrivacy rules transposed into French law.

The authority has intensified actions against dark patterns in consent mechanisms, particularly practices making cookie acceptance easier than refusal. Previous CNIL decisions against Google and Facebook established that websites offering immediate "Accept All" buttons must provide equivalent simple mechanisms for refusing cookies, with multiple clicks to refuse constituting non-compliance.

The six-year timeline from initial complaint to final sanction illustrates both the persistence required in privacy enforcement and the extended timeframes during which companies can continue non-compliant practices that generate advertising revenue through unauthorized user tracking.

What does Google know about me? (Lock and Code S06E21)

20 October 2025 at 10:26

This week on the Lock and Code podcast…

Google is everywhere in our lives. Its reach into our data extends just as far.

After investigating how much data Facebook had collected about him in his nearly 20 years with the platform, Lock and Code host David Ruiz had similar questions about the other Big Tech platforms in his life, and this time, he turned his attention to Google.

Google dominates much of the modern web. It has a search engine that handles billions of requests a day. Its tracking and metrics service, Google Analytics, is reportedly embedded in tens of millions of websites. Its Maps feature not only serves up directions around the world, it also tracks traffic patterns across countless streets, highways, and more. Its online services for email (Gmail), cloud storage (Google Drive), and office software (Google Docs, Sheets, and Slides) are household names. And it also runs the most popular web browser in the world, Google Chrome, and the most popular operating system in the world, Android.

Today, on the Lock and Code podcast, Ruiz explains how he requested his data from Google and what he learned not only about the company, but about himself, in the process. That includes the 142,729 items in his Gmail inbox right now, along with the 8,079 searches he made, 3,050 related websites he visited, and 4,610 YouTube videos he watched in just the past 18 months. It also includes his late-night searches for worrying medical symptoms, his movements across the US as his IP address was recorded when logging into Google Maps, his emails, his photos, his notes, his old freelance work as a journalist, his outdated cover letters when he was unemployed, his teenage-year Google Chrome bookmarks, his flight and hotel searches, and even the searches he made within his own Gmail inbox and his Google Drive.

After digging into the data for long enough, Ruiz came to a frightening conclusion: Google knows whatever the hell it wants about him; it just has to look.

But Ruiz wasn’t happy to let the company’s access continue. So he has a plan.

“I am taking steps to change that [access] so that the next time I ask, ‘What does Google know about me?’ I can hopefully answer: A little bit less.”

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


