French Police Raid X Offices as Grok Investigations Grow

3 February 2026 at 16:25

French police raided the offices of the X social media platform today as European investigations widened into nonconsensual sexual deepfakes and potential child sexual abuse material (CSAM) generated by X’s Grok AI chatbot. A statement (in French) from the Paris prosecutor’s office suggested that Grok’s dissemination of Holocaust denial content may also be at issue in the investigations. X owner Elon Musk and former CEO Linda Yaccarino were issued “summonses for voluntary interviews” on April 20, along with X employees the same week.

Europol, which is assisting in the investigation, said in a statement that the investigation is “in relation to the proliferation of illegal content, notably the production of deepfakes, child sexual abuse material, and content contesting crimes against humanity. ... The investigation concerns a range of suspected criminal offences linked to the functioning and use of the platform, including the dissemination of illegal content and other forms of online criminal activity.”

The French action comes amid a growing UK probe into Grok’s use of nonconsensual sexual imagery, and last month the EU launched its own investigation into the allegations. Meanwhile, a new Reuters report suggests that X’s attempts to curb Grok’s abuses are failing. “While Grok’s public X account is no longer producing the same flood of sexualized imagery, the Grok chatbot continues to do so when prompted, even after being warned that the subjects were vulnerable or would be humiliated by the pictures,” Reuters wrote in a report published today.

French Prosecutor Calls X Investigation ‘Constructive’

The French prosecutor’s statement said the investigation “is, at this stage, part of a constructive approach, with the objective of ultimately guaranteeing the X platform's compliance with French laws, insofar as it operates in French territory” (translated from the French). The investigation began in January 2025, the statement said, and “was broadened following other reports denouncing the functioning of Grok on the X platform, which led to the dissemination of Holocaust denial content and sexually explicit deepfakes.” The investigation concerns seven “criminal offenses,” according to the Paris prosecutor’s statement:
  • Complicity in the possession of images of minors of a child pornography nature
  • Complicity in the dissemination, offering, or making available of images of minors of a child pornography nature by an organized group
  • Violation of the right to image (sexual deepfakes)
  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Tampering with the operation of an automated data processing system by an organized group
  • Administration of an illicit online platform by an organized group
The Paris prosecutor’s office deleted its X account after announcing the investigation.

Grok Investigations in the UK Grow

In the UK, the Information Commissioner’s Office (ICO) announced that it was launching an investigation into Grok abuses on the same day that Ofcom, the UK communications regulator, said its own authority to investigate chatbots may be limited. William Malcolm, the ICO’s Executive Director for Regulatory Risk & Innovation, said in a statement: “The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this.”

“Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people’s data rights,” Malcolm added. “Where we find obligations have not been met, we will take action to protect the public.”

Ilia Kolochenko, CEO at ImmuniWeb and a cybersecurity law attorney, said in a statement: “The patience of regulators is not infinite: similar investigations are already pending even in California, let alone the EU. Moreover, some countries have already temporarily restricted or threatened to restrict access to X’s AI chatbot, and more bans are probably coming very soon.”

“Hopefully X will take these alarming signals seriously and urgently implement the necessary security guardrails to prevent misuse and abuse of its AI technology,” Kolochenko added. “Otherwise, X may simply disappear as a company under the snowballing pressure from the authorities and a looming avalanche of individual lawsuits.”

European Commission Launches Fresh DSA Investigation Into X Over Grok AI Risks

27 January 2026 at 01:11

The European Commission has launched a new formal investigation into X under the Digital Services Act (DSA), intensifying regulatory scrutiny over the platform’s use of its AI chatbot, Grok. Announced on January 26, the move follows mounting concerns that Grok’s image-generation and recommender functionalities may have exposed users in the EU to illegal and harmful content, including manipulated sexually explicit images and material that could amount to child sexual abuse material (CSAM). This latest European Commission investigation into X runs in parallel with an extension of an ongoing probe first opened in December 2023. The Commission will now examine whether X properly assessed and mitigated the systemic risks associated with integrating Grok’s functionalities into its platform in the EU, as required under the DSA.

Focus on Grok AI and Illegal Content Risks

At the core of the new proceedings is whether X fulfilled its obligations to assess and reduce risks stemming from Grok AI. The Commission said the risks appear to have already materialised, exposing EU citizens to serious harm. Regulators will investigate whether X:
  • Diligently assessed and mitigated systemic risks, including the dissemination of illegal content, negative effects related to gender-based violence, and serious consequences for users’ physical and mental well-being.
  • Conducted and submitted an ad hoc risk assessment report to the Commission for Grok’s functionalities before deploying them, given their critical impact on X’s overall risk profile.
If proven, these failures would constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) of the Digital Services Act. The Commission stressed that the opening of formal proceedings does not prejudge the outcome but confirmed that an in-depth investigation will now proceed as a matter of priority.

Recommender Systems Also Under Expanded Scrutiny

In a related step, the European Commission has extended its December 2023 investigation into X’s recommender systems. This expanded review will assess whether X properly evaluated and mitigated all systemic risks linked to how its algorithms promote content, including the impact of its recently announced switch to a Grok-based recommender system. As a designated very large online platform (VLOP) under the DSA, X is legally required to identify, assess, and reduce systemic risks arising from its services in the EU. These risks include the spread of illegal content and threats to fundamental rights, particularly those affecting minors.

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, underlined the seriousness of the case in a statement: “Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens - including those of women and children - as collateral damage of its service.”

Earlier this month, a European Commission spokesperson had also addressed the issue while speaking to journalists in Brussels, calling the matter urgent and unacceptable. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” the spokesperson said, adding: “This is not ‘spicy’. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

International Pressure Builds Around Grok AI

The investigation comes against a backdrop of rising regulatory pressure worldwide over Grok AI’s image-generation capabilities. On January 16, X announced changes to Grok aimed at preventing the creation of nonconsensual sexualised images, including content that critics say amounts to CSAM. The update followed weeks of scrutiny and reports of explicit material generated using Grok. In the United States, California Attorney General Rob Bonta confirmed on January 14 that his office had opened an investigation into xAI, the company behind Grok, over reports describing the depiction of women and children in explicit situations. Bonta called the reports “shocking” and urged immediate action, saying his office is examining whether the company may have violated the law. U.S. lawmakers have also stepped in. On January 12, three senators urged Apple and Google to remove X and Grok from their app stores, arguing that the chatbot had repeatedly violated app store policies related to abusive and exploitative content.

Next Steps in the European Commission Investigation Into X

As part of the Digital Services Act (DSA) enforcement process, the Commission will continue gathering evidence by sending additional requests for information, conducting interviews, or carrying out inspections. Interim measures could be imposed if X fails to make meaningful adjustments to its service. The Commission is also empowered to adopt a non-compliance decision or accept commitments from X to remedy the issues under investigation. Notably, the opening of formal proceedings shifts enforcement authority to the Commission, relieving national Digital Services Coordinators of their supervisory powers for the suspected infringements. The investigation complements earlier DSA proceedings that resulted in a €120 million fine against X in December 2025 for deceptive design, lack of advertising transparency, and insufficient data access for researchers. With Grok AI now firmly in regulators’ sights, the outcome of this probe could have major implications for how AI-driven features are governed on large online platforms across the EU.

How DPDP Rules Are Quietly Reducing Deepfake and Synthetic Identity Risks

17 December 2025 at 02:54

Nikhil Jhanji, Principal Product Manager, Privy by IDfy

The Digital Personal Data Protection (DPDP) rules have finally given enterprises a clear structure for how personal data enters and moves through their systems. What has not been discussed enough is that this same structure also reduces the space in which deepfakes and synthetic identities can slip through. For months, the Act was discussed only in broad terms; now enterprises have to translate the rules into real action. As they do that work, a practical advantage becomes visible: the discipline required around consent, accuracy, and provenance creates an environment where false personas cannot blend in as easily. This was not the intention of the framework, but it is an important consequence.

DPDP Rules Bring Structure to Enterprise Data Intake

The first shift happens at data entry. The rules require clear consent, proof of lawful purpose, and timely correction of errors. This forces organisations to examine the origin of the data they collect and to maintain records that confirm why the data exists. Better visibility into the source and purpose of data makes it harder for synthetic identities to enter the system through weak or careless intake flows.

This matters because the word synthetic now carries two very different meanings. One meaning refers to responsible synthetic data used in privacy-enhancing technologies. This type is created intentionally, documented carefully, and used to train models or test systems without revealing personal information. It supports the goals of privacy regulation and does not imitate real individuals.
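To make that intake discipline concrete, here is a minimal sketch of what a provenance-aware intake record might look like. The IntakeRecord type, its field names, and the has_documented_origin check are hypothetical illustrations, not something prescribed by the DPDP rules or drawn from any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class IntakeRecord:
    """A personal-data record captured at the point of entry."""
    subject_id: str         # identifier for the data principal
    source: str             # where the data originated (form, API, partner feed)
    lawful_purpose: str     # documented reason the data is being collected
    consent_reference: str  # pointer to the stored consent artefact
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def has_documented_origin(self) -> bool:
        # A record is admissible only when its source, purpose,
        # and consent evidence are all present.
        return all([self.source, self.lawful_purpose, self.consent_reference])
```

The point of the sketch is simply that every record answers three questions at intake: where the data came from, why it was collected, and where the consent evidence lives.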

Synthetic Data vs Synthetic Identity: A Critical Difference

The other meaning refers to deceptive synthetic identities, false personas deliberately created to exploit weak verification processes. These may include deepfake facial images, manipulated voice samples, and fabricated documents or profiles that appear legitimate enough to pass routine checks.

This form of synthetic identity thrives in environments with poor data discipline and is designed specifically to mislead systems and people.

The DPDP rules help enterprises tell the difference with more clarity. Responsible synthetic data has provenance and purposeful creation. Deceptive synthetic identity has neither. Once intake and governance become more structured, the distinction becomes easier to detect through both human review and automated systems.
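As a rough illustration of how that automated detection might work, building on the hypothetical IntakeRecord sketched above, records arriving without provenance evidence can be surfaced for human review. This is a sketch under those assumptions, not a production control.

```python
def route_for_review(records: list[IntakeRecord]) -> list[IntakeRecord]:
    """Flag records that lack documented provenance.

    Responsible synthetic data arrives with a recorded source,
    purpose, and consent reference; a deceptive synthetic identity
    typically arrives with none of these, which is exactly what
    this filter surfaces for review.
    """
    return [r for r in records if not r.has_documented_origin()]
```

Nothing here catches a convincing deepfake on its own; it only narrows the pool of records that fraud engines and human reviewers must scrutinise.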

Cleaner Data Improves Fraud and Risk Detection

As organisations rewrite consent journeys and strengthen provenance under the DPDP rules, the second advantage becomes clear. Cleaner input improves downstream behaviour. Fraud engines perform better with consistent signals. Risk decisions become clearer. Customer support teams gain more dependable records. When data is scattered and unchecked, synthetic personas move more freely. When data is organised and verified, they become more visible.

This is where the influence of the DPDP rules becomes subtle. Deepfake content succeeds by matching familiar patterns. It blends into weak systems that cannot challenge continuity. Structured data environments limit these opportunities. They reduce ambiguity and shrink the number of places where a false identity can hide. This gives enterprises a stronger base for every detection capability they depend on.

There is also a behavioural shift introduced by the DPDP rules. Once teams begin managing data with more discipline, their instinct around authenticity improves. Consent is checked properly. Accuracy is taken seriously. Records are maintained rather than ignored. This change in everyday behaviour strengthens identity awareness across the organisation. Deepfake risk is not only technical. It is also operational, and disciplined teams recognise anomalies faster.

DPDP Rules Do Not Stop Deepfakes, but They Shrink the Attack Surface

None of this means that DPDP rules stop deepfakes. They do not. Deepfake quality is rising and will continue to challenge even mature systems. What the rules offer is a necessary foundation. They push organisations to adopt habits of verification, documentation, and controlled intake. Those habits shrink the attack surface for synthetic identities and improve the effectiveness of whatever detection tools a company chooses to use.

As enterprises interpret the rules, many will see the work as procedural. New notices. Updated consent. Retention plans. But the real strength will emerge in the functions that depend on reliable identity and reliable records. Credit decisions. Access management. Customer onboarding. Dispute resolution. Identity verification. These areas become more stable when the data that supports them is consistent and traceable.

The rise of deepfakes makes this stability essential. False personas are cheap to create and increasingly convincing. They will exploit gaps wherever they exist. Strong tools matter, but so does the quality of the data that flows into those tools. Without clean and verified data, even advanced detection systems struggle.

The DPDP rules arrive at a moment when enterprises need stronger foundations. By demanding better intake discipline and clearer data pathways, they reduce the natural openings that deceptive synthetic content relies on. In a world where authentic and synthetic individuals now compete for space inside enterprise systems, this shift may become one of the most practical outcomes of the entire compliance effort.

(This article reflects the author’s analysis and personal viewpoints and is intended for informational purposes only. It should not be construed as legal or regulatory advice.)