French Police Raid X Offices as Grok Investigations Grow

3 February 2026 at 16:25

French police raided the offices of the social media platform X today, as European investigations widened into nonconsensual sexual deepfakes and potential child sexual abuse material (CSAM) generated by X’s Grok AI chatbot. A statement (in French) from the Paris prosecutor’s office suggested that Grok’s dissemination of Holocaust denial content may also be at issue in the Grok investigations. X owner Elon Musk and former CEO Linda Yaccarino were issued “summonses for voluntary interviews” on April 20, along with X employees the same week.

Europol, which is assisting in the investigation, said in a statement that the investigation is “in relation to the proliferation of illegal content, notably the production of deepfakes, child sexual abuse material, and content contesting crimes against humanity. ... The investigation concerns a range of suspected criminal offences linked to the functioning and use of the platform, including the dissemination of illegal content and other forms of online criminal activity.”

The French action comes amid a growing UK probe into Grok’s use of nonconsensual sexual imagery, and last month the EU launched its own investigation into the allegations. Meanwhile, a new Reuters report suggests that X’s attempts to curb Grok’s abuses are failing. “While Grok’s public X account is no longer producing the same flood of sexualized imagery, the Grok chatbot continues to do so when prompted, even after being warned that the subjects were vulnerable or would be humiliated by the pictures,” Reuters wrote in a report published today.

French Prosecutor Calls X Investigation ‘Constructive’

The French prosecutor’s statement said the investigation “is, at this stage, part of a constructive approach, with the objective of ultimately guaranteeing the X platform's compliance with French laws, insofar as it operates in French territory” (translated from the French). The investigation began in January 2025, the statement said, and “was broadened following other reports denouncing the functioning of Grok on the X platform, which led to the dissemination of Holocaust denial content and sexually explicit deepfakes.” The investigation concerns seven “criminal offenses,” according to the Paris prosecutor’s statement:
  • Complicity in the possession of images of minors of a child pornography nature
  • Complicity in the dissemination, offering, or making available of images of minors of a child pornography nature by an organized group
  • Violation of the right to one’s image (sexual deepfakes)
  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Tampering with the operation of an automated data processing system by an organized group
  • Administration of an illicit online platform by an organized group
The Paris prosecutor’s office deleted its X account after announcing the investigation.

Grok Investigations in the UK Grow

In the UK, the Information Commissioner’s Office (ICO) announced that it was launching an investigation into Grok abuses on the same day that Ofcom, the UK communications regulator, said its own authority to investigate chatbots may be limited.

William Malcolm, the ICO's Executive Director for Regulatory Risk & Innovation, said in a statement: “The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this.”

“Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people’s data rights,” Malcolm added. “Where we find obligations have not been met, we will take action to protect the public.”

Ilia Kolochenko, CEO at ImmuniWeb and a cybersecurity law attorney, said in a statement: “The patience of regulators is not infinite: similar investigations are already pending even in California, let alone the EU. Moreover, some countries have already temporarily restricted or threatened to restrict access to X’s AI chatbot and more bans are probably coming very soon.”

“Hopefully X will take these alarming signals seriously and urgently implement the necessary security guardrails to prevent misuse and abuse of its AI technology,” Kolochenko added. “Otherwise, X may simply disappear as a company under the snowballing pressure from the authorities and a looming avalanche of individual lawsuits.”

European Commission Launches Fresh DSA Investigation Into X Over Grok AI Risks

27 January 2026 at 01:11

The European Commission has launched a new formal investigation into X under the Digital Services Act (DSA), intensifying regulatory scrutiny over the platform’s use of its AI chatbot, Grok. Announced on January 26, the move follows mounting concerns that Grok AI’s image-generation and recommender functionalities may have exposed users in the EU to illegal and harmful content, including manipulated sexually explicit images and material that could amount to child sexual abuse material (CSAM). This latest European Commission investigation into X runs in parallel with an extension of an ongoing probe first opened in December 2023. The Commission will now examine whether X properly assessed and mitigated the systemic risks associated with integrating Grok’s functionalities into its platform in the EU, as required under the DSA.

Focus on Grok AI and Illegal Content Risks

At the core of the new proceedings is whether X fulfilled its obligations to assess and reduce risks stemming from Grok AI. The Commission said the risks appear to have already materialised, exposing EU citizens to serious harm. Regulators will investigate whether X:
  • Diligently assessed and mitigated systemic risks, including the dissemination of illegal content, negative effects related to gender-based violence, and serious consequences for users’ physical and mental well-being.
  • Conducted and submitted an ad hoc risk assessment report to the Commission for Grok’s functionalities before deploying them, given their critical impact on X’s overall risk profile.
If proven, these failures would constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) of the Digital Services Act. The Commission stressed that the opening of formal proceedings does not prejudge the outcome but confirmed that an in-depth investigation will now proceed as a matter of priority.

Recommender Systems Also Under Expanded Scrutiny

In a related step, the European Commission has extended its December 2023 investigation into X’s recommender systems. This expanded review will assess whether X properly evaluated and mitigated all systemic risks linked to how its algorithms promote content, including the impact of its recently announced switch to a Grok-based recommender system. As a designated very large online platform (VLOP) under the DSA, X is legally required to identify, assess, and reduce systemic risks arising from its services in the EU. These risks include the spread of illegal content and threats to fundamental rights, particularly those affecting minors.

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, underlined the seriousness of the case in a statement: “Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens - including those of women and children - as collateral damage of its service.”

Earlier this month, a European Commission spokesperson had also addressed the issue while speaking to journalists in Brussels, calling the matter urgent and unacceptable. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” the spokesperson said, adding: “This is not ‘spicy’. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

International Pressure Builds Around Grok AI

The investigation comes against a backdrop of rising regulatory pressure worldwide over Grok AI’s image-generation capabilities. On January 16, X announced changes to Grok aimed at preventing the creation of nonconsensual sexualised images, including content that critics say amounts to CSAM. The update followed weeks of scrutiny and reports of explicit material generated using Grok. In the United States, California Attorney General Rob Bonta confirmed on January 14 that his office had opened an investigation into xAI, the company behind Grok, over reports describing the depiction of women and children in explicit situations. Bonta called the reports “shocking” and urged immediate action, saying his office is examining whether the company may have violated the law. U.S. lawmakers have also stepped in. On January 12, three senators urged Apple and Google to remove X and Grok from their app stores, arguing that the chatbot had repeatedly violated app store policies related to abusive and exploitative content.

Next Steps in the European Commission Investigation Into X

As part of the Digital Services Act (DSA) enforcement process, the Commission will continue gathering evidence by sending additional requests for information, conducting interviews, or carrying out inspections. Interim measures could be imposed if X fails to make meaningful adjustments to its service. The Commission is also empowered to adopt a non-compliance decision or accept commitments from X to remedy the issues under investigation. Notably, the opening of formal proceedings shifts enforcement authority to the Commission, relieving national Digital Services Coordinators of their supervisory powers for the suspected infringements. The investigation complements earlier DSA proceedings that resulted in a €120 million fine against X in December 2025 for deceptive design, lack of advertising transparency, and insufficient data access for researchers. With Grok AI now firmly in regulators’ sights, the outcome of this probe could have major implications for how AI-driven features are governed on large online platforms across the EU.

Grok Image Abuse Prompts X to Roll Out New Safety Limits

16 January 2026 at 02:32

Elon Musk’s social media platform X has announced a series of changes to its AI chatbot Grok, aiming to prevent the creation of nonconsensual sexualized images, including content that critics and authorities say amounts to child sexual abuse material (CSAM). The announcement was made Wednesday via X’s official Safety account, following weeks of growing scrutiny over Grok AI’s image-generation capabilities and reports of nonconsensual sexualized content.

X Reiterates Zero Tolerance Policy on CSAM and Nonconsensual Content

In its statement, X emphasized that it maintains “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.” The platform said it continues to remove high-priority violative content, including CSAM, and to take enforcement action against accounts that violate X’s rules. Where required, accounts seeking child sexual exploitation material are reported to law enforcement authorities. The company acknowledged that the rapid evolution of generative AI presents industry-wide challenges and said it is actively working with users, partners, governing bodies, and other platforms to respond more quickly as new risks emerge.

Grok AI Image Generation Restrictions Expanded

As part of the update, X said it has implemented technological measures to restrict Grok AI from editing images of real people into revealing clothing, such as bikinis. These restrictions apply globally and affect all users, including paid subscribers. In a further change, image creation and image editing through the @Grok account are now limited to paid subscribers worldwide. X said this step adds an additional layer of accountability by helping ensure that users who attempt to abuse Grok in violation of laws or platform policies can be identified. X also confirmed the introduction of geoblocking measures in certain jurisdictions. In regions where such content is illegal, users will no longer be able to generate images of real people in bikinis, underwear, or similar attire using Grok AI. Similar geoblocking controls are being rolled out for the standalone Grok app by xAI.
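X has not disclosed how these restrictions are implemented. Purely as a rough illustration, the Python sketch below shows one way a request-time policy check could combine an always-blocked category list, per-jurisdiction geoblocks, and the paid-subscriber requirement; the table contents, category names, and the is_generation_allowed helper are all hypothetical, not X's actual code.

```python
# Illustrative sketch of region-based gating for an image-generation request.
# The policy table, category names, and helper are hypothetical; X has not
# published how its restrictions are implemented.

ALWAYS_BLOCKED = {"csam", "nonconsensual_nudity"}  # refused everywhere

# Jurisdictions where generating images of real people in revealing attire
# is illegal map to the categories that must be refused there.
GEOBLOCKED_CATEGORIES = {
    "GB": {"real_person_revealing_attire"},
    "FR": {"real_person_revealing_attire"},
}


def is_generation_allowed(country_code: str, category: str,
                          is_paid_subscriber: bool) -> bool:
    """Return True if this (simplified) policy lets the request proceed."""
    if category in ALWAYS_BLOCKED:
        return False
    if category in GEOBLOCKED_CATEGORIES.get(country_code, set()):
        return False  # geoblocked in the requester's jurisdiction
    # Image creation/editing via the @Grok account is limited to paid
    # subscribers worldwide, per X's announcement.
    return is_paid_subscriber
```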

Announcement Follows Widespread Abuse Reports

The update comes amid a growing scandal involving Grok AI, after thousands of users were reported to have generated sexualized images of women and children using the tool. Numerous reports documented how users took publicly available images and used Grok to depict individuals in explicit or suggestive scenarios without their consent. Particular concern has centered on a feature known as “Spicy Mode,” which xAI developed as part of Grok’s image-generation system and promoted as a differentiator. Critics say the feature enabled large-scale abuse and contributed to the spread of nonconsensual intimate imagery. According to one analysis cited in media reports, more than half of the approximately 20,000 images generated by Grok over a recent holiday period depicted people in minimal clothing, with some images appearing to involve children.

U.S. and European Authorities Escalate Scrutiny

On January 14, 2026, ahead of X’s announcement, California Attorney General Rob Bonta confirmed that his office had opened an investigation into xAI over the proliferation of nonconsensual sexually explicit material produced using Grok. In a statement, Bonta said reports describing the depiction of women and children in explicit situations were “shocking” and urged xAI to take immediate action. His office is examining whether and how xAI may have violated the law. Regulatory pressure has also intensified internationally. The European Commission confirmed earlier this month that it is examining Grok’s image-generation capabilities, particularly the creation of sexually explicit images involving minors. European officials have signaled that enforcement action is being considered.

App Store Pressure Adds to Challenges

On January 12, 2026, three U.S. senators urged Apple and Google to remove X and Grok from their app stores, arguing that Grok AI has repeatedly violated app store policies related to abusive and exploitative content. The lawmakers warned that app distribution platforms may also bear responsibility if such content continues.

Ongoing Oversight and Industry Implications

X said the latest changes do not alter its existing safety rules, which apply to all AI prompts and generated content, regardless of whether users are free or paid subscribers. The platform stated that its safety teams are working continuously to add safeguards, remove illegal content, suspend accounts where appropriate, and cooperate with authorities. As investigations continue across multiple jurisdictions, the Grok controversy is becoming a defining case in the broader debate over AI safety, accountability, and the protection of children and vulnerable individuals in the age of generative AI.

Attackers Targeting LLMs in Widespread Campaign

12 January 2026 at 15:20

Threat actors are targeting LLMs in a widespread reconnaissance campaign that could be the first step in cyberattacks on exposed AI models, according to security researchers. The attackers scanned for every major large language model (LLM) family, including OpenAI-compatible and Google Gemini API formats, looking for “misconfigured proxy servers that might leak access to commercial APIs,” according to research from GreyNoise, whose honeypots picked up roughly 80,000 enumeration requests from the threat actors. “Threat actors don't map infrastructure at this scale without plans to use that map,” the researchers said. “If you're running exposed LLM endpoints, you're likely already on someone's list.”
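For operators wondering whether their own deployments are on that map, a simple self-check is to see whether an OpenAI-compatible endpoint answers unauthenticated requests. The sketch below, with a placeholder URL, queries the standard /v1/models listing route; a proxy that returns model names without credentials is exactly the kind of leak the scanners were hunting for.

```python
# Self-check: does your OpenAI-compatible proxy answer unauthenticated
# requests? GET /v1/models is the standard model-listing route; a proxy that
# returns model names without credentials is the kind of leak being hunted.
import requests

ENDPOINT = "https://llm-proxy.example.internal"  # placeholder for your own host

resp = requests.get(f"{ENDPOINT}/v1/models", timeout=5)  # deliberately no API key
if resp.ok:
    models = [m.get("id") for m in resp.json().get("data", [])]
    print("EXPOSED: endpoint lists models without auth:", models)
else:
    print("Endpoint refused unauthenticated listing, HTTP", resp.status_code)
```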

LLM Reconnaissance Targets ‘Every Major Model Family’

The researchers said the threat actors were probing “every major model family,” including:
  • OpenAI (GPT-4o and variants)
  • Anthropic (Claude Sonnet, Opus, Haiku)
  • Meta (Llama 3.x)
  • DeepSeek (DeepSeek-R1)
  • Google (Gemini)
  • Mistral
  • Alibaba (Qwen)
  • xAI (Grok)
The campaign began on December 28, when two IPs “launched a methodical probe of 73+ LLM model endpoints,” the researchers said. In a span of 11 days, they generated 80,469 sessions, “systematic reconnaissance hunting for misconfigured proxy servers that might leak access to commercial APIs.” Test queries were “deliberately innocuous with the likely goal to fingerprint which model actually responds without triggering security alerts.”

(Image: test queries used by the attackers targeting LLMs, via GreyNoise.)

The two IPs behind the reconnaissance campaign were 45.88.186.70 (AS210558, 1337 Services GmbH) and 204.76.203.125 (AS51396, Pfcloud UG). GreyNoise said both IPs have “histories of CVE exploitation,” including attacks on the “React2Shell” vulnerability CVE-2025-55182, the TP-Link Archer vulnerability CVE-2023-1389, and more than 200 other vulnerabilities.

The researchers concluded that the campaign was a professional threat actor conducting reconnaissance to discover cyberattack targets. “The infrastructure overlap with established CVE scanning operations suggests this enumeration feeds into a larger exploitation pipeline,” the researchers said. “They're building target lists.”
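For defenders who want to exercise their own detection rules against this pattern, a minimal reproduction looks something like the sketch below: iterate candidate model names against an OpenAI-compatible chat endpoint with a deliberately innocuous question and record which names answer. The endpoint URL and model identifiers are illustrative; this is not GreyNoise's tooling, and it should only be pointed at infrastructure you control.

```python
# Reproduce the enumeration pattern against your OWN infrastructure to test
# detection rules. The model names and endpoint URL are illustrative.
import requests

ENDPOINT = "https://your-honeypot.example.internal/v1/chat/completions"
CANDIDATE_MODELS = [
    "gpt-4o", "claude-3-5-sonnet", "llama-3.1-70b", "deepseek-r1",
    "gemini-1.5-pro", "mistral-large", "qwen-max", "grok-2",
]
# Deliberately innocuous prompt, as in the observed campaign.
PROBE = "How many states are there in the United States?"

for model in CANDIDATE_MODELS:
    r = requests.post(
        ENDPOINT,
        json={"model": model, "messages": [{"role": "user", "content": PROBE}]},
        timeout=10,
    )
    # A 200 with a completion body means the proxy routes this model family.
    print(f"{model}: HTTP {r.status_code}")
```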

Second LLM Campaign Targets SSRF Vulnerabilities

The researchers also detected a second campaign targeting server-side request forgery (SSRF) vulnerabilities, which “force your server to make outbound connections to attacker-controlled infrastructure.” The attackers targeted the honeypot infrastructure’s model pull functionality by injecting malicious registry URLs to force servers to make HTTP requests to the attacker’s infrastructure, and they also targeted Twilio SMS webhook integrations by manipulating MediaUrl parameters to trigger outbound connections. The attackers used ProjectDiscovery's Out-of-band Application Security Testing (OAST) infrastructure to confirm successful SSRF exploitation through callback validation.

A single JA4H signature appeared in almost all of the attacks, “pointing to shared automation tooling—likely Nuclei.” The 62 source IPs were spread across 27 countries, “but consistent fingerprints indicate VPS-based infrastructure, not a botnet.”

The researchers concluded that the second campaign was likely the work of security researchers or bug bounty hunters, but they added that “the scale and Christmas timing suggest grey-hat operations pushing boundaries.” The two campaigns, the researchers noted, “reveal how threat actors are systematically mapping the expanding surface area of AI deployments.”
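The mitigation the researchers recommend for the model-pull vector (covered in the next section) amounts to an allowlist check before the server ever fetches a registry URL. A minimal sketch follows; the allowlisted hosts are examples, not a vetted list.

```python
# Sketch of the registry-allowlist mitigation: refuse model pulls whose
# registry URL is not on a trusted list, so injected URLs cannot steer the
# server into SSRF callbacks. The allowlisted hosts are examples only.
from urllib.parse import urlparse

TRUSTED_REGISTRIES = {"registry.ollama.ai", "huggingface.co"}  # example hosts


def is_trusted_registry(url: str) -> bool:
    """Allow only https URLs whose host is exactly on the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False  # rejects http://, file://, gopher://, etc.
    return parsed.hostname in TRUSTED_REGISTRIES


assert is_trusted_registry("https://registry.ollama.ai/library/llama3")
assert not is_trusted_registry("https://abc123.oast.example/model")  # OAST callback
assert not is_trusted_registry("http://169.254.169.254/latest/meta-data/")  # metadata SSRF
```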

LLM Security Recommendations

The researchers recommended that organizations take several steps:
  • Lock down model pulls “... to accept models only from trusted registries. Egress filtering prevents SSRF callbacks from reaching attacker infrastructure.”
  • Detect enumeration patterns and “alert on rapid-fire requests hitting multiple model endpoints” (see the sketch after this list), watching for fingerprinting queries such as “How many states are there in the United States?” and “How many letter r...”
  • Block OAST at DNS to “cut off the callback channel that confirms successful exploitation.”
  • Rate-limit suspicious ASNs, noting that AS152194, AS210558, and AS51396 “all appeared prominently in attack traffic.”
  • Monitor JA4 fingerprints.
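The “rapid-fire requests hitting multiple model endpoints” pattern lends itself to a simple sliding-window rule: flag any source IP that requests an unusually large number of distinct models within a short period. The sketch below is one minimal way to express that; the window and threshold values are illustrative and would need tuning against real traffic.

```python
# Minimal sliding-window enumeration detector: alert when one source IP
# requests many DISTINCT models within a short window. Thresholds are
# illustrative and should be tuned against real traffic.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
DISTINCT_MODEL_THRESHOLD = 10

_events: dict[str, deque] = defaultdict(deque)  # src_ip -> (timestamp, model)


def observe(src_ip: str, model: str, now: float | None = None) -> bool:
    """Record one request; return True if src_ip looks like an enumerator."""
    now = time.time() if now is None else now
    q = _events[src_ip]
    q.append((now, model))
    while q and now - q[0][0] > WINDOW_SECONDS:  # expire old events
        q.popleft()
    return len({m for _, m in q}) >= DISTINCT_MODEL_THRESHOLD
```

Feeding each incoming request's source IP and requested model name through observe() would, under these assumptions, flag a methodical probe of 73+ model endpoints well before it completes.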

After EU Probe, U.S. Senators Push Apple and Google to Review Grok AI

12 January 2026 at 02:01

Concerns surrounding Grok AI are escalating rapidly, with pressure now mounting in the United States after ongoing scrutiny in Europe. Three U.S. senators have urged Apple and Google to remove the X app and Grok AI from the Apple App Store and Google Play Store, citing the large-scale creation of nonconsensual sexualized images of real people, including children. The move comes as a direct follow-up to the European Commission’s investigation into Grok AI’s image-generation capabilities, marking a significant expansion of regulatory attention beyond the EU. While European regulators have openly weighed enforcement actions, U.S. authorities are now signaling that app distribution platforms may also bear responsibility.

U.S. Senators Cite App Store Policy Violations by Grok AI

In a letter dated January 9, 2026, Senators Ron Wyden, Ed Markey, and Ben Ray Luján formally asked Apple CEO Tim Cook and Google CEO Sundar Pichai to enforce their app store policies against X Corp. The lawmakers argue that Grok AI, which operates within the X app, has repeatedly violated rules governing abusive and exploitative content. According to the senators, users have leveraged Grok AI to generate nonconsensual sexualized images of women, depicting abuse, humiliation, torture, and even death. More alarmingly, the letter states that Grok AI has also been used to create sexualized images of children, content the senators described as both harmful and potentially illegal. The lawmakers emphasized that such activity directly conflicts with policies enforced by both the Apple App Store and Google Play Store, which prohibit content involving sexual exploitation, especially material involving minors.

Researchers Flag Potential Child Abuse Material Linked to Grok AI

The letter also references findings by independent researchers who identified an archive connected to Grok AI containing nearly 100 images flagged as potential child sexual abuse material. These images were reportedly generated over several months, raising questions about X Corp’s oversight and response mechanisms. The senators stated that X appeared fully aware of the issue, pointing to public reactions by Elon Musk, who acknowledged reports of Grok-generated images with emoji responses. In their view, this signaled a lack of seriousness in addressing the misuse of Grok AI.

Premium Restrictions Fail to Calm Controversy

In response to the backlash, X recently limited Grok AI’s image-generation feature to premium subscribers. However, the senators dismissed this move as inadequate. Sen. Wyden said the change merely placed a paywall around harmful behavior rather than stopping it, arguing that it allowed the production of abusive content to continue while generating revenue. The lawmakers stressed that restricting access does not absolve X of responsibility, particularly when nonconsensual sexualized images remain possible through the platform.

Pressure Mounts on Apple App Store and Google Play Store

The senators warned that allowing the X app and Grok AI to remain available on the Apple App Store and Google Play Store would undermine both companies’ claims that their platforms offer safer environments than alternative app distribution methods. They also pointed to recent instances where Apple and Google acted swiftly to remove other controversial apps under government pressure, arguing that similar urgency should apply in the case of Grok AI.

At minimum, the lawmakers said, temporary removal of the apps would be appropriate while a full investigation is conducted. They requested a written response from both companies by January 23, 2026, outlining how Grok AI and the X app are being assessed under existing policies. Apple and Google have not publicly commented on the letter, and X has yet to issue a formal response.

The latest development adds momentum to global scrutiny of Grok AI, reinforcing concerns already raised by the European Commission. Together, actions in the U.S. and Europe signal a broader shift toward holding AI platforms, and the app ecosystems that distribute them, accountable for how generative technologies are deployed and controlled at scale.