India Brings AI-Generated Content Under Formal Regulation with IT Rules Amendment

12 February 2026 at 04:28

AI-generated Content

The Central Government has formally brought AI-generated content within India’s regulatory framework for the first time. Through notification G.S.R. 120(E), issued by the Ministry of Electronics and Information Technology (MeitY) and signed by Joint Secretary Ajit Kumar, amendments were introduced to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The revised rules take effect from February 20, 2026. The move marks a significant shift in Indian cybersecurity and digital governance policy. While the Information Technology Act, 2000, has long addressed unlawful online conduct, these amendments explicitly define and regulate “synthetically generated information” (SGI), placing AI-generated content under structured compliance obligations.

What the Law Now Defines as “Synthetically Generated Information” 

The notification inserts new clauses into Rule 2 of the 2021 Rules. It defines “audio, visual or audio-visual information” broadly to include any audio, image, photograph, video, sound recording, or similar content created, generated, modified, or altered through a computer resource. More critically, clause (wa) defines “synthetically generated information” as content that is artificially or algorithmically created or altered in a manner that appears real, authentic, or true and depicts or portrays an individual or event in a way that is likely to be perceived as indistinguishable from a natural person or real-world occurrence. This definition clearly encompasses deepfake videos, AI-generated voiceovers, face-swapped images, and other forms of AI-generated content designed to simulate authenticity. The framing is deliberate: the concern is not merely digital alteration, but deceptive content that could reasonably be mistaken for reality. At the same time, the amendment carves out exceptions. Routine or good-faith editing, such as color correction, formatting, transcription, compression, accessibility improvements, translation, or technical enhancement, does not qualify as synthetically generated information, provided the underlying substance or meaning is not materially altered. Educational materials, draft templates, or conceptual illustrations also fall outside the SGI category unless they create a false document or false electronic record. This distinction attempts to balance innovation in information technology with protection against misuse.

New Duties for Intermediaries 

The amendments substantially revise Rule 3, expanding intermediary obligations. Platforms must inform users, at least once every three months and in English or any Eighth Schedule language, that non-compliance with platform rules or applicable laws may lead to suspension, termination, removal of content, or legal liability. Where violations relate to criminal offences, such as those under the Bharatiya Nagarik Suraksha Sanhita, 2023, or the Protection of Children from Sexual Offences Act, 2012, mandatory reporting requirements apply.  A new clause (ca) introduces additional obligations for intermediaries that enable or facilitate the creation or dissemination of synthetically generated information. These platforms must inform users that directing their services to create unlawful AI-generated content may attract penalties under laws including the Information Technology Act, the Bharatiya Nyaya Sanhita, 2023, the Representation of the People Act, 1951, the Indecent Representation of Women (Prohibition) Act, 1986, the Sexual Harassment of Women at Workplace Act, 2013, and the Immoral Traffic (Prevention) Act, 1956.  Consequences for violations may include immediate content removal, suspension or termination of accounts, disclosure of the violator’s identity to victims, and reporting to authorities where offences require mandatory reporting. The compliance timelines have also been tightened. Content removal in response to valid orders must now occur within three hours instead of thirty-six hours. Certain grievance response windows have been reduced from fifteen days to seven days, and some urgent compliance requirements now demand action within two hours. 

Due Diligence and Labelling Requirements for AI-generated Content 

A new Rule 3(3) imposes explicit due diligence obligations for AI-generated content. Intermediaries must deploy reasonable and appropriate technical measures, including automated tools, to prevent users from creating or disseminating synthetically generated information that violates the law.  This includes content containing child sexual abuse material, non-consensual intimate imagery, obscene or sexually explicit material, false electronic records, or content related to explosive materials or arms procurement. It also includes deceptive portrayals of real individuals or events intended to mislead.  For lawful AI-generated content that does not violate these prohibitions, the rules mandate prominent labelling. Visual content must carry clearly visible notices. Audio content must include a prefixed disclosure. Additionally, such content must be embedded with permanent metadata or other provenance mechanisms, including a unique identifier linking the content to the intermediary computer resource, where technically feasible. Platforms are expressly prohibited from enabling the suppression or removal of these labels or metadata. 
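The rules describe the outcome, not the mechanism, so implementations will vary. As a minimal sketch, assuming a PNG image workflow and the Pillow library, a platform could embed a machine-readable notice and a unique identifier into the file's metadata; the field names below are illustrative assumptions, not terms taken from the amendment.

# Minimal, illustrative sketch (not mandated by the IT Rules): embed a
# synthetic-content notice and a unique provenance identifier into PNG
# metadata using Pillow. Field names are assumptions for clarity.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, platform: str) -> str:
    """Attach a provenance record to an AI-generated image and return its ID."""
    provenance_id = str(uuid.uuid4())  # unique identifier linking content to the platform

    metadata = PngInfo()
    metadata.add_text("SyntheticContent", "true")
    metadata.add_text("ProvenanceID", provenance_id)
    metadata.add_text("GeneratedBy", platform)

    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=metadata)  # written as a PNG text chunk

    return provenance_id

# Hypothetical usage:
# pid = label_synthetic_image("generated.png", "generated_labelled.png", "example-platform")

Because plain metadata text chunks can be stripped by re-encoding, the "permanent metadata" requirement is more likely to be met in practice with signed provenance manifests (for example, C2PA-style content credentials) or robust watermarking; the sketch above only illustrates the shape of the obligation.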

Enhanced Obligations for Social Media Intermediaries 

Rule 4 introduces an additional compliance layer for significant social media intermediaries. Before allowing publication, these platforms must require users to declare whether content is synthetically generated. They must deploy technical measures to verify the accuracy of that declaration. If confirmed as AI-generated content, it must be clearly labelled before publication.  If a platform knowingly permits or fails to act on unlawful synthetically generated information, it may be deemed to have failed its due diligence obligations. The amendments also align terminology with India’s evolving criminal code, replacing references to the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023. 
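The rule describes a declare-then-verify flow rather than any particular technology. The sketch below shows one hypothetical way a platform could combine a user's declaration with an automated detector score before publication; the detector interface, the 0.8 threshold, and the label text are assumptions, not requirements from the amendment.

# Minimal sketch of the declare -> verify -> label flow described above.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Submission:
    content_id: str
    user_declared_synthetic: bool

def moderate(sub: Submission,
             detector: Callable[[str], float],
             threshold: float = 0.8) -> dict:
    """Decide how a submission should be labelled before publication."""
    score = detector(sub.content_id)  # estimated probability the content is synthetic
    is_synthetic = sub.user_declared_synthetic or score >= threshold
    return {
        "content_id": sub.content_id,
        "label": "AI-generated content" if is_synthetic else None,
        # Undeclared content that the detector flags is routed to human review
        # rather than being published unlabelled.
        "needs_review": (not sub.user_declared_synthetic) and score >= threshold,
    }

# Example with a dummy detector that scores everything as likely synthetic:
print(moderate(Submission("vid-001", user_declared_synthetic=False),
               detector=lambda _cid: 0.93))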

Implications for Indian Cybersecurity and Digital Platforms 

The February 2026 amendment reflects a decisive step in Indian cybersecurity policy. Rather than banning AI-generated content outright, the government has opted for traceability, transparency, and technical accountability. The focus is on preventing deception, protecting individuals from reputational harm, and ensuring rapid response to unlawful synthetic media. For platforms operating within India’s Information Technology ecosystem, compliance will require investment in automated detection systems, content labelling infrastructure, metadata embedding, and accelerated grievance redressal workflows. For users, the regulatory signal is clear: generating deceptive synthetic media is no longer merely unethical; it may trigger direct legal consequences. As AI tools continue to scale, the regulatory framework introduced through G.S.R. 120(E) marks India’s formal recognition that AI-generated content is not a fringe concern but a central governance challenge in the digital age. 

India Seeks Larger Role in Global AI and Deep Tech Development

12 February 2026 at 04:04

IndiaAI Mission

India’s technology ambitions are no longer limited to policy announcements; they are now translating into capital flows, institutional reforms, and global positioning. At the center of this transformation is the IndiaAI Mission, a flagship initiative that is reshaping AI in India while influencing private sector investment and deep tech growth across multiple domains. Information submitted in the Lok Sabha on February 11, 2026, by Minister of Electronics and IT Ashwini Vaishnaw outlines how government-backed reforms and funding mechanisms are strengthening India’s AI and space technology ecosystem. For global observers, the scale and coordination of these efforts signal a strategic push to position India as a long-term technology powerhouse.

IndiaAI Mission Lays Foundation for AI in India

Launched in March 2024 with an outlay of ₹10,372 crore, the IndiaAI Mission aims to build a comprehensive AI ecosystem. In less than two years, the initiative has delivered measurable progress. More than 38,000 GPUs have been onboarded to create a common compute facility accessible to startups and academic institutions at affordable rates. Twelve teams have been shortlisted to develop indigenous foundational models or Large Language Models (LLMs), while 30 applications have been approved to build India-specific AI solutions. Talent development remains central to the IndiaAI Mission. Over 8,000 undergraduate students, 5,000 postgraduate students, and 500 PhD scholars are currently being supported. Additionally, 27 India Data and AI Labs have been established, with 543 more identified for development. India’s AI ecosystem is also earning global recognition. The Stanford Global AI Vibrancy 2025 report ranks India third worldwide in AI competitiveness and ecosystem vibrancy. The country is also the second-largest contributor to GitHub AI projects—evidence of a strong developer community driving AI in India from the ground up.

Private Sector Investment in AI Gains Speed

Encouraged by the IndiaAI Mission and broader reforms, private sector investment in AI is rising steadily. According to the Stanford AI Index Report 2025, India’s cumulative private investment in AI between 2013 and 2024 reached approximately $11.1 billion. Recent announcements underscore this momentum. Google revealed plans to establish a major AI Hub in Visakhapatnam with an investment of around $15 billion—its largest commitment in India so far. Tata Group has also announced an $11 billion AI innovation city in Maharashtra. These developments suggest that AI in India is moving beyond research output toward large-scale commercial infrastructure. The upcoming India AI Impact Summit 2026, to be held in New Delhi, will further position India within the global AI debate. Notably, it will be the first time the global AI summit series takes place in the Global South, signaling a shift toward more inclusive technology governance.

Deep Tech Push Backed by RDI Fund and Policy Reforms

Beyond AI, the government is reinforcing the broader deep tech sector through funding and policy clarity. A ₹1 lakh crore Research, Development and Innovation (RDI) Fund under the Anusandhan National Research Foundation (ANRF) has been announced to support high-risk, high-impact projects. The National Deep Tech Startup Policy addresses long-standing challenges in funding access, intellectual property, infrastructure, and commercialization. Under Startup India, deep tech firms now enjoy extended eligibility periods and higher turnover thresholds for tax benefits and government support. These structural changes aim to strengthen India’s Gross Expenditure on Research and Development (GERD), currently at 0.64% of GDP. Encouragingly, India’s position in the Global Innovation Index has climbed from 81st in 2015 to 38th in 2025—an indicator that reforms are yielding measurable outcomes.

Space Sector Reforms Expand India’s Global Footprint

Parallel to AI in India, the government is also expanding its ambitions in space technology. The Indian Space Policy 2023 clearly defines the roles of ISRO, IN-SPACe, and private industry, opening the entire space value chain to commercial participation. IN-SPACe now operates as a single-window agency authorizing non-government space activities and facilitating access to ISRO’s infrastructure. A ₹1,000 crore venture capital fund and a ₹500 crore Technology Adoption Fund are supporting early-stage and scaling space startups. Foreign Direct Investment norms have been liberalized, permitting up to 100% FDI in satellite manufacturing and components. Through NewSpace India Limited (NSIL), the country is expanding its presence in the global commercial launch market, particularly for small and medium satellites. A collaboration between ISRO and the Department of Biotechnology in space biotechnology—including microgravity research and space bio-manufacturing—signals how interdisciplinary innovation is becoming a national priority.

A Strategic Inflection Point for AI in India

Taken together, the IndiaAI Mission, private sector investment in AI, deep tech reforms, and space sector liberalization form a coordinated architecture. This is not merely about technology adoption—it is about long-term capability building. For global readers, India’s approach offers an interesting case study: sustained public investment paired with regulatory clarity and private capital participation. While challenges such as research intensity and commercialization gaps remain, the trajectory is clear. The IndiaAI Mission has become more than a policy initiative; it is emerging as a structural driver of AI in India and a signal of the country’s broader technological ambitions in the decade ahead.

Spain to Ban Social Media Platforms for Kids as Global Trend Grows

4 February 2026 at 01:00

Spain Ban Social Media Platforms

Spain is preparing to take one of the strongest steps yet in Europe’s growing push to regulate the digital world for young people. Spain will ban social media platforms for children under the age of 16, a move Prime Minister Pedro Sanchez framed as necessary to protect minors from what he called the “digital Wild West.” This is not just another policy announcement. The decision reflects a wider global shift: governments are finally admitting that social media has become too powerful, too unregulated, and too harmful for children to navigate alone.

Spain to Ban Social Media Platforms for Children Under 16

Speaking at the World Government Summit in Dubai, Sanchez said Spain will require social media platforms to implement strict age verification systems, ensuring that children under 16 cannot access these services freely. “Social media has become a failed state,” Sanchez declared, arguing that laws are ignored and harmful behavior is tolerated online. The ban on social media platforms for children under 16 is being positioned as a child safety measure, but it is also a direct challenge to tech companies that have long avoided accountability. Sanchez’s language was blunt, and honestly, refreshing. For years, platforms have marketed themselves as neutral spaces while profiting from algorithms that amplify outrage, addictive scrolling, and harmful content. Spain’s message is clear: enough is enough.

Social Media Ban and Executive Accountability

Spain is not stopping at age limits. Sanchez also announced a new bill expected next week that would hold social media executives personally accountable for illegal and hateful content. That is a significant escalation. A social media ban alone may restrict access, but forcing executives to face consequences could change platform behavior at its core. The era of tech leaders hiding behind “we’re just a platform” excuses may finally be coming to an end. This makes Spain’s approach one of the most aggressive in Europe so far.

France Joins the Global Social Media Ban Movement

Spain is not acting in isolation. On February 3, 2026, French lawmakers approved their own social media ban for children under 15. The bill passed by a wide margin in the National Assembly and is expected to take effect in September, at the start of the next school year. French President Emmanuel Macron strongly backed the move, saying: “Our children’s brains are not for sale… Their dreams must not be dictated by algorithms.” That statement captures the heart of this debate. Social media is not just entertainment anymore. It is an attention economy designed to hook young minds early, shaping behavior, self-image, and even mental health. France’s decision adds momentum to the idea that a global social media ban for children may soon become the norm rather than the exception.

Australia’s World-First Social Media Ban for Children Under 16

The strongest example so far comes from Australia, which implemented a world-first social media ban for children under 16 in December 2025. The ban covered major platforms including:
  • Facebook
  • Instagram
  • TikTok
  • Snapchat
  • Reddit
  • X
  • YouTube
  • Twitch
Messaging apps like WhatsApp were exempt, acknowledging that communication tools are different from algorithm-driven feeds. Since enforcement began, companies have revoked access to around 4.7 million accounts linked to children. Meta alone removed nearly 550,000 accounts the day after the ban took effect. Australia’s case shows that enforcement is possible, even at scale, through ID checks, third-party age estimation tools, and data inference. Yes, some children try to bypass restrictions. But the broader impact is undeniable: governments can intervene when platforms fail to self-regulate.

UK Exploring Similar Social Media Ban Measures

The United Kingdom is now considering its own restrictions. Prime Minister Keir Starmer recently said the government is exploring a social media ban for children aged 15 and under, alongside stricter age verification and limits on addictive features. The UK’s discussion highlights another truth: this is no longer just about content moderation. It’s about the mental wellbeing of an entire generation growing up inside algorithmic systems.

Is a Global Social Media Ban for Children the Future?

Spain’s move, combined with France, Australia, and the UK, signals a clear global trend. For years, social media companies promised safety tools, parental controls, and community guidelines. Yet harmful content, cyberbullying, predatory behavior, and addictive design have continued to spread. The reality is uncomfortable: platforms were never built with children in mind. They were built for engagement, profit, and data. A global social media ban for children may not be perfect, but it is becoming a political and social necessity. Spain’s decision to ban social media platforms for children under the age of 16 is not just about restricting access. It is about redefining digital childhood, reclaiming accountability, and admitting that the online world cannot remain lawless. The digital Wild West era may finally be ending.

France Approves Social Media Ban for Children Under 15 Amid Global Trend

3 February 2026 at 04:13

social media ban for children France

French lawmakers have approved a social media ban for children under 15, a move aimed at protecting young people from harmful online content. The bill, which also restricts mobile phone use in high schools, was passed by a 130-21 vote in the National Assembly and is expected to take effect at the start of the next school year in September. French President Emmanuel Macron has called for the legislation to be fast-tracked, and it will now be reviewed by the Senate. “Banning social media for those under 15: this is what scientists recommend, and this is what the French people are overwhelmingly calling for,” Macron said. “Our children’s brains are not for sale — neither to American platforms nor to Chinese networks. Their dreams must not be dictated by algorithms.”

Why France Introduced a Social Media Ban for Children

The new social media ban for children in France is part of a broader effort to address the negative effects of excessive screen time and harmful content. Studies show that one in two French teenagers spends between two and five hours daily on smartphones, with 58% of children aged 12 to 17 actively using social networks. Health experts warn that prolonged social media use can lead to reduced self-esteem, exposure to risky behaviors such as self-harm or substance abuse, and mental health challenges. Some families in France have even taken legal action against platforms like TikTok over teen suicides allegedly linked to harmful online content. The French legislation carefully exempts educational resources, online encyclopedias, and platforms for open-source software, ensuring children can still access learning and development tools safely.

Lessons From Australia’s Social Media Ban for Children

France’s move mirrors global trends. In December 2025, Australia implemented a social media ban for children under 16, covering major platforms including Facebook, Instagram, TikTok, Snapchat, Reddit, Threads, X, YouTube, and Twitch. Messaging apps like WhatsApp were exempt. Since the ban, social media companies have revoked access to about 4.7 million accounts identified as belonging to children. Meta alone removed nearly 550,000 accounts the day after the ban took effect. Australian officials said the measures restore children’s online safety and prevent predatory social media practices. Platforms comply with the ban through age verification methods such as ID checks, third-party age estimation technologies, or inference from existing account data. While some children attempted to bypass restrictions, the ban is considered a significant step in protecting children online.

UK Considers Following France and Australia

The UK is also exploring similar measures. Prime Minister Keir Starmer recently said the government is considering a social media ban for children aged 15 and under, along with stricter age verification, phone curfews, and restrictions on addictive platform features. The UK’s move comes amid growing concern about the mental wellbeing and safety of children online.

Global Shift Toward Child Cyber Safety

The introduction of a social media ban for children in France, alongside Australia’s implementation and the UK’s proposal, highlights a global trend toward protecting minors in the digital age. These measures aim to balance access to educational and creative tools while shielding children from online harm and excessive screen time. As more countries consider social media regulations for minors, the focus is clear: ensuring cyber safety, supporting mental health, and giving children the chance to enjoy a safe and healthy online experience.

UK Turns to Australia Model as British Government Considers Social Media Ban for Children

21 January 2026 at 01:13

social media ban for children

Just weeks after Australia rolled out the world’s first nationwide social media ban for children under 16, the British government has signaled it may follow a similar path. On Monday, Prime Minister Keir Starmer said the UK is considering a social media ban for children aged 15 and under, warning that “no option is off the table” as ministers confront growing concerns about young people’s online wellbeing. The move places the British government’s proposed social media ban at the center of a broader national debate about the role of technology in childhood. Officials said they are studying a wide range of measures, including tougher age checks, phone curfews, restrictions on addictive platform features, and potentially raising the digital age of consent.

UK Explores Stricter Limits on Social Media Ban for Children

In a Substack post on Tuesday, Starmer said that for many children, social media has become “a world of endless scrolling, anxiety and comparison.” “Being a child should not be about constant judgement from strangers or the pressure to perform for likes,” he wrote. Alongside the possible ban, the government has launched a formal consultation on children’s use of technology. The review will examine whether a social media ban for children would be effective and, if introduced, how it could be enforced. Ministers will also look at improving age assurance technology and limiting design features such as “infinite scrolling” and “streaks,” which officials say encourage compulsive use. The consultation will be backed by a nationwide conversation with parents, young people, and civil society groups. The government said it would respond to the consultation in the summer.

Learning from Australia’s Unprecedented Move

British ministers are set to visit Australia to “learn first-hand from their approach,” referencing Canberra’s decision to ban social media for children under 16. The Australian law, which took effect on December 10, requires platforms such as Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads, and YouTube to block underage users or face fines of up to AU$32 million. Prime Minister Anthony Albanese made clear why his government acted. “Social media is doing harm to our kids, and I’m calling time on it,” he said. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.” Parents and children are not penalized under the Australian rules; enforcement targets technology companies. Early figures suggest significant impact. Australia’s eSafety Commissioner Julie Inman-Grant said 4.7 million social media accounts were deactivated in the first week of the policy. To put that in context, there are about 2.5 million Australians aged eight to 15. “This is exactly what we hoped for and expected: early wins through focused deactivations,” she said, adding that “absolute perfection is not a realistic goal,” but the law aims to delay exposure, reduce harm, and set a clear social norm.

UK Consultation and School Phone Bans

The UK’s proposals go beyond a possible social media ban. The government said it will examine raising the digital age of consent, introducing phone curfews, and restricting addictive platform features. It also announced tougher guidance for schools, making it clear that pupils should not have access to mobile phones during lessons, breaks, or lunch. Ofsted inspectors will now check whether mobile phone bans are properly enforced during school inspections. Schools struggling to implement bans will receive one-to-one support from Attendance and Behaviour Hub schools. Although nearly all UK schools already have phone policies—99.9% of primary schools and 90% of secondary schools—58% of secondary pupils reported phones being used without permission in some lessons. Education Secretary Bridget Phillipson said: “Mobile phones have no place in schools. No ifs, no buts.”

Building on Existing Online Safety Laws

Technology Secretary Liz Kendall said the government is prepared to take further action beyond the Online Safety Act. “These laws were never meant to be the end point, and we know parents still have serious concerns,” she said. “We are determined to ensure technology enriches children’s lives, not harms them.” The Online Safety Act has already introduced age checks for adult sites and strengthened rules around harmful content. The government said the share of children encountering age checks online has risen from 30% to 47%, and 58% of parents believe the measures are improving safety. The proposed British government social media ban would build on this framework, focusing on features that drive excessive use regardless of content. Officials said evidence from around the world will be examined as they consider whether a UK-wide social media ban for children could work in practice. As Australia’s experience begins to unfold, the UK is positioning itself to decide whether similar restrictions could reshape how children engage with digital platforms. The consultation marks the start of what ministers describe as a long-term effort to ensure young people develop a healthier relationship with technology.

Grok Image Abuse Prompts X to Roll Out New Safety Limits

16 January 2026 at 02:32

Grok AI Image Abuse

Elon Musk’s social media platform X has announced a series of changes to its AI chatbot Grok, aiming to prevent the creation of nonconsensual sexualized images, including content that critics and authorities say amounts to child sexual abuse material (CSAM). The announcement was made Wednesday via X’s official Safety account, following weeks of growing scrutiny over Grok AI’s image-generation capabilities and reports of nonconsensual sexualized content.

X Reiterates Zero Tolerance Policy on CSAM and Nonconsensual Content

In its statement, X emphasized that it maintains “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.” The platform said it continues to remove high-priority violative content, including CSAM, and to take enforcement action against accounts that violate X’s rules. Where required, accounts seeking child sexual exploitation material are reported to law enforcement authorities. The company acknowledged that the rapid evolution of generative AI presents industry-wide challenges and said it is actively working with users, partners, governing bodies, and other platforms to respond more quickly as new risks emerge.

Grok AI Image Generation Restrictions Expanded

As part of the update, X said it has implemented technological measures to restrict Grok AI from editing images of real people into revealing clothing, such as bikinis. These restrictions apply globally and affect all users, including paid subscribers. In a further change, image creation and image editing through the @Grok account are now limited to paid subscribers worldwide. X said this step adds an additional layer of accountability by helping ensure that users who attempt to abuse Grok in violation of laws or platform policies can be identified. X also confirmed the introduction of geoblocking measures in certain jurisdictions. In regions where such content is illegal, users will no longer be able to generate images of real people in bikinis, underwear, or similar attire using Grok AI. Similar geoblocking controls are being rolled out for the standalone Grok app by xAI.

Announcement Follows Widespread Abuse Reports

The update comes amid a growing scandal involving Grok AI, after thousands of users were reported to have generated sexualized images of women and children using the tool. Numerous reports documented how users took publicly available images and used Grok to depict individuals in explicit or suggestive scenarios without their consent. Particular concern has centered on a feature known as “Spicy Mode,” which xAI developed as part of Grok’s image-generation system and promoted as a differentiator. Critics say the feature enabled large-scale abuse and contributed to the spread of nonconsensual intimate imagery. According to one analysis cited in media reports, more than half of the approximately 20,000 images generated by Grok over a recent holiday period depicted people in minimal clothing, with some images appearing to involve children.

U.S. and European Authorities Escalate Scrutiny

On January 14, 2026, ahead of X’s announcement, California Attorney General Rob Bonta confirmed that his office had opened an investigation into xAI over the proliferation of nonconsensual sexually explicit material produced using Grok. In a statement, Bonta said reports describing the depiction of women and children in explicit situations were “shocking” and urged xAI to take immediate action. His office is examining whether and how xAI may have violated the law. Regulatory pressure has also intensified internationally. The European Commission confirmed earlier this month that it is examining Grok’s image-generation capabilities, particularly the creation of sexually explicit images involving minors. European officials have signaled that enforcement action is being considered.

App Store Pressure Adds to Challenges

On January 12, 2026, three U.S. senators urged Apple and Google to remove X and Grok from their app stores, arguing that Grok AI has repeatedly violated app store policies related to abusive and exploitative content. The lawmakers warned that app distribution platforms may also bear responsibility if such content continues.

Ongoing Oversight and Industry Implications

X said the latest changes do not alter its existing safety rules, which apply to all AI prompts and generated content, regardless of whether users are free or paid subscribers. The platform stated that its safety teams are working continuously to add safeguards, remove illegal content, suspend accounts where appropriate, and cooperate with authorities. As investigations continue across multiple jurisdictions, the Grok controversy is becoming a defining case in the broader debate over AI safety, accountability, and the protection of children and vulnerable individuals in the age of generative AI.

After EU Probe, U.S. Senators Push Apple and Google to Review Grok AI

12 January 2026 at 02:01

U.S. Senators Push Apple and Google to Review Grok AI

Concerns surrounding Grok AI are escalating rapidly, with pressure now mounting in the United States after ongoing scrutiny in Europe. Three U.S. senators have urged Apple and Google to remove the X app and Grok AI from the Apple App Store and Google Play Store, citing the large-scale creation of nonconsensual sexualized images of real people, including children. The move comes as a direct follow-up to the European Commission’s investigation into Grok AI’s image-generation capabilities, marking a significant expansion of regulatory attention beyond the EU. While European regulators have openly weighed enforcement actions, U.S. authorities are now signaling that app distribution platforms may also bear responsibility.

U.S. Senators Cite App Store Policy Violations by Grok AI

In a letter dated January 9, 2026, Senators Ron Wyden, Ed Markey, and Ben Ray Luján formally asked Apple CEO Tim Cook and Google CEO Sundar Pichai to enforce their app store policies against X Corp. The lawmakers argue that Grok AI, which operates within the X app, has repeatedly violated rules governing abusive and exploitative content. According to the senators, users have leveraged Grok AI to generate nonconsensual sexualized images of women, depicting abuse, humiliation, torture, and even death. More alarmingly, the letter states that Grok AI has also been used to create sexualized images of children, content the senators described as both harmful and potentially illegal. The lawmakers emphasized that such activity directly conflicts with policies enforced by both the Apple App Store and Google Play Store, which prohibit content involving sexual exploitation, especially material involving minors.

Researchers Flag Potential Child Abuse Material Linked to Grok AI

The letter also references findings by independent researchers who identified an archive connected to Grok AI containing nearly 100 images flagged as potential child sexual abuse material. These images were reportedly generated over several months, raising questions about X Corp’s oversight and response mechanisms. The senators stated that X appeared fully aware of the issue, pointing to public reactions by Elon Musk, who acknowledged reports of Grok-generated images with emoji responses. In their view, this signaled a lack of seriousness in addressing the misuse of Grok AI.

Premium Restrictions Fail to Calm Controversy

In response to the backlash, X recently limited Grok AI’s image-generation feature to premium subscribers. However, the senators dismissed this move as inadequate. Sen. Wyden said the change merely placed a paywall around harmful behavior rather than stopping it, arguing that it allowed the production of abusive content to continue while generating revenue. The lawmakers stressed that restricting access does not absolve X of responsibility, particularly when nonconsensual sexualized images remain possible through the platform.

Pressure Mounts on Apple App Store and Google Play Store

The senators warned that allowing the X app and Grok AI to remain available on the Apple App Store and Google Play Store would undermine both companies’ claims that their platforms offer safer environments than alternative app distribution methods. They also pointed to recent instances where Apple and Google acted swiftly to remove other controversial apps under government pressure, arguing that similar urgency should apply in the case of Grok AI. At minimum, the lawmakers said, temporary removal of the apps would be appropriate while a full investigation is conducted. They requested a written response from both companies by January 23, 2026, outlining how Grok AI and the X app are being assessed under existing policies. Apple and Google have not publicly commented on the letter, and X has yet to issue a formal response. The latest development adds momentum to global scrutiny of Grok AI, reinforcing concerns already raised by the European Commission. Together, actions in the U.S. and Europe signal a broader shift toward holding AI platforms, and the app ecosystems that distribute them, accountable for how generative technologies are deployed and controlled at scale.

UK Moves to Close Public Sector Cyber Gaps With Government Cyber Action Plan

Government Cyber Action Plan

The UK government has revealed the Government Cyber Action Plan as a renewed effort to close the growing gap between escalating cyber threats and the public sector’s ability to respond effectively. The move comes amid a series of cyberattacks targeting UK retail and manufacturing sectors, incidents that have underscored broader vulnerabilities affecting critical services and government operations. Designed to strengthen UK cyber resilience, the plan reflects a shift from fragmented cyber initiatives to a more coordinated, accountable, and outcomes-driven approach across government departments.

A Growing Gap Between Threats and Defences

Recent cyber incidents have highlighted a persistent challenge: while threats to public services continue to grow in scale and sophistication, defensive capabilities have not kept pace. Reviews conducted by the Department for Science, Innovation and Technology (DSIT) revealed that cyber and digital resilience across the public sector was significantly lower than previously assessed. This assessment was reinforced by the National Audit Office’s report on government cyber resilience, which warned that without urgent improvements, the government risks serious incidents and operational disruption. The report concluded that the public sector must “catch up with the acute cyber threat it faces” to protect services and ensure value for money.

Building on Existing Foundations

The Government Cyber Action Plan builds on earlier collaborative efforts between DSIT, the National Cyber Security Centre (NCSC), and the Cabinet Office. Notable achievements to date include the establishment of the Government Cyber Coordination Centre (GC3), created to manage cross-government incident response, and the rollout of GovAssure, a scheme designed to assess the security of government-critical systems. Despite these initiatives, officials acknowledged that structural issues, inconsistent governance, and limited accountability continued to hinder effective cyber risk management. GCAP is intended to address these gaps directly.

Five Delivery Strands of the Government Cyber Action Plan

At the core of the Government Cyber Action Plan are five delivery strands aimed at strengthening accountability and improving operational resilience across departments. The first strand focuses on accountability, placing clearer responsibility for cyber risk management on accounting officers, senior leaders, Chief Digital and Information Officers (CDIOs), and Chief Information Security Officers (CISOs). The second strand emphasises support, providing departments with access to shared cyber expertise and the rapid deployment of technical teams during high-risk situations. Under the services strand, GCAP promotes the development of secure digital solutions that can be built once and used across multiple departments. This approach is intended to reduce duplication, improve consistency, and address capability gaps through innovation, including initiatives such as the NCSC’s ACD 2.0 programme. Response is another key focus, with the introduction of the Government Cyber Incident Response Plan (G-CIRP). This framework formalises how departments report and respond to cyber incidents, improving coordination during national-level events. The final strand addresses skills, aiming to attract, develop, and retain cyber professionals across government. Central to this effort is the creation of a Government Cyber Security Profession—the first dedicated government profession focused specifically on cyber security and resilience.

Role of the NCSC and Long-Term Impact

The NCSC will play a central role across all five strands of the Government Cyber Action Plan, from supporting departments during incidents to helping design services that improve resilience. This approach aligns with the NCSC’s existing work with critical national infrastructure and public sector organisations, offering technical guidance, assurance, and incident response support. While GCAP’s implementation will be phased through to 2029 and beyond, officials say the framework is expected to deliver measurable improvements even in its first year. These include stronger risk management practices and faster coordination during cyber incidents. According to Johnny McManus, Deputy Director for Government Cyber Resilience at the NCSC, the combination of DSIT’s delivery leadership and the NCSC’s technical authority provides a foundation for transforming UK cyber resilience across the public sector.

Beyond Compliance: How India’s DPDP Act Is Reshaping the Cyber Insurance Landscape

19 December 2025 at 00:38

DPDP Act Is Reshaping the Cyber Insurance Landscape

By Gauravdeep Singh, Head – State e-Mission Team (SeMT), Ministry of Electronics and Information Technology

The Digital Personal Data Protection (DPDP) Act has fundamentally altered the risk landscape for Indian organisations. Data breaches now trigger mandatory compliance obligations regardless of their origin, transforming incidents that were once purely operational concerns into regulatory events with significant financial and legal implications.

Case Study 1: Cloud Misconfiguration in a Consumer Platform

A prominent consumer-facing platform experienced a data exposure incident when a misconfigured storage bucket on its public cloud infrastructure inadvertently made customer data publicly accessible. While no malicious actor was involved, the incident still constituted a reportable data breach under the DPDP Act framework. The organisation faced several immediate obligations:
  • Notification to affected individuals within prescribed timelines
  • Formal reporting to the Data Protection Board
  • Comprehensive internal investigation and remediation measures
  • Potential penalties for failure to implement reasonable security safeguards as mandated under the Act
Such incidents highlight a critical gap in traditional risk management approaches. The financial exposure—encompassing regulatory penalties, legal costs, remediation expenses, and reputational damage—frequently exceeds conventional cyber insurance coverage limits, particularly when compliance failures are implicated.

Case Study 2: Ransomware Attack on Healthcare and EdTech Infrastructure

A mid-sized healthcare and education technology provider fell victim to a ransomware attack that encrypted sensitive personal records. Despite successful restoration from backup systems, the organisation confronted extensive regulatory and operational obligations:
  • Forensic assessment to determine whether data confidentiality was compromised
  • Mandatory notification to regulatory authorities and affected data principals
  • Ongoing legal and compliance proceedings
The total cost extended far beyond any ransom demand. Forensic investigations, legal advisory services, public communications, regulatory compliance activities, and operational disruption collectively created substantial financial strain, costs that would have been mitigated with appropriate insurance coverage.

Case Study 3: AI-Enabled Fraud and Social Engineering

The emergence of AI-driven attack vectors has introduced new dimensions of cyber risk. Deepfake technology and sophisticated phishing campaigns now enable threat actors to impersonate senior leadership with unprecedented authenticity, compelling finance teams to authorise fraudulent fund transfers or inappropriate data disclosures. These attacks often circumvent traditional technical security controls because they exploit human trust rather than system vulnerabilities. As a result, organisations are increasingly seeking insurance coverage for social engineering and cyber fraud events, particularly those involving personal data or financial information, that fall outside conventional cybersecurity threat models.

The Evolution of Cyber Insurance in India

The Indian cyber insurance market is undergoing significant transformation in response to the DPDP Act and evolving threat landscape. Modern policies now extend beyond traditional hacking incidents to address:
  • Data breaches resulting from human error or operational failures
  • Third-party vendor and SaaS provider security failures
  • Cloud service disruptions and availability incidents
  • Regulatory investigation costs and legal defense expenses
  • Incident response, crisis management, and public relations support
Organisations are reassessing their coverage adequacy as they recognise that historical policy limits of Rs. 10–20 crore may prove insufficient when regulatory penalties, legal costs, business interruption losses, and remediation expenses are aggregated under the DPDP compliance framework.

The SME and MSME Vulnerability

Small and medium enterprises represent the most vulnerable segment of the market. While many SMEs and MSMEs regularly process personal data, they frequently lack:
  • Mature information security controls and governance frameworks
  • Dedicated compliance and data protection teams
  • Financial reserves to absorb penalties, legal costs, or operational disruption
For organisations in this segment, even a relatively minor cyber incident can trigger prolonged operational shutdowns or, in severe cases, permanent closure. Despite this heightened vulnerability, cyber insurance adoption among SMEs remains disproportionately low, driven primarily by awareness gaps and perceived cost barriers.

Implications for the Cyber Insurance Ecosystem

The Indian cyber insurance market is entering a period of accelerated growth and structural evolution. Several key trends are emerging:
  • Higher policy limits becoming standard practice across industries
  • Enhanced underwriting processes emphasising compliance readiness and data governance maturity
  • Comprehensive coverage integrating legal advisory, forensic investigation, and regulatory support
  • Risk-based pricing models that reward robust data protection practices
Looking ahead, cyber insurance will increasingly be evaluated not merely as a risk-transfer mechanism, but as an indicator of an organisation's overall data protection posture and regulatory preparedness.

DPDP Act and the End of Optional Cyber Insurance

The DPDP Act has fundamentally redefined cyber risk in the Indian context. Data breaches are no longer isolated IT failures; they are regulatory events carrying substantial financial, legal, and reputational consequences. In this environment, cyber insurance is transitioning from a discretionary safeguard to a strategic imperative. Organisations that integrate cyber insurance into a comprehensive data governance and enterprise risk management strategy will be better positioned to navigate the evolving regulatory landscape. Conversely, those that remain uninsured or underinsured may discover that the cost of inadequate preparation far exceeds the investment required for robust protection. (This article reflects the author’s analysis and personal viewpoints and is intended for informational purposes only. It should not be construed as legal or regulatory advice.)

8 Ways the DPDP Act Will Change How Indian Companies Handle Data in 2026 

16 December 2025 at 01:16

DPDP Act

For years, data privacy in India lived in a grey zone. Mobile numbers demanded at checkout counters. Aadhaar photocopies lying unattended in hotel drawers. Marketing messages that arrived long after you stopped using a service. Most of us accepted this as normal, until the law caught up.  That moment has arrived.  The Digital Personal Data Protection Act (DPDP Act), 2023, backed by the Digital Personal Data Protection Rules, 2025 notified by the Ministry of Electronics and Information Technology (MeitY) on 13 November 2025, marks a decisive shift in how personal data must be treated in India. As the country heads into 2026, businesses are entering the most critical phase: execution.  Companies now have an 18-month window to re-engineer systems, processes, and accountability frameworks across IT, legal, HR, marketing, and vendor ecosystems. The change is not cosmetic. It is structural.  As Sandeep Shukla, Director, International Institute of Information Technology Hyderabad (IIIT Hyderabad), puts it bluntly: 
“Well, I can say that Indian Companies so far has been rather negligent of customer's privacy. Anywhere you go, they ask for your mobile number.” 
The DPDP Act is designed to ensure that such casual indifference to personal data does not survive the next decade.  Below are eight fundamental ways the DPDP Act will change how Indian companies handle data in 2026, with real-world implications for businesses, consumers, and the digital economy.

1. Privacy Will Move from the Back Office to the Boardroom

Until now, data protection in Indian organizations largely sat with compliance teams or IT security. That model will not hold in 2026.  The DPDP framework makes senior leadership directly accountable for how personal data is handled, especially in cases of breaches or systemic non-compliance. Privacy risk will increasingly be treated like financial or operational risk. 
According to Shashank Bajpai, CISO & CTSO at YOTTA, “The DPDP Act (2023) becomes operational through Rules notified in November 2025; the result is a staggered compliance timetable that places 2026 squarely in the execution phase. That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.” 
In 2026, privacy decisions will increasingly sit with boards, CXOs, and risk committees. Metrics such as consent opt-out rates, breach response time, and third-party risk exposure will become leadership-level conversations, not IT footnotes.

2. Consent Will Become Clear, Granular, and Reversible

One of the most visible changes users will experience is how consent is sought.  Under the DPDP Act, consent must be specific, informed, unambiguous, and easy to withdraw. Pre-ticked boxes and vague “by using this service” clauses will no longer be enough. 
As Gauravdeep Singh, State Head (Digital Transformation), e-Mission Team, MeitY, explains, “Data Principal = YOU.” 
Whether it’s a food delivery app requesting location access or a fintech platform processing transaction history, individuals gain the right to control how their data is used—and to change their mind later.

3. Data Hoarding Will Turn into a Liability

For many Indian companies, collecting more data than necessary was seen as harmless. Under the DPDP Act, it becomes risky.  Organizations must now define why data is collected, how long it is retained, and how it is securely disposed of. If personal data is no longer required for a stated purpose, it cannot simply be stored indefinitely. 
Shukla highlights how deeply embedded poor practices have been: “Hotels take your aadhaar card or driving license and copy and keep it in the drawers inside files without ever telling the customer about their policy regarding the disposal of such PII data safely and securely.”
In 2026, undefined retention is no longer acceptable.

4. Third-Party Vendors Will Come Under the Scanner

Data processors such as cloud providers, payment gateways, and CRM platforms will no longer operate in the shadows. The DPDP Act clearly distinguishes between Data Fiduciaries (companies that decide how data is used) and Data Processors (those that process data on their behalf). Fiduciaries remain accountable, even if the breach occurs at a vendor. This will force companies to:
  • Audit vendors regularly 
  • Rewrite contracts with DPDP clauses 
  • Monitor cross-border data flows 
As Shukla notes: “The shops, E-commerce establishments, businesses, utilities collect so much customer PII, and often use third party data processor for billing, marketing and outreach. We hardly ever get to know how they handle the data.”
In 2026, companies will be required to audit vendors, strengthen contracts, and ensure processors follow DPDP-compliant practices, because liability remains with the fiduciary.

5. Breach Response Will Be Timed, Tested, and Visible

Data breaches are no longer just technical incidents; they are legal events. The DPDP Rules require organizations to detect, assess, and respond to breaches with defined processes and accountability. Silence or delay will only worsen regulatory consequences.
As Bajpai notes, “The practical effect is immediate: companies must move from policy documents to implemented consent systems, security controls, breach workflows, and vendor governance.” 
Tabletop exercises, breach simulations, and forensic readiness will become standard—not optional. 

6. Significant Data Fiduciaries (SDFs) Will Face Heavier Obligations

Not all companies are treated equally under the DPDP Act. Significant Data Fiduciaries (SDFs), those handling large volumes of sensitive personal data, will face stricter obligations, including:
  • Data Protection Impact Assessments 
  • Appointment of India-based Data Protection Officers 
  • Regular independent audits 
Global platforms like Meta, Google, Amazon, and large Indian fintechs will feel the pressure first, but the ripple effect will touch the entire ecosystem.

7. A New Privacy Infrastructure Will Emerge

The DPDP framework is not just regulation—it is ecosystem building. 
As Bajpai observes, “This is not just regulation; it is an economic strategy to build domestic capability in cloud, identity, security and RegTech.” 
Consent Managers, auditors, privacy tech vendors, and compliance platforms will grow rapidly in 2026. For Indian startups, DPDP compliance itself becomes a business opportunity.

8. Trust Will Become a Competitive Advantage

Perhaps the biggest change is psychological. In 2026, users will increasingly ask: 
  • Why does this app need my data? 
  • Can I withdraw consent? 
  • What happens if there’s a breach? 
One Reddit user captured the risk succinctly, “On paper, the DPDP Act looks great… But a law is only as strong as public awareness around it.” 
Companies that communicate transparently and respect user choice will win trust. Those that don’t will lose customers long before regulators step in. 

Preparing for 2026: From Awareness to Action 

As Hareesh Tibrewala, CEO at Anhad, notes, “Organizations now have the opportunity to prepare a roadmap for DPDP implementation.”
For many businesses, however, the challenge lies in turning awareness into action, especially when clarity around timelines and responsibilities is still evolving. The concern extends beyond citizens to companies themselves, many of which are still grappling with core concepts such as consent management, data fiduciary obligations, and breach response requirements. With penalties tiered by the nature and severity of violations, ranging from significant fines to amounts running into hundreds of crores, this lack of understanding could prove costly. In 2026, regulators will no longer be looking for intent; they will be looking for evidence of execution. As Bajpai points out, “That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.”

What Companies Should Do Now: A Practical DPDP Act Readiness Checklist 

As India moves closer to full DPDP enforcement, organizations that act early will find compliance far less disruptive. At a minimum, businesses should focus on the following steps: 
  • Map personal data flows: Identify what personal data is collected, where it resides, who has access to it, and which third parties process it. 
  • Review consent mechanisms: Ensure consent requests are clear, purpose-specific, and easy to withdraw, across websites, apps, and internal systems. 
  • Define retention and deletion policies: Establish how long different categories of personal data are retained and document secure disposal processes. 
  • Assess third-party risk: Audit vendors, cloud providers, and processors to confirm DPDP-aligned controls and contractual obligations. 
  • Strengthen breach response readiness: Put tested incident response and notification workflows in place, not just policies on paper. 
  • Train employees across functions: Build awareness beyond IT and legal teams; privacy failures often begin with everyday operational mistakes. 
  • Assign ownership and accountability: Clearly define who is responsible for DPDP compliance, reporting, and ongoing monitoring. 
These steps are not about ticking boxes; they are about building muscle memory for a privacy-first operating environment. 
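As a concrete illustration of the consent item above, here is a minimal sketch, in Python, of a purpose-specific consent record that can be withdrawn as easily as it is granted. The class names, fields, and methods are hypothetical and are not drawn from the DPDP Rules or any Consent Manager specification; a production system would also need multilingual notices, audit trails, and integration with a registered Consent Manager.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid


@dataclass
class ConsentRecord:
    """One consent, tied to a single named purpose (fields are illustrative)."""
    principal_id: str             # the user giving consent
    purpose: str                  # e.g. "order_delivery", never a blanket "all purposes"
    granted_at: datetime
    notice_version: str           # which privacy notice the user actually saw
    withdrawn_at: Optional[datetime] = None
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None


class ConsentLedger:
    """Append-only store of consent events, so grants and withdrawals stay auditable."""

    def __init__(self) -> None:
        self._records: list = []

    def grant(self, principal_id: str, purpose: str, notice_version: str) -> ConsentRecord:
        record = ConsentRecord(principal_id, purpose, datetime.now(timezone.utc), notice_version)
        self._records.append(record)
        return record

    def withdraw(self, principal_id: str, purpose: str) -> None:
        # Withdrawal must be as easy as granting: one call, no conditions attached.
        for record in self._records:
            if record.principal_id == principal_id and record.purpose == purpose and record.active:
                record.withdrawn_at = datetime.now(timezone.utc)

    def has_consent(self, principal_id: str, purpose: str) -> bool:
        # Processing should be gated on an active, purpose-specific consent.
        return any(
            r.principal_id == principal_id and r.purpose == purpose and r.active
            for r in self._records
        )


if __name__ == "__main__":
    ledger = ConsentLedger()
    ledger.grant("user-42", "marketing_email", notice_version="2026-01")
    print(ledger.has_consent("user-42", "marketing_email"))   # True
    ledger.withdraw("user-42", "marketing_email")
    print(ledger.has_consent("user-42", "marketing_email"))   # False
```

The design choice worth noting is the append-only ledger: grants and withdrawals are recorded as events rather than overwritten, so an organization can show exactly what consent existed at any point in time.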

2026 Is the Year Privacy Becomes Real 

The DPDP Act does not promise instant perfection. What it demands is accountability. By 2026, privacy will move from policy documents to product design, from legal fine print to leadership dashboards, and from reactive fixes to proactive governance. Organizations that delay will not only face regulatory penalties but also risk losing customer trust in an increasingly privacy-aware market. 
As Sandeep Shukla cautions, “It will probably take years before a proper implementation at all levels of organizations would be seen.” 
But the direction is clear. Personal data in India can no longer be treated casually.  The DPDP Act marks the end of informal data handling, and the beginning of a more disciplined, transparent, and accountable digital economy. 

Australia’s Social Media Ban for Kids: Protection, Overreach or the Start of a Global Shift?

10 December 2025 at 04:23

ban on social media

On a cozy December morning, as children across Australia set their school bags aside for the holiday season and held their tablets and phones in hand to take that selfie announcing to the world that they were all set for the fun to begin, something felt amiss. They couldn't access their Snapchat and Instagram accounts. No, it wasn't another outage caused by a cyberattack; they could see their parents lounging on the couch, laughing at the dog-dance reels. So why couldn't they get in? The answer: the ban on social media for children under 16 had officially taken effect. It wasn't just one or ten or a hundred, but more than one million young users who woke up locked out of their social media. No TikTok scroll. No Snapchat streak. No YouTube comments. Australia had quietly entered a new era with the world’s first nationwide ban on social media for children under 16, effective December 10.

The move has set off global debate, parental relief, youth frustration, and a broader question: Is this the start of a global shift, or a risky social experiment? Prime Minister Anthony Albanese was clear about why his government took this unprecedented step. “Social media is doing harm to our kids, and I’m calling time on it,” he said during a press conference. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.”

Under the Albanese government’s social media policy, platforms including Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads and YouTube must block users under 16 or face fines of up to AU$49.5 million (about US$32 million). Parents and children won’t be penalized, but tech companies will.

Australia’s social media ban. Source: eSafety Commissioner

Australia's Ban on Social Media: A Big Question

Albanese pointed to rising concerns about the effects of social media on children, from body-image distortion to exposure to inappropriate content and addictive algorithms that tug at young attention spans. Research supports these concerns. A Pew Research Center study found:
  • 48% of teens say social media has a mostly negative effect on people their age, up sharply from 32% in 2022.
  • 45% feel they spend too much time on social media.
  • Teen girls experience more negative impacts than boys, including mental health struggles (25% vs 14%) and loss of confidence (20% vs 10%).
  • Yet paradoxically, 74% of teens feel more connected to friends because of social media, and 63% use it for creativity.
These contradictions make the issue far from black and white. Psychologists remind us that adolescence, beginning around age 10 and stretching into the mid-20s, is a time of rapid biological and social change, and that maturity levels vary. This means that a one-size-fits-all ban on social media may overshoot the mark.

Ban on Social Media for Users Under 16: How People Reacted

Australia’s announcement, first revealed in November 2024, has motivated countries from Malaysia to Denmark to consider similar legislation. But not everyone is convinced this is the right way forward.

Supporters Applaud “A Chance at a Real Childhood”

Pediatric occupational therapist Cris Rowan, who has spent 22 years working with children, celebrated the move: “This may be the first time children have the opportunity to experience a real summer,” she said. “Canada should follow Australia’s bold initiative. Parents and teachers can start their own movement by banning social media from homes and schools.” Parents’ groups have also welcomed the decision, seeing it as a necessary intervention in a world where screens dominate childhood.

Others Say the Ban Is Imperfect, but Necessary

Australian author Geoff Hutchison puts it bluntly: “We shouldn’t look for absolutes. It will be far from perfect. But we can learn what works… We cannot expect the repugnant tech bros to care.” His view reflects a broader belief that tech companies have too much power, and too little accountability.

Experts Warn Against False Security 

However, some experts caution that the Australia ban on social media may create the illusion of safety while failing to address deeper issues. Professor Tama Leaver, Internet Studies expert at Curtin University, told The Cyber Express that while the ban on social media addresses some risks, such as algorithmic amplification of inappropriate content and endless scrolling, many online dangers remain.

“The social media ban only really addresses one set of risks for young people, which is algorithmic amplification of inappropriate content and the doomscrolling or infinite scroll. Many risks remain. The ban does nothing to address cyberbullying since messaging platforms are exempt from the ban, so cyberbullying will simply shift from one platform to another.”

Leaver also noted that restricting access to popular platforms will not drive children offline. Because of the ban on social media, young users will explore whatever digital spaces remain, which could be less regulated and potentially riskier.

“Young people are not leaving the digital world. If we take some apps and platforms away, they will explore and experiment with whatever is left. If those remaining spaces are less known and more risky, then the risks for young people could definitely increase. Ideally the ban will lead to more conversations with parents and others about what young people explore and do online, which could mitigate many of the risks.”

From a broader perspective, Leaver emphasized that the ban on social media will only be fully beneficial if accompanied by significant investment in digital literacy and digital citizenship programs across schools:

“The only way this ban could be fully beneficial is if there is a huge increase in funding and delivery of digital literacy and digital citizenship programs across the whole K-12 educational spectrum. We have to formally teach young people those literacies they might otherwise have learnt socially, otherwise the ban is just a 3 year wait that achieves nothing.”

He added that platforms themselves should take a proactive role in protecting children:

“There is a global appetite for better regulation of platforms, especially regarding children and young people. A digital duty of care which requires platforms to examine and proactively reduce or mitigate risks before they appear on platforms would be ideal, and is something Australia and other countries are exploring. Minimizing risks before they occur would be vastly preferable to the current processes which can only usually address harm once it occurs.”

Looking at the global stage, Leaver sees Australia’s ban on social media as a potential learning opportunity for other nations:

“There is clearly global appetite for better and more meaningful regulation of digital platforms. For countries considering their own bans, taking the time to really examine the rollout in Australia, to learn from our mistakes as much as our ambitions, would seem the most sensible path forward.”

Other specialists continue to warn that the ban on social media could isolate vulnerable teenagers or push them toward more dangerous, unregulated corners of the internet.

Legal Voices Raise Serious Constitutional Questions

Senior Supreme Court Advocate Dr. K. P. Kylasanatha Pillay offered a thoughtful reflection: “Exposure of children to the vagaries of social media is a global concern… But is a total ban feasible? We must ask whether this is a reasonable restriction or if it crosses the limits of state action. Not all social media content is harmful. The best remedy is to teach children awareness.” His perspective reflects growing debate about rights, safety, and state control.

LinkedIn, Reddit, and the Public Divide

Social media itself has become the battleground for reactions. On Reddit, youngsters were particularly vocal about the ban on social media. One teen wrote: “Good intentions, bad execution. This will make our generation clueless about internet safety… Social media is how teenagers express themselves. This ban silences our voices.” Another pointed out the easy loophole: “Bypassing this ban is as easy as using a free VPN. Governments don’t care about safety — they want control.” But one adult user disagreed: “Everyone against the ban seems to be an actual child. I got my first smartphone at 20. My parents were right — early exposure isn’t always good.” This generational divide is at the heart of the debate.

Brands, Marketers, and Schools Brace for Impact

Bindu Sharma, Founder of World One Consulting, highlighted the global implications: “Ten of the biggest platforms were ordered to block children… The world is watching how this plays out.” If the ban succeeds, brands may rethink how they target younger audiences. If it fails, digital regulation worldwide may need reimagining.

Where Does This Leave the World?

Australia’s decision to ban social media for children under 16 is bold, controversial, and rooted in good intentions. It could reshape how societies view childhood, technology, and digital rights. But as critics note, a ban on social media platforms can also create unintended consequences, from delinquency to digital illiteracy. What’s clear is this: Australia has started a global conversation that’s no longer avoidable. As one LinkedIn user concluded: “Safety of the child today is assurance of the safety of society tomorrow.”

European Court Imposes Strict New Data Checks on Online Marketplace Ads

3 December 2025 at 00:34

CJEU ruling

A ruling by the Court of Justice of the European Union (CJEU) on Tuesday has made it clear that online marketplaces are responsible for the personal data that appears in advertisements on their platforms. The decision establishes that platforms must obtain consent from any person whose data is shown in an advertisement, and must verify ads before they go live, especially where sensitive data is involved.

The ruling stems from a 2018 incident in Romania. A fake advertisement on the classifieds website publi24.ro claimed a woman was offering sexual services. The post included her photos and phone number, which were used without her permission. The operator of the site, Russmedia Digital, removed the ad within an hour, but by then it had already been copied to other websites. The woman said the ad harmed her privacy and reputation and took the company to court. Lower courts in Romania reached different decisions, so the case was referred to the CJEU for clarity. The court has now confirmed that online marketplaces are data controllers under the GDPR for the personal data contained in ads on their sites.

CJEU Ruling: What Online Marketplaces Must Do Now

The court said that marketplace operators must take more responsibility and cannot rely on old rules that protect hosting services from liability. From now on, platforms must:
  • Check ads before publishing them when they contain personal or sensitive data.
  • Confirm that the person posting the ad is the same person shown in the ad, or make sure the person shown has given explicit consent.
  • Refuse ads if consent or identity verification cannot be confirmed.
  • Put measures in place to help prevent sensitive ads from being copied and reposted on other websites.
These steps must be part of the platform’s regular technical and organisational processes to comply with the GDPR; a simplified sketch of such a pre-publication check follows below.
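To illustrate what such a pre-publication check could look like in practice, below is a simplified sketch in Python. The data model and decision logic are hypothetical and deliberately minimal; a real marketplace would put identity verification, consent capture, and human review behind each of these checks, plus measures against reposting.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AdSubmission:
    """Illustrative fields for a classified ad awaiting publication."""
    poster_id: str                        # verified identity of the person posting
    depicted_person_id: Optional[str]     # person shown in photos or named, if any
    contains_personal_data: bool
    contains_sensitive_data: bool
    consent_on_file: bool                 # explicit consent from the depicted person


def may_publish(ad: AdSubmission) -> bool:
    """Pre-publication gate reflecting the CJEU's verification duties (simplified)."""
    if not ad.contains_personal_data:
        return True  # nothing to verify before the ad goes live
    poster_is_depicted = ad.depicted_person_id in (None, ad.poster_id)
    # The platform must confirm the poster is the person shown in the ad,
    # or hold that person's explicit consent; otherwise the ad is refused.
    if not poster_is_depicted and not ad.consent_on_file:
        return False
    # Sensitive data calls for checks before publication; anti-reposting
    # measures (watermarking, crawl monitoring, etc.) are not modelled here.
    return True


if __name__ == "__main__":
    ad = AdSubmission(
        poster_id="acct-001",
        depicted_person_id="person-999",
        contains_personal_data=True,
        contains_sensitive_data=True,
        consent_on_file=False,
    )
    print(may_publish(ad))  # False: refused until consent or identity is confirmed
```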

What This Means for Platforms Across The EU

Legal teams at Pinsent Masons warned the decision “will likely have major implications for data protection across the 27 member states.” Nienke Kingma of Pinsent Masons said the ruling is important for compliance, adding it is “setting a new standard for data protection compliance across the EU.” Thijs Kelder, also at Pinsent Masons, said: “This judgment makes clear that online marketplaces cannot avoid their obligations under the GDPR,” and noted the decision “increases the operational risks on these platforms,” meaning companies will need stronger risk management. Daphne Keller of Stanford Law School warned about wider effects on free expression and platform design, noting the ruling “has major implications for free expression and access to information, age verification and privacy.”

Practical Impact

The CJEU ruling marks a major shift in how online marketplaces must operate. Platforms that allow users to post adverts will now have to rethink their processes, from verifying identities and checking personal data before an ad goes live to updating their terms and investing in new technical controls. Smaller platforms may feel the pressure most, as the cost of building these checks could be significant. What happens next will depend on how national data protection authorities interpret the ruling and how quickly companies can adapt. The coming months will reveal how verification should work in practice, what measures count as sufficient protection against reposting, and how platforms can balance these new duties with user privacy and free expression. The ruling sets a strict new standard, and its real impact will become clearer as regulators, courts, and platforms begin to implement it.

EU Reaches Agreement on Child Sexual Abuse Detection Law After Three Years of Contentious Debate

27 November 2025 at 13:47

Child Sexual Abuse

A lengthy standoff over privacy rights versus child protection ended Wednesday when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law that would require online platforms to detect, report, and remove child sexual abuse material, even as critics warn the measures could enable mass surveillance of private communications.

The Council agreement, reached despite opposition from the Czech Republic, Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.

The Council introduces three risk categories of online services based on objective criteria including service type, with authorities able to oblige online service providers classified in the high-risk category to contribute to developing technologies to mitigate risks relating to their services. The framework shifts responsibility to digital companies to proactively address risks on their platforms.

Permanent Extension of Voluntary Scanning

One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.

At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.

Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.

New EU Centre on Child Sexual Abuse

The legislation provides for establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.

The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.

Online companies must provide assistance for victims who would like child sexual abuse material depicting them removed or for access to such material disabled. Victims can ask for support from the EU Centre, which will check whether companies involved have removed or disabled access to items victims want taken down.

Privacy Concerns and Opposition

The breakthrough comes after months of stalled negotiations and a postponed October vote, when Germany joined a blocking minority opposing what critics commonly call "chat control." Berlin argued the proposal risked "unwarranted monitoring of chats," comparing it to opening private letters.

Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned by authorities to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance protecting minors while maintaining confidentiality of communications, including end-to-end encryption.

Also read: EU Chat Control Proposal to Prevent Child Sexual Abuse Slammed by Critics

Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.

The European Parliament's study heavily critiqued the Commission's proposal, concluding that there are currently no technological solutions that can detect child sexual abuse material without high error rates affecting all messages, files, and data on platforms. The study also concluded the proposal would undermine end-to-end encryption and the security of digital communications.

Scope of the Crisis

Statistics underscore the urgency. Last year, 20.5 million reports and 63 million files of abuse material were submitted to the National Center for Missing and Exploited Children's CyberTipline, with online grooming increasing 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.

Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, with at least one in five children in Europe a victim of sexual abuse.

The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations need to conclude before the already postponed expiration of the current e-Privacy regulation that allows exceptions under which companies can conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.
