India Brings AI-Generated Content Under Formal Regulation with IT Rules Amendment

12 February 2026 at 04:28

The Central Government has formally brought AI-generated content within India’s regulatory framework for the first time. Through notification G.S.R. 120(E), issued by the Ministry of Electronics and Information Technology (MeitY) and signed by Joint Secretary Ajit Kumar, amendments were introduced to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The revised rules take effect from February 20, 2026.

The move represents a significant shift in Indian cybersecurity and digital governance policy. While the Information Technology Act, 2000, has long addressed unlawful online conduct, these amendments explicitly define and regulate “synthetically generated information” (SGI), placing AI-generated content under structured compliance obligations.

What the Law Now Defines as “Synthetically Generated Information” 

The notification inserts new clauses into Rule 2 of the 2021 Rules. It defines “audio, visual or audio-visual information” broadly to include any audio, image, photograph, video, sound recording, or similar content created, generated, modified, or altered through a computer resource.

More critically, clause (wa) defines “synthetically generated information” as content that is artificially or algorithmically created or altered in a manner that appears real, authentic, or true and depicts or portrays an individual or event in a way that is likely to be perceived as indistinguishable from a natural person or real-world occurrence.

This definition clearly encompasses deepfake videos, AI-generated voiceovers, face-swapped images, and other forms of AI-generated content designed to simulate authenticity. The framing is deliberate: the concern is not merely digital alteration but deception, meaning content that could reasonably be mistaken for reality.

At the same time, the amendment carves out exceptions. Routine or good-faith editing, such as color correction, formatting, transcription, compression, accessibility improvements, translation, or technical enhancement, does not qualify as synthetically generated information, provided the underlying substance or meaning is not materially altered. Educational materials, draft templates, and conceptual illustrations also fall outside the SGI category unless they create a false document or false electronic record. This distinction attempts to balance innovation in information technology with protection against misuse.
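
To see how the definition and its carve-outs fit together, the sketch below renders the clause (wa) test as a toy decision procedure. Every field name and rule here is illustrative; the actual determination is a legal judgment made on the facts of each case, not a mechanical check.

```python
from dataclasses import dataclass, field

# Routine, good-faith operations carved out by the amendment
ROUTINE_EDITS = {
    "color_correction", "formatting", "transcription", "compression",
    "accessibility", "translation", "technical_enhancement",
}

@dataclass
class ContentItem:
    ai_created_or_altered: bool        # artificially or algorithmically produced
    appears_authentic: bool            # likely to be perceived as real
    depicts_person_or_event: bool      # portrays an individual or occurrence
    edits_applied: set = field(default_factory=set)
    materially_altered: bool = False   # underlying substance or meaning changed
    educational_or_template: bool = False
    creates_false_record: bool = False  # e.g. a forged document

def is_sgi(item: ContentItem) -> bool:
    """Toy approximation of the clause (wa) test for SGI."""
    only_routine = bool(item.edits_applied) and item.edits_applied <= ROUTINE_EDITS
    # Carve-out 1: routine good-faith edits that leave meaning intact
    if only_routine and not item.materially_altered:
        return False
    # Carve-out 2: educational or template material, unless it fakes a record
    if item.educational_or_template and not item.creates_false_record:
        return False
    # Core test: synthetic origin, apparent authenticity, real-world depiction
    return (item.ai_created_or_altered
            and item.appears_authentic
            and item.depicts_person_or_event)
```

Under this toy model, a colour-corrected press photograph fails the test, while a face-swapped video of a real politician passes it.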

New Duties for Intermediaries 

The amendments substantially revise Rule 3, expanding intermediary obligations. Platforms must inform users, at least once every three months and in English or any Eighth Schedule language, that non-compliance with platform rules or applicable laws may lead to suspension, termination, removal of content, or legal liability. Where violations relate to criminal offences, such as those under the Bharatiya Nagarik Suraksha Sanhita, 2023, or the Protection of Children from Sexual Offences Act, 2012, mandatory reporting requirements apply.

A new clause (ca) introduces additional obligations for intermediaries that enable or facilitate the creation or dissemination of synthetically generated information. These platforms must inform users that directing their services to create unlawful AI-generated content may attract penalties under laws including the Information Technology Act, the Bharatiya Nyaya Sanhita, 2023, the Representation of the People Act, 1951, the Indecent Representation of Women (Prohibition) Act, 1986, the Sexual Harassment of Women at Workplace Act, 2013, and the Immoral Traffic (Prevention) Act, 1956.

Consequences for violations may include immediate content removal, suspension or termination of accounts, disclosure of the violator’s identity to victims, and reporting to authorities where offences require mandatory reporting. Compliance timelines have also been tightened: content removal in response to valid orders must now occur within three hours instead of thirty-six, certain grievance response windows have been reduced from fifteen days to seven, and some urgent compliance requirements now demand action within two hours.

Due Diligence and Labelling Requirements for AI-generated Content 

A new Rule 3(3) imposes explicit due diligence obligations for AI-generated content. Intermediaries must deploy reasonable and appropriate technical measures, including automated tools, to prevent users from creating or disseminating synthetically generated information that violates the law. This includes content containing child sexual abuse material, non-consensual intimate imagery, obscene or sexually explicit material, false electronic records, or content related to explosive materials or arms procurement. It also includes deceptive portrayals of real individuals or events intended to mislead.

For lawful AI-generated content that does not violate these prohibitions, the rules mandate prominent labelling. Visual content must carry clearly visible notices, and audio content must include a prefixed disclosure. In addition, such content must be embedded with permanent metadata or other provenance mechanisms, including a unique identifier linking the content to the intermediary’s computer resource, where technically feasible. Platforms are expressly prohibited from enabling the suppression or removal of these labels or metadata.
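
As a rough illustration of the two-part labelling duty, the sketch below stamps a visible notice onto an image and embeds provenance metadata carrying a unique identifier. It assumes the Pillow library; the label wording, metadata keys, and identifier scheme are illustrative, since the rules do not prescribe a specific format.

```python
import uuid
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src: str, dst: str, platform: str) -> str:
    """Apply a visible AI-content notice and embed provenance metadata."""
    img = Image.open(src).convert("RGB")

    # 1. Clearly visible notice drawn onto the image itself
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, img.height - 24), (img.width, img.height)], fill="black")
    draw.text((8, img.height - 20), "AI-GENERATED CONTENT", fill="white")

    # 2. Permanent metadata with a unique identifier linking the content
    #    back to the generating platform
    content_id = str(uuid.uuid4())
    meta = PngInfo()
    meta.add_text("SyntheticContent", "true")
    meta.add_text("ProvenanceID", content_id)
    meta.add_text("GeneratingPlatform", platform)

    img.save(dst, "PNG", pnginfo=meta)
    return content_id
```

In practice, platforms are more likely to adopt standardised provenance frameworks such as C2PA Content Credentials, which survive re-encoding and cross-platform sharing far better than ad hoc PNG text chunks.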

Enhanced Obligations for Social Media Intermediaries 

Rule 4 introduces an additional compliance layer for significant social media intermediaries. Before allowing publication, these platforms must require users to declare whether content is synthetically generated, and they must deploy technical measures to verify the accuracy of that declaration. Content confirmed as AI-generated must be clearly labelled before publication.

If a platform knowingly permits or fails to act on unlawful synthetically generated information, it may be deemed to have failed its due diligence obligations. The amendments also align terminology with India’s evolving criminal code, replacing references to the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023.
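
A minimal sketch of what such a declare-verify-label gate could look like follows. Everything here is an assumption for illustration: the detector is a stub, and the thresholds and decision categories are illustrative, not taken from the rules.

```python
from enum import Enum, auto

class Decision(Enum):
    PUBLISH = auto()
    PUBLISH_WITH_LABEL = auto()
    HOLD_FOR_REVIEW = auto()

def detect_synthetic(content: bytes) -> float:
    """Stub detector returning the estimated probability that content is
    synthetic. A real system would combine provenance-metadata checks
    with ML-based media forensics."""
    return 0.0  # placeholder

def prepublication_gate(content: bytes, user_declared_synthetic: bool) -> Decision:
    """Decide how to handle an upload before it is published."""
    score = detect_synthetic(content)
    if user_declared_synthetic or score >= 0.9:
        # Declared or confidently detected synthetic content must be labelled
        return Decision.PUBLISH_WITH_LABEL
    if score >= 0.5:
        # Declaration and detector disagree: escalate to human review
        return Decision.HOLD_FOR_REVIEW
    return Decision.PUBLISH
```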

Implications for Indian Cybersecurity and Digital Platforms 

The February 2026 amendment reflects a decisive step in Indian cybersecurity policy. Rather than banning AI-generated content outright, the government has opted for traceability, transparency, and technical accountability. The focus is on preventing deception, protecting individuals from reputational harm, and ensuring rapid response to unlawful synthetic media. For platforms operating within India’s Information Technology ecosystem, compliance will require investment in automated detection systems, content labelling infrastructure, metadata embedding, and accelerated grievance redressal workflows. For users, the regulatory signal is clear: generating deceptive synthetic media is no longer merely unethical; it may trigger direct legal consequences. As AI tools continue to scale, the regulatory framework introduced through G.S.R. 120(E) marks India’s formal recognition that AI-generated content is not a fringe concern but a central governance challenge in the digital age. 

Taiwan Government Agencies Faced 637 Cybersecurity Incidents in H2 2025

12 February 2026 at 02:21

In the past six months, Taiwan’s government agencies have reported 637 cybersecurity incidents, according to the latest data released by the Cybersecurity Academy (CSAA). The findings, published in its Cybersecurity Weekly Report, reveal not just the scale of digital threats facing Taiwan’s public sector, but also four recurring attack patterns that reflect broader global trends targeting government agencies. For international observers, the numbers are significant. Out of a total of 723 cybersecurity incidents reported by government bodies and select non-government organizations during this period, 637 cases involved government agencies alone. The majority of these—410 cases—were classified as illegal intrusion, making it the most prevalent threat category. These cybersecurity incidents provide insight into how threat actors continue to exploit both technical vulnerabilities and human behaviour within public institutions.

Illegal Intrusion Leads the Wave of Cybersecurity Incidents

Illegal intrusion remains the leading category among reported cybersecurity incidents affecting government agencies. While the term may sound broad, it reflects deliberate attempts by attackers to gain unauthorized access to systems, often paving the way for espionage, data theft, or operational disruption. The CSAA identified four recurring attack patterns behind these incidents. The first involves the distribution of malicious programs disguised as legitimate software. Attackers impersonate commonly used applications, luring employees into downloading infected files. Once installed, these malicious programs establish abnormal external connections, creating backdoors for future control or data exfiltration. This tactic is particularly concerning for government agencies, where employees frequently rely on specialized or internal tools. A single compromised endpoint can provide attackers with a foothold into wider networks, increasing the scale of cybersecurity incidents.

USB Worm Infections and Endpoint Vulnerabilities

The second major pattern behind these cybersecurity incidents involves worm infections spread through portable media devices such as USB drives. Though often considered an old-school technique, USB-based attacks remain effective—especially in environments where portable media is routinely used for operational tasks. When infected devices are plugged into systems, malicious code can automatically execute, triggering endpoint intrusion and abnormal system behavior. Such breaches can lead to lateral movement within networks and unauthorized external communications. This pattern underscores a key reality: technical sophistication is not always necessary. In many cybersecurity incidents, attackers succeed by exploiting routine workplace habits rather than zero-day vulnerabilities.
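
To make the mechanism concrete, here is a minimal removable-media screen of the kind an endpoint script might run, assuming the psutil library is installed. The flagged file names and extensions are illustrative; real worms and real endpoint protection are both considerably more sophisticated.

```python
import os
import psutil

SUSPICIOUS_NAMES = {"autorun.inf"}  # classic worm auto-execution vector
EXECUTABLE_EXTS = {".exe", ".scr", ".vbs", ".js", ".lnk"}

def scan_removable_drives() -> list[str]:
    """Flag suspicious files in the root of any removable volume."""
    findings = []
    for part in psutil.disk_partitions(all=False):
        # On Windows, psutil reports removable volumes in the mount options
        if "removable" not in part.opts:
            continue
        for entry in os.listdir(part.mountpoint):
            name = entry.lower()
            if name in SUSPICIOUS_NAMES or os.path.splitext(name)[1] in EXECUTABLE_EXTS:
                findings.append(os.path.join(part.mountpoint, entry))
    return findings

if __name__ == "__main__":
    for path in scan_removable_drives():
        print(f"Flag for review: {path}")
```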

Social Engineering and Watering Hole Attacks Target Trust

The third pattern involves social engineering email attacks, frequently disguised as administrative litigation or official document exchanges. These phishing emails are crafted around business topics highly relevant to government agencies, increasing the likelihood that recipients will open attachments or click malicious links. Such cybersecurity incidents rely heavily on human psychology. The urgency and authority embedded in administrative-themed emails make them particularly effective. Despite years of awareness campaigns, phishing remains one of the most successful entry points for attackers globally. The fourth pattern, known as watering hole attacks, adds another layer of complexity. In these cases, attackers compromise legitimate websites commonly visited by government officials. During normal browsing, malicious commands are silently executed, resulting in endpoint compromise and abnormal network behavior. Watering hole attacks demonstrate how cybersecurity incidents can originate from seemingly trusted digital environments. Even cautious users can fall victim when legitimate platforms are weaponized.

Critical Infrastructure Faces Operational Risks

Beyond government agencies, cybersecurity incidents reported by non-government organizations primarily affected critical infrastructure providers, particularly in emergency response, healthcare, and communications sectors. Interestingly, many of these cases involved equipment malfunctions or damage rather than direct cyberattacks. System operational anomalies led to service interruptions, while environmental factors such as typhoons disrupted critical services. These incidents highlight an important distinction: not all disruptions stem from malicious activity. However, the operational impact can be equally severe. The Cybersecurity Research Institute (CRI) emphasized that equipment resilience, operational continuity, and environmental risk preparedness are just as crucial as cybersecurity protection. In an interconnected world, digital security and physical resilience must go hand in hand.

Strengthening Endpoint Protection and Cyber Governance

In response to the rise in cybersecurity incidents, experts recommend a dual approach—technical reinforcement and management reform. From a technical perspective, endpoint protection and abnormal behavior monitoring must be strengthened. Systems should be capable of detecting malicious programs, suspicious command execution, abnormal connections, and risky portable media usage. Enhanced browsing and attachment access protection can further reduce the risk of malware downloads during routine operations. From a governance standpoint, ongoing education is essential. Personnel must remain alert to risks associated with fake software, social engineering email attacks, and watering hole attacks. Clear management policies regarding portable media usage, software sourcing, and external website access should be embedded into cybersecurity governance frameworks. The volume of cybersecurity incidents reported in just six months sends a clear message: digital threats targeting public institutions are persistent, adaptive, and increasingly strategic. Governments and critical infrastructure providers must move beyond reactive responses and build layered defenses that address both technology and human behavior.
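
As a rough illustration of the abnormal-connection monitoring described above, the sketch below flags established outbound connections owned by processes outside a hypothetical allowlist, again assuming psutil. The allowlist contents and alert format are illustrative; production EDR tooling correlates far more signals than this.

```python
import psutil

# Illustrative allowlist; a real deployment would derive this from policy
ALLOWLISTED_PROCESSES = {"chrome.exe", "outlook.exe", "svchost.exe"}

def flag_unexpected_connections() -> list[dict]:
    """Return alerts for established connections from unlisted processes.
    May require elevated privileges on some operating systems."""
    alerts = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr or conn.pid is None:
            continue
        try:
            name = psutil.Process(conn.pid).name()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        if name.lower() not in ALLOWLISTED_PROCESSES:
            alerts.append({
                "process": name,
                "pid": conn.pid,
                "remote": f"{conn.raddr.ip}:{conn.raddr.port}",
            })
    return alerts
```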

Cloud Security and Compliance: What It Is and Why It Matters for Your Business

11 February 2026 at 16:08

Cloud adoption didn’t just change where workloads run. It fundamentally changed how security and compliance must be managed. Enterprises are moving faster than ever across AWS, Azure, GCP, and hybrid...

Survey Sees Little Post-Quantum Computing Encryption Progress

10 February 2026 at 17:15

A global survey of 4,149 IT and security practitioners finds that while three-quarters (75%) expect a quantum computer will be capable of breaking traditional public key encryption within five years, only 38% are currently preparing to adopt post-quantum cryptography. Conducted by the Ponemon Institute on behalf of Entrust, a provider of...
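
For readers wondering what “preparing to adopt post-quantum cryptography” looks like in practice, below is a minimal key-encapsulation sketch using the open-source liboqs-python bindings. This illustration is not drawn from the survey itself: it assumes liboqs-python is installed and that the build exposes the NIST-standardised ML-KEM-768 algorithm (older builds name it Kyber768).

```python
import oqs  # liboqs-python: https://github.com/open-quantum-safe/liboqs-python

ALG = "ML-KEM-768"  # algorithm name depends on the installed liboqs version

# Receiver generates a post-quantum keypair
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # Sender encapsulates a shared secret against the receiver's public key
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, sender_secret = sender.encap_secret(public_key)

    # Receiver decapsulates the ciphertext and recovers the same secret
    receiver_secret = receiver.decap_secret(ciphertext)
    assert sender_secret == receiver_secret  # both sides now share a key
```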

What the Incognito Market Sentencing Reveals About Dark Web Drug Trafficking

5 February 2026 at 01:22

The 30-year prison sentence handed to Rui-Siang Lin, the operator of the infamous Incognito Market, is more than just another darknet takedown story. Lin, who ran Incognito Market under the alias “Pharaoh,” oversaw one of the largest online narcotics operations in history, generating more than $105 million in illegal drug sales worldwide before its collapse in March 2024. Platforms like Incognito Market are not clever experiments in decentralization. They are industrial-scale criminal enterprises, and their architects will be treated as such.

How Incognito Market Became a Global Narcotics Hub

Launched in October 2020, Incognito Market was designed to look and feel like a legitimate e-commerce platform, only its products were heroin, cocaine, methamphetamine, MDMA, LSD, ketamine, and counterfeit prescription drugs. Accessible through the Tor browser, the dark web marketplace allowed anyone with basic technical knowledge to buy illegal narcotics from around the globe. At its peak, Incognito Market supported over 400,000 buyer accounts, more than 1,800 vendors, and facilitated 640,000 drug transactions. Over 1,000 kilograms of cocaine, 1,000 kilograms of methamphetamine, and fentanyl-laced pills were likely sold, the authorities said. This was not a fringe operation—it was a global supply chain built on code, crypto, and calculated harm.
Also read: “Incognito Market” Operator Arrested for Running $100M Narcotics Marketplace

“Pharaoh” and the Business of Digital Drug Trafficking

Operating as “Pharaoh,” Lin exercised total control over Incognito Market. Vendors paid an entry fee and a 5% commission on every sale, creating a steady revenue stream that funded servers, staff, and Lin’s personal profit, which prosecutors estimated at more than $6 million. The marketplace ran with corporate polish: branding, customer service, vendor ratings, and even its own internal financial system, the Incognito Bank, which allowed users to deposit cryptocurrency and transact anonymously. The system was designed to remove trust from human relationships and replace it with platform-controlled infrastructure. This was not chaos. It was corporate-style crime.

Fentanyl, Fake Oxycodone, and Real Deaths

In January 2022, Lin explicitly allowed opiate sales on Incognito Market, a decision that proved deadly. Listings advertised “authentic” oxycodone, but laboratory tests later revealed fentanyl instead. In September 2022, a 27-year-old man from Arkansas died after consuming pills purchased through the platform. This is where the myth of victimless cybercrime collapsed. Incognito Market did not just move drugs—it amplified the opioid crisis and directly contributed to loss of life. U.S. Attorney Jay Clayton stated that Lin’s actions caused misery for more than 470,000 users and their families, a figure that shows the human cost behind the transactions.

Exit Scam, Extortion, and the Final Collapse

When Incognito Market shut down in March 2024, Lin didn’t disappear quietly. He stole at least $1 million in user deposits and attempted to extort buyers and vendors, threatening to expose their identities and crypto addresses. His message was blunt: “YES, THIS IS AN EXTORTION!!!” It was a fittingly brazen end to an operation built on manipulation and fear. Judge Colleen McMahon called Incognito Market the most serious drug case she had seen in nearly three decades, labeling Lin a “drug kingpin.” The message from law enforcement is unmistakable: dark web platforms, cryptocurrency, and blockchain are not shields against justice.

France Approves Social Media Ban for Children Under 15 Amid Global Trend

3 February 2026 at 04:13

French lawmakers have approved a social media ban for children under 15, a move aimed at protecting young people from harmful online content. The bill, which also restricts mobile phone use in high schools, was passed by a 130-21 vote in the National Assembly and is expected to take effect at the start of the next school year in September. French President Emmanuel Macron has called for the legislation to be fast-tracked, and it will now be reviewed by the Senate. “Banning social media for those under 15: this is what scientists recommend, and this is what the French people are overwhelmingly calling for,” Macron said. “Our children’s brains are not for sale — neither to American platforms nor to Chinese networks. Their dreams must not be dictated by algorithms.”

Why France Introduced a Social Media Ban for Children

The new social media ban for children in France is part of a broader effort to address the negative effects of excessive screen time and harmful content. Studies show that one in two French teenagers spends between two and five hours daily on smartphones, with 58% of children aged 12 to 17 actively using social networks. Health experts warn that prolonged social media use can lead to reduced self-esteem, exposure to risky behaviors such as self-harm or substance abuse, and mental health challenges. Some families in France have even taken legal action against platforms like TikTok over teen suicides allegedly linked to harmful online content. The French legislation carefully exempts educational resources, online encyclopedias, and platforms for open-source software, ensuring children can still access learning and development tools safely.

Lessons From Australia’s Social Media Ban for Children

France’s move mirrors global trends. In December 2025, Australia implemented a social media ban for children under 16, covering major platforms including Facebook, Instagram, TikTok, Snapchat, Reddit, Threads, X, YouTube, and Twitch. Messaging apps like WhatsApp were exempt. Since the ban, social media companies have revoked access to about 4.7 million accounts identified as belonging to children. Meta alone removed nearly 550,000 accounts the day after the ban took effect. Australian officials said the measures restore children’s online safety and prevent predatory social media practices. Platforms comply with the ban through age verification methods such as ID checks, third-party age estimation technologies, or inference from existing account data. While some children attempted to bypass restrictions, the ban is considered a significant step in protecting children online.

UK Considers Following France and Australia

The UK is also exploring similar measures. Prime Minister Keir Starmer recently said the government is considering a social media ban for children aged 15 and under, along with stricter age verification, phone curfews, and restrictions on addictive platform features. The UK’s move comes amid growing concern about the mental wellbeing and safety of children online.

Global Shift Toward Child Cyber Safety

The introduction of a social media ban for children in France, alongside Australia’s implementation and the UK’s proposal, highlights a global trend toward protecting minors in the digital age. These measures aim to balance access to educational and creative tools while shielding children from online harm and excessive screen time. As more countries consider social media regulations for minors, the focus is clear: ensuring cyber safety, supporting mental health, and giving children the chance to enjoy a safe and healthy online experience.

U.S. and Bulgaria Shut Down Three Major Piracy Websites in EU Crackdown

2 February 2026 at 03:05

In a major step against online piracy and illegal copyright distribution, U.S. law enforcement has partnered with Bulgarian authorities to dismantle three of the largest piracy websites operating in the European Union. The coordinated operation targeted platforms that allegedly provided unauthorized access to thousands of copyrighted movies, television shows, video games, software, and other digital content. The U.S. government executed seizure warrants against three U.S.-registered internet domains that were reportedly operated from Bulgaria. These domains — zamunda.net, arenabg.com, and zelka.org — were among the most heavily visited piracy services in the region. This action highlights growing international cooperation in tackling copyright infringement and protecting intellectual property rights worldwide.

Crackdown Targets Large-Scale Online Piracy Networks

According to U.S. authorities, the seized websites were allegedly engaged in the illegal distribution of copyrighted works on a massive scale. These platforms offered users access to unauthorized copies of content, including many works owned by U.S. companies and creators. The operation focused on online services that allowed millions of downloads of copyrighted material, contributing to significant financial losses for the entertainment, software, and publishing industries. Law enforcement officials emphasized that willful copyright infringement is a crime, and such piracy networks often operate as commercial enterprises rather than casual file-sharing platforms.

Tens of Millions of Visits and Millions in Losses

Court affidavits supporting the seizure warrants reveal the enormous scale of the piracy activity linked to these domains. The three websites reportedly:
  • Received tens of millions of visits annually
  • Offered thousands of infringed works without authorization
  • Generated millions of illegal downloads
  • Caused retail losses totaling millions of dollars
One of the domains was frequently ranked among the top 10 most visited websites in Bulgaria, highlighting how deeply embedded these piracy platforms were in the country’s online ecosystem. Authorities also noted that the websites appeared to generate substantial revenue through online advertisements, making piracy not only a copyright issue but also a profitable criminal business model.

Seized Domains Now Under U.S. Government Custody

The domains are now in the custody of the United States government. Visitors attempting to access the sites will instead see an official seizure banner. The notice informs users that:
  • Federal authorities have seized the domain names
  • Copyright infringement is a serious criminal offense
  • The websites are no longer operational
The seizure of these domains represents a significant disruption of piracy infrastructure and sends a clear warning to operators running similar illegal platforms.

Strong Cooperation Between U.S., Bulgaria, and Europol

The Justice Department credited Bulgarian law enforcement agencies for their critical support in the takedown. Key Bulgarian partners included:
  • The National Investigative Service
  • The Ministry of the Interior’s General Directorate Combating Organized Crime
  • The State Agency for National Security
  • The Prosecutor’s Office
On the U.S. side, the operation involved:
  • The U.S. Attorney’s Office for the Southern District of Mississippi
  • Homeland Security Investigations (HSI) New Orleans Field Office
  • The National Intellectual Property Rights Coordination Center (IPR Center)
The Justice Department also acknowledged the important coordination role played by Europol, along with technical support from the HSI Athens office and U.S. Customs and Border Protection (CBP) in Sofia. This case demonstrates how international partnerships are becoming essential in fighting cross-border cybercrime and piracy.

Role of ICHIP Program in Global Cybercrime Support

The Justice Department noted that it continues to provide intellectual property and cybercrime assistance to foreign partners through the International Computer Hacking and Intellectual Property (ICHIP) program. This program helps strengthen global law enforcement capabilities in areas such as:
  • Cybercrime investigations
  • Digital piracy enforcement
  • Intellectual property protection
  • Prosecutorial and judicial cooperation
The ICHIP initiative is jointly administered through OPDAT and the Computer Crime and Intellectual Property Section, in partnership with the U.S. Department of State.

IPR Center Remains Key Weapon Against Digital Piracy

The National Intellectual Property Rights Coordination Center (IPR Center) plays a central role in combating criminal piracy and counterfeiting. By bringing together expertise from multiple agencies, the IPR Center works to:
  • Share intelligence on IP theft
  • Coordinate enforcement actions
  • Protect the U.S. economy and consumers
  • Support investigations into digital piracy networks
Authorities encourage individuals and businesses to report suspected IP theft through the official IPR Center website.

Investigation Ongoing

The announcement was made by Assistant Attorney General A. Tysen Duva, U.S. Attorney Baxter Kruger, and Acting Special Agent in Charge Matt Wright of HSI New Orleans. Homeland Security Investigations has confirmed that the matter remains under active investigation. With the takedown of these major piracy sites, U.S. and Bulgarian authorities have delivered one of the strongest blows yet against online copyright infringement in the European Union.

MIND Extends DLP Reach to AI Agents

29 January 2026 at 08:57

MIND extends its data loss prevention platform to secure agentic AI, enabling organizations to discover, monitor, and govern AI agents in real time to prevent sensitive data exposure, shadow AI risks, and prompt injection attacks.

Canada Marks Data Privacy Week 2026 as Commissioner Pushes for Privacy by Design

27 January 2026 at 03:18

As Data Privacy Week 2026 gets underway from January 26 to 30, Canada’s Privacy Commissioner Philippe Dufresne has renewed calls for stronger data protection practices, modern privacy laws, and a privacy-first approach to emerging technologies such as artificial intelligence. In a statement marking Data Privacy Week 2026, Dufresne said data has become one of the most valuable resources of the 21st century, making responsible data management essential for both individuals and organizations. “Data is one of the most important resources of the 21st century and managing it well is essential for ensuring that individuals and organizations can confidently reap the benefits of a digital society,” he said. The Office of the Privacy Commissioner (OPC) has chosen privacy by design as its theme this year, highlighting the need for organizations to embed privacy into their programs, products, and services from the outset. According to Dufresne, this proactive approach can help organizations innovate responsibly, reduce risks, build for the future, and earn public trust.

Data Privacy Week 2026: Privacy by Design Takes Centre Stage

Speaking on the growing integration of technology into everyday life, Dufresne said Data Privacy Week 2026 is a timely opportunity to underline the importance of data protection. With personal data being collected, used, and shared at unprecedented levels, privacy is no longer a secondary concern. “Prioritizing privacy by design is my Office’s theme for Data Privacy Week this year, which highlights the benefits to organizations of taking a proactive approach to protect the personal information that is in their care,” he said. The OPC is also offering guidance for individuals on how to safeguard their personal information in a digital world, while providing organizations with resources to support privacy-first programs, policies, and services. These include principles to encourage responsible innovation, especially in the use of generative AI technologies.

Real-World Cases Show Why Privacy Matters

In parallel with Data Privacy Week 2026, Dufresne used a recent appearance before Parliament to point to concrete cases that show how privacy failures can cause serious and lasting harm. He referenced investigations into the non-consensual sharing of intimate images involving Aylo, the operator of Pornhub, and the 23andMe data breach, which exposed highly sensitive personal information of 7 million customers, including more than 300,000 Canadians. His office’s joint investigation into TikTok also highlighted the need to protect children’s privacy online. The probe not only resulted in a report but also led TikTok to improve its privacy practices in the interests of its users, particularly minors. Dufresne also confirmed an expanded investigation into X and its Grok chatbot, focusing on the emerging use of AI to create deepfakes, which he said presents significant risks to Canadians. “These are some of many examples that demonstrate the importance of privacy for current and future generations,” he told lawmakers, adding that prioritizing privacy is also a strategic and competitive asset for organizations.

Modernizing Canada’s Privacy Laws

A central theme of Data Privacy Week 2026 in Canada is the need to modernize privacy legislation. Dufresne said existing laws must be updated to protect Canadians in a data-driven world while giving businesses clear and practical rules. He voiced support for proposed changes under Bill C-15, the Budget 2025 Implementation Act, which would amend the Personal Information Protection and Electronic Documents Act (PIPEDA) to introduce a right to data mobility. This would allow individuals to request that their personal information be transferred to another organization, subject to regulations and safeguards. “A right to data mobility would give Canadians greater control of their personal information by allowing them to make decisions about who they want their information shared with,” he said, adding that it would also make it easier for people to switch service providers and support innovation and competition. Under the proposed amendments, organizations would be required to disclose personal information to designated organizations upon request, provided both are subject to a data-mobility framework. The federal government would also gain authority to set regulations covering safeguards, interoperability standards, and exceptions. Given the scope of these changes, Dufresne said it will be important for his office to be consulted as the regulations are developed.

A Call to Act During Data Privacy Week 2026

Looking ahead, Dufresne framed Data Privacy Week 2026 as both a moment of reflection and a call to action. “Let us work together to create a safer digital future for all, where privacy is everyone’s priority,” he said. He invited Canadians to take part in Data Privacy Week 2026 by joining the conversation online, engaging with content from the OPC’s LinkedIn account, and using the hashtag #DPW2026 to connect with others committed to advancing privacy in Canada and globally. As digital technologies continue to reshape daily life, the message from Canada’s Privacy Commissioner is clear: privacy is not just a legal requirement, but a foundation for trust, innovation, and long-term economic growth.

UK Turns to Australia Model as British Government Considers Social Media Ban for Children

21 January 2026 at 01:13

Just weeks after Australia rolled out the world’s first nationwide social media ban for children under 16, the British government has signaled it may follow a similar path. On Monday, Prime Minister Keir Starmer said the UK is considering a social media ban for children aged 15 and under, warning that “no option is off the table” as ministers confront growing concerns about young people’s online wellbeing. The proposal places a government-imposed social media ban at the center of a broader national debate about the role of technology in childhood. Officials said they are studying a wide range of measures, including tougher age checks, phone curfews, restrictions on addictive platform features, and potentially raising the digital age of consent.

UK Explores Stricter Limits on Social Media Ban for Children

In a Substack post on Tuesday, Starmer said that for many children, social media has become “a world of endless scrolling, anxiety and comparison.” “Being a child should not be about constant judgement from strangers or the pressure to perform for likes,” he wrote. Alongside the possible ban, the government has launched a formal consultation on children’s use of technology. The review will examine whether a social media ban for children would be effective and, if introduced, how it could be enforced. Ministers will also look at improving age assurance technology and limiting design features such as “infinite scrolling” and “streaks,” which officials say encourage compulsive use. The consultation will be backed by a nationwide conversation with parents, young people, and civil society groups. The government said it would respond to the consultation in the summer.

Learning from Australia’s Unprecedented Move

British ministers are set to visit Australia to “learn first-hand from their approach,” referencing Canberra’s decision to ban social media for children under 16. The Australian law, which took effect on December 10, requires platforms such as Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads, and YouTube to block underage users or face fines of up to AU$32 million. Prime Minister Anthony Albanese made clear why his government acted. “Social media is doing harm to our kids, and I’m calling time on it,” he said. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.” Parents and children are not penalized under the Australian rules; enforcement targets technology companies. Early figures suggest significant impact. Australia’s eSafety Commissioner Julie Inman-Grant said 4.7 million social media accounts were deactivated in the first week of the policy. To put that in context, there are about 2.5 million Australians aged eight to 15. “This is exactly what we hoped for and expected: early wins through focused deactivations,” she said, adding that “absolute perfection is not a realistic goal,” but the law aims to delay exposure, reduce harm, and set a clear social norm.

UK Consultation and School Phone Bans

The UK’s proposals go beyond a possible social media ban. The government said it will examine raising the digital age of consent, introducing phone curfews, and restricting addictive platform features. It also announced tougher guidance for schools, making it clear that pupils should not have access to mobile phones during lessons, breaks, or lunch. Ofsted inspectors will now check whether mobile phone bans are properly enforced during school inspections. Schools struggling to implement bans will receive one-to-one support from Attendance and Behaviour Hub schools. Although nearly all UK schools already have phone policies—99.9% of primary schools and 90% of secondary schools—58% of secondary pupils reported phones being used without permission in some lessons. Education Secretary Bridget Phillipson said: “Mobile phones have no place in schools. No ifs, no buts.”

Building on Existing Online Safety Laws

Technology Secretary Liz Kendall said the government is prepared to take further action beyond the Online Safety Act. “These laws were never meant to be the end point, and we know parents still have serious concerns,” she said. “We are determined to ensure technology enriches children’s lives, not harms them.” The Online Safety Act has already introduced age checks for adult sites and strengthened rules around harmful content. The government said the share of children encountering age checks online has risen from 30% to 47%, and 58% of parents believe the measures are improving safety. The proposed ban would build on this framework, focusing on features that drive excessive use regardless of content. Officials said evidence from around the world will be examined as they consider whether a UK-wide social media ban for children could work in practice. As Australia’s experience begins to unfold, the UK is positioning itself to decide whether similar restrictions could reshape how children engage with digital platforms. The consultation marks the start of what ministers describe as a long-term effort to ensure young people develop a healthier relationship with technology.

Nicole Ozer Joins CPPA to Drive Privacy and Digital Security Initiatives

14 January 2026 at 01:20

The California Privacy Protection Agency (CalPrivacy) has announced a significant leadership appointment: Assembly Speaker Robert Rivas has named Nicole Ozer to the CPPA Board, a move that underscores California’s ongoing commitment to strengthening consumer privacy protections. The Nicole Ozer appointment comes at a time when privacy regulation, digital rights, and responsible data governance are taking on increased importance across both state and federal institutions. Ozer brings decades of experience working at the intersection of privacy rights, technology, and democratic governance. She currently serves as the inaugural Executive Director of the Center for Constitutional Democracy at UC Law San Francisco, where her work focuses on safeguarding civil liberties in the digital age.

Nicole Ozer Appointment Strengthens CalPrivacy Board

Jennifer Urban, Chair of the California Privacy Protection Agency Board, welcomed the Nicole Ozer appointment, citing Ozer’s extensive background in privacy law, surveillance policy, artificial intelligence, and digital speech. “Nicole has a long history of service to Californians and deep legal and policy expertise,” Urban said. “Her knowledge will be a valuable asset to the agency as we continue advancing privacy protections across the state.” Urban also acknowledged the contributions of outgoing board member Dr. Brandie Nonnecke, noting her role in supporting CalPrivacy’s rulemaking, enforcement efforts, and public outreach initiatives over the past year. The CPPA Board plays a central role in guiding how California’s privacy laws are implemented and enforced, making leadership appointments especially critical as regulatory expectations evolve.

Nicole Ozer’s Background in Privacy and Civil Liberties

Before joining UC Law San Francisco, Nicole Ozer served as the founding Director of the Technology and Civil Liberties Program at the ACLU of Northern California. Her career also includes roles as a Technology and Human Rights Fellow at the Harvard Kennedy School, a Visiting Researcher at the Berkeley Center for Law and Technology, and a Fellow at Stanford’s Digital Civil Society Lab. Her work has been widely recognized, including a California Senate Members Resolution honoring her dedication to defending civil liberties in the digital world and her contributions to protecting the rights of people across California. “I appreciate the opportunity to serve on the CPPA Board,” Ozer said. “This is a critical moment to ensure that California’s robust privacy rights are meaningful in practice. I look forward to supporting the agency’s important work.”

Role of the California Privacy Protection Agency

The California Privacy Protection Agency is governed by a five-member board, with appointments made by the Governor, the Senate Rules Committee, the Assembly Speaker, and the Attorney General. The agency is responsible for administering and enforcing key privacy laws, including the California Consumer Privacy Act, the Delete Act, and the Opt Me Out Act. Beyond enforcement, CalPrivacy focuses on educating consumers and businesses about their rights and obligations. Through its website, Privacy.ca.gov, Californians can access guidance on protecting personal data, submitting delete requests, and using the Delete Request and Opt-out Platform (DROP).

Leadership Shifts Across Security and Privacy Institutions

Ozer’s appointment to the California Privacy Protection Agency Board comes in the same week as another notable leadership development at the federal level. The National Security Agency (NSA) announced the appointment of Timothy Kosiba as its 21st Deputy Director, highlighting parallel leadership changes shaping the future of privacy, cybersecurity, and national security. As NSA Deputy Director, Kosiba becomes the agency’s senior civilian leader, responsible for strategy execution, policy development, and operational oversight. His appointment was designated by Secretary of War Pete Hegseth and Director of National Intelligence Tulsi Gabbard, and formally approved by President Donald J. Trump. While the missions of the National Security Agency and the California Privacy Protection Agency differ, both appointments underline a growing emphasis on experienced leadership in institutions responsible for protecting sensitive data, infrastructure, and public trust. Together, these developments reflect how governance around privacy, cybersecurity, and digital rights continues to evolve, with leadership playing a central role in shaping how protections are implemented in practice.

After EU Probe, U.S. Senators Push Apple and Google to Review Grok AI

12 January 2026 at 02:01

Concerns surrounding Grok AI are escalating rapidly, with pressure now mounting in the United States after ongoing scrutiny in Europe. Three U.S. senators have urged Apple and Google to remove the X app and Grok AI from the Apple App Store and Google Play Store, citing the large-scale creation of nonconsensual sexualized images of real people, including children. The move comes as a direct follow-up to the European Commission’s investigation into Grok AI’s image-generation capabilities, marking a significant expansion of regulatory attention beyond the EU. While European regulators have openly weighed enforcement actions, U.S. authorities are now signaling that app distribution platforms may also bear responsibility.

U.S. Senators Cite App Store Policy Violations by Grok AI

In a letter dated January 9, 2026, Senators Ron Wyden, Ed Markey, and Ben Ray Luján formally asked Apple CEO Tim Cook and Google CEO Sundar Pichai to enforce their app store policies against X Corp. The lawmakers argue that Grok AI, which operates within the X app, has repeatedly violated rules governing abusive and exploitative content. According to the senators, users have leveraged Grok AI to generate nonconsensual sexualized images of women, depicting abuse, humiliation, torture, and even death. More alarmingly, the letter states that Grok AI has also been used to create sexualized images of children, content the senators described as both harmful and potentially illegal. The lawmakers emphasized that such activity directly conflicts with policies enforced by both the Apple App Store and Google Play Store, which prohibit content involving sexual exploitation, especially material involving minors.

Researchers Flag Potential Child Abuse Material Linked to Grok AI

The letter also references findings by independent researchers who identified an archive connected to Grok AI containing nearly 100 images flagged as potential child sexual abuse material. These images were reportedly generated over several months, raising questions about X Corp’s oversight and response mechanisms. The senators stated that X appeared fully aware of the issue, pointing to public reactions by Elon Musk, who acknowledged reports of Grok-generated images with emoji responses. In their view, this signaled a lack of seriousness in addressing the misuse of Grok AI.

Premium Restrictions Fail to Calm Controversy

In response to the backlash, X recently limited Grok AI’s image-generation feature to premium subscribers. However, the senators dismissed this move as inadequate. Sen. Wyden said the change merely placed a paywall around harmful behavior rather than stopping it, arguing that it allowed the production of abusive content to continue while generating revenue. The lawmakers stressed that restricting access does not absolve X of responsibility, particularly when nonconsensual sexualized images remain possible through the platform.

Pressure Mounts on Apple App Store and Google Play Store

The senators warned that allowing the X app and Grok AI to remain available on the Apple App Store and Google Play Store would undermine both companies’ claims that their platforms offer safer environments than alternative app distribution methods. They also pointed to recent instances where Apple and Google acted swiftly to remove other controversial apps under government pressure, arguing that similar urgency should apply in the case of Grok AI. At minimum, the lawmakers said, temporary removal of the apps would be appropriate while a full investigation is conducted. They requested a written response from both companies by January 23, 2026, outlining how Grok AI and the X app are being assessed under existing policies. Apple and Google have not publicly commented on the letter, and X has yet to issue a formal response. The latest development adds momentum to global scrutiny of Grok AI, reinforcing concerns already raised by the European Commission. Together, actions in the U.S. and Europe signal a broader shift toward holding AI platforms, and the app ecosystems that distribute them, accountable for how generative technologies are deployed and controlled at scale.

UK Moves to Close Public Sector Cyber Gaps With Government Cyber Action Plan

The UK government has revealed the Government Cyber Action Plan as a renewed effort to close the growing gap between escalating cyber threats and the public sector’s ability to respond effectively. The move comes amid a series of cyberattacks targeting UK retail and manufacturing sectors, incidents that have underscored broader vulnerabilities affecting critical services and government operations. Designed to strengthen UK cyber resilience, the plan reflects a shift from fragmented cyber initiatives to a more coordinated, accountable, and outcomes-driven approach across government departments.

A Growing Gap Between Threats and Defences

Recent cyber incidents have highlighted a persistent challenge: while threats to public services continue to grow in scale and sophistication, defensive capabilities have not kept pace. Reviews conducted by the Department for Science, Innovation and Technology (DSIT) revealed that cyber and digital resilience across the public sector was significantly lower than previously assessed. This assessment was reinforced by the National Audit Office’s report on government cyber resilience, which warned that without urgent improvements, the government risks serious incidents and operational disruption. The report concluded that the public sector must “catch up with the acute cyber threat it faces” to protect services and ensure value for money.

Building on Existing Foundations

The Government Cyber Action Plan builds on earlier collaborative efforts between DSIT, the National Cyber Security Centre (NCSC), and the Cabinet Office. Notable achievements to date include the establishment of the Government Cyber Coordination Centre (GC3), created to manage cross-government incident response, and the rollout of GovAssure, a scheme designed to assess the security of government-critical systems. Despite these initiatives, officials acknowledged that structural issues, inconsistent governance, and limited accountability continued to hinder effective cyber risk management. GCAP is intended to address these gaps directly.

Five Delivery Strands of the Government Cyber Action Plan

At the core of the Government Cyber Action Plan are five delivery strands aimed at strengthening accountability and improving operational resilience across departments. The first strand focuses on accountability, placing clearer responsibility for cyber risk management on accounting officers, senior leaders, Chief Digital and Information Officers (CDIOs), and Chief Information Security Officers (CISOs). The second strand emphasises support, providing departments with access to shared cyber expertise and the rapid deployment of technical teams during high-risk situations. Under the services strand, GCAP promotes the development of secure digital solutions that can be built once and used across multiple departments. This approach is intended to reduce duplication, improve consistency, and address capability gaps through innovation, including initiatives such as the NCSC’s ACD 2.0 programme. Response is another key focus, with the introduction of the Government Cyber Incident Response Plan (G-CIRP). This framework formalises how departments report and respond to cyber incidents, improving coordination during national-level events. The final strand addresses skills, aiming to attract, develop, and retain cyber professionals across government. Central to this effort is the creation of a Government Cyber Security Profession—the first dedicated government profession focused specifically on cyber security and resilience.

Role of the NCSC and Long-Term Impact

The NCSC will play a central role across all five strands of the Government Cyber Action Plan, from supporting departments during incidents to helping design services that improve resilience. This approach aligns with the NCSC’s existing work with critical national infrastructure and public sector organisations, offering technical guidance, assurance, and incident response support. While GCAP’s implementation will be phased through to 2029 and beyond, officials say the framework is expected to deliver measurable improvements even in its first year. These include stronger risk management practices and faster coordination during cyber incidents. According to Johnny McManus, Deputy Director for Government Cyber Resilience at the NCSC, the combination of DSIT’s delivery leadership and the NCSC’s technical authority provides a foundation for transforming UK cyber resilience across the public sector.

Trump Orders US Exit from Global Cyber and Hybrid Threat Coalitions

8 January 2026 at 06:13

President Donald Trump has ordered the immediate withdrawal of the United States from several premier international bodies dedicated to cybersecurity, digital human rights, and countering hybrid warfare, as part of a major restructuring of American defense and diplomatic posture. The directive is part of a memorandum issued on Monday, targeting 66 international organizations deemed "contrary to the interests of the United States."

While the memorandum’s cuts to climate and development sectors have grabbed headlines, national security experts are likely to be more worried about the targeted dismantling of U.S. participation in key security alliances in the digital realm. The President has explicitly directed withdrawal from the European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE), the Global Forum on Cyber Expertise (GFCE), and the Freedom Online Coalition (FOC).

"I have considered the Secretary of State’s report... and have determined that it is contrary to the interests of the United States to remain a member," President Trump said. The U.S. Secretary of State Marco Rubio backed POTUS' move calling these coalitions "wasteful, ineffective, and harmful."

"These institutions (are found) to be redundant in their scope, mismanaged, unnecessary, wasteful, poorly run, captured by the interests of actors advancing their own agendas contrary to our own, or a threat to our nation’s sovereignty, freedoms, and general prosperity," Rubio said. "President Trump is clear: It is no longer acceptable to be sending these institutions the blood, sweat, and treasure of the American people, with little to nothing to show for it. The days of billions of dollars in taxpayer money flowing to foreign interests at the expense of our people are over."

Dismantling the Hybrid Defense Shield

Perhaps the most significant strategic loss is the U.S. exit from the European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE). Based in Helsinki, the Hybrid CoE is unique as the primary operational bridge between NATO and the European Union.

The Centre was established to analyze and counter "hybrid" threats: ambiguous, non-military attacks such as election interference, disinformation campaigns, and economic coercion, tactics frequently attributed to state actors like Russia and China. By withdrawing, the U.S. is effectively weakening the shared intelligence and coordinated response mechanisms that European allies rely on to detect these sub-threshold attacks. U.S. participation was seen as a key deterrent; without it, the trans-Atlantic unified front against hybrid warfare could be severely fractured.

Also read: Russia-Linked Hybrid Campaign Targeted 2024 Elections: Romanian Prosecutor General

Abandoning Global Cyber Capacity Building

The administration is also pulling out of the Global Forum on Cyber Expertise (GFCE). Unlike a military alliance, the GFCE is a pragmatic, multi-stakeholder platform that consists of 260+ members and partners bringing together governments, private tech companies, and NGOs to build cyber capacity in developing nations.

The GFCE’s mission is to strengthen global cyber defenses by helping nations develop their own incident response teams, cyber crime laws, and critical infrastructure protection. A U.S. exit here opens a power vacuum. As the U.S. retreats from funding and guiding the capacity-building efforts, rival powers may step in to offer their own support, potentially embedding authoritarian standards into the digital infrastructure of the Global South.

The GFCE, however, thinks otherwise. A GFCE spokesperson told The Cyber Express, "(It) respects the decision of the US government and recognizes the United States as one of the founding members of the GFCE since 2015."

"The US has been an important contributor to international cyber capacity building efforts over time," the spokesperson added when asked about US' role in the Forum. However the pull-out won't be detrimental as "the GFCE’s work is supported by a broad and diverse group of members and partners. The GFCE remains operational and committed to continuing its mission."

A Blow to Internet Freedom

Finally, the withdrawal from the Freedom Online Coalition (FOC) marks an ideological shift. The FOC is a partnership of 42 governments committed to advancing human rights online, specifically fighting against internet shutdowns, censorship, and digital authoritarianism.

The U.S. has historically been a leading voice in the FOC, using the coalition to pressure regimes that restrict internet access or persecute digital dissidents. Leaving the FOC suggests the Trump administration is deprioritizing the promotion of digital human rights as a foreign policy objective. This could embolden authoritarian regimes to tighten control over their domestic internets without fear of a coordinated diplomatic backlash from the West.

The "America First" Cyber Doctrine

The administration argues these withdrawals are necessary to stop funding globalist bureaucracies that constrain U.S. action. By exiting, the White House aims to reallocate resources to bilateral partnerships where the U.S. can exert more direct leverage. However, critics could argue that in the interconnected domain of cyberspace, isolation is a vulnerability. By ceding the chair at these tables, the United States may find itself writing the rules of the next digital conflict alone, while the rest of the world—friend and foe alike—organizes without it.

The article was updated to include the GFCE spokesperson's response and U.S. Secretary of State Marco Rubio's statement.

Also read: Trump’s Team Removes TSA Leader Pekoske as Cyber Threats Intensify

Beyond Compliance: How India’s DPDP Act Is Reshaping the Cyber Insurance Landscape

19 December 2025 at 00:38

DPDP Act Is Reshaping the Cyber Insurance Landscape

By Gauravdeep Singh, Head – State e-Mission Team (SeMT), Ministry of Electronics and Information Technology

The Digital Personal Data Protection (DPDP) Act has fundamentally altered the risk landscape for Indian organisations. Data breaches now trigger mandatory compliance obligations regardless of their origin, transforming incidents that were once purely operational concerns into regulatory events with significant financial and legal implications.

Case Study 1: Cloud Misconfiguration in a Consumer Platform

A prominent consumer-facing platform experienced a data exposure incident when a misconfigured storage bucket on its public cloud infrastructure inadvertently made customer data publicly accessible. While no malicious actor was involved, the incident still constituted a reportable data breach under the DPDP Act framework. The organisation faced several immediate obligations:
  • Notification to affected individuals within prescribed timelines
  • Formal reporting to the Data Protection Board
  • Comprehensive internal investigation and remediation measures
  • Potential penalties for failure to implement reasonable security safeguards as mandated under the Act
Such incidents highlight a critical gap in traditional risk management approaches. The financial exposure—encompassing regulatory penalties, legal costs, remediation expenses, and reputational damage—frequently exceeds conventional cyber insurance coverage limits, particularly when compliance failures are implicated.
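For teams turning these obligations into process, even a minimal tracker makes deadlines visible. The sketch below is illustrative only: the 72-hour windows, class names, and fields are assumptions chosen for demonstration, not timelines prescribed by the DPDP Act or Rules.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed windows for illustration only -- verify actual timelines against
# the notified DPDP Rules before relying on them.
BOARD_REPORT_WINDOW = timedelta(hours=72)
PRINCIPAL_NOTICE_WINDOW = timedelta(hours=72)

@dataclass
class BreachRecord:
    detected_at: datetime
    description: str
    board_notified: bool = False
    principals_notified: bool = False

    def overdue_actions(self, now: datetime) -> list:
        """Return compliance actions whose assumed windows have lapsed."""
        overdue = []
        if not self.board_notified and now > self.detected_at + BOARD_REPORT_WINDOW:
            overdue.append("Report to the Data Protection Board")
        if not self.principals_notified and now > self.detected_at + PRINCIPAL_NOTICE_WINDOW:
            overdue.append("Notify affected data principals")
        return overdue

# Usage: flag lapsed obligations for the misconfigured-bucket incident.
incident = BreachRecord(datetime(2026, 1, 5, 9, 0), "Public cloud bucket exposure")
print(incident.overdue_actions(datetime(2026, 1, 9, 9, 0)))
# -> ['Report to the Data Protection Board', 'Notify affected data principals']
```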

Case Study 2: Ransomware Attack on Healthcare and EdTech Infrastructure

A mid-sized healthcare and education technology provider fell victim to a ransomware attack that encrypted sensitive personal records. Despite successful restoration from backup systems, the organisation confronted extensive regulatory and operational obligations:
  • Forensic assessment to determine whether data confidentiality was compromised
  • Mandatory notification to regulatory authorities and affected data principals
  • Ongoing legal and compliance proceedings
The total cost extended far beyond any ransom demand. Forensic investigations, legal advisory services, public communications, regulatory compliance activities, and operational disruption collectively created substantial financial strain; these costs could have been mitigated with appropriate insurance coverage.

Case Study 3: AI-Enabled Fraud and Social Engineering

The emergence of AI-driven attack vectors has introduced new dimensions of cyber risk. Deepfake technology and sophisticated phishing campaigns now enable threat actors to impersonate senior leadership with unprecedented authenticity, compelling finance teams to authorise fraudulent fund transfers or inappropriate data disclosures. These attacks often circumvent traditional technical security controls because they exploit human trust rather than system vulnerabilities. As a result, organisations are increasingly seeking insurance coverage for social engineering and cyber fraud events, particularly those involving personal data or financial information, that fall outside conventional cybersecurity threat models.

The Evolution of Cyber Insurance in India

The Indian cyber insurance market is undergoing significant transformation in response to the DPDP Act and evolving threat landscape. Modern policies now extend beyond traditional hacking incidents to address:
  • Data breaches resulting from human error or operational failures
  • Third-party vendor and SaaS provider security failures
  • Cloud service disruptions and availability incidents
  • Regulatory investigation costs and legal defense expenses
  • Incident response, crisis management, and public relations support
Organisations are reassessing their coverage adequacy as they recognise that historical policy limits of Rs. 10–20 crore may prove insufficient when regulatory penalties, legal costs, business interruption losses, and remediation expenses are aggregated under the DPDP compliance framework.

The SME and MSME Vulnerability

Small and medium enterprises represent the most vulnerable segment of the market. While many SMEs and MSMEs regularly process personal data, they frequently lack:
  • Mature information security controls and governance frameworks
  • Dedicated compliance and data protection teams
  • Financial reserves to absorb penalties, legal costs, or operational disruption
For organisations in this segment, even a relatively minor cyber incident can trigger prolonged operational shutdowns or, in severe cases, permanent closure. Despite this heightened vulnerability, cyber insurance adoption among SMEs remains disproportionately low, driven primarily by awareness gaps and perceived cost barriers.

Implications for the Cyber Insurance Ecosystem

The Indian cyber insurance market is entering a period of accelerated growth and structural evolution. Several key trends are emerging:
  • Higher policy limits becoming standard practice across industries
  • Enhanced underwriting processes emphasising compliance readiness and data governance maturity
  • Comprehensive coverage integrating legal advisory, forensic investigation, and regulatory support
  • Risk-based pricing models that reward robust data protection practices
Looking ahead, cyber insurance will increasingly be evaluated not merely as a risk-transfer mechanism, but as an indicator of an organisation's overall data protection posture and regulatory preparedness.

DPDP Act and the End of Optional Cyber Insurance

The DPDP Act has fundamentally redefined cyber risk in the Indian context. Data breaches are no longer isolated IT failures; they are regulatory events carrying substantial financial, legal, and reputational consequences. In this environment, cyber insurance is transitioning from a discretionary safeguard to a strategic imperative. Organisations that integrate cyber insurance into a comprehensive data governance and enterprise risk management strategy will be better positioned to navigate the evolving regulatory landscape. Conversely, those that remain uninsured or underinsured may discover that the cost of inadequate preparation far exceeds the investment required for robust protection. (This article reflects the author’s analysis and personal viewpoints and is intended for informational purposes only. It should not be construed as legal or regulatory advice.)

FBI Seizes E-Note Crypto Exchange Linked to Ransomware Money Laundering

18 December 2025 at 04:15

FBI Seizes E-Note Crypto Exchange

The FBI E-Note cryptocurrency exchange takedown marks a major international law enforcement action against financial infrastructure allegedly used by transnational cybercriminal groups. The U.S. Department of Justice confirmed on Wednesday that the FBI, working with partners in Germany and Finland, disrupted and seized the online infrastructure of E-Note, a cryptocurrency exchange accused of laundering illicit funds linked to ransomware attacks and account takeovers. According to the United States Attorney’s Office for the Eastern District of Michigan, the coordinated operation targeted websites and servers used to operate E-Note, which allegedly provided cash-out services for cybercriminals targeting U.S. healthcare organizations and critical infrastructure. [Image: FBI seizes E-Note crypto exchange. Source: justice.gov] “The United States Attorney’s Office for the Eastern District of Michigan announced today a coordinated action with international partners and the Michigan State Police to disrupt and take down the online infrastructure used to operate E-Note, a cryptocurrency exchange that allegedly facilitated money laundering by transnational cyber-criminal organizations,” the Justice Department said.

E-Note Allegedly Laundered Over $70 Million in Illicit Funds

Investigators say the FBI E-Note cryptocurrency exchange takedown follows years of financial tracking by federal authorities. Since 2017, the FBI has identified more than $70 million in illicit proceeds transferred through the E-Note payment service and its associated money mule network. These funds were allegedly tied to ransomware attacks and account takeovers, including proceeds stolen or extorted from victims in the United States. “Since 2017, the FBI identified more than $70,000,000 of illicit proceeds of ransomware attacks and account takeovers transferred via E-Note payment service and money mule network,” the DOJ stated. Authorities believe the exchange played a key role in converting cryptocurrency into various cash currencies, allowing cybercriminals to move funds across international borders while avoiding detection.

Russian National Charged in Money Laundering Conspiracy

As part of the operation, U.S. prosecutors unsealed an indictment against Mykhalio Petrovich Chudnovets, a 39-year-old Russian national. Chudnovets is charged with one count of conspiracy to launder monetary instruments, an offense that carries a maximum sentence of 20 years in prison. According to court documents, Chudnovets began offering money laundering services to cybercriminals as early as 2010. Prosecutors allege that he controlled and operated the E-Note payment processing service until law enforcement seized its infrastructure. “Until this seizure by law enforcement, Chudnovets offered money laundering services via the E-Note payment processing service, which he controlled and operated,” the DOJ said. Investigators allege that Chudnovets worked closely with financially motivated cybercriminals to transfer criminal proceeds internationally and convert cryptocurrency into cash.

Servers, Websites, and Apps Seized in Coordinated Action

During the FBI E-Note cryptocurrency exchange takedown, U.S. and international authorities seized servers hosting the operation, as well as related mobile applications. Law enforcement also took control of the websites “e-note.com,” “e-note.ws,” and “jabb.mn.” U.S. authorities separately obtained earlier copies of Chudnovets’ servers, which included customer databases and transaction records, providing investigators with detailed insight into the alleged laundering activity. The Justice Department confirmed that the action was carried out with support from the German Federal Criminal Police Office, the Finnish National Bureau of Investigation, and the Michigan State Police Michigan Cyber Command Center (MC3).

Investigation Led by FBI Detroit Cyber Task Force

The case is being investigated by the FBI Detroit Cyber Task Force, with Assistant U.S. Attorney Timothy Wyse prosecuting. The announcement was made jointly by United States Attorney Jerome F. Gorgon, Jr. and Jennifer Runyan, Special Agent in Charge of the FBI’s Detroit Division. Authorities emphasized that individuals who believe their funds were laundered through E-Note should contact law enforcement. “Any individual who believes he/she is a victim whose funds were laundered through Chudnovets should reach out to law enforcement via email address e-note-information@fbi.gov,” the DOJ said. The Justice Department also noted that the indictment remains an allegation. “An indictment is merely an allegation. All defendants are presumed innocent until proven guilty beyond a reasonable doubt in a court of law.”

8 Ways the DPDP Act Will Change How Indian Companies Handle Data in 2026 

16 December 2025 at 01:16

DPDP Act

For years, data privacy in India lived in a grey zone. Mobile numbers demanded at checkout counters. Aadhaar photocopies lying unattended in hotel drawers. Marketing messages that arrived long after you stopped using a service. Most of us accepted this as normal, until the law caught up.  That moment has arrived.  The Digital Personal Data Protection Act (DPDP Act), 2023, backed by the Digital Personal Data Protection Rules, 2025 notified by the Ministry of Electronics and Information Technology (MeitY) on 13 November 2025, marks a decisive shift in how personal data must be treated in India. As the country heads into 2026, businesses are entering the most critical phase: execution.  Companies now have an 18-month window to re-engineer systems, processes, and accountability frameworks across IT, legal, HR, marketing, and vendor ecosystems. The change is not cosmetic. It is structural.  As Sandeep Shukla, Director, International Institute of Information Technology Hyderabad (IIIT Hyderabad), puts it bluntly: 
“Well, I can say that Indian Companies so far has been rather negligent of customer's privacy. Anywhere you go, they ask for your mobile number.” 
The DPDP Act is designed to ensure that such casual indifference to personal data does not survive the next decade.  Below are eight fundamental ways the DPDP Act will change how Indian companies handle data in 2026, with real-world implications for businesses, consumers, and the digital economy.

1. Privacy Will Move from the Back Office to the Boardroom 

Until now, data protection in Indian organizations largely sat with compliance teams or IT security. That model will not hold in 2026.  The DPDP framework makes senior leadership directly accountable for how personal data is handled, especially in cases of breaches or systemic non-compliance. Privacy risk will increasingly be treated like financial or operational risk. 
According to Shashank Bajpai, CISO & CTSO at YOTTA, “The DPDP Act (2023) becomes operational through Rules notified in November 2025; the result is a staggered compliance timetable that places 2026 squarely in the execution phase. That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.” 
In 2026, privacy decisions will increasingly sit with boards, CXOs, and risk committees. Metrics such as consent opt-out rates, breach response time, and third-party risk exposure will become leadership-level conversations, not IT footnotes.

2. Consent Will Become Clear, Granular, and Reversible

One of the most visible changes users will experience is how consent is sought.  Under the DPDP Act, consent must be specific, informed, unambiguous, and easy to withdraw. Pre-ticked boxes and vague “by using this service” clauses will no longer be enough. 
As Gauravdeep Singh, State Head (Digital Transformation), e-Mission Team, MeitY, explains, “Data Principal = YOU.” 
Whether it’s a food delivery app requesting location access or a fintech platform processing transaction history, individuals gain the right to control how their data is used—and to change their mind later.
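To make "granular and reversible" concrete, consent can be modelled as one record per user per purpose, with withdrawal as a first-class operation. The following is a minimal sketch under assumed field names and structure, not a schema prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    """One record per data principal per processing purpose (illustrative)."""
    principal_id: str
    purpose: str                              # e.g. "location-based delivery"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None   # withdrawal as easy as granting

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self, when: datetime) -> None:
        self.withdrawn_at = when

# Granular: location consent is withdrawn without touching payment consent.
location = ConsentRecord("user-42", "location tracking", datetime(2026, 1, 10))
payments = ConsentRecord("user-42", "transaction analysis", datetime(2026, 1, 10))
location.withdraw(datetime(2026, 3, 1))
print(location.active, payments.active)  # False True
```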

3. Data Hoarding Will Turn into a Liability 

For many Indian companies, collecting more data than necessary was seen as harmless. Under the DPDP Act, it becomes risky.  Organizations must now define why data is collected, how long it is retained, and how it is securely disposed of. If personal data is no longer required for a stated purpose, it cannot simply be stored indefinitely. 
Shukla highlights how deeply embedded poor practices have been, “Hotels take your aadhaar card or driving license and copy and keep it in the drawers inside files without ever telling the customer about their policy regarding the disposal of such PII data safely and securely.” 
In 2026, undefined retention is no longer acceptable.

4. Third-Party Vendors Will Come Under the Scanner

Data processors such as cloud providers, payment gateways, and CRM platforms will no longer operate in the shadows.  The DPDP Act clearly distinguishes between Data Fiduciaries (companies that decide how data is used) and Data Processors (those that process data on their behalf). Fiduciaries remain accountable, even if the breach occurs at a vendor.  This will force companies to: 
  • Audit vendors regularly 
  • Rewrite contracts with DPDP clauses 
  • Monitor cross-border data flows 
As Shukla notes, “The shops, E-commerce establishments, businesses, utilities collect so much customer PII, and often use third party data processor for billing, marketing and outreach. We hardly ever get to know how they handle the data.” 
In 2026, companies will be required to audit vendors, strengthen contracts, and ensure processors follow DPDP-compliant practices, because liability remains with the fiduciary.

5. Breach Response Will Be Timed, Tested, and Visible

Data breaches are no longer just technical incidents, they are legal events.  The DPDP Rules require organizations to detect, assess, and respond to breaches with defined processes and accountability. Silence or delay will only worsen regulatory consequences. 
As Bajpai notes, “The practical effect is immediate: companies must move from policy documents to implemented consent systems, security controls, breach workflows, and vendor governance.” 
Tabletop exercises, breach simulations, and forensic readiness will become standard—not optional. 

6. Significant Data Fiduciaries (SDFs) Will Face Heavier Obligations 

Not all companies are treated equally under the DPDP Act. Significant Data Fiduciaries (SDFs), those handling large volumes of sensitive personal data, will face stricter obligations, including: 
  • Data Protection Impact Assessments 
  • Appointment of India-based Data Protection Officers 
  • Regular independent audits 
Global platforms like Meta, Google, Amazon, and large Indian fintechs will feel the pressure first, but the ripple effect will touch the entire ecosystem.

7. A New Privacy Infrastructure Will Emerge

The DPDP framework is not just regulation—it is ecosystem building. 
As Bajpai observes, “This is not just regulation; it is an economic strategy to build domestic capability in cloud, identity, security and RegTech.” 
Consent Managers, auditors, privacy tech vendors, and compliance platforms will grow rapidly in 2026. For Indian startups, DPDP compliance itself becomes a business opportunity.

8. Trust Will Become a Competitive Advantage

Perhaps the biggest change is psychological. In 2026, users will increasingly ask: 
  • Why does this app need my data? 
  • Can I withdraw consent? 
  • What happens if there’s a breach? 
One Reddit user captured the risk succinctly, “On paper, the DPDP Act looks great… But a law is only as strong as public awareness around it.” 
Companies that communicate transparently and respect user choice will win trust. Those that don’t will lose customers long before regulators step in. 

Preparing for 2026: From Awareness to Action 

As Hareesh Tibrewala, CEO at Anhad, notes, “Organizations now have the opportunity to prepare a roadmap for DPDP implementation.”
For many businesses, however, the challenge lies in turning awareness into action, especially when clarity around timelines and responsibilities is still evolving.  The concern extends beyond citizens to companies themselves, many of which are still grappling with core concepts such as consent management, data fiduciary obligations, and breach response requirements. With penalties tiered by the nature and severity of violations, ranging from significant fines to amounts running into hundreds of crores, this lack of understanding could prove costly.  In 2026, regulators will no longer be looking for intent; they will be looking for evidence of execution. As Bajpai points out, “That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.” 

What Companies Should Do Now: A Practical DPDP Act Readiness Checklist 

As India moves closer to full DPDP enforcement, organizations that act early will find compliance far less disruptive. At a minimum, businesses should focus on the following steps: 
  • Map personal data flows: Identify what personal data is collected, where it resides, who has access to it, and which third parties process it. 
  • Review consent mechanisms: Ensure consent requests are clear, purpose-specific, and easy to withdraw, across websites, apps, and internal systems. 
  • Define retention and deletion policies: Establish how long different categories of personal data are retained and document secure disposal processes. 
  • Assess third-party risk: Audit vendors, cloud providers, and processors to confirm DPDP-aligned controls and contractual obligations. 
  • Strengthen breach response readiness: Put tested incident response and notification workflows in place, not just policies on paper. 
  • Train employees across functions: Build awareness beyond IT and legal teams, privacy failures often begin with everyday operational mistakes. 
  • Assign ownership and accountability: Clearly define who is responsible for DPDP compliance, reporting, and ongoing monitoring. 
These steps are not about ticking boxes; they are about building muscle memory for a privacy-first operating environment. 
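As one way to begin the first item on that checklist, a data-flow inventory can start life as a structured record per data category. The schema and sample entries below are illustrative assumptions, not a mandated format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataFlow:
    """One row of a personal-data inventory (illustrative schema)."""
    category: str          # e.g. "customer phone numbers"
    source: str            # where it is collected
    storage: str           # where it resides
    processors: tuple      # third parties that touch it
    retention_days: int    # documented retention period; 0 = undefined

inventory = [
    DataFlow("customer phone numbers", "checkout form", "CRM database",
             ("sms-gateway-vendor",), 365),
    DataFlow("KYC documents", "onboarding app", "object storage",
             ("kyc-verification-vendor",), 0),
]

# Undefined retention is exactly what the DPDP framework treats as a risk.
for flow in inventory:
    if flow.retention_days <= 0:
        print(f"Review retention policy for: {flow.category}")
```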

2026 Is the Year Privacy Becomes Real 

The DPDP Act does not promise instant perfection. What it demands is accountability.  By 2026, privacy will move from policy documents to product design, from legal fine print to leadership dashboards, and from reactive fixes to proactive governance. Organizations that delay will not only face regulatory penalties, but they also risk losing customer trust in an increasingly privacy-aware market. 
As Sandeep Shukla cautions, “It will probably take years before a proper implementation at all levels of organizations would be seen.” 
But the direction is clear. Personal data in India can no longer be treated casually.  The DPDP Act marks the end of informal data handling, and the beginning of a more disciplined, transparent, and accountable digital economy. 

FBI Cautions Alaskans Against Phone Scams Using Fake Arrest Threats

15 December 2025 at 06:49

FBI Warns

The FBI Anchorage Field Office has issued a public warning after seeing a sharp increase in fraud cases targeting residents across Alaska. According to federal authorities, scammers are posing as law enforcement officers and government officials in an effort to extort money or steal sensitive personal information from unsuspecting victims.

The warning comes as reports continue to rise involving unsolicited phone calls where criminals falsely claim to represent agencies such as the FBI or other local, state, and federal law enforcement bodies operating in Alaska. These scams fall under a broader category of law enforcement impersonation scams, which rely heavily on fear, urgency, and deception.

How the Phone Scam Works

Scammers typically contact victims using spoofed phone numbers that appear legitimate. In many cases, callers accuse individuals of failing to report for jury duty or missing a court appearance. Victims are then told that an arrest warrant has been issued in their name.

To avoid immediate arrest or legal consequences, the caller demands payment of a supposed fine. Victims are pressured to act quickly, often being told they must resolve the issue immediately. According to the FBI, these criminals may also provide fake court documents or reference personal details about the victim to make the scam appear more convincing.

In more advanced cases, scammers may use artificial intelligence tools to enhance their impersonation tactics. This includes generating realistic voices or presenting professionally formatted documents that appear to come from official government sources. These methods have contributed to the growing sophistication of government impersonation scams nationwide.

Common Tactics Used by Scammers

Authorities note that these scams most often occur through phone calls and emails. Criminals commonly use aggressive language and insist on speaking only with the targeted individual. Victims are often told not to discuss the call with family members, friends, banks, or law enforcement agencies.

Payment requests are another key red flag. Scammers typically demand money through methods that are difficult to trace or reverse. These include cash deposits at cryptocurrency ATMs, prepaid gift cards, wire transfers, or direct cryptocurrency payments. The FBI has emphasized that legitimate government agencies never request payment through these channels.

FBI Clarifies What Law Enforcement Will Not Do

The FBI has reiterated that it does not call members of the public to demand payment or threaten arrest over the phone. Any call claiming otherwise should be treated as fraudulent. This clarification is central to the broader FBI scam warning that Alaska residents are being urged to take seriously.

Impact of Government Impersonation Scams

Data from the FBI’s Internet Crime Complaint Center (IC3) highlights the scale of the problem. In 2024 alone, IC3 received more than 17,000 complaints related to government impersonation scams across the United States. Reported losses from these incidents exceeded $405 million nationwide.

Alaska has not been immune. Reported victim losses in the state surpassed $1.3 million, underscoring the financial and emotional impact these scams can have on individuals and families.

How Alaskans Can Protect Themselves

To reduce the risk of falling victim, the FBI urges residents to “take a beat” before responding to any unsolicited communication. Individuals should resist pressure tactics and take time to verify claims independently.

The FBI strongly advises against sharing or confirming personally identifiable information with anyone contacted unexpectedly. Alaskans are also cautioned never to send money, gift cards, cryptocurrency, or other assets in response to unsolicited demands.

What to Do If You Are Targeted

Anyone who believes they may have been targeted or victimized should immediately stop communicating with the scammer. Victims should notify their financial institutions, secure their accounts, contact local law enforcement, and file a complaint with the FBI’s Internet Crime Complaint Center at www.ic3.gov. Prompt reporting can help limit losses and prevent others from being targeted.

City of Cambridge Advises Password Reset After Nationwide CodeRED Data Breach

12 December 2025 at 00:56

City of Cambridge

The City of Cambridge has released an important update regarding the OnSolve CodeRED emergency notifications system, also known locally as Cambridge’s reverse 911 system. The platform, widely used by thousands of local governments and public safety agencies across the country, was taken offline in November following a nationwide OnSolve CodeRED cyberattack. Residents who rely on CodeRED alerts for information about snow emergencies, evacuations, water outages, or other service disruptions are being asked to take immediate steps to secure their accounts and continue receiving notifications.

Impact of the OnSolve CodeRED Cyberattack on User Data

According to city officials, the data breach affected CodeRED databases nationwide, including Cambridge. The compromised information may include phone numbers, email addresses, and passwords of registered users. Importantly, the attack targeted the OnSolve CodeRED system itself, not the City of Cambridge or its departments. This OnSolve CodeRED cyberattack incident mirrors similar concerns raised in Monroe County, Georgia, where officials confirmed that residents’ personal information was also exposed. The Monroe County Emergency Management Agency emphasized that the breach was part of a nationwide cybersecurity incident and not a local failure.

Transition to CodeRED by Crisis24

In response, OnSolve permanently decommissioned the old CodeRED platform and migrated services to a new, secure environment known as CodeRED by Crisis24. The new system has undergone comprehensive security audits, including penetration testing and system hardening, to ensure stronger protection against future threats. For Cambridge residents, previously registered contact information has been imported into the new platform. However, due to security concerns, all passwords have been removed. Users must now reset their credentials before accessing their accounts.

Steps for City of Cambridge Residents and Users

To continue receiving emergency notifications, residents should:
  • Visit accountportal.onsolve.net/cambridgema
  • Enter their username (usually an email address)
  • Select “forgot password” to verify and reset credentials
  • If unsure of their username, use the “forgot username” option
Officials strongly advise against reusing old CodeRED passwords, as they may have been compromised. Instead, users should create strong, unique passwords and update their information once logged in. Additionally, anyone who used the same password across multiple accounts is urged to change those credentials immediately to reduce the risk of further exposure.
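For readers creating the replacement credential, a password manager is the simplest option; as an illustration of what "strong and unique" means in practice, a random password can also be generated with Python's standard secrets module. This is a minimal sketch, not official guidance from the City or OnSolve.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run, never reused across sites
```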

Broader National Context

The Monroe County cyberattack highlights the scale of the issue. Officials there reported that data such as names, addresses, phone numbers, and passwords were compromised. Residents who enrolled before March 31, 2025, had their information migrated to the new Crisis24 CodeRED platform, while those who signed up afterward must re‑enroll. OnSolve has reassured communities that the intrusion was contained within the original system and did not spread to other networks. While there is currently no evidence of identity theft, the incident underscores the growing risks of cyber intrusions nationwide.

Resources for Cybersecurity Protection

Residents who believe they may have been victims of cyber‑enabled fraud are encouraged to report incidents to the FBI Internet Crime Complaint Center (IC3) at ic3.gov. Additional resources are available to help protect individuals and families from fraud and cybercrime. Security experts note that the rising frequency of attacks highlights the importance of independent threat‑intelligence providers. Companies such as Cyble track vulnerabilities and cybercriminal activity across global networks, offering organizations tools to strengthen defenses and respond more quickly to incidents.

Looking Ahead

The City of Cambridge has thanked residents for their patience as staff worked with OnSolve to restore emergency alert capabilities. Officials emphasized that any breach of security is a serious concern and confirmed that they will continue monitoring the new CodeRED by Crisis24 platform to ensure its standards are upheld. In addition, the City is evaluating other emergency alerting systems to determine the most effective long‑term solution for community safety.

Federal Grand Jury Charges Former Manager with Government Contractor Fraud

11 December 2025 at 04:16

Government Contractor Fraud

Government contractor fraud is at the heart of a new indictment returned by a federal grand jury in Washington, D.C. against a former senior manager in Virginia. Prosecutors say Danielle Hillmer, 53, of Chantilly, misled federal agencies for more than a year about the security of a cloud platform used by the U.S. Army and other government customers. The indictment, announced yesterday, charges Hillmer with major government contractor fraud, wire fraud, and obstruction of federal audits. According to prosecutors, she concealed serious weaknesses in the system while presenting it as fully compliant with strict federal cybersecurity standards.

Government Contractor Fraud: Alleged Scheme to Mislead Agencies

According to court documents, Hillmer’s actions spanned from March 2020 through November 2021. During this period, she allegedly obstructed auditors and misrepresented the platform’s compliance with the Federal Risk and Authorization Management Program (FedRAMP) and the Department of Defense’s Risk Management Framework. The indictment claims that while the platform was marketed as a secure environment for federal agencies, it lacked critical safeguards such as access controls, logging, and monitoring. Despite repeated warnings, Hillmer allegedly insisted the system met the FedRAMP High baseline and DoD Impact Levels 4 and 5, both of which are required for handling sensitive government data.

Obstruction of Audits

Federal prosecutors allege Hillmer went further by attempting to obstruct third-party assessors during audits in 2020 and 2021. She is accused of concealing deficiencies and instructing others to hide the true state of the system during testing and demonstrations. The indictment also states that Hillmer misled the U.S. Army to secure sponsorship for a Department of Defense provisional authorization. She allegedly submitted, and directed others to submit, authorization materials containing false information to assessors, authorizing officials, and government customers. These misrepresentations, prosecutors say, allowed the contractor to obtain and maintain government contracts under false pretenses.

Charges and Potential Penalties

Hillmer faces two counts of wire fraud, one count of major government fraud, and two counts of obstruction of a federal audit. If convicted, she could face:
  • Up to 20 years in prison for each wire fraud count
  • Up to 10 years in prison for major government fraud
  • Up to 5 years in prison for each obstruction count
A federal district court judge will determine any sentence after considering the U.S. Sentencing Guidelines and other statutory factors. The indictment was announced by Acting Assistant Attorney General Matthew R. Galeotti of the Justice Department’s Criminal Division and Deputy Inspector General Robert C. Erickson of the U.S. General Services Administration Office of Inspector General (GSA-OIG). The case is being investigated by the GSA-OIG, the Defense Criminal Investigative Service, the Naval Criminal Investigative Service, and the Department of the Army Criminal Investigation Division. Trial Attorneys Lauren Archer and Paul Hayden of the Criminal Division’s Fraud Section are prosecuting the case.

Broader Implications of Government Contractor Fraud

The indictment highlights ongoing concerns about the integrity of cloud platforms used by federal agencies. Programs like FedRAMP and the DoD’s Risk Management Framework are designed to ensure that systems handling sensitive government data meet rigorous security standards. Allegations that a contractor misrepresented compliance raise questions about oversight and the risks posed to national security when platforms fall short of requirements. Federal officials emphasized that the case underscores the importance of transparency and accountability in government contracting, particularly in areas involving cybersecurity. It is important to note that an indictment is merely an allegation. Hillmer, like all defendants, is presumed innocent until proven guilty beyond a reasonable doubt in a court of law.

Australia’s Social Media Ban for Kids: Protection, Overreach or the Start of a Global Shift?

10 December 2025 at 04:23

ban on social media

On a cozy December morning, as children in Australia set their bags aside for the holiday season and held their tablets and phones in hand to take that selfie and announce to the world they were all set for the fun to begin, something felt amiss. They couldn't access their Snapchat and Instagram accounts. No, it wasn't another downtime caused by a cyberattack, because they could see their parents lounging on the couch and laughing at the dog dance reels. So why were they locked out? The answer: the ban on social media for children under 16 had officially taken effect. It wasn't just one or 10 or 100 but more than one million young users who woke up locked out of their social media. No TikTok scroll. No Snapchat streak. No YouTube comments. Australia had quietly entered a new era, the world’s first nationwide ban on social media for children under 16, effective December 10. The move has sparked global debate, parental relief, youth frustration, and a broader question: Is this the start of a global shift, or a risky social experiment? Prime Minister Anthony Albanese was clear about why his government took this unparalleled step. “Social media is doing harm to our kids, and I’m calling time on it,” he said during a press conference. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.” Under the Anthony Albanese social media policy, platforms including Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads and YouTube must block users under 16, or face fines of up to AU$32 million. Parents and children won’t be penalized, but tech companies will. [Image: Australia bans social media for under-16s. Source: eSafety Commissioner]

Australia's Ban on Social Media: A Big Question

Albanese pointed to rising concerns about the effects of social media on children, from body-image distortion to exposure to inappropriate content and addictive algorithms that tug at young attention spans. Research supports these concerns. A Pew Research Center study found:
  • 48% of teens say social media has a mostly negative effect on people their age, up sharply from 32% in 2022.
  • 45% feel they spend too much time on social media.
  • Teen girls experience more negative impacts than boys, including mental health struggles (25% vs 14%) and loss of confidence (20% vs 10%).
  • Yet paradoxically, 74% of teens feel more connected to friends because of social media, and 63% use it for creativity.
These contradictions make the issue far from black and white. Psychologists remind us that adolescence, beginning around age 10 and stretching into the mid-20s, is a time of rapid biological and social change, and that maturity levels vary. This means that a one-size-fits-all ban on social media may overshoot the mark.

Ban on Social Media for Users Under 16: How People Reacted

Australia’s announcement, first revealed in November 2024, has motivated countries from Malaysia to Denmark to consider similar legislation. But not everyone is convinced this is the right way forward.

Supporters Applaud “A Chance at a Real Childhood”

Pediatric occupational therapist Cris Rowan, who has spent 22 years working with children, celebrated the move: “This may be the first time children have the opportunity to experience a real summer,” she said. “Canada should follow Australia’s bold initiative. Parents and teachers can start their own movement by banning social media from homes and schools.” Parents’ groups have also welcomed the decision, seeing it as a necessary intervention in a world where screens dominate childhood.

Others Say the Ban Is Imperfect, but Necessary

Australian author Geoff Hutchison puts it bluntly: “We shouldn’t look for absolutes. It will be far from perfect. But we can learn what works… We cannot expect the repugnant tech bros to care.” His view reflects a broader belief that tech companies have too much power, and too little accountability.

Experts Warn Against False Security 

However, some experts caution that the Australia ban on social media may create the illusion of safety while failing to address deeper issues. Professor Tama Leaver, Internet Studies expert at Curtin University, told The Cyber Express that while the ban on social media addresses some risks, such as algorithmic amplification of inappropriate content and endless scrolling, many online dangers remain.

“The social media ban only really addresses one set of risks for young people, which is algorithmic amplification of inappropriate content and the doomscrolling or infinite scroll. Many risks remain. The ban does nothing to address cyberbullying since messaging platforms are exempt from the ban, so cyberbullying will simply shift from one platform to another.”

Leaver also noted that restricting access to popular platforms will not drive children offline. Because of the ban, young users will explore whatever digital spaces remain, which could be less regulated and potentially riskier.

“Young people are not leaving the digital world. If we take some apps and platforms away, they will explore and experiment with whatever is left. If those remaining spaces are less known and more risky, then the risks for young people could definitely increase. Ideally the ban will lead to more conversations with parents and others about what young people explore and do online, which could mitigate many of the risks.”

From a broader perspective, Leaver emphasized that the ban on social media will only be fully beneficial if accompanied by significant investment in digital literacy and digital citizenship programs across schools:

“The only way this ban could be fully beneficial is if there is a huge increase in funding and delivery of digital literacy and digital citizenship programs across the whole K-12 educational spectrum. We have to formally teach young people those literacies they might otherwise have learnt socially, otherwise the ban is just a 3 year wait that achieves nothing.”

He added that platforms themselves should take a proactive role in protecting children:

“There is a global appetite for better regulation of platforms, especially regarding children and young people. A digital duty of care which requires platforms to examine and proactively reduce or mitigate risks before they appear on platforms would be ideal, and is something Australia and other countries are exploring. Minimizing risks before they occur would be vastly preferable to the current processes which can only usually address harm once it occurs.”

Looking at the global stage, Leaver sees Australia's ban on social media as a potential learning opportunity for other nations:

“There is clearly global appetite for better and more meaningful regulation of digital platforms. For countries considering their own bans, taking the time to really examine the rollout in Australia, to learn from our mistakes as much as our ambitions, would seem the most sensible path forward.”

Other specialists continue to warn that the ban on social media could isolate vulnerable teenagers or push them toward more dangerous, unregulated corners of the internet.

Legal Voices Raise Serious Constitutional Questions

Senior Supreme Court Advocate Dr. K. P. Kylasanatha Pillay offered a thoughtful reflection: “Exposure of children to the vagaries of social media is a global concern… But is a total ban feasible? We must ask whether this is a reasonable restriction or if it crosses the limits of state action. Not all social media content is harmful. The best remedy is to teach children awareness.” His perspective reflects growing debate about rights, safety, and state control.

LinkedIn, Reddit, and the Public Divide

Social media itself has become the battleground for reactions. On Reddit, youngsters were particularly vocal about the ban on social media. One teen wrote: “Good intentions, bad execution. This will make our generation clueless about internet safety… Social media is how teenagers express themselves. This ban silences our voices.” Another pointed out the easy loophole: “Bypassing this ban is as easy as using a free VPN. Governments don’t care about safety — they want control.” But one adult user disagreed: “Everyone against the ban seems to be an actual child. I got my first smartphone at 20. My parents were right — early exposure isn’t always good.” This generational divide is at the heart of the debate.

Brands, Marketers, and Schools Brace for Impact

Bindu Sharma, Founder of World One Consulting, highlighted the global implications: “Ten of the biggest platforms were ordered to block children… The world is watching how this plays out.” If the ban succeeds, brands may rethink how they target younger audiences. If it fails, digital regulation worldwide may need reimagining.

Where Does This Leave the World?

Australia’s decision to ban social media for children under 16 is bold, controversial, and rooted in good intentions. It could reshape how societies view childhood, technology, and digital rights. But as critics note, a ban on social media platforms can also create unintended consequences, from delinquency to digital illiteracy. What’s clear is this: Australia has started a global conversation that’s no longer avoidable. As one LinkedIn user concluded: “Safety of the child today is assurance of the safety of society tomorrow.”

European Court Imposes Strict New Data Checks on Online Marketplace Ads

3 December 2025 at 00:34

CJEU ruling

A ruling by the Court of Justice of the European Union (CJEU) on Tuesday has made it clear that online marketplaces are responsible for the personal data that appears in advertisements on their platforms. The decision specifies that platforms must get consent from any person whose data is shown in an advertisement, and must verify ads before they go live, especially where sensitive data is involved. The CJEU ruling stems from a 2018 incident in Romania. A fake advertisement on the classifieds website publi24.ro claimed a woman was offering sexual services. The post included her photos and phone number, which were used without her permission. The operator of the site, Russmedia Digital, removed the ad within an hour, but by then it had already been copied to other websites. The woman said the ad harmed her privacy and reputation and took the company to court. Lower courts in Romania gave different decisions, so the case was referred to the Court of Justice of the European Union for clarity. The CJEU has now confirmed that online marketplaces are data controllers under the GDPR for the personal data contained in ads on their sites.

CJEU Ruling: What Online Marketplaces Must Do Now

The court said that marketplace operators must take more responsibility and cannot rely on old rules that protect hosting services from liability. From now on, platforms must:
  • Check ads before publishing them when they contain personal or sensitive data.
  • Confirm that the person posting the ad is the same person shown in the ad, or make sure the person shown has given explicit consent.
  • Refuse ads if consent or identity verification cannot be confirmed.
  • Put measures in place to help prevent sensitive ads from being copied and reposted on other websites.
These steps must be part of the platform’s regular technical and organisational processes to comply with the GDPR.
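In engineering terms, the duties listed above amount to a gate in the ad-publication pipeline. The sketch below is a simplified illustration under assumed field names and checks; it is not a statement of what the court requires, and the separate duty to prevent reposting is outside its scope.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdSubmission:
    poster_id: str
    depicted_person_id: Optional[str]  # person whose data appears in the ad
    has_explicit_consent: bool         # verified consent of the person shown

def may_publish(ad: AdSubmission) -> bool:
    """Pre-publication check: refuse when identity/consent is unconfirmed."""
    if ad.depicted_person_id is None:
        return True                    # no third-party personal data in the ad
    if ad.depicted_person_id == ad.poster_id:
        return True                    # poster is the person shown in the ad
    return ad.has_explicit_consent     # otherwise explicit consent is required

# A third-party ad without verified consent is refused before going live.
print(may_publish(AdSubmission("acct-1", "person-2", False)))  # False
```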

What This Means for Platforms Across the EU

Legal teams at Pinsent Masons warned the decision “will likely have major implications for data protection across the 27 member states.” Nienke Kingma of Pinsent Masons said the ruling is important for compliance, adding it is “setting a new standard for data protection compliance across the EU.” Thijs Kelder, also at Pinsent Masons, said: “This judgment makes clear that online marketplaces cannot avoid their obligations under the GDPR,” and noted the decision “increases the operational risks on these platforms,” meaning companies will need stronger risk management. Daphne Keller of Stanford Law School warned about wider effects on free expression and platform design, noting the ruling “has major implications for free expression and access to information, age verification and privacy.”

Practical Impact

The CJEU ruling marks a major shift in how online marketplaces must operate. Platforms that allow users to post adverts will now have to rethink their processes, from verifying identities and checking personal data before an ad goes live to updating their terms and investing in new technical controls. Smaller platforms may feel the pressure most, as the cost of building these checks could be significant. What happens next will depend on how national data protection authorities interpret the ruling and how quickly companies can adapt. The coming months will reveal how verification should work in practice, what measures count as sufficient protection against reposting, and how platforms can balance these new duties with user privacy and free expression. The ruling sets a strict new standard, and its real impact will become clearer as regulators, courts, and platforms begin to implement it.

Australia Establishes AI Safety Institute to Combat Emerging Threats from Frontier AI Systems

2 December 2025 at 11:38

AI Safety Institute, National AI Plan

Australia's fragmented approach to AI oversight—with responsibilities scattered across privacy commissioners, consumer watchdogs, online safety regulators, and sector-specific agencies—required coordination to keep pace with rapidly evolving AI capabilities and their potential to amplify existing harms while creating entirely new threats.

The Australian Government has announced the establishment of the AI Safety Institute, backed by $29.9 million in funding, to monitor emerging AI capabilities, test advanced systems, and share intelligence across government while supporting regulators to ensure AI companies comply with Australian law. The Institute is part of the larger National AI Plan, which the Australian government officially released on Tuesday.

The Institute will become operational in early 2026 as the centerpiece of the government's strategy to keep Australians safe while capturing economic opportunities from AI adoption. The approach maintains existing legal frameworks as the foundation for addressing AI-related risks rather than introducing standalone AI legislation, with the Institute supporting portfolio agencies and regulators to adapt laws when necessary.

Dual Focus on Upstream Risks and Downstream Harms

The AI Safety Institute will focus on both upstream AI risks and downstream AI harms. Upstream risks involve model capabilities and the ways AI systems are built and trained that can create or amplify harm, requiring technical evaluation of frontier AI systems before deployment.

Downstream harms represent real-world effects people experience when AI systems are used, including bias in hiring algorithms, privacy breaches from data processing, discriminatory outcomes in automated decision-making, and emerging threats like AI-enabled crime and AI-facilitated abuse disproportionately impacting women and girls.

The Institute will generate and share technical insights on emerging AI capabilities, working across government and with international partners. It will develop advice, support bilateral and multilateral safety engagement, and publish safety research to inform industry and academia while engaging with unions, business, and researchers to ensure functions meet community needs.

Supporting Coordinated Regulatory Response

The Institute will support coordinated responses to downstream AI harms by engaging with portfolio agencies and regulators, monitoring and analyzing information across government to allow ministers and regulators to take informed, timely, and cohesive regulatory action.

Portfolio agencies and regulators remain best placed to assess AI uses and harms in specific sectors and adjust regulatory approaches when necessary. The Institute will support existing regulators to ensure AI companies are compliant with Australian law and uphold legal standards of fairness and transparency.

The government emphasized that Australia has strong existing, largely technology-neutral legal frameworks including sector-specific guidance and standards that can apply to AI. The approach promotes flexibility, uses regulators' existing expertise, and targets emerging threats as understanding of AI's strengths and limitations evolves.

Addressing Specific AI Harms

The government is taking targeted action against specific harms while continuing to assess suitability of existing laws. Consumer protections under Australian Consumer Law apply equally to AI-enabled goods and services, with Treasury's review finding Australians enjoy the same strong protections for AI products as traditional goods.

The government addresses AI-related risks through enforceable industry codes under the Online Safety Act 2021, criminalizing non-consensual deepfake material while considering further restrictions on "nudify" apps and reforms to tackle algorithmic bias.

The Attorney-General's Department engages stakeholders through the Copyright and AI Reference Group to consult on possible updates to copyright laws as they relate to AI, with the government ruling out a text and data mining exception to provide certainty to Australian creators and media workers.

Healthcare AI regulation is under review through the Safe and Responsible AI in Healthcare Legislation and Regulation Review, while the Therapeutic Goods Administration oversees AI used in medical device software following its review on strengthening regulation of medical device software including artificial intelligence.

Also read: CPA Australia Warns: AI Adoption Accelerates Cyber Risks for Australian Businesses

National Security and Crisis Response

The Department of Home Affairs, National Intelligence Community, and law enforcement agencies continue efforts to proactively mitigate serious risks posed by AI. Home Affairs coordinates cross-government efforts on cybersecurity and critical infrastructure protection while overseeing the Protective Security Policy Framework detailing policy requirements for authorizing AI technology systems for non-corporate Commonwealth entities.

AI is likely to exacerbate existing national security risks and create new, unknown threats. The government is preparing for potential AI-related incidents through the Australian Government Crisis Management Framework, which provides overarching policy for managing potential crises.

The government will consider how AI-related harms are managed under the framework to ensure ongoing clarity regarding roles and responsibilities across government to support coordinated and effective action.

International Engagement

The Institute will collaborate with domestic and international partners including the National AI Centre and the International Network of AI Safety Institutes to support global conversations on understanding and addressing AI risks.

Australia is a signatory to the Bletchley Declaration, Seoul Declaration, and Paris Statement emphasizing inclusive international cooperation on AI governance. Participation in the UN Global Digital Compact, Hiroshima AI Process, and Global Partnership on AI supports conversations on advancing safe, secure, and trustworthy adoption.

The government is developing an Australian Government Strategy for International Engagement and Regional Leadership on Artificial Intelligence to align foreign and domestic policy settings while establishing priorities for bilateral partnerships and engagement in international forums.

Also read: UK’s AI Safety Institute Establishes San Francisco Office for Global Expansion

GPS Spoofing Detected Across Major Indian Airports; Government Tightens Security

2 December 2025 at 00:37

GPS Spoofing

The Union government of India on Monday confirmed several instances of GPS spoofing near Delhi’s Indira Gandhi International Airport (IGIA) and other major airports. Officials said that despite the interference, all flights continued to operate safely and without disruption. The clarification came after reports pointed to digital interference affecting aircraft navigation systems during approach procedures at some of the busiest airports in the country.

What Is GPS Spoofing?

GPS spoofing is a form of signal interference where false Global Positioning System (GPS) signals are broadcast to mislead navigation systems. For aircraft, it can temporarily confuse onboard systems about their true location or altitude. While pilots and air traffic controllers are trained to manage such situations, repeated interference requires immediate reporting and stronger safeguards.
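
To make the mechanism concrete, the sketch below shows one common class of spoofing countermeasure: a plausibility check that flags position fixes implying physically impossible aircraft movement. It is a minimal illustration with hypothetical data and thresholds, not a depiction of any system used by DGCA, airlines, or avionics vendors.

```python
# Illustrative only: flag GPS fixes implying physically impossible
# aircraft movement, one common heuristic for spotting spoofed positions.
# The threshold and the sample track below are hypothetical.
import math

MAX_PLAUSIBLE_SPEED_MPS = 350.0  # ~680 knots, generous for an airliner

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6_371_000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_implausible_fixes(fixes):
    """fixes: list of (timestamp_s, lat, lon). Returns indices of suspect fixes."""
    suspect = []
    for i in range(1, len(fixes)):
        t0, lat0, lon0 = fixes[i - 1]
        t1, lat1, lon1 = fixes[i]
        dt = t1 - t0
        if dt <= 0:
            suspect.append(i)  # non-monotonic timestamps are themselves a red flag
            continue
        speed = haversine_m(lat0, lon0, lat1, lon1) / dt
        if speed > MAX_PLAUSIBLE_SPEED_MPS:
            suspect.append(i)
    return suspect

# Hypothetical track: the third fix jumps roughly 100 km in one second.
track = [(0, 28.5562, 77.1000), (1, 28.5570, 77.1010), (2, 29.5000, 77.9000)]
print(flag_implausible_fixes(track))  # -> [2]
```

Real aircraft cross-check GPS against inertial reference units and ground-based aids in a similar spirit, which is why crews can detect and disregard spoofed signals.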

Government Confirms Incidents at Multiple Airports

India’s Civil Aviation Minister Ram Mohan Naidu informed Parliament that several flights approaching Delhi reported GPS spoofing while using satellite-based landing procedures on Runway 10. In a written reply to the Rajya Sabha, the minister confirmed that similar signal interference reports have been received from several of India’s major airports, including Mumbai, Kolkata, Hyderabad, Bengaluru, Amritsar, and Chennai. He explained that when GPS spoofing was detected in Delhi, contingency procedures were activated for flights approaching the affected runway. The rest of the airport continued functioning normally through conventional ground-based navigation systems, preventing any impact on overall flight operations.

Safety Procedures and New Reporting System

The Directorate General of Civil Aviation (DGCA) has issued a Standard Operating Procedure (SOP) for real-time reporting of GPS spoofing and Global Navigation Satellite System (GNSS) interference around IGI Airport. The minister added that since DGCA made reporting mandatory in November 2023, regular interference alerts have been received from major airports across the country. These reports are helping regulators identify patterns and respond more quickly to any navigation-related disturbances. India continues to maintain a network of traditional navigation and surveillance systems such as Instrument Landing Systems (ILS) and radar. These systems act as dependable backups if satellite-based navigation is interrupted, following global aviation best practices.

Airports on High Cyber Vigilance

The government said India is actively engaging with global aviation bodies to stay updated on the latest technologies, methods, and safety measures related to aviation cybersecurity. Meanwhile, the Airports Authority of India (AAI) is deploying advanced cybersecurity tools across its IT infrastructure to strengthen protection against potential digital threats. Although the cyber-related interference did not affect flight schedules, the confirmation of GPS spoofing attempts at major airports has led to increased monitoring across key aviation hubs. These airports handle millions of passengers every year, making continuous vigilance essential.

Recent Aviation Challenges

The GPS spoofing reports come shortly after a separate system failure at Delhi Airport in November, which caused major delays. That incident was later linked to a technical issue with the Automatic Message Switching System (AMSS) and was not related to cyber activity. The aviation sector also faced another challenge recently when Airbus A320 aircraft required an urgent software update. Because the A320 is widely used in India, the update led to around 388 delayed flights on Saturday. All Indian airlines completed the required updates by Sunday, allowing normal operations to resume. Despite the reports of interference, the Union government emphasized that there was no impact on passenger safety or flight operations. Established procedures, trained crews, and reliable backup systems ensured that aircraft continued operating normally. Authorities said they will continue monitoring navigation systems closely and strengthening cybersecurity measures across airports to safeguard India’s aviation network.

EU Reaches Agreement on Child Sexual Abuse Detection Law After Three Years of Contentious Debate

27 November 2025 at 13:47

Child Sexual Abuse

A lengthy standoff over privacy rights versus child protection ended Wednesday when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law requiring online platforms to detect, report, and remove child sexual abuse material. Critics warn the measures could enable mass surveillance of private communications.

The Council agreement, reached despite opposition from the Czech Republic, Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.

The Council's position introduces three risk categories for online services, based on objective criteria including service type. Authorities will be able to oblige providers classified as high-risk to contribute to developing technologies that mitigate risks relating to their services. The framework shifts responsibility to digital companies to proactively address risks on their platforms.

Permanent Extension of Voluntary Scanning

One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.

At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.

Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.

New EU Centre on Child Sexual Abuse

The legislation provides for the establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.

The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.
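
Indicator databases of this kind typically hold hashes of known abuse material rather than the material itself, and detection amounts to matching uploads against those hashes. The sketch below illustrates the general idea with plain SHA-256; production systems rely on perceptual hashes that survive re-encoding, and nothing here reflects the EU Centre's actual design, which the regulation does not specify at this level of detail.

```python
# Illustrative sketch of indicator-based matching: files are hashed and
# compared against a database of known-material indicators. Plain SHA-256
# is shown only for clarity; real systems use robust perceptual hashes.
import hashlib

# Hypothetical indicator database, e.g. distributed by a central clearing house.
KNOWN_INDICATORS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder entry
}

def sha256_hex(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_indicator(path: str) -> bool:
    """True if the file's hash appears in the indicator database."""
    return sha256_hex(path) in KNOWN_INDICATORS
```

Exact cryptographic hashes miss even trivially re-encoded copies, which is one reason the policy debate centers on fuzzier matching techniques and the error rates they introduce.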

Online companies must provide assistance for victims who would like child sexual abuse material depicting them removed or for access to such material disabled. Victims can ask for support from the EU Centre, which will check whether companies involved have removed or disabled access to items victims want taken down.

Privacy Concerns and Opposition

The breakthrough comes after months of stalled negotiations and a postponed October vote, when Germany joined a blocking minority opposing what critics commonly call "chat control." Berlin argued the proposal risked "unwarranted monitoring of chats," comparing it to opening private letters.

Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance: protecting minors while maintaining confidentiality of communications, including end-to-end encryption.

Also read: EU Chat Control Proposal to Prevent Child Sexual Abuse Slammed by Critics

Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.

The European Parliament's study heavily critiqued the Commission's proposal, concluding there aren't currently technological solutions that can detect child sexual abuse material without resulting in high error rates affecting all messages, files and data in platforms. The study also concluded the proposal would undermine end-to-end encryption and security of digital communications.

Scope of the Crisis

Statistics underscore the urgency: 20.5 million reports and 63 million files of abuse material were submitted to the National Center for Missing and Exploited Children's CyberTipline last year, and online grooming has increased 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.

Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, and at least one in five children in Europe is estimated to be a victim of sexual abuse.

The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations will need to conclude before the already postponed expiration of the interim e-Privacy derogation that allows companies to conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.

Account Takeover Scams Surge as FBI Reports Over $262 Million in Losses

26 November 2025 at 00:34

Account Takeover fraud

The Account Takeover fraud threat is accelerating across the United States, prompting the Federal Bureau of Investigation (FBI) to issue a new alert warning individuals, businesses, and organizations of all sizes to stay vigilant. According to the FBI Internet Crime Complaint Center (IC3), more than 5,100 complaints related to ATO fraud have been filed since January 2025, with reported losses exceeding $262 million. The bureau warns that cyber criminals are increasingly impersonating financial institutions to steal money or sensitive information.

As the annual Black Friday sales draw millions of shoppers online, the FBI notes that the surge in digital purchases creates an ideal environment for Account Takeover fraud. With consumers frequently visiting unfamiliar retail websites and acting quickly to secure limited-time deals, cyber criminals deploy fake customer support calls, phishing pages, and fraudulent ads disguised as payment or discount portals. The increased online activity during Black Friday makes it easier for attackers to blend in and harder for victims to notice red flags, making the shopping season a lucrative window for ATO scams.

How Account Takeover Fraud Works

In an ATO scheme, cyber criminals gain unauthorized access to online financial, payroll, or health savings accounts. Their goal is simple: steal funds or gather personal data that can be reused for additional fraudulent activities. The FBI notes that these attacks often start with impersonation, either of a financial institution’s staff, customer support teams, or even the institution’s official website. To carry out their schemes, criminals rely heavily on social engineering and phishing websites designed to look identical to legitimate portals. These tactics create a false sense of trust, encouraging account owners to unknowingly hand over their login credentials.

Social Engineering Tactics Increase in Frequency

The FBI highlights that most ATO cases begin with social engineering, where cyber criminals manipulate victims into sharing sensitive information such as passwords, multi-factor authentication (MFA) codes, or one-time passcodes (OTP). Common techniques include:
  • Fraudulent text messages, emails, or calls claiming unusual activity or unauthorized charges. Victims are often directed to click on phishing links or speak to fake customer support representatives.
  • Attackers posing as bank employees or technical support agents who convince victims to share login details under the guise of preventing fraudulent transactions.
  • Scenarios where cyber criminals claim the victim’s identity was used to make unlawful purchases, sometimes involving firearms, and then escalate the scam by introducing another impersonator posing as law enforcement.
Once armed with stolen credentials, criminals reset account passwords and gain full control, locking legitimate users out of their own accounts.

Phishing Websites and SEO Poisoning Drive More Losses

Another growing trend is the use of sophisticated phishing domains and websites that perfectly mimic authentic financial institution portals. Victims believe they are logging into their bank or payroll system, but instead, they are handing their details directly to attackers. The FBI also warns about SEO poisoning, a method in which cyber criminals purchase search engine ads or manipulate search rankings to make fraudulent sites appear legitimate. When victims search for their bank online, these deceptive ads redirect them to phishing sites that capture their login information. Once attackers secure access, they rapidly transfer funds to criminal-controlled accounts—many linked to cryptocurrency wallets—making transactions difficult to trace or recover.
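
One defensive technique that follows directly from this warning is screening visited domains for near-miss lookalikes of a user's real banking domain. The sketch below is a minimal illustration using edit distance; the domain names are hypothetical, and real anti-phishing tooling is considerably more sophisticated.

```python
# Illustrative sketch: flag domains that closely resemble, but do not
# exactly match, a known-good financial domain. Domain names are hypothetical.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

LEGITIMATE = {"examplebank.com"}  # the user's bookmarked bank domain

def looks_like_phish(domain: str, max_dist: int = 2) -> bool:
    """True for near-miss lookalikes such as 'examp1ebank.com'."""
    if domain in LEGITIMATE:
        return False
    return any(edit_distance(domain, good) <= max_dist for good in LEGITIMATE)

print(looks_like_phish("examp1ebank.com"))   # True: one-character swap
print(looks_like_phish("examplebank.com"))   # False: exact match
print(looks_like_phish("unrelated.org"))     # False: not a near miss
```

This is also why the FBI's advice to bookmark financial websites works: an exact, saved address sidesteps lookalike domains and poisoned search results entirely.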

How to Stay Protected Against ATO Fraud

The FBI urges customers and businesses to take proactive measures to defend against ATO fraud attempts:
  • Limit personal information shared publicly, especially on social media.
  • Monitor financial accounts regularly for missing deposits, unauthorized withdrawals, or suspicious wire transfers.
  • Use unique, complex passwords and enable MFA on all accounts (see the sketch after this list for why intercepted MFA codes expire quickly).
  • Bookmark financial websites and avoid clicking on search engine ads or unsolicited links.
  • Treat unexpected calls, emails, or texts claiming to be from a bank with skepticism.
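
The MFA codes referenced above are most often time-based one-time passwords (TOTP), which roll over roughly every 30 seconds, limiting how long a phished code remains useful. The following minimal sketch shows how such a code is derived per RFC 6238, using only Python's standard library; the secret shown is a placeholder, and real services should use vetted authenticator libraries.

```python
# Illustrative sketch of TOTP derivation (RFC 6238), the scheme behind
# most app-based MFA codes. The secret below is a placeholder.
import base64, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval        # time step since the epoch
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret; a real one is issued during MFA enrollment.
print(totp("JBSWY3DPEHPK3PXP"))  # e.g. '492039', changes every 30 seconds
```

Because each code is bound to the current time step, a code handed to a fake "support agent" is only useful for seconds, which is exactly why attackers push victims to read codes out in real time.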

What To Do If You Experience an Account Takeover

Victims of ATO fraud are advised to act quickly:
  1. Contact your financial institution immediately to request recalls or reversals, and report the incident to IC3.gov.
  2. Reset all compromised credentials, including any accounts using the same passwords.
  3. File a detailed complaint at IC3.gov with all relevant information, such as impersonated institutions, phishing links, emails, or phone numbers used.
  4. Notify the impersonated company so it can warn others and request fraudulent sites be taken down.
  5. Stay informed through updated alerts and advisories published on IC3.gov.