Normal view

Received today — 14 February 2026 · Cybersecurity

Best Penetration Testing Companies in USA

14 February 2026 at 01:50

Cyber threats are growing at an unprecedented pace. In 2024 alone, global cybercrime losses reached an estimated US$9.5 trillion, a figure projected to rise even further in 2025. If cybercrime were a country, it would rank as the world’s third-largest economy, behind only the United States and China. As attackers increasingly leverage […]

The post Best Penetration Testing Companies in USA appeared first on Kratikal Blogs.

The post Best Penetration Testing Companies in USA appeared first on Security Boulevard.

Metasploit Wrap-Up 02/13/2026

13 February 2026 at 15:01

SolarWinds Web Help Desk

Our very own sfewer-r7 has developed an exploit module for the SolarWinds Web Help Desk vulnerabilities CVE-2025-40536 and CVE-2025-40551. On successful exploitation, the session will be running as NT AUTHORITY\SYSTEM. For more information, see Rapid7’s SolarWinds Web Help Desk vulnerabilities guidance.

Contributions

A big thanks to our contributors, who have added some great content this release. rudraditya21 has added MITRE ATT&CK metadata to many of our existing modules. Chocapikk has added support for GHSA (GitHub Security Advisory) references in Metasploit modules. rudraditya21 also contributed negative caching for the LDAP entry cache, so missing objects are now recorded: the change introduces a missing-entry sentinel, tracks misses per identifier type, and updates the AD lookup helpers to short‑circuit on cached misses and record a miss when a lookup returns no entry.
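The negative-caching idea described above can be sketched roughly as follows. This is a minimal Python illustration of the mechanism (sentinel, per-type miss counters, short-circuit on cached misses); the class and method names are invented for the sketch and are not Metasploit's actual Ruby code.

```python
# Illustrative sketch of negative caching with a missing-entry sentinel.
_MISS = object()  # sentinel stored when a lookup returned no entry


class LdapEntryCache:
    def __init__(self):
        self._cache = {}   # (id_type, key) -> entry or _MISS
        self.misses = {}   # per-identifier-type miss counters

    def lookup(self, id_type, key, query):
        """Return the cached entry, short-circuiting on cached misses."""
        slot = (id_type, key)
        if slot in self._cache:
            hit = self._cache[slot]
            return None if hit is _MISS else hit  # no re-query on a known miss
        entry = query(key)  # hit the directory only once per key
        if entry is None:
            self._cache[slot] = _MISS
            self.misses[id_type] = self.misses.get(id_type, 0) + 1
            return None
        self._cache[slot] = entry
        return entry
```

With this in place, repeated lookups of an object that does not exist (by DN, sAMAccountName, or SID) return immediately instead of re-querying the directory each time.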

New module content (5)

FreeBSD rtsold/rtsol DNSSL Command Injection

Authors: Kevin Day and Lukas Johannes Möller

Type: Exploit

Pull request: #20798 contributed by JohannesLks

Path: freebsd/misc/rtsold_dnssl_cmdinject

AttackerKB reference: CVE-2025-14558

Description: This adds a new exploit module for a command injection in the FreeBSD rtsol/rtsold daemons (CVE-2025-14558). The vulnerability is triggered via the Domain Name Search List (DNSSL) option in IPv6 Router Advertisement (RA) messages, which is passed to the resolvconf script without sanitization. The module requires elevated privileges because it needs to send raw IPv6 packets. The injected commands are executed as root.
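For context on the DNSSL option mentioned above: it is defined in RFC 8106 (option type 31) and carries DNS search domains in wire format inside an RA. The sketch below builds such an option in pure Python with a benign domain; it is illustrative only and is not taken from the module. A hostile sender would place shell metacharacters in a domain so that a vulnerable rtsold hands them to resolvconf unsanitized.

```python
import struct


def dnssl_option(domains, lifetime=3600):
    """Build an RFC 8106 DNSSL option (type 31) for an IPv6 RA.

    Layout: type (1), length in 8-octet units (1), reserved (2),
    lifetime (4), then DNS wire-format domain names, zero-padded
    so the whole option is a multiple of 8 octets.
    """
    wire = b""
    for name in domains:
        for label in name.split("."):
            raw = label.encode()
            wire += bytes([len(raw)]) + raw
        wire += b"\x00"  # root label terminates each name
    pad = (-(8 + len(wire))) % 8
    header = struct.pack("!BBHI", 31, (8 + len(wire) + pad) // 8, 0, lifetime)
    return header + wire + b"\x00" * pad
```

Sending the resulting option inside an ICMPv6 RA is what requires elevated privileges, since raw IPv6 sockets are needed.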

Ivanti Endpoint Manager Mobile (EPMM) unauthenticated RCE

Authors: sfewer-r7 and watchTowr

Type: Exploit

Pull request: #20932 contributed by sfewer-r7

Path: linux/http/ivanti_epmm_rce

AttackerKB reference: CVE-2026-1340

Description: Adds an exploit module for the recent command injection vulnerability CVE-2026-1281, affecting Ivanti Endpoint Manager Mobile (EPMM), formerly known as MobileIron. The vulnerability was exploited in the wild as a zero-day by an unknown threat actor.

GNU Inetutils Telnet Authentication Bypass Exploit CVE-2026-24061

Authors: Kyu Neushwaistein and jheysel-r7

Type: Exploit

Pull request: #20929 contributed by jheysel-r7

Path: linux/telnet/gnu_inetutils_auth_bypass

AttackerKB reference: CVE-2026-24061

Description: This adds an exploit module for the authentication bypass in GNU Inetutils telnetd tracked as CVE-2026-24061. During option negotiation, if the USER environment variable is passed with a value of "-f root", authentication can be bypassed, resulting in command execution as the root user.
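The environment variable travels inside a Telnet NEW-ENVIRON suboption (RFC 1572). The sketch below, which is illustrative and not taken from the module, shows the wire format of such a suboption carrying a USER variable:

```python
# Telnet protocol constants (RFC 854 / RFC 1572).
IAC, SB, SE = 255, 250, 240           # interpret-as-command, sub-begin/end
NEW_ENVIRON, IS, VAR, VALUE = 39, 0, 0, 1


def user_env_suboption(user):
    """Build a NEW-ENVIRON suboption carrying a USER variable.

    A value like "-f root" is what triggers the bypass in a vulnerable
    GNU Inetutils telnetd; shown here only to illustrate the encoding.
    """
    return (bytes([IAC, SB, NEW_ENVIRON, IS, VAR]) + b"USER"
            + bytes([VALUE]) + user.encode() + bytes([IAC, SE]))
```

The flaw is reminiscent of the classic "-froot" telnetd login bypass: the USER value ends up interpreted as extra arguments to the login program.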

SolarWinds Web Help Desk unauthenticated RCE

Authors: Jimi Sebree and sfewer-r7

Type: Exploit

Pull request: #20917 contributed by sfewer-r7

Path: multi/http/solarwinds_webhelpdesk_rce

AttackerKB reference: CVE-2025-40551

Description: This adds an exploit module for SolarWinds Web Help Desk installations vulnerable to CVE-2025-40536 and CVE-2025-40551. On successful exploitation, the module opens a session running as NT AUTHORITY\SYSTEM or root, depending on the target platform.

Xerte Online Toolkits Arbitrary File Upload - Upload Image

Author: Brandon Lester

Type: Exploit

Pull request: #20849 contributed by haicenhacks

Path: multi/http/xerte_authenticated_rce_uploadimage

Description: This adds three RCE modules for Xerte Online Toolkits, affecting version 3.14.0 and versions <= 3.13.7. Two are unauthenticated, while one is authenticated.

Enhancements and features (10)

  • #20710 from Chocapikk - Adds support for GHSA (GitHub Security Advisory) and OSV (Open Source Vulnerabilities) references in Metasploit modules.
  • #20886 from cdelafuente-r7 - Updates services so they can now also have child services. This allows more detailed reporting for the services and vulns commands, which can now report parent -> child services, e.g. SSL -> HTTPS.
  • #20895 from rudraditya21 - Adds negative caching to the LDAP entry cache so missing objects are recorded and subsequent lookups by DN, sAMAccountName, or SID return nil without re-querying the directory.
  • #20934 from rudraditya21 - This adds MITRE ATT&CK tags to modules related to LDAP and AD CS. This enables users to find this content using Metasploit's search functionality and the att&ck keyword.
  • #20935 from rudraditya21 - Adds the MITRE ATT&CK tag T1558.003 to the kerberoast modules. This enables users to find this content using Metasploit's search functionality and the att&ck keyword.
  • #20936 from rudraditya21 - This adds MITRE ATT&CK tags to SMB modules related to accounts. This enables users to find the content by using Metasploit's search capability and the att&ck keyword.
  • #20937 from rudraditya21 - This adds MITRE ATT&CK tags to the two existing SCCM modules that fetch NAA credentials using different techniques. This enables users to find this content using Metasploit's search functionality and the att&ck keyword.
  • #20941 from rudraditya21 - Adds a MITRE ATT&CK technique reference to the Windows password cracking module to support ATT&CK‑driven discovery.
  • #20942 from rudraditya21 - Adds MITRE ATT&CK technique references to getsystem, cve_2020_1472_zerologon, and atlassian_confluence_rce_cve_2023_22527 modules to support ATT&CK‑driven discovery.
  • #20943 from g0tmi1k - Adds affected versions to the description in the exploits/unix/webapp/twiki_maketext module.

Bugs fixed (7)

  • #20599 from BenoitDePaoli - Fixes an issue where running services -p <ports> -u -R to set RHOSTS with values from the database could fail silently with a file-not-found error.
  • #20775 from rmtsixq - Fixes a database initialization failure when using msfdb init with the --connection-string option to connect to PostgreSQL 15+ instances (e.g., Docker containers).
  • #20817 from randomstr1ng - Adds a fix to ensure the output of sap_router_portscanner no longer causes module crashes.
  • #20903 from jheysel-r7 - Fixes an issue so #enum_user_directories no longer returns duplicate directories.
  • #20906 from rudraditya21 - Implements a fix for SSH command shells dying on cmd_exec when a trailing newline was present.
  • #20953 from zeroSteiner - Improves the stability of socket channeling support for SSH sessions opened via scanner/ssh/ssh_login.
  • #20955 from adfoster-r7 - Ensures the cleanup of temporarily created RHOST files when using the services -p <ports> -u -R command to set RHOST values from the database.

Documentation

You can find the latest Metasploit documentation on our docsite at docs.metasploit.com.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition, Metasploit Pro.

How do NHIs add value to cloud compliance auditing?

13 February 2026 at 17:00

What Makes Non-Human Identities Essential for Cloud Compliance Auditing? As cybersecurity threats evolve, how can organizations ensure their compliance measures are robust enough to handle the complexities of modern cloud environments? The answer lies in understanding and managing Non-Human Identities (NHIs)—a crucial component for establishing a secure and compliant framework in cloud computing. Understanding NHIs: […]

The post How do NHIs add value to cloud compliance auditing? appeared first on Entro.

The post How do NHIs add value to cloud compliance auditing? appeared first on Security Boulevard.

How can cloud-native security be transformed by Agentic AI?

13 February 2026 at 17:00

How do Non-Human Identities Shape the Future of Cloud Security? Have you ever wondered how machine identities influence cloud security? Non-Human Identities (NHIs) are crucial for maintaining robust cybersecurity frameworks, especially in cloud environments. These identities demand a sophisticated understanding, as they are essential for secure interactions between machines and their environments. The Critical Role […]

The post How can cloud-native security be transformed by Agentic AI? appeared first on Entro.

The post How can cloud-native security be transformed by Agentic AI? appeared first on Security Boulevard.

What future-proof methods do Agentic AIs use in data protection?

13 February 2026 at 17:00

How Secure Is Your Organization’s Cloud Environment? As digital transformation accelerates, gaps in security are becoming increasingly noticeable. Non-Human Identities (NHIs), representing machine identities, are pivotal in these frameworks. In cybersecurity, they are formed by integrating a ‘Secret’—like an encrypted password or key—and the permissions allocated by […]

The post What future-proof methods do Agentic AIs use in data protection? appeared first on Entro.

The post What future-proof methods do Agentic AIs use in data protection? appeared first on Security Boulevard.

Is Agentic AI driven security scalable for large enterprises?

13 February 2026 at 17:00

How Can Non-Human Identities (NHIs) Transform Scalable Security for Large Enterprises? One might ask: how can large enterprises ensure scalable security without compromising on efficiency and compliance? The answer lies in the effective management of Non-Human Identities (NHIs) and their secrets. As machine identities, NHIs are pivotal in crafting a robust security framework, especially […]

The post Is Agentic AI driven security scalable for large enterprises? appeared first on Entro.

The post Is Agentic AI driven security scalable for large enterprises? appeared first on Security Boulevard.

Received yesterday — 13 February 2026 · Cybersecurity

Survey: Most Security Incidents Involve Identity Attacks

13 February 2026 at 15:55

A survey of 512 cybersecurity professionals finds that 76% report that over half (54%) of the security incidents that occurred in the past 12 months involved some issue relating to identity management. Conducted by Permiso Security, a provider of an identity security platform, the survey also finds that 95% are either very confident (52%) or somewhat confident…

The post Survey: Most Security Incidents Involve Identity Attacks appeared first on Security Boulevard.

NDSS 2025 – Automated Mass Malware Factory

13 February 2026 at 15:00

Session 12B: Malware

Authors, Creators & Presenters: Heng Li (Huazhong University of Science and Technology), Zhiyuan Yao (Huazhong University of Science and Technology), Bang Wu (Huazhong University of Science and Technology), Cuiying Gao (Huazhong University of Science and Technology), Teng Xu (Huazhong University of Science and Technology), Wei Yuan (Huazhong University of Science and Technology), Xiapu Luo (The Hong Kong Polytechnic University)

PAPER
Automated Mass Malware Factory: The Convergence of Piggybacking and Adversarial Example in Android Malicious Software Generation

Adversarial example techniques have been demonstrated to be highly effective against Android malware detection systems, enabling malware to evade detection with minimal code modifications. However, existing adversarial example techniques overlook the process of malware generation, thus restricting the applicability of adversarial example techniques. In this paper, we investigate piggybacked malware, a type of malware generated in bulk by piggybacking malicious code into popular apps, and combine it with adversarial example techniques. Given a malicious code segment (i.e., a rider), we can generate adversarial perturbations tailored to it and insert them into any carrier, enabling the resulting malware to evade detection. Through exploring the mechanism by which adversarial perturbation affects piggybacked malware code, we propose an adversarial piggybacked malware generation method, which comprises three modules: Malicious Rider Extraction, Adversarial Perturbation Generation, and Benign Carrier Selection. Extensive experiments have demonstrated that our method can efficiently generate a large volume of malware in a short period, and significantly increase the likelihood of evading detection. Our method achieved an average attack success rate (ASR) of 88.3% on machine learning-based detection models (e.g., Drebin and MaMaDroid), and an ASR of 76% and 92% on commercial engines Microsoft and Kingsoft, respectively. Furthermore, we have explored potential defenses against our adversarial piggybacked malware.

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.

Permalink

The post NDSS 2025 – Automated Mass Malware Factory appeared first on Security Boulevard.

Why PAM Implementations Struggle 

13 February 2026 at 13:41

Privileged Access Management (PAM) is widely recognized as a foundational security control for Zero Trust, ransomware prevention, and compliance with frameworks such as NIST, ISO 27001, and SOC 2. Yet despite heavy investment, many organizations struggle to realize the promised value of PAM. Projects stall, adoption remains low, and security teams are left managing complex systems that deliver limited risk reduction.  […]

The post Why PAM Implementations Struggle  appeared first on 12Port.

The post Why PAM Implementations Struggle  appeared first on Security Boulevard.

Seven Billion Reasons for Facebook to Abandon its Face Recognition Plans

13 February 2026 at 15:58

The New York Times reported that Meta is considering adding face recognition technology to its smart glasses. According to an internal Meta document, the company may launch the product “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” 

This is a bad idea that Meta should abandon. If adopted and released to the public, it would violate the privacy rights of millions of people and cost the company billions of dollars in legal battles.   

Your biometric data, such as your faceprint, are some of the most sensitive pieces of data that a company can collect. Associated risks include mass surveillance, data breach, and discrimination. Adding this technology to glasses on the street also raises safety concerns.  

 This kind of face recognition feature would require the company to collect a faceprint from every person who steps into view of the camera-equipped glasses to find a match. Meta cannot possibly obtain consent from everyone—especially bystanders who are not Meta users.  

Dozens of state laws consider biometric information to be sensitive and require companies to implement strict protections to collect and process it, including affirmative consent.  

Meta Should Know the Privacy and Legal Risks  

Meta should already know the privacy risks of face recognition technology, after abandoning related technology and paying nearly $7 billion in settlements a few years ago.  

In November 2021, Meta announced that it would shut down its tool that scanned the face of every person in photos posted on the platform. At the time, Meta also announced that it would delete more than a billion face templates. 

Two years before that in July 2019, Facebook settled a sweeping privacy investigation with the Federal Trade Commission for $5 billion. This included allegations that Facebook’s face recognition settings were confusing and deceptive. At the time, the company agreed to obtain consent before running face recognition on users in the future.   

In March 2021, the company agreed to a $650 million class action settlement brought by Illinois consumers under the state's strong biometric privacy law. 

And most recently, in July 2024, Meta agreed to pay $1.4 billion to settle claims that its defunct face recognition system violated Texas law.  

Privacy Advocates Will Continue to Focus Our Resources on Meta

 Meta’s conclusion that it can avoid scrutiny by releasing a privacy invasive product during a time of political crisis is craven and morally bankrupt. It is also dead wrong.  

Now more than ever, people have seen the real-world risk of invasive technology. The public has recoiled at masked immigration agents roving cities with phones equipped with a face recognition app called Mobile Fortify. And Amazon Ring just experienced a huge backlash when people realized that a feature marketed for finding lost dogs could one day be repurposed for mass biometric surveillance.  

The public will continue to resist these privacy invasive features. And EFF, other civil liberties groups, and plaintiffs’ attorneys will be here to help. We urge privacy regulators and attorneys general to step up to investigate as well.  

Google Ties Suspected Russian Actor to CANFAIL Malware Attacks on Ukrainian Orgs

13 February 2026 at 12:27
A previously undocumented threat actor has been attributed to attacks targeting Ukrainian organizations with malware known as CANFAIL. Google Threat Intelligence Group (GTIG) described the hacking group as possibly affiliated with Russian intelligence services. The threat actor is assessed to have targeted defense, military, government, and energy organizations within the Ukrainian regional and

Google Links China, Iran, Russia, North Korea to Coordinated Defense Sector Cyber Operations

13 February 2026 at 11:23
Several state-sponsored actors, hacktivist entities, and criminal groups from China, Iran, North Korea, and Russia have trained their sights on the defense industrial base (DIB) sector, according to findings from Google Threat Intelligence Group (GTIG). The tech giant's threat intelligence division said the adversarial targeting of the sector is centered around four key themes: striking defense

UAT-9921 Deploys VoidLink Malware to Target Technology and Financial Sectors

13 February 2026 at 10:23
A previously unknown threat actor tracked as UAT-9921 has been observed leveraging a new modular framework called VoidLink in its campaigns targeting the technology and financial services sectors, according to findings from Cisco Talos. "This threat actor seems to have been active since 2019, although they have not necessarily used VoidLink over the duration of their activity," researchers Nick

The Rise of Continuous Penetration Testing-as-a-Service (PTaaS)

13 February 2026 at 11:11

Traditional penetration testing has long been a cornerstone of cyber assurance. For many organisations, structured annual or biannual tests have provided an effective way to validate security controls, support compliance requirements, and identify material weaknesses across infrastructure, applications, and external attack surfaces. However, enterprise environments now change at a pace that is difficult to reconcile…

The post The Rise of Continuous Penetration Testing-as-a-Service (PTaaS) appeared first on Sentrium Security.

The post The Rise of Continuous Penetration Testing-as-a-Service (PTaaS) appeared first on Security Boulevard.

NDSS 2025 – Density Boosts Everything

13 February 2026 at 11:00

Session 12B: Malware

Authors, Creators & Presenters: Jianwen Tian (Academy of Military Sciences), Wei Kong (Zhejiang Sci-Tech University), Debin Gao (Singapore Management University), Tong Wang (Academy of Military Sciences), Taotao Gu (Academy of Military Sciences), Kefan Qiu (Beijing Institute of Technology), Zhi Wang (Nankai University), Xiaohui Kuang (Academy of Military Sciences)

PAPER
Density Boosts Everything: A One-stop Strategy For Improving Performance, Robustness, And Sustainability of Malware Detectors

In the contemporary landscape of cybersecurity, AI-driven detectors have emerged as pivotal in the realm of malware detection. However, existing AI-driven detectors encounter a myriad of challenges, including poisoning attacks, evasion attacks, and concept drift, which stem from the inherent characteristics of AI methodologies. While numerous solutions have been proposed to address these issues, they often concentrate on isolated problems, neglecting the broader implications for other facets of malware detection. This paper diverges from the conventional approach by not targeting a singular issue but instead identifying one of the fundamental causes of these challenges, sparsity. Sparsity refers to a scenario where certain feature values occur with low frequency, being represented only a minimal number of times across the dataset. The authors are the first to elevate the significance of sparsity and link it to core challenges in the domain of malware detection, and then aim to improve performance, robustness, and sustainability simultaneously by solving sparsity problems. To address the sparsity problems, a novel compression technique is designed to effectively alleviate the sparsity. Concurrently, a density boosting training method is proposed to consistently fill sparse regions. Empirical results demonstrate that the proposed methodologies not only successfully bolster the model's resilience against different attacks but also enhance the performance and sustainability over time. Moreover, the proposals are complementary to existing defensive technologies and successfully demonstrate practical classifiers with improved performance and robustness to attacks.

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.

Permalink

The post NDSS 2025 – Density Boosts Everything appeared first on Security Boulevard.


How to find and remove credential-stealing Chrome extensions

13 February 2026 at 08:27

Researchers have found yet another family of malicious extensions in the Chrome Web Store. This time, 30 different Chrome extensions were found stealing credentials from more than 260,000 users.

The extensions rendered a full-screen iframe pointing to a remote domain. This iframe overlaid the current webpage and visually appeared as the extension’s interface. Because this functionality was hosted remotely, it was not included in the review that allowed the extensions into the Web Store.

In other recent findings, we reported about extensions spying on ChatGPT chats, sleeper extensions that monitored browser activity, and a fake extension that deliberately caused a browser crash.

To spread the risk of detection and takedowns, the attackers used a technique known as “extension spraying”: they published essentially the same extension under different names and unique identifiers.

What often happens is that researchers provide a list of extension names and IDs, and it’s up to users to figure out whether they have one of these extensions installed.

Searching by name is easy when you open your “Manage extensions” tab, but unfortunately extension names are not unique. You could, for example, have the legitimate extension installed that a criminal tried to impersonate.

Searching by unique identifier

For Chrome and Edge, a browser extension ID is a unique 32‑character string of lowercase letters that stays the same even if the extension is renamed or reshipped.

When we’re looking at the extensions from a removal angle, there are two kinds: those installed by the user, and those force‑installed by other means (network admin, malware, Group Policy Object (GPO), etc.).

We will only look at the first type in this guide—the ones users installed themselves from the Web Store. The guide below is aimed at Chrome, but it’s almost the same for Edge.

How to find installed extensions

You can review the installed Chrome extensions like this:

  • In the address bar type chrome://extensions/.
  • This will open the Extensions tab and show you the installed extensions by name.
  • Now toggle Developer mode to on and you will also see their unique ID.
Extensions tab showing Malwarebytes Browser Guard
Don’t remove this one. It’s one of the good ones.

Removal method in the browser

Use the Remove button to get rid of any unwanted entries.

If it disappears and stays gone after restart, you’re done. If there is no Remove button or Chrome says it’s “Installed by your administrator,” or the extension reappears after a restart, there’s a policy, registry entry, or malware forcing it.

Alternative

Alternatively, you can also search the Extensions folder. On Windows systems this folder lives here: C:\Users\<your‑username>\AppData\Local\Google\Chrome\User Data\Default\Extensions.

Please note that the AppData folder is hidden by default. To unhide files and folders in Windows, open Explorer, click the View tab (or menu), and check the Hidden items box. For more advanced options, choose Options > Change folder and search options > View tab, then select Show hidden files, folders, and drives.

Chrome extensions folder
Chrome extensions folder

You can organize the list alphabetically by clicking on the Name column header once or twice. This makes it easier to find extensions if you have a lot of them installed.

Deleting the extension folder here has one downside. It leaves an orphaned entry in your browser. When you start Chrome again after doing this, the extension will no longer load because its files are gone. But it will still show up in the Extensions tab, only without the appropriate icon.

So, our advice is to remove extensions in the browser when possible.

Malicious extensions

Below is the list of credential-stealing extensions using the iframe method, as provided by the researchers.

Extension ID                      Extension name
acaeafediijmccnjlokgcdiojiljfpbe  ChatGPT Translate
baonbjckakcpgliaafcodddkoednpjgf  XAI
bilfflcophfehljhpnklmcelkoiffapb  AI For Translation
cicjlpmjmimeoempffghfglndokjihhn  AI Cover Letter Generator
ckicoadchmmndbakbokhapncehanaeni  AI Email Writer
ckneindgfbjnbbiggcmnjeofelhflhaj  AI Image Generator Chat GPT
cmpmhhjahlioglkleiofbjodhhiejhei  AI Translator
dbclhjpifdfkofnmjfpheiondafpkoed  Ai Wallpaper Generator
djhjckkfgancelbmgcamjimgphaphjdl  AI Sidebar
ebmmjmakencgmgoijdfnbailknaaiffh  Chat With Gemini
ecikmpoikkcelnakpgaeplcjoickgacj  Ai Picture Generator
fdlagfnfaheppaigholhoojabfaapnhb  Google Gemini
flnecpdpbhdblkpnegekobahlijbmfok  ChatGPT Picture Generator
fnjinbdmidgjkpmlihcginjipjaoapol  Email Generator AI
fpmkabpaklbhbhegegapfkenkmpipick  Chat GPT for Gmail
fppbiomdkfbhgjjdmojlogeceejinadg  Gemini AI Sidebar
gcfianbpjcfkafpiadmheejkokcmdkjl  Llama
gcdfailafdfjbailcdcbjmeginhncjkb  Grok Chatbot
gghdfkafnhfpaooiolhncejnlgglhkhe  AI Sidebar
gnaekhndaddbimfllbgmecjijbbfpabc  Ask Gemini
gohgeedemmaohocbaccllpkabadoogpl  DeepSeek Chat
hgnjolbjpjmhepcbjgeeallnamkjnfgi  AI Letter Generator
idhknpoceajhnjokpnbicildeoligdgh  ChatGPT Translation
kblengdlefjpjkekanpoidgoghdngdgl  AI GPT
kepibgehhljlecgaeihhnmibnmikbnga  DeepSeek Download
lodlcpnbppgipaimgbjgniokjcnpiiad  AI Message Generator
llojfncgbabajmdglnkbhmiebiinohek  ChatGPT Sidebar
nkgbfengofophpmonladgaldioelckbe  Chat Bot GPT
nlhpidbjmmffhoogcennoiopekbiglbp  AI Assistant
phiphcloddhmndjbdedgfbglhpkjcffh  Asking Chat Gpt
pgfibniplgcnccdnkhblpmmlfodijppg  ChatGBT
cgmmcoandmabammnhfnjcakdeejbfimn  Grok
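A list of IDs like this one can be cross-checked against your own install with a short script. The sketch below is a hypothetical helper, not something provided by the researchers: it lists the folder names under Chrome's Extensions directory (each folder is named after the extension's 32-character ID) and flags any that appear on a block list. The detail that Chrome IDs use only the letters a-p is an assumption drawn from Chrome's ID encoding, not from the research above.

```python
import os
import re

# Chrome extension IDs are 32 characters drawn from the letters a-p.
ID_RE = re.compile(r"^[a-p]{32}$")

# Default Chrome profile location on Windows; adjust for other profiles/OSes.
EXT_DIR = os.path.expandvars(
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions")


def installed_extension_ids(ext_dir=EXT_DIR):
    """Return the extension IDs present in the profile's Extensions folder."""
    try:
        return [d for d in os.listdir(ext_dir) if ID_RE.match(d)]
    except FileNotFoundError:
        return []


def flag_installed(installed_ids, bad_ids):
    """Return the installed IDs that appear on the block list."""
    blocked = set(bad_ids)
    return sorted(i for i in installed_ids if ID_RE.match(i) and i in blocked)
```

If the script flags anything, remove the extension from the browser's Extensions tab rather than deleting its folder, for the reasons given above.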

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

The Cyber Express Weekly Roundup: Escalating Breaches, Regulatory Crackdowns, and Global Cybercrime Developments

13 February 2026 at 05:53


As February 2026 progresses, this week’s The Cyber Express Weekly Roundup examines a series of cybersecurity incidents and enforcement actions spanning Europe, Africa, Australia, and the United States. The developments include a breach affecting the European Commission’s mobile management infrastructure, a ransomware attack disrupting Senegal’s national identity systems, a landmark financial penalty imposed on an Australian investment firm, and the sentencing of a fugitive linked to a multimillion-dollar cryptocurrency scam. From suspected exploitation of zero-day vulnerabilities to prolonged breach detection failures and cross-border financial crime, these cases highlight the operational, legal, and systemic dimensions of modern cyber risk.

The Cyber Express Weekly Roundup 

European Commission Mobile Infrastructure Breach Raises Supply Chain Questions 

The European Commission reported a cyberattack on its mobile device management (MDM) system on January 30, potentially exposing staff names and mobile numbers, though no devices were compromised, and the breach was contained within nine hours. Read more... 

Ransomware Disrupts Senegal’s National Identity Systems 

In West Africa, a major cyberattack hit Senegal’s Directorate of File Automation (DAF), halting identity card production and disrupting national ID, passport, and electoral services. While authorities insist no personal data was compromised, the full extent of the breach is still under investigation. Read more... 

Australian Court Imposes Landmark Cybersecurity Penalty 

In Australia, FIIG Securities was fined AU$2.5 million for failing to maintain adequate cybersecurity protections, leading to a 2023 ransomware breach that exposed 385GB of client data, including IDs, bank details, and tax numbers. The firm must also pay AU$500,000 in legal costs and implement an independent compliance program. Read more... 

Crypto Investment Scam Leader Sentenced in Absentia 

U.S. authorities sentenced Daren Li in absentia to 20 years for a $73 million cryptocurrency scam targeting American victims. Li remains a fugitive after fleeing in December 2025. The Cambodia-based scheme used “pig butchering” tactics to lure victims to fake crypto platforms, laundering nearly $60 million through U.S. shell companies. Eight co-conspirators have pleaded guilty. The case was led by the U.S. Secret Service. Read more... 

India Brings AI-Generated Content Under Formal Regulation 

India has regulated AI-generated content under notification G.S.R. 120(E), effective February 20, 2026, defining “synthetically generated information” (SGI) as AI-created content that appears real, including deepfakes and voiceovers. Platforms must label AI content, embed metadata, remove unlawful content quickly, and verify user declarations. Read More... 

Weekly Takeaway 

Taken together, this weekly roundup highlights the expanding attack surface created by digital transformation, the persistence of ransomware threats to national infrastructure, and the intensifying regulatory scrutiny facing financial institutions.  From zero-day exploitation and supply chain risks to enforcement actions and transnational crypto fraud, organizations are confronting an environment where operational resilience, compliance, and proactive monitoring are no longer optional; they are foundational to trust and continuity in the digital economy. 

60,000 Records Exposed in Cyberattack on Uzbekistan Government

13 February 2026 at 03:46

Uzbekistan cyberattack

An alleged Uzbekistan cyberattack that triggered widespread concern online has exposed around 60,000 unique data records, not the personal data of 15 million citizens, as previously claimed on social media. The clarification came from Uzbekistan’s Digital Technologies Minister Sherzod Shermatov during a press conference on 12 February, addressing mounting speculation surrounding the scale of the breach.

From 27 to 30 January, information systems of three government agencies in Uzbekistan were targeted by cyberattacks. The names of the agencies have not been disclosed. However, officials were firm in rejecting viral claims suggesting a large-scale national data leak. “There is no information that the personal data of 15 million citizens of Uzbekistan is being sold online. 60,000 pieces of data — that could be five or six pieces of data per person. We are not talking about 60,000 citizens,” the minister noted, adding that law enforcement agencies were examining the types of data involved.

For global readers, the distinction matters. In cybersecurity reporting, raw data units are often confused with the number of affected individuals. A single record can include multiple data points such as a name, date of birth, address, or phone number. According to Shermatov, the 60,000 figure refers to individual data units, not the number of citizens impacted.
Also read: Sanctioned Spyware Vendor Used iOS Zero-Day Exploit Chain Against Egyptian Targets

Uzbekistan Cyberattack: What Actually Happened

The Uzbekistan cyberattack targeted three government information systems over a four-day period in late January. While the breach did result in unauthorized access to certain systems, the ministry emphasized that it was not a mass compromise of citizen accounts.

“Of course, there was an attack. The hackers were skilled and sophisticated. They made attempts and succeeded in gaining access to a specific system. In a sense, this is even useful — an incident like this helps to further examine other systems and increase vigilance. Some data, in a certain amount, could indeed have been obtained from some systems,” Shermatov said.

His remarks reveal a balanced acknowledgment: the attack was real, the threat actors were capable, and some data exposure did occur. At the same time, the scale appears significantly smaller than initially portrayed online. The ministry also stressed that a “personal data leak” does not mean citizens’ accounts were hacked or that full digital identities were compromised. Instead, limited personal details may have been accessed.

Rising Cyber Threats in Uzbekistan

The Uzbekistan cyberattack comes amid a sharp increase in attempted digital intrusions across the country. According to the ministry, more than 7 million cyber threats were prevented in 2024 through Uzbekistan’s cybersecurity infrastructure. In 2025, that number reportedly exceeded 107 million. Looking ahead, projections suggest that over 200 million cyberattacks could target Uzbekistan in 2026. These figures highlight a broader global trend: as countries accelerate digital transformation, they inevitably expand their attack surface. Emerging digital economies, in particular, often face intense pressure from transnational cybercriminal groups seeking to exploit gaps in infrastructure and rapid system expansion. Uzbekistan’s growing digital ecosystem — from e-government services to financial platforms — is becoming a more attractive target for global threat actors. The recent Uzbekistan cyberattack illustrates that no country, regardless of size, is immune.

Strengthening Security After the Breach

Following the breach, authorities blocked further unauthorized access attempts and reinforced technical safeguards. Additional protections were implemented within the Unified Identification System (OneID), Uzbekistan’s centralized digital identity platform. Under the updated measures, users must now personally authorize access to their data by banks, telecom operators, and other organizations. This shifts more control, and responsibility, directly to citizens. The ministry emphasized that even with partial personal data, fraudsters cannot fully act on behalf of a citizen without direct involvement. However, officials warned that attackers may attempt secondary scams using exposed details. For example, a fraudster could call a citizen, pose as a bank employee, cite known personal details, and claim that someone is applying for a loan in their name — requesting an SMS code to “cancel” the transaction. Such social engineering tactics remain one of the most effective tools for cybercriminals globally.

A Reality Check on Digital Risk

The Uzbekistan cyberattack highlights two critical lessons. First, misinformation can amplify panic faster than technical facts. Second, even limited data exposure carries real risk if exploited creatively. Shermatov’s comment that the incident can help “increase vigilance” reflects a pragmatic view shared by many cybersecurity professionals worldwide: breaches, while undesirable, often drive improvements in resilience. For Uzbekistan, the challenge now is sustaining public trust while hardening systems against growing global cyber threats. For the rest of the world, the incident serves as a reminder that cybersecurity transparency — clear communication about scope and impact — is just as important as technical defense.

Adversaries Exploiting Proprietary AI Capabilities, API Traffic to Scale Cyberattacks

13 February 2026 at 03:09

GTIG AI threat tracker

In the fourth quarter of 2025, the Google Threat Intelligence Group (GTIG) reported a significant uptick in the misuse of artificial intelligence by threat actors. According to GTIG’s AI threat tracker, what initially appeared as experimental probing has evolved into systematic, repeatable exploitation of large language models (LLMs) to enhance reconnaissance, phishing, malware development, and post-compromise activity.  A notable trend identified by GTIG is the rise of model extraction attempts, or “distillation attacks.” In these operations, threat actors systematically query production models to replicate proprietary AI capabilities without directly compromising internal networks. Using legitimate API access, attackers can gather outputs sufficient to train secondary “student” models. While knowledge distillation is a valid machine learning method, unauthorized replication constitutes intellectual property theft and a direct threat to developers of proprietary AI.  Throughout 2025, GTIG observed sustained campaigns involving more than 100,000 prompts aimed at uncovering internal reasoning and chain-of-thought logic. Attackers attempted to coerce Gemini into revealing hidden decision-making processes. GTIG’s monitoring systems detected these patterns and mitigated exposure, protecting the internal logic of proprietary AI.  
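The distillation pattern GTIG describes can be sketched in miniature: an attacker only needs legitimate query access to a "teacher" model to assemble a training corpus for a "student." The sketch below is a hypothetical toy, with a canned stand-in for the production model and a student that simply memorizes transcripts; a real operation would involve tens of thousands of prompts and an actual ML training run.

```python
# Toy illustration of a distillation pipeline: query a production "teacher"
# model via its API, record prompt/response pairs, and use the transcripts
# as training data for a secondary "student" model. All names are
# hypothetical stand-ins, not any real vendor's API.

def teacher_model(prompt: str) -> str:
    """Stand-in for a proprietary production LLM behind an API."""
    canned = {
        "capital of France": "Paris",
        "largest ocean": "Pacific",
    }
    return canned.get(prompt, "unknown")

def harvest(prompts):
    """Collect teacher outputs: the raw material of a distillation attack."""
    return [(p, teacher_model(p)) for p in prompts]

def train_student(pairs):
    """Toy 'student' that simply memorizes the harvested transcripts."""
    return dict(pairs)

dataset = harvest(["capital of France", "largest ocean"])
student = train_student(dataset)
print(student["capital of France"])  # student now mimics the teacher offline
```

The key point the toy captures is that no network intrusion is required: every step uses the same API access a paying customer would have, which is why GTIG treats high-volume prompt campaigns as a detection signal in their own right.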

AI Threat Tracker, a Force Multiplier 

Beyond intellectual property theft, GTIG’s AI threat tracker reports that state-backed and sophisticated actors are leveraging LLMs to accelerate reconnaissance and social engineering. Threat actors use AI to synthesize open-source intelligence (OSINT), profile high-value individuals, map organizational hierarchies, and identify decision-makers, dramatically reducing the manual effort required for research.  For instance, UNC6418 employed Gemini to gather account credentials and email addresses prior to launching phishing campaigns targeting Ukrainian and defense-sector entities. Temp.HEX, a China-linked actor, used AI to collect intelligence on individuals in Pakistan and analyze separatist groups. While immediate operational targeting was not always observed, Google mitigated these risks by disabling associated assets.  Phishing tactics have similarly evolved. Generative AI enables actors to produce highly polished, culturally accurate messaging. APT42, linked to Iran, used Gemini to enumerate official email addresses, research business connections, and create personas tailored to targets, while translation capabilities allowed multilingual operations. North Korea’s UNC2970 leveraged AI to profile cybersecurity and defense professionals, refining phishing narratives with salary and role information. All identified assets were disabled, preventing further compromise. 

AI-Enhanced Malware Development 

GTIG also documented AI-assisted malware development. APT31 prompted Gemini with expert cybersecurity personas to automate vulnerability analysis, including remote code execution, firewall bypass, and SQL injection testing. UNC795 engaged Gemini regularly to troubleshoot code and explore AI-integrated auditing, suggesting early experimentation with agentic AI, systems capable of autonomous multi-step reasoning. While fully autonomous AI attacks have not yet been observed, GTIG anticipates growing underground interest in such capabilities.  Generative AI is also supporting information operations. Threat actors from China, Iran, Russia, and Saudi Arabia used Gemini to draft political content, generate propaganda, and localize messaging. According to GTIG’s AI threat tracker, these efforts improved efficiency and scale but did not produce transformative influence capabilities. AI is enhancing productivity rather than creating fundamentally new tactics in the information operations space. 

AI-Powered Malware Frameworks: HONESTCUE and COINBAIT 

In September 2025, GTIG identified HONESTCUE, a malware framework outsourcing code generation via Gemini’s API. HONESTCUE queries the AI for C# code to perform “stage two” functionality, which is compiled and executed in memory without writing artifacts to disk, complicating detection.   Similarly, COINBAIT, a phishing kit detected in November 2025, leveraged AI-generated code via Lovable AI to impersonate a cryptocurrency exchange. COINBAIT incorporated complex React single-page applications, verbose developer logs, and cloud-based hosting to evade traditional network defenses.  GTIG also reported that underground markets are exploiting AI services and API keys to scale attacks. One example, “Xanthorox,” marketed itself as a self-contained AI for autonomous malware generation but relied on commercial AI APIs, including Gemini.  

Disney Agrees Record $2.75Mn Settlement for Opt-Out Failures

13 February 2026 at 02:52

Disney CCPA settlement

Animation giant Walt Disney has agreed to pay a $2.75 million fine and overhaul its privacy practices to settle allegations that it violated the California Consumer Privacy Act (CCPA). The Disney CCPA settlement marks the largest settlement in the Act's enforcement history. For a global audience watching the evolution of data privacy enforcement, the Disney CCPA settlement is more than a state-level regulatory action: it signals a tougher stance on how companies handle consumer opt-out rights in an increasingly connected digital ecosystem. Announced by California Attorney General Rob Bonta, the settlement resolves claims that Disney failed to fully honor consumers’ requests to opt out of the sale or sharing of their personal data across all devices and streaming services linked to their accounts. Under the agreement, which remains subject to court approval, Disney will pay $2.75 million in civil penalties and implement a comprehensive privacy program designed to ensure compliance with the CCPA. The company does not admit wrongdoing or accept liability. A Disney spokesperson said that as an “industry leader in privacy protection, Disney continues to invest significant resources to set the standard for responsible and transparent data practices across our streaming services.”
Also read: Disney to Pay $10M After FTC Finds It Enabled Children’s Data Collection Via YouTube Videos

Implications of the Disney CCPA Settlement

While the enforcement action stems from California law, the Disney CCPA settlement has international implications. Many global companies operate under similar opt-out and consent frameworks in Europe, Asia-Pacific, and beyond. Regulators worldwide are scrutinizing whether companies truly make it easy for users to control their data — or merely create the appearance of compliance. The investigation, launched after a January 2024 investigative sweep of streaming services, found that Disney’s opt-out mechanisms contained what the California Department of Justice described as “key gaps.” These gaps allegedly allowed the company to continue selling or sharing consumer data even after users had attempted to opt out. Attorney General Bonta made the state’s position clear: “Consumers shouldn’t have to go to infinity and beyond to assert their privacy rights. Today, my office secured the largest settlement to date under the CCPA over Disney's failure to stop selling and sharing the data of consumers that explicitly asked it to. California’s nation-leading privacy law is clear: A consumer’s opt-out right applies wherever and however a business sells data — businesses can’t force people to go device-by-device or service-by-service. In California, asking a business to stop selling your data should not be complicated or cumbersome. My office is committed to the continued enforcement of this critical privacy law.”

Investigation Findings

According to the Attorney General’s office, Disney offered multiple methods for consumers to opt out — including website toggles, webforms, and the Global Privacy Control (GPC). However, each method allegedly failed to stop data sharing comprehensively. For example, when users activated opt-out toggles within Disney websites or apps, the request was reportedly applied only to the specific streaming service being used — and often only to the specific device. This meant that data sharing could continue on other devices or services connected to the same account. Similarly, consumers who submitted opt-out requests through Disney’s webform were unable to stop all personal data sharing. The investigation alleged that Disney continued to share data with “specific third-party ad-tech companies whose code Disney embedded in its websites and apps.” The Global Privacy Control — designed as a universal “stop selling or sharing my data” signal — was also reportedly limited to the specific device used, even if the consumer was logged into their Disney account. Critically, in many connected TV streaming apps, Disney allegedly did not provide an in-app opt-out mechanism and instead redirected users to the webform. Regulators argued that this was “effectively leaving consumers with no way to stop Disney’s selling and sharing from these apps.”

Enforcement Momentum Under the CCPA

The Disney CCPA settlement is the seventh enforcement action under the California Consumer Privacy Act and the second action against Disney in five months. In September, the Federal Trade Commission fined Disney $10 million over child privacy violations. Attorney General Bonta emphasized that “Effective opt-out is one of the bare necessities of complying with CCPA.” The law grants California consumers the right to know how their personal data is collected and shared — and the right to request that businesses stop selling or sharing that information. Under the settlement terms, Disney must report to California within 60 days of court approval on the steps taken to comply. It must also submit progress reports every 60 days until all services meet CCPA requirements.

A Turning Point for Streaming Platforms?

The broader message from the Disney CCPA settlement is unmistakable: privacy controls must work across platforms, devices, and ecosystems — not in silos. Streaming platforms operate globally, with accounts spanning smartphones, smart TVs, gaming consoles, and web browsers. Regulators are increasingly unwilling to accept fragmented compliance models where privacy settings apply only to one device or one service at a time. In that sense, the Disney CCPA settlement may be remembered less for the $2.75 million fine and more for the standard it reinforces: when consumers say “stop,” companies must ensure their systems actually listen.

8,000+ ChatGPT API Keys Left Publicly Accessible

13 February 2026 at 02:30

ChatGPT API keys

The rapid integration of artificial intelligence into mainstream software development has introduced a new category of security risk, one that many organizations are still unprepared to manage. According to research conducted by Cyble Research and Intelligence Labs (CRIL), thousands of exposed ChatGPT API keys are currently accessible across public infrastructure, dramatically lowering the barrier for abuse.  CRIL identified more than 5,000 publicly accessible GitHub repositories containing hardcoded OpenAI credentials. In parallel, approximately 3,000 live production websites were found to expose active API keys directly in client-side JavaScript and other front-end assets.   Together, these findings reveal a widespread pattern of credential mismanagement affecting both development and production environments. 

GitHub as a Discovery Engine for Exposed ChatGPT API Keys 

Public GitHub repositories have become one of the most reliable sources for exposed AI credentials. During development cycles, especially in fast-moving environments, developers often embed ChatGPT API keys directly into source code, configuration files, or .env files. While the intent may be to rotate or remove them later, these keys frequently persist in commit histories, forks, archived projects, and cloned repositories.  CRIL’s analysis shows that these exposures span JavaScript applications, Python scripts, CI/CD pipelines, and infrastructure configuration files. Many repositories were actively maintained or recently updated, increasing the likelihood that the exposed ChatGPT API keys remained valid at the time of discovery.  Once committed, secrets are quickly indexed by automated scanners that monitor GitHub repositories in near real time. This drastically reduces the window between exposure and exploitation, often to mere hours or minutes. 
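The automated scanners mentioned above generally work by pattern-matching candidate secrets in committed text. A minimal sketch of that idea, assuming OpenAI-style key prefixes such as sk-proj- and sk-svcacct- (the pattern is illustrative; production scanners such as gitleaks or trufflehog ship far more curated rule sets and entropy checks):

```python
import re

# Illustrative pattern for OpenAI-style secrets. The prefixes and minimum
# length are assumptions for this sketch, not an official key specification.
KEY_PATTERN = re.compile(r"sk-(?:proj-|svcacct-)?[A-Za-z0-9_-]{20,}")

def find_keys(text: str):
    """Return candidate API keys found in a blob of source or config text."""
    return KEY_PATTERN.findall(text)

# A hardcoded key in a committed source file would match immediately.
sample = 'const OPENAI_API_KEY = "sk-proj-abcdefghijklmnopqrstuvwx";'
print(find_keys(sample))
```

Running a check like this over every new commit is essentially what the near-real-time scanners CRIL describes are doing at scale, which is why the exposure-to-exploitation window shrinks to minutes.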

Exposure in Live Production Websites 

Beyond repositories, CRIL uncovered roughly 3,000 public-facing websites leaking ChatGPT API keys directly in production. In these cases, credentials were embedded within JavaScript bundles, static files, or front-end framework assets, making them visible to anyone inspecting network traffic or application source code.  A commonly observed implementation resembled: 
const OPENAI_API_KEY = "sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX";
const OPENAI_API_KEY = "sk-svcacct-XXXXXXXXXXXXXXXXXXXXXXXX";
The sk-proj- prefix typically denotes a project-scoped key tied to a specific environment and billing configuration. The sk-svcacct- prefix generally represents a service-account key intended for backend automation or system-level integration. Despite their differing scopes, both function as privileged authentication tokens granting direct access to AI inference services and billing resources.  Embedding these keys in client-side JavaScript fully exposes them. Attackers do not need to breach infrastructure or exploit software vulnerabilities; they simply harvest what is publicly available. 

“The AI Era Has Arrived — Security Discipline Has Not” 

Richard Sands, CISO at Cyble, summarized the issue bluntly: “The AI Era Has Arrived — Security Discipline Has Not.” AI systems are no longer experimental tools; they are production-grade infrastructure powering chatbots, copilots, recommendation engines, and automated workflows. Yet the security rigor applied to cloud credentials and identity systems has not consistently extended to ChatGPT API keys.  A contributing factor is the rise of what some developers call “vibe coding”—a culture that prioritizes speed, experimentation, and rapid feature delivery. While this accelerates innovation, it often sidelines foundational security practices. API keys are frequently treated as configuration values rather than production secrets.  Sands further emphasized, “Tokens are the new passwords — they are being mishandled.” From a security standpoint, ChatGPT API keys are equivalent to privileged credentials. They control inference access, usage quotas, billing accounts, and sometimes sensitive prompts or application logic. 

Monetization and Criminal Exploitation 

Once discovered, exposed keys are validated through automated scripts and operationalized almost immediately. Threat actors monitor GitHub repositories, forks, gists, and exposed JavaScript assets to harvest credentials at scale.  CRIL observed that compromised keys are typically used to: 
  • Execute high-volume inference workloads 
  • Generate phishing emails and scam scripts 
  • Assist in malware development 
  • Circumvent service restrictions and usage quotas 
  • Drain victim billing accounts and exhaust API credits 
Some exposed credentials were also referenced in discussions mentioning Cyble Vision, indicating that threat actors may be tracking and sharing discovered keys. Using Cyble Vision, CRIL identified instances in which exposed keys were subsequently leaked and discussed on underground forums. (Image: Cyble Vision indicates API key exposure leak. Source: Cyble Vision.) Unlike traditional cloud infrastructure, AI API activity is often not integrated into centralized logging systems, SIEM platforms, or anomaly detection pipelines. As a result, abuse can persist undetected until billing spikes, quota exhaustion, or degraded service performance reveal the compromise.  Kaustubh Medhe, CPO at Cyble, warned: “Hard-coding LLM API keys risks turning innovation into liability, as attackers can drain AI budgets, poison workflows, and access sensitive prompts and outputs. Enterprises must manage secrets and monitor exposure across code and pipelines to prevent misconfigurations from becoming financial, privacy, or compliance issues.” 

Malicious Chrome Extensions Caught Stealing Business Data, Emails, and Browsing History

13 February 2026 at 06:25
Cybersecurity researchers have discovered a malicious Google Chrome extension that's designed to steal data associated with Meta Business Suite and Facebook Business Manager. The extension, named CL Suite by @CLMasters (ID: jkphinfhmfkckkcnifhjiplhfoiefffl), is marketed as a way to scrape Meta Business Suite data, remove verification pop-ups, and generate two-factor authentication (2FA) codes.

npm’s Update to Harden Their Supply Chain, and Points to Consider

13 February 2026 at 05:45
In December 2025, in response to the Sha1-Hulud incident, npm completed a major authentication overhaul intended to reduce supply-chain attacks. While the overhaul is a solid step forward, the changes don’t make npm projects immune from supply-chain attacks. npm is still susceptible to malware attacks – here’s what you need to know for a safer Node community. Let’s start with the original

The Law of Cyberwar is Pretty Discombobulated

13 February 2026 at 05:24

This article explores the complexities of cyberwarfare, emphasizing the need to reconsider how we categorize cyber operations within the framework of the Law of Armed Conflict (LOAC). It discusses the challenges posed by AI in transforming traditional warfare notions and highlights the potential risks associated with the misuse of emerging technologies in conflicts.

The post The Law of Cyberwar is Pretty Discombobulated appeared first on Security Boulevard.

What is a SAML Assertion in Single Sign-On?

Learn what a SAML assertion is in Single Sign-On. Discover how these XML trust tokens securely exchange identity data between IdPs and Service Providers.

The post What is a SAML Assertion in Single Sign-On? appeared first on Security Boulevard.

Researchers Observe In-the-Wild Exploitation of BeyondTrust CVSS 9.9 Vulnerability

13 February 2026 at 03:34
Threat actors have started to exploit a recently disclosed critical security flaw impacting BeyondTrust Remote Support (RS) and Privileged Remote Access (PRA) products, according to watchTowr. "Overnight we observed first in-the-wild exploitation of BeyondTrust across our global sensors," Ryan Dewhurst, head of threat intelligence at watchTowr, said in a post on X. "Attackers are abusing

Top Security Incidents of 2025:  The Emergence of the ChainedShark APT Group

13 February 2026 at 03:11

In 2025, NSFOCUS Fuying Lab disclosed a new APT group targeting China’s scientific research sector, dubbed “ChainedShark” (tracking number: Actor240820). Active since May 2024, the group has conducted operations marked by high strategic coherence and technical sophistication. Its primary targets are professionals in Chinese universities and research institutions specializing in international relations, marine technology, and related […]

The post Top Security Incidents of 2025:  The Emergence of the ChainedShark APT Group appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks..

The post Top Security Incidents of 2025:  The Emergence of the ChainedShark APT Group appeared first on Security Boulevard.

150+ Key Compliance Statistics: AI, Data Privacy, Cybersecurity & Regulatory Trends to Know in 2026

13 February 2026 at 02:40

In 2026, compliance sits at the intersection of AI adoption, expanding privacy regulations, and rising cybersecurity risk. As regulatory expectations tighten and digital systems grow more complex, organizations are under.

The post 150+ Key Compliance Statistics: AI, Data Privacy, Cybersecurity & Regulatory Trends to Know in 2026 appeared first on Indusface.

The post 150+ Key Compliance Statistics: AI, Data Privacy, Cybersecurity & Regulatory Trends to Know in 2026 appeared first on Security Boulevard.

How AutoSecT VMDR Tool Simplifies Vulnerability Management

13 February 2026 at 02:22

As the saying goes, the ‘why’ and the ‘how’ matter much more than the ‘should’. That applies exactly to today’s cyberspace. Every day, organizations operate in an unpredictable cyber-risk climate. If your defensive arsenal comprises just fragmented tools and manual processes, you are not playing it safe. If you are ‘not safe’, you are just seconds away […]

The post How AutoSecT VMDR Tool Simplifies Vulnerability Management appeared first on Kratikal Blogs.

The post How AutoSecT VMDR Tool Simplifies Vulnerability Management appeared first on Security Boulevard.

Fake shops target Winter Olympics 2026 fans

13 February 2026 at 04:00

If you’ve seen the two stoat siblings serving as official mascots of the Milano Cortina 2026 Winter Olympics, you already know Tina and Milo are irresistible.

Designed by Italian schoolchildren and chosen from more than 1,600 entries in a public poll, the duo has already captured hearts worldwide. So much so that the official 27 cm Tina plush toy on the official Olympics web shop is listed at €40 and currently marked out of stock.

Tina and Milo are in huge demand, and scammers have noticed.

When supply runs out, scam sites rush in

In roughly the past week alone, we’ve identified nearly 20 lookalike domains designed to imitate the official Olympic merchandise store.

These aren’t crude copies thrown together overnight. The sites use the same polished storefront template, complete with promotional videos and background music designed to mirror the official shop.olympics.com experience.

Fake site offering Tina at a huge discount
Fake site offering Tina at a huge discount
Real Olympic site showing Tina out of stock
Real Olympic site showing Tina out of stock

The layout and product pages are the same—the only thing that changes is the domain name. At a quick glance, most people wouldn’t notice anything unusual.

Here’s a sample of the domains we’ve been tracking:

2026winterdeals[.]top
olympics-save[.]top
olympics2026[.]top
postolympicsale[.]com
sale-olympics[.]top
shopolympics-eu[.]top
winter0lympicsstore[.]top (note the zero replacing the letter “o”)
winterolympics[.]top
2026olympics[.]shop
olympics-2026[.]shop
olympics-2026[.]top
olympics-eu[.]top
olympics-hot[.]shop
olympics-hot[.]top
olympics-sale[.]shop
olympics-sale[.]top
olympics-top[.]shop
olympics2026[.]store

Based on telemetry, additional registrations are actively emerging.
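A crude version of the letter-swap check (the zero-for-“o” trick noted in the list above) can be written in a few lines. This is a hypothetical heuristic, not how any particular vendor’s detection works; real pipelines combine edit distance, homoglyph tables, and registration telemetry:

```python
# Flag domains that only spell a known brand after undoing common
# digit-for-letter swaps. The substitution table is a small illustrative
# subset; real homoglyph lists are much larger.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "l", "3": "e"})

def looks_like(domain: str, brand: str) -> bool:
    """True when the brand appears only after normalizing swapped characters."""
    normalized = domain.translate(SUBSTITUTIONS)
    return brand in normalized and brand not in domain

print(looks_like("winter0lympicsstore.top", "olympics"))  # True: zero swapped for "o"
print(looks_like("olympics-sale.top", "olympics"))        # False: brand spelled normally
```

Note the limitation: domains that spell the brand correctly but add deceptive affixes (olympics-sale, postolympicsale) pass this check, so a heuristic like this would be one signal among many, not a verdict on its own.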

Reports show users checking these domains from multiple regions including Ireland, the Czech Republic, the United States, Italy, and China—suggesting this is a global campaign targeting fans worldwide.

Malwarebytes blocks these domains as scams.

Anatomy of a fake Olympic shop

The fake sites are practically identical. Each one loads the same storefront, with the same layout, product pages, and promotional banners.

That’s usually a sign the scammers are using a ready-made template and copying it across multiple domains. One obvious giveaway, however, is the pricing.

On the official store, the Tina plush costs €40 and is currently out of stock. On the fake sites, it suddenly reappears at a hugely discounted price—in one case €20, with banners shouting “UP & SAVE 80%.” When an item is sold out everywhere official and a random .top domain has it for half price, you’re looking at bait.

The goal of these sites typically includes:

  • Stealing payment card details entered at checkout
  • Harvesting personal information such as names, addresses, and phone numbers
  • Sending follow-up phishing emails
  • Delivering malware through fake order confirmations or “tracking” links
  • Taking your money and shipping nothing at all

The Olympics are a scammer’s playground

This isn’t the first time cybercriminals have piggybacked on Olympic fever. Fake ticket sites proliferated as far back as the Beijing 2008 Games. During Paris 2024, analysts observed significant spikes in Olympics-themed phishing and DDoS activity.

The formula is simple. Take a globally recognized brand, add urgency and emotional appeal (who doesn’t want an adorable stoat plush for their kid?), mix in limited availability, and serve it up on a convincing-looking website. With over 3 billion viewers expected for Milano Cortina, the pool of potential victims is enormous.

Scammers are getting smarter. AI-powered tools now let them generate convincing phishing pages in multiple languages at scale. The days of spotting a scam by its broken images and multiple typos are fading fast.

Protect yourself from Winter Olympics scams

As excitement builds ahead of the Winter Olympics in Milano Cortina, expect scammers to ramp up their efforts across fake shops, fraudulent ticket sites, bogus livestreams, and social media phishing campaigns.

  • Buy only from shop.olympics.com. Type the address directly into your browser and bookmark it. Don’t click links from ads or emails.
  • Don’t trust extreme discounts. If it’s sold out officially but “50–80% off” elsewhere, it’s likely a scam.
  • Check the domain closely. Watch for odd extensions like .top or .shop, extra hyphens, or letter swaps like “winter0lympicsstore.”
  • Never enter payment details on unfamiliar sites. If something feels off, leave immediately.
  • Use browser protection. Tools like Malwarebytes Browser Guard block known scam sites in real time, for free. Scam Guard can help you check suspicious websites before you buy.
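To make the domain-checking advice above concrete, here is a minimal, illustrative sketch (not a Malwarebytes tool) of a few red-flag heuristics for look-alike shop domains. The suspicious-TLD list, the reference domain, and the heuristics themselves are assumptions chosen for this example, not a real detection ruleset.

```python
# Illustrative red-flag heuristics for look-alike shop domains.
# The TLD list and reference domain are assumptions for this sketch.
import re

SUSPICIOUS_TLDS = {"top", "shop", "icu", "xyz"}
OFFICIAL = "shop.olympics.com"

def red_flags(domain: str) -> list[str]:
    """Return a list of human-readable warnings for a domain name."""
    flags = []
    tld = domain.rsplit(".", 1)[-1].lower()
    if tld in SUSPICIOUS_TLDS:
        flags.append(f"unusual TLD .{tld}")
    if domain.count("-") >= 2:
        flags.append("multiple hyphens")
    # Digit-for-letter swaps such as '0' for 'o' or '1' for 'l'
    if re.search(r"[a-z]\d|\d[a-z]", domain.split(".")[0]):
        flags.append("digits mixed into the name (possible letter swap)")
    if domain != OFFICIAL and "olympic" in domain:
        flags.append("imitates the official Olympics store")
    return flags

print(red_flags("winter0lympicsstore.top"))
```

Heuristics like these only catch the crudest fakes; they complement, not replace, buying directly from the official store.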

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

AI-Powered Knowledge Graph Generator &#x26; APTs, (Thu, Feb 12th)

12 February 2026 at 22:04

Unstructured text to interactive knowledge graph via LLM & SPO triplet extraction

Courtesy of TLDR InfoSec Launches & Tools again, another fine discovery in Robert McDermott’s AI Powered Knowledge Graph Generator. Robert’s system takes unstructured text, uses your preferred LLM and extracts knowledge in the form of Subject-Predicate-Object (SPO) triplets, then visualizes the relationships as an interactive knowledge graph.[1]

Robert has documented AI Powered Knowledge Graph Generator (AIKG) beautifully, I’ll not be regurgitating it needlessly, so please read further for details regarding features, requirements, configuration, and options. I will detail a few installation insights that got me up and running quickly.
The feature summary is this:
AIKG automatically splits large documents into manageable chunks for processing and uses AI to identify entities and their relationships. As AIKG ensures consistent entity naming across document chunks, it discovers additional relationships between disconnected parts of the graph, then creates an interactive graph visualization. AIKG works with any OpenAI-compatible API endpoint; I used Ollama exclusively here with Google’s Gemma 3, a lightweight family of models built on Gemini technology. Gemma 3 is multimodal, processing text and images, and is the current, most capable model that runs on a single GPU. I ran my experiments on a Lenovo ThinkBook 14 G4 circa 2022 with an AMD Ryzen 7 5825U 8-core processor, Radeon Graphics, and 40 GB of memory running Ubuntu 24.04.3 LTS.
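The chunking step described above can be approximated in a few lines. This is a generic sketch, not AIKG's actual implementation; the chunk size and overlap values are illustrative, and `chunk_text` is a name I chose for the example.

```python
# Sketch of the document-chunking step AIKG describes: split long text into
# overlapping word-window chunks small enough for an LLM context window.
# Chunk size and overlap values are illustrative, not AIKG's defaults.
def chunk_text(text: str, chunk_words: int = 200, overlap: int = 40) -> list[str]:
    words = text.split()
    chunks = []
    step = chunk_words - overlap
    for start in range(0, len(words), step):
        chunk = words[start:start + chunk_words]
        if chunk:
            chunks.append(" ".join(chunk))
        if start + chunk_words >= len(words):
            break
    return chunks
```

The overlap matters: it is what lets an extractor keep entity names consistent across chunk boundaries, which AIKG relies on to stitch relationships between otherwise disconnected parts of the graph.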
My installation guidelines assume you have a full instance of Python3 and Ollama installed. My installation was implemented under my tools directory.

python3 -m venv aikg # Establish a virtual environment for AIKG
cd aikg
git clone https://github.com/robert-mcdermott/ai-knowledge-graph.git # Clone AIKG into virtual environment
bin/pip3 install -r ai-knowledge-graph/requirements.txt # Install AIKG requirements
bin/python3 ai-knowledge-graph/generate-graph.py --help # Confirm AIKG installation is functional
ollama pull gemma3 # Pull the Gemma 3 model from Ollama

I opted to test AIKG via a couple of articles specific to Russian state-sponsored adversarial cyber campaigns as input:

My use of these articles in particular was based on the assertion that APT and nation-state activity is often well represented via interactive knowledge graph. I’ve advocated endlessly for visual link analysis and graph tech, including Maltego (the OG of knowledge graph tools) as far back as 2009, Graphviz in 2015, GraphFrames in 2018, and Beagle in 2019. As always, visualization, coupled with entity-relationship mappings, is imperative for security analysts, threat hunters, and any security professional seeking deeper and more meaningful insights. While the SecurityWeek piece is a bit light on content and density, it served well as a good initial experiment.
The CISA advisory is much more dense and served as an excellent, more extensive experiment.
I pulled them both into individual text files more easily ingested for processing with AIKG, shared for you here if you’d like to play along at home.

Starting with SecurityWeek’s Russia’s APT28 Targeting Energy Research, Defense Collaboration Entities, and the subsequent Russia-APT28-targeting.txt file I created for model ingestion, I ran Gemma 3 as a 12 billion parameter model as follows:

ollama run gemma3:12b # Run Gemma 3 locally as 12 billion parameter model
~/tools/aikg/bin/python3 ~/tools/aikg/ai-knowledge-graph/generate-graph.py --config ~/tools/aikg/ai-knowledge-graph/config.toml --input data/Russia-APT28-targeting.txt --output Russia-APT28-targeting-kg-12b.html

You may want or need to run Gemma 3 with fewer parameters depending on the performance and capabilities of your local system. Note that I am calling file paths rather explicitly to overcome complaints about missing config and input files.
The article makes reference to APT credential harvesting activity targeting people associated with a Turkish energy and nuclear research agency, as well as a spoofed OWA login portal containing Turkish-language text to target Turkish scientists and researchers. As part of its use of semantic triples (Subject-Predicate-Object (SPO) triplets), how does AIKG perform at linking entities, attributes, and values into machine-readable statements [2] derived from the article content, as seen in Figure 1?
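The SPO statements described above map directly onto a tiny in-memory graph. A minimal sketch follows; the triplets below paraphrase relationships mentioned in this diary and are not actual AIKG output, and `outgoing` is a helper name invented for the example.

```python
# Sketch: representing extracted knowledge as Subject-Predicate-Object
# triplets and querying them as a tiny in-memory graph. The triplets
# paraphrase relationships from the article; they are not AIKG output.
from collections import defaultdict

triplets = [
    ("threat actor", "targeted", "people"),
    ("people", "associated with", "turkish research agency"),
    ("threat actor", "spoofed", "owa login portal"),
    ("owa login portal", "contains", "turkish-language text"),
]

# Adjacency index: subject -> list of (predicate, object) edges
graph = defaultdict(list)
for s, p, o in triplets:
    graph[s].append((p, o))

def outgoing(subject: str):
    """Return every (predicate, object) edge leaving a subject node."""
    return graph.get(subject, [])

print(outgoing("threat actor"))
```

Once knowledge is in this shape, rendering it as an interactive graph is largely a presentation-layer concern, which is exactly the division of labor AIKG exploits.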

AIKG 12b

Figure 1: AIKG Gemma 3:12b result from SecurityWeek article

Quite well, I’d say. To manipulate the graph, you may opt to disable physics in the graph output toolbar so you can tweak node placements. As drawn from the statistics view for this graph, AIKG generated 38 nodes, 105 edges, 52 extracted edges, 53 inferred edges, and four communities. You can further filter as you see fit, but even unfiltered, and with just a little bit of tuning at the presentation layer, we can immediately see success where semantic triples emerge to excellent effect. We can see entity/relationship connections where, as an example, threat actor –> targeted –> people and people –> associated with –> think tanks, with direct reference to the aforementioned OWA portal and Turkish language. If you’re a cyber threat intelligence (CTI) analyst or investigator, drawing visual conclusions derived from text processing will really help you step up your game in the form of context and enrichment in report writing. This same graph extends itself to represent the connection between the victims and the exploitation methods and infrastructure. If you don’t want to go through a full installation process yourself to complete your own model execution, you should still grab the JSON and HTML output files and experiment with them in your browser. You’ll get a real sense of the power and impact of an interactive knowledge graph built with the combined power of LLMs and SPO triplets.
For a second experiment I selected related content in a longer, more in depth analysis courtesy of a CISA Cybersecurity Advisory (CISA friends, I’m pulling for you in tough times). If you are following along at home, be sure to exit ollama so you can rerun it with additional parameters (27b vs 12b); pass /bye as a message, and restart:

ollama run gemma3:27b # Run Gemma 3 locally with 27 billion parameters
~/tools/aikg/bin/python3 ~/tools/aikg/ai-knowledge-graph/generate-graph.py --config ~/tools/aikg/ai-knowledge-graph/config.toml --input ~/tools/aikg/ai-knowledge-graph/data/Russian-GRU-Targeting-Logistics-Tech.txt --output Russian-GRU-Targeting-Logistics-Tech-kg-27b.html

Given the density and length of this article, the graph as initially rendered is a bit untenable (no fault of AIKG) and requires some tuning and filtering for optimal effect. Graph Statistics for this experiment included 118 nodes, 486 edges, 152 extracted edges, 334 inferred edges, and seven communities. To filter, with a focus again on actions taken by Russian APT operatives, I chose as follows:

  • Select a Node by ID: threat actors
  • Select a network item: Nodes
  • Select a property: color
  • Select value(s): #e41a1c (red)

The result is more visually manageable, and allows ready tweaking to optimize network connections, as seen in Figure 2.
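The toolbar filter applied above can be mimicked over the node list in AIKG's exported JSON/HTML output, which follows the vis.js convention of node records with `id` and `color` properties. The node records below are illustrative stand-ins, though the color value `#e41a1c` matches the filter chosen in the text.

```python
# Sketch of the node filter applied above, over a vis.js-style node list
# like the one embedded in AIKG's JSON/HTML output. Node records here are
# illustrative; #e41a1c is the red used in the filter described in the text.
nodes = [
    {"id": "threat actors", "color": "#e41a1c"},
    {"id": "defense industry", "color": "#377eb8"},
    {"id": "credential access", "color": "#e41a1c"},
]

def filter_nodes(nodes, prop, value):
    """Keep only nodes whose chosen property matches the chosen value."""
    return [n for n in nodes if n.get(prop) == value]

red_nodes = filter_nodes(nodes, "color", "#e41a1c")
print([n["id"] for n in red_nodes])
```

Filtering on color works here because AIKG assigns colors per community, so a color filter is effectively a community filter.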

AIKG 27b

Figure 2: AIKG Gemma 3:27b result from CISA advisory

Shocking absolutely no one, we immediately encapsulate actor activity specific to credential access and influence operations via shell commands, Active Directory commands, and PowerShell commands. The conclusive connection is drawn, however, as threat actors –> targets –> defense industry. Ya think? ;-) In the advisory, see Description of Targets, including defense industry, as well as Initial Access TTPs, including credential guessing and brute force, and finally Post-Compromise TTPs and Exfiltration regarding various shell and AD commands. As a security professional reading this treatise, it’s reasonable to assume you’ve read a CISA Cybersecurity Advisory before. As such, it’s also reasonable to assume you’ll agree that knowledge graph generation from a highly dense, content-rich collection of IOCs and behaviors is highly useful. I intend to work with my workplace ML team to further incorporate the principles explored herein as part of our context and enrichment generation practices. I suggest you consider the same if you have the opportunity. While SPO triplets, aka semantic triples, are most often associated with search engine optimization (SEO), their use, coupled with LLM power, really shines for threat intelligence applications.

Cheers…until next time.

Russ McRee | @holisticinfosec | infosec.exchange/@holisticinfosec | LinkedIn.com/in/russmcree

Recommended reading and tooling:

References

[1] McDermott, R. (2025) AI Knowledge Graph. Available at: https://github.com/robert-mcdermott/ai-knowledge-graph (Accessed: 18 January 2026 - 11 February 2026).
[2] Reduan, M.H., (2025) Semantic Triples: Definition, Function, Components, Applications, Benefits, Drawbacks and Best Practices for SEO. Available at: https://www.linkedin.com/pulse/semantic-triples-definition-function-components-benefits-reduan-nqmec/ (Accessed: 11 February 2026).

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Examples of SAML Providers

Explore top examples of SAML providers like Okta, Azure AD, and Ping Identity. Learn how to implement SAML SSO for secure enterprise identity management.

The post Examples of SAML Providers appeared first on Security Boulevard.

Demystifying SAML: The Basics of Secure Single Sign-On

Learn the basics of SAML authentication for Enterprise SSO. Understand IdP vs SP roles, XML assertions, and how to secure your B2B infrastructure effectively.

The post Demystifying SAML: The Basics of Secure Single Sign-On appeared first on Security Boulevard.
