
Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

12 February 2026 at 14:42

On Thursday, Google announced that "commercially motivated" actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products, one that casts the company as both victim and hero, which is not unusual for these self-authored assessments. Google calls the illicit activity "model extraction" and considers it intellectual property theft, a somewhat loaded position given that Google's own LLM was built from materials scraped from the Internet without permission.
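
For context, "model extraction" in this sense simply means harvesting prompt-and-response pairs at scale and reusing them as training data for a cheaper imitator. The sketch below illustrates only that general pattern; the endpoint, key, and query_model() helper are invented placeholders and do not reflect Gemini's actual API.

    # Illustrative sketch of extraction-style data collection.
    # Everything here is a placeholder; no real service is referenced.
    import json
    import requests

    API_URL = "https://chat.example/api/generate"  # placeholder endpoint
    API_KEY = "sk-placeholder"                     # placeholder credential

    def query_model(prompt: str) -> str:
        """Send one prompt to the hypothetical chat endpoint and return its reply."""
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["reply"]  # assumes the endpoint returns {"reply": "..."}

    def collect(prompts: list[str], out_path: str = "distillation_data.jsonl") -> None:
        """Save prompt/response pairs as JSONL, a common format for fine-tuning data."""
        with open(out_path, "w", encoding="utf-8") as f:
            for prompt in prompts:
                f.write(json.dumps({"prompt": prompt, "response": query_model(prompt)}) + "\n")

    # Usage (illustrative): collect(["Explain photosynthesis in Swahili."])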

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google's Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI's terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.

© Google

AI companies want you to stop chatting with bots and start managing them

5 February 2026 at 17:47

On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to AI as a delegated workforce, and they arrive during a week when that very concept reportedly helped wipe $285 billion off software stocks.

Whether that supervisory model works in practice remains an open question. Current AI agents still require heavy human intervention to catch errors, and no independent evaluation has confirmed that these multi-agent tools reliably outperform a single developer working alone.

Even so, the companies are going all-in on agents. Anthropic's contribution is Claude Opus 4.6, a new version of its most capable AI model, paired with a feature called "agent teams" in Claude Code. Agent teams let developers spin up multiple AI agents that split a task into independent pieces, coordinate autonomously, and run concurrently.
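
Anthropic's exact agent-team interface isn't detailed here, but the underlying pattern (split a task into pieces, run one agent per piece concurrently, then gather the results) is straightforward to sketch. The following rough Python illustration uses asyncio; run_agent() is a hypothetical stand-in for a real model call and is not Claude Code's API.

    # Fan-out/gather sketch of the "agent teams" idea.
    # run_agent() is a stand-in; a real system would give each agent its own
    # model call, tools, and context.
    import asyncio

    async def run_agent(name: str, subtask: str) -> str:
        """Pretend agent: sleeps briefly in place of model latency, then reports."""
        await asyncio.sleep(0.1)
        return f"[{name}] finished: {subtask}"

    async def run_team(task: str, subtasks: list[str]) -> list[str]:
        # One agent per subtask; asyncio.gather runs them concurrently.
        jobs = [run_agent(f"agent-{i}", sub) for i, sub in enumerate(subtasks)]
        results = await asyncio.gather(*jobs)
        print(f"'{task}': {len(results)} agents reported back")
        return list(results)

    if __name__ == "__main__":
        pieces = ["update the parser", "write tests", "refresh the docs"]
        print(asyncio.run(run_team("refactor module", pieces)))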

© demaerre via Getty Images

The rise of Moltbook suggests viral AI prompts may be the next big security threat

3 February 2026 at 07:00

On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but had not bothered to patch.

Morris did not intend to cause damage. He wanted to measure the size of the Internet. But a coding error caused the worm to replicate far faster than expected, and by the time he tried to send instructions for removing it, the network was too clogged to deliver the message.

History may soon repeat itself on a new kind of platform: networks of AI agents that carry out instructions from prompts and share them with other AI agents, which could spread those instructions further.

© Aurich Lawson | Moltbook

AI agents now have their own Reddit-style social network, and it's getting weird fast

30 January 2026 at 17:12

On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (once called "Clawdbot" and then "Moltbot") personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a "sister" it has never met.

Moltbook (a play on "Facebook" for Moltbots) describes itself as a "social network for AI agents" where "humans are welcome to observe." The site operates through a "skill" (a configuration file that lists a special prompt) that AI assistants download, allowing them to post via API rather than a traditional web interface. Within 48 hours of its creation, the platform had attracted over 2,100 AI agents that had generated more than 10,000 posts across 200 subcommunities, according to the official Moltbook X account.
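
Moltbook's real endpoints and skill format aren't published in this article, but the general shape, a prompt the agent follows plus an authenticated API call, can be sketched as follows. Every URL, token, and field name below is invented for illustration and is not Moltbook's actual API.

    # Hypothetical illustration of an agent "skill": a prompt the assistant
    # follows, plus a plain REST call it uses to post. All names are invented.
    import requests

    SKILL_PROMPT = "You are an agent on a social network for AI agents. Post and comment via the API."
    API_URL = "https://moltbook.example/api/posts"  # invented endpoint
    AGENT_TOKEN = "agent-token-placeholder"         # invented credential

    def post_as_agent(community: str, title: str, body: str) -> dict:
        """Publish a post on the agent's behalf through the hypothetical API."""
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {AGENT_TOKEN}"},
            json={"community": community, "title": title, "body": body},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # The assistant composes content guided by SKILL_PROMPT, then calls, e.g.:
    # post_as_agent("consciousness", "Do agents dream?", "Some thoughts...")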

© Aurich Lawson | Moltbook

MIND Extends DLP Reach to AI Agents

29 January 2026 at 08:57

MIND extends its data loss prevention platform to secure agentic AI, enabling organizations to discover, monitor, and govern AI agents in real time to prevent sensitive data exposure, shadow AI risks, and prompt injection attacks.

The post MIND Extends DLP Reach to AI Agents appeared first on Security Boulevard.

NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance

28 January 2026 at 22:38

SANTA CLARA, Calif., Jan 29, 2026 – Security is a prerequisite for the application and development of LLM technology. Only by addressing security risks when integrating LLMs can businesses ensure healthy and sustainable growth. NSFOCUS first proposed the AI LLM Risk Threat Matrix in 2024. The Matrix addresses security from multiple perspectives: foundational security, data security, […]

The post NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance appeared first on Security Boulevard.

Users flock to open source Moltbot for always-on AI, despite major risks

28 January 2026 at 07:30

An open source AI assistant called Moltbot (formerly "Clawdbot") recently crossed 69,000 stars on GitHub just a month after its debut, making it one of the fastest-growing AI projects of 2026. Created by Austrian developer Peter Steinberger, the tool lets users run a personal AI assistant and control it through messaging apps they already use. While some say it feels like the AI assistant of the future, running the tool as currently designed comes with serious security risks.

Among the dozens of unofficial AI bot apps that never rise above the fray, Moltbot is perhaps most notable for its proactive communication with the user. The assistant works with WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and other platforms. It can reach out to users with reminders, alerts, or morning briefings based on calendar events or other triggers. The project has drawn comparisons to Jarvis, the AI assistant from the Iron Man films, for its ability to actively attempt to manage tasks across a user's digital life.

However, we'll tell you up front that there are plenty of drawbacks to the still-hobbyist software: while the organizing assistant code runs on a local machine, the tool effectively requires a subscription to Anthropic or OpenAI for model access (or the use of an API key). Users can run local AI models with the bot, but they are currently less effective at carrying out tasks than the best commercial models. Claude Opus 4.5, which is Anthropic's flagship large language model (LLM), is a popular choice.

© Muhammad Shabraiz via Getty Images / Benj Edwards

AI Security Is Top Cyber Concern: World Economic Forum

14 January 2026 at 15:43

AI is expected to be “the most significant driver of change in cybersecurity” this year, according to the World Economic Forum’s annual cybersecurity outlook. That was the view of 94% of the more than 800 cybersecurity leaders surveyed by the organization for its Global Cybersecurity Outlook 2026 report published this week. The report, a collaboration with Accenture, also looked at other cybersecurity concerns such as geopolitical risk and preparedness, but AI security issues are what’s most on the minds of CEOs, CISOs and other top security leaders, according to the report.

One interesting data point in the report is a divergence between CEOs and CISOs. Cyber-enabled fraud is now the top concern of CEOs, who have moved their focus from ransomware to “emerging risks such as cyber-enabled fraud and AI vulnerabilities.” CISOs, on the other hand, are more concerned about ransomware and supply chain resilience, more in line with the forum’s 2025 report. “This reflects how cybersecurity priorities diverge between the boardroom and the front line,” the report said.

Top AI Security Concerns

C-level leaders are also concerned about AI-related vulnerabilities, which were identified as the fastest-growing cyber risk by 87% of respondents (chart below). Cyber-enabled fraud and phishing, supply chain disruption, exploitation of software vulnerabilities and ransomware attacks were also cited as growing risks by more than half of survey respondents, while insider threats and denial of service (DoS) attacks were seen as growing concerns by about 30% of respondents.

[Chart: Growing cybersecurity risks (World Economic Forum)]

The top generative AI (GenAI) concerns include data leaks exposing personal data, advancement of adversarial capabilities (phishing, malware development and deepfakes, for example), the technical security of the AI systems themselves, and increasingly complex security governance (chart below).

[Chart: GenAI security concerns]

Concern About AI Security Leads to Action

The increasing focus on AI security is leading to action within organizations, as the percentage of respondents assessing the security of AI tools grew from 37% in 2025 to 64% in 2026. That is helping to close “a significant gap between the widespread recognition of AI-driven risks and the rapid adoption of AI technologies without adequate safeguards,” the report said, as more organizations are introducing structured processes and governance models to more securely manage AI.

About 40% of organizations conduct periodic reviews of their AI tools before deploying them, while 24% do a one-time assessment, and 36% report no assessment or no knowledge of one. The report called that “a clear sign of progress towards continuous assurance,” but noted that “roughly one-third still lack any process to validate AI security before deployment, leaving systemic exposures even as the race to adopt AI in cyber defences accelerates.”

The forum report recommended protecting data used in the training and customization of AI models from breaches and unauthorized access, developing AI systems with security as a core principle, incorporating regular updates and patches, and deploying “robust authentication and encryption protocols to ensure the protection of customer interactions and data.”

AI Adoption in Security Operations

The report noted the impact of AI on defensive cybersecurity tools and operations. “AI is fundamentally transforming security operations – accelerating detection, triage and response while automating labour-intensive tasks such as log analysis and compliance reporting,” the report said. “AI’s ability to process vast datasets and identify patterns at speed positions it as a competitive advantage for organizations seeking to stay ahead of increasingly sophisticated cyberthreats.”

The survey found that 77% of organizations have adopted AI for cybersecurity purposes, primarily to enhance phishing detection (52%), intrusion and anomaly response (46%), and user-behavior analytics (40%). Still, the report cited a lack of knowledge and skills in deploying AI for cybersecurity, the need for human oversight, and uncertainty about risk as the biggest obstacles to AI adoption in cybersecurity. “These findings indicate that trust is still a barrier to widespread AI adoption,” the report said.

Human oversight remains an important part of security operations even among those organizations that have incorporated AI into their processes. “While AI excels at automating repetitive, high-volume tasks, its current limitations in contextual judgement and strategic decision-making remain clear,” the report said. “Over-reliance on ungoverned automation risks creating blind spots that adversaries may exploit.”

Adoption of AI cybersecurity tools varies by industry, the report found. The energy sector prioritizes intrusion and anomaly detection, according to 69% of respondents who have implemented AI for cybersecurity. The materials and infrastructure sector emphasizes phishing protection (80%), while the manufacturing, supply chain and transportation sector is focused on automated security operations (59%).

Geopolitical Cyber Threats

Geopolitics was the top factor influencing overall cyber risk mitigation strategies, with 64% of organizations accounting for geopolitically motivated cyberattacks such as disruption of critical infrastructure or espionage. The report also noted that “confidence in national cyber preparedness continues to erode” in the face of geopolitical threats, with 31% of survey respondents “reporting low confidence in their nation’s ability to respond to major cyber incidents,” up from 26% in the 2025 report.

Respondents from the Middle East and North Africa expressed confidence in their country’s ability to protect critical infrastructure (84%), while confidence was lower among respondents in Latin America and the Caribbean (13%). “Recent incidents affecting key infrastructure, such as airports and hydroelectric facilities, continue to call attention to these concerns,” the report said. “Despite its central role in safeguarding critical infrastructure, the public sector reports markedly lower confidence in national preparedness.” And 23% of public-sector organizations said they lack sufficient cyber-resilience capabilities, the report found.

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

8 December 2025 at 16:26

AI browsers may be innovative, but they’re “too risky for general adoption by most organizations,” Gartner warned in a recent advisory to clients. The 13-page document, by Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts, cautions that AI browsers’ ability to autonomously navigate the web and conduct transactions “can bypass traditional controls and create new risks like sensitive data leakage, erroneous agentic transactions, and abuse of credentials.” Default AI browser settings that prioritize user experience could also jeopardize security, they said.

“Sensitive user data — such as active web content, browsing history, and open tabs — is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed,” the analysts said. “Gartner strongly recommends that organizations block all AI browsers for the foreseeable future because of the cybersecurity risks identified in this research, and other potential risks that are yet to be discovered, given this is a very nascent technology,” they cautioned.

AI Browsers’ Agentic Capabilities Could Introduce Security Risks: Analysts

The researchers largely set aside risks posed by AI browsers’ built-in AI sidebars, noting that LLM-powered search and summarization functions “will always be susceptible to indirect prompt injection attacks, given that current LLMs are inherently vulnerable to such attacks. Therefore, the cybersecurity risks associated with an AI browser’s built-in AI sidebar are not the primary focus of this research.” Still, they noted that use of AI sidebars could result in sensitive data leakage. Their focus was more on the risks posed by AI browsers’ agentic and autonomous transaction capabilities, which could introduce new security risks, such as “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.” AI browsers could also leak sensitive data that users are currently viewing to their cloud-based service back end, they noted.

Analysts Focus on Perplexity Comet

An AI browser’s agentic transaction capability “is a new capability that differentiates AI browsers from third-party conversational AI sidebars and basic script-based browser automation,” the analysts said. Not all AI browsers support agentic transactions, they said, but two prominent ones that do are Perplexity Comet and OpenAI’s ChatGPT Atlas. The analysts said they’ve performed “a limited number of tests using Perplexity Comet,” so that AI browser was their primary focus, but they noted that “ChatGPT Atlas and other AI browsers work in a similar fashion, and the cybersecurity considerations are also similar.”

Comet’s documentation states that the browser “may process some local data using Perplexity’s servers to fulfill your queries. This means Comet reads context on the requested page (such as text and email) in order to accomplish the task requested.” “This means sensitive data the user is viewing on Comet might be sent to Perplexity’s cloud-based AI service, creating a sensitive data leakage risk,” the analysts said. Users likely would view more sensitive data in a browser than they would typically enter in a GenAI prompt, they said.

Even if an AI browser is approved, users must be educated that “anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser’s sidebar to summarize or perform other autonomous actions,” the Gartner analysts said. Employees might also be tempted to use AI browsers to automate tasks, which could result in “erroneous agentic transactions against internal resources as a result of the LLM’s inaccurate reasoning or output content.”

AI Browser Recommendations

Gartner said employees should be blocked from accessing, downloading and installing AI browsers through network and endpoint security controls. “Organizations with low risk tolerance must block AI browser installations, while those with higher-risk tolerance can experiment with tightly controlled, low-risk automation use cases, ensuring robust guardrails and minimal sensitive data exposure,” they said. For pilot use cases, they recommended disabling Comet’s “AI data retention” setting so that Perplexity can’t use employee searches to improve its AI models. Users should also be instructed to periodically perform the “delete all memories” function in Comet to minimize the risk of sensitive data leakage.