
OpenAI Drops ChatGPT Access for Users in China, Russia, Iran – Source: www.databreachtoday.com


Source: www.databreachtoday.com – Author: Rashmi Ramesh (rashmiramesh_) • June 26, 2024 – Categories: Artificial Intelligence & Machine Learning; Cyberwarfare / Nation-State Attacks; Fraud Management & Cybercrime. Users of All OpenAI Services in Unsupported Countries Will Lose Access by July 9. (Image: Shutterstock) OpenAI appears to be removing access to its services for […]

The post OpenAI Drops ChatGPT Access for Users in China, Russia, Iran – Source: www.databreachtoday.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

OpenAI’s ChatGPT ‘Voice Mode’ Doesn’t Meet Safety Standards; Rollout Pushed to July


Experts are raising eyebrows after OpenAI announced a one-month delay in the rollout of its highly anticipated “Voice Mode” feature for ChatGPT, citing safety concerns. The company said it needs more time to ensure the model can “detect and refuse certain content.”
“We’re improving the model’s ability to detect and refuse certain content. We’re also working on enhancing the user experience and scaling our infrastructure to support millions of users while maintaining real-time responses.” - OpenAI
The stalling of the rollout comes a month after OpenAI announced a new safety and security committee that would oversee issues related to the company’s future projects and operations. It is unclear if this postponement was suggested by the committee or by internal stakeholders.

Features of ChatGPT’s ‘Voice Mode’

OpenAI unveiled its GPT-4o system in May, boasting significant advancements in human-computer interaction. “GPT-4o (‘o’ for ‘omni’) is a step towards much more natural human-computer interaction,” OpenAI said at the time. The omni model can respond to audio inputs in an average of 320 milliseconds, which is comparable to human response times in conversation.

Other salient features of “Voice Mode” promise real-time conversations with human-like emotional responses, but this also raises concerns about potential manipulation and the spread of misinformation. The May announcement gave a glimpse of the model’s ability to understand nuances like tone, non-verbal cues and background noise, further blurring the lines between human and machine interaction.

While OpenAI plans an alpha release for a limited group of paid subscribers in July, the broader rollout remains uncertain. The company emphasizes its commitment to a “high safety and reliability” standard, but the exact timeline for wider access hinges on user feedback.

The ‘Sky’ of Controversy Surrounding ‘Voice Mode’

The rollout delay of ChatGPT’s “Voice Mode” follows the controversy sparked by actress Scarlett Johansson, who accused OpenAI of using her voice without permission in demonstrations of the technology. OpenAI refuted the claim, stating that the controversial voice of “Sky” – one of the five voice options Voice Mode offers for responses – belonged to a voice artist and not Johansson. The company said an internal team reviewed submissions from more than 400 voice artists, from a product and research perspective, and after careful consideration zeroed in on five of them: Breeze, Cove, Ember, Juniper and Sky. OpenAI did, however, confirm that CEO Sam Altman had reached out to Johansson about lending her voice.
“On September 11, 2023, Sam spoke with Ms. Johansson and her team to discuss her potential involvement as a sixth voice actor for ChatGPT, along with the other five voices, including Sky. She politely declined the opportunity one week later through her agent.” - OpenAI
Altman made one last attempt to onboard the Hollywood star this May, when he again contacted her team to inform them of the GPT-4o launch and ask whether she might reconsider joining as a future additional voice in ChatGPT. Instead, with the Sky demo circulating, Johansson threatened to sue the company for “stealing” her voice. Under pressure from her lawyers, OpenAI pulled the Sky voice sample on May 19.
“The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.” – Sam Altman
Although the issue appears to be resolved for now, the dispute between Johansson and Altman brought to the fore the ethical considerations surrounding deepfakes and synthetic media.

Likely Delays in Apple AI and OpenAI Partnership Too

If the technical issues and the Sky controversy weren’t enough, Apple’s recent brush with EU regulators adds another layer of complication, casting a shadow over the future of ChatGPT integration into Apple devices. Announced earlier this month, the partnership aimed to leverage OpenAI’s technology in the Cupertino tech giant’s “Apple Intelligence” system. However, with Apple facing potential regulatory roadblocks under the EU’s Digital Markets Act (DMA), the integration’s fate remains unclear.

This confluence of factors – safety concerns, potential for misuse, and regulatory hurdles – paints a complex picture for OpenAI’s “Voice Mode.” The cybersecurity and regulatory community will undoubtedly be watching closely as the technology evolves, with a keen eye on potential security vulnerabilities and the implications for responsible AI development.

Google’s Project Naptime Aims for AI-Based Vulnerability Research

25 June 2024 at 12:35

Security analysts at Google are developing a framework that they hope will enable large language models (LLMs) to eventually be able to run automated vulnerability research, particularly analyses of malware variants. The analysts with Google’s Project Zero – a group founded a decade ago whose job it is to find zero-day vulnerabilities – have been..

The post Google’s Project Naptime Aims for AI-Based Vulnerability Research appeared first on Security Boulevard.

Tech Leaders Gather This Week for AI Risk Summit + CISO Forum at the Ritz-Carlton, Half Moon Bay

24 June 2024 at 13:56

SecurityWeek’s AI Risk Summit + CISO Forum brings together business and government stakeholders to provide meaningful guidance on risk management and cybersecurity in the age of artificial intelligence.

The post Tech Leaders Gather This Week for AI Risk Summit + CISO Forum at the Ritz-Carlton, Half Moon Bay appeared first on SecurityWeek.

Facial Recognition Startup Clearview AI Settles Privacy Suit

24 June 2024 at 03:09

Facial recognition startup Clearview AI has reached a settlement in an Illinois lawsuit alleging its massive photographic collection of faces violated the subjects’ privacy rights.

The post Facial Recognition Startup Clearview AI Settles Privacy Suit appeared first on SecurityWeek.

AI Everywhere: Key Takeaways from the Gartner Security & Risk Management Summit 2024

21 June 2024 at 19:03

The Gartner Security & Risk Management Summit 2024 showcased the transformative power of artificial intelligence (AI) across various industries, with a particular focus on the cybersecurity landscape. As organizations increasingly adopt AI for innovation and efficiency, it is crucial to understand the opportunities and challenges that come with this technology. Here are the top three […]

The post AI Everywhere: Key Takeaways from the Gartner Security & Risk Management Summit 2024 first appeared on SlashNext.

The post AI Everywhere: Key Takeaways from the Gartner Security & Risk Management Summit 2024 appeared first on Security Boulevard.

AI Weights: Securing the Heart and Soft Underbelly of Artificial Intelligence

20 June 2024 at 08:19

AI model weights govern outputs from the system, but if altered or ‘poisoned,’ they can make the output erroneous and, in extremis, useless and dangerous.

The post AI Weights: Securing the Heart and Soft Underbelly of Artificial Intelligence appeared first on SecurityWeek.

New Threat Group Void Arachne Targets Chinese-Speaking Audience; Promotes AI Deepfake and Misuse

By: Alan J
19 June 2024 at 16:35


A new threat actor group called Void Arachne is conducting a malware campaign targeting Chinese-speaking users. The group is distributing malicious MSI installer files bundled with legitimate software like AI tools, Chinese language packs, and virtual private network (VPN) clients. During installation, these files also covertly install the Winos 4.0 backdoor, which can fully compromise systems.

Void Arachne Tactics

Researchers from Trend Micro discovered that the Void Arachne group employs multiple techniques to distribute malicious installers, including search engine optimization (SEO) poisoning and posting links on Chinese-language Telegram channels.
  • SEO Poisoning: The group set up websites posing as legitimate software download sites. Through SEO poisoning, they pushed these sites to rank highly on search engines for common Chinese software keywords. The sites host MSI installer files containing Winos malware bundled with software like Chrome, language packs, and VPNs. Victims unintentionally infect themselves with Winos while believing they are installing only the intended software (a basic download-verification sketch follows this list).
  • Targeting VPNs: Void Arachne frequently targets Chinese VPN software in their installers and Telegram posts. Exploiting interest in VPNs is an effective infection tactic, as VPN usage is high among Chinese internet users due to government censorship. (Image: Chinese VPN lure used by Void Arachne. Source: trendmicro.com)
  • Telegram Channels: In addition to SEO poisoning, Void Arachne shared malicious installers in Telegram channels focused on Chinese language and VPN topics. Channels with tens of thousands of users pinned posts with infected language packs and AI software installers, increasing exposure.
  • Deepfake Pornography: A concerning discovery was the group promoting nudifier apps generating nonconsensual deepfake pornography. They advertised the ability to undress photos of classmates and colleagues, encouraging harassment and sextortion. Infected nudifier installers were pinned prominently in their Telegram channels.
  • Face/Voice Swapping Apps: Void Arachne also advertised voice changing and face swapping apps enabling deception campaigns like virtual kidnappings. Attackers can use these apps to impersonate victims and pressure their families for ransom. As with nudifiers, infected voice/face swapper installers were shared widely on Telegram.
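
A simple, if partial, defense against trojanized installers of this kind is to compare the hash of a downloaded file against the value the legitimate vendor publishes before running it. The snippet below is a minimal, generic sketch of that check in Python; the file name and expected hash are placeholders for illustration, not artifacts from this campaign.

```python
# Minimal sketch: verify a downloaded installer against a vendor-published
# SHA-256 hash before running it. The file name and expected hash below are
# placeholders, not indicators from the Void Arachne campaign.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    installer = Path("chinese-language-pack.msi")  # placeholder file name
    published = "0" * 64                           # placeholder vendor hash
    actual = sha256_of(installer)
    print("OK: hashes match" if actual == published
          else f"WARNING: hash mismatch, got {actual}")
```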

Winos 4.0 C&C Framework

The threat actors behind the campaign ultimately aim to install the Winos backdoor on compromised systems. Winos is a sophisticated Windows backdoor written in C++ that can fully take over infected machines. The initial infection begins with a stager module that decrypts malware configurations and downloads the main Winos payload. Command-and-control (C&C) communications are encrypted using generated session keys and a rolling XOR algorithm. The stager module then stores the full Winos module in the Windows registry and executes shellcode to launch it on affected systems. (Image: Winos infection chain. Source: trendmicro.com)

Winos grants remote access, keylogging, webcam control, microphone recording, and distributed denial-of-service (DDoS) capabilities. It also performs system reconnaissance such as registry checks, file searches, and process injection. The malware connects to a command-and-control server to receive additional modules and plugins that expand its functionality. Several of these external plugins were observed collecting saved passwords from programs like Chrome and QQ, deleting antivirus software, and adding themselves to startup folders.
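
Trend Micro’s description mentions session keys and a rolling XOR scheme for the C&C traffic, but the exact key schedule is not documented here. The sketch below therefore only illustrates the general rolling XOR idea analysts commonly encounter, with a hypothetical seed and key-update rule rather than the real Winos 4.0 logic.

```python
# Illustrative rolling XOR decoder. The seed and the key-update rule
# (next key byte = previous ciphertext byte) are hypothetical examples,
# not the documented Winos 4.0 key schedule.
def rolling_xor_decode(data: bytes, seed: int) -> bytes:
    out = bytearray()
    key = seed & 0xFF
    for b in data:
        out.append(b ^ key)
        key = b  # "rolling": the key changes with every ciphertext byte
    return bytes(out)


if __name__ == "__main__":
    # Round-trip demo with the matching forward transform.
    plaintext = b"example-c2.invalid/beacon"  # placeholder string
    key, encoded = 0x5A, bytearray()
    for b in plaintext:
        c = b ^ key
        encoded.append(c)
        key = c
    assert rolling_xor_decode(bytes(encoded), 0x5A) == plaintext
```

Because each key byte depends on the previous ciphertext byte, the keystream differs at every position, which is what distinguishes a rolling XOR from a fixed single-byte XOR.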

Concerning Trend of AI Misuse and Deepfakes

Void Arachne demonstrates technical sophistication and knowledge of effective infection tactics through their usage of SEO poisoning, Telegram channels, AI deepfakes, and voice/face swapping apps. One particularly concerning trend observed in the Void Arachne campaign is the mass proliferation of nudifier applications that use AI to create nonconsensual deepfake pornography. These images and videos are often used in sextortion schemes for further abuse, victim harassment, and financial gain. An English translation of a message advertising the usage of the nudifier AI uses the word "classmate," suggesting that one target market is minors:
Just have appropriate entertainment and satisfy your own lustful desires. Do not send it to the other party or harass the other party. Once you call the police, you will be in constant trouble! AI takes off clothes, you give me photos and I will make pictures for you. Do you want to see the female classmate you yearn for, the female colleague you have a crush on, the relatives and friends you eat and live with at home? Do you want to see them naked? Now you can realize your dream, you can see them naked and lustful for a pack of cigarette money.
(Image of the advertisement. Source: trendmicro.com) Additionally, the threat actors have advertised AI technologies that could be used for virtual kidnapping, a novel deception scheme that leverages AI voice-altering technology to pressure victims into paying ransom. The promotion of this technology for deepfake nudes and virtual kidnapping is the latest example of the dangers of AI misuse.

Tech Leaders to Gather for AI Risk Summit at the Ritz-Carlton, Half Moon Bay June 25-26, 2024

17 June 2024 at 09:32

SecurityWeek’s AI Risk Summit + CISO Forum bring together business and government stakeholders to provide meaningful guidance on risk management and cybersecurity in the age of artificial intelligence.

The post Tech Leaders to Gather for AI Risk Summit at the Ritz-Carlton, Half Moon Bay June 25-26, 2024 appeared first on SecurityWeek.

Netcraft Uses Its AI Platform to Trick and Track Online Scammers

13 June 2024 at 14:00

At the RSA Conference last month, Netcraft introduced a generative AI-powered platform designed to interact with cybercriminals to gain insights into the operations of the conversational scams they’re running and disrupt their attacks. At the time, Ryan Woodley, CEO of the London-based company that offers a range of services from phishing detection to brand, domain,..

The post Netcraft Uses Its AI Platform to Trick and Track Online Scammers appeared first on Security Boulevard.

Cyberattack Hits Dubai: Daixin Team Claims to Steal Confidential Data, Residents at Risk


The city of Dubai, known for its affluence and wealthy residents, has allegedly been hit by a ransomware attack claimed by the cybercriminal group Daixin Team. The group announced the attack on its dark web leak site on Wednesday, claiming to have stolen between 60 and 80 GB of data from the Government of Dubai’s network systems.

According to the Daixin Team’s post, the stolen data includes ID cards, passports, and other personally identifiable information (PII). Although the group noted that the 33,712 files have not been fully analyzed or dumped on the leak site, the potential exposure of such sensitive information is concerning. Dubai, a city with over three million residents and the highest concentration of millionaires globally, presents a rich target for cybercriminals. (Image: Daixin Team’s leak site post. Source: Dark Web)

Potential Impact of the City of Dubai Ransomware Attack

The stolen data reportedly contains extensive personal information, such as full names, dates of birth, nationalities, marital statuses, job descriptions, supervisor names, housing statuses, phone numbers, addresses, vehicle information, primary contacts, and language preferences. Additionally, the databases appear to include business records, hotel records, land ownership details, HR records, and corporate contacts. (Image: Daixin Team leak listing. Source: Dark Web)

Given that over 75% of Dubai’s residents are expatriates, the stolen data provides a treasure trove of information that could be used for targeted spear-phishing attacks, vishing attacks, identity theft, and other malicious activities. The city’s status as a playground for the wealthy, including 212 centi-millionaires and 15 billionaires, further heightens the risk of targeted attacks.

Daixin Team: A Persistent Threat

The Daixin Team, a Russian-speaking ransomware and data extortion group, has been active since at least June 2022. Known primarily for its cyberattacks on the healthcare sector, Daixin has recently expanded its operations to other industries, employing sophisticated hacking techniques. A 2022 report by the US Cybersecurity and Infrastructure Security Agency (CISA) highlights Daixin Team’s focus on the healthcare sector in the United States.

However, the group has also targeted other sectors, including the hospitality industry. Recently, Daixin claimed responsibility for a cyberattack on Omni Hotels & Resorts, exfiltrating sensitive data, including records of all visitors dating back to 2017. In another notable case, Bluewater Health, a prominent hospital network in Ontario, Canada, fell victim to a cyberattack attributed to Daixin Team. The attack affected several hospitals, including Windsor Regional Hospital, Erie Shores Healthcare, Chatham-Kent Health, and Hôtel-Dieu Grace Healthcare.

The Government of Dubai has yet to release an official statement regarding the ransomware attack. At the time of writing, the Dubai government’s official websites were fully functional and showed no signs of disruption, leaving the alleged attack unverified.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

When Vendors Overstep – Identifying the AI You Don’t Need

12 June 2024 at 07:00

AI models are nothing without vast data sets to train them and vendors will be increasingly tempted to harvest as much data as they can and answer any questions later.

The post When Vendors Overstep – Identifying the AI You Don’t Need appeared first on SecurityWeek.

Apple Launches ‘Private Cloud Compute’ Along with Apple Intelligence AI

By: Alan J
11 June 2024 at 19:14


In a bold attempt to redefine cloud security and privacy standards, Apple has unveiled Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed to back its new Apple Intelligence with safety and transparency while integrating Apple devices into the cloud. The move follows widespread concerns about combining artificial intelligence with cloud technology.

Private Cloud Compute Aims to Secure Cloud AI Processing

Apple has stated that its new Private Cloud Compute (PCC) is designed to enforce privacy and security standards over AI processing of private information. “For the first time ever, Private Cloud Compute brings the same level of security and privacy that our users expect from their Apple devices to the cloud,” said an Apple spokesperson. (Image: Private Cloud Compute. Source: security.apple.com)

At the heart of PCC is Apple’s stated commitment to on-device processing. “When Apple is responsible for user data in the cloud, we protect it with state-of-the-art security in our services,” the spokesperson explained. “But for the most sensitive data, we believe end-to-end encryption is our most powerful defense.”

Despite this commitment, Apple has stated that for more sophisticated AI requests, Apple Intelligence needs to leverage larger, more complex models in the cloud. This presented a challenge to the company, as traditional cloud AI security models were found lacking in meeting privacy expectations. Apple stated that PCC is designed with several key features to ensure the security and privacy of user data, claiming the following implementations:
  • Stateless computation: PCC processes user data only for the purpose of fulfilling the user's request, and then erases the data.
  • Enforceable guarantees: PCC is designed to provide technical enforcement for the privacy of user data during processing.
  • No privileged access: PCC does not allow Apple or any third party to access user data without the user's consent.
  • Non-targetability: PCC is designed to prevent targeted attacks on specific users.
  • Verifiable transparency: PCC provides transparency and accountability, allowing users to verify that their data is being processed securely and privately.

Apple Invites Experts to Test Standards; Online Reactions Mixed

At this week's Apple Annual Developer Conference, Apple's CEO Tim Cook described Apple Intelligence as a "personal intelligence system" that could understand and contextualize personal data to deliver results that are "incredibly useful and relevant," making "devices even more useful and delightful." Apple Intelligence mines and processes data across apps, software and services across Apple devices. This mined data includes emails, images, messages, texts, messages, documents, audio files, videos, contacts, calendars, Siri conversations, online preferences and past search history. The new PCC system attempts to ease consumer privacy and safety concerns. In its description of 'Verifiable transparency,' Apple stated:
"Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises. We already have an earlier requirement for our guarantees to be enforceable. Hypothetically, then, if security researchers had sufficient access to the system, they would be able to verify the guarantees."
However, despite Apple's assurances, the announcement of Apple Intelligence drew mixed reactions online, with some already likening it to Microsoft's Recall. In reaction to Apple's announcement, Elon Musk took to X to announce that Apple devices may be banned from his companies, citing the integration of OpenAI as an 'unacceptable security violation.' Others have also raised questions about the information that might be sent to OpenAI. [caption id="attachment_76692" align="alignnone" width="596"]Private Cloud Compute Apple Intelligence 1 Source: X.com[/caption] [caption id="attachment_76693" align="alignnone" width="418"]Private Cloud Compute Apple Intelligence 2 Source: X.com[/caption] [caption id="attachment_76695" align="alignnone" width="462"]Private Cloud Compute Apple Intelligence 3 Source: X.com[/caption] According to Apple's statements, requests made on its devices are not stored by OpenAI, and users’ IP addresses are obscured. Apple stated that it would also add “support for other AI models in the future.” Andy Wu, an associate professor at Harvard Business School, who researches the usage of AI by tech companies, highlighted the challenges of running powerful generative AI models while limiting their tendency to fabricate information. “Deploying the technology today requires incurring those risks, and doing so would be at odds with Apple’s traditional inclination toward offering polished products that it has full control over.”   Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Fortinet to Expand AI, Cloud Security with Lacework Acquisition

10 June 2024 at 11:16

Fortinet, known for network security capabilities within its Fortinet Security Fabric cybersecurity platform, is bolstering its AI and cloud security capabilities with the planned acquisition of Lacework and its AI-based offerings. The companies announced the proposed deal on Monday, with expectations that it will close in the second half of the year. The plan is..

The post Fortinet to Expand AI, Cloud Security with Lacework Acquisition appeared first on Security Boulevard.

Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.

The post Microsoft Recall is a Privacy Disaster appeared first on Security Boulevard.

Prompt Injection Vulnerability in EmailGPT Discovered

6 June 2024 at 12:11

The vulnerability allows attackers to manipulate the AI service to steal data. CyRC recommends immediately removing the application to prevent exploitation.

The post Prompt Injection Vulnerability in EmailGPT Discovered appeared first on Security Boulevard.

OpenAI Disrupts AI-Deployed Influence Operations – Source: www.databreachtoday.com


Source: www.databreachtoday.com – Author: Rashmi Ramesh (rashmiramesh_) • May 31, 2024 – Categories: Artificial Intelligence & Machine Learning; Next-Generation Technologies & Secure Development. Low-Impact Disinformation Campaigns Based in Russia, China, Iran, Israel. OpenAI says it caught actors from China, Russia, Iran and Israel using its tools to create disinformation. (Image: Shutterstock) OpenAI said […]

The post OpenAI Disrupts AI-Deployed Influence Operations – Source: www.databreachtoday.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit

31 May 2024 at 06:10

Altman spent part of his virtual appearance fending off thorny questions about governance, an AI voice controversy and criticism from ousted board members.

The post OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit appeared first on SecurityWeek.

OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model

28 May 2024 at 09:57

OpenAI is setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The post OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model appeared first on SecurityWeek.

Microsoft AI “Recall” feature records everything, secures far less

22 May 2024 at 05:14

Developing an AI-powered threat to security, privacy, and identity is certainly a choice, but it’s one that Microsoft was willing to make this week at its “Build” developer conference.

On Monday, the computing giant unveiled a new line of PCs that integrate Artificial Intelligence (AI) technology to promise faster speeds, enhanced productivity, and a powerful data collection and search tool that screenshots a device’s activity—including password entry—every few seconds.

This is “Recall,” a much-advertised feature within what Microsoft is calling its “Copilot+ PCs,” a reference to the AI assistant and companion which the company released in late 2023. With Recall on the new Copilot+ PCs, users no longer need to manage and remember their own browsing and chat activity. Instead, by regularly taking and storing screenshots of a user’s activity, the Copilot+ PCs can comb through that visual data to deliver answers to natural language questions, such as “Find the site with the white sneakers,” and “blue pantsuit with a sequin lace from abuelita.”
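
Microsoft has not published Recall’s internals, but the general pattern described above (periodic screenshots, text extraction, and a local searchable index) can be illustrated with a rough, hypothetical sketch. This is not Microsoft’s implementation; it assumes the third-party Pillow and pytesseract packages plus a local Tesseract install, and unlike Recall’s claimed design it applies no encryption.

```python
# Hypothetical sketch of the screenshot -> OCR -> local index -> search
# pattern. NOT Microsoft's Recall implementation; requires Pillow,
# pytesseract, and a local Tesseract OCR installation.
import sqlite3
import time

import pytesseract
from PIL import ImageGrab

db = sqlite3.connect("activity_index.db")
db.execute("CREATE TABLE IF NOT EXISTS snapshots (ts REAL, text TEXT)")


def capture_once() -> None:
    image = ImageGrab.grab()                   # full-screen screenshot
    text = pytesseract.image_to_string(image)  # extract any visible text
    db.execute("INSERT INTO snapshots VALUES (?, ?)", (time.time(), text))
    db.commit()


def search(term: str) -> list[tuple[float, str]]:
    # Naive keyword search over everything that was ever on screen,
    # which is precisely why such an index is so sensitive.
    cur = db.execute("SELECT ts, text FROM snapshots WHERE text LIKE ?",
                     (f"%{term}%",))
    return cur.fetchall()


if __name__ == "__main__":
    capture_once()
    print(search("sneakers"))
```

Even this toy version makes the stakes plain: whatever appears on screen, credentials included, ends up as searchable text in a local database.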

As any regularly updated repository of device activity poses an enormous security threat—imagine hackers getting access to a Recall database and looking for, say, Social Security Numbers, bank account info, and addresses—Microsoft has said that all Recall screenshots are encrypted and stored locally on a device.

But, in terms of security, that’s about all users will get, as Recall will not detect and obscure passwords, shy away from recording pornographic material, or turn a blind eye to sensitive information.

According to Microsoft:

“Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.”

The consequences of such a system could be enormous.

With Recall, a CEO’s personal laptop could become an even more enticing target for hackers equipped with infostealers, a journalist’s protected sources could be within closer grasp of an oppressive government that isn’t afraid to target dissidents with malware, and entire identities could be abused and impersonated by a separate device user.

In fact, Recall seems to only work best in a one-device-per-person world. Though Microsoft explained that its Copilot+ PCs will only record Recall snapshots to specific device accounts, plenty of people share devices and accounts. For the domestic abuse survivor who is forced to share an account with their abuser, for the victim of theft who—like many people—used a weak device passcode that can easily be cracked, and for the teenager who questions their identity on the family computer, Recall could be more of a burden than a benefit.

For Malwarebytes General Manager of Consumer Business Unit Mark Beare, Recall raises yet another issue:

“I worry that we are heading to a social media 2.0 like world.”

When users first raced to upload massive quantities of sensitive, personal data onto social media platforms more than 10 years ago, they couldn’t predict how that data would be scrutinized in the future, or how it would be scoured and weaponized by cybercriminals, Beare said.

“With AI there will be a strong pull to put your full self into a model (so it knows you),” Beare said. “I don’t think it’s easy to understand all the negative aspects of what can happen from doing that and how bad actors can benefit.”



Your vacation, reservations, and online dates, now chosen by AI: Lock and Code S05E11

20 May 2024 at 11:10

This week on the Lock and Code podcast…

The irrigation of the internet is coming.

For decades, we’ve accessed the internet much like how we, so long ago, accessed water—by traveling to it. We connected (quite literally), we logged on, and we zipped to addresses and sites to read, learn, shop, and scroll. 

Over the years, the internet was accessible from increasingly more devices, like smartphones, smartwatches, and even smart fridges. But still, it had to be accessed, like a well dug into the ground to pull up the water below.

Moving forward, that could all change.

This year, several companies debuted their vision of a future that incorporates Artificial Intelligence to deliver the internet directly to you, with less searching, less typing, and less decision fatigue. 

For the startup Humane, that vision includes the use of the company’s AI-powered, voice-operated wearable pin that clips to your clothes. By simply speaking to the AI pin, users can text a friend, discover the nutritional facts about food that sits directly in front of them, and even compare the prices of an item found in stores with the price online.

For a separate startup, Rabbit, that vision similarly relies on a small, attractive smart-concierge gadget, the R1. With the bright-orange slab designed in collaboration with the company Teenage Engineering, users can hail an Uber to take them to the airport, play an album on Spotify, and put in a delivery order for dinner.

Away from physical devices, The Browser Company of New York is also experimenting with AI in its own web browser, Arc. In February, the company debuted its endeavor to create a “browser that browses for you” with a snazzy video that showed off Arc’s AI capabilities to create unique, individualized web pages in response to questions about recipes, dinner reservations, and more.

But all these small-scale projects, announced in the first month or so of 2024, had to make room a few months later for big-money interest from the first ever internet conglomerate of the world—Google. At the company’s annual Google I/O conference on May 14, VP and Head of Google Search Liz Reid pitched the audience on an AI-powered version of search in which “Google will do the Googling for you.”

Now, Reid said, even complex, multi-part questions can be answered directly within Google, with no need to click a website, evaluate its accuracy, or flip through its many pages to find the relevant information within.

This, it appears, could be the next phase of the internet… and our host David Ruiz has a lot to say about it.

Today, on the Lock and Code podcast, we bring back Director of Content Anna Brading and Cybersecurity Evangelist Mark Stockley to discuss AI-powered concierges, the value of human choice when so many small decisions could be taken away by AI, and, as explained by Stockley, whether the appeal of AI is not in finding the “best” vacation, recipe, or dinner reservation, but rather the best of anything for its user.

“It’s not there to tell you what the best chocolate chip cookie in the world is for everyone. It’s there to help you figure out what the best chocolate chip cookie is for you, on a Monday evening, when the weather’s hot, and you’re hungry.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)



April’s Patch Tuesday Brings Record Number of Fixes

9 April 2024 at 16:28

If only Patch Tuesdays came around infrequently — like total solar eclipse rare — instead of just creeping up on us each month like The Man in the Moon. Although to be fair, it would be tough for Microsoft to eclipse the number of vulnerabilities fixed in this month’s patch batch — a record 147 flaws in Windows and related software.

Yes, you read that right. Microsoft today released updates to address 147 security holes in Windows, Office, Azure, .NET Framework, Visual Studio, SQL Server, DNS Server, Windows Defender, Bitlocker, and Windows Secure Boot.

“This is the largest release from Microsoft this year and the largest since at least 2017,” said Dustin Childs, from Trend Micro’s Zero Day Initiative (ZDI). “As far as I can tell, it’s the largest Patch Tuesday release from Microsoft of all time.”

Tempering the sheer volume of this month’s patches is the middling severity of many of the bugs. Only three of April’s vulnerabilities earned Microsoft’s most-dire “critical” rating, meaning they can be abused by malware or malcontents to take remote control over unpatched systems with no help from users.

Most of the flaws that Microsoft deems “more likely to be exploited” this month are marked as “important,” which usually involve bugs that require a bit more user interaction (social engineering) but which nevertheless can result in system security bypass, compromise, and the theft of critical assets.

Ben McCarthy, lead cyber security engineer at Immersive Labs, called attention to CVE-2024-20670, an Outlook for Windows spoofing vulnerability described as being easy to exploit. It involves convincing a user to click on a malicious link in an email, which can then steal the user’s password hash and authenticate as the user in another Microsoft service.

Another interesting bug McCarthy pointed to is CVE-2024-29063, which involves hard-coded credentials in Azure’s search backend infrastructure that could be gleaned by taking advantage of Azure AI search.

“This along with many other AI attacks in recent news shows a potential new attack surface that we are just learning how to mitigate against,” McCarthy said. “Microsoft has updated their backend and notified any customers who have been affected by the credential leakage.”

CVE-2024-29988 is a weakness that allows attackers to bypass Windows SmartScreen, a technology Microsoft designed to provide additional protections for end users against phishing and malware attacks. Childs said one of ZDI’s researchers found this vulnerability being exploited in the wild, although Microsoft doesn’t currently list CVE-2024-29988 as being exploited.

“I would treat this as in the wild until Microsoft clarifies,” Childs said. “The bug itself acts much like CVE-2024-21412 – a [zero-day threat from February] that bypassed the Mark of the Web feature and allows malware to execute on a target system. Threat actors are sending exploits in a zipped file to evade EDR/NDR detection and then using this bug (and others) to bypass Mark of the Web.”

Update, 7:46 p.m. ET: A previous version of this story said there were no zero-day vulnerabilities fixed this month. BleepingComputer reports that Microsoft has since confirmed that there are actually two zero-days. One is the flaw Childs just mentioned (CVE-2024-21412), and the other is CVE-2024-26234, described as a “proxy driver spoofing” weakness.

Satnam Narang at Tenable notes that this month’s release includes fixes for two dozen flaws in Windows Secure Boot, the majority of which are considered “Exploitation Less Likely” according to Microsoft.

“However, the last time Microsoft patched a flaw in Windows Secure Boot in May 2023 had a notable impact as it was exploited in the wild and linked to the BlackLotus UEFI bootkit, which was sold on dark web forums for $5,000,” Narang said. “BlackLotus can bypass functionality called secure boot, which is designed to block malware from being able to load when booting up. While none of these Secure Boot vulnerabilities addressed this month were exploited in the wild, they serve as a reminder that flaws in Secure Boot persist, and we could see more malicious activity related to Secure Boot in the future.”

For links to individual security advisories indexed by severity, check out ZDI’s blog and the Patch Tuesday post from the SANS Internet Storm Center. Please consider backing up your data or your drive before updating, and drop a note in the comments here if you experience any issues applying these fixes.

Adobe today released nine patches tackling at least two dozen vulnerabilities in a range of software products, including Adobe After Effects, Photoshop, Commerce, InDesign, Experience Manager, Media Encoder, Bridge, Illustrator, and Adobe Animate.

KrebsOnSecurity needs to correct the record on a point mentioned at the end of March’s “Fat Patch Tuesday” post, which looked at new AI capabilities built into Adobe Acrobat that are turned on by default. Adobe has since clarified that its apps won’t use AI to auto-scan your documents, as the original language in its FAQ suggested.

“In practice, no document scanning or analysis occurs unless a user actively engages with the AI features by agreeing to the terms, opening a document, and selecting the AI Assistant or generative summary buttons for that specific document,” Adobe said earlier this month.

CEO of Data Privacy Company Onerep.com Founded Dozens of People-Search Firms

14 March 2024 at 17:13

The data privacy company Onerep.com bills itself as a Virginia-based service for helping people remove their personal information from almost 200 people-search websites. However, an investigation into the history of onerep.com finds this company is operating out of Belarus and Cyprus, and that its founder has launched dozens of people-search services over the years.

Onerep’s “Protect” service starts at $8.33 per month for individuals and $15/mo for families, and promises to remove your personal information from nearly 200 people-search sites. Onerep also markets its service to companies seeking to offer their employees the ability to have their data continuously removed from people-search sites.

A testimonial on onerep.com.

Customer case studies published on onerep.com state that it struck a deal to offer the service to employees of Permanente Medicine, which represents the doctors within the health insurance giant Kaiser Permanente. Onerep also says it has made inroads among police departments in the United States.

But a review of Onerep’s domain registration records and that of its founder reveal a different side to this company. Onerep.com says its founder and CEO is Dimitri Shelest from Minsk, Belarus, as does Shelest’s profile on LinkedIn. Historic registration records indexed by DomainTools.com say Mr. Shelest was a registrant of onerep.com who used the email address dmitrcox2@gmail.com.

A search in the data breach tracking service Constella Intelligence for the name Dimitri Shelest brings up the email address dimitri.shelest@onerep.com. Constella also finds that Dimitri Shelest from Belarus used the email address d.sh@nuwber.com, and the Belarus phone number +375-292-702786.

Nuwber.com is a people search service whose employees all appear to be from Belarus, and it is one of dozens of people-search companies that Onerep claims to target with its data-removal service. Onerep.com’s website disavows any relationship to Nuwber.com, stating quite clearly, “Please note that OneRep is not associated with Nuwber.com.”

However, there is an abundance of evidence suggesting Mr. Shelest is in fact the founder of Nuwber. Constella found that Minsk telephone number (375-292-702786) has been used multiple times in connection with the email address dmitrcox@gmail.com. Recall that Onerep.com’s domain registration records in 2018 list the email address dmitrcox2@gmail.com.

It appears Mr. Shelest sought to reinvent his online identity in 2015 by adding a “2” to his email address. The Belarus phone number tied to Nuwber.com shows up in the domain records for comversus.com, and DomainTools says this domain is tied to both dmitrcox@gmail.com and dmitrcox2@gmail.com. Other domains that mention both email addresses in their WHOIS records include careon.me, docvsdoc.com, dotcomsvdot.com, namevname.com, okanyway.com and tapanyapp.com.
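
The pivoting used throughout this investigation, treating a shared registrant email address or phone number in historic WHOIS data as a link between otherwise unrelated domains, boils down to a simple grouping exercise. The sketch below is generic and hypothetical: it assumes you already have a CSV export of historic WHOIS records from a lookup service, and the column names are illustrative rather than any particular provider’s schema.

```python
# Generic sketch of registrant pivoting over a local CSV export of historic
# WHOIS records. The column names ("domain", "registrant_email",
# "registrant_phone") are hypothetical; adjust to your data source.
import csv
from collections import defaultdict


def group_by_registrant(csv_path: str) -> dict[str, set[str]]:
    groups: dict[str, set[str]] = defaultdict(set)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").strip().lower()
            for key in ("registrant_email", "registrant_phone"):
                selector = row.get(key, "").strip().lower()
                if domain and selector:
                    groups[selector].add(domain)
    return groups


if __name__ == "__main__":
    for selector, domains in group_by_registrant("whois_export.csv").items():
        if len(domains) > 1:  # selectors shared across domains are the pivots
            print(selector, "->", sorted(domains))
```

Pivoting on a second selector is what keeps a cluster intact even when one selector changes; in this case, the shared Minsk phone number is what links dmitrcox@gmail.com and dmitrcox2@gmail.com to the same cluster of domains.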

Onerep.com CEO and founder Dimitri Shelest, as pictured on the “about” page of onerep.com.

A search in DomainTools for the email address dmitrcox@gmail.com shows it is associated with the registration of at least 179 domain names, including dozens of mostly now-defunct people-search companies targeting citizens of Argentina, Brazil, Canada, Denmark, France, Germany, Hong Kong, Israel, Italy, Japan, Latvia and Mexico, among others.

Those include nuwber.fr, a site registered in 2016 which was identical to the homepage of Nuwber.com at the time. DomainTools shows the same email and Belarus phone number are in historic registration records for nuwber.at, nuwber.ch, and nuwber.dk (all domains linked here are to their cached copies at archive.org, where available).

Nuwber.com, circa 2015. Image: Archive.org.

Update, March 21, 11:15 a.m. ET: Mr. Shelest has provided a lengthy response to the findings in this story. In summary, Shelest acknowledged maintaining an ownership stake in Nuwber, but said there was “zero cross-over or information-sharing with OneRep.” Mr. Shelest said any other old domains that may be found and associated with his name are no longer being operated by him.

“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.” The full statement is available here (PDF).

Original story:

Historic WHOIS records for onerep.com show it was registered for many years to a resident of Sioux Falls, SD for a completely unrelated site. But around Sept. 2015 the domain switched from the registrar GoDaddy.com to eNom, and the registration records were hidden behind privacy protection services. DomainTools indicates around this time onerep.com started using domain name servers from DNS provider constellix.com. Likewise, Nuwber.com first appeared in late 2015, was also registered through eNom, and also started using constellix.com for DNS at nearly the same time.

Listed on LinkedIn as a former product manager at OneRep.com between 2015 and 2018 is Dimitri Bukuyazau, who says their hometown is Warsaw, Poland. While this LinkedIn profile (linkedin.com/in/dzmitrybukuyazau) does not mention Nuwber, a search on this name in Google turns up a 2017 blog post from privacyduck.com, which laid out a number of reasons to support a conclusion that OneRep and Nuwber.com were the same company.

“Any people search profiles containing your Personally Identifiable Information that were on Nuwber.com were also mirrored identically on OneRep.com, down to the relatives’ names and address histories,” Privacyduck.com wrote. The post continued:

“Both sites offered the same immediate opt-out process. Both sites had the same generic contact and support structure. They were – and remain – the same company (even PissedConsumer.com advocates this fact: https://nuwber.pissedconsumer.com/nuwber-and-onerep-20160707878520.html).”

“Things changed in early 2016 when OneRep.com began offering privacy removal services right alongside their own open displays of your personal information. At this point when you found yourself on Nuwber.com OR OneRep.com, you would be provided with the option of opting-out your data on their site for free – but also be highly encouraged to pay them to remove it from a slew of other sites (and part of that payment was removing you from their own site, Nuwber.com, as a benefit of their service).”

Reached via LinkedIn, Mr. Bukuyazau declined to answer questions, such as whether he ever worked at Nuwber.com. However, Constella Intelligence finds two interesting email addresses for employees at nuwber.com: d.bu@nuwber.com, and d.bu+figure-eight.com@nuwber.com, which was registered under the name “Dzmitry.”

PrivacyDuck’s claims about how onerep.com appeared and behaved in the early days are not readily verifiable because the domain onerep.com has been completely excluded from the Wayback Machine at archive.org. The Wayback Machine will honor such requests if they come directly from the owner of the domain in question.

Still, Mr. Shelest’s name, phone number and email also appear in the domain registration records for a truly dizzying number of country-specific people-search services, including pplcrwlr.in, pplcrwlr.fr, pplcrwlr.dk, pplcrwlr.jp, peeepl.br.com, peeepl.in, peeepl.it and peeepl.co.uk.

The same details appear in the WHOIS registration records for the now-defunct people-search sites waatpp.de, waatp1.fr, azersab.com, and ahavoila.com, a people-search service for French citizens.

The German people-search site waatp.de.

A search on the email address dmitrcox@gmail.com suggests Mr. Shelest was previously involved in rather aggressive email marketing campaigns. In 2010, an anonymous source leaked to KrebsOnSecurity the financial and organizational records of Spamit, which at the time was easily the largest Russian-language pharmacy spam affiliate program in the world.

Spamit paid spammers a hefty commission every time someone bought male enhancement drugs from any of their spam-advertised websites. Mr. Shelest’s email address stood out because immediately after the Spamit database was leaked, KrebsOnSecurity searched all of the Spamit affiliate email addresses to determine if any of them corresponded to social media accounts at Facebook.com (at the time, Facebook allowed users to search profiles by email address).

That mapping, which was done mainly by generous graduate students at my alma mater George Mason University, revealed that dmitrcox@gmail.com was used by a Spamit affiliate, albeit not a very profitable one. That same Facebook profile for Mr. Shelest is still active, and it says he is married and living in Minsk [Update, Mar. 16: Mr. Shelest’s Facebook account is no longer active].

The Italian people-search website peeepl.it.

Scrolling down Mr. Shelest’s Facebook page to posts made more than ten years ago show him liking the Facebook profile pages for a large number of other people-search sites, including findita.com, findmedo.com, folkscan.com, huntize.com, ifindy.com, jupery.com, look2man.com, lookerun.com, manyp.com, peepull.com, perserch.com, persuer.com, pervent.com, piplenter.com, piplfind.com, piplscan.com, popopke.com, pplsorce.com, qimeo.com, scoutu2.com, search64.com, searchay.com, seekmi.com, selfabc.com, socsee.com, srching.com, toolooks.com, upearch.com, webmeek.com, and many country-code variations of viadin.ca (e.g. viadin.hk, viadin.com and viadin.de).

The people-search website popopke.com.

Domaintools.com finds that all of the domains mentioned in the last paragraph were registered to the email address dmitrcox@gmail.com.

Mr. Shelest has not responded to multiple requests for comment. KrebsOnSecurity also sought comment from onerep.com, which likewise has not responded to inquiries about its founder’s many apparent conflicts of interest. In any event, these practices would seem to contradict the goal Onerep has stated on its site: “We believe that no one should compromise personal online security and get a profit from it.”

The people-search website findmedo.com.

Max Anderson is chief growth officer at 360 Privacy, a legitimate privacy company that works to keep its clients’ data off of more than 400 data broker and people-search sites. Anderson said it is concerning to see a direct link between a data removal service and data broker websites.

“I would consider it unethical to run a company that sells people’s information, and then charge those same people to have their information removed,” Anderson said.

Last week, KrebsOnSecurity published an analysis of the people-search data broker giant Radaris, whose consumer profiles are deep enough to rival those of far more guarded data broker resources available to U.S. police departments and other law enforcement personnel.

That story revealed that the co-founders of Radaris are two native Russian brothers who operate multiple Russian-language dating services and affiliate programs. It also appears many of the Radaris founders’ businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.

KrebsOnSecurity will continue investigating the history of various consumer data brokers and people-search providers. If any readers have inside knowledge of this industry or key players within it, please consider reaching out to krebsonsecurity at gmail.com.

Update, March 15, 11:35 a.m. ET: Many readers have pointed out something that was somehow overlooked amid all this research: The Mozilla Foundation, the company that runs the Firefox Web browser, has launched a data removal service called Mozilla Monitor that bundles OneRep. That notice says Mozilla Monitor is offered as a free or paid subscription service.

“The free data breach notification service is a partnership with Have I Been Pwned (“HIBP”),” the Mozilla Foundation explains. “The automated data deletion service is a partnership with OneRep to remove personal information published on publicly available online directories and other aggregators of information about individuals (“Data Broker Sites”).”

In a statement shared with KrebsOnSecurity.com, Mozilla said they did assess OneRep’s data removal service to confirm it acts according to privacy principles advocated at Mozilla.

“We were aware of the past affiliations with the entities named in the article and were assured they had ended prior to our work together,” the statement reads. “We’re now looking into this further. We will always put the privacy and security of our customers first and will provide updates as needed.”
