
OpenAI’s ChatGPT ‘Voice Mode’ Doesn’t Meet Safety Standards; Rollout Pushed to July


Experts are raising eyebrows after OpenAI announced a one-month delay in the rollout of its highly anticipated “Voice Mode” feature for ChatGPT, citing safety concerns. The company said it needs more time to ensure the model can “detect and refuse certain content.”
“We’re improving the model’s ability to detect and refuse certain content. We’re also working on enhancing the user experience and scaling our infrastructure to support millions of users while maintaining real-time responses.” - OpenAI
The delay comes a month after OpenAI announced a new safety and security committee to oversee issues related to the company's future projects and operations. It is unclear whether the postponement was suggested by the committee or by internal stakeholders.

Features of ChatGPT’s ‘Voice Mode’

OpenAI unveiled its GPT-4o system in May, boasting significant advancements in human-computer interaction. “GPT-4o (‘o’ for ‘omni’) is a step towards much more natural human-computer interaction,” OpenAI said at the time. The omni model can respond to audio inputs in an average of 320 milliseconds, similar to human conversational response times. Voice Mode’s other salient features promise real-time conversations with human-like emotional responses, which also raises concerns about potential manipulation and the spread of misinformation. The May announcement offered a glimpse of the model’s ability to understand nuances like tone, non-verbal cues, and background noise, further blurring the line between human and machine interaction. While OpenAI plans an alpha release for a limited group of paid subscribers in July, the broader rollout remains uncertain. The company emphasizes its commitment to a “high safety and reliability” standard, but the exact timeline for wider access hinges on user feedback.

The ‘Sky’ of Controversy Surrounding ‘Voice Mode’

The delay in rolling out ChatGPT’s Voice Mode follows the controversy sparked by actress Scarlett Johansson, who accused OpenAI of using her voice without permission in demonstrations of the technology. OpenAI refuted the claim, stating that the controversial voice of “Sky” – one of the five voices that Voice Mode offers for responses – belonged to a voice artist and not Johansson. The company said an internal team reviewed the voices it received from over 400 artists, from a product and research perspective, and after careful consideration zeroed in on five of them: Breeze, Cove, Ember, Juniper, and Sky. OpenAI did, however, confirm that CEO Sam Altman had reached out to Johansson about integrating her voice.
“On September 11, 2023, Sam spoke with Ms. Johansson and her team to discuss her potential involvement as a sixth voice actor for ChatGPT, along with the other five voices, including Sky. She politely declined the opportunity one week later through her agent.” - OpenAI
Altman made a final attempt to onboard the Hollywood star this May, when he again contacted her team to inform them of the GPT-4o launch and ask whether she might reconsider joining as a future additional voice in ChatGPT. Instead, after the Sky demo aired, Johansson threatened to sue the company for “stealing” her voice. Under pressure from her lawyers, OpenAI removed the Sky voice sample on May 19.
“The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.” – Sam Altman
Although the issue appears to have been resolved for the time being, the dispute between Johansson and Altman brought to the fore the ethical considerations surrounding deepfakes and synthetic media.

Likely Delays in Apple AI and OpenAI Partnership Too

If the technical issues and the Sky voice controversy weren’t enough, Apple’s recent brush with EU regulators adds another layer of complication to OpenAI’s woes, casting a shadow over the future of ChatGPT integration into Apple devices. Announced earlier this month, the partnership aimed to leverage OpenAI’s technology in the Cupertino tech giant’s “Apple Intelligence” system. However, with Apple facing potential regulatory roadblocks under the EU’s Digital Markets Act (DMA), the integration’s fate remains unclear. This confluence of factors – safety concerns, potential for misuse, and regulatory hurdles – paints a complex picture for OpenAI’s “Voice Mode.” The cybersecurity and regulatory community will undoubtedly be watching closely as the technology evolves, with a keen eye on potential security vulnerabilities and the implications for responsible AI development.

New Threat Group Void Arachne Targets Chinese-Speaking Audience; Promotes AI Deepfakes and Misuse

By: Alan J
19 June 2024 at 16:35


A new threat actor group called Void Arachne is conducting a malware campaign targeting Chinese-speaking users. The group is distributing malicious MSI installer files bundled with legitimate software like AI tools, Chinese language packs, and virtual private network (VPN) clients. During installation, these files also covertly install the Winos 4.0 backdoor, which can fully compromise systems.
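Because the campaign hinges on trojanized installers that look legitimate, one standard precaution is to compare a downloaded installer’s SHA-256 hash against the value published on the vendor’s official site before running it. The Python sketch below illustrates that check; the file name and expected hash are placeholders, not artifacts from this campaign.

```python
# Minimal sketch: verify a downloaded installer's SHA-256 hash against the
# vendor-published value before executing it. Path and hash are placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large MSI installers don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

installer = Path("vpn-client-setup.msi")              # hypothetical download
published = "replace-with-vendor-published-sha256"    # from the official site

actual = sha256_of(installer)
if actual != published:
    print(f"HASH MISMATCH ({actual}): do not run this installer.")
else:
    print("Hash matches the vendor-published value.")
```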

Void Arachne Tactics

Researchers from Trend Micro discovered that the Void Arachne group employs multiple techniques to distribute malicious installers, including search engine optimization (SEO) poisoning and posting links on Chinese-language Telegram channels.
  • SEO Poisoning: The group set up websites posing as legitimate software download sites and used SEO poisoning to push them to rank highly on search engines for common Chinese software keywords. The sites host MSI installer files that bundle Winos malware with software like Chrome, language packs, and VPNs. Victims unintentionally infect themselves with Winos while believing they are installing only the intended software.
  • Targeting VPNs: Void Arachne frequently targets Chinese VPN software in their installers and Telegram posts. Exploiting interest in VPNs is an effective infection tactic, as VPN usage is high among Chinese internet users due to government censorship.
[Image: Void Arachne Chinese VPN. Source: trendmicro.com]
  • Telegram Channels: In addition to SEO poisoning, Void Arachne shared malicious installers in Telegram channels focused on Chinese language and VPN topics. Channels with tens of thousands of users pinned posts with infected language packs and AI software installers, increasing exposure.
  • Deepfake Pornography: A concerning discovery was the group promoting nudifier apps generating nonconsensual deepfake pornography. They advertised the ability to undress photos of classmates and colleagues, encouraging harassment and sextortion. Infected nudifier installers were pinned prominently in their Telegram channels.
  • Face/Voice Swapping Apps: Void Arachne also advertised voice changing and face swapping apps enabling deception campaigns like virtual kidnappings. Attackers can use these apps to impersonate victims and pressure their families for ransom. As with nudifiers, infected voice/face swapper installers were shared widely on Telegram.

Winos 4.0 C&C Framework

The threat actors behind the campaign ultimately aim to install the Winos backdoor on compromised systems. Winos is a sophisticated Windows backdoor written in C++ that can fully take over infected machines.

The initial infection begins with a stager module that decrypts malware configurations and downloads the main Winos payload. C&C communications are encrypted using generated session keys and a rolling XOR algorithm. The stager module then stores the full Winos module in the Windows registry and executes shellcode to launch it on affected systems.

[Image: Void Arachne Winos. Source: trendmicro.com]

Winos grants remote access, keylogging, webcam control, microphone recording, and distributed denial-of-service (DDoS) capabilities. It also performs system reconnaissance such as registry checks, file searches, and process injection. The malware connects to a command-and-control server to receive further modules/plugins that expand its functionality. Several of these external plugins were observed collecting saved passwords from programs like Chrome and QQ, deleting antivirus software, and adding themselves to startup folders.
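Trend Micro’s analysis names a rolling XOR algorithm in Winos’s encrypted C&C traffic. The Python sketch below illustrates the general rolling-XOR technique only: the specific key schedule shown (incrementing each key byte after use) is an assumption chosen for demonstration, and the actual Winos scheme may differ.

```python
# Illustrative rolling XOR: each data byte is XORed with a session-key byte,
# and the key byte is mutated ("rolled") after use, so repeated plaintext
# bytes produce different ciphertext. The key schedule here is hypothetical.

def rolling_xor(data: bytes, session_key: bytes) -> bytes:
    key = bytearray(session_key)
    out = bytearray()
    for i, b in enumerate(data):
        j = i % len(key)
        out.append(b ^ key[j])
        key[j] = (key[j] + 1) & 0xFF  # roll the key byte after each use
    return bytes(out)

# Because the key stream evolves independently of the data, the same call
# both encrypts and decrypts:
msg = b"beacon: host=WIN10-PC user=admin"
ct = rolling_xor(msg, b"\x13\x37\xc0\xde")
assert rolling_xor(ct, b"\x13\x37\xc0\xde") == msg
```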

Concerning Trend of AI Misuse and Deepfakes

Void Arachne demonstrates technical sophistication and knowledge of effective infection tactics through its use of SEO poisoning, Telegram channels, AI deepfakes, and voice/face-swapping apps. One particularly concerning trend observed in the campaign is the mass proliferation of nudifier applications that use AI to create nonconsensual deepfake pornography. These images and videos are often used in sextortion schemes for further abuse, victim harassment, and financial gain. An English translation of a message advertising the nudifier AI uses the word "classmate," suggesting that one target market is minors:
Just have appropriate entertainment and satisfy your own lustful desires. Do not send it to the other party or harass the other party. Once you call the police, you will be in constant trouble! AI takes off clothes, you give me photos and I will make pictures for you. Do you want to see the female classmate you yearn for, the female colleague you have a crush on, the relatives and friends you eat and live with at home? Do you want to see them naked? Now you can realize your dream, you can see them naked and lustful for a pack of cigarette money.
[caption id="attachment_77953" align="alignnone" width="437"] Source: trendmicro.com[/caption] Additionally, the threat actors have advertised AI technologies that could be used for virtual kidnapping, a novel deception campaign that leverages AI voice-alternating technology to pressure victims into paying ransom. The promotion of this technology for deepfake nudes and virtual kidnapping is the latest example of the danger of AI misuse.  

Cyberattack Hits Dubai: Daixin Team Claims to Steal Confidential Data, Residents at Risk


The city of Dubai, known for its affluence and wealthy residents, has allegedly been hit by a ransomware attack claimed by the cybercriminal group Daixin Team. The group announced the City of Dubai ransomware attack on its dark web leak site on Wednesday, claiming to have stolen between 60 and 80 GB of data from the Government of Dubai’s network systems. According to the Daixin Team’s post, the stolen data includes ID cards, passports, and other personally identifiable information (PII). Although the group noted that the 33,712 files have not been fully analyzed or dumped on the leak site, the potential exposure of such sensitive information is concerning. Dubai, a city with over three million residents and the highest concentration of millionaires globally, presents a rich target for cybercriminals.

[Image: City of Dubai ransomware attack. Source: Dark Web]

Potential Impact of the City of Dubai Ransomware Attack

The stolen data reportedly contains extensive personal information, such as full names, dates of birth, nationalities, marital statuses, job descriptions, supervisor names, housing statuses, phone numbers, addresses, vehicle information, primary contacts, and language preferences. Additionally, the databases appear to include business records, hotel records, land ownership details, HR records, and corporate contacts.

[Image: Daixin Team. Source: Dark Web]

Given that over 75% of Dubai’s residents are expatriates, the stolen data provides a treasure trove of information that could be used for targeted spear-phishing attacks, vishing attacks, identity theft, and other malicious activities. The city’s status as a playground for the wealthy, including 212 centi-millionaires and 15 billionaires, further heightens the risk of targeted attacks.

Daixin Team: A Persistent Threat

The Daixin Team, a Russian-speaking ransomware and data extortion group, has been active since at least June 2022. Known primarily for its cyberattacks on the healthcare sector, Daixin has recently expanded its operations to other industries, employing sophisticated hacking techniques. A 2022 report by the US Cybersecurity and Infrastructure Security Agency (CISA) highlighted Daixin Team’s focus on the healthcare sector in the United States. However, the group has also targeted other sectors, including the hospitality industry. Recently, Daixin claimed responsibility for a cyberattack on Omni Hotels & Resorts, exfiltrating sensitive data including records of all visitors dating back to 2017. In another notable case, Bluewater Health, a prominent hospital network in Ontario, Canada, fell victim to a cyberattack attributed to Daixin Team. The attack affected several hospitals, including Windsor Regional Hospital, Erie Shores Healthcare, Chatham-Kent Health, and Hôtel-Dieu Grace Healthcare.

The Government of Dubai has yet to release an official statement regarding the ransomware attack. However, the Dubai government’s official websites remained fully functional when accessed, with no visible signs of disruption, leaving the alleged ransomware attack unverified.

Apple Launches ‘Private Cloud Compute’ Along with Apple Intelligence AI

By: Alan J
11 June 2024 at 19:14


In a bold attempt to redefine cloud security and privacy standards, Apple has unveiled Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed to back its new Apple Intelligence with safety and transparency while integrating Apple devices with the cloud. The move responds to widespread concerns about combining artificial intelligence with cloud technology.

Private Cloud Compute Aims to Secure Cloud AI Processing

Apple has stated that its new Private Cloud Compute (PCC) is designed to enforce privacy and security standards over AI processing of private information. “For the first time ever, Private Cloud Compute brings the same level of security and privacy that our users expect from their Apple devices to the cloud,” said an Apple spokesperson.

[Image: Private Cloud Compute Apple Intelligence. Source: security.apple.com]

At the heart of PCC is Apple’s stated commitment to on-device processing. “When Apple is responsible for user data in the cloud, we protect it with state-of-the-art security in our services,” the spokesperson explained. “But for the most sensitive data, we believe end-to-end encryption is our most powerful defense.” Despite this commitment, Apple has stated that more sophisticated AI requests require Apple Intelligence to leverage larger, more complex models in the cloud. This presented a challenge to the company, as traditional cloud AI security models were found lacking in meeting privacy expectations. Apple stated that PCC is designed with several key features to ensure the security and privacy of user data, claiming the following implementations (a conceptual sketch of the first property appears after the list):
  • Stateless computation: PCC processes user data only for the purpose of fulfilling the user's request, and then erases the data.
  • Enforceable guarantees: PCC is designed to provide technical enforcement for the privacy of user data during processing.
  • No privileged access: PCC does not allow Apple or any third party to access user data without the user's consent.
  • Non-targetability: PCC is designed to prevent targeted attacks on specific users.
  • Verifiable transparency: PCC provides transparency and accountability, allowing users to verify that their data is being processed securely and privately.
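The bullet points above describe properties rather than mechanisms. As a purely conceptual illustration of the first property, “stateless computation,” the Python sketch below shows a request handler that keeps user data only in local scope and persists nothing; the StubModel class and handler names are hypothetical stand-ins, not Apple’s implementation.

```python
# Conceptual sketch of "stateless computation": the request exists only in
# local scope, is never logged or written to disk, and no copy survives the
# call. This illustrates the pattern only; it is NOT Apple's PCC code.

class StubModel:
    """Hypothetical stand-in for the cloud foundation model."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

def handle_inference(request_payload: bytes, model: StubModel) -> bytes:
    # The decrypted request lives only in this stack frame: no logging,
    # no persistence, no session state keyed to the user.
    prompt = request_payload.decode("utf-8")
    response = model.generate(prompt)
    return response.encode("utf-8")
    # After return, no reference to the user's data remains reachable; a
    # production system would also zero buffers and disable crash dumps.

if __name__ == "__main__":
    print(handle_inference(b"summarize my notes", StubModel()).decode())
```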

Apple Invites Experts to Test Standards; Online Reactions Mixed

At Apple's annual developer conference this week, Apple CEO Tim Cook described Apple Intelligence as a "personal intelligence system" that could understand and contextualize personal data to deliver results that are "incredibly useful and relevant," making "devices even more useful and delightful." Apple Intelligence mines and processes data across apps, software, and services on Apple devices. This data includes emails, images, messages, texts, documents, audio files, videos, contacts, calendars, Siri conversations, online preferences, and past search history. The new PCC system attempts to ease consumer privacy and safety concerns. In its description of 'Verifiable transparency,' Apple stated:
"Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises. We already have an earlier requirement for our guarantees to be enforceable. Hypothetically, then, if security researchers had sufficient access to the system, they would be able to verify the guarantees."
However, despite Apple's assurances, the announcement of Apple Intelligence drew mixed reactions online, with some already likening it to Microsoft's Recall. In reaction to Apple's announcement, Elon Musk took to X to announce that Apple devices may be banned from his companies, citing the integration of OpenAI as an 'unacceptable security violation.' Others have also raised questions about the information that might be sent to OpenAI.

[Images: Reactions to Private Cloud Compute and Apple Intelligence. Source: X.com]

According to Apple's statements, requests made on its devices are not stored by OpenAI, and users’ IP addresses are obscured. Apple stated that it would also add “support for other AI models in the future.” Andy Wu, an associate professor at Harvard Business School who researches the use of AI by tech companies, highlighted the challenge of running powerful generative AI models while limiting their tendency to fabricate information: “Deploying the technology today requires incurring those risks, and doing so would be at odds with Apple’s traditional inclination toward offering polished products that it has full control over.”