
Political deepfakes are the most popular way to misuse AI

(Image credit: Arkadiusz Warguła via Getty)

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video, and audio of people was almost twice as common as the next most frequent misuse of generative AI tools: using text-based tools, such as chatbots, to generate misinformation for posting online.

The most common goal of actors misusing generative AI was to shape or influence public opinion, the analysis, conducted with the search group’s research and development unit Jigsaw, found. That accounted for 27 percent of uses, feeding into fears over how deepfakes might influence elections globally this year.


New Threat Group Void Arachne Targets Chinese-Speaking Audience; Promotes AI Deepfake Misuse

A new threat actor group called Void Arachne is conducting a malware campaign targeting Chinese-speaking users. The group is distributing malicious MSI installer files bundled with legitimate software like AI tools, Chinese language packs, and virtual private network (VPN) clients. During installation, these files also covertly install the Winos 4.0 backdoor, which can fully compromise systems.

Void Arachne Tactics

Researchers from Trend Micro discovered that the Void Arachne group employs multiple techniques to distribute malicious installers, including search engine optimization (SEO) poisoning and posting links on Chinese-language Telegram channels.
  • SEO Poisoning: The group set up websites posing as legitimate software download sites. Through SEO poisoning, they pushed these sites to rank highly on search engines for common Chinese software keywords. The sites host MSI installer files containing Winos malware bundled with software like Chrome, language packs, and VPNs. Victims unintentionally infect themselves with Winos, while believing that they are only installing intended software.
  • Targeting VPNs: Void Arachne frequently targets Chinese VPN software in its installers and Telegram posts. Exploiting interest in VPNs is an effective infection tactic, as VPN usage is high among Chinese internet users due to government censorship. [Image: Chinese VPN promotion – Source: trendmicro.com]
  • Telegram Channels: In addition to SEO poisoning, Void Arachne shared malicious installers in Telegram channels focused on Chinese language and VPN topics. Channels with tens of thousands of users pinned posts with infected language packs and AI software installers, increasing exposure.
  • Deepfake Pornography: A concerning discovery was the group promoting nudifier apps generating nonconsensual deepfake pornography. They advertised the ability to undress photos of classmates and colleagues, encouraging harassment and sextortion. Infected nudifier installers were pinned prominently in their Telegram channels.
  • Face/Voice Swapping Apps: Void Arachne also advertised voice changing and face swapping apps enabling deception campaigns like virtual kidnappings. Attackers can use these apps to impersonate victims and pressure their families for ransom. As with nudifiers, infected voice/face swapper installers were shared widely on Telegram.

Winos 4.0 C&C Framework

The threat actors behind the campaign ultimately aim to install the Winos backdoor on compromised systems. Winos is a sophisticated Windows backdoor written in C++ that can fully take over infected machines. The infection begins with a stager module that decrypts malware configurations and downloads the main Winos payload. Command-and-control (C&C) communications are encrypted using generated session keys and a rolling XOR algorithm. The stager then stores the full Winos module in the Windows registry and executes shellcode to launch it on the affected system.

[Image: Winos infection chain – Source: trendmicro.com]

Winos grants remote access, keylogging, webcam control, microphone recording, and distributed denial-of-service (DDoS) capabilities. It also performs system reconnaissance such as registry checks, file searches, and process injection. The malware connects to a C&C server to receive further modules/plugins that expand its functionality. Several of these external plugins were observed collecting saved passwords from programs like Chrome and QQ, deleting antivirus software, and adding themselves to startup folders.
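The report describes the stager's traffic encryption only at a high level (session keys plus a "rolling XOR"), and the exact routine is not published. As a rough illustration of the general technique, the Python sketch below implements one common rolling-XOR variant, in which each key byte is overwritten by the ciphertext byte it just produced, so the keystream evolves with the data; the function names and key are illustrative, not taken from the malware:

```python
def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Rolling XOR: each ciphertext byte replaces the key byte that produced it."""
    out = bytearray()
    k = bytearray(key)  # working copy of the session key
    for i, p in enumerate(plaintext):
        c = p ^ k[i % len(k)]
        out.append(c)
        k[i % len(k)] = c  # roll the key forward with the ciphertext byte
    return bytes(out)

def xor_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Inverse operation: the key rolls on the same ciphertext bytes."""
    out = bytearray()
    k = bytearray(key)
    for i, c in enumerate(ciphertext):
        p = c ^ k[i % len(k)]
        out.append(p)
        k[i % len(k)] = c  # must match the encryption side's key schedule
    return bytes(out)

blob = xor_encrypt(b"c2.example.invalid:443", b"session-key")
assert xor_decrypt(blob, b"session-key") == b"c2.example.invalid:443"
```

Because the keystream depends on prior ciphertext, identical plaintext blocks encrypt differently after the first key cycle, which defeats the naive frequency analysis that breaks a plain repeating-key XOR.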

Concerning Trend of AI Misuse and Deepfakes

Void Arachne demonstrates technical sophistication and knowledge of effective infection tactics through their usage of SEO poisoning, Telegram channels, AI deepfakes, and voice/face swapping apps. One particularly concerning trend observed in the Void Arachne campaign is the mass proliferation of nudifier applications that use AI to create nonconsensual deepfake pornography. These images and videos are often used in sextortion schemes for further abuse, victim harassment, and financial gain. An English translation of a message advertising the usage of the nudifier AI uses the word "classmate," suggesting that one target market is minors:
Just have appropriate entertainment and satisfy your own lustful desires. Do not send it to the other party or harass the other party. Once you call the police, you will be in constant trouble! AI takes off clothes, you give me photos and I will make pictures for you. Do you want to see the female classmate you yearn for, the female colleague you have a crush on, the relatives and friends you eat and live with at home? Do you want to see them naked? Now you can realize your dream, you can see them naked and lustful for a pack of cigarette money.
[Image: nudifier advertisement – Source: trendmicro.com]

Additionally, the threat actors have advertised AI technologies that could be used for virtual kidnapping, a novel deception scheme that leverages AI voice-altering technology to pressure victims into paying a ransom. The promotion of this technology for deepfake nudes and virtual kidnapping is the latest example of the danger of AI misuse.

Runway’s latest AI video generator brings giant cotton candy monsters to life

Screen capture of a Runway Gen-3 Alpha video generated with the prompt "A giant humanoid, made of fluffy blue cotton candy, stomping on the ground, and roaring to the sky, clear blue sky behind them." (credit: Runway)

On Sunday, Runway announced a new AI video synthesis model called Gen-3 Alpha that's still under development, but it appears to create video of similar quality to OpenAI's Sora, which debuted earlier this year (and has also not yet been released). It can generate novel, high-definition video from text prompts that range from realistic humans to surrealistic monsters stomping the countryside.

Unlike Runway's previous best model from June 2023, which could only create two-second-long clips, Gen-3 Alpha can reportedly create 10-second-long video segments of people, places, and things that have a consistency and coherency that easily surpasses Gen-2. If 10 seconds sounds short compared to Sora's full minute of video, consider that the company is working with a shoestring budget of compute compared to more lavishly funded OpenAI—and actually has a history of shipping video generation capability to commercial users.

Gen-3 Alpha does not generate audio to accompany the video clips, and it's highly likely that temporally coherent generations (those that keep a character consistent over time) are dependent on similar high-quality training material. But Runway's improvement in visual fidelity over the past year is difficult to ignore.


AI trained on photos from kids’ entire childhood without their consent

(Image credit: RicardoImagen | E+)

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW's report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed "less than 0.0001 percent" of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.


Generative AI and Data Privacy: Navigating the Complex Landscape

By Neelesh Kripalani, Chief Technology Officer, Clover Infotech

Generative AI, which includes technologies such as deep learning, natural language processing, and speech recognition for generating text, images, and audio, is transforming various sectors from entertainment to healthcare. However, its rapid advancement has raised significant concerns about data privacy. To navigate this intricate landscape, it is crucial to understand the intersection of AI capabilities, ethical considerations, legal frameworks, and technological safeguards.

Data Privacy Challenges Raised by Generative AI

  • Failure to secure data during collection or processing – Generative AI raises significant data privacy concerns because it requires vast amounts of diverse data, often including sensitive personal information that is collected without explicit consent and is difficult to anonymize effectively. Model inversion attacks and data leakage can expose private information, while biases in training data can lead to unfair or discriminatory outputs.
  • The risk of generated content – The ability of generative AI to produce highly realistic fake content raises serious concerns about its potential for misuse. Whether creating convincing deepfake videos or generating fabricated text and images, there is a significant risk of this content being used for impersonation, spreading disinformation, or damaging individuals' reputations.
  • Lack of accountability and transparency – Because GenAI models operate through complex layers of computation, it is difficult to gain visibility into how these systems arrive at their outputs, or to trace the specific steps and factors behind a particular decision. This hinders trust and accountability, complicates the tracing of data usage, and makes it harder to ensure compliance with data privacy regulations. Improving this requires better explainability, traceability, and adherence to regulatory frameworks and ethical guidelines.
  • Lack of fairness and ethical considerations – Generative AI models can perpetuate or even exacerbate biases present in their training data. This can lead to unfair treatment or misrepresentation of certain groups, raising ethical issues.

Here’s How Enterprises Can Navigate These Challenges

  • Understand and map the data flow – Enterprises must maintain a comprehensive inventory of the data their GenAI systems process, including data sources, types, and destinations. They should also create a detailed data-flow map to understand how data moves through their systems.
  • Implement strong data governance – Following the data-minimization principle, enterprises should collect, process, and retain only the minimum amount of personal data necessary to fulfill a specific purpose. They should also develop and enforce robust data privacy policies and procedures that comply with relevant regulations.
  • Ensure data anonymization and pseudonymization – Techniques such as anonymization and pseudonymization reduce the chances of data re-identification.
  • Strengthen security measures – Implement encryption for data at rest and in transit, access controls to protect against unauthorized access, and regular monitoring and auditing to detect and respond to potential privacy breaches.

To summarize, organizations must begin by complying with the latest data protection laws and practices and strive to use data responsibly and ethically. They should also regularly train employees on data privacy best practices to manage the challenges posed by generative AI while leveraging its benefits.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.
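The article recommends pseudonymization without prescribing a mechanism. One common approach is a keyed hash (HMAC): identifiers are replaced with tokens that stay consistent across records, so data remains linkable for analytics while the raw values are hidden and cannot be recomputed without the secret key. The sketch below uses Python's standard library; the field names and key handling are illustrative assumptions, not a production design:

```python
import hmac
import hashlib

def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace an identifier with a keyed-hash token.

    The same input always maps to the same token (records stay joinable),
    but reversing the mapping requires the secret key, unlike a plain hash,
    which is vulnerable to dictionary attacks on low-entropy identifiers.
    """
    digest = hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated token for readability

# Illustrative record: only the direct identifier is tokenized;
# coarse attributes (age band) are kept for analysis.
key = b"store-me-in-a-key-vault-and-rotate"
record = {"email": "user@example.com", "age_band": "30-39"}
record["email"] = pseudonymize(record["email"], key)
```

Truncating the digest trades collision resistance for shorter tokens; whether that trade-off is acceptable depends on the dataset size, and the key itself must be governed as strictly as the data it protects.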

SecOps Teams Shift Strategy as AI-Powered Threats, Deepfakes Evolve 

An escalation in AI-based attacks requires security operations leaders to change cybersecurity strategies to defend against them.

A study cited in the report found 61% of respondents had experienced a deepfake incident in the past year, with 75% of those attacks impersonating CEOs or other C-suite members.

The post SecOps Teams Shift Strategy as AI-Powered Threats, Deepfakes Evolve  appeared first on Security Boulevard.
