
Cybersecurity Experts Warn of Rising Malware Threats from Sophisticated Social Engineering Tactics

TA571 and ClearFake Campaign 

Cybersecurity researchers have uncovered a disturbing trend in malware delivery: sophisticated social engineering techniques that exploit users' trust in, and familiarity with, PowerShell to compromise systems. Two threat clusters in particular, TA571 and the ClearFake campaign, have been observed leveraging this approach to spread malware. According to researchers, the threat actors associated with TA571 and the ClearFake cluster have been actively using a novel technique that manipulates users into copying and pasting malicious PowerShell scripts under the guise of resolving a legitimate issue.

Understanding the TA571 and ClearFake Campaign 

Example of a ClearFake attack chain (Source: Proofpoint)

The TA571 campaign, first observed in March 2024, distributed emails containing HTML attachments that mimic legitimate Microsoft Word error messages. These messages coerce users into executing PowerShell scripts supposedly aimed at fixing document viewing issues. Similarly, the ClearFake campaign, identified in April 2024, employs fake browser update prompts on compromised websites. These prompts instruct users to run PowerShell scripts to install what appear to be necessary security certificates, says Proofpoint.

Upon interacting with the malicious prompts, users unwittingly copy PowerShell commands to their clipboard. Subsequent instructions guide them to paste and execute those commands in a PowerShell terminal or via the Windows Run dialog. Once executed, the scripts initiate a chain of events leading to the download and execution of malware payloads such as DarkGate, Matanbuchus, and NetSupport RAT.

The complexity of these attacks is compounded by their ability to evade traditional detection methods. Malicious scripts are often concealed within double-Base64-encoded HTML elements or obscured in JavaScript, making them challenging to identify and block preemptively.
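
To make the layered encoding concrete, the sketch below is a minimal, hypothetical Python example of the kind of triage check a defender might run over a suspicious page. It is not Proofpoint's detection logic; the function names and the sample payload are invented for illustration. It simply pulls long Base64-looking tokens out of HTML and keeps the ones that decode cleanly through two rounds of Base64 into text that mentions PowerShell.

# Minimal triage sketch (hypothetical, not Proofpoint's tooling): flag strings in
# an HTML page that decode through two rounds of Base64 into PowerShell-related text.
import base64
import re

def double_base64_decode(candidate: str) -> str | None:
    """Return the plaintext if the string survives two Base64 decodes, else None."""
    data = candidate.encode()
    try:
        for _ in range(2):
            data = base64.b64decode(data, validate=True)
        return data.decode("utf-8")
    except Exception:
        return None

def find_suspicious_tokens(html: str) -> list[str]:
    """Collect long Base64-looking tokens whose double-decoded form mentions PowerShell."""
    hits = []
    for token in re.findall(r"[A-Za-z0-9+/=]{40,}", html):
        decoded = double_base64_decode(token)
        if decoded and "powershell" in decoded.lower():
            hits.append(decoded)
    return hits

# Hypothetical sample: a command Base64-encoded twice and embedded in an HTML attribute.
inner = base64.b64encode(b"powershell -nop -w hidden ...").decode()
outer = base64.b64encode(inner.encode()).decode()
print(find_suspicious_tokens(f"<div data-payload='{outer}'></div>"))

Real campaigns will vary the encoding and staging, so a check like this is only a starting point for analysis, not a blocking control.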

Attack Variants, Evolution, and Recommendations

Since their initial observations, Proofpoint has noted the evolution of these techniques. TA571, for instance, has diversified its lures, sometimes directing victims to use the Windows Run dialog for script execution instead of a PowerShell terminal. Meanwhile, ClearFake has incorporated blockchain-based techniques such as "EtherHiding" to host malicious scripts, adding a further layer of obfuscation.

These developments highlight the critical importance of user education and stronger cybersecurity measures within organizations. Employees must be trained to recognize suspicious messages and actions that prompt the execution of PowerShell scripts from unknown sources. Organizations should also deploy advanced threat detection and blocking mechanisms capable of identifying malicious activity embedded within seemingly legitimate web pages or email attachments.

While the TA571 and ClearFake campaigns represent distinct threat actors with varying objectives, their use of advanced social engineering and PowerShell exploitation techniques demands heightened vigilance from organizations worldwide. By staying informed and implementing better cybersecurity practices, businesses can better defend against these threats.

AI and the Indian Election

13 June 2024 at 07:02

As India concluded the world’s largest election on June 5, 2024, with over 640 million votes counted, observers could assess how the various parties and factions used artificial intelligence technologies—and what lessons that holds for the rest of the world.

The campaigns made extensive use of AI, including deepfake impersonations of candidates, celebrities and dead politicians. By some estimates, millions of Indian voters viewed deepfakes.

But, despite fears of widespread disinformation, for the most part the campaigns, candidates and activists used AI constructively in the election. They used AI for typical political activities, including mudslinging, but primarily to better connect with voters.

Deepfakes without the deception

Political parties in India spent an estimated US$50 million on authorized AI-generated content for targeted communication with their constituencies this election cycle. And it was largely successful.

Indian political strategists have long recognized the influence of personality and emotion on their constituents, and they started using AI to bolster their messaging. Young and upcoming AI companies like The Indian Deepfaker, which started out serving the entertainment industry, quickly responded to this growing demand for AI-generated campaign material.

In January, Muthuvel Karunanidhi, former chief minister of the southern state of Tamil Nadu for two decades, appeared via video at his party’s youth wing conference. He wore his signature yellow scarf, white shirt, dark glasses and had his familiar stance—head slightly bent sideways. But Karunanidhi died in 2018. His party authorized the deepfake.

In February, the All-India Anna Dravidian Progressive Federation party’s official X account posted an audio clip of Jayaram Jayalalithaa, the iconic superstar of Tamil politics colloquially called “Amma” or “Mother.” Jayalalithaa died in 2016.

Meanwhile, voters received calls from their local representatives to discuss local issues—except the leader on the other end of the phone was an AI impersonation. Bharatiya Janata Party (BJP) workers like Shakti Singh Rathore have been frequenting AI startups to send specific voters personalized videos over WhatsApp about the government benefits they received, and to ask for their vote.

Multilingual boost

Deepfakes were not the only manifestation of AI in the Indian elections. Long before the election began, Indian Prime Minister Narendra Modi addressed a tightly packed crowd celebrating links between the state of Tamil Nadu in the south of India and the city of Varanasi in the northern state of Uttar Pradesh. Instructing his audience to put on earphones, Modi proudly announced the launch of his “new AI technology” as his Hindi speech was translated to Tamil in real time.

In a country with 22 official languages and almost 780 unofficial recorded languages, the BJP adopted AI tools to make Modi’s personality accessible to voters in regions where Hindi is not easily understood. Since 2022, Modi and his BJP have been using the AI-powered tool Bhashini, embedded in the NaMo mobile app, to translate Modi’s speeches with voiceovers in Telugu, Tamil, Malayalam, Kannada, Odia, Bengali, Marathi and Punjabi.

As part of their demos, some AI companies circulated their own viral versions of Modi’s famous monthly radio show “Mann Ki Baat” (which loosely translates to “From the Heart”), voice-cloned into regional languages.

Adversarial uses

Indian political parties doubled down on online trolling, using AI to augment their ongoing meme wars. Early in the election season, the Indian National Congress released a short clip to its 6 million followers on Instagram, taking the title track from a new Hindi music album named “Chor” (thief). The video grafted Modi’s digital likeness onto the lead singer and cloned his voice with reworked lyrics critiquing his close ties to Indian business tycoons.

The BJP retaliated with its own video, on its 7-million-follower Instagram account, featuring a supercut of Modi campaigning on the streets, mixed with clips of his supporters but set to unique music. It was an old patriotic Hindi song sung by famous singer Mahendra Kapoor, who passed away in 2008 but was resurrected with AI voice cloning.

Modi himself quote-tweeted an AI-created video of him dancing—a common meme that alters footage of rapper Lil Yachty on stage—commenting “such creativity in peak poll season is truly a delight.”

In some cases, the violent rhetoric in Modi’s campaign that put Muslims at risk and incited violence was conveyed using generative AI tools, but the harm can be traced back to the hateful rhetoric itself and not necessarily the AI tools used to spread it.

The Indian experience

India is an early adopter, and the country’s experiments with AI serve as an illustration of what the rest of the world can expect in future elections. The technology’s ability to produce nonconsensual deepfakes of anyone can make it harder to tell truth from fiction, but its consensual uses are likely to make democracy more accessible.

The Indian election’s embrace of AI that began with entertainment, political meme wars, emotional appeals to people, resurrected politicians and persuasion through personalized phone calls to voters has opened a pathway for the role of AI in participatory democracy.

The surprise outcome of the election, with the BJP failing to win its predicted parliamentary majority and India returning to a deeply competitive political system, especially highlights the possibility for AI to play a positive role in deliberative democracy and representative governance.

Lessons for the world’s democracies

It’s a goal of any political party or candidate in a democracy to have more targeted touch points with their constituents. The Indian elections have shown a unique attempt at using AI for more individualized communication across linguistically and ethnically diverse constituencies, and making their messages more accessible, especially to rural, low-income populations.

AI and the future of participatory democracy could make constituent communication not just personalized but also a dialogue, so voters can share their demands and experiences directly with their representatives—at speed and scale.

India can be an example of taking its recent fluency in AI-assisted party-to-people communications and moving it beyond politics. The government is already using these platforms to provide government services to citizens in their native languages.

If used safely and ethically, this technology could be an opportunity for a new era in representative governance, especially for the needs and experiences of people in rural areas to reach Parliament.

This essay was written with Vandinika Shukla and previously appeared in The Conversation.

Top news app caught sharing “entirely false” AI-generated news

5 June 2024 at 16:57

After the most downloaded local news app in the US, NewsBreak, shared an AI-generated story about a fake New Jersey shooting last Christmas Eve, New Jersey police had to post a statement online to reassure troubled citizens that the story was "entirely false," Reuters reported.

"Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described," the cops' Facebook post said. "It seems this 'news' outlet's AI writes fiction they have no problem publishing to readers."

It took NewsBreak—which attracts over 50 million monthly users—four days to remove the fake shooting story, and it apparently wasn't an isolated incident. According to Reuters, NewsBreak's AI tool, which scrapes the web and helps rewrite local news stories, has been used to publish at least 40 misleading or erroneous stories since 2021.


Beware of scammers impersonating Malwarebytes

30 May 2024 at 12:33

Scammers love to bank on the good name of legitimate companies to gain the trust of their intended targets. Recently, it came to our attention that a cybercriminal is using fake websites for security products to spread malware. One of those websites was impersonating the Malwarebytes brand.

Very convincing fake Malwarebytes site at malwarebytes.pro
Image courtesy of Trellix

The download from the fake website was an information stealer with a filename that resembled that of the actual Malwarebytes installer.

Besides some common system information, this stealer goes after:

  • Account tokens
  • Steam tokens
  • Saved card details
  • System profiles
  • Telegram logins
  • List of running process names
  • List of installed browsers and their versions
  • Credentials from the browser “User Data” folder, local DB, and autofill
  • Cookies from the browser
  • List of folders on the C drive

This is just one scam, but there are always others using our name to target people. We regularly see tech support scammers pretending to be Malwarebytes to defraud their victims.

Some scammers sell—sometimes illegal—copies of Malwarebytes for prices that are boldly exaggerated.

scammer selling overpriced copy of Malwarebytes

Others will try to phish you by sending a fake confirmation email for a Malwarebytes subscription.

Phishing mail saying it's an order confirmation

And sometimes, when you search for Malwarebytes, you will find imposters mixed in among legitimate resellers. Some even use our logo.

search result for Malwarebytes Premium pointing to an imposter site

In this case, Google warned us that there was danger up ahead.

Google warning for malwarebytes-premium.net

The site itself was not as convincing as the advert, and some poking around in the source code told us the website was likely built by a Russian-speaking individual.

source code including Russian error prompt

How to avoid brand scams

It’s easy to see how people can fall for fake brand notices. Here are some things that can help you avoid scams that use our name:

  • Download software directly from our own sites if you are not sure about the legitimacy of a copy offered to you elsewhere.
  • Check that any emails that appear to come from Malwarebytes are sent from a malwarebytes.com address (see the short sketch after this list).
  • If you have any questions or doubts as to the legitimacy of something, you can contact our Support team.
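
As a rough illustration of the second point above, here is a minimal Python sketch, with hypothetical sender addresses, of what checking the sender domain can look like. It only confirms that the address ends in the genuine malwarebytes.com domain rather than a look-alike; it is not a substitute for full email authentication checks such as SPF or DKIM.

# Minimal sketch: confirm the sender's domain is exactly malwarebytes.com,
# not a look-alike such as malwarebytes.pro or malwarebytes-premium.net.
from email.utils import parseaddr

LEGITIMATE_DOMAIN = "malwarebytes.com"

def sender_domain_is_legitimate(from_header: str) -> bool:
    _, address = parseaddr(from_header)        # strips display names like "Support <...>"
    domain = address.rpartition("@")[2].lower()
    return domain == LEGITIMATE_DOMAIN

# Hypothetical examples
print(sender_domain_is_legitimate("Malwarebytes <no-reply@malwarebytes.com>"))    # True
print(sender_domain_is_legitimate("Billing <orders@malwarebytes-premium.net>"))   # False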

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Racist AI Deepfake of Baltimore Principal Leads to Arrest

26 April 2024 at 14:41
A high school athletic director in the Baltimore area was arrested after he used A.I., the police said, to make a racist and antisemitic audio clip.

Myriam Rogers, superintendent of Baltimore County Public Schools, speaking about the arrest of Dazhon Darien, the athletic director of Pikesville High. (© Kim Hairston/The Baltimore Sun)