
Why Red Teams Play a Central Role in Helping Organizations Secure AI Systems

The content you are trying to access is available only to member users of the site. You must have a free membership at CISO2CISO.COM to access this content. You can register for free. Thank you. The CISO2CISO Advisors Team.

The post Why Red Teams Play a Central Role in Helping Organizations Secure AI Systems was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Perspectives on Security for the Board

The content you are trying to access is available only to member users of the site. You must have a free membership at CISO2CISO.COM to access this content. You can register for free. Thank you. The CISO2CISO Advisors Team.

The post Perspectives on Security for the Board was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Google’s Project Naptime Aims for AI-Based Vulnerability Research

25 June 2024 at 12:35

Security analysts at Google are developing a framework that they hope will enable large language models (LLMs) to eventually be able to run automated vulnerability research, particularly analyses of malware variants. The analysts with Google’s Project Zero – a group founded a decade ago whose job it is to find zero-day vulnerabilities – have been…

The post Google’s Project Naptime Aims for AI-Based Vulnerability Research appeared first on Security Boulevard.

Alert: Austrian Non-Profit Accuses Google Privacy Sandbox

25 June 2024 at 03:00

Google’s initiative to phase out third-party tracking cookies through its Google Privacy Sandbox has encountered criticism from the Austrian privacy advocacy group noyb (none of your business). The non-profit alleges that Google’s proposed solution still facilitates user tracking, albeit in a different form. According to noyb, Google’s Privacy Sandbox, marketed as […]

The post Alert: Austrian Non-Profit Accuses Google Privacy Sandbox appeared first on TuxCare.

The post Alert: Austrian Non-Profit Accuses Google Privacy Sandbox appeared first on Security Boulevard.

Microsoft, Google Come to the Aid of Rural Hospitals

11 June 2024 at 11:56

Microsoft and Google will provide free or low-cost cybersecurity tools and services to rural hospitals in the United States at a time when health care facilities are coming under increasing attack by ransomware gangs and other threat groups. For independent rural and critical access hospitals, Microsoft will provide grants and as much as 75% discounts…

The post Microsoft, Google Come to the Aid of Rural Hospitals appeared first on Security Boulevard.

Microsoft and Google Announce Plans to Help Rural U.S. Hospitals Defend Against Cyberattacks

By: Alan J
10 June 2024 at 16:55


Microsoft and Google have announced plans to offer free or highly discounted cybersecurity services to rural hospitals across the United States. These initiatives come as the U.S. healthcare sector faces a surge in ransomware attacks, which more than doubled last year, posing a serious threat to patient care and hospital operations. The program, developed in collaboration with the White House, the American Hospital Association, and the National Rural Health Association, aims to strengthen rural hospitals' defenses by providing them with free security updates, security assessments, and training for hospital staff.

Microsoft and Google Cybersecurity Plans for Rural Hospitals

Microsoft has launched a full-fledged cybersecurity program to meet the needs of rural hospitals, which are often more vulnerable to cyberattacks due to more limited IT security resources, staff and training than their urban peers. The program will deliver free and low-cost technology services, including:
  • Nonprofit pricing and discounts of up to 75% on Microsoft's security products for independent Critical Access Hospitals and Rural Emergency Hospitals.
  • Larger rural hospitals already equipped with eligible Microsoft solutions will receive advanced security suites for free.
  • Free Windows 10 security updates for participating rural hospitals for at least one year.
  • Free cybersecurity assessments and training for hospital employees to help them better manage system security.
Justin Spelhaug, corporate vice president of Microsoft Philanthropies, said in a statement, “Healthcare should be available no matter where you call home, and the rise in cyberattacks threatens the viability of rural hospitals and impacts communities across the U.S. Microsoft is committed to delivering vital technology security and support at a time when these rural hospitals need them most.” Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technologies, said in a statement:
“Cyber-attacks against the U.S. healthcare systems rose 130% in 2023, forcing hospitals to cancel procedures and impacting Americans’ access to critical care. Rural hospitals are particularly hard hit as they are often the sole source of care for the communities they serve and lack trained cyber staff and modern cyber defenses. President Biden is committed to every American having access to the care they need, and effective cybersecurity is a part of that. So, we’re excited to work with Microsoft to launch cybersecurity programs that will provide training, advice and technology to help America’s rural hospitals be safe online.”
Alongside Microsoft's efforts, Google also announced that it will provide free cybersecurity advice to rural hospitals and non-profit organizations while also launching a pilot program to match its cybersecurity services with the specific needs of rural healthcare facilities.

Plans Are Part of Broader National Effort

Rural hospitals remain one of the most common targets for cyberattacks, according to data from the National Rural Health Association. Rural hospitals in the U.S. serve over 60 million people living in rural areas, who sometimes have to travel considerable distances for care even without the disruption of a cyberattack. Neuberger stated, “We’re in new territory as we see ... this wave of attacks against hospitals.”

Rick Pollack, president of the American Hospital Association, said, “Rural hospitals are often the primary source of healthcare in their communities, so keeping them open and safe from cyberattacks is critical. We appreciate Microsoft stepping forward to offer its expertise and resources to help secure part of America’s healthcare safety net.”

The plans are part of a broader effort by the United States government to enlist private partners and tech giants such as Microsoft and Google to use their expertise to plug significant gaps in the defense of the healthcare sector.

Google will start deleting location history

7 June 2024 at 12:26

Google announced that it will reduce the amount of personal data it is storing by automatically deleting old data from “Timeline”—the feature that, previously named “Location History,” tracks user routes and trips based on a phone’s location, allowing people to revisit all the places they’ve been in the past.

In an email, Google told users that they will have until December 1, 2024 to save their travel history to their mobile devices before the company starts deleting old data. If you use this feature, that means you have about five months before losing your location history.

Moving forward, Google will link location information to the devices you use, rather than to your user account(s). And, instead of backing up your data to the cloud, Google will soon start to store it locally on the device.

As I pointed out years ago, Location History allowed me to “spy” on my wife’s whereabouts without having to install anything on her phone. After some digging, I learned that my Google account was added to my wife’s phone’s accounts when I logged in on the Play Store on her phone. The extra account this created on her phone was not removed when I logged out after noticing the tracking issue.

That issue should be solved by implementing this new policy. (Let’s remember, though, that this is an issue that Google formerly considered a feature rather than a problem.)

Once effective, unless you take action and enable the new Timeline settings by December 1, Google will attempt to move the past 90 days of your travel history to the first device you sign in to your Google account on. If you want to keep using Timeline:

  • Open Google Maps on your device.
  • Tap your profile picture (or initial) in the upper right corner.
  • Choose Your Timeline.
  • Select whether you want to keep your location data until you manually delete it or have Google auto-delete it after 3, 18, or 36 months.

In April 2023, Google Play launched a series of initiatives that give users control over the way third-party apps store data about them, seemingly in an effort to increase transparency and give people more control over how apps collect and use their data.

With the latest announcement, it appears that Google is finally tackling its own apps.

Only recently, Google agreed to purge billions of records containing personal information collected from more than 136 million people in the US surfing the internet using its Chrome web browser. But this was part of a settlement in a lawsuit accusing the search giant of illegal surveillance.

It’s nice to see the needle move in the good direction for a change. As Bruce Schneier pointed out in his article Online Privacy and Overfishing:

“Each successive generation of the public is accustomed to the privacy status quo of their youth. What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.”

This has led us all to a world where we don’t even have the expectation of privacy anymore when it comes to what we do online or when using modern technology in general.

If you want to take firmer control over how your location is tracked and shared, we recommend reading How to turn off location tracking on Android.



Detecting Malicious Trackers

21 May 2024 at 07:09

From Slashdot:

Apple and Google have launched a new industry standard called “Detecting Unwanted Location Trackers” to combat the misuse of Bluetooth trackers for stalking. Starting Monday, iPhone and Android users will receive alerts when an unknown Bluetooth device is detected moving with them. The move comes after numerous cases of trackers like Apple’s AirTags being used for malicious purposes.

Several Bluetooth tag companies have committed to making their future products compatible with the new standard. Apple and Google said they will continue collaborating with the Internet Engineering Task Force to further develop this technology and address the issue of unwanted tracking.

This seems like a good idea, but I worry about false alarms. If I am walking with a friend, will it alert if they have a Bluetooth tracking device in their pocket?

Your vacation, reservations, and online dates, now chosen by AI: Lock and Code S05E11

20 May 2024 at 11:10

This week on the Lock and Code podcast…

The irrigation of the internet is coming.

For decades, we’ve accessed the internet much like how we, so long ago, accessed water—by traveling to it. We connected (quite literally), we logged on, and we zipped to addresses and sites to read, learn, shop, and scroll. 

Over the years, the internet was accessible from increasingly more devices, like smartphones, smartwatches, and even smart fridges. But still, it had to be accessed, like a well dug into the ground to pull up the water below.

Moving forward, that could all change.

This year, several companies debuted their vision of a future that incorporates Artificial Intelligence to deliver the internet directly to you, with less searching, less typing, and less decision fatigue. 

For the startup Humane, that vision includes the use of the company’s AI-powered, voice-operated wearable pin that clips to your clothes. By simply speaking to the AI pin, users can text a friend, discover the nutritional facts about food that sits directly in front of them, and even compare the prices of an item found in stores with the price online.

For a separate startup, Rabbit, that vision similarly relies on a small, attractive smart-concierge gadget, the R1. With the bright-orange slab designed in collaboration with the company Teenage Engineering, users can hail an Uber to take them to the airport, play an album on Spotify, and put in a delivery order for dinner.

Away from physical devices, The Browser Company of New York is also experimenting with AI in its own web browser, Arc. In February, the company debuted its endeavor to create a “browser that browses for you” with a snazzy video that showed off Arc’s AI capabilities to create unique, individualized web pages in response to questions about recipes, dinner reservations, and more.

But all these small-scale projects, announced in the first month or so of 2024, had to make room a few months later for big-money interest from the first ever internet conglomerate of the world—Google. At the company’s annual Google I/O conference on May 14, VP and Head of Google Search Liz Reid pitched the audience on an AI-powered version of search in which “Google will do the Googling for you.”

Now, Reid said, even complex, multi-part questions can be answered directly within Google, with no need to click a website, evaluate its accuracy, or flip through its many pages to find the relevant information within.

This, it appears, could be the next phase of the internet… and our host David Ruiz has a lot to say about it.

Today, on the Lock and Code podcast, we bring back Director of Content Anna Brading and Cybersecurity Evangelist Mark Stockley to discuss AI-powered concierges, the value of human choice when so many small decisions could be taken away by AI, and, as explained by Stockley, whether the appeal of AI is not in finding the “best” vacation, recipe, or dinner reservation, but rather the best of anything for its user.

“It’s not there to tell you what the best chocolate chip cookie in the world is for everyone. It’s there to help you figure out what the best chocolate chip cookie is for you, on a Monday evening, when the weather’s hot, and you’re hungry.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)



Another Chrome Vulnerability

14 May 2024 at 07:01

Google has patched another Chrome zero-day:

On Thursday, Google said an anonymous source notified it of the vulnerability. The vulnerability carries a severity rating of 8.8 out of 10. In response, Google said, it would be releasing versions 124.0.6367.201/.202 for macOS and Windows and 124.0.6367.201 for Linux in subsequent days.

“Google is aware that an exploit for CVE-2024-4671 exists in the wild,” the company said.

Google didn’t provide any other details about the exploit, such as what platforms were targeted, who was behind the exploit, or what they were using it for.

Google patches critical vulnerability for Androids with Qualcomm chips

3 April 2024 at 16:40

In April’s update for the Android operating system (OS), Google has patched 28 vulnerabilities, one of which is rated critical for Android devices equipped with Qualcomm chips.

You can find your device’s Android version number, security update level, and Google Play system level in your Settings app. You’ll get notifications when updates are available for you, but you can also check for updates.

If your Android phone is at patch level 2024-04-05 or later, then the issues discussed below have been fixed. The updates have been made available for Android 12, 12L, and 13. Android partners are notified of all issues at least a month before publication; however, this doesn’t always mean that the patches are available for devices from all vendors.

For most phones it works like this: Under About phone or About device you can tap on Software updates to check if there are new updates available for your device, although there may be slight differences based on the brand, type, and Android version of your device.

The Common Vulnerabilities and Exposures (CVE) database lists publicly disclosed computer security flaws. The Qualcomm CVE is listed as CVE-2023-28582. It has a CVSS score of 9.8 out of 10 and is described as a memory corruption in the Data Modem while verifying a hello-verify message during the Datagram Transport Layer Security (DTLS) handshake.

The cause of the memory corruption lies in a buffer copy without checking the size of the input. Practically, this means that a remote attacker can cause a buffer overflow during the verification of a DTLS handshake, allowing them to execute code on the affected device.
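
To make this bug class concrete, here is a small, hypothetical C sketch of a buffer copy without input size validation, alongside the usual fix. It is illustrative only, not Qualcomm's modem code; all names and sizes are invented.

    /* Illustrative sketch of the bug class behind CVE-2023-28582:
     * a buffer copy without checking the size of the input.
     * Not Qualcomm's code; names and sizes are invented. */
    #include <stdint.h>
    #include <string.h>

    #define COOKIE_BUF_LEN 32

    /* Vulnerable pattern: trusts the attacker-controlled length field. */
    void handle_hello_verify_bad(const uint8_t *cookie, size_t cookie_len) {
        uint8_t buf[COOKIE_BUF_LEN];
        memcpy(buf, cookie, cookie_len); /* overflows buf if cookie_len > 32 */
    }

    /* Fixed pattern: reject oversized input before copying. */
    int handle_hello_verify_good(const uint8_t *cookie, size_t cookie_len) {
        uint8_t buf[COOKIE_BUF_LEN];
        if (cookie_len > sizeof(buf))
            return -1;                   /* malformed handshake, drop it */
        memcpy(buf, cookie, cookie_len);
        return 0;
    }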

Another vulnerability highlighted by Google is CVE-2024-23704, an elevation of privilege (EoP) vulnerability in the System component that affects Android 13 and Android 14.

This vulnerability could lead to local escalation of privilege with no additional execution privileges needed. Local privilege escalation happens when one user acquires the system rights of another user. This could allow an attacker to access information they shouldn’t have access to, or perform actions at a higher level of permissions.

Pixel users

Google warns Pixel users that there are indications that two high severity vulnerabilities may be under limited, targeted exploitation. These vulnerabilities are:

  • CVE-2024-29745: An information disclosure vulnerability in the bootloader component. Bootloaders are one of the first programs to load and ensure that all relevant operating system data is loaded into the main memory when a device is started.
  • CVE-2024-29748: An elevation of privilege (EoP) vulnerability in the Pixel firmware. Firmware is device-specific software that provides basic machine instructions that allow the hardware to function and communicate with other software running on the device.

On Pixel devices, a security patch level of 2024-04-05 resolves all these security vulnerabilities.



Google Chrome gets ‘Device Bound Session Credentials’ to stop cookie theft

3 April 2024 at 15:44

Google has announced the introduction of Device Bound Session Credentials (DBSC) to secure Chrome users against cookie theft.

In January we reported how hackers found a way to gain unauthorized access to Google accounts, bypassing multi-factor authentication (MFA), by stealing authentication cookies with info-stealer malware. An authentication cookie is added to a web browser after a user proves who they are by logging in. It tells a website that a user has already logged in, so they aren’t asked for their username and password over and over again. A cybercriminal with an authentication cookie for a website doesn’t need a password, because the website thinks they’ve already logged in. It doesn’t even matter if the owner of the account changes their password.
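
To see why a stolen cookie is enough, here is a small, hypothetical C sketch of a traditional bearer-token session check; the session table and cookie value are invented for illustration, but real web servers perform essentially the same comparison.

    /* Sketch: a session cookie as a pure bearer token. The server only
     * compares bytes; it cannot tell the real user's browser from malware
     * that exfiltrated the same cookie value. All names are invented. */
    #include <stdio.h>
    #include <string.h>

    struct session { const char *cookie; const char *user; };

    static const struct session sessions[] = {
        { "d41d8cd98f00b204", "alice" }, /* issued when alice logged in with MFA */
    };

    static const char *lookup_user(const char *cookie) {
        for (size_t i = 0; i < sizeof sessions / sizeof sessions[0]; i++)
            if (strcmp(sessions[i].cookie, cookie) == 0)
                return sessions[i].user;
        return NULL;
    }

    int main(void) {
        const char *stolen = "d41d8cd98f00b204"; /* grabbed by an info stealer */
        const char *user = lookup_user(stolen);
        printf("%s\n", user ? user : "not logged in"); /* prints "alice" */
        return 0;
    }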

At the time, Google said it would take action:

“We routinely upgrade our defenses against such techniques and to secure users who fall victim to malware. In this instance, Google has taken action to secure any compromised accounts detected.”

However, some info stealers reportedly updated their methods to counter Google’s fraud detection measures.

The idea that malware could steal authentication cookies and send them to a criminal did not sit well with Google. In its announcement it explains that, “because of the way cookies and operating systems interact, primarily on desktop operating systems, Chrome and other browsers cannot protect them against malware that has the same level of access as the browser itself.”

So it turned to another solution. And if the simplicity of the solution is any indication of its effectiveness, then this should be a good one.

It works by using cryptography to limit the use of an authentication cookie to the device that first created it. When a user visits a website and starts a session, the browser creates two cryptographic keys—one public, one private. The private key is stored on the device in a way that is hard to export, and the public key is given to the website. The website uses the public key to verify that the browser using the authentication cookie has the private key. In order to use a stolen cookie, a thief would also need to steal the private key, so the more robust the “hard to export” bit gets, the safer your cookies will be.
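
As an illustration of the underlying idea, here is a minimal C sketch using libsodium's Ed25519 signatures. This is not Google's actual DBSC protocol or API; the challenge flow and names are simplified assumptions, and a real implementation would keep the private key in hardware-backed storage such as a TPM.

    /* Minimal sketch of a device-bound session check, using libsodium.
     * Illustrative only; not the DBSC protocol. */
    #include <sodium.h>
    #include <stdio.h>

    int main(void) {
        if (sodium_init() < 0) return 1;

        /* Browser: create a keypair when the session starts. The secret
         * key stays on the device; the public key goes to the website. */
        unsigned char pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char sk[crypto_sign_SECRETKEYBYTES];
        crypto_sign_keypair(pk, sk);

        /* Server: issue a random challenge alongside the session cookie. */
        unsigned char challenge[32];
        randombytes_buf(challenge, sizeof challenge);

        /* Browser: prove possession of the private key by signing. */
        unsigned char sig[crypto_sign_BYTES];
        crypto_sign_detached(sig, NULL, challenge, sizeof challenge, sk);

        /* Server: a stolen cookie alone fails this check, because the
         * thief does not hold the device-bound private key. */
        if (crypto_sign_verify_detached(sig, challenge, sizeof challenge, pk) == 0)
            puts("cookie accepted: presenter holds the bound key");
        else
            puts("rejected: cookie presented without the bound key");
        return 0;
    }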

Google stated in its announcement that it thinks this will substantially reduce the success rate of cookie theft malware. This would force attackers to act locally on a device, which makes on-device detection and cleanup more effective, both for anti-malware software and for enterprise-managed devices.

As such, Device Bound Session Credentials fits in well with Google’s strategy to phase out third-party cookies.

Development of the project is done in the open on GitHub, with the aim of making DBSC an open web standard and having a fully working trial ready by the end of 2024. Google says that identity providers such as Okta, and browsers such as Microsoft Edge, have expressed interest in DBSC as they want to secure their users against cookie theft.



Class-Action Lawsuit against Google’s Incognito Mode

3 April 2024 at 07:01

The lawsuit has been settled:

Google has agreed to delete “billions of data records” the company collected while users browsed the web using Incognito mode, according to documents filed in federal court in San Francisco on Monday. The agreement, part of a settlement in a class action lawsuit filed in 2020, caps off years of disclosures about Google’s practices that shed light on how much data the tech giant siphons from its users—even when they’re in private-browsing mode.

Under the terms of the settlement, Google must further update the Incognito mode “splash page” that appears anytime you open an Incognito mode Chrome window after previously updating it in January. The Incognito splash page will explicitly state that Google collects data from third-party websites “regardless of which browsing or browser mode you use,” and stipulate that “third-party sites and apps that integrate our services may still share information with Google,” among other changes. Details about Google’s private-browsing data collection must also appear in the company’s privacy policy.

I was an expert witness for the prosecution (that’s the class, against Google). I don’t know if my declarations and deposition will become public.

Update Chrome now! Google patches possible drive-by vulnerability

28 March 2024 at 07:25

Google has released an update to Chrome which includes seven security fixes. Version 123.0.6312.86/.87 of Chrome for Windows and Mac and 123.0.6312.86 for Linux will roll out over the coming days/weeks.

The easiest way to update Chrome is to allow it to update automatically, which basically uses the same method as outlined below but does not require your attention. But you can end up lagging behind if you never close the browser or if something goes wrong—such as an extension stopping you from updating the browser.

So, it doesn’t hurt to check now and then. And now would be a good time, given the severity of the vulnerability in this patch. My preferred method is to have Chrome open the page chrome://settings/help which you can also find by clicking Settings > About Chrome.

If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is relaunch the browser in order for the update to complete, and for you to be safe from those vulnerabilities.

After the update, Chrome should report “Chrome is up to date” and the version should be 123.0.6312.86 or later.

Technical details

Google never gives out a lot of information about vulnerabilities, for obvious reasons. Access to bug details and links may be kept restricted until a majority of users are updated with a fix.

There is one critical vulnerability that looks like it might be of interest to cybercriminals.

CVE-2024-2883: Use after free (UAF) vulnerability in ANGLE in Google Chrome prior to 123.0.6312.86 could allow a remote attacker to potentially exploit heap corruption via a crafted HTML page.

ANGLE (Almost Native Graphics Layer Engine) is a browser component that deals with WebGL (short for Web Graphics Library) content. WebGL is a JavaScript API for rendering interactive 2D and 3D graphics within any compatible web browser without the use of plug-ins.

UAF is a type of vulnerability that is the result of the incorrect use of dynamic memory during a program’s operation. If, after freeing a memory location, a program does not clear the pointer to that memory, an attacker can use the error to manipulate the program. Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. In this case, when the vulnerability is exploited, it can lead to heap corruption.

Heap corruption occurs when a program modifies the contents of a memory location outside of the memory allocated to the program. The outcome can be relatively benign and cause a memory leak, or it may be fatal and cause a memory fault, usually in the program that causes the corruption.
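
For illustration, here is a deliberately buggy, hypothetical C sketch of a use-after-free in which a stale pointer is dereferenced after its memory has been freed. It is not Chrome's code; all names are invented.

    /* Illustrative use-after-free sketch; not Chrome's code. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        void (*on_render)(void); /* a function pointer worth hijacking */
    } widget_t;

    static void benign_render(void) { puts("rendering"); }

    int main(void) {
        widget_t *w = malloc(sizeof *w);
        if (w == NULL) return 1;
        w->on_render = benign_render;

        free(w); /* memory freed, but the dangling pointer is not cleared */

        /* If attacker-controlled data is later allocated into the freed
         * slot, the stale pointer below can call attacker-chosen code. */
        w->on_render(); /* use after free: undefined behavior */
        return 0;
    }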

Chromium vulnerabilities are considered critical if they “allow an attacker to read or write arbitrary resources (including but not limited to the file system, registry, network, etc.) on the underlying platform, with the user’s full privileges.”

To sum up: an attacker could create a specially crafted HTML page, put it online as a website, and use it to exploit the vulnerability, potentially compromising the systems of visitors.

My suggestion: don’t wait for the update, get it now.



Google Pays $10M in Bug Bounties in 2023

22 March 2024 at 07:01

BleepingComputer has the details. It’s $2M less than in 2022, but it’s still a lot.

The highest reward for a vulnerability report in 2023 was $113,337, while the total tally since the program’s launch in 2010 has reached $59 million.

For Android, the world’s most popular and widely used mobile operating system, the program awarded over $3.4 million.

Google also increased the maximum reward amount for critical vulnerabilities concerning Android to $15,000, driving increased community reports.

During security conferences like ESCAL8 and hardwear.io, Google awarded $70,000 for 20 critical discoveries in Wear OS and Android Automotive OS and another $116,000 for 50 reports concerning issues in Nest, Fitbit, and Wearables.

Google’s other big software project, the Chrome browser, was the subject of 359 security bug reports that paid out a total of $2.1 million.

Slashdot thread.

AI and the Evolution of Social Media

19 March 2024 at 07:05

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by the hundreds of millions, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviors.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal gray area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

This essay was written with Nathan Sanders, and was originally published in MIT Technology Review.

Using Google Search to Find Software Can Be Risky

25 January 2024 at 13:38

Google continues to struggle with cybercriminals running malicious ads on its search platform to trick people into downloading booby-trapped copies of popular free software applications. The malicious ads, which appear above organic search results and often precede links to legitimate sources of the same software, can make searching for software on Google a dicey affair.

Google says keeping users safe is a top priority, and that the company has a team of thousands working around the clock to create and enforce its abuse policies. And by most accounts, the threat from bad ads leading to backdoored software has subsided significantly compared to a year ago.

But cybercrooks are constantly figuring out ingenious ways to fly beneath Google’s anti-abuse radar, and new examples of bad ads leading to malware are still too common.

For example, a Google search earlier this week for the free graphic design program FreeCAD produced a result in which a “Sponsored” ad at the top of the search results advertised the software as available from freecad-us[.]org. Although this website claims to be the official FreeCAD website, that honor belongs to the result directly below — the legitimate freecad.org.

How do we know freecad-us[.]org is malicious? A review at DomainTools.com shows this domain is the newest (registered Jan. 19, 2024) of more than 200 domains at the Internet address 93.190.143[.]252 that are confusingly similar to popular software titles, including dashlane-project[.]com, filezillasoft[.]com, keepermanager[.]com, and libreofficeproject[.]com.

Some of the domains at this Netherlands host appear to be little more than software review websites that steal content from established information sources in the IT world, including Gartner, PCWorld, Slashdot and TechRadar.

Other domains at 93.190.143[.]252 do serve actual software downloads, but none of them are likely to be malicious if one visits the sites through direct navigation. If one visits openai-project[.]org and downloads a copy of the popular Windows desktop management application Rainmeter, for example, the file that is downloaded has the same exact file signature as the real Rainmeter installer available from rainmeter.net.

But this is only a ruse, says Tom Hegel, principal threat researcher at the security firm SentinelOne. Hegel has been tracking these malicious domains for more than a year, and he said the seemingly benign software download sites will periodically turn evil, swapping out legitimate copies of popular software titles with backdoored versions that allow cybercriminals to remotely commandeer the systems.

“They’re using automation to pull in fake content, and they’re rotating in and out of hosting malware,” Hegel said, noting that the malicious downloads may only be offered to visitors who come from specific geographic locations, like the United States. “In the malicious ad campaigns we’ve seen tied to this group, they would wait until the domains gain legitimacy on the search engines, and then flip the page for a day or so and then flip back.”

In February 2023, Hegel co-authored a report on this same network, which SentinelOne has dubbed MalVirt (a play on “malvertising”). They concluded that the surge in malicious ads spoofing various software products was directly responsible for a surge in malware infections from infostealer trojans like IcedID, Redline Stealer, Formbook and AuroraStealer.

Hegel noted that the spike in malicious software-themed ads came not long after Microsoft started blocking Office macros by default in documents downloaded from the Internet. He said the volume of the current malicious ad campaigns from this group appears to be relatively low compared to a year ago.

“It appears to be same campaign continuing,” Hegel said. “Last January, every Google search for ‘Autocad’ led to something bad. Now, it’s like they’re paying Google to get one out of every dozen of searches. My guess it’s still continuing because of the up-and-down [of the] domains hosting malware and then looking legitimate.”

Several of the websites at this Netherlands host (93.190.143[.]252) are currently blocked by Google’s Safebrowsing technology, and labeled with a conspicuous red warning saying the website will try to foist malware on visitors who ignore the warning and continue.

But it remains a mystery why Google has not similarly blocked the more than 240 other domains at this same host, or else removed them from its search index entirely, especially considering there is nothing else but these domains hosted at that Netherlands IP address, and that they have all remained at that address for the past year.

In response to questions from KrebsOnSecurity, Google said maintaining a safe ads ecosystem and keeping malware off of its platforms is a priority across Google.

“Bad actors often employ sophisticated measures to conceal their identities and evade our policies and enforcement, sometimes showing Google one thing and users something else,” Google said in a written statement. “We’ve reviewed the ads in question, removed those that violated our policies, and suspended the associated accounts. We’ll continue to monitor and apply our protections.”

Google says it removed 5.2 billion ads in 2022, and restricted more than 4.3 billion ads and suspended over 6.7 million advertiser accounts. The company’s latest ad safety report says Google in 2022 blocked or removed 1.36 billion advertisements for violating its abuse policies.

Some of the domains referenced in this story were included in SentinelOne’s February 2023 report, but dozens more have been added since, such as those spoofing the official download sites for Corel Draw, GitHub Desktop, RoboForm and TeamViewer.

This October 2023 report on the FreeCAD user forum came from a user who reported downloading a copy of the software from freecadsoft[.]com after seeing the site promoted at the top of a Google search result for “freecad.” Almost a month later, another FreeCAD user reported getting stung by the same scam.

“This got me,” FreeCAD forum user “Matterform” wrote on Nov. 19, 2023. “Please leave a report with Google so it can flag it. They paid Google for sponsored posts.”

SentinelOne’s report didn’t delve into the “who” behind this ongoing MalVirt campaign, and there are precious few clues that point to attribution. All of the domains in question were registered through webnic.cc, and several of them display a placeholder page saying the site is ready for content. Viewing the HTML source of these placeholder pages shows many of the hidden comments in the code are in Cyrillic.

Trying to track the crooks using Google’s Ad Transparency tools didn’t lead far. The ad transparency record for the malicious ad featuring freecad-us[.]org shows that the advertising account used to pay for the ad has only run one previous ad through Google search: It advertised a wedding photography website in New Zealand.

The apparent owner of that photography website did not respond to requests for comment, but it’s also likely his Google advertising account was hacked and used to run these malicious ads.
