Yesterday — 25 June 2024

Why Red Teams Play a Central Role in Helping Organizations Secure AI Systems

The content you are trying to access is private only to member users of the site. You must have a free membership at CISO2CISO.COM to access this content. You can register for free. Thank you. The CISO2CISO Advisors Team.

The post Why Red Teams Play a Central Role in Helping Organizations Secure AI Systems was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Perspectives on Security for the Board

The content you are trying to access is private only to member users of the site. You must have a free membership at CISO2CISO.COM to access this content. You can register for free. Thank you. The CISO2CISO Advisors Team.

The post Perspectives on Security for the Board was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Microsoft risks huge fine over “possibly abusive” bundling of Teams and Office

25 June 2024 at 12:59
A screen shows a virtual meeting with Microsoft Teams at a conference on January 30, 2024 in Barcelona, Spain. (credit: Cesc Maymo / Contributor | Getty Images News)

Microsoft may be hit with a massive fine in the European Union for "possibly abusively" bundling Teams with its Office 365 and Microsoft 365 software suites for businesses.

On Tuesday, the European Commission (EC) announced preliminary findings of an investigation into whether Microsoft's "suite-centric business model combining multiple types of software in a single offering" unfairly shut out rivals in the "software as a service" (SaaS) market.

"Since at least April 2019," the EC found, Microsoft's practice of "tying Teams with its core SaaS productivity applications" potentially restricted competition in the "market for communication and collaboration products."


Google Is Killing Infinite Scroll in Search Results

By: msmash
25 June 2024 at 15:30
Google is switching back to pagination for its search results, abandoning the continuous scroll feature introduced in 2022 for desktop and 2021 for mobile. The change, effective immediately for desktop users, aims to improve search result loading speeds, Google said, adding that infinite scrolling did not significantly enhance user satisfaction. Mobile users will see the change in coming months.
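The switch is essentially a change in how the results page requests offsets. A rough sketch of the difference follows; the URL and the "start" parameter here are illustrative assumptions, not Google's documented interface:

```python
# Toy illustration of offset-based pagination vs. continuous scroll.
# The base URL and "start" parameter are assumptions for illustration.

def page_url(base, query, page, per_page=10):
    """Build the URL for one discrete page of results."""
    offset = (page - 1) * per_page
    return f"{base}?q={query}&start={offset}"

# Classic pagination: each page is an explicit, user-initiated request.
urls = [page_url("https://example.com/search", "rust", p) for p in (1, 2, 3)]

# Continuous scroll issues the same kind of request automatically as the
# user nears the bottom, appending results instead of replacing the page;
# dropping it means no speculative fetches, hence faster initial loads.
```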

Read more of this story at Slashdot.

Google’s Project Naptime Aims for AI-Based Vulnerability Research

25 June 2024 at 12:35

Security analysts at Google are developing a framework that they hope will enable large language models (LLMs) to eventually be able to run automated vulnerability research, particularly analyses of malware variants. The analysts with Google’s Project Zero – a group founded a decade ago whose job it is to find zero-day vulnerabilities – have been…

The post Google’s Project Naptime Aims for AI-Based Vulnerability Research appeared first on Security Boulevard.

The Plagiarism Machine

25 June 2024 at 11:01
"What I learned from this experiment is that flooding the internet with an infinite amount of what could pass for journalism is cheap and even easier than I imagined, as long as I didn't respect the craft, my audience, or myself. I also learned that while AI has made all of this much easier, faster, and better, the advent of generative AI did not invent this practice—it's simply adding to a vast infrastructure of tools and services built by companies like WordPress, Fiverr, and Google designed to convert clicks to dollars at the expense of quality journalism and information, polluting the internet we all use and live in every day." I Paid $365.63 to Replace 404 Media With AI

"Luckily, after going through this process, I also learned that while doing this is profitable to some, the practice relies on a fundamental misunderstanding of what journalism is, what makes it good, and therefore gives me more confidence than ever that a fully automated blog will never be able to replace 404 Media, or other investigative news outlets."

Political deepfakes are the most popular way to misuse AI

25 June 2024 at 09:43
(credit: Arkadiusz Warguła via Getty)

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video, and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

The most common goal of actors misusing generative AI was to shape or influence public opinion, the analysis, conducted with the search group’s research and development unit Jigsaw, found. That accounted for 27 percent of uses, feeding into fears over how deepfakes might influence elections globally this year.


Alert: Austrian Non-Profit Accuses Google Privacy Sandbox

25 June 2024 at 03:00

Google’s initiative to phase out third-party tracking cookies through its Google Privacy Sandbox has encountered criticism from Austrian privacy advocacy group noyb (none of your business). The non-profit alleges that Google’s proposed solution still facilitates user tracking, albeit in a different form.

Allegations of Misleading Practices

According to noyb, Google’s Privacy Sandbox, marketed as […]

The post Alert: Austrian Non-Profit Accuses Google Privacy Sandbox appeared first on TuxCare.

The post Alert: Austrian Non-Profit Accuses Google Privacy Sandbox appeared first on Security Boulevard.

Before yesterday

Music industry giants allege mass copyright violation by AI firms

24 June 2024 at 14:44
Michael Jackson in concert, 1986. Sony Music owns a large portion of publishing rights to Jackson's music. (credit: Getty Images)

Universal Music Group, Sony Music, and Warner Records have sued AI music-synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models, reports Reuters. Udio and Suno can generate novel song recordings based on text-based descriptions of music (i.e., "a dubstep song about Linus Torvalds").

The lawsuits, filed in federal courts in New York and Massachusetts, claim that the AI companies' use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists.

Like other generative AI models, both Udio and Suno (which we covered separately in April) rely on a broad selection of existing human-created artworks that teach a neural network the relationship between words in a written prompt and styles of music. The record labels correctly note that these companies have been deliberately vague about the sources of their training data.


Win+C, Windows’ most cursed keyboard shortcut, is getting retired again

21 June 2024 at 14:19
A rendering of the Copilot button. (credit: Microsoft)

Microsoft is all-in on its Copilot+ PC push right now, but the fact is that they'll be an extremely small minority among the PC install base for the foreseeable future. The program's stringent hardware requirements—16GB of RAM, at least 256GB of storage, and a fast neural processing unit (NPU)—disqualify all but new PCs, keeping features like Recall from running on all current Windows 11 PCs.

But the Copilot chatbot remains supported on all Windows 11 PCs (and most Windows 10 PCs), and a change Microsoft has made to recent Windows 11 Insider Preview builds is actually making the feature less useful and accessible than it is in the current publicly available versions of Windows. Copilot is being changed from a persistent sidebar into an app window that can be resized, minimized, and pinned and unpinned from the taskbar, just like any other app. But at least as of this writing, this version of Copilot can no longer adjust Windows' settings, and it's no longer possible to call it up with the Windows+C keyboard shortcut. Only newer keyboards with the dedicated Copilot key will have an easy built-in keyboard shortcut for summoning Copilot.

If Microsoft keeps these changes intact, they'll hit Windows 11 PCs when the 24H2 update is released to the general public later this year; the changes are already present on Copilot+ PCs, which are running a version of Windows 11 24H2 out of the box.


Runway’s latest AI video generator brings giant cotton candy monsters to life

18 June 2024 at 17:41
Screen capture of a Runway Gen-3 Alpha video generated with the prompt "A giant humanoid, made of fluffy blue cotton candy, stomping on the ground, and roaring to the sky, clear blue sky behind them." (credit: Runway)

On Sunday, Runway announced a new AI video synthesis model called Gen-3 Alpha that's still under development, but it appears to create video of similar quality to OpenAI's Sora, which debuted earlier this year (and has also not yet been released). It can generate novel, high-definition video from text prompts that range from realistic humans to surrealistic monsters stomping the countryside.

Unlike Runway's previous best model from June 2023, which could only create two-second-long clips, Gen-3 Alpha can reportedly create 10-second-long video segments of people, places, and things that have a consistency and coherency that easily surpasses Gen-2. If 10 seconds sounds short compared to Sora's full minute of video, consider that the company is working with a shoestring budget of compute compared to more lavishly funded OpenAI—and actually has a history of shipping video generation capability to commercial users.

Gen-3 Alpha does not generate audio to accompany the video clips, and it's highly likely that temporally coherent generations (those that keep a character consistent over time) are dependent on similar high-quality training material. But Runway's improvement in visual fidelity over the past year is difficult to ignore.


French Court Orders Google, Cloudflare, Cisco To Poison DNS To Stop Piracy

By: BeauHD
17 June 2024 at 18:00
An anonymous reader quotes a report from TorrentFreak: A French court has ordered Google, Cloudflare, and Cisco to poison their DNS resolvers to prevent circumvention of blocking measures, targeting around 117 pirate sports streaming domains. The move is another anti-piracy escalation for broadcaster Canal+, which also has permission to completely deindex the sites from search engine results. [...] Two decisions were handed down by the Paris judicial court last month; one concerning Premier League matches and the other the Champions League. The orders instruct Google, Cloudflare, and Cisco to implement measures similar to those in place at local ISPs. To protect the rights of Canal+, the companies must prevent French internet users from using their services to access around 117 pirate domains. According to French publication l'Informe, which broke the news, Google attorney Sebastien Proust crunched figures published by government anti-piracy agency Arcom and concluded that the effect on piracy rates, if any, is likely to be minimal. Starting with a pool of all users who use alternative DNS for any reason, users of pirate sites -- especially sites broadcasting the matches in question -- were isolated from the rest. Users of both VPNs and third-party DNS were further excluded from the group since DNS blocking is ineffective against VPNs. Proust found that the number of users likely to be affected by DNS blocking at Google, Cloudflare, and Cisco, amounts to 0.084% of the total population of French Internet users. Citing a recent survey, which found that only 2% of those who face blocks simply give up and don't find other means of circumvention, he reached an interesting conclusion. "2% of 0.084% is 0.00168% of Internet users! In absolute terms, that would represent a small group of around 800 people across France!" 
In common with other courts presented with the same arguments, the Paris court said the number of people using alternative DNS to access the sites, and the simplicity of switching DNS, are irrelevant. Canal+ owns the rights to the broadcasts and if it wishes to request a blocking injunction, it has the legal right to do so. The DNS providers' assertion that their services are not covered by the legislation was also waved aside by the court. Google says it intends to comply with the order. As part of the original matter in 2023, it was already required to deindex the domains from search results under the same law. At least in theory, this means that those who circumvented the original blocks using these alternative DNS services, will be back to square one and confronted by blocks all over again. Given that circumventing this set of blocks will be as straightforward as circumventing the originals, that raises the question of what measures Canal+ will demand next, and from whom.
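DNS blocking of this kind happens at the resolver: when a client asks for a blocked domain, the resolver withholds the real answer, so switching to a resolver without the blocklist (or using a VPN) bypasses it, which is exactly the circumvention Proust's figures try to size. A toy model of the mechanism, with invented domain names and addresses:

```python
# Toy model of resolver-side DNS blocking (all domains/IPs invented).

BLOCKLIST = {"pirate-streams.example"}

RECORDS = {
    "pirate-streams.example": "203.0.113.7",
    "news.example": "198.51.100.2",
}

def resolve(domain):
    """Return the IP for `domain`, or None (an NXDOMAIN-style refusal) if blocked."""
    if domain in BLOCKLIST:
        return None  # the resolver knows the answer but withholds it
    return RECORDS.get(domain)

assert resolve("news.example") == "198.51.100.2"
assert resolve("pirate-streams.example") is None

# Proust's estimate from the article: 2% (those who give up when blocked)
# of 0.084% (affected alternative-DNS users) of French internet users.
affected_share = 0.02 * 0.00084
print(f"{affected_share:.5%}")  # 0.00168%
```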

Read more of this story at Slashdot.

Google’s abuse of Fitbit continues with web app shutdown

12 June 2024 at 15:02
(credit: Fitbit)

Google's continued abuse of the Fitbit brand is continuing with the shutdown of the web dashboard. Fitbit.com used to be both a storefront and a way for users to get a big-screen UI to sift through reams of fitness data. The store closed up shop in April, and now the web dashboard is dying in July.

In a post on the "Fitbit Community" forums, the company said: "Next month, we’re consolidating the Fitbit.com dashboard into the Fitbit app. The web browser will no longer offer access to the Fitbit.com dashboard after July 8, 2024." That's it. There's no replacement or new fitness thing Google is more interested in; web functionality is just being removed. Google, we'll remind you, used to be a web company. Now it's a phone app or nothing. Google did the same thing to its Google Fit product in 2019, killing off the more powerful website in favor of an app focus.

Dumping the web app leaves a few holes in Fitbit's ecosystem. The Fitbit app doesn't support big screens like tablet devices, so this is removing the only large-format interface for data. Fitbit's competitors all have big-screen interfaces. Garmin has a very similar website, and the Apple Watch has an iPad health app. This isn't an improvement. To make matters worse, the app does not have the features of the web dashboard, with many of the livid comments in the forums and on Reddit calling out the app's deficiencies in graphing, achievement statistics, calorie counting, and logs.


Chrome OS switching to the Android Linux kernel and related Android subsystems

12 June 2024 at 19:23

Surprisingly quietly, in the middle of Apple’s WWDC, Google’s ChromeOS team has made a rather massive announcement that seems to be staying a bit under the radar. Google is announcing today that it is replacing many of ChromeOS’ current relatively standard Linux-based subsystems with the comparable subsystems from Android.

To continue rolling out new Google AI features to users at a faster and even larger scale, we’ll be embracing portions of the Android stack, like the Android Linux kernel and Android frameworks, as part of the foundation of ChromeOS. We already have a strong history of collaboration, with Android apps available on ChromeOS and the start of unifying our Bluetooth stacks as of ChromeOS 122.

↫ Prajakta Gudadhe and Alexander Kuscher on the Chromium blog

The benefits to Google here are obvious: instead of developing and maintaining two variants of the Linux kernel and various related subsystems, they now only have to focus on one, saving money and time. It will also make it easier for both platforms to benefit from new features and bugfixes, which should benefit users of both platforms quite a bit.

As mentioned in the snippet, the first major subsystem in ChromeOS to be replaced by its Android counterpart is Bluetooth. ChromeOS was using the BlueZ Bluetooth stack – the same one used by most (all?) Linux distributions today, initially developed by Qualcomm – but has now switched over to Fluoride, the stack from Android.

According to Google, Fluoride has a number of benefits over BlueZ. It runs almost entirely in userspace, as opposed to BlueZ, where more than 50% of the code resides in the kernel. In addition, Fluoride is written in Rust, and Google claims it has a simpler architecture, making it easier to perform testing. Google also highlights that Fluoride has a far larger userbase – i.e., all Android users – which also presents a number of benefits.

Google performed internal tests to measure the improvements as a result from switching ChromeOS from BlueZ to Fluoride, and the test results speak for themselves – pairing is faster, pairing fails less often, and reconnecting an already paired device fails less often. With Bluetooth being a rather problematic technology to use, any improvements to the user experience are welcome.

At the end of Google’s detailed blog post about the switch to Fluoride, the company notes that it intends for the project as a whole – which is called Project Floss – to be a standalone open source project, capable of running on any Linux distribution.

We aspire to position Project Floss as a standalone open source project that can reach beyond the walls of Google’s own operating system in a way where we can maximize the overall value and agility of the larger Bluetooth ecosystem. We also intend to support the Linux community as a whole with the goal that Floss can easily run on most Linux distributions.

↫ Russ Lindsay, Abhishek Pandit-Subedi, Alain Michaud, and Loic Wei Yu Neng on the chromeOS dev website

If Fluoride can indeed deliver tangible, measurable benefits in Bluetooth performance on Linux desktops, I have no doubt quite a few distributions will be more than willing to switch over. Bluetooth is used a lot, and if Fedora, Ubuntu, Arch, and so on, can improve the Bluetooth experience by switching over, I’m pretty sure they will, or at least consider doing so.

Google’s Pixel 8 series gets USB-C to DisplayPort; desktop mode rumors heat up

11 June 2024 at 14:05
The Pixel 8. (credit: Google)

Google's June Android update is out, and it's bringing a few notable changes for Pixel phones. The most interesting is that the Pixel 8a, Pixel 8, and Pixel 8 Pro are all getting DisplayPort Alt Mode capabilities via their USB-C ports. This means you can go from USB-C to DisplayPort and plug right into a TV or monitor. This has been rumored forever and landed in some of the Android Betas earlier, but now it's finally shipping out to production.

The Pixel 8's initial display support is just a mirrored mode. You can either get an awkward vertical phone in the middle of your wide-screen display or turn the phone sideways and get a more reasonable layout. You could see it being useful for videos or presentations. It would be nice if it could do more.

Alongside this year-plus of DisplayPort rumors has been a steady drumbeat (again) for an Android desktop mode. Google has been playing around with this idea since Android 7.0 in 2016. In 2019, we were told it was just a development testing project, and it never shipped to any real devices. Work on Android's desktop mode has been heating up, though, so maybe a second swing at this idea will result in an actual product.


Apple and OpenAI currently have the most misunderstood partnership in tech

11 June 2024 at 13:29
He isn't using an iPhone, but some people talk to Siri like this.

On Monday, Apple premiered "Apple Intelligence" during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we've seen confusion on social media about why Apple didn't develop a cutting-edge GPT-4-like chatbot internally. Despite Apple's year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple's lack of innovation.

"This is really strange. Surely Apple could train a very good competing LLM if they wanted? They've had a year," wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misconceptions about it—saying things like, "It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!"


Microsoft, Google Come to the Aid of Rural Hospitals

11 June 2024 at 11:56

Microsoft and Google will provide free or low-cost cybersecurity tools and services to rural hospitals in the United States at a time when health care facilities are coming under increasing attack by ransomware gangs and other threat groups. For independent rural and critical access hospitals, Microsoft will provide grants and as much as 75% discounts…

The post Microsoft, Google Come to the Aid of Rural Hospitals appeared first on Security Boulevard.

The Google Pay app is dead

10 June 2024 at 17:41
Google Pay is dead! (credit: Aurich Lawson / Ars Technica)

Google has killed off the Google Pay app. 9to5Google reports Google's old payments app stopped working recently, following shutdown plans that were announced in February. Google is shutting down the Google Pay app in the US, while in-store NFC payments seem to still be branded "Google Pay." Remember, this is Google's dysfunctional payments division, so all that's happening is Google Payment app No. 3 (Google Pay) is being shut down in favor of Google Payment app No. 4 (Google Wallet). The shutdown caps off the implosion of Google's payments division after a lot of poor decisions and failed product launches.

Google's NFC payment journey started in 2011 with Google Wallet (apps No. 1 and No. 4 are both called Google Wallet). In 2011, Google was a technology trailblazer and basically popularized the idea of paying for something with your phone in many regions (with the notable exception of Japan). Google shipped the first non-Japanese phones with the feature, fought carriers trying to stop phone payments from happening, and begged stores to get new, compatible terminals. Google's entire project was blown away when Apple Pay launched in 2014, and Google's response was its second payment app, Android Pay, in 2015. This copied much of Apple's setup, like sending payment tokens instead of the actual credit card number. Google Pay was a rebrand of this setup and arrived in 2018.
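The "payment tokens" mentioned above refer to tokenization: the phone stores and transmits a surrogate value rather than the card number itself, so a compromised merchant terminal never sees the real card. A much-simplified sketch of the concept (this is not the actual EMV or Android Pay protocol; all values are invented):

```python
# Greatly simplified illustration of payment tokenization.
# Not the real EMV tokenization protocol; all values are invented.
import secrets

class TokenVault:
    """Issuer-side mapping from surrogate tokens to real card numbers."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, card_number):
        token = secrets.token_hex(8)       # surrogate provisioned to the phone
        self._vault[token] = card_number   # real number never leaves the issuer
        return token

    def detokenize(self, token):
        return self._vault.get(token)

vault = TokenVault()
token = vault.tokenize("4111111111111111")   # well-known test card number
assert token != "4111111111111111"           # the terminal sees only the token
assert vault.detokenize(token) == "4111111111111111"
```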

The 2018 version of Google Pay was a continuation of the Android Pay codebase, which was a continuation of the Google Wallet codebase. Despite all the rebrands, Google's payment apps were an evolution, and none of the previous apps were really "shut down"—they were in-place upgrades. Everything changed in 2021 when a new version of Google Pay was launched, which is when Google's payment division started to go off the rails.


Microsoft and Google Announce Plans to Help Rural U.S. Hospitals Defend Against Cyberattacks

By: Alan J
10 June 2024 at 16:55

Microsoft and Google have announced plans to offer free or highly discounted cybersecurity services to rural hospitals across the United States. These initiatives come as the U.S. healthcare sector faces a surge in ransomware attacks that more than doubled last year, posing a serious threat to patient care and hospital operations. The program – developed in collaboration with the White House, the American Hospital Association, and the National Rural Health Association – aims to make rural hospitals better defended by providing them with free security updates, security assessments, and training for hospital staff.

Microsoft and Google Cybersecurity Plans for Rural Hospitals

Microsoft has launched a full-fledged cybersecurity program to meet the needs of rural hospitals, which are often more vulnerable to cyberattacks due to more limited IT security resources, staff and training than their urban peers. The program will deliver free and low-cost technology services, including:
  • Nonprofit pricing and discounts of up to 75% on Microsoft's security products for independent Critical Access Hospitals and Rural Emergency Hospitals.
  • Advanced security suites at no extra cost for larger rural hospitals already equipped with eligible Microsoft solutions.
  • Free Windows 10 security updates for participating rural hospitals for at least one year.
  • Free cybersecurity assessments and training for hospital employees to help them better manage system security.
Justin Spelhaug, corporate vice president of Microsoft Philanthropies, said in a statement: “Healthcare should be available no matter where you call home, and the rise in cyberattacks threatens the viability of rural hospitals and impacts communities across the U.S. Microsoft is committed to delivering vital technology security and support at a time when these rural hospitals need them most.” Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technologies, said in a statement:
“Cyber-attacks against the U.S. healthcare systems rose 130% in 2023, forcing hospitals to cancel procedures and impacting Americans’ access to critical care. Rural hospitals are particularly hard hit as they are often the sole source of care for the communities they serve and lack trained cyber staff and modern cyber defenses. President Biden is committed to every American having access to the care they need, and effective cybersecurity is a part of that. So, we’re excited to work with Microsoft to launch cybersecurity programs that will provide training, advice and technology to help America’s rural hospitals be safe online.”
Alongside Microsoft's efforts, Google also announced that it will provide free cybersecurity advice to rural hospitals and non-profit organizations while also launching a pilot program to match its cybersecurity services with the specific needs of rural healthcare facilities.

Plans Are Part of Broader National Effort

Rural hospitals remain one of the most common targets for cyberattacks, according to data from the National Rural Health Association. Rural hospitals in the U.S. serve over 60 million people living in rural areas, who sometimes have to travel considerable distances for care even without the inconvenience of a cyberattack. Neuberger stated, “We’re in new territory as we see ... this wave of attacks against hospitals.” Rick Pollack, president of the American Hospital Association, said, “Rural hospitals are often the primary source of healthcare in their communities, so keeping them open and safe from cyberattacks is critical. We appreciate Microsoft stepping forward to offer its expertise and resources to help secure part of America’s healthcare safety net.” The plans are part of a broader effort by the United States government to direct private partners and tech giants such as Microsoft and Google to use their expertise to plug significant gaps in the defense of the healthcare sector.

Unlike Google, XScreensaver will never run around and desert you

By: JHarris
9 June 2024 at 03:45
Google demanded of jwz a Privacy Policy for their Android port of XScreensaver, which collects no user data, despite their own privacy missteps. He's crowdsourcing a list of things XScreensaver will never do that Google does, with source links.

I was going to post this in the current linkthread, but then figured, it's fine as a post as-is, so let's just throw it at the front page instead. It made a sound like splat!

Google avoids jury trial by sending $2.3 million check to US government

7 June 2024 at 17:05
At Google headquarters, the company's logo is seen on the glass exterior of a building. (credit: Getty Images | Justin Sullivan)

Google has achieved its goal of avoiding a jury trial in one antitrust case after sending a $2.3 million check to the US Department of Justice. Google will face a bench trial, a trial conducted by a judge without a jury, after a ruling today that the preemptive check is big enough to cover any damages that might have been awarded by a jury.

"I am satisfied that the cashier's check satisfies any damages claim," US District Judge Leonie Brinkema said after a hearing in the Eastern District of Virginia on Friday, according to Bloomberg. "A fair reading of the expert reports does not support" a higher amount, Brinkema said.

The check was reportedly for $2,289,751. "Because the damages are no longer part of the case, Brinkema ruled a jury is no longer needed and she will oversee the trial, set to begin in September," according to Bloomberg.


Google will start deleting location history

7 June 2024 at 12:26

Google announced that it will reduce the amount of personal data it is storing by automatically deleting old data from “Timeline”—the feature that, previously named “Location History,” tracks user routes and trips based on a phone’s location, allowing people to revisit all the places they’ve been in the past.

In an email, Google told users that they will have until December 1, 2024 to save their travel history to their mobile devices before the company starts deleting old data. If you use this feature, that means you have about five months before losing your location history.

Moving forward, Google will link location information to the devices you use, rather than to your user account(s). And instead of backing up your data to the cloud, Google will soon start to store it locally on the device.

As I pointed out years ago, Location History allowed me to “spy” on my wife’s whereabouts without having to install anything on her phone. After some digging, I learned that my Google account was added to my wife’s phone’s accounts when I logged in on the Play Store on her phone. The extra account this created on her phone was not removed when I logged out after noticing the tracking issue.

That issue should be solved by implementing this new policy. (Let’s remember, though, that this is an issue that Google formerly considered a feature rather than a problem.)

Once effective, unless you take action and enable the new Timeline settings by December 1, Google will attempt to move the past 90 days of your travel history to the first device you sign in to your Google account on. If you want to keep using Timeline:

  • Open Google Maps on your device.
  • Tap your profile picture (or initial) in the upper right corner.
  • Choose Your Timeline.
  • Select whether you want to keep your location data until you manually delete it, or have Google auto-delete it after 3, 18, or 36 months.

In April 2023, Google Play launched a series of initiatives that give users control over the way separate, third-party apps store data about them. This was seemingly done because Google wanted to increase transparency and give people control over how apps collect and use their data.

With the latest announcement, it appears that Google is finally tackling its own apps.

Only recently, Google agreed to purge billions of records containing personal information collected from more than 136 million people in the US surfing the internet using its Chrome web browser. But this was part of a settlement in a lawsuit accusing the search giant of illegal surveillance.

It’s nice to see the needle move in the good direction for a change. As Bruce Schneier pointed out in his article Online Privacy and Overfishing:

“Each successive generation of the public is accustomed to the privacy status quo of their youth. What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.”

This has led us all to a world where we don’t even have the expectation of privacy anymore when it comes to what we do online or when using modern technology in general.

If you want to take firmer control over how your location is tracked and shared, we recommend reading How to turn off location tracking on Android.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Google Announces Investment in 15 New Cybersecurity Clinics Across the U.S.

By: Alan J
5 June 2024 at 12:12


Google has announced a new initiative to establish 15 cybersecurity clinics across the US. The move attempts to address escalating cybersecurity threats, as well as the additional risks and opportunities presented by bleeding-edge technology such as AI. The clinics aim to provide funding, mentorship, and additional resources to higher education institutions in the area of cybersecurity. The initiative expects that supporting the growth of a skilled and dedicated cybersecurity workforce will help protect critical infrastructure and organizations, and help address the cybersecurity skills shortage.

Cybersecurity Clinics Aim At Building Resilient Workforce

The cybersecurity clinic initiative, launched in collaboration with the Consortium of Cybersecurity Clinics, invites higher education institutions to apply for funding to establish new clinics. Approved clinics will receive $1 million in cybersecurity funding, mentorship, Titan Security Keys (phishing-resistant 2FA keys), and scholarships for Google's Cybersecurity Certification. The clinics aim to bridge academic knowledge and real-world application by giving students important hands-on experience, and will also help regional organizations protect themselves from potential cyber threats. For example, students at Indiana University's cybersecurity clinic have been helping the local fire department devise contingency plans for scenarios in which online communications are compromised. At the Rochester Institute of Technology, students helped their local water authority review and improve its IT security configurations across operating sites. Google's collaboration page lists the institutions at which the new cybersecurity clinics will be set up, marking them as 'New Grantees':
  • Tougaloo College
  • Turtle Mountain Community College
  • University of Hawai’i Maui College
  • Cyber Center of Excellence (CCOE), San Diego State University (SDSU), California State University San Marcos (CSUSM) and National University
  • West Virginia State University
  • Dakota State University
  • University of North Carolina Greensboro
  • University of Arizona
  • Franklin Cummings Tech
  • Spelman College
  • NSI CTC - HUSB
  • Northeastern State University in Oklahoma
  • Trident Technical College
  • Eastern Washington University
  • The University of Texas at El Paso
These new clinics add to the ten cybersecurity clinic grants already active at various institutions (an interactive map of active clinics is available at cybersecurityclinics.org):
  • University of Texas at San Antonio
  • UC Berkeley
  • Rochester Institute of Technology
  • Massachusetts Institute of Technology
  • Stillman College
  • Indiana University
  • University of Nevada, Las Vegas
  • The University of Alabama
  • University of Georgia
  • University of Texas at Austin

Clinics Attempt to Focus on Diversity and Inclusivity

In the announcement, Google also affirmed its commitment to fostering diversity and inclusivity within the cybersecurity industry. In line with these values, Google has extended its cybersecurity funding support to organizations such as the Computing Alliance of Hispanic-Serving Institutions (CAHSI), Stillman College, and the American Indian Science and Engineering Society (AISES), which aid colleges and universities serving large populations of minority students, such as Black, Hispanic, Indigenous, or tribal students. "Cyber attacks are a threat to everyone's security, so it's essential that cyber education is accessible," said a Google spokesperson. "With these newest 15 clinics, we're supporting institutions that serve a variety of students and communities: traditional colleges and universities as well as community and technical colleges in both rural and urban communities."

Google's investment in these clinics represents a strategic move to address the nation's workforce shortage, with at least 450,000 cybersecurity positions remaining open across the country. Google stated that its new cybersecurity clinics would help impart cybersecurity training to hundreds of students, while increasing its own commitment by $5 million, for a total of about $25 million in support across clinics. The tech giant expects that these moves will help enable the operation of 25 cybersecurity clinics nationwide by 2025.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Chrome begins limiting ad blockers

31 May 2024 at 19:06

If, for some reason, you’re still using Chrome or one of the browsers that put a little hat on Chrome and call it a different browser, the time to consider switching to the only real alternative – Firefox – is getting closer and closer. Yesterday, Google announced that the end of Manifest V2 is now truly here.

Starting on June 3 on the Chrome Beta, Dev and Canary channels, if users still have Manifest V2 extensions installed, some will start to see a warning banner when visiting their extension management page – chrome://extensions – informing them that some (Manifest V2) extensions they have installed will soon no longer be supported. At the same time, extensions with the Featured badge that are still using Manifest V2 will lose their badge.

This will be followed gradually in the coming months by the disabling of those extensions. Users will be directed to the Chrome Web Store, where they will be recommended Manifest V3 alternatives for their disabled extension. For a short time after the extensions are disabled, users will still be able to turn their Manifest V2 extensions back on, but over time, this toggle will go away as well.

↫ David Li on the Chromium blog

In case you’ve been asleep at the wheel – and if you’re still using Chrome, you most likely are – Manifest V3 will heavily limit what content blockers can do, making them less effective at things like blocking ads. In a move that surprises absolutely nobody, it’s not entirely coincidental that Manifest V3 is being pushed hard by Google, the world’s largest online advertising company. While Google claims all the major content blockers have Manifest V3 versions available, the company fails to mention that they carry monikers such as “uBlock Origin Lite”, to indicate they are, well, shittier at their job than their Manifest V2 counterparts.
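The technical crux is that Manifest V3 removes the blocking form of the `webRequest` API, which let extensions inspect and cancel each request in code, and replaces it with `declarativeNetRequest`, where the extension hands Chrome a static list of rules up front. A minimal blocking rule in the new format looks roughly like this (the filter pattern and resource types here are illustrative, not taken from any real blocker):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}
```

Chrome caps how many such rules an extension may register, and a static list cannot replicate the dynamic, per-request logic that full content blockers rely on — which is a large part of why the Manifest V3 ports are less capable than their V2 counterparts.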

I can’t make this any more clear: switch to Firefox. Now. While Firefox and Mozilla sure aren’t perfect, they have absolutely zero plans to phase out Manifest V2, and the proper, full versions of content blockers will continue to work. As the recent leaks have made very clear, Chrome is even more of a vehicle for user tracking and ad targeting than we already knew, and with the deprecation of Manifest V2 from Chrome, Google is limiting yet another avenue for blocking ads.

OSNews has ads, and they are beyond my control, since our ads are managed by OSNews’ owner, and not by me. My position has always been clear: your computer, your rules. Nobody has any right to display ads on your computer, using your bandwidth, using your processor cycles, using your pixels. Sure, it’d be great if we could earn some income through ads, but we’d greatly prefer you become a Patreon (which removes ads) or make an individual donation to support OSNews and keep us alive that way instead.

Over 90 malicious Android apps with 5.5M installs found on Google Play – Source: www.bleepingcomputer.com


Source: www.bleepingcomputer.com – Author: Bill Toulas Over 90 malicious Android apps were found installed over 5.5 million times through Google Play to deliver malware and adware, with the Anatsa banking trojan seeing a recent surge in activity. Anatsa (aka “Teabot”) is a banking trojan that targets over 650 applications of financial institutions in Europe, the US, the […]

The post Over 90 malicious Android apps with 5.5M installs found on Google Play – Source: www.bleepingcomputer.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Google Discovers Fourth Zero-Day in Less Than a Month – Source: www.darkreading.com


Source: www.darkreading.com – Author: Dark Reading Staff. Google has released an update from its Chrome team for a high-severity security flaw, tracked as CVE-2024-5274, for which an exploit actively exists in the wild. The bug is classified as critical and is a type confusion vulnerability in the […]

The post Google Discovers Fourth Zero-Day in Less Than a Month – Source: www.darkreading.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Google just updated its algorithm, and the Internet will never be the same

25 May 2024 at 18:48

But Google results are a zero-sum game. If the search engine sends traffic to one site, it has to take it from another, and the effects on the losers in this Reddit equation are just as dramatic. “Google’s just committing war on publisher websites,” Ray says. “It’s almost as if Google designed an algorithm update to specifically go after small bloggers. I’ve talked to so many people who’ve just had everything wiped out,” she says.

A number of website owners and search experts who spoke to the BBC said there’s been a general shift in Google results towards websites with big established brands, and away from small and independent sites, that seems totally disconnected from the quality of the content.

↫ Thomas Germain at the BBC

These stories are coming out left, right, and centre now – and the stories are heartbreaking. Websites that publish truly quality content with honest, valuable, real reviews are now not only having to combat the monster of Google’s own creation – SEO spam websites – but also Google itself, who has started downranking them in favour of fucksmith on Reddit. Add to that the various “AI” boxes and answers Google is adding to its site, and the assault on quality content is coming from all angles.

I don’t look at our numbers or traffic sources, since I don’t want to be influenced by any of that stuff. I don’t think OSNews really lives or dies by a constant flow of Google results, but if we do, there’s really not much I can do about it anyway. Google Search once gaveth, and ever since that fateful day it’s mostly been Google Search taketh. I can’t control it, so I’m not going to worry about it. All I can do is keep the site updated, point out we really do need your support on Patreon and Ko-Fi – to keep OSNews running, and perhaps maybe even going ad-free entirely – and hope for the best.

I do feel for the people who still make quality content on the web, though – especially people like the ones mentioned in the linked BBC article, who set up an entire business around honest, quality reviews of something as mundane as air purifiers. It must be devastating to see all you’ve worked for destroyed by SEO spam, fucksmith on Reddit, and answers from an “AI” high on crack.

How to make Google’s new “Web” search option the default in your browser

21 May 2024 at 17:47

Last week, Google unveiled a new little feature in Google Search, called “Web”. Residing alongside the various other options like “All”, “Images”, “Video”, and so on, its goal is effectively to strip Google Search results of everything we generally don’t like, and just present a list of actual links to actual websites. It turns out it’s quite simple to set this as your default search “engine” in your browser, so somebody made a website to make that process a little easier.

On May 15th Google released a new “Web” filter that removes “AI Overview” and other clutter, leaving only traditional web results. Here is how you can set “Google Web” as your default search engine.

↫ TenBlueLinks.org

It’s important to note that this is not some separate search engine, and that no data flows any differently than when using regular Google. All this does is append the parameter udm=14 to the URL, which loads the “Web” option.
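For anyone who would rather script this than change browser settings, the mechanics are trivial; a small illustrative sketch (the function name is mine, not from TenBlueLinks):

```c
#include <stdio.h>
#include <string.h>

/* Build a Google Search URL with the "Web" filter: appending udm=14
 * selects the plain list-of-links view. The query is assumed to be
 * already percent-encoded; a real client would encode it first. */
int web_search_url(char *out, size_t out_len, const char *encoded_query) {
    int n = snprintf(out, out_len,
                     "https://www.google.com/search?q=%s&udm=14",
                     encoded_query);
    return (n < 0 || (size_t)n >= out_len) ? -1 : 0;  /* -1 on error/truncation */
}
```

A browser “search engine” entry accomplishes the same thing with the template `https://www.google.com/search?q=%s&udm=14`.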

Detecting Malicious Trackers

21 May 2024 at 07:09

From Slashdot:

Apple and Google have launched a new industry standard called “Detecting Unwanted Location Trackers” to combat the misuse of Bluetooth trackers for stalking. Starting Monday, iPhone and Android users will receive alerts when an unknown Bluetooth device is detected moving with them. The move comes after numerous cases of trackers like Apple’s AirTags being used for malicious purposes.

Several Bluetooth tag companies have committed to making their future products compatible with the new standard. Apple and Google said they will continue collaborating with the Internet Engineering Task Force to further develop this technology and address the issue of unwanted tracking.

This seems like a good idea, but I worry about false alarms. If I am walking with a friend, will it alert if they have a Bluetooth tracking device in their pocket?

Your vacation, reservations, and online dates, now chosen by AI: Lock and Code S05E11

20 May 2024 at 11:10

This week on the Lock and Code podcast…

The irrigation of the internet is coming.

For decades, we’ve accessed the internet much like how we, so long ago, accessed water—by traveling to it. We connected (quite literally), we logged on, and we zipped to addresses and sites to read, learn, shop, and scroll. 

Over the years, the internet was accessible from increasingly more devices, like smartphones, smartwatches, and even smart fridges. But still, it had to be accessed, like a well dug into the ground to pull up the water below.

Moving forward, that could all change.

This year, several companies debuted their vision of a future that incorporates Artificial Intelligence to deliver the internet directly to you, with less searching, less typing, and less decision fatigue. 

For the startup Humane, that vision includes the use of the company’s AI-powered, voice-operated wearable pin that clips to your clothes. By simply speaking to the AI pin, users can text a friend, discover the nutritional facts about food that sits directly in front of them, and even compare the prices of an item found in stores with the price online.

For a separate startup, Rabbit, that vision similarly relies on a small, attractive smart-concierge gadget, the R1. With the bright-orange slab designed in coordination by the company Teenage Engineering, users can hail an Uber to take them to the airport, play an album on Spotify, and put in a delivery order for dinner.

Away from physical devices, The Browser Company of New York is also experimenting with AI in its own web browser, Arc. In February, the company debuted its endeavor to create a “browser that browses for you” with a snazzy video that showed off Arc’s AI capabilities to create unique, individualized web pages in response to questions about recipes, dinner reservations, and more.

But all these small-scale projects, announced in the first month or so of 2024, had to make room a few months later for big-money interest from the first ever internet conglomerate of the world—Google. At the company’s annual Google I/O conference on May 14, VP and Head of Google Search Liz Reid pitched the audience on an AI-powered version of search in which “Google will do the Googling for you.”

Now, Reid said, even complex, multi-part questions can be answered directly within Google, with no need to click a website, evaluate its accuracy, or flip through its many pages to find the relevant information within.

This, it appears, could be the next phase of the internet… and our host David Ruiz has a lot to say about it.

Today, on the Lock and Code podcast, we bring back Director of Content Anna Brading and Cybersecurity Evangelist Mark Stockley to discuss AI-powered concierges, the value of human choice when so many small decisions could be taken away by AI, and, as explained by Stockley, whether the appeal of AI is not in finding the “best” vacation, recipe, or dinner reservation, but rather the best of anything for its user.

“It’s not there to tell you what the best chocolate chip cookie in the world is for everyone. It’s there to help you figure out what the best chocolate chip cookie is for you, on a Monday evening, when the weather’s hot, and you’re hungry.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Google now offers ‘web’ search — and an “AI” opt-out button

15 May 2024 at 08:24

This is not a joke: Google will now let you perform a “web” search. It’s rolling out “web” searches now, and in my early tests on desktop, it’s looking like it could be an incredibly popular change to Google’s search engine.

The optional setting filters out almost all the other blocks of content that Google crams into a search results page, leaving you with links and text — and Google confirms to The Verge that it will block the company’s new AI Overviews as well.

↫ Sean Hollister at The Verge

I hate what the web has become.

Another Chrome Vulnerability

14 May 2024 at 07:01

Google has patched another Chrome zero-day:

On Thursday, Google said an anonymous source notified it of the vulnerability. The vulnerability carries a severity rating of 8.8 out of 10. In response, Google said, it would be releasing versions 124.0.6367.201/.202 for macOS and Windows and 124.0.6367.201 for Linux in subsequent days.

“Google is aware that an exploit for CVE-2024-4671 exists in the wild,” the company said.

Google didn’t provide any other details about the exploit, such as what platforms were targeted, who was behind the exploit, or what they were using it for.

Tech workers should shine a light on the industry’s secretive work with the military

10 May 2024 at 09:00

It’s a hell of a time to have a conscience if you work in tech. The ongoing Israeli assault on Gaza has brought the stakes of Silicon Valley’s military contracts into stark relief. Meanwhile, corporate leadership has embraced a no-politics-in-the-workplace policy enforced at the point of the knife.

Workers are caught in the middle. Do I take a stand and risk my job, my health insurance, my visa, my family’s home? Or do I ignore my suspicion that my work may be contributing to the murder of innocents on the other side of the world?  

No one can make that choice for you. But I can say with confidence born of experience that such choices can be more easily made if workers know what exactly the companies they work for are doing with militaries at home and abroad. And I also know this: those same companies themselves will never reveal this information unless they are forced to do so—or someone does it for them. 

For those who doubt that workers can make a difference in how trillion-dollar companies pursue their interests, I’m here to remind you that we’ve done it before. In 2017, I played a part in the successful #CancelMaven campaign that got Google to end its participation in Project Maven, a contract with the US Department of Defense to equip US military drones with artificial intelligence. I helped bring to light information that I saw as critically important and within the bounds of what anyone who worked for Google, or used its services, had a right to know. The information I released—about how Google had signed a contract with the DOD to put AI technology in drones and later tried to misrepresent the scope of that contract, which the company’s management had tried to keep from its staff and the general public—was a critical factor in pushing management to cancel the contract. As #CancelMaven became a rallying cry for the company’s staff and customers alike, it became impossible to ignore. 

Today a similar movement, organized under the banner of the coalition No Tech for Apartheid, is targeting Project Nimbus, a joint contract between Google and Amazon to provide cloud computing infrastructure and AI capabilities to the Israeli government and military. As of May 10, just over 97,000 people had signed its petition calling for an end to collaboration between Google, Amazon, and the Israeli military. I’m inspired by their efforts and dismayed by Google’s response. Earlier this month the company fired 50 workers it said had been involved in “disruptive activity” demanding transparency and accountability for Project Nimbus. Several were arrested. It was a decided overreach.  

Google is very different from the company it was seven years ago, and these firings are proof of that. Googlers today are facing off with a company that, in direct response to those earlier worker movements, has fortified itself against new demands. But every Death Star has its thermal exhaust port, and today Google has the same weakness it did back then: dozens if not hundreds of workers with access to information it wants to keep from becoming public. 

Not much is known about the Nimbus contract. It’s worth $1.2 billion and enlists Google and Amazon to provide wholesale cloud infrastructure and AI for the Israeli government and its ministry of defense. Some brave soul leaked a document to Time last month, providing evidence that Google and Israel negotiated an expansion of the contract as recently as March 27 of this year. We also know, from reporting by The Intercept, that Israeli weapons firms are required by government procurement guidelines to buy their cloud services from Google and Amazon. 

Leaks alone won’t bring an end to this contract. The #CancelMaven victory required a sustained focus over many months, with regular escalations, coordination with external academics and human rights organizations, and extensive internal organization and discipline. Having worked on the public policy and corporate comms teams at Google for a decade, I understood that its management does not care about one negative news cycle or even a few of them. Management buckled only after we were able to keep up the pressure and escalate our actions (leaking internal emails, reporting new info about the contract, etc.) for over six months. 

The No Tech for Apartheid campaign seems to have the necessary ingredients. If a strategically placed insider released information not otherwise known to the public about the Nimbus project, it could really increase the pressure on management to rethink its decision to get into bed with a military that’s currently overseeing mass killings of women and children.

My decision to leak was deeply personal and a long time in the making. It certainly wasn’t a spontaneous response to an op-ed, and I don’t presume to advise anyone currently at Google (or Amazon, Microsoft, Palantir, Anduril, or any of the growing list of companies peddling AI to militaries) to follow my example. 

However, if you’ve already decided to put your livelihood and freedom on the line, you should take steps to try to limit your risk. This whistleblower guide is helpful. You may even want to reach out to a lawyer before choosing to share information. 

In 2017, Google was nervous about how its military contracts might affect its public image. Back then, the company responded to our actions by defending the nature of the contract, insisting that its Project Maven work was strictly for reconnaissance and not for weapons targeting—conceding implicitly that helping to target drone strikes would be a bad thing. (An aside: Earlier this year the Pentagon confirmed that Project Maven, which is now a Palantir contract, had been used in targeting drone attacks in Yemen, Iraq, and Syria.) 

Today’s Google has wrapped its arms around the American flag, for good or ill. Yet despite this embrace of the US military, it doesn’t want to be seen as a company responsible for illegal killings. Today it maintains that the work it is doing as part of Project Nimbus “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” At the same time, it asserts that there is no room for politics at the workplace and has fired those demanding transparency and accountability. This raises a question: If Google is doing nothing sensitive as part of the Nimbus contract, why is it firing workers who are insisting that the company reveal what work the contract actually entails?  

As you read this, AI is helping Israel annihilate Palestinians by expanding the list of possible targets beyond anything that could be compiled by a human intelligence effort, according to +972 Magazine. Some Israel Defense Forces insiders are even sounding the alarm, calling it a dangerous “mass assassination program.” The world has not yet grappled with the implications of the proliferation of AI weaponry, but that is the trajectory we are on. It’s clear that absent sufficient backlash, the tech industry will continue to push for military contracts. It’s equally clear that neither national governments nor the UN is currently willing to take a stand. 

It will take a movement. A document that clearly demonstrates Silicon Valley’s direct complicity in the assault on Gaza could be the spark. Until then, rest assured that tech companies will continue to make as much money as possible developing the deadliest weapons imaginable. 

William Fitzgerald is a founder and partner at the Worker Agency, an advocacy agency in California. Before setting the firm up in 2018, he spent a decade at Google working on its government relation and communications teams.

ChromeOS App Mall unifies app discovery for Chromebooks

9 May 2024 at 09:42

We’ve been on the lookout for the arrival of the ChromeOS App Mall for a few months now. First discovered back in March, the new App Mall is arriving to do one, simple task: put the apps users want in one place, where they can be found on a Chromebook.

While we have access to web apps, PWAs, Android apps and Linux apps on Chromebooks, it’s not always clear how to go about finding them. Should you install the web version or the Play Store version? Which Play Store apps install a PWA versus an Android app? Where should you go to find the right one for you?

↫ Robby Payne at Chrome Unboxed

ChromeOS definitely needs a more unified, single place to find applications, and this seems like exactly what’s happening here.

Google postpones phasing out third party cookies in Chrome once more

24 April 2024 at 19:30

While Firefox and Safari phased out third party cookies years ago, it’s taking Chrome a bit longer because, well, daddy Google got ads to sell. As such, Google has been developing a complicated new alternative to third party cookies that it calls “Privacy Sandbox”, a name in the vein of “Greenland”. This process has not exactly been going well, because Google has had to postpone phasing out third party cookies several times now, and today, they had to postpone it again. This time, however, it’s because the UK competition authority, the CMA, still has some questions.

We recognize that there are ongoing challenges related to reconciling divergent feedback from the industry, regulators and developers, and will continue to engage closely with the entire ecosystem. It’s also critical that the CMA has sufficient time to review all evidence including results from industry tests, which the CMA has asked market participants to provide by the end of June. Given both of these significant considerations, we will not complete third-party cookie deprecation during the second half of Q4.

We remain committed to engaging closely with the CMA and ICO and we hope to conclude that process this year. Assuming we can reach an agreement, we envision proceeding with third-party cookie deprecation starting early next year.

↫ Google’s Greenland blog

Making a browser good enough to take over almost the entire browser market was an absolute master stroke by Google. Now can you all please switch over to Firefox or like Lynx or something?

The man who killed Google Search

23 April 2024 at 18:47

These emails — which I encourage you to look up — tell a dramatic story about how Google’s finance and advertising teams, led by Raghavan with the blessing of CEO Sundar Pichai, actively worked to make Google worse to make the company more money. This is what I mean when I talk about the Rot Economy — the illogical, product-destroying mindset that turns the products you love into torturous, frustrating quasi-tools that require you to fight the company’s intentions to get the service you want.

↫ Edward Zitron

Quite the read.

Google patches critical vulnerability for Androids with Qualcomm chips

3 April 2024 at 16:40

In April’s update for the Android operating system (OS), Google has patched 28 vulnerabilities, one of which is rated critical for Android devices equipped with Qualcomm chips.

You can find your device’s Android version number, security update level, and Google Play system level in your Settings app. You’ll get notifications when updates are available for you, but you can also check for updates.

If your Android phone is at patch level 2024-04-05 or later then the issues discussed below have been fixed. The updates have been made available for Android 12, 12L and 13. Android partners are notified of all issues at least a month before publication, however, this doesn’t always mean that the patches are available for devices from all vendors.

For most phones it works like this: Under About phone or About device you can tap on Software updates to check if there are new updates available for your device, although there may be slight differences based on the brand, type, and Android version of your device.

The Common Vulnerabilities and Exposures (CVE) database lists publicly disclosed computer security flaws. The Qualcomm CVE is listed as CVE-2023-28582. It has a CVSS score of 9.8 out of 10 and is described as a memory corruption in Data Modem while verifying a hello-verify message during the Datagram Transport Layer Security (DTLS) handshake.

The cause of the memory corruption lies in a buffer copy without checking the size of the input. Practically, this means that a remote attacker can cause a buffer overflow during the verification of a DTLS handshake, allowing them to execute code on the affected device.
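The vulnerability class is easier to see in code. The C sketch below is purely illustrative (it is not Qualcomm’s modem code, and the buffer size and function names are invented): it contrasts a copy of attacker-controlled input that never checks its length against the destination buffer with a bounded copy that rejects oversized data.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define COOKIE_BUF_LEN 32  /* fixed-size destination, like a DTLS cookie field */

/* Vulnerable pattern: copies attacker-controlled input without checking its
 * length against the destination buffer. When src_len > COOKIE_BUF_LEN this
 * overflows dst and corrupts adjacent memory. */
void copy_cookie_unchecked(uint8_t *dst, const uint8_t *src, size_t src_len) {
    memcpy(dst, src, src_len);
}

/* Fixed pattern: validate the input size first and refuse oversized data
 * instead of corrupting memory. */
int copy_cookie_checked(uint8_t *dst, size_t dst_len,
                        const uint8_t *src, size_t src_len) {
    if (src_len > dst_len)
        return -1;  /* reject instead of overflowing */
    memcpy(dst, src, src_len);
    return 0;
}
```

The fix is mechanical, which is why this class of bug keeps appearing: the unchecked version works fine on well-formed input and only fails when an attacker supplies an oversized message.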

Another vulnerability highlighted by Google is CVE-2024-23704, an elevation of privilege (EoP) vulnerability in the System component that affects Android 13 and Android 14.

This vulnerability could lead to local escalation of privilege with no additional execution privileges needed. Local privilege escalation happens when one user acquires the system rights of another user. This could allow an attacker to access information they shouldn’t have access to, or perform actions at a higher level of permissions.

Pixel users

Google warns Pixel users that there are indications that two high severity vulnerabilities may be under limited, targeted exploitation. These vulnerabilities are:

  • CVE-2024-29745: An information disclosure vulnerability in the bootloader component. Bootloaders are one of the first programs to load and ensure that all relevant operating system data is loaded into the main memory when a device is started.
  • CVE-2024-29748: An elevation of privilege (EoP) vulnerability in the Pixel firmware. Firmware is device-specific software that provides basic machine instructions that allow the hardware to function and communicate with other software running on the device.

On Pixel devices, a security patch level of 2024-04-05 resolves all these security vulnerabilities.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Google Chrome gets ‘Device Bound Session Credentials’ to stop cookie theft

3 April 2024 at 15:44

Google has announced the introduction of Device Bound Session Credentials (DBSC) to secure Chrome users against cookie theft.

In January we reported how hackers found a way to gain unauthorized access to Google accounts, bypassing multi-factor authentication (MFA), by stealing authentication cookies with info-stealer malware. An authentication cookie is added to a web browser after a user proves who they are by logging in. It tells a website that a user has already logged in, so they aren’t asked for their username and password over and over again. A cybercriminal with an authentication cookie for a website doesn’t need a password, because the website thinks they’ve already logged in. It doesn’t even matter if the owner of the account changes their password.

At the time, Google said it would take action:

“We routinely upgrade our defenses against such techniques and to secure users who fall victim to malware. In this instance, Google has taken action to secure any compromised accounts detected.”

However, some info stealers reportedly updated their methods to counter Google’s fraud detection measures.

The idea that malware could steal authentication cookies and send them to a criminal did not sit well with Google. In its announcement it explains that, “because of the way cookies and operating systems interact, primarily on desktop operating systems, Chrome and other browsers cannot protect them against malware that has the same level of access as the browser itself.”

So it turned to another solution. And if the simplicity of the solution is any indication for its effectiveness, then this should be a good one.

It works by using cryptography to limit the use of an authentication cookie to the device that first created it. When a user visits a website and starts a session, the browser creates two cryptographic keys—one public, one private. The private key is stored on the device in a way that is hard to export, and the public key is given to the website. The website uses the public key to verify that the browser using the authentication cookie has the private key. In order to use a stolen cookie, a thief would also need to steal the private key, so the more robust the “hard to export” bit gets, the safer your cookies will be.
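To make the challenge–response flow concrete, here is a heavily simplified C sketch. It is not Google’s implementation: the function names are invented, and a toy keyed hash stands in for the real public-key signature (in actual DBSC, the browser signs a server challenge with a private key that never leaves the device, and the server verifies with the public key it received at session start).

```c
#include <assert.h>
#include <stdint.h>

/* Toy keyed hash (FNV-1a over key then message). A stand-in only: real DBSC
 * uses an asymmetric signature, so the server never holds the device key. */
static uint64_t toy_mac(const char *key, const char *msg) {
    uint64_t h = 1469598103934665603ULL;
    for (const char *p = key; *p; p++) { h ^= (uint8_t)*p; h *= 1099511628211ULL; }
    for (const char *p = msg; *p; p++) { h ^= (uint8_t)*p; h *= 1099511628211ULL; }
    return h;
}

/* Browser side: answer the server's fresh challenge using the key that was
 * created on this device when the session started. */
uint64_t sign_challenge(const char *device_key, const char *challenge) {
    return toy_mac(device_key, challenge);
}

/* Server side: the cookie alone is not enough — the response must also match
 * the key registered for this session. A thief with only the stolen cookie
 * (but not the device-bound key) fails this check. */
int verify_session(const char *registered_key, const char *challenge,
                   uint64_t response) {
    return toy_mac(registered_key, challenge) == response;
}
```

The security of the real scheme rests on the private key being hard to export from the device, which is exactly the property the sketch cannot capture with a shared secret.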

Google stated in its announcement that it thinks this will substantially reduce the success rate of cookie theft malware. This would force attackers to act locally on a device, which makes on-device detection and cleanup more effective, both for anti-malware software as well as for enterprise managed devices.

As such, Device Bound Session Credentials fits in well with Google’s strategy to phase out third-party cookies.

Development of the project is done in the open at Github with the goal of DBSC becoming an open web standard. The goal is to have a fully working trial ready by the end of 2024. Google says that identity providers such as Okta, and browsers such as Microsoft Edge, have expressed interest in DBSC as they want to secure their users against cookie theft.



Class-Action Lawsuit against Google’s Incognito Mode

3 April 2024 at 07:01

The lawsuit has been settled:

Google has agreed to delete “billions of data records” the company collected while users browsed the web using Incognito mode, according to documents filed in federal court in San Francisco on Monday. The agreement, part of a settlement in a class action lawsuit filed in 2020, caps off years of disclosures about Google’s practices that shed light on how much data the tech giant siphons from its users­—even when they’re in private-browsing mode.

Under the terms of the settlement, Google must further update the Incognito mode “splash page” that appears anytime you open an Incognito mode Chrome window after previously updating it in January. The Incognito splash page will explicitly state that Google collects data from third-party websites “regardless of which browsing or browser mode you use,” and stipulate that “third-party sites and apps that integrate our services may still share information with Google,” among other changes. Details about Google’s private-browsing data collection must also appear in the company’s privacy policy.

I was an expert witness for the prosecution (that’s the class, against Google). I don’t know if my declarations and deposition will become public.

Update Chrome now! Google patches possible drive-by vulnerability

28 March 2024 at 07:25

Google has released an update to Chrome which includes seven security fixes. Version 123.0.6312.86/.87 of Chrome for Windows and Mac and 123.0.6312.86 for Linux will roll out over the coming days/weeks.

The easiest way to update Chrome is to allow it to update automatically, which basically uses the same method as outlined below but does not require your attention. But you can end up lagging behind if you never close the browser or if something goes wrong—such as an extension stopping you from updating the browser.

So, it doesn’t hurt to check now and then. And now would be a good time, given the severity of the vulnerability in this patch. My preferred method is to have Chrome open the page chrome://settings/help which you can also find by clicking Settings > About Chrome.

If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is relaunch the browser in order for the update to complete, and for you to be safe from those vulnerabilities.

After the update, Chrome will report that it is up to date, and the version should be 123.0.6312.86 or later.

Technical details

Google never gives out a lot of information about vulnerabilities, for obvious reasons. Access to bug details and links may be kept restricted until a majority of users are updated with a fix.

There is one critical vulnerability that looks like it might be of interest to cybercriminals.

CVE-2024-2883: Use after free (UAF) vulnerability in ANGLE in Google Chrome prior to 123.0.6312.86 could allow a remote attacker to potentially exploit heap corruption via a crafted HTML page.

ANGLE (Almost Native Graphics Layer Engine) is a browser component that deals with WebGL (short for Web Graphics Library) content. WebGL is a JavaScript API for rendering interactive 2D and 3D graphics within any compatible web browser without the use of plug-ins.

UAF is a type of vulnerability that is the result of the incorrect use of dynamic memory during a program’s operation. If, after freeing a memory location, a program does not clear the pointer to that memory, an attacker can use the error to manipulate the program. Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. In this case, when the vulnerability is exploited, it can lead to heap corruption.
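A minimal C sketch of the pattern (illustrative only, not ANGLE’s code, with invented names): the bug is keeping a dangling pointer alive after the memory is freed. A common mitigation is to clear the pointer at the point of free, so any later misuse becomes a NULL-pointer crash rather than a read or write into memory an attacker may have reallocated.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
    int id;
} Object;

/* Buggy pattern (not executed here, since it is undefined behavior):
 *   Object *o = malloc(sizeof *o);
 *   free(o);
 *   o->id = 42;   // use after free: the block may now hold attacker data
 *
 * Mitigation: free through a helper that also clears the caller's pointer,
 * so a later dereference is a crash instead of silent corruption. */
void destroy_object(Object **obj) {
    free(*obj);
    *obj = NULL;  /* the dangling pointer can no longer be "successfully" used */
}
```

Nulling the pointer does not fix the logic error, but it turns an exploitable condition into a detectable one, which is why hardened allocators and sanitizers take similar approaches.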

Heap corruption occurs when a program modifies the contents of a memory location outside of the memory allocated to the program. The outcome can be relatively benign and cause a memory leak, or it may be fatal and cause a memory fault, usually in the program that causes the corruption.

Chromium vulnerabilities are considered critical if they “allow an attacker to read or write arbitrary resources (including but not limited to the file system, registry, network, etc.) on the underlying platform, with the user’s full privileges.”

So, to sum this up, in this case an attacker could create a specially crafted HTML page, which can be put online as a website, that exploits the vulnerability, potentially leading to a compromised system.

My suggestion: don’t wait for the update, get it now.



Google Pays $10M in Bug Bounties in 2023

22 March 2024 at 07:01

BleepingComputer has the details. It’s $2M less than in 2022, but it’s still a lot.

The highest reward for a vulnerability report in 2023 was $113,337, while the total tally since the program’s launch in 2010 has reached $59 million.

For Android, the world’s most popular and widely used mobile operating system, the program awarded over $3.4 million.

Google also increased the maximum reward amount for critical vulnerabilities concerning Android to $15,000, driving increased community reports.

During security conferences like ESCAL8 and hardwea.io, Google awarded $70,000 for 20 critical discoveries in Wear OS and Android Automotive OS and another $116,000 for 50 reports concerning issues in Nest, Fitbit, and Wearables.

Google’s other big software project, the Chrome browser, was the subject of 359 security bug reports that paid out a total of $2.1 million.

Slashdot thread.

AI and the Evolution of Social Media

19 March 2024 at 07:05

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else.

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by the hundreds of millions, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible.

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform.

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation.

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT.

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviors.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal gray area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI.

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

This essay was written with Nathan Sanders, and was originally published in MIT Technology Review.

Using Google Search to Find Software Can Be Risky

25 January 2024 at 13:38

Google continues to struggle with cybercriminals running malicious ads on its search platform to trick people into downloading booby-trapped copies of popular free software applications. The malicious ads, which appear above organic search results and often precede links to legitimate sources of the same software, can make searching for software on Google a dicey affair.

Google says keeping users safe is a top priority, and that the company has a team of thousands working around the clock to create and enforce their abuse policies. And by most accounts, the threat from bad ads leading to backdoored software has subsided significantly compared to a year ago.

But cybercrooks are constantly figuring out ingenious ways to fly beneath Google’s anti-abuse radar, and new examples of bad ads leading to malware are still too common.

For example, a Google search earlier this week for the free graphic design program FreeCAD produced the following result, which shows that a “Sponsored” ad at the top of the search results is advertising the software available from freecad-us[.]org. Although this website claims to be the official FreeCAD website, that honor belongs to the result directly below — the legitimate freecad.org.

How do we know freecad-us[.]org is malicious? A review at DomainTools.com shows this domain is the newest (registered Jan. 19, 2024) of more than 200 domains at the Internet address 93.190.143[.]252 that are confusingly similar to popular software titles, including dashlane-project[.]com, filezillasoft[.]com, keepermanager[.]com, and libreofficeproject[.]com.

Some of the domains at this Netherlands host appear to be little more than software review websites that steal content from established information sources in the IT world, including Gartner, PCWorld, Slashdot and TechRadar.

Other domains at 93.190.143[.]252 do serve actual software downloads, but none of them are likely to be malicious if one visits the sites through direct navigation. If one visits openai-project[.]org and downloads a copy of the popular Windows desktop management application Rainmeter, for example, the file that is downloaded has the same exact file signature as the real Rainmeter installer available from rainmeter.net.

But this is only a ruse, says Tom Hegel, principal threat researcher at the security firm Sentinel One. Hegel has been tracking these malicious domains for more than a year, and he said the seemingly benign software download sites will periodically turn evil, swapping out legitimate copies of popular software titles with backdoored versions that allow cybercriminals to remotely commandeer the systems.

“They’re using automation to pull in fake content, and they’re rotating in and out of hosting malware,” Hegel said, noting that the malicious downloads may only be offered to visitors who come from specific geographic locations, like the United States. “In the malicious ad campaigns we’ve seen tied to this group, they would wait until the domains gain legitimacy on the search engines, and then flip the page for a day or so and then flip back.”

In February 2023, Hegel co-authored a report on this same network, which Sentinel One has dubbed MalVirt (a play on “malvertising”). They concluded that the surge in malicious ads spoofing various software products was directly responsible for a surge in malware infections from infostealer trojans like IcedID, Redline Stealer, Formbook and AuroraStealer.

Hegel noted that the spike in malicious software-themed ads came not long after Microsoft started blocking Office macros by default in documents downloaded from the Internet. He said the volume of the current malicious ad campaigns from this group appears to be relatively low compared to a year ago.

“It appears to be same campaign continuing,” Hegel said. “Last January, every Google search for ‘Autocad’ led to something bad. Now, it’s like they’re paying Google to get one out of every dozen of searches. My guess it’s still continuing because of the up-and-down [of the] domains hosting malware and then looking legitimate.”

Several of the websites at this Netherlands host (93.190.143[.]252) are currently blocked by Google’s Safebrowsing technology, and labeled with a conspicuous red warning saying the website will try to foist malware on visitors who ignore the warning and continue.

But it remains a mystery why Google has not similarly blocked the more than 240 other domains at this same host, or else removed them from its search index entirely. Especially considering there is nothing else but these domains hosted at that Netherlands IP address, and they have all remained at that address for the past year.

In response to questions from KrebsOnSecurity, Google said maintaining a safe ads ecosystem and keeping malware off its platforms is a priority across the company.

“Bad actors often employ sophisticated measures to conceal their identities and evade our policies and enforcement, sometimes showing Google one thing and users something else,” Google said in a written statement. “We’ve reviewed the ads in question, removed those that violated our policies, and suspended the associated accounts. We’ll continue to monitor and apply our protections.”

Google says it removed 5.2 billion ads in 2022, restricted more than 4.3 billion ads, and suspended over 6.7 million advertiser accounts. The company’s latest ad safety report says that in 2022 Google blocked or removed 1.36 billion advertisements specifically for violating its abuse policies.

Some of the domains referenced in this story were included in Sentinel One’s February 2023 report, but dozens more have been added since, such as those spoofing the official download sites for Corel Draw, Github Desktop, Roboform and Teamviewer.

This October 2023 report on the FreeCAD user forum came from a user who reported downloading a copy of the software from freecadsoft[.]com after seeing the site promoted at the top of a Google search result for “freecad.” Almost a month later, another FreeCAD user reported getting stung by the same scam.

“This got me,” FreeCAD forum user “Matterform” wrote on Nov. 19, 2023. “Please leave a report with Google so it can flag it. They paid Google for sponsored posts.”

Sentinel One’s report didn’t delve into the “who” behind this ongoing MalVirt campaign, and there are precious few clues that point to attribution. All of the domains in question were registered through webnic.cc, and several of them display a placeholder page saying the site is ready for content. Viewing the HTML source of these placeholder pages shows many of the hidden comments in the code are in Cyrillic.
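Spotting comments in an unexpected script like that is easy to automate once you have a page's raw HTML. A rough sketch, assuming you already fetched the source (the sample page below is invented for illustration):

```python
import re
import unicodedata

def html_comments(html: str) -> list[str]:
    """Extract the body of every <!-- ... --> comment from raw HTML source."""
    return re.findall(r"<!--(.*?)-->", html, flags=re.DOTALL)

def contains_cyrillic(text: str) -> bool:
    """True if any character in the text belongs to the Cyrillic script."""
    return any("CYRILLIC" in unicodedata.name(ch, "") for ch in text)

# Invented placeholder page mixing an English comment with a Cyrillic one.
page = '<html><!-- заглушка --><body><!-- placeholder --></body></html>'
flagged = [c for c in html_comments(page) if contains_cyrillic(c)]
print(flagged)  # only the Cyrillic comment is flagged
```

Regex-based comment extraction is a quick-and-dirty heuristic, not a full HTML parse, but for triaging hundreds of near-identical placeholder pages it is usually enough to surface the tell.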

Trying to track the crooks using Google’s Ad Transparency tools didn’t lead far. The ad transparency record for the malicious ad featuring freecad-us[.]org (in the screenshot above) shows that the advertising account used to pay for the ad has only run one previous ad through Google search: It advertised a wedding photography website in New Zealand.

The apparent owner of that photography website did not respond to requests for comment, but it’s also likely his Google advertising account was hacked and used to run these malicious ads.
