Today — 26 June 2024 — Cybersecurity

California Privacy Watchdog Inks Deal with French Counterpart to Strengthen Data Privacy Protections


In a significant move to bolster data privacy protections, the California Privacy Protection Agency (CPPA) inked a new partnership with France’s Commission Nationale de l'Informatique et des Libertés (CNIL). The collaboration aims to conduct joint research on data privacy issues and share investigative findings that will enhance the capabilities of both organizations in safeguarding personal data.

The partnership between the CPPA and CNIL reflects the growing emphasis on international collaboration in data privacy protection. Both California and France, along with the broader European Union (EU) through its General Data Protection Regulation (GDPR), recognize that effective data privacy measures require global cooperation. France’s membership in the EU brings additional regulatory weight to this partnership and highlights the necessity of cross-border collaboration to tackle the complex challenges of data protection in an interconnected world.

What the CPPA-CNIL Data Privacy Protections Deal Means

The CPPA on Tuesday outlined the goals of the partnership, stating, “This declaration establishes a general framework of cooperation to facilitate joint internal research and education related to new technologies and data protection issues, share best practices, and convene periodic meetings.” The strengthened framework is designed to enable both agencies to stay ahead of emerging threats and innovations in data privacy. Michael Macko, the deputy director of enforcement at the CPPA, said there were practical benefits of this collaboration. “Privacy rights are a commercial reality in our global economy,” Macko said. “We’re going to learn as much as we can from each other to advance our enforcement priorities.” This mutual learning approach aims to enhance the enforcement capabilities of both agencies, ensuring they can better protect consumers’ data in an ever-evolving digital landscape.

CPPA’s Collaborative Approach

The partnership with CNIL is not the CPPA’s first foray into international cooperation. The California agency also collaborates with three other major international organizations: the Asia Pacific Privacy Authorities (APPA), the Global Privacy Assembly, and the Global Privacy Enforcement Network (GPEN). These collaborations help create a robust network of privacy regulators working together to uphold high standards of data protection worldwide.

The CPPA was established following the implementation of California’s groundbreaking consumer privacy law, the California Consumer Privacy Act (CCPA). As the first comprehensive consumer privacy law in the United States, the CCPA set a precedent for other states and countries looking to enhance their data protection frameworks. The CPPA’s role as an independent data protection authority mirrors that of the CNIL, France’s first independent data protection agency, which highlights the pioneering efforts of both regions in the field of data privacy.

By combining their resources and expertise, the CPPA and CNIL aim to tackle a range of data privacy issues, from the implications of new technologies to the enforcement of data protection laws. This partnership is expected to lead to the development of innovative solutions and best practices that can be shared with other regulatory bodies around the world. As more organizations and governments recognize the importance of safeguarding personal data, the need for robust and cooperative frameworks becomes increasingly clear. The CPPA-CNIL partnership serves as a model for other regions looking to strengthen their data privacy measures through international collaboration.
Yesterday — 25 June 2024 — Cybersecurity

Neiman Marcus confirms breach. Is the customer data already for sale?

25 June 2024 at 17:35

Luxury retail chain Neiman Marcus has begun to inform customers about a cyberattack it discovered in May. The attacker compromised a database platform storing customers’ personal information.

The letter tells customers:

“Promptly after learning of the issue, we took steps to contain it, including by disabling access to the relevant database platform.”

In the data breach notification, Neiman Marcus says 64,472 people are affected.

An investigation showed that the data contained information such as name, contact data, date of birth, and Neiman Marcus or Bergdorf Goodman gift card numbers. According to Neiman Marcus, the exposed data does not include gift card PINs. Shortly after the data breach disclosure, a cybercriminal going by the name “Sp1d3r” posted on BreachForums that they were willing to sell the data.

Post by Sp1d3r offering the Neiman Marcus data for sale, which has since been removed. Image courtesy of Daily Dark Web.

“Neiman Marcus not interested in paying to secure data. We give them opportunity to pay and they decline. Now we sell. Enjoy!”

According to Sp1d3r, the data includes name, address, phone, dates of birth, email, last four digits of Social Security Numbers, and much more in 6 billion rows of customer shopping records, employee data, and store information.

Neiman Marcus is reportedly one of the many victims of the Snowflake incident, in which the third-party platform used by many big brands was targeted by cybercriminals. The name Sp1d3r has been associated with the selling of information belonging to other Snowflake customers.

Oddly enough, Sp1d3r’s post seems to have since disappeared.

Later screenshot of Sp1d3r’s profile, showing one fewer post and thread.

Sp1d3r’s post count had dropped back to 19, rather than the 20 displayed in the screenshot above.

So, the post has either been removed, withdrawn, or hidden for reasons which are currently unknown. As usual, we will keep an eye on how this develops.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.
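On the “change your password” advice above: if you want to generate a strong, random password yourself rather than rely on a password manager, a few lines of Python using the standard-library `secrets` module will do. This is a minimal sketch; the helper function is ours, not part of any vendor’s guidance:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The `secrets` module draws from the operating system’s secure random source, unlike the `random` module, which is predictable and unsuitable for passwords.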

Check your exposure

While it’s still unclear exactly how much information was involved in the Neiman Marcus breach, it’s likely you’ve had other personal information exposed online in previous data breaches. You can check what personal information of yours has been exposed with our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Alert: Austrian Non-Profit Accuses Google Privacy Sandbox

25 June 2024 at 03:00

Google’s initiative to phase out third-party tracking cookies through its Google Privacy Sandbox has encountered criticism from Austrian privacy advocacy group noyb (none of your business). The non-profit alleges that Google’s proposed solution still facilitates user tracking, albeit in a different form.

Allegations of Misleading Practices

According to noyb, Google’s Privacy Sandbox, marketed as […]

The post Alert: Austrian Non-Profit Accuses Google Privacy Sandbox appeared first on TuxCare.

Before yesterday — Cybersecurity

Rafel RAT Used in 120 Campaigns Targeting Android Device Users

24 June 2024 at 13:33

Multiple bad actors are using the Rafel RAT malware in about 120 campaigns aimed at compromising Android devices and launching a broad array of attacks that range from stealing data and deleting files to espionage and ransomware. Rafel RAT is an open-source remote administration tool that is spread through phishing campaigns aimed at convincing targets..

The post Rafel RAT Used in 120 Campaigns Targeting Android Device Users appeared first on Security Boulevard.

Change Healthcare confirms the customer data stolen in ransomware attack

24 June 2024 at 12:42

For the first time since news broke about a ransomware attack on Change Healthcare, the company has released details about the data stolen during the attack.

First, a quick refresher: On February 21, 2024, Change Healthcare experienced serious system outages due to a cyberattack. The incident led to widespread billing outages, as well as disruptions at pharmacies across the United States. Patients were left facing enormous pharmacy bills, small medical providers teetered on the edge of insolvency, and the government scrambled to keep the money flowing and the lights on. The ransomware group ALPHV claimed responsibility for the attack.

But shortly after, the ALPHV group disappeared in an unconvincing exit scam designed to make it look as if the FBI had seized control over the group’s website. Then a new ransomware group, RansomHub, listed the organization as a victim on its dark web leak site, saying it possessed 4 TB of “highly selective data,” relating to “all Change Health clients that have sensitive data being processed by the company.”

In April, parent company UnitedHealth Group released an update, saying:

“Based on initial targeted data sampling to date, the company has found files containing protected health information (PHI) or personally identifiable information (PII), which could cover a substantial proportion of people in America.”

Now, Change Healthcare has detailed the types of medical and patient data that were stolen. Although Change cannot provide exact details for every individual, the exposed information may include:

  • Contact information: Names, addresses, dates of birth, phone numbers, and email addresses.
  • Health insurance information: Details about primary, secondary, or other health plans/policies, insurance companies, member/group ID numbers, and Medicaid-Medicare-government payor ID numbers.
  • Health information: Medical record numbers, providers, diagnoses, medicines, test results, images, and details of care and treatment.
  • Billing, claims, and payment information: Claim numbers, account numbers, billing codes, payment card details, financial and banking information, payments made, and balances due.
  • Other personal information: Social Security numbers, driver’s license or state ID numbers, and passport numbers.

Change Healthcare added:

“The information that may have been involved will not be the same for every impacted individual. To date, we have not yet seen full medical histories appear in the data review.”

Change Healthcare says it will send written letters—as long as it has a person’s address and they haven’t opted out of notifications—once it has concluded the data review.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your digital footprint

Malwarebytes has a new free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.



Facial Recognition Startup Clearview AI Settles Privacy Suit

24 June 2024 at 03:09

Facial recognition startup Clearview AI has reached a settlement in an Illinois lawsuit alleging its massive photographic collection of faces violated the subjects’ privacy rights.

The post Facial Recognition Startup Clearview AI Settles Privacy Suit appeared first on SecurityWeek.

Social Media Warning Labels, Should You Store Passwords in Your Web Browser?

By: Tom Eston
24 June 2024 at 00:00

In this episode of the Shared Security Podcast, the team debates the Surgeon General’s recent call for social media warning labels and explores the pros and cons. Scott discusses whether passwords should be stored in web browsers, potentially sparking strong opinions. The hosts also provide an update on Microsoft’s delayed release of CoPilot Plus PCs […]

The post Social Media Warning Labels, Should You Store Passwords in Your Web Browser? appeared first on Shared Security Podcast.


First million breached Ticketmaster records released for free

21 June 2024 at 12:01

The cybercriminal acting under the name “Sp1d3r” has given away, free of charge, the first 1 million records of the data set they claim to have stolen from Ticketmaster/Live Nation.

When Malwarebytes Labs first learned about this data breach, it happened to be the first major event that was shared on the resurrected BreachForums, and someone acting under the handle “ShinyHunters” offered the full details (name, address, email, phone) of 560 million customers for sale.

The same data set was offered for sale in an almost identical post on another forum by someone using the handle “SpidermanData.” This could be the same person or a member of the ShinyHunters group.

Following this event, Malwarebytes Labs advised readers on how to respond and stay safe. Importantly, even when a breach isn’t a “breach”—in that immediate moment when the details have yet to be confirmed and a breach subject is readying its public statements—the very news of the suspected breach can be used by opportunistic cybercriminals as a phishing lure.

Later, Ticketmaster confirmed the data breach.

Bleeping Computer spoke to ShinyHunters who said they already had interested buyers. Now, Sp1d3r, who was seen posting earlier about Advance Auto Parts customer data and Truist Bank data, has released 1 million Ticketmaster related data records for free.

Post by Sp1d3r giving away 1 million Ticketmaster data records.

In a post on BreachForums, Sp1d3r said:

“Ticketmaster will not respond to request to buy data from us.

They care not for the privacy of 680 million customers, so give you the first 1 million users free.”

The cybercriminals active on those forums will undoubtedly seize the opportunity and try to monetize those records. This likely means that innocent users included in the first million released records could receive a heavy volume of spam and phishing emails in the coming days.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your exposure

While it’s still unclear how much information was involved, it’s likely you’ve had other personal information exposed online in previous data breaches. You can check what personal information of yours has been exposed with our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report.



Was T-Mobile compromised by a zero-day in Jira?

21 June 2024 at 03:34

A moderator of the notorious data breach trading platform BreachForums is offering data for sale they claim comes from a data breach at T-Mobile.

The moderator, going by the name of IntelBroker, describes the data as containing source code, SQL files, images, Terraform data, t-mobile.com certifications, and “Siloprograms.” (We’ve not heard of siloprograms, and can’t find a reference to them anywhere, so perhaps it’s a mistranslation or typo.)

Post offering data for sale, supposedly from a T-Mobile internal breach.

To prove they had the data, IntelBroker posted several screenshots showing access with administrative privileges to a Confluence server and T-Mobile’s internal Slack channels for developers.

But according to sources known to BleepingComputer, the data shared by IntelBroker actually consists of older screenshots. These screenshots show T-Mobile’s infrastructure and were hosted on the servers of a known, yet unnamed, third-party vendor, from which they were stolen.

When we looked at the screenshots IntelBroker attached to their post, we spotted something interesting in one of them.

Screenshot of a vulnerability search returning CVE-2024-1597.

This screenshot shows a search query for a critical vulnerability in Jira, a project management tool used by teams to plan, track, release and support software. It’s typically a place where you could find the source code of works in progress.

The search returns the result CVE-2024-1597, a SQL injection vulnerability. SQL injection happens when a cybercriminal injects malicious SQL code into a form on a website, such as a login page, instead of the data the form is asking for. The vulnerability affects Confluence Data Center and Server according to Atlassian’s May security bulletin.
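The SQL injection pattern described above can be demonstrated in a few lines. The snippet below is a minimal, hypothetical sketch using Python’s built-in sqlite3 module (it is not Atlassian’s code or the actual CVE-2024-1597 exploit); it contrasts unsafe string concatenation with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Vulnerable: user input is concatenated directly into the SQL string,
    # so input like "' OR '1'='1" rewrites the query's logic.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Safe: placeholders make the driver treat input as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

injection = "' OR '1'='1"
print(login_vulnerable(injection, injection))  # leaks the row without valid credentials
print(login_safe(injection, injection))        # returns nothing
```

The vulnerable version returns the stored row even though the attacker supplied no valid credentials; the parameterized version returns nothing. This is why parameterized queries (or an ORM that uses them) are the standard defense against this class of bug.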

For a better understanding, it’s important to note that Jira and Confluence are both products created by Atlassian, where Jira is the project management and issue tracking tool and Confluence is the collaboration and documentation tool. They are often used together.

If IntelBroker has a working exploit for the SQL injection vulnerability, this could also explain their claim that they have the source code of three internal tools used at Apple, including a single sign-on authentication system known as AppleConnect.

This theory is supported by the fact that IntelBroker is also offering a Jira zero-day for sale.

IntelBroker’s post offering a zero-day for Jira for sale.

“I’m selling a zero-day RCE for Atlassian’s Jira.

Works for the latest version of the desktop app, as well as Jira with confluence.

No login is required for this, and works with Okta SSO.”

If this is true, then this exploit, or its fruits, might be used in data breaches involving personal data.

Meanwhile, T-Mobile has denied it has suffered a breach, saying it is investigating whether there has been a breach at a third-party provider.

“We have no indication that T-Mobile customer data or source code was included and can confirm that the bad actor’s claim that T-Mobile’s infrastructure was accessed is false.”



EU Aims to Ban Math — ‘Chat Control 2.0’ Law is Paused but not Stopped

20 June 2024 at 12:43
“Oh, won’t somebody please think of the children?”

Ongoing European Union quest to break end-to-end encryption (E2EE) mysteriously disappears.

The post EU Aims to Ban Math — ‘Chat Control 2.0’ Law is Paused but not Stopped appeared first on Security Boulevard.

TikTok facing fresh lawsuit in US over children’s privacy

20 June 2024 at 05:58

The Federal Trade Commission (FTC) has announced it’s referred a complaint against TikTok and parent company ByteDance to the Department of Justice.

The investigation originally focused on Musical.ly, which ByteDance acquired on November 10, 2017, and later merged into TikTok.

The FTC started a compliance review of Musical.ly following a 2019 settlement with the company for violations of the Children’s Online Privacy Protection Act (COPPA). In the settlement, Musical.ly received a fine of $5.7m for collecting personal information from children without parental consent.

One of the main concerns was that Musical.ly did not ask the user’s age and later failed to go back and request age information for people who already had accounts.

COPPA requires sites and services like Musical.ly and TikTok – among other things – to get parental consent before collecting personal information from children under 13.
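The age gate that COPPA effectively requires is trivial to implement; the failure described here was in not asking at all, and in not going back to ask existing users. A minimal, hypothetical sketch of such a check in Python (the function name and threshold constant are ours for illustration):

```python
from datetime import date

COPPA_AGE = 13  # COPPA applies to children under 13

def needs_parental_consent(dob: date, today: date) -> bool:
    """Return True if the user is under 13, i.e. COPPA parental consent is required."""
    # Subtract one if the birthday hasn't happened yet this year.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age < COPPA_AGE

print(needs_parental_consent(date(2015, 6, 1), date(2024, 6, 20)))  # True: age 9
print(needs_parental_consent(date(2005, 6, 1), date(2024, 6, 20)))  # False: age 19
```

Of course, a date-of-birth prompt is easily lied to; the legal obligation is to ask and then obtain verifiable parental consent when the answer is under 13, which is where Musical.ly fell short.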

Musical.ly also failed to deal with complaints properly. The FTC found that—in just a two-week period in September 2016—the company received over 300 complaints from parents asking Musical.ly to delete their child’s account. However, under COPPA it’s not enough just to delete existing accounts; companies also have to remove the kids’ videos and profiles from their servers. Musical.ly failed to do this.

In 2022, TikTok itself faced a $28m fine for failing to protect children’s privacy after an investigation of a possible breach of the UK’s data protection laws.

In the US, TikTok agreed to pay $92 million in 2021 to settle dozens of lawsuits alleging that it harvested personal data from users, including information using facial recognition technology, without consent, and shared the data with third parties.

The FTC states that during the investigation it uncovered reasons to believe that “defendants are violating or are about to violate the law and that a proceeding is in the public interest.”

The FTC also said it usually doesn’t publicize the referral of complaints but feels it is in the public interest to do so now.

TikTok has been in the crosshairs of privacy and security professionals and politicians for years.

In June 2022, FCC (Federal Communications Commission) Commissioner Brendan Carr called on the CEOs of Apple and Google to remove TikTok from their app stores, considering it an unacceptable national security risk because of its Chinese ownership.

In 2023, General Paul Nakasone, Director of the National Security Agency (NSA), referred to TikTok as a loaded gun in the hands of America’s TikTok-addicted youth.

Recently, we reported about the take-over of some high-profile TikTok accounts just by opening a Direct Message.

And the clock is ticking on TikTok’s presence in the US, after the US Senate approved a bill that would effectively ban TikTok from the US unless Chinese owner ByteDance gives up its stake in the still immensely popular app.

Somehow we don’t think we’ve heard the last of this.

Check your digital footprint

Malwarebytes has a new free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.

43% of couples experience pressure to share logins and locations, Malwarebytes finds

18 June 2024 at 09:00

All isn’t fair in love and romance today, as 43% of people in a committed relationship said they have felt pressured by their own partners to share logins, passcodes, and/or locations. A worrying 7% admitted that this type of pressure has included the threat of breaking up or the threat of physical or emotional harm.

These are latest findings from original research conducted by Malwarebytes to explore how romantic couples navigate shared digital access to one another’s devices, accounts, and location information.

In short, digital sharing is the norm in modern relationships, but it doesn’t come without its fears.

While everybody shares some type of device, account, or location access with their significant other (100% of respondents), and plenty grant their significant other access to at least one personal account (85%), a sizeable portion longs for something different—31% said they worry about “how easy it is for my partner to track what I’m doing and where I am at all times because of how much we share,” and 40% worry that “telling my partner I don’t want to share logins, PINs, and/or locations would upset them.”

By surveying 500 people in committed relationships in the United States, Malwarebytes has captured a unique portrait of what it means to date, marry, and be in love in 2024—a part of life that is now inseparable from smart devices, apps, and the internet at large.

The complete findings can be found in the latest report, “What’s mine is yours: How couples share an all-access pass to their digital lives.” You can read the full report below.

Here are some of the key findings:

  • Partners share their personal login information for an average of 12 different types of accounts.
  • 48% of partners share the login information of their personal email accounts.
  • 30% of partners regret sharing location tracking.
  • 18% of partners regret sharing account access. The number is significantly higher for men (30%).
  • 29% of partners said an ex-partner used their accounts to track their location, impersonate them, access their financial accounts, and other harms.
  • Around one in three Gen Z and Millennial partners report an ex has used their accounts to stalk them.

But the data doesn’t only point to causes for concern. It also highlights an opportunity for learning. As Malwarebytes reveals in this latest research, people are looking for guidance, with seven in 10 people admitting they want help navigating digital co-habitation.

According to one Gen Z survey respondent:

“I feel like it might take some effort (to digitally disentangle) because we are more seriously involved. We have many other kinds of digital ties that we would have to undo in order to break free from one another.”

That is why, today, Malwarebytes is also launching its online resource hub: Modern Love in the Digital Age. At this new guidance portal, readers can learn about whether they should share their locations with their partners, why car location tracking presents a new problem for some couples, and how they can protect themselves from online harassment. Access the hub below.

The Surgeon General's Fear-Mongering, Unconstitutional Effort to Label Social Media

17 June 2024 at 14:46

Surgeon General Vivek Murthy’s extraordinarily misguided and speech-chilling call this week to label social media platforms as harmful to adolescents is shameful fear-mongering that lacks scientific evidence and turns the nation’s top physician into a censor. This claim is particularly alarming given the far more complex and nuanced picture that studies have drawn about how social media and young people’s mental health interact.

The Surgeon General’s suggestion that speech be labeled as dangerous is extraordinary. Communications platforms are not comparable to unsafe food, unsafe cars, or cigarettes, all of which are physical products—rather than communications platforms—that can cause physical injury. Government warnings on speech implicate our fundamental rights to speak, to receive information, and to think. Murthy’s effort will harm teens, not help them, and the announcement puts the surgeon general in the same category as censorial public officials like Anthony Comstock.

There is no scientific consensus that social media is harmful to children’s mental health. Social science shows that social media can help children overcome feelings of isolation and anxiety. This is particularly true for LGBTQ+ teens. EFF recently conducted a survey in which young people told us that online platforms are the safest spaces for them, where they can say the things they can’t in real life ‘for fear of torment.’ They say these spaces have improved their mental health and given them a ‘haven’ to talk openly and safely. This comports with Pew Research findings that teens are more likely to report positive than negative experiences in their social media use.

Additionally, Murthy’s effort to label social media creates significant First Amendment problems in its own right, as any government labeling effort would be compelled speech and courts are likely to strike it down.

Young people’s use of social media has been under attack for several years. Several states have recently introduced and enacted unconstitutional laws that would require age verification on social media platforms, effectively banning some young people from them. Congress is also debating several federal censorship bills, including the Kids Online Safety Act and the Kids Off Social Media Act, that would seriously impact young people’s ability to use social media platforms without censorship. Last year, Montana banned the video-sharing app TikTok, citing both its Chinese ownership and its interest in protecting minors from harmful content. That ban was struck down as unconstitutionally overbroad; despite that, Congress passed a similar federal law forcing TikTok’s owner, ByteDance, to divest the company or face a national ban.

Like Murthy, lawmakers pushing these regulations cherry-pick the research, nebulously citing social media’s impact on young people, and dismissing both positive aspects of platforms and the dangerous impact these laws have on all users of social media, adults and minors alike. 

We agree that social media is not perfect, and can have negative impacts on some users, regardless of age. But if Congress is serious about protecting children online, it should enact policies that promote choice in the marketplace and digital literacy. Most importantly, we need comprehensive privacy laws that protect all internet users from predatory data gathering and sales that target us for advertising and abuse.

Microsoft Recall delayed after privacy and security concerns

17 June 2024 at 09:55

Microsoft has announced it will postpone the broadly available preview of the heavily discussed Recall feature for Copilot+ PCs. Copilot+ PCs are personal computers that come equipped with several artificial intelligence (AI) features.

The Recall feature tracks anything from web browsing to voice chats. The idea is that Recall can help users reconstruct past activity by taking regular screenshots of their activity and storing them locally. The user can then search that database for anything they’ve seen on their PC.

However, Recall has drawn heavy criticism from security researchers and privacy advocates since it was announced last month. The ensuing discussion saw a lot of contradictory statements. For example, Microsoft claimed that Recall would be disabled by default, while the original documentation said otherwise.

Researchers demonstrated how easy it was to extract and search through Recall snapshots on a compromised system. While some may remark that the compromised system is the problem in that equation—and they are not wrong—Recall would potentially provide an attacker with a lot of information that normally would not be accessible. Basically, it would be a goldmine that spyware and information stealers could easily access and search.

In Microsoft’s own words:

“Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.”

Microsoft initially didn’t see the problem; its vice chair and president, Brad Smith, even used Recall as an example of Microsoft’s security during the committee hearing “A Cascade of Security Failures: Assessing Microsoft Corporation’s Cybersecurity Shortfalls and the Implications for Homeland Security.”

But now things have changed: Recall will only be available to participants in the Windows Insider Program (WIP) in the coming weeks, instead of being rolled out to all Copilot+ PC users on June 18 as originally planned.

Another security measure, added only as an afterthought, is that users will now have to authenticate with Windows Hello in order to activate Recall and view their screenshot timeline.

In its blog, Microsoft indicates it will act on the feedback it expects to receive from WIP users.

“This decision is rooted in our commitment to providing a trusted, secure and robust experience for all customers and to seek additional feedback prior to making the feature available to all Copilot+ PC users.”

Our hope is that the WIP community will convince Microsoft to abandon the whole Recall idea. If not, we will make sure to let you know how you can disable it or use it more securely if you wish to do so.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Truist bank confirms data breach

14 June 2024 at 12:29

On Wednesday June 12, 2024, a well-known dark web data broker and cybercriminal acting under the name “Sp1d3r” offered a significant amount of data allegedly stolen from Truist Bank for sale.

Truist is a US bank holding company and operates 2,781 branches in 15 states and Washington DC. By assets, it is in the top 10 of US banks. In 2020, Truist provided financial services to about 12 million consumer households.

The online handle of the seller immediately raised the suspicion that this was yet another Snowflake-related data breach.

Sp1d3r offering Truist bank data for sale
Post by Sp1d3r on breach forum

The post also mentions Suntrust bank because Truist Bank arose after SunTrust Banks and BB&T (Branch Banking and Trust Company) merged in December 2019.

For the price of $1,000,000, other cybercriminals can allegedly get their hands on:

  • Employee Records: 65,000 records containing detailed personal and professional information.
  • Bank Transactions: Data including customer names, account numbers, and balances.
  • IVR Source Code: Source code for the bank’s Interactive Voice Response (IVR) funds transfer system.

IVR is a technology that allows telephone users to interact with a computer-operated phone system using voice and dual-tone multi-frequency (DTMF, aka Touch-Tone) tones entered on a keypad. Access to the source code may enable criminals to find security vulnerabilities they can abuse.
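As a rough illustration of the signaling described above (not drawn from the Truist materials), DTMF encodes each keypad key as a pair of row and column tones, so an IVR system can decode keypresses from the two frequencies it detects:

```python
# DTMF keypad layout: each key is identified by one "row" frequency
# and one "column" frequency, both in Hz (standard Touch-Tone values).
ROWS = [697, 770, 852, 941]
COLS = [1209, 1336, 1477, 1633]
KEYS = ["123A", "456B", "789C", "*0#D"]

# Lookup table: key -> (row_hz, col_hz)
DTMF = {
    key: (ROWS[r], COLS[c])
    for r, row in enumerate(KEYS)
    for c, key in enumerate(row)
}

def decode(pair):
    """Recover the key from a detected (row, col) frequency pair."""
    for key, freqs in DTMF.items():
        if freqs == pair:
            return key
    return None

print(DTMF["5"])            # (770, 1336)
print(decode((941, 1336)))  # '0'
```

A real IVR front end would first extract these frequency pairs from the audio (e.g. with a Goertzel filter); the table lookup itself is the simple part shown here.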

Given the source and the location where the data were offered, we decided at the time to keep an eye on things but not actively report on it. But now a spokesperson for Truist Bank told BleepingComputer:

“In October 2023, we experienced a cybersecurity incident that was quickly contained.”

Further, the spokesperson stated that after an investigation, the bank notified a small number of clients and denied any connection with Snowflake.

“That incident is not linked to Snowflake. To be clear, we have found no evidence of a Snowflake incident at our company.”

But the bank disclosed that based on new information that came up during the investigation, it has started another round of informing affected customers.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.
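If you want a quick stand-in for a password manager’s generator while following the password advice above, a strong random password can be produced with Python’s standard `secrets` module. This is a sketch, not a replacement for a real password manager:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure random source (not random.choice,
    which is predictable)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Because each character is drawn independently from roughly 94 symbols, a 20-character password from this sketch has far more entropy than any memorable phrase.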

Check your exposure

While it’s still unclear how much information was involved, it’s likely you’ve had other personal information exposed online in previous data breaches. You can check what personal information of yours has been exposed with our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Tile/Life360 Breach: ‘Millions’ of Users’ Data at Risk

13 June 2024 at 13:28
Life360 CEO Chris Hulls

Location tracking service leaks PII, because—incompetence? Seems almost TOO easy.

The post Tile/Life360 Breach: ‘Millions’ of Users’ Data at Risk appeared first on Security Boulevard.

Connecticut Has Highest Rate of Health Care Data Breaches: Study

13 June 2024 at 09:19
health care data breaches cybersecurity

It’s no secret that hospitals and other health care organizations are among the top targets for cybercriminals. The ransomware attacks this year on UnitedHealth Group’s Change Healthcare subsidiary, nonprofit organization Ascension, and most recently the National Health Service in England illustrate not only the damage to these organizations’ infrastructure and the personal health data that’s..

The post Connecticut Has Highest Rate of Health Care Data Breaches: Study appeared first on Security Boulevard.

EFF to Ninth Circuit: Abandoning a Phone Should Not Mean Abandoning Its Contents

12 June 2024 at 18:24

This post was written by EFF legal intern Danya Hajjaji.

Law enforcement should be required to obtain a warrant to search data contained in abandoned cell phones, EFF and others explained in a friend-of-the-court brief to the Ninth Circuit Court of Appeals.

The case, United States v. Hunt, involves law enforcement’s seizure and search of an iPhone the defendant left behind after being shot and taken to the hospital. The district court held that the iPhone’s physical abandonment meant that the defendant also abandoned the data stored on the phone. In support of the defendant’s appeal, we urged the Ninth Circuit to reverse the district court’s ruling and hold that the Fourth Amendment’s abandonment exception does not apply to cell phones: as it must in other circumstances, law enforcement should generally have to obtain a warrant before it searches someone’s cell phone.

Cell phones differ significantly from other physical property. They are pocket-sized troves of highly sensitive information with immense storage capacity. Today’s phone carries and collects vast and varied data that encapsulates a user’s daily life and innermost thoughts.

Courts—including the US Supreme Court—have recognized that cell phones contain the “sum of an individual’s private life.” And, because of this recognition, law enforcement must generally obtain a warrant before it can search someone’s phone.

While people routinely carry cell phones, they also often lose them. That should not mean losing the data contained on the phones.

While the Fourth Amendment’s ”abandonment doctrine” permits law enforcement to conduct a warrantless seizure or search of an abandoned item, EFF’s brief explains that this precedent does not mechanically apply to cell phones. As the Supreme Court has recognized multiple times, the rote application of case law from prior eras with less invasive and revealing technologies threatens our Fourth Amendment protections.

Our brief goes on to explain that a cell phone owner rarely (if ever) intentionally relinquishes their expectation of privacy and possessory interests in data on their cell phone, as they must for the abandonment doctrine to apply. The realities of modern cell phone use seldom imply an intent to discard the wealth of data the devices contain. Cell phone data is not usually confined to the phone itself, and is instead stored in the “cloud” and accessible across multiple devices (such as laptops, tablets, and smartwatches).

We hope the Ninth Circuit recognizes that expanding the abandonment doctrine in the manner envisioned by the district court in Hunt would make today’s cell phone an accessory to the erosion of Fourth Amendment rights.

The Next Generation of Cell-Site Simulators is Here. Here’s What We Know.

12 June 2024 at 16:40

Dozens of policing agencies are currently using cell-site simulators (CSS) by Jacobs Technology and its Engineering Integration Group (EIG), according to newly-available documents on how that company provides CSS capabilities to local law enforcement. 

A proposal document from Jacobs Technology, provided to the Massachusetts State Police (MSP) and first spotted by the Boston Institute for Nonprofit Journalism (BINJ), outlines elements of the company’s CSS services, which include discreet integration of the CSS system into a Chevrolet Silverado and lifetime technical support. The proposal document is part of a winning bid Jacobs submitted to MSP earlier this year for a nearly $1-million contract to provide CSS services, representing the latest customer for one of the largest providers of CSS equipment.


An image of the Jacobs CSS system as integrated into a Chevrolet Silverado for the Virginia State Police. Source: 2024 Jacobs Proposal Response

The proposal document from Jacobs provides some of the most comprehensive information about modern CSS that the public has had access to in years. It confirms that law enforcement has access to CSS capable of operating on 5G as well as older cellular standards. It also gives us our first look at modern CSS hardware. The Jacobs system runs on at least nine software-defined radios that simulate cellular network protocols on multiple frequencies and can also gather Wi-Fi intelligence. As these documents describe, these CSS are meant to be concealed within a common vehicle. Antennas are hidden under a false roof so nothing can be seen outside the vehicle, a shift from the more visible antennas and cargo-van-sized deployments we’ve seen before. The system also comes with TRACHEA2+ and JUGULAR2+ components for direction finding and mobile direction finding.


The Jacobs 5G CSS base station system. Source: 2024 Jacobs Proposal Response

CSS, also known as IMSI catchers, are among law enforcement’s most closely-guarded secret surveillance tools. They act like real cell phone towers, “tricking” mobile devices into connecting to them, designed to intercept the information that phones send and receive, like the location of the user and metadata for phone calls, text messages, and other app traffic. CSS are highly invasive and used discreetly. In the past, law enforcement used a technique called “parallel construction”—collecting evidence in a different way to reach an existing conclusion in order to avoid disclosing how law enforcement originally collected it—to circumvent public disclosure of location findings made through CSS. In Massachusetts, agencies are expected to get a warrant before conducting any cell-based location tracking. The City of Boston is also known to own a CSS. 

This technology is like a dragging fishing net rather than a single focused hook in the water. Every phone in the vicinity connects with the device; even people completely unrelated to an investigation get wrapped up in the surveillance. CSS, like other surveillance technologies, subjects civilians to widespread data collection, even those who have not been involved with a crime, and has been used against protesters and other protected groups, undermining their civil liberties. Their adoption should require public disclosure, but this rarely occurs. These new records provide insight into the continued adoption of this technology. It remains unclear whether MSP has policies to govern its use. CSS may also interfere with the ability to call emergency services, especially for people who rely on accessibility technologies, such as those who cannot hear.

Important to the MSP contract is the modification of a Chevrolet Silverado with the CSS system. This includes both the surreptitious installment of the CSS hardware into the truck and the integration of its software user interface into the navigational system of the vehicle. According to Jacobs, this is the kind of installation with which they have a lot of experience.

Jacobs built its CSS business on military and intelligence community relationships formed in the years after September 11, 2001; those relationships now inform development of a tool used in domestic communities, not foreign warzones. Harris Corporation, later L3Harris Technologies, Inc., was the largest provider of CSS technology to domestic law enforcement but stopped selling to non-federal agencies in 2020. Once Harris stopped selling to local law enforcement, the market was open to several competitors, one of the largest of which was KeyW Corporation. Following Jacobs’s 2019 acquisition of The KeyW Corporation and its Engineering Integration Group (EIG), Jacobs is now a leading provider of CSS to police, and it claims to have more than 300 current CSS deployments globally. EIG’s CSS engineers have experience with the tool dating to late 2001, and they now provide the spectrum of CSS-related services to clients, including integration into vehicles, training, and maintenance, according to the document. Jacobs CSS equipment is operational in 35 state and local police departments, according to the documents.

EFF has been able to identify 13 agencies using the Jacobs equipment, and, according to EFF’s Atlas of Surveillance, more than 70 police departments have been known to use CSS. Our team is currently investigating possible acquisitions in California, Massachusetts, Michigan, and Virginia. 


An image of the Jacobs CSS system interface integrated into the factory-provided vehicle navigation system. Source: 2024 Jacobs Proposal Response

The proposal also includes details on other agencies’ use of the tool, including that of the Fontana, CA Police Department, which it says has deployed its CSS more than 300 times between 2022 and 2023, and Prince George's County Sheriff (MO), which has also had a Chevrolet Silverado outfitted with CSS. 

Jacobs isn’t the lone competitor in the domestic CSS market. Cognyte Software and Tactical Support Equipment, Inc. also bid on the MSP contract, and last month, the City of Albuquerque closed a call for a cell-site simulator that it awarded to Cognyte Software Ltd. 

Adobe clarifies Terms of Service change, says it doesn’t train AI on customer content

12 June 2024 at 11:28

Following days of user pushback that included allegations of forcing a “spyware-like” Terms of Service (ToS) update into its products, design software giant Adobe explained itself with several clarifications.

Apparently, the concerns raised by the community, especially among Photoshop and Substance 3D users, caused the company to reflect on the language it used in the ToS. The updated terms Adobe announced earlier this month suggested that users grant the company unlimited access to all their materials—including materials covered by company Non-Disclosure Agreements (NDAs)—for content review and similar purposes.

As Adobe included in its Terms of Service update:

“As a Business User, you may have different agreements with or obligations to a Business, which may affect your Business Profile or your Content. Adobe is not responsible for any violation by you of such agreements or obligations.”

This wording immediately sparked the suspicion that the company intends to use user-generated content to train its AI models. In particular, users balked at the following language:

“[…] you grant us a non-exclusive, worldwide, royalty-free sublicensable, license, to use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, and translate the Content.”

To reassure these users, on June 10, Adobe explained:

“We don’t train generative AI on customer content. We are adding this statement to our Terms of Use to reassure people that is a legal obligation on Adobe. Adobe Firefly is only trained on a dataset of licensed content with permission, such as Adobe Stock, and public domain content where copyright has expired.”

Alas, several artists found images that reference their work on Adobe’s stock platform.

As we have explained many times, the length and the use of legalese in the ToS does not do either the user or the company any favors. It seems that Adobe understands this now as well.

“First, we should have modernized our Terms of Use sooner. As technology evolves, we must evolve the legal language that evolves our policies and practices not just in our daily operations, but also in ways that proactively narrow and explain our legal requirements in easy-to-understand language.”

Adobe also said in its blog post that it realizes it has to earn the trust of its users, that it is taking the feedback very seriously, and that the feedback will inform further changes. Most importantly, Adobe stressed that you own your content, that you can opt out of the product improvement program, and that Adobe does not scan content stored locally on your computer.

Adobe expects to roll out the new terms of service on June 18 and aims to better clarify what it is permitted to do with its customers’ work. This is a developing story, and we’ll keep you posted.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Apple Launches ‘Private Cloud Compute’ Along with Apple Intelligence AI

By: Alan J
11 June 2024 at 19:14


In a bold attempt to redefine cloud security and privacy standards, Apple has unveiled Private Cloud Compute (PCC), a cloud intelligence system designed to back its new Apple Intelligence features with security and transparency while extending Apple devices into the cloud. The move responds to widespread concerns about combining artificial intelligence with cloud technology.

Private Cloud Compute Aims to Secure Cloud AI Processing

Apple has stated that its new Private Cloud Compute (PCC) is designed to enforce privacy and security standards over AI processing of private information. “For the first time ever, Private Cloud Compute brings the same level of security and privacy that our users expect from their Apple devices to the cloud,” said an Apple spokesperson.

(Image: Private Cloud Compute Apple Intelligence. Source: security.apple.com)

At the heart of PCC is Apple’s stated commitment to on-device processing. “When Apple is responsible for user data in the cloud, we protect it with state-of-the-art security in our services,” the spokesperson explained. “But for the most sensitive data, we believe end-to-end encryption is our most powerful defense.” Despite this commitment, Apple acknowledged that more sophisticated AI requests require Apple Intelligence to leverage larger, more complex models in the cloud. This presented a challenge to the company, as traditional cloud AI security models were found lacking in meeting privacy expectations. Apple stated that PCC is designed with several key features to ensure the security and privacy of user data, claiming the following implementations:
  • Stateless computation: PCC processes user data only for the purpose of fulfilling the user's request, and then erases the data.
  • Enforceable guarantees: PCC is designed to provide technical enforcement for the privacy of user data during processing.
  • No privileged access: PCC does not allow Apple or any third party to access user data without the user's consent.
  • Non-targetability: PCC is designed to prevent targeted attacks on specific users.
  • Verifiable transparency: PCC provides transparency and accountability, allowing users to verify that their data is being processed securely and privately.

Apple Invites Experts to Test Standards; Online Reactions Mixed

At this week’s Apple Annual Developer Conference, Apple CEO Tim Cook described Apple Intelligence as a “personal intelligence system” that can understand and contextualize personal data to deliver results that are “incredibly useful and relevant,” making “devices even more useful and delightful.” Apple Intelligence mines and processes data across apps, software, and services on Apple devices. This mined data includes emails, images, messages, documents, audio files, videos, contacts, calendars, Siri conversations, online preferences, and past search history. The new PCC system attempts to ease consumer privacy and safety concerns. In its description of “verifiable transparency,” Apple stated:
"Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises. We already have an earlier requirement for our guarantees to be enforceable. Hypothetically, then, if security researchers had sufficient access to the system, they would be able to verify the guarantees."
However, despite Apple’s assurances, the announcement of Apple Intelligence drew mixed reactions online, with some already likening it to Microsoft’s Recall. In reaction, Elon Musk took to X to announce that Apple devices may be banned from his companies, citing the integration of OpenAI as an “unacceptable security violation.” Others have also raised questions about the information that might be sent to OpenAI.

(Images: reactions posted on X.com)

According to Apple’s statements, requests made on its devices are not stored by OpenAI, and users’ IP addresses are obscured. Apple stated that it would also add “support for other AI models in the future.” Andy Wu, an associate professor at Harvard Business School who researches the use of AI by tech companies, highlighted the challenges of running powerful generative AI models while limiting their tendency to fabricate information: “Deploying the technology today requires incurring those risks, and doing so would be at odds with Apple’s traditional inclination toward offering polished products that it has full control over.”

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Ticketmaster is Tip of Iceberg: 165+ Snowflake Customers Hacked

11 June 2024 at 11:15
Snowflake CISO Brad Jones

Not our fault, says CISO: “UNC5537” breached at least 165 Snowflake instances, including Ticketmaster, LendingTree and, allegedly, Advance Auto Parts.

The post Ticketmaster is Tip of Iceberg: 165+ Snowflake Customers Hacked appeared first on Security Boulevard.

23andMe data breach under joint investigation in two countries

11 June 2024 at 07:38

The British and Canadian privacy authorities have announced they will undertake a joint investigation into the data breach at global genetic testing company 23andMe that was discovered in October 2023.

On Friday October 6, 2023, 23andMe confirmed via a somewhat opaque blog post that cybercriminals had “obtained information from certain accounts, including information about users’ DNA Relatives profiles.”

Later, an investigation by 23andMe showed that an attacker was able to directly access the accounts of roughly 0.1% of 23andMe’s users, about 14,000 of its 14 million customers. The attacker accessed the accounts using credential stuffing, in which someone tries existing username and password combinations to see if they can log in to a service. These combinations are usually stolen in another breach and then put up for sale on the dark web. Because people often reuse passwords across accounts, cybercriminals buy those combinations and use them to log in on other services and platforms.
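One practical defense against credential stuffing is checking whether a password already appears in known breach dumps. Have I Been Pwned’s Pwned Passwords service supports this with a k-anonymity scheme: the client sends only the first five characters of the password’s SHA-1 hash to the range API and matches the remaining suffix locally, so the password itself never leaves the device. A minimal sketch of the client-side hashing step (the network call to `api.pwnedpasswords.com/range/<prefix>` is omitted here):

```python
import hashlib

def hibp_prefix_suffix(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-character prefix sent
    to the Pwned Passwords range API and the suffix matched locally
    against the returned list of breached-hash suffixes."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_prefix_suffix("password")
print(prefix)  # 5BAA6
# A real client would now GET https://api.pwnedpasswords.com/range/5BAA6
# and check whether `suffix` appears in the response.
```

Because only a 5-character hash prefix is transmitted, the server learns almost nothing about which password was checked, which is why this pattern is safe to build into signup and password-change flows.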

For a subset of these accounts, the stolen data contained health-related information based on the user’s genetics.

The finding that most data was accessed through credential stuffing led to 23andMe sending a letter to legal representatives of victims blaming the victims themselves.

Privacy Commissioner of Canada Philippe Dufresne and UK Information Commissioner John Edwards say they will investigate the 23andMe breach jointly, leveraging the combined resources and expertise of their two offices.

The privacy watchdogs are going to investigate:

  • the scope of information that was exposed by the breach and potential harms to affected individuals;
  • whether 23andMe had adequate safeguards to protect the highly sensitive information within its control; and
  • whether the company provided adequate notification about the breach to the two regulators and affected individuals as required under Canadian and UK privacy and data protection laws.               

The joint investigation will be conducted in accordance with the Memorandum of Understanding between the ICO and OPC.

Scan for your exposed personal data

You can check what personal information of yours has been exposed online with our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report. If your data was part of the 23andMe breach, we’ll let you know.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

When things go wrong: A digital sharing warning for couples

11 June 2024 at 06:55

“When things go wrong” is a troubling prospect for most couples to face, but the internet—and the way that romantic partners engage both with and across it—could mean that planning for this worst-case scenario becomes more of a best practice.

In new research that Malwarebytes will release this month, romantic partners revealed that the degree to which they share passwords, locations, and devices with one another can invite mild annoyances (like having an ex mooch off a shared Netflix account), serious invasions of privacy (like being spied on through a smart doorbell), and even stalking and abuse.

Importantly, this isn’t just about jilted exes. This is also about people in active, committed relationships who have been pressured or forced into digital sharing beyond their limit.

The proof is in the data.

When Malwarebytes surveyed 500 people in committed relationships, 30% said they regretted sharing location tracking with their partner, 27% worried about their partners tracking them through location-based apps and services, and 23% worried that their current partner had accessed their accounts without their permission.



Plenty of healthy, happy relationships share digital access through trust and consent. For those couples, mapping out how to digitally separate and insulate their accounts from one another “when things go wrong” could seem misguided.

But for the many spouses, girlfriends, boyfriends, and partners who do not fully trust their significant other—or who are still figuring out how much to trust someone new—this exercise should serve as an act of security.

Here’s what people can think about when working through just how much of their digital lives to share.

Inconvenient, annoying, and just plain bothersome

A great deal of digital sharing within couples occurs on streaming platforms. One partner has Netflix, the other has Hulu, the two share Disney+, and years down the line, the couple can’t quite tell who is in charge of Apple Music and who is supposed to cancel the one-week free trial to Peacock.

This logistical nightmare, already difficult for people who are not in a committed relationship, is further complicated after a breakup (or during the relationship if one partner is particularly sensitive about their weekly algorithmic recommendations from Spotify).

If an ex maintains access to your streaming accounts even after a breakup, there’s little chance for abuse, but the situation can be aggravating. Maybe you don’t want your ex to know that you’re watching corny rom-coms, or that you’re absolutely going through it on your seventh replay of Spotify’s “Angry Breakup Mix.” These are valid annoyances that will require a password reset to boot your ex out of the shared account.

But there’s one type of shared account that should raise more caution than those listed above: A shared online shopping account, like Amazon.

With access to a shared online shopping account, a spiteful ex could purchase goods using your saved credit card. They could also keep updates on your location should you ever move and change addresses in the app. This isn’t the same threat as an ex having your real-time location, but for some individuals—particularly survivors of domestic abuse who have escaped their partner—any leak of a new address presents a major risk.

Non-consensual tracking, monitoring, and spying

When couples move into the same home, it can make sense to start sharing a variety of location-based apps.

Looking for a vacation rental online for your next getaway? You’re (hopefully) lodging together. Ordering delivery because nobody wants to make dinner? That order is being sent to the same shared address. Even some credit cards offer specific bonuses on services like Lyft, incentivizing some couples to rely more heavily on one account to score extra credits.

While sharing access between these types of accounts can increase efficiency, it’s important to know—and this may sound obvious—that many of these same shared location-based apps can reveal locations to a romantic partner, even after a breakup.

Your vacation could be revealed to an ex who is abusing their previously shared login privileges into services like Airbnb or Vrbo, or by someone peering into the trip history of a shared Uber account that discloses that a car was recently taken to the airport. Food delivery apps, similarly, can reveal new addresses after a move—a particular risk for survivors of domestic abuse who are trying to escape their physical situation.

In fact, any account that tracks and provides access to location—including Google’s own “Timeline” feature and fitness tracking devices made by Strava—could, in the wrong hands, become a security risk for stalking and abuse.

The vulnerabilities extend farther.

With the popularity of Internet of Things devices like smart doorbells and baby monitors, some partners may want to consider how safe they are from spying in their own homes. Plenty of user posts on a variety of community forums claim that exes and former spouses weaponized video-equipped doorbells and baby monitors to spy on a partner.

These scenarios are frightening, but they are part of a larger question about whether you should share your location with your partner. With the proper care and discussion, your location-sharing will be consensual, respected, and convenient for all.

Stalking and abuse

When discussing the risks around digital sharing between couples, it’s important to clarify that trustworthy partners do not become abusive simply because of their access to technology. A shared food delivery app doesn’t guarantee that a partner will be spied on. A baby monitor with a live video stream is sometimes just that—a baby monitor.

But many of the stories shared here expose the dangers that lie within arm’s reach for abusive partners. The technology alone cannot be blamed for the abuse. Instead, the technology must be scrutinized simply because of its ubiquitous use in today’s world.

The most serious concerns regarding digital access are the potential for stalking and abuse.

For partners that share devices and device passcodes, the notorious threat of stalkerware makes it easy for an abusive partner to pry into a person’s photos, videos, phone calls, text messages, locations, and more. Stalkerware can be installed on a person’s device in a matter of minutes—a low barrier of entry for couples that live with one another and who share each other’s device passcodes.

For partners who share a vehicle, a recent problem has emerged. In December, The New York Times reported on the story of a woman who—despite obtaining a restraining order against her ex-husband—could not turn off her shared vehicle’s location tracking. Because the car was in her husband’s name, he was reportedly able to continue tracking and harassing her.

Even shared smart devices have become a threat. According to reporting from The New York Times in 2018, survivors of domestic abuse began calling support lines with a bevy of new concerns within their homes:

“One woman had turned on her air-conditioner, but said it then switched off without her touching it. Another said the code numbers of the digital lock at her front door changed every day and she could not figure out why. Still another told an abuse help line that she kept hearing the doorbell ring, but no one was there.”

The survivors’ stories all pointed to the abuse of shared smart devices.

Whereas the solutions to many of the inconveniences and annoyances that can come with shared digital access are simple—a reset password, a removal of a shared account—the “solutions” for technology-enabled abuse are far more complex. These are problems that cannot be solely addressed with advice and good cybersecurity hygiene.

If you are personally experiencing this type of harassment, you can contact the National Network to End Domestic Violence on their hotline at 1-800-799-SAFE.

Making sure things go right

Sharing your life with your partner should be a function of trust, and for many couples, it is. But, in the same way that it is impossible for a cybersecurity company to ignore even one ransomware attack, it’s also improper for this cybersecurity and privacy company to ignore the reality facing many couples today.

There are new rules and standards for digital access within relationships. With the right information and the right guidance, hopefully more people will feel empowered to make the best decisions for themselves.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Shhh. Did you hear that?

It’s Day One of EFF’s summer membership drive for internet freedom! Gather round the virtual campfire because I’ve got special treats and a story for you:

  1. New member t-shirts and limited-edition gear drop TODAY.

  2. Through EFF's 34th birthday on July 10, you can get 2 rare gifts and become an EFF member for just $20! AND new automatic monthly or annual donors get an instant match.

  3. I’m proud to share the first post in a series from our friends, The Encryptids—the rarely-seen enigmas who inspire campfire lore. But this time, they’re spilling secrets about how they survive this ever-digital world. We begin by checking in with the legendary Bigfoot de la Sasquatch...

-Aaron
EFF Membership Team

____________________________

Bigfoot with sunglasses in a forest saying "Privacy is a human right."

People say I'm the most famous of The Encryptids, but sometimes I don't want the spotlight. They all want a piece of me: exes, ad trackers, scammers, even the government. A picture may be worth a thousand words, but my digital profile is worth cash (to skeezy data brokers). I can’t hit a city block without being captured by doorbell cameras, CCTV, license plate readers, and a maze of street-level surveillance. It can make you want to give up on privacy altogether. Honey, no. Why should you have to hole up in some dank, busted forest for freedom and respect? You don’t.

Privacy isn't about hiding. It's about revealing what you want to who you want on your terms. It's your basic right to dignity.

Privacy isn't about hiding...It's your basic right to dignity.

A wise EFF technologist once told me, “Nothing makes you a ghost online.” So what we need is control, sweetie! You're not on your own! EFF worked for decades to set legal precedents for us, to push for good policy, fight crap policy, and create tools so you can be more private and secure on the web RIGHT NOW. They even have whole ass guides that help people around the world protect themselves online. For free!

I know a few things about strangers up in your business, leaked photos, and wanting to live in peace. Your rights and freedoms are too important to leave them up to tech companies and politicians. This world is a better place for having people like the lawyers, activists, and techs at EFF.

Join EFF

Privacy is a "human" right

Privacy is a team sport and the team needs you. Sign up with EFF today and not only can you get fun stuff (featuring ya boy Footy), you’ll make the internet better for everyone.

XOXO,

Bigfoot DLS

____________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Utah Consumer Privacy Act (UCPA) 

10 June 2024 at 03:49

What is the Utah Consumer Privacy Act? The Utah Consumer Privacy Act, or UCPA, is a state-level data privacy law enacted in Utah, USA, aimed at providing residents with greater control over their personal data. The UCPA shares similarities with other state privacy laws like the California Consumer Privacy Act (CCPA) but has its own […]

The post Utah Consumer Privacy Act (UCPA)  appeared first on Centraleyes.

The post Utah Consumer Privacy Act (UCPA)  appeared first on Security Boulevard.

Ticketmaster Data Breach and Rising Work from Home Scams

By: Tom Eston
10 June 2024 at 00:00

In episode 333 of the Shared Security Podcast, Tom and Scott discuss a recent massive data breach at Ticketmaster involving the data of 560 million customers, the blame game between Ticketmaster and third-party provider Snowflake, and the implications for both companies. Additionally, they discuss Live Nation’s ongoing monopoly investigation. In the ‘Aware Much’ segment, the […]

The post Ticketmaster Data Breach and Rising Work from Home Scams appeared first on Shared Security Podcast.

The post Ticketmaster Data Breach and Rising Work from Home Scams appeared first on Security Boulevard.


Surveillance Defense for Campus Protests

The recent wave of protests calling for peace in Palestine have been met with unwarranted and aggressive suppression from law enforcement, universities, and other bad actors. It’s clear that the changing role of surveillance on college campuses exacerbates the dangers faced by all of the communities colleges are meant to support, and only serves to suppress lawful speech. These harmful practices must come to an end, and until they do, activists should take precautions to protect themselves and their communities. There are no easy or universal answers, but here we outline some common considerations to help guide campus activists.

Protest Pocket Guide

How We Got Here

Over the past decade, many campuses have been building up their surveillance arsenal and inviting a greater police presence on campus. EFF and fellow privacy and speech advocates have been clear that this is a dangerous trend that chills free expression and makes students feel less safe, while fostering an adversarial and distrustful relationship with the administration.

Many tools used on campuses overlap with the street-level surveillance used by law enforcement, but universities are in a unique position of power over students being monitored. For students, universities are not just their school, but often their home, employer, healthcare provider, visa sponsor, place of worship, and much more. This reliance heightens the risks imposed by surveillance, and brings it into potentially every aspect of students’ lives.

Putting together a security plan is an essential first step to protect yourself from surveillance.

EFF has also been clear for years: as campuses build up their surveillance capabilities in the name of safety, they chill speech and foster a more adversarial relationship between students and the administration. Yet, this expansion has continued in recent years, especially after the COVID-19 lockdowns.

This came to a head in April, when groups across the U.S. pressured their universities to disclose and divest their financial interest in companies doing business in Israel and weapons manufacturers, and to distance themselves from ties to the defense industry. These protests echo similar campus divestment campaigns against the prison industry in 2015, and the campaign against apartheid South Africa in the 1980s. However, the current divestment movement has been met with disproportionate suppression and unprecedented digital surveillance from many universities.

This guide is written with those involved in protests in mind. Student journalists covering protests may also face digital threats and can refer to our previous guide to journalists covering protests.

Campus Security Planning

Putting together a security plan is an essential first step to protect yourself from surveillance. You can’t protect all information from everyone, and as a practical matter you probably wouldn’t want to. Instead, you want to identify what information is sensitive and who should and shouldn’t have access to it.

That means this plan will be very specific to your context and your own tolerance of risk from physical and psychological harm. For a more general walkthrough you can check out our Security Plan article on Surveillance Self-Defense. Here, we will walk through this process with prevalent concerns from current campus protests.

What do I want to protect?

Current university protests are a rapid and decentralized response to what the UN International Court of Justice ruled as a plausible case of genocide in Gaza, and to the reported humanitarian crisis in occupied East Jerusalem and the West Bank. Such movements will need to focus on secure communication, immediate safety at protests, and protection from collected data being used for retaliation—either at protests themselves or on social media.

At a protest, a mix of visible and invisible surveillance may be used to identify protesters. This can include administrators or law enforcement simply attending and keeping notes of what is said, but often digital recordings can make that same approach less plainly visible. This doesn't just include video and audio recordings—protesters may also be subject to tracking methods like face recognition technology and location tracking from their phone, school ID usage, or other sensors. So here, you want to be mindful of anything you say or anything on your person, which can reveal your identity or role in the protest, or those of fellow protestors.

This may also be paired with online surveillance. The university or police may monitor activity on social media, even joining private or closed groups to gather information. Of course, any services hosted by the university, such as email or WiFi networks, can also be monitored for activity. Again, taking care of what information is shared with whom is essential, including carefully separating public information (like the time of a rally) and private information (like your location when attending). Also keep in mind how what you say publicly, even in a moment of frustration, may be used to draw negative attention to yourself and undermine the cause.

However, many people may strategically use their position and identity publicly to lend credibility to a movement, such as a prominent author or alumnus. In doing so they should be mindful of those around them in more vulnerable positions.

Who do I want to protect it from?

Divestment challenges the financial underpinning of many institutions in higher education. The most immediate adversaries are clear: the university being pressured and the institutions being targeted for divestment.

However, many schools are escalating by inviting police on campus, sometimes as support for their existing campus police, making them yet another potential adversary. Pro-Palestine protests have drawn attention from some federal agencies, meaning law enforcement will inevitably be a potential surveillance adversary even when not invited by universities.

With any sensitive political issue, there are also people who will oppose your position. Others at the protest can escalate threats to safety, or try to intimidate and discredit those they disagree with. Private actors, whether individuals or groups, can weaponize surveillance tools available to consumers online or at a protest, even if it is as simple as video recording and doxxing attendees.

How bad are the consequences if I fail?

Failing to protect information can have a range of consequences that will depend on the institution and local law enforcement’s response. Some schools defused campus protests by agreeing to enter talks with protesters. Others opted to escalate tensions by having police dismantle encampments and having participants suspended, expelled, or arrested. Such disproportionate disciplinary actions put students at risk in myriad ways, depending on how they rely on the institution. The extent to which institutions will attempt to chill speech with surveillance will vary, but unlike direct physical disruption, surveillance tools may be used with less hesitation.

The safest bet is to lock your devices with a pin or password, turn off biometric unlocks such as face or fingerprint, and say nothing but to assert your rights.

All interactions with law enforcement carry some risk, and will differ based on your identity and history of police interactions. This risk can be mitigated by knowing your rights and limiting your communication with police unless in the presence of an attorney. 

How likely is it that I will need to protect it?

Disproportionate disciplinary actions will often coincide with and be preceded by some form of surveillance. Even schools that are more accommodating of peace protests may engage in some level of monitoring, particularly schools that have already adopted surveillance tech. School devices, services, and networks are also easy targets, so try to use alternatives to these when possible. Stick to using personal devices and not university-administered ones for sensitive information, and adopt tools to limit monitoring, like Tor. Even banal systems like campus ID cards, presence monitors, class attendance monitoring, and wifi access points can create a record of student locations or tip off schools to people congregating. Online surveillance is also easy to implement by simply joining groups on social media, or even adopting commercial social media monitoring tools.

Schools that invite a police presence make their students and workers subject to the current practices of local law enforcement. Our resource, the Atlas of Surveillance, gives an idea of what technology local law enforcement is capable of using, and our Street-Level Surveillance hub breaks down the capabilities of each device. But other factors, like how well-resourced local law enforcement is, will determine the scale of the response. For example, if local law enforcement already have social media monitoring programs, they may use them on protesters at the request of the university.

Bad actors not directly affiliated with the university or law enforcement may be the most difficult factor to anticipate. These threats can arise from people who are physically present, such as onlookers or counter-protesters, and individuals who are offsite. Information about protesters can be turned against them for purposes of surveillance, harassment, or doxxing. Taking measures found in this guide will also be useful to protect yourself from this potentiality.

Finally, don’t confuse your rights with your safety. Even if you are in a context where assembly is legal and surveillance and suppression is not, be prepared for it to happen anyway. Legal protections are retrospective, so for your own safety, be prepared for adversaries willing to overstep these protections.

How much trouble am I willing to go through to try to prevent potential consequences?

There is no perfect answer to this question, and every individual protester has their own risks and considerations. In setting this boundary, it is important to communicate it with others and find workable solutions that meet people where they’re at. Being open and judgment-free in these discussions makes the movement being built more consensual and less prone to abuses. Centering consent in organizing can also help weed out bad actors in your own camp who will raise the risk for all who participate, deliberately or not.

Keep in mind that nearly any electronic device you own can be used to track you, but there are a few steps you can take to make that data collection more difficult. 

Sometimes a surveillance self-defense tactic will invite new threats. Some universities and governments have been so eager to get images of protesters’ faces that they have threatened criminal penalties on people wearing masks at gatherings. These new potential charges must now be weighed against the potential harms of face recognition technology, doxxing, and retribution someone may face by exposing their face.

Privacy is also a team sport. Investing a lot of energy in only your own personal surveillance defense may have diminishing returns, but making an effort to educate peers and adjust the norms of the movement puts less work on any one person and has a potentially greater impact. Sharing the resources in this post and the surveillance self-defense guides, and hosting your own workshops with the security education companion, are good first steps.

Who are my allies?

Cast a wide net of support; many members of faculty and staff may be able to provide forms of support to students, like institutional knowledge about school policies. Many school alumni are also invested in the reputation of their alma mater, and can bring outside knowledge and resources.

A number of non-profit organizations can also support protesters who face risks on campus. For example, many campus bail funds have been set up to support arrested protesters. The National Lawyers Guild has chapters across the U.S. that can offer Know Your Rights training and provide and train people to become legal observers (people who document a protest so that there is a clear legal record of civil liberties’ infringements should protesters face prosecution).

Many local solidarity groups may also be able to help provide trainings, street medics, and jail support. Many groups in EFF’s grassroots network, the Electronic Frontier Alliance, also offer free digital rights training and consultations.

Finally, EFF can help victims of surveillance directly when they email info@eff.org or Signal 510-243-8020. Even when EFF cannot take on your case, we have a wide network of attorneys and cybersecurity researchers who can offer support.

Beyond preparing according to your security plan, preparing plans with networks of support outside of the protest is a good idea.

Tips and Resources

Keep in mind that nearly any electronic device you own can be used to track you, but there are a few steps you can take to make that data collection more difficult. To prevent tracking, your best option is to leave all your devices at home, but that’s not always possible, and makes communication and planning much more difficult. So, it’s useful to get an idea of what sorts of surveillance is feasible, and what you can do to prevent it. This is meant as a starting point, not a comprehensive summary of everything you may need to do or know:

Prepare yourself and your devices for protests

Our guide for attending a protest covers the basics for protecting your smartphone and laptop, as well as providing guidance on how to communicate and share information responsibly. We have a handy printable version available here, too, that makes it easy to share with others.

Beyond preparing according to your security plan, preparing plans with networks of support outside of the protest is a good idea. Tell friends or family when you plan to attend and leave, so that if there are arrests or harassment they can follow up to make sure you are safe. If there may be arrests, make sure to have the phone number of an attorney and possibly coordinate with a jail support group.

Protect your online accounts

Doxxing, when someone exposes information about you, is a tactic reportedly being used on some protesters. This information is often found in public places, like "people search" sites and social media. Being doxxed can be overwhelming and difficult to control in the moment, but you can take some steps to manage it or at least prepare yourself for what information is available. To get started, check out the guide that the New York Times created to train its journalists to dox themselves, and PEN America's Online Harassment Field Manual.

Compartmentalize

Being deliberate about how and where information is shared can limit the impact of any one breach of privacy. Online, this might look like using different accounts for different purposes or preferring smaller Signal chats, and offline it might mean being deliberate about with whom information is shared, and bringing “clean” devices (without sensitive information) to protests.

Be mindful of potential student surveillance tools 

It’s difficult to track what tools each campus is using to track protesters, but it’s possible that colleges are using the same tricks they’ve used for monitoring students in the past alongside surveillance tools often used by campus police. One good rule of thumb: if a device, software, or an online account was provided by the school (like an .edu email address or test-taking monitoring software), then the school may be able to access what you do on it. Likewise, remember that if you use a corporate or university-controlled tool without end-to-end encryption for communication or collaboration, like online documents or email, content may be shared by the corporation or university with law enforcement when compelled with a warrant. 

Know your rights if you’re arrested: 

Thousands of students, staff, faculty, and community members have been arrested, but it’s important to remember that the vast majority of the people who have participated in street and campus demonstrations have not been arrested or taken into custody. Nevertheless, be careful and know what to do if you’re arrested.

The safest bet is to lock your devices with a pin or password, turn off biometric unlocks such as face or fingerprint, and say nothing but to assert your rights, for example, refusing consent to a search of your devices, bags, vehicles, or home. Law enforcement can lie and pressure arrestees into saying things that are later used against them, so waiting until you have a lawyer before speaking is always the right call.

Barring a warrant, law enforcement cannot compel you to unlock your devices or answer questions, beyond basic identification in some jurisdictions. Law enforcement may not respect your rights when they’re taking you into custody, but your lawyer and the courts can protect your rights later, especially if you assert them during the arrest and any time in custody.

Google will start deleting location history

7 June 2024 at 12:26

Google announced that it will reduce the amount of personal data it is storing by automatically deleting old data from “Timeline”—the feature that, previously named “Location History,” tracks user routes and trips based on a phone’s location, allowing people to revisit all the places they’ve been in the past.

In an email, Google told users that they will have until December 1, 2024 to save all travels to their mobile devices before the company starts deleting old data. If you use this feature, that means you have about five months before losing your location history.

Moving forward, Google will link the location information to the devices you use, rather than to the user account(s). And, instead of backing up your data to the cloud, Google will soon start to store it locally on the device.

As I pointed out years ago, Location History allowed me to “spy” on my wife’s whereabouts without having to install anything on her phone. After some digging, I learned that my Google account was added to my wife’s phone’s accounts when I logged in on the Play Store on her phone. The extra account this created on her phone was not removed when I logged out after noticing the tracking issue.

That issue should be solved by implementing this new policy. (Let’s remember, though, that this is an issue that Google formerly considered a feature rather than a problem.)

Once the change takes effect, unless you enable the new Timeline settings by December 1, Google will attempt to move the past 90 days of your travel history to the first device on which you sign in to your Google account. If you want to keep using Timeline:

  • Open Google Maps on your device.
  • Tap your profile picture (or initial) in the upper right corner.
  • Choose Your Timeline.
  • Select whether you want to keep your location data until you manually delete it, or have Google auto-delete it after 3, 18, or 36 months.

In April of 2023, Google Play launched a series of initiatives that give users control over the way separate, third-party apps store data about them. This was seemingly done because Google wanted to increase transparency and give people control mechanisms over how apps would collect and use their data.

With the latest announcement, it appears that Google is finally tackling its own apps.

Only recently, Google agreed to purge billions of records containing personal information collected from more than 136 million people in the US surfing the internet using its Chrome web browser. But this was part of a settlement in a lawsuit accusing the search giant of illegal surveillance.

It’s nice to see the needle move in the good direction for a change. As Bruce Schneier pointed out in his article Online Privacy and Overfishing:

“Each successive generation of the public is accustomed to the privacy status quo of their youth. What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.”

This has led us all to a world where we don’t even have the expectation of privacy anymore when it comes to what we do online or when using modern technology in general.

If you want to take firmer control over how your location is tracked and shared, we recommend reading How to turn off location tracking on Android.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20
Microsoft CEO Satya Nadella, with superimposed text: “Security”

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.

The post Microsoft Recall is a Privacy Disaster appeared first on Security Boulevard.

Advance Auto Parts customer data posted for sale

6 June 2024 at 08:57

A cybercriminal using the handle Sp1d3r is offering to sell 3 TB of data taken from Advance Auto Parts, Inc. Advance Auto Parts is a US automotive aftermarket parts provider that serves both professional installers and do-it-yourself customers.

Allegedly the customer data includes:

  • Names
  • Email addresses
  • Phone numbers
  • Physical addresses
  • Orders
  • Loyalty and gas card numbers
  • Sales history

The data set allegedly also includes information about 358,000 employees and candidates—which is a lot more than are currently employed by Advance Auto Parts (69,000 in 2023).

The cybercriminal is asking $1.5 million for the data set.

post by Sp1d3r offering data for sale
Cybercriminal offering Advance Auto Parts data for sale

Advance Auto Parts has not disclosed any information about a possible data breach and has not responded to inquiries. However, BleepingComputer has confirmed that a large number of the Advance Auto Parts sample customer records are legitimate.

Interestingly enough, the seller claims in their post that the data comes from Snowflake, a cloud company used by thousands of companies to manage their data. On May 31st, Snowflake said it had recently observed and was investigating an increase in cyber threat activity targeting some of its customers’ accounts. It didn’t mention which customers.

At the time, everybody focused on Live Nation / Ticketmaster, another client of Snowflake which said it had detected unauthorized activity within a “third-party cloud database environment” containing company data.

The problem allegedly lies in the fact that Snowflake lets each customer manage the security of their environments, and does not enforce multi-factor authentication (MFA).

Online media outlet TechCrunch says it has:

“Seen hundreds of alleged Snowflake customer credentials that are available online for cybercriminals to use as part of hacking campaigns, suggesting that the risk of Snowflake customer account compromises may be far wider than first known.”

TechCrunch also says it found more than 500 credentials containing employee usernames and passwords, along with the web addresses of the login pages for Snowflake environments, belonging to Santander, Ticketmaster, at least two pharmaceutical giants, a food delivery service, a public-run freshwater supplier, and others.

Meanwhile, Snowflake has urged its customers to immediately switch on MFA for their accounts.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your exposure

While the full scope of the Advance Auto Parts breach has yet to be confirmed, it’s likely you’ve had other personal information exposed online in previous data breaches. You can check what personal information of yours has been exposed with our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Husband stalked ex-wife with seven AirTags, indictment says

6 June 2024 at 08:20

Following their divorce, a husband carried out a campaign of stalking and abuse against his ex-wife—referred to only as “S.K.”—by allegedly hiding seven separate Apple AirTags on or near her car, according to documents filed by US prosecutors for the Eastern District of Pennsylvania.

The documents, unearthed by 404 Media in collaboration with Court Watch, reveal how everyday consumer tools, like Bluetooth trackers, are sometimes leveraged for abuse against spouses and romantic partners.

“The Defendant continued to adapt and use increasingly sophisticated efforts to hide the AirTags he placed on S.K.’s car,” US attorneys said. “It is clear from the timing of the placement of the AirTags and corroborating cell-site data, that he was monitoring S.K.’s movements.”

On May 8, the US government filed an indictment against the defendant, Ibodullo Muhiddinov Numanovich, charging him with one count of stalking his ex-wife, S.K.

The stalking at the center of the government’s indictment allegedly began around March 27, when the FBI first learned about S.K. finding and removing an AirTag from her car. Less than a month later, on April 18, the FBI found a second AirTag that “was taped underneath the front bumper of S.K.’s vehicle with white duct tape.”

The very next day, the FBI found a third AirTag. This time, it was “wrapped in a blue medical mask and secured under the vehicle near the rear passenger side wheel well.”

This pattern of finding an AirTag, removing it, and then finding another was punctuated by physical and verbal intimidation, the government wrote. After a fourth AirTag was removed, the government said that Numanovich called S.K., followed her to a car wash, and “banged on her windows, and demanded to know why S.K. was not answering his calls.” Less than one week later, during a period of just 10 minutes, the government said that Numanovich left five threatening voice mails on S.K.’s phone, calling her “disgusting” and “worse than an animal.”

During the investigation, the FBI retrieved seven AirTags in total. Here is where those AirTags were found:

  1. Found by S.K. with no detail on specific location
  2. Duct-taped underneath the front bumper of S.K.’s car
  3. Underneath S.K.’s car, near the passenger-side wheel well, wrapped in a blue medical mask
  4. Within the frame of S.K.’s driver-side mirror, wedged between the mirror itself and the casing around it
  5. “An opening within the vehicle’s frame” which, documents say, was previously sealed by a rubber plug that was removed
  6. Underneath the license plate on S.K.’s car
  7. Undisclosed

For two of the retrieved AirTags, the FBI deactivated the trackers and then, away from S.K., placed the AirTags at separate locations. At an undisclosed location in Philadelphia where the FBI placed one AirTag, FBI agents later saw Numanovich “exit his vehicle with his phone in his hand, and begin searching for the AirTag.” At a convenience store where the FBI placed a second AirTag, agents said they again saw Numanovich.

The FBI also received information about attempted pairings and successful unpairings with Numanovich’s Apple account for three of the Apple AirTags.

In addition to the alleged pattern of stalking, the government also accused Numanovich of abusing S.K. both physically and emotionally, threatening her in person and over the phone, and recording sexually explicit videos of her to use as extortion. After a search warrant was authorized on May 13, agents found “approximately 140 sexually explicit photographs and videos of S.K.” stored on Numanovich’s phone, along with records for “numerous” financial accounts that transferred more than $4 million between 2022 and 2023.

In a follow-on request from the government to detain Numanovich before his trial begins, prosecutors also revealed that S.K. may have been brought into the US through a “Russian-based human smuggling network”—a network of which Numanovich might be a member.

According to 404 Media, a jury trial for Numanovich is scheduled to start on June 8.

Improving AirTag safety

Just last month, Apple and Google announced an industry specification for Bluetooth tracking devices such as AirTags to help alert users to unwanted tracking. The specification will make it possible to alert users across both iOS and Android if a device is unknowingly being used to track them. We applaud this development.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Microsoft Recall snapshots can be easily grabbed with TotalRecall tool

6 June 2024 at 07:44

Microsoft’s Recall feature has been criticized heavily by pretty much everyone since it was announced last month. Now, researchers have demonstrated the risks by creating a tool that can find, extract, and display everything Recall has stored on a device.

For those unaware, Recall is a feature within what Microsoft is calling its “Copilot+ PCs,” a reference to Copilot, the AI assistant and companion the company released in late 2023.

The idea is that Recall can help users reconstruct past activity by taking regular screenshots of a user’s activity and storing them, so it can answer important questions like “where did I see those expensive white sneakers?”

However, the scariest part is that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers, and that data may end up in snapshots that are stored on your device.

Many security professionals have pointed out that this kind of built-in spyware is a security risk. But Microsoft tried to reassure users, saying:

“Recall data is only stored locally and not accessed by Microsoft or anyone who does not have device access.”

The problem lies in that last part of the statement. Who has device access? Although Microsoft claimed that an attacker would need to gain physical access, unlock the device and sign in before they could access saved screenshots, it turns out that might not be true.

As a warning about how Recall could be abused by criminal hackers, Alex Hagenah, a cybersecurity researcher, has released a demo tool that is capable of automatically extracting and displaying everything Recall records on a laptop.

For reasons any science fiction fan will understand, Hagenah has named that tool TotalRecall. All the information that Recall saves into its main database on a Windows laptop can be “recalled.”

As Hagenah points out:

“The database is unencrypted. It’s all plain text.”

TotalRecall can automatically find the Recall database on a person’s computer and make a copy of the file for whatever date range you want. Pulling one day of screenshots from Recall, which stores its information in an SQLite database, took two seconds at most, according to Hagenah. Once TotalRecall has been deployed, it can generate a summary of the data or search the database for specific terms.
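To illustrate what “unencrypted, plain text” means in practice, the sketch below queries a hypothetical screen-capture database with Python’s built-in sqlite3 module. The table and column names (`captures`, `timestamp`, `window_title`, `captured_text`) are invented for illustration; they are not Recall’s actual schema, and this is not TotalRecall’s code.

```python
import sqlite3
from datetime import datetime, timedelta

def search_captures(db_path: str, term: str, days: int = 1):
    """Return capture rows from the last `days` whose text contains `term`.

    The schema here (a `captures` table with `timestamp`, `window_title`,
    and `captured_text` columns) is a hypothetical stand-in, not the real
    Recall database layout.
    """
    cutoff = (datetime.now() - timedelta(days=days)).timestamp()
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT timestamp, window_title, captured_text "
            "FROM captures WHERE timestamp >= ? AND captured_text LIKE ?",
            (cutoff, f"%{term}%"),
        ).fetchall()
    finally:
        conn.close()
```

Because the file is ordinary SQLite with no encryption layer, any process that can read it can run queries like this; the only barrier is file-system access control.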

Now imagine an info-stealer that incorporates the capabilities of TotalRecall. This is not a far-fetched scenario, because many information stealers are modular: the operators can add or leave out certain modules based on the target and the information they are after. And reportedly, the number of devices infected with data-stealing malware has seen a sevenfold increase since 2023.

Another researcher, Kevin Beaumont, says he has built a website where a Recall database can be uploaded and instantly searched. He says he hasn’t released the site yet, to allow Microsoft time to potentially change the system.

According to Beaumont:

“InfoStealer trojans, which automatically steal usernames and passwords, are a major problem for well over a decade—now these can just be easily modified to support Recall.”

It’s true that any information stealer will need administrator rights to access Recall data, but attacks that gain those rights have been around for years, and most information stealer malware does this already.

Hagenah also warned that in cases of employers with bring your own device (BYOD) policies, there’s a risk of someone leaving with huge volumes of company data saved on their laptops.

It is worrying that tools like this are already available even before the official launch of Recall. The risk of identity theft only increases when we allow our machines to “capture” every move we make and everything we look at.



Researcher Develops ‘TotalRecall’ Tool That Can Extract Data From Microsoft Recall

By: Alan J
5 June 2024 at 19:15

TotalRecall Microsoft Security Vulnerability

While Microsoft's forthcoming Recall feature has already sparked security and privacy concerns, the tech giant attempted to downplay those reactions by stating that collected data would remain on the user's device. Despite this reassurance, concerns remain, as researchers, including the developer of a new tool dubbed "TotalRecall," have observed various inherent vulnerabilities in the local database maintained by Recall, lending credibility to critics of Microsoft's implementation of the AI tool.

TotalRecall Tool Demonstrates Recall's Inherent Vulnerabilities

Recall is a new Windows AI tool planned for Copilot+ PCs that captures screenshots from user devices every five seconds and stores the data in a local database. The tool's announcement, however, led many to fear that this process would make sensitive information on devices susceptible to unauthorized access.

TotalRecall, a new tool developed by Alex Hagenah and named after the 1990 sci-fi film, highlights the potential compromise of this stored information. Hagenah states that the local database is unencrypted and stores data in plain text format. The researcher likened Recall to spyware, calling it a "Trojan 2.0." TotalRecall was designed to extract and display all the information stored in the Recall database, pulling out screenshots, text data, and other sensitive information, highlighting the potential for abuse by criminal hackers or domestic abusers who may gain physical access to a device.

Hagenah's concerns are echoed by others in the cybersecurity community, who have also compared Recall to spyware or stalkerware. Recall captures screenshots of everything displayed on a user's desktop, including messages from encrypted apps like Signal and WhatsApp, websites visited, and all text shown on the PC. TotalRecall can locate and copy the Recall database, parse its data, and generate summaries of the captured information, with features for date range filtering and term searches. Hagenah stated that by releasing the tool on GitHub, he aims to push Microsoft to fully address these security issues before Recall's launch on June 18.

Microsoft Recall Privacy and Security Concerns

Cybersecurity researcher Kevin Beaumont has also developed a website for searching Recall databases, though he has withheld its release to give Microsoft time to make changes. Microsoft's privacy documentation for Recall mentions the ability to disable screenshot saving, pause Recall on the system, filter out applications, and delete data. Nonetheless, the company acknowledges that Recall does not moderate the captured content, which could include sensitive information like passwords, financial details, and more.

The risks extend beyond individual users, as employees under "bring your own device" policies could leave with significant amounts of company data saved on their laptops. The UK's data protection regulator has requested more information from Microsoft regarding Recall and its privacy implications.

Amid criticism over recent hacks affecting US government data, Microsoft CEO Satya Nadella has emphasized the company's need to prioritize security. However, the issues surrounding Recall demonstrate that security concerns were not given sufficient attention, and they necessitate inspection of its data collection practices before the official release.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Generative AI and Data Privacy: Navigating the Complex Landscape

Generative AI

By Neelesh Kripalani, Chief Technology Officer, Clover Infotech Generative AI, which includes technologies such as deep learning, natural language processing, and speech recognition for generating text, images, and audio, is transforming various sectors from entertainment to healthcare. However, its rapid advancement has raised significant concerns about data privacy. To navigate this intricate landscape, it is crucial to understand the intersection of AI capabilities, ethical considerations, legal frameworks, and technological safeguards.

Data Privacy Challenges Raised by Generative AI

  • Not securing data during collection or processing – Generative AI raises significant data privacy concerns due to its need for vast amounts of diverse data, often including sensitive personal information collected without explicit consent and difficult to anonymize effectively. Model inversion attacks and data leakage risks can expose private information, while biases in training data can lead to unfair or discriminatory outputs.
  • The risk of generated content – The ability of generative AI to produce highly realistic fake content raises serious concerns about its potential for misuse. Whether creating convincing deepfake videos or generating fabricated text and images, there is a significant risk of this content being used for impersonation, spreading disinformation, or damaging individuals' reputations.
  • Lack of accountability and transparency – Since GenAI models operate through complex layers of computation, it is difficult to gain visibility into how these systems arrive at their outputs. This complexity makes it hard to trace the specific steps and factors that lead to a particular decision or output, which not only hinders trust and accountability but also complicates the tracing of data usage and makes it tedious to ensure compliance with data privacy regulations. Additionally, unidentified biases in the training data can lead to unfair outputs, and the creation of highly realistic but fake content, like deepfakes, poses risks to content authenticity and verification.
  • Lack of fairness and ethical considerations – Generative AI models can perpetuate or even exacerbate existing biases present in their training data. This can lead to unfair treatment or misrepresentation of certain groups, raising ethical issues.

Here’s How Enterprises Can Navigate These Challenges

  • Understand and map the data flow – Enterprises must maintain a comprehensive inventory of the data that their GenAI systems process, including data sources, types, and destinations. They should also create a detailed data flow map to understand how data moves through their systems.
  • Implement strong data governance – Following the principle of data minimization, enterprises must collect, process, and retain only the minimum amount of personal data necessary to fulfill a specific purpose. In addition, they should develop and enforce robust data privacy policies and procedures that comply with relevant regulations.
  • Ensure data anonymization and pseudonymization – Techniques such as anonymization and pseudonymization can be implemented to reduce the chances of data re-identification.
  • Strengthen security measures – Implement other security measures such as encryption for data at rest and in transit, access controls for protecting against unauthorized access, and regular monitoring and auditing to detect and respond to potential privacy breaches.

To summarize, organizations must begin by complying with the latest data protection laws and practices, and strive to use data responsibly and ethically. Further, they should regularly train employees on data privacy best practices to effectively manage the challenges posed by generative AI while leveraging its benefits responsibly and ethically.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.
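The pseudonymization measure mentioned above can be sketched in a few lines. This is an illustrative example rather than a prescription from the article: it replaces a direct identifier with a keyed HMAC-SHA256 token, so records remain linkable for analytics while the raw value stays out of the training set. The key must be stored separately from the data, since anyone holding it can regenerate the mapping.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, key: bytes) -> str:
    """Map an identifier to a stable, non-reversible token.

    Unlike a plain hash, the secret key prevents dictionary attacks
    against predictable inputs such as email addresses.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input and key always yield the same token, so joins and
# aggregations still work on the pseudonymized column.
```

Note that pseudonymized data is still personal data under regulations like the GDPR, because re-identification remains possible for whoever holds the key; full anonymization requires removing that linkability entirely.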

Online Privacy and Overfishing

5 June 2024 at 07:00

Microsoft recently caught state-backed hackers using its generative AI tools to help with their attacks. In the security community, the immediate questions weren’t about how hackers were using the tools (that was utterly predictable), but about how Microsoft figured it out. The natural conclusion was that Microsoft was spying on its AI users, looking for harmful hackers at work.

Some pushed back at characterizing Microsoft’s actions as “spying.” Of course cloud service providers monitor what users are doing. And because we expect Microsoft to be doing something like this, it’s not fair to call it spying.

We see this argument as an example of our shifting collective expectations of privacy. To understand what’s happening, we can learn from an unlikely source: fish.

In the mid-20th century, scientists began noticing that the number of fish in the ocean—so vast as to underlie the phrase “There are plenty of fish in the sea”—had started declining rapidly due to overfishing. They had already seen a similar decline in whale populations, when the post-WWII whaling industry nearly drove many species extinct. In whaling and later in commercial fishing, new technology made it easier to find and catch marine creatures in ever greater numbers. Ecologists, specifically those working in fisheries management, began studying how and when certain fish populations had gone into serious decline.

One scientist, Daniel Pauly, realized that researchers studying fish populations were making a major error when trying to determine acceptable catch size. It wasn’t that scientists didn’t recognize the declining fish populations. It was just that they didn’t realize how significant the decline was. Pauly noted that each generation of scientists had a different baseline to which they compared the current statistics, and that each generation’s baseline was lower than that of the previous one.

Pauly called this “shifting baseline syndrome” in a 1995 paper. The baseline most scientists used was the one that was normal when they began their research careers. By that measure, each subsequent decline wasn’t significant, but the cumulative decline was devastating. Each generation of researchers came of age in a new ecological and technological environment, inadvertently masking an exponential decline.

Pauly’s insights came too late to help those managing some fisheries. The ocean suffered catastrophes such as the complete collapse of the Northwest Atlantic cod population in the 1990s.

Internet surveillance, and the resultant loss of privacy, is following the same trajectory. Just as certain fish populations in the world’s oceans have fallen 80 percent, from previously having fallen 80 percent, from previously having fallen 80 percent (ad infinitum), our expectations of privacy have similarly fallen precipitously. The pervasive nature of modern technology makes surveillance easier than ever before, while each successive generation of the public is accustomed to the privacy status quo of their youth. What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.

Historically, people controlled their computers, and software was standalone. The always-connected cloud-deployment model of software and services flipped the script. Most apps and services are designed to be always-online, feeding usage information back to the company. A consequence of this modern deployment model is that everyone—cynical tech folks and even ordinary users—expects that what you do with modern tech isn’t private. But that’s because the baseline has shifted.

AI chatbots are the latest incarnation of this phenomenon: They produce output in response to your input, but behind the scenes there’s a complex cloud-based system keeping track of that input—both to improve the service and to sell you ads.

Shifting baselines are at the heart of our collective loss of privacy. The U.S. Supreme Court has long held that our right to privacy depends on whether we have a reasonable expectation of privacy. But expectation is a slippery thing: It’s subject to shifting baselines.

The question remains: What now? Fisheries scientists, armed with knowledge of shifting-baseline syndrome, now look at the big picture. They no longer consider relative measures, such as comparing this decade with the last decade. Instead, they take a holistic, ecosystem-wide perspective to see what a healthy marine ecosystem and thus sustainable catch should look like. They then turn these scientifically derived sustainable-catch figures into limits to be codified by regulators.

In privacy and security, we need to do the same. Instead of comparing to a shifting baseline, we need to step back and look at what a healthy technological ecosystem would look like: one that respects people’s privacy rights while also allowing companies to recoup costs for services they provide. Ultimately, as with fisheries, we need to take a big-picture perspective and be aware of shifting baselines. A scientifically informed and democratic regulatory process is required to preserve a heritage—whether it be the ocean or the Internet—for the next generation.

This essay was written with Barath Raghavan, and previously appeared in IEEE Spectrum.

