TEMU sued for being “dangerous malware” by Arkansas Attorney General

Chinese online shopping giant Temu is facing a lawsuit filed by State of Arkansas Attorney General Tim Griffin, alleging that the retailer’s mobile app spies on users.

“Temu purports to be an online shopping platform, but it is dangerous malware, surreptitiously granting itself access to virtually all data on a user’s cellphone.”

Temu quickly denied the allegations.

Speaking with Ars Technica, a Temu spokesperson said “the allegations in the lawsuit are based on misinformation circulated online, primarily from a short-seller, and are totally unfounded.”

According to Backlinko statistics, Temu was the most downloaded shopping app worldwide in 2023, with 337.2 million downloads, 1.8 times as many as Amazon Shopping. According to TechCrunch, it was also the most downloaded free iPhone app in the US for 2023.

Temu’s popularity today is likely due to its exceedingly low prices (a brief scan of its website shows a shoulder-sling backpack being sold for $2.97, and a broom-and-dustpan combo for $12.47). How those low prices are achieved has been a mystery for some onlookers, but current theories include:

  • Temu relies on the de minimis exception to ship goods directly to U.S. customers for a low price. A shipment below the de minimis value of $800 isn’t inspected or taxed by US Customs.
  • The online webshop pressures manufacturers to lower their prices even further to appease discount-seeking customers, leaving those manufacturers with little to no profit in return.
  • Most items sold on Temu are unbranded and manufactured en masse by manufacturers in China. Almost every tech product on Temu is a knockoff or “dupe” of a real, brand-name product.

But according to reporting last year from Wired, Temu’s low prices are easy to decipher—Temu itself is losing millions of dollars to break into the US market.

“An analysis of the company’s supply chain costs by WIRED—confirmed by a company insider—shows that Temu is losing an average of $30 per order as it throws money at trying to break into the American market.”

Attorney General Griffin contends that Temu baits users with misleading promises of discounted, quality goods and adds addictive features like wheels of fortune to keep users engaged with the app.

He called Temu “functionally malware and spyware,” adding that the app was “purposefully designed to gain unrestricted access to a user’s phone operating system.”

The lawsuit claims that Temu’s app can sneakily access “a user’s camera, specific location, contacts, text messages, documents, and other applications.” Further, the lawsuit alleges that Temu is capable of recompiling itself, changing properties, and overriding the data privacy settings set by the user. If true, this would make it almost impossible to detect, even by “sophisticated” users, the lawsuit said.

Some may suspect that this is another attempt to ban an app hailing from a “foreign adversarial country” like TikTok, but Attorney General Griffin is very clear about his reasons.

“Temu is not an online marketplace like Amazon or Walmart. It is a data-theft business that sells goods online as a means to an end.”


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Victory! Grand Jury Finds Sacramento Cops Illegally Shared Driver Data

For the past year, EFF has been sounding the alarm about police in California illegally sharing drivers' location data with anti-abortion states, putting abortion seekers and providers at risk of prosecution. We thus applaud the Sacramento County Grand Jury for hearing this call and investigating two police agencies that had been unlawfully sharing this data out-of-state.

The grand jury, a body of 19 residents charged with overseeing local government including law enforcement, released their investigative report on Wednesday. In it, they affirmed that the Sacramento County Sheriff's Office and Sacramento Police Department violated state law and "unreasonably risked" aiding the potential prosecution of "women who traveled to California to seek or receive healthcare services."

In May 2023, EFF, along with the American Civil Liberties Union of Northern California and the American Civil Liberties Union of Southern California, sent letters to 71 California police agencies demanding that they stop sharing automated license plate reader (ALPR) data with law enforcement agencies in other states. This sensitive location information can reveal where individuals work, live, worship, and seek medical care—including reproductive health services. Since the Supreme Court overturned Roe v. Wade with its decision in Dobbs v. Jackson Women’s Health Organization, ALPR data has posed particular risks to those who seek or assist abortions that have been criminalized in their home states.

Since 2016, California law has prohibited sharing ALPR data with out-of-state or federal law enforcement agencies. Despite this, dozens of rogue California police agencies continued sharing this information with other states, even after the state's attorney general issued legal guidance in October "reminding" them to stop.

In Sacramento County, both the Sacramento County Sheriff's Office and the Sacramento Police Department have dismissed calls for them to start obeying the law. Last year, the Sheriff's Office even claimed on Twitter that EFF's concerns were part of "a broader agenda to promote lawlessness and prevent criminals from being held accountable." That agency, at least, seems to have had a change of heart: The Sacramento County Grand Jury reports that, after they began investigating police practices, the Sacramento County Sheriff's Office agreed to stop sharing ALPR data with out-of-state law enforcement agencies.

The Sacramento Police Department, however, has continued to share ALPR data with out-of-state agencies. In their report, the grand jury calls for the department to comply with the California Attorney General's legal guidance. The grand jury also recommends that all Sacramento law enforcement agencies make their ALPR policies available to the public in compliance with the law.

As the grand jury's report notes, EFF and California's ACLU affiliates "were among the first" organizations to call attention to police in the state illegally sharing ALPR data. While we are glad that many police departments have since complied with our demands that they stop this practice, we remain committed to bringing attention and pressure to agencies, like the Sacramento Police Department, that have not. In January, for instance, EFF and the ACLU sent a letter urging the California Attorney General to enforce the state's ALPR laws.

For nearly a decade, EFF has been investigating and raising the alarm about the illegal mass-sharing of ALPR data by California law enforcement agencies. The grand jury's report details just the latest in a series of episodes in which Sacramento agencies violated the law with ALPR. In December 2018, the Sacramento County Department of Human Assistance terminated its program after public pressure resulting from EFF's revelation that the agency was accessing ALPR data in violation of the law. The next year, EFF successfully lobbied the state legislature to order an audit of how four agencies, including the Sacramento County Sheriff's Office, use ALPR. The result was a damning report finding that the sheriff had fallen short of many of the basic requirements under state law.

Drone As First Responder Programs Are Swarming Across the United States

Law enforcement wants more drones, and we’ll probably see many more of them overhead as police departments seek to implement a popular project justifying the deployment of unmanned aerial vehicles (UAVs): the “drone as first responder” (DFR).

Police DFR programs involve a fleet of drones, which can range in number from four or five to hundreds. In response to 911 calls and other law enforcement calls for service, a camera-equipped drone is launched from a regular base (like the police station roof) to get to the incident first, giving responding officers a view of the scene before they arrive. In theory and in marketing materials, the advance view from the drone will help officers understand the situation more thoroughly before they get there, better preparing them for the scene and assisting them in things such as locating wanted or missing individuals more quickly. Police call this “situational awareness.”

In practice, law enforcement's desire to get “a view of the scene” becomes a justification for over-surveilling neighborhoods that produce more 911 calls and for collecting information on anyone who happens to be in the drone’s path. For example, a drone responding to a vandalism case may capture video footage of everyone it passes along the way. Also, drones are subject to the same mission-creep issues that already plague other police tools designed to record the public; what is pitched as a solution to violent crime can quickly become a tool for policing homelessness or low-level infractions that otherwise wouldn't merit police resources.

With their bird's-eye view, drones can observe individuals in previously private and constitutionally protected spaces, like their backyards, roofs, and even through home windows. And they can capture crowds of people, like protestors and other peaceful gatherers exercising their First Amendment rights. Drones can be equipped with cameras, thermal imaging, microphones, license plate readers, face recognition, mapping technology, cell-site simulators, weapons, and other payloads. Proliferation of these devices enables state surveillance even for routine operations and in response to innocuous calls, situations unrelated to the terrorism and violent crime concerns originally used to justify their adoption.

Drones are also increasingly tied into other forms of surveillance. More departments — including those in Las Vegas, Louisville, and New York City — are toying with the idea of dispatching drones in response to ShotSpotter gunshot detection alerts, which are known to send many false positive alerts. This could lead to drone surveillance of communities that happen to have a higher concentration of ShotSpotter microphones or other acoustic gunshot detection technology. Data revealed recently shows that a disproportionate number of these gunshot detection sensors are located in Black communities in the United States. Artificial intelligence is also being added to drone data collection; connecting what's gathered from the sky to what has been gathered on the street and through other methods is a trending part of the police panopticon plan.

A CVPD official explains the DFR program to EFF staff in 2022. Credit: Jason Kelley (EFF)

DFR programs have been growing in popularity since first launched by the Chula Vista Police Department in 2018. Now there are a few dozen departments with known DFR programs among the approximately 1,500 police departments known to have any drone program at all, according to EFF’s Atlas of Surveillance, the most comprehensive dataset of this kind of information. The Federal Aviation Administration (FAA) regulates use of drones and is currently mandated to prepare new regulations for how they can be operated beyond the operator’s line of sight (BVLOS), the kind of long-distance flight that currently requires a special waiver. All the while, police departments and the companies that sell drones are eager to move forward with more DFR initiatives.

Agency State
Arapahoe County Sheriff's Office CO
Beverly Hills Police Department CA
Brookhaven Police Department GA
Burbank Police Department CA
Chula Vista Police Department CA
Clovis Police Department CA
Commerce City Police Department CO
Daytona Beach Police Department FL
Denver Police Department CO
Elk Grove Police Department CA
Flagler County Sheriff's Office FL
Fort Wayne Police Department IN
Fremont Police Department CA
Gresham Police Department OR
Hawthorne Police Department CA
Hemet Police Department CA
Irvine Police Department CA
Montgomery County Police Department MD
New York City Police Department NY
Oklahoma City Police Department OK
Oswego Police Department NY
Redondo Beach Police Department CA
Santa Monica Police Department CA
West Palm Beach Police Department FL
Yonkers Police Department NY
Schenectady Police Department NY
Queen Creek Police Department AZ
Greenwood Village Police Department CO

Transparency around the acquisition and use of drones will be important to the effort to protect civilians from government and police overreach and abuse as agencies commission more of these flying machines. A recent Wired investigation raised concerns about Chula Vista’s program, finding that roughly one in 10 drone flights lacked a stated purpose, and for nearly 500 of its recent flights, the reason for deployment was an “unknown problem.” That same investigation also found that each average drone flight exposes nearly 5,000 city residents to enhanced surveillance, primarily in predominantly Black and brown neighborhoods.

“For residents we spoke to,” Wired wrote, “the discrepancy raises serious concerns about the accuracy and reliability of the department's transparency efforts—and experts say the use of the drones is a classic case of self-perpetuating mission creep, with their existence both justifying and necessitating their use.”

Chula Vista's "Drone-Related Activity Dashboard" indicates that more than 20 percent of drone flights are welfare checks or mental health crises, while only roughly 6 percent respond to assault calls. Chula Vista Police claim that the DFR program lets them avoid potentially dangerous or deadly interactions with members of the public, with drone responses allowing the department to avoid sending a patrol unit in response to 4,303 calls. However, this theory and the supporting data need to be meaningfully evaluated by independent researchers.

This type of analysis is not possible without transparency around the program in Chula Vista, which, to its credit, publishes regular details like the location and reason for each of its deployments. Still, that department has also tried to prevent the public from learning about its program, rejecting California Public Records Act (CPRA) requests for drone footage. This led to a lawsuit in which EFF submitted an amicus brief, and ultimately the California Court of Appeal correctly found that drone footage is not exempt from CPRA requests.

While some might take for granted that the government is not allowed to conduct surveillance — intentional, incidental, or otherwise — on you in spaces like your fenced-in backyard, this is not always the case. It took a lawsuit and a recent Alaska Supreme Court decision to ensure that police in that state must obtain a warrant for drone surveillance in otherwise private areas. While some states do require a warrant to use a drone to violate the privacy of a person’s airspace, Alaska, California, Hawaii, and Vermont are currently the only states where courts have held that warrantless aerial surveillance violates residents’ constitutional protections against unreasonable search and seizure absent specific exceptions.

Clear policies around the use of drones are a valuable part of holding police departments accountable for their drone use. These policies must include rules around why a drone is deployed and guardrails on the kind of footage that is collected, the length of time it is retained, and with whom it can be shared.

A few state legislatures have taken steps toward providing public accountability over growing drone use.

  • In Minnesota, law enforcement agencies are required to annually report their drone programs' costs and the number of times they deployed drones, including how many of those deployments were made without a warrant.
  • In Illinois, the Drones as First Responders Act went into effect June 2023, requiring agencies to report whether they own drones; how many are owned; the number of times the drones were deployed, as well as the date, location, and reason for the deployment; and whether video was captured and then retained from each deployment. Illinois agencies also must share a copy of their latest use policies, drone footage is generally supposed to be deleted after 24 hours, and the use of face recognition technology is prohibited except in certain circumstances.
  • In California, AB 481 — which took effect in May 2022 with the aim of providing public oversight over military-grade police equipment — requires police departments to publicly share a regular inventory of the drones that they use. Under this law, police acquisition of drones and the policies governing their use require approval from local elected officials following an opportunity for public comment, giving communities an important chance to provide feedback.
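Retention guardrails like the Illinois 24-hour deletion rule described above lend themselves to simple automated checks. Below is a minimal sketch of such a check in Python; the function and field names are hypothetical, and the 24-hour window and evidence exemption are assumptions modeled on the rule as summarized here, not on the statute's exact text.

```python
from datetime import datetime, timedelta, timezone

# Assumed default: non-evidentiary footage is deleted after 24 hours,
# modeled on the Illinois-style rule described above.
RETENTION_WINDOW = timedelta(hours=24)

def is_past_retention(captured_at: datetime, now: datetime,
                      retained_as_evidence: bool = False) -> bool:
    """Return True if a footage record should be deleted under the rule.

    Footage flagged as evidence is exempt, mirroring the statute's
    exceptions for certain circumstances.
    """
    if retained_as_evidence:
        return False
    return now - captured_at > RETENTION_WINDOW

# Example records (hypothetical timestamps)
now = datetime(2024, 7, 1, 12, 0, tzinfo=timezone.utc)
old_clip = now - timedelta(hours=30)    # past the window: delete
fresh_clip = now - timedelta(hours=2)   # within the window: keep
```

A real compliance job would run such a check against the footage store on a schedule and log every deletion, so the reporting requirements above can be satisfied from the same records.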

DFR programs are just one way police are acquiring drones, but law enforcement and UAV manufacturers are interested in adding drones in other ways, including as part of regular patrols and in response to high-speed vehicle pursuits. These uses also create the risk of law enforcement bypassing important safeguards. Reasonable protections for public privacy, like robust use policies, are not a barrier to public safety but a crucial part of ensuring just and constitutional policing.

Companies are eager to tap this growing market. Police technology company Axon — known for its Tasers and body-worn cameras — recently acquired drone company Dedrone, specifically citing that company's efforts to push DFR programs as one reason for the acquisition. Axon has since established a partnership with Skydio to expand its DFR sales.

It’s clear that as the skies open up for more drone usage, law enforcement will push to procure more of these flying surveillance tools. But police and lawmakers must exercise far more skepticism over what may ultimately prove to be a flashy trend that wastes resources, infringes on people's rights, and results in unforeseen shifts in policing strategy. The public must be kept aware of how cops are coming for their privacy from above.

Driving licences and other official documents leaked by authentication service used by Uber, TikTok, X, and more

A company that helps to authenticate users for big brands had a set of administration credentials exposed online for over a year, potentially allowing access to user identity documents such as driving licenses.

As more and more legislation emerges requiring websites and platforms—like gambling services, social networks, and porn sites—to verify their users’ age, demand for the authentication companies offering that service rises.

You may never have heard of the Israel-based authentication company AU10TIX, but you will certainly recognize some of its major customers, like Uber, TikTok, X, Fiverr, Coinbase, LinkedIn, and Saxo Bank.

AU10TIX advertising authentication and age verification for the world's leading brands

AU10TIX checks users’ identities via the upload of a photo of an official document.

A researcher found that AU10TIX had left the credentials exposed, providing 404 Media with screenshots and data to demonstrate their findings. The credentials led to a logging platform containing data about people who had uploaded documents to prove their identity.

Whoever accessed the platform could peruse information about those people, including name, date of birth, nationality, identification number, and the type of document uploaded, such as a driver’s license, with a link to an image of the identity document itself.
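Incidents like this illustrate why logging pipelines should not store identity fields verbatim. A common mitigation is to redact or mask PII before records reach a log platform. The sketch below is illustrative only; the field names are hypothetical and are not drawn from AU10TIX's actual schema.

```python
# Hypothetical set of fields treated as sensitive in log records
SENSITIVE_FIELDS = {"name", "date_of_birth", "id_number", "document_image_url"}

def redact(record: dict) -> dict:
    """Return a copy of a log record with sensitive fields masked.

    ID-like values keep their last 4 characters so operators can still
    correlate records; every other sensitive field is replaced outright.
    """
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            text = str(value)
            masked[key] = "***" + text[-4:] if key == "id_number" else "[REDACTED]"
        else:
            masked[key] = value
    return masked

# Example: a hypothetical identity-upload log entry
entry = {"name": "Jane Doe", "id_number": "AB1234567", "event": "upload"}
safe_entry = redact(entry)
```

Masking at ingestion time means that even if log credentials leak, as they did here, the log store holds far less usable PII.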

Research showed that the likely source of the credentials was an infostealer on the computer of a Network Operations Center Manager at AU10TIX.

Stolen credentials have been shown to be a major source of breaches, like those recently associated with Snowflake. Snowflake pointed to research which found that one cybercriminal obtained access to multiple organizations’ Snowflake customer instances using stolen customer credentials.

Another major problem is that these sets of credentials get traded and sold all the time, and selling them once is rarely the end of it: digital information can be copied and combined endlessly, leading to huge data sets that criminals can use as they see fit.
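One defensive pattern against leaked credential corpora is checking a password against known breaches without ever transmitting it. Breach-lookup services such as Have I Been Pwned use a k-anonymity scheme: the client sends only the first five hex characters of the password's SHA-1 digest and matches the remaining 35 characters locally against the returned candidate list. A sketch of the client-side hashing step (no network call is made here):

```python
import hashlib

def hibp_range_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest into the 5-char prefix sent to the
    range API and the 35-char suffix matched locally against the response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
# The range endpoint would be queried as GET /range/<prefix>; the response
# is a list of "suffix:count" lines the client scans for its own suffix,
# so the service never learns which password was checked.
```

Because only a 5-character hash prefix leaves the client, thousands of passwords share each bucket, which is what keeps the lookup anonymous.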

We’ve talked about the dangers of data brokers in the past. The California Privacy Protection Agency (CPPA) defines data brokers as businesses that consumers don’t directly interact with, but that buy and sell information about consumers from and to other businesses. There are around 480 data brokers registered with the CPPA. However, that might be just the tip of the iceberg, because there are a host of smaller players active that try to keep a low profile.

Either way, for any company and particularly an authentication company working with sensitive data, having such an account accessible with just login credentials should be grounds for serious penalties.

In a statement given to 404 Media, AU10TIX said it was no longer using the system and had no evidence the data had been used:

“While PII data was potentially accessible, based on our current findings, we see no evidence that such data has been exploited. Our customers’ security is of the utmost importance, and they have been notified.”

For now, there’s not much that individual users of the brands can do apart from keeping an eye out for any official statements and considering an ongoing identity monitoring solution. Below are some general tips on what to do if your data has been part of a data breach:

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.
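The "let a password manager choose one for you" advice above can also be approximated in a few lines of code: Python's standard `secrets` module provides a cryptographically secure random source suitable for generating passwords. A minimal sketch:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure source (secrets, not random)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Example: a 20-character throwaway password
print(generate_password(20))
```

A dedicated password manager remains the better option, since it also stores one unique password per site, but `secrets` is the right tool when generating one programmatically.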

Check your personal data exposure

You can check what personal information of yours has been exposed on our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report.

Hack of Age Verification Company Shows Privacy Danger of Social Media Laws

We’ve said it before: online age verification is incompatible with privacy. Companies responsible for storing or processing sensitive documents like drivers’ licenses are likely to encounter data breaches, potentially exposing not only personal data like users’ government-issued ID, but also information about the sites that they visit. 

This threat is not hypothetical. This morning, 404 Media reported that a major identity verification company, AU10TIX, left login credentials exposed online for more than a year, allowing access to this very sensitive user data. 

A researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents,” including “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents. Platforms reportedly using AU10TIX for identity verification include TikTok and X, formerly Twitter. 

Lawmakers pushing forward with dangerous age verifications laws should stop and consider this report. Proposals like the federal Kids Online Safety Act and California’s Assembly Bill 3080 are moving further toward passage, with lawmakers in the House scheduled to vote in a key committee on KOSA this week, and California's Senate Judiciary committee set to discuss AB 3080 next week. Several other laws requiring age verification for accessing “adult” content and social media content have already passed in states across the country. EFF and others are challenging some of these laws in court.

In the final analysis, age verification systems are surveillance systems. Mandating them forces websites to require visitors to submit information such as government-issued identification to companies like AU10TIX. Hacks and data breaches of this sensitive information are not a hypothetical concern; it is simply a matter of when the data will be exposed, as this breach shows. 

Data breaches can lead to any number of dangers for users: phishing, blackmail, or identity theft, in addition to the loss of anonymity and privacy. Requiring users to upload government documents—some of the most sensitive user data—will hurt all users. 

According to the news report, the AU10TIX data does not appear, so far, to have been accessed beyond what the researcher showed was possible. If age verification requirements are passed into law, users will likely find themselves forced to share their private information across networks of third-party companies if they want to continue accessing and sharing online content. Within a year, it wouldn’t be strange to have uploaded your ID to a half-dozen different platforms.

No matter how vigilant you are, you cannot control what other companies do with your data. If age verification requirements become law, you’ll have to be lucky every time you are forced to share your private information. Hackers will just have to be lucky once. 

California Privacy Watchdog Inks Deal with French Counterpart to Strengthen Data Privacy Protections


In a significant move to bolster data privacy protections, the California Privacy Protection Agency (CPPA) inked a new partnership with France’s Commission Nationale de l'Informatique et des Libertés (CNIL). The collaboration aims to conduct joint research on data privacy issues and share investigative findings that will enhance the capabilities of both organizations in safeguarding personal data.

The partnership between CPPA and CNIL shows the growing emphasis on international collaboration in data privacy protection. Both California and France, along with the broader European Union (EU) through its General Data Protection Regulation (GDPR), recognize that effective data privacy measures require global cooperation. France’s membership in the EU brings additional regulatory weight to this partnership and highlights the necessity of cross-border collaboration to tackle the complex challenges of data protection in an interconnected world.

What the CPPA-CNIL Data Privacy Protections Deal Means

The CPPA on Tuesday outlined the goals of the partnership, stating, “This declaration establishes a general framework of cooperation to facilitate joint internal research and education related to new technologies and data protection issues, share best practices, and convene periodic meetings.” The strengthened framework is designed to enable both agencies to stay ahead of emerging threats and innovations in data privacy.

Michael Macko, the deputy director of enforcement at the CPPA, said there were practical benefits of this collaboration. “Privacy rights are a commercial reality in our global economy,” Macko said. “We’re going to learn as much as we can from each other to advance our enforcement priorities.” This mutual learning approach aims to enhance the enforcement capabilities of both agencies, ensuring they can better protect consumers’ data in an ever-evolving digital landscape.

CPPA’s Collaborative Approach

The partnership with CNIL is not the CPPA’s first foray into international cooperation. The California agency also collaborates with three other major international organizations: the Asia Pacific Privacy Authorities (APPA), the Global Privacy Assembly, and the Global Privacy Enforcement Network (GPEN). These collaborations help create a robust network of privacy regulators working together to uphold high standards of data protection worldwide. The CPPA was established following the implementation of California's groundbreaking consumer privacy law, the California Consumer Privacy Act (CCPA). As the first comprehensive consumer privacy law in the United States, the CCPA set a precedent for other states and countries looking to enhance their data protection frameworks. The CPPA’s role as an independent data protection authority mirrors that of the CNIL, France’s first independent data protection agency, which highlights the pioneering efforts of both regions in the field of data privacy.

By combining their resources and expertise, the CPPA and CNIL aim to tackle a range of data privacy issues, from the implications of new technologies to the enforcement of data protection laws. This partnership is expected to lead to the development of innovative solutions and best practices that can be shared with other regulatory bodies around the world. As more organizations and governments recognize the importance of safeguarding personal data, the need for robust and cooperative frameworks becomes increasingly clear. The CPPA-CNIL partnership serves as a model for other regions looking to strengthen their data privacy measures through international collaboration.

Neiman Marcus confirms breach. Is the customer data already for sale?

Luxury retail chain Neiman Marcus has begun to inform customers about a cyberattack it discovered in May. The attacker compromised a database platform storing customers’ personal information.

The letter tells customers:

“Promptly after learning of the issue, we took steps to contain it, including by disabling access to the relevant database platform.”

In the data breach notification, Neiman Marcus says 64,472 people are affected.

An investigation showed that the data contained information such as name, contact data, date of birth, and Neiman Marcus or Bergdorf Goodman gift card numbers. According to Neiman Marcus, the exposed data does not include gift card PINs. Shortly after the data breach disclosure, a cybercriminal going by the name “Sp1d3r” posted on BreachForums that they were willing to sell the data.

Post by Sp1d3r offering Neiman Marcus data for sale which has since been removed
Image courtesy of Daily Dark Web

“Neiman Marcus not interested in paying to secure data. We give them opportunity to pay and they decline. Now we sell. Enjoy!”

According to Sp1d3r, the data includes name, address, phone, dates of birth, email, last four digits of Social Security Numbers, and much more in 6 billion rows of customer shopping records, employee data, and store information.

Neiman Marcus is reportedly one of the many victims of the Snowflake incident, in which the third-party platform used by many big brands was targeted by cybercriminals. The name Sp1d3r has been associated with the selling of information belonging to other Snowflake customers.

Oddly enough, Sp1d3r’s post seems to have since disappeared.

Later screenshot of Sp1d3r’s profile, showing one fewer post and thread

Sp1d3r’s post count has dropped back to 19, down from the 20 displayed in the earlier screenshot.

So, the post has either been removed, withdrawn, or hidden for reasons which are currently unknown. As usual, we will keep an eye on how this develops.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your exposure

While it is still unclear how much information was involved in the Neiman Marcus breach, it’s likely you’ve had other personal information exposed online in previous data breaches. You can check what personal information of yours has been exposed with our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Alert: Austrian Non-Profit Accuses Google Privacy Sandbox

Google’s initiative to phase out third-party tracking cookies through its Privacy Sandbox has drawn criticism from the Austrian privacy advocacy group noyb (none of your business). The non-profit alleges that Google’s proposed solution still facilitates user tracking, albeit in a different form.

Allegations of Misleading Practices

According to noyb, Google’s Privacy Sandbox, marketed as […]

The post Alert: Australian Non-Profit Accuses Google Privacy Sandbox appeared first on TuxCare.

The post Alert: Australian Non-Profit Accuses Google Privacy Sandbox appeared first on Security Boulevard.

Rafel RAT Used in 120 Campaigns Targeting Android Device Users


Multiple bad actors are using the Rafel RAT malware in about 120 campaigns aimed at compromising Android devices and launching a broad array of attacks that range from stealing data and deleting files to espionage and ransomware. Rafel RAT is an open-source remote administration tool that is spread through phishing campaigns aimed at convincing targets […]

The post Rafel RAT Used in 120 Campaigns Targeting Android Device Users appeared first on Security Boulevard.

Change Healthcare confirms the customer data stolen in ransomware attack

For the first time since news broke about a ransomware attack on Change Healthcare, the company has released details about the data stolen during the attack.

First, a quick refresher: On February 21, 2024, Change Healthcare experienced serious system outages due to a cyberattack. The incident led to widespread billing outages, as well as disruptions at pharmacies across the United States. Patients were left facing enormous pharmacy bills, small medical providers teetered on the edge of insolvency, and the government scrambled to keep the money flowing and the lights on. The ransomware group ALPHV claimed responsibility for the attack.

But shortly after, the ALPHV group disappeared in an unconvincing exit scam designed to make it look as if the FBI had seized control over the group’s website. Then a new ransomware group, RansomHub, listed the organization as a victim on its dark web leak site, saying it possessed 4 TB of “highly selective data,” relating to “all Change Health clients that have sensitive data being processed by the company.”

In April, parent company UnitedHealth Group released an update, saying:

“Based on initial targeted data sampling to date, the company has found files containing protected health information (PHI) or personally identifiable information (PII), which could cover a substantial proportion of people in America.”

Now, Change Healthcare has detailed the types of medical and patient data that were stolen. Although Change cannot provide exact details for every individual, the exposed information may include:

  • Contact information: Names, addresses, dates of birth, phone numbers, and email addresses.
  • Health insurance information: Details about primary, secondary, or other health plans/policies, insurance companies, member/group ID numbers, and Medicaid-Medicare-government payor ID numbers.
  • Health information: Medical record numbers, providers, diagnoses, medicines, test results, images, and details of care and treatment.
  • Billing, claims, and payment information: Claim numbers, account numbers, billing codes, payment card details, financial and banking information, payments made, and balances due.
  • Other personal information: Social Security numbers, driver’s license or state ID numbers, and passport numbers.

Change Healthcare added:

“The information that may have been involved will not be the same for every impacted individual. To date, we have not yet seen full medical histories appear in the data review.”

Change Healthcare says it will send written letters—as long as it has a person’s address and they haven’t opted out of notifications—once it has concluded the data review.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your digital footprint

Malwarebytes has a new free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.



Social Media Warning Labels, Should You Store Passwords in Your Web Browser?

In this episode of the Shared Security Podcast, the team debates the Surgeon General’s recent call for social media warning labels and explores the pros and cons. Scott discusses whether passwords should be stored in web browsers, potentially sparking strong opinions. The hosts also provide an update on Microsoft’s delayed release of CoPilot Plus PCs […]

The post Social Media Warning Labels, Should You Store Passwords in Your Web Browser? appeared first on Shared Security Podcast.

The post Social Media Warning Labels, Should You Store Passwords in Your Web Browser? appeared first on Security Boulevard.


First million breached Ticketmaster records released for free

The cybercriminal acting under the name “Sp1d3r” has given away, free of charge, the first 1 million records from the data set they claim to have stolen from Ticketmaster/Live Nation.

When Malwarebytes Labs first learned about this data breach, it happened to be the first major event that was shared on the resurrected BreachForums, and someone acting under the handle “ShinyHunters” offered the full details (name, address, email, phone) of 560 million customers for sale.

The same data set was offered for sale in an almost identical post on another forum by someone using the handle “SpidermanData.” This could be the same person or a member of the ShinyHunters group.

Following this event, Malwarebytes Labs advised readers on how to respond and stay safe. Importantly, even before a breach is confirmed, while the details have yet to be verified and the breach subject is readying its public statements, the very news of a suspected breach can be used by opportunistic cybercriminals as a phishing lure.

Later, Ticketmaster confirmed the data breach.

Bleeping Computer spoke to ShinyHunters who said they already had interested buyers. Now, Sp1d3r, who was seen posting earlier about Advance Auto Parts customer data and Truist Bank data, has released 1 million Ticketmaster related data records for free.

Post by Sp1d3r giving away 1 million Ticketmaster data records

In a post on BreachForums, Sp1d3r said:

“Ticketmaster will not respond to request to buy data from us.

They care not for the privacy of 680 million customers, so give you the first 1 million users free.”

The cybercriminals active on those forums will jump at the opportunity and undoubtedly try to monetize those records. This likely means that innocent users whose details are included in the first million released records could receive a heavy volume of spam and phishing emails in the coming days.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your exposure

While it is still unclear how much information was involved, it’s likely you’ve had other personal information exposed online in previous data breaches. You can check what personal information of yours has been exposed with our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report.



Was T-Mobile compromised by a zero-day in Jira?

A moderator of the notorious data-breach trading platform BreachForums is offering for sale data they claim comes from a breach at T-Mobile.

The moderator, going by the name of IntelBroker, describes the data as containing source code, SQL files, images, Terraform data, t-mobile.com certifications, and “Siloprograms.” (We’ve not heard of siloprograms, and can’t find a reference to them anywhere, so perhaps it’s a mistranslation or typo.)

Post offering data for sale, supposedly from a T-Mobile internal breach

To prove they had the data, IntelBroker posted several screenshots showing access with administrative privileges to a Confluence server and T-Mobile’s internal Slack channels for developers.

But according to sources known to BleepingComputer, the data shared by IntelBroker actually consists of older screenshots. These screenshots, which show T-Mobile’s infrastructure, were hosted on the servers of a known but unnamed third-party vendor, from where they were stolen.

When we looked at the screenshots IntelBroker attached to their post, we spotted something interesting in one of them.

Screenshot of a vulnerability search returning CVE-2024-1597

This screenshot shows a search query for a critical vulnerability in Jira, a project management tool used by teams to plan, track, release and support software. It’s typically a place where you could find the source code of works in progress.

The search returns CVE-2024-1597, a SQL injection vulnerability. SQL injection happens when a cybercriminal submits malicious SQL code through a form on a website, such as a login page, instead of the data the form is asking for; if the application pastes that input straight into a database query, the attacker’s code gets executed. According to Atlassian’s May security bulletin, the vulnerability affects Confluence Data Center and Server.
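To illustrate the class of bug (a generic sketch, not Atlassian’s code or the actual CVE-2024-1597 flaw): a query built by pasting user input into the SQL string can be subverted by a classic payload, while a parameterized query treats the same input as plain data.

```python
import sqlite3

# In-memory demo database with one user (illustrative only, not
# related to any real system mentioned in this article).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # UNSAFE: user input is pasted directly into the SQL string.
    query = (
        f"SELECT COUNT(*) FROM users "
        f"WHERE name = '{name}' AND password = '{password}'"
    )
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # SAFE: parameterized query; input is treated as data, not SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

# The classic payload bypasses the password check entirely:
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True: attacker is "logged in"
print(login_safe("alice", payload))        # False: payload is just a string
```

The injected quote turns the WHERE clause into `password = '' OR '1'='1'`, which is true for every row; the parameterized version never interprets the input as SQL.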

For a better understanding, it’s important to note that Jira and Confluence are both products created by Atlassian, where Jira is the project management and issue tracking tool and Confluence is the collaboration and documentation tool. They are often used together.

If IntelBroker has a working exploit for the SQL injection vulnerability, this could also explain their claim that they have the source code of three internal tools used at Apple, including a single sign-on authentication system known as AppleConnect.

This theory is supported by the fact that IntelBroker is also offering a Jira zero-day for sale.

IntelBroker offering a Jira zero-day for sale

“I’m selling a zero-day RCE for Atlassian’s Jira.

Works for the latest version of the desktop app, as well as Jira with confluence.

No login is required for this, and works with Okta SSO.”

If this is true, then the exploit, or the data obtained with it, might be used in data breaches involving personal data.

Meanwhile, T-Mobile has denied it has suffered a breach, saying it is investigating whether there has been a breach at a third-party provider.

“We have no indication that T-Mobile customer data or source code was included and can confirm that the bad actor’s claim that T-Mobile’s infrastructure was accessed is false.”



TikTok facing fresh lawsuit in US over children’s privacy

The Federal Trade Commission (FTC) has announced it’s referred a complaint against TikTok and parent company ByteDance to the Department of Justice.

The investigation originally focused on Musical.ly, which ByteDance acquired on November 10, 2017, and later merged into TikTok.

The FTC started a compliance review of Musical.ly following a 2019 settlement with the company over violations of the Children’s Online Privacy Protection Act (COPPA). In that settlement, Musical.ly was fined $5.7 million for collecting personal information from children without parental consent.

One of the main concerns was that Musical.ly did not ask the user’s age and later failed to go back and request age information for people who already had accounts.

COPPA requires sites and services like Musical.ly and TikTok – among other things – to get parental consent before collecting personal information from children under 13.

Musical.ly also failed to handle complaints properly. The FTC found that—in just a two-week period in September 2016—the company received over 300 complaints from parents asking Musical.ly to delete their child’s account. However, under COPPA it’s not enough just to delete existing accounts; companies also have to remove the kids’ videos and profiles from their servers, which Musical.ly failed to do.

In 2022, TikTok itself faced a $28 million fine for failing to protect children’s privacy after an investigation into a possible breach of the UK’s data protection laws.

In the US, TikTok agreed to pay $92 million in 2021 to settle dozens of lawsuits alleging that it harvested personal data from users, including information using facial recognition technology, without consent, and shared the data with third parties.

The FTC states that during the investigation it uncovered reasons to believe that “defendants are violating or are about to violate the law and that a proceeding is in the public interest.”

The FTC also said it usually doesn’t publicize the referral of complaints but feels it is in the public interest to do so now.

TikTok has been in the crosshairs of privacy and security professionals and politicians for years.

In June 2022, an FCC (Federal Communications Commission) commissioner called on the CEOs of Apple and Google to remove TikTok from their app stores, calling it an unacceptable national security risk because of its Chinese ownership.

In 2023, General Paul Nakasone, then director of the National Security Agency (NSA), referred to TikTok as a “loaded gun” in the hands of America’s TikTok-addicted youth.

Recently, we reported about the take-over of some high-profile TikTok accounts just by opening a Direct Message.

And the clock is ticking for TikTok’s presence in the US: the US Senate has approved a bill that would effectively ban TikTok from the country unless Chinese owner ByteDance gives up its stake in the still immensely popular app.

Somehow we don’t think we’ve heard the last of this.

Check your digital footprint

Malwarebytes has a new free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.

43% of couples experience pressure to share logins and locations, Malwarebytes finds

All isn’t fair in love and romance today, as 43% of people in a committed relationship said they have felt pressured by their own partners to share logins, passcodes, and/or locations. A worrying 7% admitted that this type of pressure has included the threat of breaking up or the threat of physical or emotional harm.

These are the latest findings from original research conducted by Malwarebytes to explore how romantic couples navigate shared digital access to one another’s devices, accounts, and location information.

In short, digital sharing is the norm in modern relationships, but it doesn’t come without its fears.

While everybody shares some type of device, account, or location access with their significant other (100% of respondents), and plenty grant their significant other access to at least one personal account (85%), a sizeable portion longs for something different—31% said they worry about “how easy it is for my partner to track what I’m doing and where I am at all times because of how much we share,” and 40% worry that “telling my partner I don’t want to share logins, PINs, and/or locations would upset them.”

By surveying 500 people in committed relationships in the United States, Malwarebytes has captured a unique portrait of what it means to date, marry, and be in love in 2024—a part of life that is now inseparable from smart devices, apps, and the internet at large.

The complete findings can be found in the latest report, “What’s mine is yours: How couples share an all-access pass to their digital lives.” You can read the full report below.

Here are some of the key findings:

  • Partners share their personal login information for an average of 12 different types of accounts.
  • 48% of partners share the login information of their personal email accounts.
  • 30% of partners regret sharing location tracking.
  • 18% of partners regret sharing account access. The number is significantly higher for men (30%).
  • 29% of partners said an ex-partner used their accounts to track their location, impersonate them, access their financial accounts, and other harms.
  • Around one in three Gen Z and Millennial partners report an ex has used their accounts to stalk them.

But the data doesn’t only point to causes for concern. It also highlights an opportunity for learning. As Malwarebytes reveals in this latest research, people are looking for guidance, with seven in 10 people admitting they want help navigating digital co-habitation.

According to one Gen Z survey respondent:

“I feel like it might take some effort (to digitally disentangle) because we are more seriously involved. We have many other kinds of digital ties that we would have to undo in order to break free from one another.”

That is why, today, Malwarebytes is also launching its online resource hub: Modern Love in the Digital Age. At this new guidance portal, readers can learn about whether they should share their locations with their partners, why car location tracking presents a new problem for some couples, and how they can protect themselves from online harassment. Access the hub below.

The Surgeon General's Fear-Mongering, Unconstitutional Effort to Label Social Media

Surgeon General Vivek Murthy’s extraordinarily misguided and speech-chilling call this week to label social media platforms as harmful to adolescents is shameful fear-mongering that lacks scientific evidence and turns the nation’s top physician into a censor. This claim is particularly alarming given the far more complex and nuanced picture that studies have drawn about how social media and young people’s mental health interact.

The Surgeon General’s suggestion that speech be labeled as dangerous is extraordinary. Communications platforms are not comparable to unsafe food, unsafe cars, or cigarettes, all of which are physical products—rather than communications platforms—that can cause physical injury. Government warnings on speech implicate our fundamental rights to speak, to receive information, and to think. Murthy’s effort will harm teens, not help them, and the announcement puts the surgeon general in the same category as censorial public officials like Anthony Comstock.

There is no scientific consensus that social media is harmful to children's mental health. Social science shows that social media can help children overcome feelings of isolation and anxiety. This is particularly true for LBGTQ+ teens. EFF recently conducted a survey in which young people told us that online platforms are the safest spaces for them, where they can say the things they can't in real life ‘for fear of torment.’ They say these spaces have improved their mental health and given them a ‘haven’ to talk openly and safely. This comports with Pew Research findings that teens are more likely to report positive than negative experiences in their social media use. 

Additionally, Murthy’s effort to label social media creates significant First Amendment problems in its own right, as any government labeling effort would be compelled speech and courts are likely to strike it down.

Young people’s use of social media has been under attack for several years. Several states have recently introduced and enacted unconstitutional laws that would require age verification on social media platforms, effectively banning some young people from them. Congress is also debating several federal censorship bills, including the Kids Online Safety Act and the Kids Off Social Media Act, that would seriously impact young people’s ability to use social media platforms without censorship. Last year, Montana banned the video-sharing app TikTok, citing both its Chinese ownership and its interest in protecting minors from harmful content. That ban was struck down as unconstitutionally overbroad; despite that, Congress passed a similar federal law forcing TikTok’s owner, ByteDance, to divest the company or face a national ban.

Like Murthy, lawmakers pushing these regulations cherry-pick the research, nebulously citing social media’s impact on young people, and dismissing both positive aspects of platforms and the dangerous impact these laws have on all users of social media, adults and minors alike. 

We agree that social media is not perfect, and can have negative impacts on some users, regardless of age. But if Congress is serious about protecting children online, it should enact policies that promote choice in the marketplace and digital literacy. Most importantly, we need comprehensive privacy laws that protect all internet users from predatory data gathering and sales that target us for advertising and abuse.

Microsoft Recall delayed after privacy and security concerns

Microsoft has announced it will postpone the broadly available preview of the heavily discussed Recall feature for Copilot+ PCs. Copilot+ PCs are personal computers that come equipped with several artificial intelligence (AI) features.

The Recall feature tracks anything from web browsing to voice chats. The idea is that Recall can help users reconstruct past activity by taking regular screenshots of their activity and storing them locally. The user can then search the resulting database for anything they’ve seen on their PC.

However, Recall has received heavy criticism from security researchers and privacy advocates since it was announced last month. The ensuing discussion saw a lot of contradictory statements. For example, Microsoft claimed that Recall would be disabled by default, while the original documentation said otherwise.

Researchers demonstrated how easy it was to extract and search through Recall snapshots on a compromised system. While some may remark that the compromised system is the problem in that equation—and they are not wrong—Recall would potentially provide an attacker with a lot of information that normally would not be accessible. Basically, it would be a goldmine that spyware and information stealers could easily access and search.
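The researchers’ point can be illustrated with a sketch: if captured screen text sits in an ordinary local database, any code running as the user can trawl it with a one-line query. The schema and sample data below are invented for this illustration and are not Recall’s actual on-disk format.

```python
import sqlite3

# Hypothetical snapshot store: timestamped screen text, as an OCR-based
# recorder might keep it. Invented schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snapshots (captured_at TEXT, screen_text TEXT)")
conn.executemany(
    "INSERT INTO snapshots VALUES (?, ?)",
    [
        ("2024-06-01T09:00", "Welcome back, signing in to example-bank.com"),
        ("2024-06-01T09:01", "Your one-time passcode is 914207"),
        ("2024-06-01T09:05", "Quarterly report draft"),
    ],
)

def search(term: str) -> list:
    # A single LIKE query surfaces every snapshot mentioning the term.
    rows = conn.execute(
        "SELECT captured_at FROM snapshots WHERE screen_text LIKE ?",
        (f"%{term}%",),
    ).fetchall()
    return [t for (t,) in rows]

print(search("passcode"))  # ['2024-06-01T09:01']
```

This is why an unencrypted, user-readable archive of everything on screen is such an attractive target for information stealers: one query does the sifting.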

In Microsoft’s own words:

“Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.”

Microsoft didn’t see the problem; its vice chair and president, Brad Smith, even used Recall as an example of how Microsoft is secure during the congressional hearing “A Cascade of Security Failures: Assessing Microsoft Corporation’s Cybersecurity Shortfalls and the Implications for Homeland Security.”

But now things have changed, and Recall will now only be available for participants in the Windows Insider Program (WIP) in the coming weeks, instead of being rolled out to all Copilot+ PC users on June 18 as originally planned.

Another security measure, added only as an afterthought, is that users will now have to authenticate with Windows Hello in order to activate Recall and to view their screenshot timeline.

In its blog, Microsoft indicates it will act on the feedback it expects to receive from WIP users.

“This decision is rooted in our commitment to providing a trusted, secure and robust experience for all customers and to seek additional feedback prior to making the feature available to all Copilot+ PC users.”

Our hope is that the WIP community will convince Microsoft to abandon the whole Recall idea. If not, we will make sure to let you know how you can disable it or use it more securely if you wish to do so.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Truist bank confirms data breach

On Wednesday June 12, 2024, a well-known dark web data broker and cybercriminal acting under the name “Sp1d3r” offered a significant amount of data allegedly stolen from Truist Bank for sale.

Truist is a US bank holding company and operates 2,781 branches in 15 states and Washington DC. By assets, it is in the top 10 of US banks. In 2020, Truist provided financial services to about 12 million consumer households.

The online handle of the seller immediately raised the suspicion that this was yet another Snowflake related data breach.

Post by Sp1d3r on BreachForums offering Truist Bank data for sale

The post also mentions Suntrust bank because Truist Bank arose after SunTrust Banks and BB&T (Branch Banking and Trust Company) merged in December 2019.

For the price of $1,000,000, other cybercriminals can allegedly get their hands on:

  • Employee Records: 65,000 records containing detailed personal and professional information.
  • Bank Transactions: Data including customer names, account numbers, and balances.
  • IVR Source Code: Source code for the bank’s Interactive Voice Response (IVR) funds transfer system.

IVR is a technology that lets telephone users interact with a computer-operated phone system using voice and dual-tone multi-frequency (DTMF, also known as Touch-Tone) signals entered on a keypad. Access to the source code may enable criminals to find security vulnerabilities they can abuse.

Given the source and the location where the data were offered, we decided at the time to keep an eye on things but not actively report on it. But now a spokesperson for Truist Bank told BleepingComputer:

“In October 2023, we experienced a cybersecurity incident that was quickly contained.”

Further, the spokesperson stated that after an investigation, the bank notified a small number of clients and denied any connection with Snowflake.

“That incident is not linked to Snowflake. To be clear, we have found no evidence of a Snowflake incident at our company.”

But the bank disclosed that based on new information that came up during the investigation, it has started another round of informing affected customers.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.
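The "choose a strong password" advice above is easy to automate. As a minimal sketch (the function name and the 20-character length are our choices for illustration, not a vendor recommendation), Python's standard `secrets` module can generate a password the way a password manager would:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a random password from letters, digits, and punctuation.

    Uses the cryptographically secure `secrets` module rather than
    `random`, which is not suitable for security purposes.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until lowercase, uppercase, and digit classes all appear.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw
```

A 20-character password drawn from roughly 94 symbols has far more entropy than anything memorable, which is exactly why storing it in a password manager beats reusing a human-chosen one.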

Check your exposure

While it’s still unclear exactly how much information was involved, it’s likely you’ve had other personal information exposed online in previous data breaches. You can check what personal information of yours has been exposed with our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Connecticut Has Highest Rate of Health Care Data Breaches: Study


It’s no secret that hospitals and other health care organizations are among the top targets for cybercriminals. The ransomware attacks this year on UnitedHealth Group’s Change Healthcare subsidiary, nonprofit organization Ascension, and most recently the National Health Service in England illustrate not only the damage to these organizations’ infrastructure and the personal health data that’s..

The post Connecticut Has Highest Rate of Health Care Data Breaches: Study appeared first on Security Boulevard.

EFF to Ninth Circuit: Abandoning a Phone Should Not Mean Abandoning Its Contents

This post was written by EFF legal intern Danya Hajjaji.

Law enforcement should be required to obtain a warrant to search data contained in abandoned cell phones, EFF and others explained in a friend-of-the-court brief to the Ninth Circuit Court of Appeals.

The case, United States v. Hunt, involves law enforcement’s seizure and search of an iPhone the defendant left behind after being shot and taken to the hospital. The district court held that the iPhone’s physical abandonment meant that the defendant also abandoned the data stored on the phone. In support of the defendant’s appeal, we urged the Ninth Circuit to reverse the district court’s ruling and hold that the Fourth Amendment’s abandonment exception does not apply to cell phones: as it must in other circumstances, law enforcement should generally have to obtain a warrant before it searches someone’s cell phone.

Cell phones differ significantly from other physical property. They are pocket-sized troves of highly sensitive information with immense storage capacity. Today’s phone carries and collects vast and varied data that encapsulates a user’s daily life and innermost thoughts.

Courts—including the US Supreme Court—have recognized that cell phones contain the “sum of an individual’s private life.” And, because of this recognition, law enforcement must generally obtain a warrant before it can search someone’s phone.

While people routinely carry cell phones, they also often lose them. That should not mean losing the data contained on the phones.

While the Fourth Amendment’s ”abandonment doctrine” permits law enforcement to conduct a warrantless seizure or search of an abandoned item, EFF’s brief explains that this precedent does not mechanically apply to cell phones. As the Supreme Court has recognized multiple times, the rote application of case law from prior eras with less invasive and revealing technologies threatens our Fourth Amendment protections.

Our brief goes on to explain that a cell phone owner rarely (if ever) intentionally relinquishes their expectation of privacy and possessory interests in the data on their cell phone, as they must for the abandonment doctrine to apply. The realities of the modern cell phone rarely imply an intent to discard the wealth of data it contains. Cell phone data is not usually confined to the phone itself, and is instead stored in the “cloud” and accessible across multiple devices (such as laptops, tablets, and smartwatches).

We hope the Ninth Circuit recognizes that expanding the abandonment doctrine in the manner envisioned by the district court in Hunt would make today’s cell phone an accessory to the erosion of Fourth Amendment rights.

The Next Generation of Cell-Site Simulators is Here. Here’s What We Know.

Dozens of policing agencies are currently using cell-site simulators (CSS) by Jacobs Technology and its Engineering Integration Group (EIG), according to newly-available documents on how that company provides CSS capabilities to local law enforcement. 

A proposal document from Jacobs Technology, provided to the Massachusetts State Police (MSP) and first spotted by the Boston Institute for Nonprofit Journalism (BINJ), outlines elements of the company’s CSS services, which include discreet integration of the CSS system into a Chevrolet Silverado and lifetime technical support. The proposal document is part of a winning bid Jacobs submitted to MSP earlier this year for a nearly $1-million contract to provide CSS services, representing the latest customer for one of the largest providers of CSS equipment.

An image of the Jacobs CSS system as integrated into a Chevrolet Silverado for the Virginia State Police. Source: 2024 Jacobs Proposal Response

The proposal document from Jacobs provides some of the most comprehensive information about modern CSS that the public has had access to in years. It confirms that law enforcement has access to CSS capable of operating on 5G as well as older cellular standards. It also gives us our first look at modern CSS hardware. The Jacobs system runs on at least nine software-defined radios that simulate cellular network protocols on multiple frequencies and can also gather Wi-Fi intelligence. As these documents describe, these CSS are meant to be concealed within a common vehicle. Antennas are hidden under a false roof so nothing can be seen outside the vehicle, a shift from the more visible antennas and cargo-van-sized deployments we’ve seen before. The system also comes with TRACHEA2+ and JUGULAR2+ modules for direction finding and mobile direction finding.

The Jacobs 5G CSS base station system. Source: 2024 Jacobs Proposal Response

CSS, also known as IMSI catchers, are among law enforcement’s most closely-guarded secret surveillance tools. They act like real cell phone towers, “tricking” mobile devices into connecting to them, designed to intercept the information that phones send and receive, like the location of the user and metadata for phone calls, text messages, and other app traffic. CSS are highly invasive and used discreetly. In the past, law enforcement used a technique called “parallel construction”—collecting evidence in a different way to reach an existing conclusion in order to avoid disclosing how law enforcement originally collected it—to circumvent public disclosure of location findings made through CSS. In Massachusetts, agencies are expected to get a warrant before conducting any cell-based location tracking. The City of Boston is also known to own a CSS. 

This technology works like a dragnet rather than a single focused hook in the water. Every phone in the vicinity connects with the device; even people completely unrelated to an investigation get wrapped up in the surveillance. CSS, like other surveillance technologies, subject civilians to widespread data collection, even those who have not been involved with a crime, and have been used against protestors and other protected groups, undermining their civil liberties. Their adoption should require public disclosure, but this rarely occurs. These new records provide insight into the continued adoption of this technology. It remains unclear whether MSP has policies to govern its use. CSS may also interfere with the ability to call emergency services, especially for people who rely on accessibility technologies, such as those who are deaf or hard of hearing.

Central to the MSP contract is the modification of a Chevrolet Silverado with the CSS system. This includes both the surreptitious installation of the CSS hardware in the truck and the integration of its software user interface into the vehicle’s navigation system. According to Jacobs, this is the kind of installation with which it has extensive experience.

Jacobs has built its CSS business on military and intelligence community relationships forged in the years after September 11, 2001, and those relationships now inform the development of a tool used in domestic communities rather than foreign warzones. Harris Corporation, later L3Harris Technologies, Inc., was the largest provider of CSS technology to domestic law enforcement but stopped selling to non-federal agencies in 2020. Once Harris stopped selling to local law enforcement, the market opened to several competitors, one of the largest of which was KeyW Corporation. Following Jacobs’s 2019 acquisition of The KeyW Corporation and its Engineering Integration Group (EIG), Jacobs is now a leading provider of CSS to police, and it claims to have more than 300 current CSS deployments globally. EIG’s CSS engineers have experience with the tool dating to late 2001, and they now provide the full spectrum of CSS-related services to clients, including integration into vehicles, training, and maintenance, according to the document. Jacobs CSS equipment is operational in 35 state and local police departments, according to the documents.

EFF has been able to identify 13 agencies using the Jacobs equipment, and, according to EFF’s Atlas of Surveillance, more than 70 police departments have been known to use CSS. Our team is currently investigating possible acquisitions in California, Massachusetts, Michigan, and Virginia. 

An image of the Jacobs CSS system interface integrated into the factory-provided vehicle navigation system. Source: 2024 Jacobs Proposal Response

The proposal also includes details on other agencies’ use of the tool, including that of the Fontana, CA Police Department, which it says has deployed its CSS more than 300 times between 2022 and 2023, and Prince George's County Sheriff (MO), which has also had a Chevrolet Silverado outfitted with CSS. 

Jacobs isn’t the lone competitor in the domestic CSS market. Cognyte Software and Tactical Support Equipment, Inc. also bid on the MSP contract, and last month, the City of Albuquerque closed a call for a cell-site simulator that it awarded to Cognyte Software Ltd. 

Adobe clarifies Terms of Service change, says it doesn’t train AI on customer content

Following days of user pushback that included allegations of forcing a “spyware-like” Terms of Service (ToS) update into its products, design software giant Adobe explained itself with several clarifications.

Apparently, the concerns raised by the community, especially among Photoshop and Substance 3D users, caused the company to reflect on the language it used in the ToS. The adjustments that Adobe announced earlier this month suggested that users give the company unlimited access to all their materials—including materials covered by company Non-Disclosure Agreements (NDAs)—for content review and similar purposes.

As Adobe included in its Terms of Service update:

“As a Business User, you may have different agreements with or obligations to a Business, which may affect your Business Profile or your Content. Adobe is not responsible for any violation by you of such agreements or obligations.”

This wording immediately sparked the suspicion that the company intends to use user-generated content to train its AI models. In particular, users balked at the following language:

“[…] you grant us a non-exclusive, worldwide, royalty-free sublicensable, license, to use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, and translate the Content.”

To reassure these users, on June 10, Adobe explained:

“We don’t train generative AI on customer content. We are adding this statement to our Terms of Use to reassure people that is a legal obligation on Adobe. Adobe Firefly is only trained on a dataset of licensed content with permission, such as Adobe Stock, and public domain content where copyright has expired.”

Alas, several artists found images that reference their work on Adobe’s stock platform.

As we have explained many times, the length and the use of legalese in the ToS does not do either the user or the company any favors. It seems that Adobe understands this now as well.

“First, we should have modernized our Terms of Use sooner. As technology evolves, we must evolve the legal language that evolves our policies and practices not just in our daily operations, but also in ways that proactively narrow and explain our legal requirements in easy-to-understand language.”

Adobe also said in its blog post that it realizes it has to earn its users’ trust, that it is taking the feedback seriously, and that the feedback will inform further changes. Most importantly, it stressed that you own your content, that you can opt out of the product improvement program, and that Adobe does not scan content stored locally on your computer.

Adobe expects to roll out its new terms of service on June 18, which aim to better clarify what Adobe is permitted to do with its customers’ work. This is a developing story, and we’ll keep you posted.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Apple Launches ‘Private Cloud Compute’ Along with Apple Intelligence AI


In a bold attempt to redefine cloud security and privacy standards, Apple has unveiled Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed to back its new Apple Intelligence with safety and transparency while integrating Apple devices into the cloud. The move comes after recognition of the widespread concerns surrounding the combination of artificial intelligence and cloud technology.

Private Cloud Compute Aims to Secure Cloud AI Processing

Apple has stated that its new Private Cloud Compute (PCC) is designed to enforce privacy and security standards over AI processing of private information. “For the first time ever, Private Cloud Compute brings the same level of security and privacy that our users expect from their Apple devices to the cloud,” said an Apple spokesperson.

Source: security.apple.com

At the heart of PCC is Apple’s stated commitment to on-device processing. “When Apple is responsible for user data in the cloud, we protect it with state-of-the-art security in our services,” the spokesperson explained. “But for the most sensitive data, we believe end-to-end encryption is our most powerful defense.”

Despite this commitment, Apple has stated that for more sophisticated AI requests, Apple Intelligence needs to leverage larger, more complex models in the cloud. This presented a challenge to the company, as traditional cloud AI security models were found lacking in meeting privacy expectations. Apple stated that PCC is designed with several key features to ensure the security and privacy of user data, claiming the following implementations:
  • Stateless computation: PCC processes user data only for the purpose of fulfilling the user's request, and then erases the data.
  • Enforceable guarantees: PCC is designed to provide technical enforcement for the privacy of user data during processing.
  • No privileged access: PCC does not allow Apple or any third party to access user data without the user's consent.
  • Non-targetability: PCC is designed to prevent targeted attacks on specific users.
  • Verifiable transparency: PCC provides transparency and accountability, allowing users to verify that their data is being processed securely and privately.

Apple Invites Experts to Test Standards; Online Reactions Mixed

At this week’s Apple annual developer conference, Apple CEO Tim Cook described Apple Intelligence as a “personal intelligence system” that could understand and contextualize personal data to deliver results that are “incredibly useful and relevant,” making “devices even more useful and delightful.” Apple Intelligence mines and processes data across apps, software, and services on Apple devices. This mined data includes emails, images, messages, texts, documents, audio files, videos, contacts, calendars, Siri conversations, online preferences, and past search history.

The new PCC system attempts to ease consumer privacy and safety concerns. In its description of “verifiable transparency,” Apple stated:
"Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises. We already have an earlier requirement for our guarantees to be enforceable. Hypothetically, then, if security researchers had sufficient access to the system, they would be able to verify the guarantees."
However, despite Apple’s assurances, the announcement of Apple Intelligence drew mixed reactions online, with some already likening it to Microsoft’s Recall. In reaction to Apple’s announcement, Elon Musk took to X to announce that Apple devices may be banned from his companies, citing the integration of OpenAI as an “unacceptable security violation.” Others have also raised questions about the information that might be sent to OpenAI.

Source: X.com

According to Apple’s statements, requests made on its devices are not stored by OpenAI, and users’ IP addresses are obscured. Apple stated that it would also add “support for other AI models in the future.”

Andy Wu, an associate professor at Harvard Business School who researches the use of AI by tech companies, highlighted the challenges of running powerful generative AI models while limiting their tendency to fabricate information. “Deploying the technology today requires incurring those risks, and doing so would be at odds with Apple’s traditional inclination toward offering polished products that it has full control over.”

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

23andMe data breach under joint investigation in two countries

The British and Canadian privacy authorities have announced they will undertake a joint investigation into the data breach at global genetic testing company 23andMe that was discovered in October 2023.

On Friday October 6, 2023, 23andMe confirmed via a somewhat opaque blog post that cybercriminals had “obtained information from certain accounts, including information about users’ DNA Relatives profiles.”

Later, an investigation by 23andMe showed that an attacker was able to directly access the accounts of roughly 0.1% of 23andMe’s users, which is about 14,000 of its 14 million customers. The attacker accessed the accounts using credential stuffing, a technique in which someone tries existing username-and-password combinations to see if they can log in to a service. These combinations are usually stolen in another breach and then put up for sale on the dark web. Because people often reuse passwords across accounts, cybercriminals buy those combinations and use them to log in to other services and platforms.
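Credential stuffing only works when a password has already leaked, so one practical defense is checking candidate passwords against known breach corpora. Services such as Have I Been Pwned’s Pwned Passwords API do this with a k-anonymity scheme: only the first five hex characters of the password’s SHA-1 hash are ever sent, and the returned suffixes are compared locally. A minimal Python sketch of the client-side hash split (the function name is ours, for illustration):

```python
import hashlib

def hibp_range_query_parts(password):
    """Split a password's SHA-1 hash for a k-anonymity range lookup.

    Only the 5-character prefix would be sent to the lookup service;
    the caller compares the returned suffixes against its own suffix
    locally, so the full hash never leaves the machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("password")
# A real client would then fetch the suffix list for `prefix` over HTTPS
# and check whether `suffix` appears in the response.
```

If the suffix shows up in the response, the password has been seen in a breach and should never be reused.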

For a subset of these accounts, the stolen data contained health-related information based on the user’s genetics.

The finding that most data was accessed through credential stuffing led to 23andMe sending a letter to legal representatives of victims blaming the victims themselves.

Privacy Commissioner of Canada Philippe Dufresne and UK Information Commissioner John Edwards say they will investigate the 23andMe breach jointly, leveraging the combined resources and expertise of their two offices.

The privacy watchdogs are going to investigate:

  • the scope of information that was exposed by the breach and potential harms to affected individuals;
  • whether 23andMe had adequate safeguards to protect the highly sensitive information within its control; and
  • whether the company provided adequate notification about the breach to the two regulators and affected individuals as required under Canadian and UK privacy and data protection laws.               

The joint investigation will be conducted in accordance with the Memorandum of Understanding between the ICO and OPC.

Scan for your exposed personal data

You can check what personal information of yours has been exposed online with our Digital Footprint portal. Just enter your email address (it’s best to submit the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report. If your data was part of the 23andMe breach, we’ll let you know.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

When things go wrong: A digital sharing warning for couples

“When things go wrong” is a troubling prospect for most couples to face, but the internet—and the way that romantic partners engage both with and across it—could require that this worst-case scenario become more of a best practice.

In new research that Malwarebytes will release this month, romantic partners revealed that the degree to which they share passwords, locations, and devices with one another can invite mild annoyances, like having an ex mooch off a shared Netflix account; serious invasions of privacy, like being spied on through a smart doorbell; and even stalking and abuse.

Importantly, this isn’t just about jilted exes. This is also about people in active, committed relationships who have been pressured or forced into digital sharing beyond their limit.

The proof is in the data.

When Malwarebytes surveyed 500 people in committed relationships, 30% said they regretted sharing location tracking with their partner, 27% worried about their partners tracking them through location-based apps and services, and 23% worried that their current partner had accessed their accounts without their permission.



Plenty of healthy, happy relationships share digital access through trust and consent. For those couples, mapping out how to digitally separate and insulate their accounts from one another “when things go wrong” could seem misguided.

But for the many spouses, girlfriends, boyfriends, and partners who do not fully trust their significant other—or who are still figuring out how much to trust someone new—this exercise should serve as an act of security.

Here’s what people can think about when working through just how much of their digital lives to share.

Inconvenient, annoying, and just plain bothersome

A great deal of digital sharing within couples occurs on streaming platforms. One partner has Netflix, the other has Hulu, the two share Disney+, and years down the line, the couple can’t quite tell who is in charge of Apple Music and who is supposed to cancel the one-week free trial to Peacock.

This logistical nightmare, already difficult for people who are not in a committed relationship, is further complicated after a breakup (or during the relationship if one partner is particularly sensitive about their weekly algorithmic recommendations from Spotify).

If an ex maintains access to your streaming accounts even after a breakup, there’s little chance for abuse, but the situation can be aggravating. Maybe you don’t want your ex to know that you’re watching corny rom-coms, or that you’re absolutely going through it on your seventh replay of Spotify’s “Angry Breakup Mix.” These are valid annoyances that will require a password reset to boot your ex out of the shared account.

But there’s one type of shared account that should raise more caution than those listed above: A shared online shopping account, like Amazon.

With access to a shared online shopping account, a spiteful ex could purchase goods using your saved credit card. They could also keep updates on your location should you ever move and change addresses in the app. This isn’t the same threat as an ex having your real-time location, but for some individuals—particularly survivors of domestic abuse who have escaped their partner—any leak of a new address presents a major risk.

Non-consensual tracking, monitoring, and spying

When couples move into the same home, it can make sense to start sharing a variety of location-based apps.

Looking for a vacation rental online for your next getaway? You’re (hopefully) lodging together. Ordering delivery because nobody wants to make dinner? That order is being sent to the same shared address. Even some credit cards offer specific bonuses on services like Lyft, incentivizing some couples to rely more heavily on one account to score extra credits.

While sharing access between these types of accounts can increase efficiency, it’s important to know—and this may sound obvious—that many of these same shared location-based apps can reveal locations to a romantic partner, even after a breakup.

Your vacation could be revealed to an ex who is abusing their previously shared login privileges into services like Airbnb or Vrbo, or by someone peering into the trip history of a shared Uber account that discloses that a car was recently taken to the airport. Food delivery apps, similarly, can reveal new addresses after a move—a particular risk for survivors of domestic abuse who are trying to escape their physical situation.

In fact, any account that tracks and provides access to location—including Google’s own “Timeline” feature and fitness tracking devices made by Strava—could, in the wrong hands, become a security risk for stalking and abuse.

The vulnerabilities extend further.

With the popularity of Internet of Things devices like smart doorbells and baby monitors, some partners may want to consider how safe they are from spying in their own homes. Plenty of user posts on a variety of community forums claim that exes and former spouses weaponized video-equipped doorbells and baby monitors to spy on a partner.

These scenarios are frightening, but they are part of a larger question about whether you should share your location with your partner. With the proper care and discussion, your location-sharing will be consensual, respected, and convenient for all.

Stalking and abuse

When discussing the risks around digital sharing between couples, it’s important to clarify that trustworthy partners do not become abusive simply because of their access to technology. A shared food delivery app doesn’t guarantee that a partner will be spied on. A baby monitor with a live video stream is sometimes just that—a baby monitor.

But many of the stories shared here expose the dangers that lie within arm’s reach for abusive partners. The technology alone cannot be blamed for the abuse. Instead, the technology must be scrutinized simply because of its ubiquitous use in today’s world.

The most serious concerns regarding digital access are the potential for stalking and abuse.

For partners that share devices and device passcodes, the notorious threat of stalkerware makes it easy for an abusive partner to pry into a person’s photos, videos, phone calls, text messages, locations, and more. Stalkerware can be installed on a person’s device in a matter of minutes—a low barrier of entry for couples that live with one another and who share each other’s device passcodes.

For partners who share a vehicle, a recent problem has emerged. In December, The New York Times reported on the story of a woman who, despite obtaining a restraining order against her ex-husband, could not turn off her shared vehicle’s location tracking. Because the car was in her husband’s name, he was reportedly able to continue tracking and harassing her.

Even shared smart devices have become a threat. According to reporting from The New York Times in 2018, survivors of domestic abuse began calling support lines with a bevy of new concerns within their homes:

“One woman had turned on her air-conditioner, but said it then switched off without her touching it. Another said the code numbers of the digital lock at her front door changed every day and she could not figure out why. Still another told an abuse help line that she kept hearing the doorbell ring, but no one was there.”

The survivors’ stories all pointed to the abuse of shared smart devices.

Whereas the solutions to many of the inconveniences and annoyances that can come with shared digital access are simple—a reset password, a removal of a shared account—the “solutions” for technology-enabled abuse are far more complex. These are problems that cannot be solely addressed with advice and good cybersecurity hygiene.

If you are personally experiencing this type of harassment, you can contact the National Network to End Domestic Violence on their hotline at 1-800-799-SAFE.

Making sure things go right

Sharing your life with your partner should be a function of trust, and for many couples, it is. But, in the same way that it is impossible for a cybersecurity company to ignore even one ransomware attack, it’s also improper for this cybersecurity and privacy company to ignore the reality facing many couples today.

There are new rules and standards for digital access within relationships. With the right information and the right guidance, hopefully more people will feel empowered to make the best decisions for themselves.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Shhh. Did you hear that?

It’s Day One of EFF’s summer membership drive for internet freedom! Gather round the virtual campfire because I’ve got special treats and a story for you:

  1. New member t-shirts and limited-edition gear drop TODAY.

  2. Through EFF's 34th birthday on July 10, you can get 2 rare gifts and become an EFF member for just $20! AND new automatic monthly or annual donors get an instant match.

  3. I’m proud to share the first post in a series from our friends, The Encryptids—the rarely-seen enigmas who inspire campfire lore. But this time, they’re spilling secrets about how they survive this ever-digital world. We begin by checking in with the legendary Bigfoot de la Sasquatch...

-Aaron
EFF Membership Team

____________________________

Bigfoot with sunglasses in a forest saying "Privacy is a human right."

People say I'm the most famous of The Encryptids, but sometimes I don't want the spotlight. They all want a piece of me: exes, ad trackers, scammers, even the government. A picture may be worth a thousand words, but my digital profile is worth cash (to skeezy data brokers). I can’t hit a city block without being captured by doorbell cameras, CCTV, license plate readers, and a maze of street-level surveillance. It can make you want to give up on privacy altogether. Honey, no. Why should you have to hole up in some dank, busted forest for freedom and respect? You don’t.

Privacy isn't about hiding. It's about revealing what you want to who you want on your terms. It's your basic right to dignity.


A wise EFF technologist once told me, “Nothing makes you a ghost online.” So what we need is control, sweetie! You're not on your own! EFF worked for decades to set legal precedents for us, to push for good policy, fight crap policy, and create tools so you can be more private and secure on the web RIGHT NOW. They even have whole ass guides that help people around the world protect themselves online. For free!

I know a few things about strangers up in your business, leaked photos, and wanting to live in peace. Your rights and freedoms are too important to leave them up to tech companies and politicians. This world is a better place for having people like the lawyers, activists, and techs at EFF.

Join EFF

Privacy is a "human" right

Privacy is a team sport and the team needs you. Sign up with EFF today and not only can you get fun stuff (featuring ya boy Footy), you’ll make the internet better for everyone.

XOXO,

Bigfoot DLS

____________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Utah Consumer Privacy Act (UCPA) 

What is the Utah Consumer Privacy Act? The Utah Consumer Privacy Act, or UCPA, is a state-level data privacy law enacted in Utah, USA, aimed at providing residents with greater control over their personal data. The UCPA shares similarities with other state privacy laws like the California Consumer Privacy Act (CCPA) but has its own […]

The post Utah Consumer Privacy Act (UCPA)  appeared first on Centraleyes.

Ticketmaster Data Breach and Rising Work from Home Scams

In episode 333 of the Shared Security Podcast, Tom and Scott discuss a recent massive data breach at Ticketmaster involving the data of 560 million customers, the blame game between Ticketmaster and third-party provider Snowflake, and the implications for both companies. Additionally, they discuss Live Nation’s ongoing monopoly investigation. In the ‘Aware Much’ segment, the […]

The post Ticketmaster Data Breach and Rising Work from Home Scams appeared first on Shared Security Podcast.


Surveillance Defense for Campus Protests

The recent wave of protests calling for peace in Palestine has been met with unwarranted and aggressive suppression from law enforcement, universities, and other bad actors. It’s clear that the changing role of surveillance on college campuses exacerbates the dangers faced by all of the communities colleges are meant to support, and only serves to suppress lawful speech. These harmful practices must come to an end, and until they do, activists should take precautions to protect themselves and their communities. There are no easy or universal answers, but here we outline some common considerations to help guide campus activists.

Protest Pocket Guide

How We Got Here

Over the past decade, many campuses have been building up their surveillance arsenal and inviting a greater police presence on campus. EFF and fellow privacy and speech advocates have been clear that this is a dangerous trend that chills free expression and makes students feel less safe, while fostering an adversarial and distrustful relationship with the administration.

Many tools used on campuses overlap with the street-level surveillance used by law enforcement, but universities are in a unique position of power over students being monitored. For students, universities are not just their school, but often their home, employer, healthcare provider, visa sponsor, place of worship, and much more. This reliance heightens the risks imposed by surveillance, and brings it into potentially every aspect of students’ lives.

Putting together a security plan is an essential first step to protect yourself from surveillance.

EFF has also been clear for years: as campuses build up their surveillance capabilities in the name of safety, they chill speech and foster a more adversarial relationship between students and the administration. Yet, this expansion has continued in recent years, especially after the COVID-19 lockdowns.

This came to a head in April, when groups across the U.S. pressured their universities to disclose and divest their financial interest in companies doing business in Israel and weapons manufacturers, and to distance themselves from ties to the defense industry. These protests echo similar campus divestment campaigns against the prison industry in 2015, and the campaign against apartheid South Africa in the 1980s. However, the current divestment movement has been met with disproportionate suppression and unprecedented digital surveillance from many universities.

This guide is written with those involved in protests in mind. Student journalists covering protests may also face digital threats and can refer to our previous guide to journalists covering protests.

Campus Security Planning

Putting together a security plan is an essential first step to protect yourself from surveillance. You can’t protect all information from everyone, and as a practical matter you probably wouldn’t want to. Instead, you want to identify what information is sensitive and who should and shouldn’t have access to it.

That means this plan will be very specific to your context and your own tolerance of risk from physical and psychological harm. For a more general walkthrough you can check out our Security Plan article on Surveillance Self-Defense. Here, we will walk through this process with prevalent concerns from current campus protests.

What do I want to protect?

Current university protests are a rapid and decentralized response to what the UN International Court of Justice ruled as a plausible case of genocide in Gaza, and to the reported humanitarian crisis in occupied East Jerusalem and the West Bank. Such movements will need to focus on secure communication, immediate safety at protests, and protection from collected data being used for retaliation—either at protests themselves or on social media.

At a protest, a mix of visible and invisible surveillance may be used to identify protesters. This can include administrators or law enforcement simply attending and keeping notes of what is said, but digital recordings can often make that same approach less plainly visible. This doesn't just include video and audio recordings—protesters may also be subject to tracking methods like face recognition technology and location tracking from their phone, school ID usage, or other sensors. So here, you want to be mindful of anything you say or anything on your person which can reveal your identity or role in the protest, or those of fellow protesters.

This may also be paired with online surveillance. The university or police may monitor activity on social media, even joining private or closed groups to gather information. Of course, any services hosted by the university, such as email or WiFi networks, can also be monitored for activity. Again, taking care of what information is shared with whom is essential, including carefully separating public information (like the time of a rally) and private information (like your location when attending). Also keep in mind how what you say publicly, even in a moment of frustration, may be used to draw negative attention to yourself and undermine the cause.

However, many people may strategically use their position and identity publicly to lend credibility to a movement, such as a prominent author or alumnus. In doing so they should be mindful of those around them in more vulnerable positions.

Who do I want to protect it from?

Divestment challenges the financial underpinning of many institutions in higher education. The most immediate adversaries are clear: the university being pressured and the institutions being targeted for divestment.

However, many schools are escalating by inviting police on campus, sometimes as support for their existing campus police, making them yet another potential adversary. Pro-Palestine protests have drawn attention from some federal agencies, meaning law enforcement will inevitably be a potential surveillance adversary even when not invited by universities.

With any sensitive political issue, there are also people who will oppose your position. Others at the protest can escalate threats to safety, or try to intimidate and discredit those they disagree with. Private actors, whether individuals or groups, can weaponize surveillance tools available to consumers online or at a protest, even if it is as simple as video recording and doxxing attendees.

How bad are the consequences if I fail?

Failing to protect information can have a range of consequences that will depend on the institution and local law enforcement’s response. Some schools defused campus protests by agreeing to enter talks with protesters. Others opted to escalate tensions by having police dismantle encampments and having participants suspended, expelled, or arrested. Such disproportionate disciplinary actions put students at risk in myriad ways, depending on how they relied on the institution. The extent to which institutions will attempt to chill speech with surveillance will vary, but unlike direct physical disruption, surveillance tools may be used with less hesitation.

All interactions with law enforcement carry some risk, and will differ based on your identity and history of police interactions. This risk can be mitigated by knowing your rights and limiting your communication with police unless in the presence of an attorney. 

How likely is it that I will need to protect it?

Disproportionate disciplinary actions will often coincide with and be preceded by some form of surveillance. Even schools that are more accommodating of peace protests may engage in some level of monitoring, particularly schools that have already adopted surveillance tech. School devices, services, and networks are also easy targets, so try to use alternatives to these when possible. Stick to using personal devices and not university-administered ones for sensitive information, and adopt tools to limit monitoring, like Tor. Even banal systems like campus ID cards, presence monitors, class attendance monitoring, and wifi access points can create a record of student locations or tip off schools to people congregating. Online surveillance is also easy to implement by simply joining groups on social media, or even adopting commercial social media monitoring tools.

Schools that invite a police presence make their students and workers subject to the current practices of local law enforcement. Our resource, the Atlas of Surveillance, gives an idea of what technology local law enforcement is capable of using, and our Street-Level Surveillance hub breaks down the capabilities of each device. But other factors, like how well-resourced local law enforcement is, will determine the scale of the response. For example, if local law enforcement already have social media monitoring programs, they may use them on protesters at the request of the university.

Bad actors not directly affiliated with the university or law enforcement may be the most difficult factor to anticipate. These threats can arise from people who are physically present, such as onlookers or counter-protesters, and individuals who are offsite. Information about protesters can be turned against them for purposes of surveillance, harassment, or doxxing. Taking measures found in this guide will also be useful to protect yourself from this potentiality.

Finally, don’t confuse your rights with your safety. Even if you are in a context where assembly is legal and surveillance and suppression is not, be prepared for it to happen anyway. Legal protections are retrospective, so for your own safety, be prepared for adversaries willing to overstep these protections.

How much trouble am I willing to go through to try to prevent potential consequences?

There is no perfect answer to this question, and every individual protester has their own risks and considerations. In setting this boundary, it is important to communicate it with others and find workable solutions that meet people where they’re at. Being open and judgment-free in these discussions makes the movement being built more consensual and less prone to abuses. Centering consent in organizing can also help weed out bad actors in your own camp who will raise the risk for all who participate, deliberately or not.

Sometimes a surveillance self-defense tactic will invite new threats. Some universities and governments have been so eager to get images of protesters’ faces that they have threatened criminal penalties for people wearing masks at gatherings. These new potential charges now need to be weighed against the face recognition, doxxing, and retribution someone may face for exposing their face.

Privacy is also a team sport. Investing a lot of energy in only your own personal surveillance defense may have diminishing returns, but making an effort to educate peers and adjust the norms of the movement puts less work on any one person and has a potentially greater impact. Sharing the resources in this post and the Surveillance Self-Defense guides, and hosting your own workshops with the Security Education Companion, are good first steps.

Who are my allies?

Cast a wide net of support; many members of faculty and staff may be able to provide forms of support to students, like institutional knowledge about school policies. Many school alumni are also invested in the reputation of their alma mater, and can bring outside knowledge and resources.

A number of non-profit organizations can also support protesters who face risks on campus. For example, many campus bail funds have been set up to support arrested protesters. The National Lawyers Guild has chapters across the U.S. that can offer Know Your Rights training and provide and train people to become legal observers (people who document a protest so that there is a clear legal record of civil liberties’ infringements should protesters face prosecution).

Many local solidarity groups may also be able to help provide trainings, street medics, and jail support. Many groups in EFF’s grassroots network, the Electronic Frontier Alliance, also offer free digital rights training and consultations.

Finally, EFF can help victims of surveillance directly when they email info@eff.org or Signal 510-243-8020. Even when EFF cannot take on your case, we have a wide network of attorneys and cybersecurity researchers who can offer support.

Tips and Resources

Keep in mind that nearly any electronic device you own can be used to track you, but there are a few steps you can take to make that data collection more difficult. To prevent tracking, your best option is to leave all your devices at home, but that’s not always possible, and makes communication and planning much more difficult. So, it’s useful to get an idea of what sorts of surveillance is feasible, and what you can do to prevent it. This is meant as a starting point, not a comprehensive summary of everything you may need to do or know:

Prepare yourself and your devices for protests

Our guide for attending a protest covers the basics for protecting your smartphone and laptop, as well as providing guidance on how to communicate and share information responsibly. We have a handy printable version available here, too, that makes it easy to share with others.

Beyond preparing according to your security plan, preparing plans with networks of support outside of the protest is a good idea. Tell friends or family when you plan to attend and leave, so that if there are arrests or harassment they can follow up to make sure you are safe. If there may be arrests, make sure to have the phone number of an attorney and possibly coordinate with a jail support group.

Protect your online accounts

Doxxing, when someone exposes information about you, is a tactic reportedly being used on some protesters. This information is often found in public places, like "people search" sites and social media. Being doxxed can be overwhelming and difficult to control in the moment, but you can take some steps to manage it, or at least prepare yourself for what information is available. To get started, check out the guide that the New York Times created to train its journalists to dox themselves, and Pen America's Online Harassment Field Manual.

Compartmentalize

Being deliberate about how and where information is shared can limit the impact of any one breach of privacy. Online, this might look like using different accounts for different purposes or preferring smaller Signal chats, and offline it might mean being deliberate about with whom information is shared, and bringing “clean” devices (without sensitive information) to protests.

Be mindful of potential student surveillance tools 

It’s difficult to track what tools each campus is using to track protesters, but it’s possible that colleges are using the same tricks they’ve used for monitoring students in the past alongside surveillance tools often used by campus police. One good rule of thumb: if a device, software, or an online account was provided by the school (like an .edu email address or test-taking monitoring software), then the school may be able to access what you do on it. Likewise, remember that if you use a corporate or university-controlled tool without end-to-end encryption for communication or collaboration, like online documents or email, content may be shared by the corporation or university with law enforcement when compelled with a warrant. 

Know your rights if you’re arrested: 

Thousands of students, staff, faculty, and community members have been arrested, but it’s important to remember that the vast majority of the people who have participated in street and campus demonstrations have not been arrested or taken into custody. Nevertheless, be careful and know what to do if you’re arrested.

The safest bet is to lock your devices with a PIN or password, turn off biometric unlocks such as face or fingerprint, and say nothing except to assert your rights, for example by refusing consent to a search of your devices, bags, vehicles, or home. Law enforcement can lie and pressure arrestees into saying things that are later used against them, so waiting until you have a lawyer before speaking is always the right call.

Barring a warrant, law enforcement cannot compel you to unlock your devices or answer questions, beyond basic identification in some jurisdictions. Law enforcement may not respect your rights when they’re taking you into custody, but your lawyer and the courts can protect your rights later, especially if you assert them during the arrest and any time in custody.

Google will start deleting location history

Google announced that it will reduce the amount of personal data it is storing by automatically deleting old data from “Timeline”—the feature that, previously named “Location History,” tracks user routes and trips based on a phone’s location, allowing people to revisit all the places they’ve been in the past.

In an email, Google told users that they will have until December 1, 2024 to save their travel history to their mobile devices before the company starts deleting old data. If you use this feature, that means you have about five months before losing your location history.

Moving forward, Google will link Location information to the devices you use, rather than to your user account(s). And instead of backing up your data to the cloud, Google will soon start storing it locally on the device.

As I pointed out years ago, Location History allowed me to “spy” on my wife’s whereabouts without having to install anything on her phone. After some digging, I learned that my Google account was added to my wife’s phone’s accounts when I logged in on the Play Store on her phone. The extra account this created on her phone was not removed when I logged out after noticing the tracking issue.

That issue should be solved by implementing this new policy. (Let’s remember, though, that this is an issue that Google formerly considered a feature rather than a problem.)

Once the change takes effect, unless you enable the new Timeline settings by December 1, Google will attempt to move the past 90 days of your travel history to the first device you sign in to with your Google account. If you want to keep using Timeline:

  • Open Google Maps on your device.
  • Tap your profile picture (or initial) in the upper right corner.
  • Choose Your Timeline.
  • Select whether you want to keep your location data until you manually delete it, or have Google auto-delete it after 3, 18, or 36 months.

In April of 2023, Google Play launched a series of initiatives that give users control over the way third-party apps store data about them. This was seemingly done because Google wanted to increase transparency and give people mechanisms to control how apps collect and use their data.

With the latest announcement, it appears that Google is finally tackling its own apps.

Only recently, Google agreed to purge billions of records containing personal information collected from more than 136 million people in the US surfing the internet using its Chrome web browser. But this was part of a settlement in a lawsuit accusing the search giant of illegal surveillance.

It’s nice to see the needle move in the good direction for a change. As Bruce Schneier pointed out in his article Online Privacy and Overfishing:

“Each successive generation of the public is accustomed to the privacy status quo of their youth. What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.”

This has led us all to a world where we don’t even have the expectation of privacy anymore when it comes to what we do online or when using modern technology in general.

If you want to take firmer control over how your location is tracked and shared, we recommend reading How to turn off location tracking on Android.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.
