
California Privacy Watchdog Inks Deal with French Counterpart to Strengthen Data Privacy Protections


In a significant move to bolster data privacy protections, the California Privacy Protection Agency (CPPA) has inked a new partnership with France’s Commission Nationale de l'Informatique et des Libertés (CNIL). The collaboration aims to conduct joint research on data privacy issues and share investigative findings that will enhance both organizations’ capabilities in safeguarding personal data.

The partnership reflects the growing emphasis on international collaboration in data privacy protection. Both California and France, along with the broader European Union (EU) through its General Data Protection Regulation (GDPR), recognize that effective data privacy measures require global cooperation. France’s membership in the EU brings additional regulatory weight to this partnership and highlights the necessity of cross-border collaboration to tackle the complex challenges of data protection in an interconnected world.

What the CPPA-CNIL Data Privacy Protections Deal Means

The CPPA on Tuesday outlined the goals of the partnership, stating, “This declaration establishes a general framework of cooperation to facilitate joint internal research and education related to new technologies and data protection issues, share best practices, and convene periodic meetings.” The framework is designed to enable both agencies to stay ahead of emerging threats and innovations in data privacy. Michael Macko, the deputy director of enforcement at the CPPA, pointed to the practical benefits of the collaboration. “Privacy rights are a commercial reality in our global economy,” Macko said. “We’re going to learn as much as we can from each other to advance our enforcement priorities.” This mutual learning approach aims to enhance the enforcement capabilities of both agencies, ensuring they can better protect consumers’ data in an ever-evolving digital landscape.

CPPA’s Collaborative Approach

The partnership with CNIL is not the CPPA’s first foray into international cooperation. The California agency also collaborates with three other major international organizations: the Asia Pacific Privacy Authorities (APPA), the Global Privacy Assembly, and the Global Privacy Enforcement Network (GPEN). These collaborations help create a robust network of privacy regulators working together to uphold high standards of data protection worldwide.

The CPPA was established following the implementation of California's groundbreaking consumer privacy law, the California Consumer Privacy Act (CCPA). As the first comprehensive consumer privacy law in the United States, the CCPA set a precedent for other states and countries looking to enhance their data protection frameworks. The CPPA’s role as an independent data protection authority mirrors that of the CNIL, France’s first independent data protection agency, highlighting the pioneering efforts of both regions in the field of data privacy.

By combining their resources and expertise, the CPPA and CNIL aim to tackle a range of data privacy issues, from the implications of new technologies to the enforcement of data protection laws. The partnership is expected to lead to the development of innovative solutions and best practices that can be shared with other regulatory bodies around the world. As more organizations and governments recognize the importance of safeguarding personal data, the need for robust and cooperative frameworks becomes increasingly clear. The CPPA-CNIL partnership serves as a model for other regions looking to strengthen their data privacy measures through international collaboration.

Apple Launches ‘Private Cloud Compute’ Along with Apple Intelligence AI

By: Alan J
11 June 2024 at 19:14


In a bold attempt to redefine cloud security and privacy standards, Apple has unveiled Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed to back its new Apple Intelligence with safety and transparency by extending the security model of Apple devices into the cloud. The move comes in recognition of the widespread concerns surrounding the combination of artificial intelligence and cloud technology.

Private Cloud Compute Aims to Secure Cloud AI Processing

Apple has stated that its new Private Cloud Compute (PCC) is designed to enforce privacy and security standards over AI processing of private information. "For the first time ever, Private Cloud Compute brings the same level of security and privacy that our users expect from their Apple devices to the cloud," said an Apple spokesperson.

[Image: Private Cloud Compute and Apple Intelligence. Source: security.apple.com]

At the heart of PCC is Apple's stated commitment to on-device processing. "When Apple is responsible for user data in the cloud, we protect it with state-of-the-art security in our services," the spokesperson explained. "But for the most sensitive data, we believe end-to-end encryption is our most powerful defense." Despite this commitment, Apple has stated that for more sophisticated AI requests, Apple Intelligence needs to leverage larger, more complex models in the cloud. This presented a challenge, as traditional cloud AI security models were found lacking in meeting privacy expectations. Apple stated that PCC is designed with several key features to ensure the security and privacy of user data, claiming the following implementations (illustrated with a conceptual sketch after the list):
  • Stateless computation: PCC processes user data only for the purpose of fulfilling the user's request, and then erases the data.
  • Enforceable guarantees: PCC is designed to provide technical enforcement for the privacy of user data during processing.
  • No privileged access: PCC does not allow Apple or any third party to access user data without the user's consent.
  • Non-targetability: PCC is designed to prevent targeted attacks on specific users.
  • Verifiable transparency: PCC provides transparency and accountability, allowing users to verify that their data is being processed securely and privately.
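
To make the first claim concrete: "stateless computation" means user data exists only for the lifetime of a single request and is erased afterwards. The following is a minimal conceptual sketch of that pattern in Python; every name in it is hypothetical, and it illustrates the idea rather than Apple's implementation.

    # Conceptual sketch of "stateless computation" -- not Apple code.
    # User data is held only in memory for one request, then overwritten.
    from dataclasses import dataclass

    @dataclass
    class EphemeralRequest:
        user_data: bytearray  # mutable buffer so it can be zeroed, not just dropped

        def erase(self) -> None:
            # Overwrite the buffer so no plaintext lingers after the request.
            for i in range(len(self.user_data)):
                self.user_data[i] = 0

    def run_model(data: bytearray) -> str:
        # Stand-in for cloud inference; a real system would run the model here.
        return f"processed {len(data)} bytes"

    def handle_request(payload: bytes) -> str:
        request = EphemeralRequest(bytearray(payload))
        try:
            # Nothing is logged or written to durable storage along the way.
            return run_model(request.user_data)
        finally:
            request.erase()  # erased whether or not inference succeeded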

Apple Invites Experts to Test Standards; Online Reactions Mixed

At this week's Apple Annual Developer Conference, Apple's CEO Tim Cook described Apple Intelligence as a "personal intelligence system" that could understand and contextualize personal data to deliver results that are "incredibly useful and relevant," making "devices even more useful and delightful." Apple Intelligence mines and processes data across apps, software and services across Apple devices. This mined data includes emails, images, messages, texts, messages, documents, audio files, videos, contacts, calendars, Siri conversations, online preferences and past search history. The new PCC system attempts to ease consumer privacy and safety concerns. In its description of 'Verifiable transparency,' Apple stated:
"Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises. We already have an earlier requirement for our guarantees to be enforceable. Hypothetically, then, if security researchers had sufficient access to the system, they would be able to verify the guarantees."
However, despite Apple's assurances, the announcement of Apple Intelligence drew mixed reactions online, with some already likening it to Microsoft's Recall. In reaction to Apple's announcement, Elon Musk took to X to announce that Apple devices may be banned from his companies, citing the integration of OpenAI as an "unacceptable security violation." Others have also raised questions about the information that might be sent to OpenAI.

[Images: User reactions to the Apple Intelligence announcement. Source: X.com]

According to Apple's statements, requests made on its devices are not stored by OpenAI, and users’ IP addresses are obscured. Apple stated that it would also add “support for other AI models in the future.” Andy Wu, an associate professor at Harvard Business School who researches the use of AI by tech companies, highlighted the challenges of running powerful generative AI models while limiting their tendency to fabricate information. “Deploying the technology today requires incurring those risks, and doing so would be at odds with Apple’s traditional inclination toward offering polished products that it has full control over,” Wu said.

Researcher Develops ‘TotalRecall’ Tool That Can Extract Data From Microsoft Recall

By: Alan J
5 June 2024 at 19:15


While Microsoft's forthcoming Recall feature has already sparked security and privacy concerns, the tech giant attempted to downplay those reactions by stating that collected data would remain on the user's device. Despite this reassurance, concerns remain: researchers, including the developer of a new tool dubbed "TotalRecall," have observed inherent vulnerabilities in the local database maintained by Recall, lending credibility to critics of Microsoft's implementation of the AI tool.

TotalRecall Tool Demonstrates Recall's Inherent Vulnerabilities

Recall is a new Windows AI tool planned for Copilot+ PCs that captures screenshots of the user's device every five seconds and stores the data in a local database. The tool's announcement, however, led many to fear that this process would make sensitive information on devices susceptible to unauthorized access.

TotalRecall, a new tool developed by Alex Hagenah and named after the 1990 sci-fi film, highlights the potential compromise of this stored information. Hagenah states that the local database is unencrypted and stores data in plain text format. The researcher likened Recall to spyware, calling it a "Trojan 2.0." TotalRecall was designed to extract and display all the information stored in the Recall database, pulling out screenshots, text data, and other sensitive information, highlighting the potential for abuse by criminal hackers or domestic abusers who may gain physical access to a device. Hagenah's concerns are echoed by others in the cybersecurity community, who have also compared Recall to spyware or stalkerware.

Recall captures screenshots of everything displayed on a user's desktop, including messages from encrypted apps like Signal and WhatsApp, websites visited, and all text shown on the PC. TotalRecall can locate and copy the Recall database, parse its data, and generate summaries of the captured information, with features for date-range filtering and term searches; a conceptual sketch of that kind of query follows below. Hagenah stated that by releasing the tool on GitHub, he aims to push Microsoft to fully address these security issues before Recall's launch on June 18.
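
Because the database is reportedly an unencrypted store of plain text, mining it requires nothing more exotic than read access and standard SQLite tooling. The sketch below illustrates the general approach only; the database path, table, and column names are assumptions made for the example, not TotalRecall's actual code.

    # Hypothetical sketch of querying an unencrypted local capture database.
    # The path, table, and column names are assumptions, not TotalRecall's code.
    import sqlite3
    from datetime import datetime

    DB_PATH = "recall_copy.db"  # a local copy of the (assumed) unencrypted database

    def search_captures(term: str, start: datetime, end: datetime) -> list:
        conn = sqlite3.connect(DB_PATH)
        try:
            cur = conn.execute(
                "SELECT captured_at, window_title, extracted_text "
                "FROM captures "  # hypothetical table holding OCR'd screen text
                "WHERE extracted_text LIKE ? AND captured_at BETWEEN ? AND ?",
                (f"%{term}%", start.isoformat(), end.isoformat()),
            )
            return cur.fetchall()
        finally:
            conn.close()

    # Example: every capture mentioning "password" during one week in June.
    # results = search_captures("password", datetime(2024, 6, 1), datetime(2024, 6, 8))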

Microsoft Recall Privacy and Security Concerns

Cybersecurity researcher Kevin Beaumont has also developed a website for searching Recall databases, though he has withheld its release to give Microsoft time to make changes. Microsoft's privacy documentation for Recall mentions the ability to disable screenshot saving, pause Recall on the system, filter out applications, and delete data. Nonetheless, the company acknowledges that Recall does not moderate the captured content, which could include sensitive information like passwords, financial details and more. The risks extend beyond individual users, as employees under "bring your own device" policies could leave with significant amounts of company data saved on their laptops.

The UK's data protection regulator has requested more information from Microsoft regarding Recall and its privacy implications. Amid criticism over recent hacks affecting US government data, Microsoft CEO Satya Nadella has emphasized the company's need to prioritize security. However, the issues surrounding Recall demonstrate that security concerns were not given sufficient attention, and they warrant scrutiny of the feature's data collection practices before its official release.

Generative AI and Data Privacy: Navigating the Complex Landscape


By Neelesh Kripalani, Chief Technology Officer, Clover Infotech

Generative AI, which includes technologies such as deep learning, natural language processing, and speech recognition for generating text, images, and audio, is transforming various sectors from entertainment to healthcare. However, its rapid advancement has raised significant concerns about data privacy. To navigate this intricate landscape, it is crucial to understand the intersection of AI capabilities, ethical considerations, legal frameworks, and technological safeguards.

Data Privacy Challenges Raised by Generative AI

  • Not securing data during collection or processing - Generative AI raises significant data privacy concerns due to its need for vast amounts of diverse data, often including sensitive personal information that is collected without explicit consent and is difficult to anonymize effectively. Model inversion attacks and data leakage risks can expose private information, while biases in training data can lead to unfair or discriminatory outputs.
  • The risk of generated content - The ability of generative AI to produce highly realistic fake content raises serious concerns about its potential for misuse. Whether creating convincing deepfake videos or generating fabricated text and images, there is a significant risk of this content being used for impersonation, spreading disinformation, or damaging individuals' reputations.
  • Lack of accountability and transparency - Since GenAI models operate through complex layers of computation, it is difficult to get visibility into how these systems arrive at their outputs, and to trace the specific steps and factors that lead to a particular decision. This not only hinders trust and accountability but also complicates the tracing of data usage, making it tedious to ensure compliance with data privacy regulations. Addressing these issues requires improved explainability, traceability, and adherence to regulatory frameworks and ethical guidelines.
  • Lack of fairness and ethical considerations - Generative AI models can perpetuate or even exacerbate existing biases present in their training data. This can lead to unfair treatment or misrepresentation of certain groups, raising ethical issues.

Here’s How Enterprises Can Navigate These Challenges

  • Understand and map the data flow - Enterprises must maintain a comprehensive inventory of the data that their GenAI systems process, including data sources, types, and destinations. They should also create a detailed data flow map to understand how data moves through their systems.
  • Implement strong data governance - In line with the data minimization principle, enterprises must collect, process, and retain only the minimum amount of personal data necessary to fulfill a specific purpose. In addition, they should develop and enforce robust data privacy policies and procedures that comply with relevant regulations.
  • Ensure data anonymization and pseudonymization - Techniques such as anonymization and pseudonymization can be implemented to reduce the chances of data re-identification (see the sketch after this list).
  • Strengthen security measures - Implement further safeguards such as encryption for data at rest and in transit, access controls to protect against unauthorized access, and regular monitoring and auditing to detect and respond to potential privacy breaches.

To summarize, organizations must begin by complying with the latest data protection laws and practices, and strive to use data responsibly and ethically. Further, they should regularly train employees on data privacy best practices to effectively manage the challenges posed by Generative AI while leveraging its benefits responsibly and ethically.
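
As a concrete illustration of the anonymization and pseudonymization point above, one common pattern is to replace direct identifiers with keyed hashes before data reaches an AI pipeline, so records stay linkable for analytics but cannot be reversed without the key. This is a minimal sketch under stated assumptions; the field names and the key-handling shortcut are illustrative only.

    # Minimal pseudonymization sketch: replace direct identifiers with keyed hashes.
    # Field names and key handling are illustrative; production systems should pull
    # the key from a managed secret store and document its retention policy.
    import hashlib
    import hmac

    SECRET_KEY = b"load-this-from-a-secret-store"  # never hard-code in real systems

    def pseudonymize(value: str) -> str:
        # An HMAC keeps the mapping stable (same input -> same token) while being
        # irreversible without the key, unlike a plain unsalted hash.
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"email": "jane@example.com", "age_band": "30-39", "query": "knee pain"}
    safe_record = {**record, "email": pseudonymize(record["email"])}
    # Downstream AI pipelines see a stable token instead of the raw email address.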

Australian Privacy Watchdog Files Lawsuit Against Medibank Over 2022 Data Breach


The Australian privacy watchdog on Wednesday filed a lawsuit against Medibank, the country's largest private health insurer, for failing to protect its 9.7 million customers' personal information in a 2022 data breach incident.

The Australian Information Commissioner said in civil penalty proceedings filed in the Federal Court that Medibank "seriously interfered" with the privacy of Australians by failing to take reasonable steps to protect their data from misuse and unauthorized access. These failures allegedly breach the country's Privacy Act 1988, according to the OAIC.

The legal action follows an investigation by Australian Information Commissioner Angelene Falk into the Medibank cyberattack, in which threat actors accessed the personal information of millions of current and former Medibank customers. The personally identifiable data stolen in the breach was subsequently published on the dark web. “The release of personal information on the dark web exposed a large number of Australians to the likelihood of serious harm, including potential emotional distress and the material risk of identity theft, extortion and financial crime,” said acting Australian Information Commissioner Elizabeth Tydd. Tydd emphasized that Medibank’s business as a health insurance services provider involves collecting and holding customers’ personal and sensitive health information.
“We allege Medibank failed to take reasonable steps to protect personal information it held given its size, resources, the nature and volume of the sensitive and personal information it handled, and the risk of serious harm for an individual in the case of a breach,” Tydd said. “We consider Medibank’s conduct resulted in a serious interference with the privacy of a very large number of individuals.”
Privacy Commissioner Carly Kind put the responsibility for data security and privacy on the organizations that collect, use and store personal information. These organizations have a considerable responsibility to ensure that data is held safely and securely, particularly in the case of sensitive data, she said. “This case should serve as a wakeup call to Australian organizations to invest in their digital defenses,” Kind added.

Aim and Findings of OAIC's Medibank Data Breach Investigation

The OAIC commenced its investigation into Medibank’s privacy practices in December 2022, following the October data breach of Medibank and its subsidiary ahm. The investigation focused on whether Medibank's actions constituted an interference with privacy or breached Australian Privacy Principle (APP) 11.1, which mandates that organizations take reasonable steps to protect information from misuse, interference, and unauthorized access. The OAIC's findings suggested that Medibank's measures were insufficient given the circumstances. Under section 13G of the Privacy Act, the Commissioner can apply for a civil penalty order for serious or repeated privacy interferences. For the period from March 2021 to October 2022, the Federal Court can impose a civil penalty of up to AU$2.2 million (approximately US$1.48 million) per violation.

A spokesperson for the health insurer did not detail its planned response to the lawsuit but told The Cyber Express that "Medibank intends to defend the proceedings."

Medibank Told to Set Aside Millions to Fix the Issues

Australia's banking regulator last year advised Medibank to set aside AU$250 million (approximately US$167 million) in extra capital to fix the weaknesses identified in its information security after the 2022 data breach incident. The Australian Prudential Regulation Authority (APRA) said at the time that the capital adjustment would remain in place until Medibank completed an agreed remediation program to the regulator's satisfaction. Medibank told investors and customers that it had sufficient existing capital to meet this adjustment. APRA also said it would conduct a technology review of Medibank to expedite the remediation process for the health insurer. It did not immediately respond to The Cyber Express' request for an update on this matter.

Medibank Hacker Sanctioned and Arrested

The United States, Australia and the United Kingdom earlier this year sanctioned a Russian man the governments believe was behind the 2022 Medibank hack. The 33-year-old Aleksandr Gennadievich Ermakov, who used the aliases AlexanderErmakov, GustaveDore, aiiis_ermak, blade_runner and JimJones, was identified as the man behind the attack. Following the sanctions, Russian police arrested three men, including Ermakov, on charges of violating Article 273 of the country's criminal code, which prohibits creating, using or disseminating harmful computer code, according to Russian cybersecurity firm F.A.C.C.T. Extradition of Ermakov in the current political environment seems highly unlikely.

The legal action against Medibank serves as a critical reminder for organizations to prioritize data security and adhere to privacy regulations. The outcome of this lawsuit will likely influence how Australian entities manage and protect personal information in the future, reinforcing the need for stringent cybersecurity practices in an evolving digital landscape. “Organizations have an ethical as well as legal duty to protect the personal information they are entrusted with and a responsibility to keep it safe,” Kind said.