
Adobe clarifies Terms of Service change, says it doesn’t train AI on customer content

12 June 2024 at 11:28

Following days of user pushback, including allegations that it forced a “spyware-like” Terms of Service (ToS) update into its products, design software giant Adobe has responded with several clarifications.

The concerns raised by the community, especially among Photoshop and Substance 3D users, prompted the company to reconsider the language it used in the ToS. The update Adobe announced earlier this month appeared to grant the company sweeping access to all user materials, including materials covered by company Non-Disclosure Agreements (NDAs), for content review and similar purposes.

As Adobe included in its Terms of Service update:

“As a Business User, you may have different agreements with or obligations to a Business, which may affect your Business Profile or your Content. Adobe is not responsible for any violation by you of such agreements or obligations.”

This wording immediately sparked the suspicion that the company intends to use user-generated content to train its AI models. In particular, users balked at the following language:

“[…] you grant us a non-exclusive, worldwide, royalty-free sublicensable, license, to use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, and translate the Content.”

To reassure these users, on June 10, Adobe explained:

“We don’t train generative AI on customer content. We are adding this statement to our Terms of Use to reassure people that [it] is a legal obligation on Adobe. Adobe Firefly is only trained on a dataset of licensed content with permission, such as Adobe Stock, and public domain content where copyright has expired.”

Alas, several artists found images that reference their work on Adobe’s stock platform.

As we have explained many times, the length and the use of legalese in the ToS does not do either the user or the company any favors. It seems that Adobe understands this now as well.

“First, we should have modernized our Terms of Use sooner. As technology evolves, we must evolve the legal language that [explains] our policies and practices not just in our daily operations, but also in ways that proactively narrow and explain our legal requirements in easy-to-understand language.”

In its blog post, Adobe also acknowledged that it has to earn the trust of its users, said it is taking the feedback very seriously, and promised that the feedback will inform further changes. Most importantly, it stressed that you own your content, that you can opt out of the product improvement program, and that Adobe does not scan content stored locally on your computer.

Adobe expects to roll out the updated Terms of Service on June 18 and aims to clarify more precisely what it is permitted to do with its customers’ work. This is a developing story, and we’ll keep you posted.



Adobe to update vague AI terms after users threaten to cancel subscriptions

11 June 2024 at 13:06

(Image credit: bennymarty | iStock Editorial / Getty Images Plus)

Adobe has promised to update its terms of service to make it "abundantly clear" that the company will "never" train generative AI on creators' content after days of customer backlash, with some saying they would cancel Adobe subscriptions over its vague terms.

Users got upset last week when an Adobe pop-up informed them of updates to terms of use that seemed to give Adobe broad permissions to access user content, take ownership of that content, or train AI on that content. The pop-up forced users to agree to these terms before they could access Adobe apps, cutting creatives off from their projects until they accepted.

For any users unwilling to accept, canceling annual plans could trigger fees amounting to 50 percent of their remaining subscription cost. Adobe justifies collecting these fees because a "yearly subscription comes with a significant discount."


Narrowing the Stubborn Cybersecurity Worker Gap

6 June 2024 at 16:12

There is still a significant gap between cybersecurity needs and available talent, according to Cyberseek, but the wave of tech industry layoffs is raising questions about that shortage. Organizations can expand the candidate pool by training people for these jobs rather than insisting on outside industry credentials.

The post Narrowing the Stubborn Cybersecurity Worker Gap appeared first on Security Boulevard.

Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20
[Image: Microsoft CEO Satya Nadella, with superimposed text: “Security”]

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.

The post Microsoft Recall is a Privacy Disaster appeared first on Security Boulevard.

AI Prompt Engineering for Cybersecurity: The Details Matter


AI has been a major focus of the Gartner Security and Risk Management Summit in National Harbor, Maryland, this week, and the consensus has been that while large language models (LLMs) have so far overpromised and under-delivered, there are still AI threats and defensive use cases that cybersecurity pros need to be aware of. Jeremy D’Hoinne, Gartner Research VP for AI & Cybersecurity, told conference attendees that hacker uses of AI so far include improved phishing and social engineering – with deepfakes a particular concern. But D’Hoinne and Director Analyst Kevin Schmidt agreed in a joint panel that there haven’t been any novel attack techniques arising from AI yet, just improvements on existing attack techniques like business email compromise (BEC) or voice scams. AI security tools likewise remain underdeveloped, with AI assistants perhaps the most promising cybersecurity application so far, potentially able to help with patching, mitigations, alerts and interactive threat intelligence. D’Hoinne cautions that the tools should be used as an adjunct to security staffers, so that staff don’t lose their ability to think critically.

AI Prompt Engineering for Cybersecurity: Precision Matters

Using AI assistants and LLMs for cybersecurity use cases was the focus of a separate presentation by Schmidt, who cautioned that AI prompt engineering needs to be very specific for security uses to overcome the limitations of LLMs, and even then the answer may only get you 70%-80% toward your goal. Outputs need to be validated, and junior staff will require the oversight of senior staff, who will more quickly be able to determine the significance of the output. Schmidt also cautioned that chatbots like ChatGPT should only be used for noncritical data.

Schmidt gave examples of good and bad AI security prompts for helping security operations teams. “Create a query in my <name of SIEM> to identify suspicious logins” is too vague, he said. He gave an example of a better way to craft a SIEM query: “Create a detection rule in <name of SIEM> to identify suspicious logins from multiple locations within the last 24 hours. Provide the <SIEM> query language and explain the logic behind it and place the explanations in tabular format.” That prompt should produce something like the following output:

[Figure: SIEM query AI prompt output (source: Gartner)]

Analyzing firewall logs was another example. Schmidt gave the following as an example of an ineffective prompt: “Analyze the firewall logs for any unusual patterns or anomalies.” A better prompt would be: “Analyze the firewall logs from the past 24 hours and identify any unusual patterns or anomalies. Summarize your findings in a report format suitable for a security team briefing.” That produced the following output:

[Figure: Firewall log prompt output (source: Gartner)]

Another example involved XDR tools. Instead of a weak prompt like “Summarize the top two most critical security alerts in a vendor’s XDR,” Schmidt recommended something along these lines: “Summarize the top two most critical security alerts in a vendor’s XDR, including the alert ID, description, severity and affected entities. This will be used for the monthly security review report. Provide the response in tabular form.” That prompt produced the following output:

[Figure: XDR alert prompt output (source: Gartner)]
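To make that advice concrete, here is a minimal sketch, in Python, of how a SecOps team might template the vague-versus-specific pattern instead of hand-typing prompts. The build_siem_prompt and call_llm helpers and the Splunk example value are hypothetical illustrations for this article, not anything Gartner or a vendor provides; only the prompt wording itself comes from Schmidt’s example.

```python
# Minimal sketch: templating a "specific" SIEM prompt in the style Schmidt
# recommends. build_siem_prompt() and call_llm() are hypothetical helpers
# invented for this illustration; only the prompt wording comes from the talk.

def build_siem_prompt(siem: str, hours: int = 24) -> str:
    """Expand a vague request into a scoped, explainable, formatted prompt."""
    return (
        f"Create a detection rule in {siem} to identify suspicious logins "
        f"from multiple locations within the last {hours} hours. "
        f"Provide the {siem} query language and explain the logic behind it, "
        "and place the explanations in tabular format."
    )

def call_llm(prompt: str) -> str:
    """Stub for whatever approved LLM client your team uses. Per Schmidt,
    public chatbots should only ever see noncritical data."""
    raise NotImplementedError("wire this to your approved LLM client")

if __name__ == "__main__":
    too_vague = "Create a query in my SIEM to identify suspicious logins"
    specific = build_siem_prompt(siem="Splunk", hours=24)  # example SIEM name
    print(specific)
    # Validate any model output before acting on it: per the talk, the answer
    # may only get you 70%-80% of the way, so a senior analyst should review
    # the generated rule before it goes anywhere near production.
```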

Other Examples of AI Security Prompts

Schmidt gave two more examples of good AI prompts, one on incident investigation and another on web application vulnerabilities. For security incident investigations, an effective prompt might be: “Provide a detailed explanation of incident DB2024-001. Include the timeline of events, methods used by the attacker and the impact on the organization. This information is needed for an internal investigation report. Produce the output in tabular form.” That prompt should lead to something like the following output:

[Figure: Incident response AI prompt output (source: Gartner)]

For web application vulnerabilities, Schmidt recommended the following approach: “Identify and list the top five vulnerabilities in our web application that could be exploited by attackers. Provide a brief description of each vulnerability and suggest mitigation steps. This will be used to prioritize our security patching efforts. Produce this in tabular format.” That should produce something like this output:

[Figure: Web application vulnerability prompt output (source: Gartner)]
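All five of Schmidt’s examples share a repeatable shape: state the task, bound the scope, say what the output is for, and pin the output format. Below is a hedged sketch of that pattern as a small reusable structure; the SecurityPrompt class and its field names are our own generalization of the examples, not something presented at the summit.

```python
# A small generalization of Schmidt's prompt examples: every effective prompt
# names the task, the scope, the purpose, and the required output format.
# The SecurityPrompt class is a hypothetical illustration, not Gartner's.

from dataclasses import dataclass

@dataclass
class SecurityPrompt:
    task: str       # what the model should do
    scope: str      # the data and time window it should cover
    purpose: str    # how the output will be used
    fmt: str = "Produce the output in tabular form."  # pinned output format

    def render(self) -> str:
        """Assemble the four parts into a single specific prompt."""
        return " ".join([self.task, self.scope, self.purpose, self.fmt])

# Rebuilding the incident-investigation example from the talk:
incident = SecurityPrompt(
    task="Provide a detailed explanation of incident DB2024-001.",
    scope=("Include the timeline of events, methods used by the attacker "
           "and the impact on the organization."),
    purpose="This information is needed for an internal investigation report.",
)

if __name__ == "__main__":
    print(incident.render())
```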

Tools for AI Security Assistants

Schmidt listed some of the GenAI tools that security teams might use, ranging from chatbots to SecOps AI assistants – such as CrowdStrike Charlotte AI, Microsoft Copilot for Security, SentinelOne Purple AI and Splunk AI – and startups such as AirMDR, Crogl, Dropzone and Radiant Security (see Schmidt’s slide below).

[Figure: GenAI tools for possible cybersecurity use (source: Gartner)]

Defending Against Persistent Phishing: A Real-World Case Study

2 June 2024 at 08:19

One of the scariest acronyms in a CISO’s knowledge base is APT – Advanced Persistent Threat. This term refers to someone who is determined to harm you and can do so in sophisticated ways. A colleague once taught me that the real threat isn’t just the adversary’s advanced tools, but their persistence. This means the […]

The post Defending Against Persistent Phishing: A Real-World Case Study appeared first on CybeReady.

The post Defending Against Persistent Phishing: A Real-World Case Study appeared first on Security Boulevard.

Using Scary but Fun Stories to Aid Cybersecurity Training – Source: securityboulevard.com


Source: securityboulevard.com – Author: Steve Winterfeld Security experts have many fun arguments about our field. For example, while I believe War Games is the best hacker movie, opinions vary based on age and generation. Other never-ending debates include what the best hack is, the best operating system (though this is more of a religious debate), […]

The post Using Scary but Fun Stories to Aid Cybersecurity Training – Source: securityboulevard.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Training LLMs: Questions Rise Over AI Auto Opt-In by Vendors – Source: www.govinfosecurity.com


Source: www.govinfosecurity.com – Author: Mathew J. Schwartz (euroinfosec) • May 21, 2024 – Few Restrictions Appear to Exist, Provided Companies Behave Transparently. Can individuals’ personal data and content be used by artificial intelligence firms to train their large language models without requiring users to opt in? […]

The post Training LLMs: Questions Rise Over AI Auto Opt-In by Vendors – Source: www.govinfosecurity.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.
