
How to opt out of Meta’s AI training

14 June 2024 at 04:57

MIT Technology Review’s How To series helps you get things done. 

If you post or interact with chatbots on Facebook, Instagram, Threads, or WhatsApp, Meta can use your data to train its generative AI models beginning June 26, according to its recently updated privacy policy. Even if you don’t use any of Meta’s platforms, it can still scrape data such as photos of you if someone else posts them.

Internet data scraping is one of the biggest fights in AI right now. Tech companies argue that anything on the public internet is fair game, but they are facing a barrage of lawsuits over their data practices and copyright. It will likely take years until clear rules are in place. 

In the meantime, they are running out of training data to build even bigger, more powerful models, and to Meta, your posts are a gold mine. 

If you’re uncomfortable with having Meta use your personal information and intellectual property to train its AI models in perpetuity, consider opting out. Although Meta does not guarantee it will allow this, it does say it will “review objection requests in accordance with relevant data protection laws.” 

What that means for US users

Users in the US or other countries without national data privacy laws have no foolproof way to prevent Meta from using their data to train AI, and that data has likely already been used for such purposes. Meta does not offer an opt-out feature for people living in these places. 

A spokesperson for Meta says it does not use the content of people’s private messages to each other to train AI. However, public social media posts are seen as fair game and can be hoovered up into AI training data sets by anyone. Users who don’t want that can set their account settings to private to minimize the risk. 

The company has built in-platform tools that allow people to delete their personal information from chats with Meta AI, the spokesperson says.

How users in Europe and the UK can opt out 

Users in the European Union and the UK, where strict data protection regimes apply, have the right to object to their data being scraped, so they can opt out more easily. 

If you have a Facebook account:

1. Log in to your account and open the new privacy policy. At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link.

Alternatively, you can click on your account icon at the top right-hand corner. Select “Settings and privacy” and then “Privacy center.” On the left-hand side you will see a drop-down menu labeled “How Meta uses information for generative AI models and features.” Click on that, and scroll down. Then click on “Right to object.” 

2. Fill in the form with your information. The form requires you to explain how Meta’s data processing affects you. I was successful in my request by simply stating that I wished to exercise my right under data protection law to object to my personal data being processed. You will likely have to confirm your email address. 

3. You should soon receive both an email and a notification on your Facebook account confirming whether your request has been successful. I received mine a minute after submitting the request.

If you have an Instagram account: 

1. Log in to your account. Go to your profile page, and click on the three lines at the top-right corner. Click on “Settings and privacy.”

2. Scroll down to the “More info and support” section, and click “About.” Then click on “Privacy policy.” At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link.

3. Repeat steps 2 and 3 of the Facebook instructions above. 

AI Prompt Engineering for Cybersecurity: The Details Matter


AI has been a major focus of the Gartner Security and Risk Management Summit in National Harbor, Maryland this week, and the consensus has been that while large language models (LLMs) have so far overpromised and under-delivered, there are still AI threats and defensive use cases that cybersecurity pros need to be aware of. Jeremy D’Hoinne, Gartner Research VP for AI & Cybersecurity, told conference attendees that hacker uses of AI so far include improved phishing and social engineering – with deepfakes a particular concern. But D’Hoinne and Director Analyst Kevin Schmidt agreed in a joint panel that no novel attack techniques have arisen from AI yet, just improvements on existing attack techniques like business email compromise (BEC) or voice scams. AI security tools likewise remain underdeveloped, with AI assistants perhaps the most promising cybersecurity application so far, able to potentially help with patching, mitigations, alerts and interactive threat intelligence. D’Hoinne cautions that the tools should be used as an adjunct to security staffers, so that the staffers don’t lose their ability to think critically.

AI Prompt Engineering for Cybersecurity: Precision Matters

Using AI assistants and LLMs for cybersecurity use cases was the focus of a separate presentation by Schmidt, who cautioned that AI prompt engineering needs to be very specific for security uses to overcome the limitations of LLMs, and even then the answer may only get you 70%-80% toward your goal. Outputs need to be validated, and junior staff will require the oversight of senior staff, who will more quickly be able to determine the significance of the output. Schmidt also cautioned that chatbots like ChatGPT should only be used for noncritical data.

Schmidt gave examples of good and bad AI security prompts for helping security operations teams. “Create a query in my <name of SIEM> to identify suspicious logins” is too vague, he said. He gave an example of a better way to craft a SIEM query: “Create a detection rule in <name of SIEM> to identify suspicious logins from multiple locations within the last 24 hours. Provide the <SIEM> query language and explain the logic behind it and place the explanations in tabular format.” That prompt should produce something like the following output:

[Image: SIEM query AI prompt output (source: Gartner)]

Analyzing firewall logs was another example. Schmidt gave the following as an example of an ineffective prompt: “Analyze the firewall logs for any unusual patterns or anomalies.” A better prompt would be: “Analyze the firewall logs from the past 24 hours and identify any unusual patterns or anomalies. Summarize your findings in a report format suitable for a security team briefing.” That produced the following output:

[Image: Firewall log prompt output (source: Gartner)]

Another example involved XDR tools. Instead of a weak prompt like “Summarize the top two most critical security alerts in a vendor’s XDR,” Schmidt recommended something along these lines: “Summarize the top two most critical security alerts in a vendor’s XDR, including the alert ID, description, severity and affected entities. This will be used for the monthly security review report. Provide the response in tabular form.” That prompt produced the following output:

[Image: XDR alert prompt output (source: Gartner)]
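All of Schmidt’s stronger prompts share the same skeleton: name the task, scope it in time, state the purpose, and pin down the output format. As a rough illustration of how a SecOps team might template that pattern, here is a minimal Python sketch. The build_siem_prompt helper, its parameters, the choice of Splunk, and the model name are all illustrative assumptions, not anything shown at the summit, and the OpenAI client call is just one assumed way to send the prompt to an LLM.

    # Minimal sketch of the "task + timeframe + purpose + format" prompt pattern.
    # The helper, its fields, and the model name are illustrative assumptions.
    from openai import OpenAI  # assumes the official openai package (v1+)

    def build_siem_prompt(siem: str, task: str, timeframe: str,
                          purpose: str, output_format: str) -> str:
        """Assemble a specific, scoped SIEM prompt instead of a vague one."""
        return (
            f"Create a detection rule in {siem} to {task} "
            f"within the last {timeframe}. {purpose} "
            f"Provide the {siem} query language, explain the logic behind it, "
            f"and place the explanations in {output_format}."
        )

    prompt = build_siem_prompt(
        siem="Splunk",  # hypothetical SIEM choice
        task="identify suspicious logins from multiple locations",
        timeframe="24 hours",
        purpose="This will be used for a security team briefing.",
        output_format="tabular format",
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

The point is not the particular wrapper but that every element Schmidt calls out (system, time window, purpose, and output format) lands in the prompt every time, rather than depending on an analyst remembering to type it.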

Other Examples of AI Security Prompts

Schmidt gave two more examples of good AI prompts, one on incident investigation and another on web application vulnerabilities. For security incident investigations, an effective prompt might be “Provide a detailed explanation of incident DB2024-001. Include the timeline of events, methods used by the attacker and the impact on the organization. This information is needed for an internal investigation report. Produce the output in tabular form.” That prompt should lead to something like the following output:

[Image: Incident response AI prompt output (source: Gartner)]

For web application vulnerabilities, Schmidt recommended the following approach: “Identify and list the top five vulnerabilities in our web application that could be exploited by attackers. Provide a brief description of each vulnerability and suggest mitigation steps. This will be used to prioritize our security patching efforts. Produce this in tabular format.” That should produce something like this output:

[Image: Web application vulnerability prompt output (source: Gartner)]
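Because Schmidt stresses that even good prompts only get you 70%-80% of the way and that outputs need validation before anyone acts on them, a team could add a cheap first-pass completeness check ahead of senior review. The sketch below is a hypothetical illustration in that vein, not a Gartner recommendation: it simply flags a response to the web-application prompt that never mentions the fields the prompt asked for.

    # Hypothetical first-pass check on LLM output before senior-analyst review.
    # The required-field list mirrors the web-application prompt above; both the
    # list and the sample response are assumptions for illustration.
    REQUIRED_FIELDS = ["vulnerability", "description", "mitigation"]

    def missing_fields(llm_output: str) -> list[str]:
        """Return any requested fields the model's answer never mentions."""
        text = llm_output.lower()
        return [field for field in REQUIRED_FIELDS if field not in text]

    sample_output = (
        "Vulnerability: SQL injection in the login form | "
        "Description: unsanitized input reaches the query layer"
    )
    gaps = missing_fields(sample_output)
    if gaps:
        print(f"Output incomplete; re-prompt or escalate. Missing: {gaps}")
    else:
        print("All requested fields present; queue for senior review.")

A substring check like this obviously cannot judge whether the suggested mitigations are sound; it only catches answers that silently drop a requested column, which is exactly the kind of gap a rushed junior analyst might miss.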

Tools for AI Security Assistants

Schmidt listed some of the GenAI tools that security teams might use, ranging from chatbots to SecOps AI assistants – such as CrowdStrike Charlotte AI, Microsoft Copilot for Security, SentinelOne Purple AI and Splunk AI – and startups such as AirMDR, Crogl, Dropzone and Radiant Security (see Schmidt’s slide below).

[Image: GenAI tools for possible cybersecurity use (source: Gartner)]

80+ Essential Command Prompt (CMD) Commands

17 February 2024 at 03:33

Windows’ celebrated CLI (command-line interpreter) is a treasure trove of hidden features, tools, and settings. Command Prompt lets you tap into every area of your operating system, from creating new folders to formatting internal and external storage. To help you navigate cmd.exe like a pro, we’ve prepared a comprehensive list of cool CMD commands to […]

The post 80+ Essential Command Prompt (CMD) Commands appeared first on Heimdal Security Blog.
