

Narrowing the Stubborn Cybersecurity Worker Gap

6 June 2024 at 16:12

There is still a significant gap between cybersecurity needs and available talent, according to Cyberseek, but all those tech industry layoffs are raising eyebrows. Organizations can expand the candidate pool by training people for these jobs rather than insisting on outside industry credentials.

The post Narrowing the Stubborn Cybersecurity Worker Gap appeared first on Security Boulevard.

AI Prompt Engineering for Cybersecurity: The Details Matter


AI has been a major focus of the Gartner Security and Risk Management Summit in National Harbor, Maryland this week, and the consensus has been that while large language models (LLMs) have so far overpromised and under-delivered, there are still AI threats and defensive use cases that cybersecurity pros need to be aware of. Jeremy D'Hoinne, Gartner Research VP for AI & Cybersecurity, told conference attendees that hacker uses of AI so far include improved phishing and social engineering, with deepfakes a particular concern. But D'Hoinne and Director Analyst Kevin Schmidt agreed in a joint panel that there haven't been any novel attack techniques arising from AI yet, just improvements on existing attack techniques like business email compromise (BEC) or voice scams. AI security tools likewise remain underdeveloped, with AI assistants perhaps the most promising cybersecurity application so far, able to potentially help with patching, mitigations, alerts and interactive threat intelligence. D'Hoinne cautions that the tools should be used as an adjunct to security staffers so they don't lose their ability to think critically.

AI Prompt Engineering for Cybersecurity: Precision Matters

Using AI assistants and LLMs for cybersecurity use cases was the focus of a separate presentation by Schmidt, who cautioned that AI prompt engineering needs to be very specific for security uses to overcome the limitations of LLMs, and even then the answer may only get you 70%-80% of the way toward your goal. Outputs need to be validated, and junior staff will require the oversight of senior staff, who can more quickly determine the significance of the output. Schmidt also cautioned that chatbots like ChatGPT should only be used for noncritical data.

Schmidt gave examples of good and bad AI security prompts for helping security operations teams. "Create a query in my <name of SIEM> to identify suspicious logins" is too vague, he said. He gave an example of a better way to craft a SIEM query: "Create a detection rule in <name of SIEM> to identify suspicious logins from multiple locations within the last 24 hours. Provide the <SIEM> query language and explain the logic behind it and place the explanations in tabular format." That prompt should produce something like the following output:

SIEM query AI prompt output (source: Gartner)

Analyzing firewall logs was another example. Schmidt gave the following as an example of an ineffective prompt: "Analyze the firewall logs for any unusual patterns or anomalies." A better prompt would be: "Analyze the firewall logs from the past 24 hours and identify any unusual patterns or anomalies. Summarize your findings in a report format suitable for a security team briefing." That produced the following output:

Firewall log prompt output (source: Gartner)

Another example involved XDR tools. Instead of a weak prompt like "Summarize the top two most critical security alerts in a vendor's XDR," Schmidt recommended something along these lines: "Summarize the top two most critical security alerts in a vendor's XDR, including the alert ID, description, severity and affected entities. This will be used for the monthly security review report. Provide the response in tabular form." That prompt produced the following output:

XDR alert prompt output (source: Gartner)
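Schmidt's strong prompts share a repeatable structure: a concrete task, the target tool, a bounded timeframe, a request for reasoning, and an explicit output format. As an illustration only (the function and parameter names below are hypothetical, not anything Gartner presented), that pattern could be captured in a small template builder:

```python
# Hypothetical sketch of Schmidt's prompt-specificity pattern as a template.
# All names here are illustrative assumptions, not from the Gartner session.

def build_siem_prompt(task: str, siem: str, timeframe: str,
                      output_format: str, explain: bool = True) -> str:
    """Assemble a security prompt from the elements a strong prompt needs:
    a concrete task, the target tool, a bounded timeframe, and an
    explicit output format, optionally asking the model to show its logic."""
    parts = [
        f"{task} in {siem} within the last {timeframe}.",
        f"Provide the {siem} query language.",
    ]
    if explain:
        parts.append("Explain the logic behind it.")
    parts.append(f"Place the output in {output_format} format.")
    return " ".join(parts)

prompt = build_siem_prompt(
    task="Create a detection rule to identify suspicious logins from multiple locations",
    siem="<name of SIEM>",
    timeframe="24 hours",
    output_format="tabular",
)
print(prompt)
```

The point of templating is that the specificity elements (timeframe, tool, format) can't be silently omitted the way they are in an ad-hoc chat prompt.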

Other Examples of AI Security Prompts

Schmidt gave two more examples of good AI prompts, one on incident investigation and another on web application vulnerabilities. For security incident investigations, an effective prompt might be: "Provide a detailed explanation of incident DB2024-001. Include the timeline of events, methods used by the attacker and the impact on the organization. This information is needed for an internal investigation report. Produce the output in tabular form." That prompt should lead to something like the following output:

Incident response AI prompt output (source: Gartner)

For web application vulnerabilities, Schmidt recommended the following approach: "Identify and list the top five vulnerabilities in our web application that could be exploited by attackers. Provide a brief description of each vulnerability and suggest mitigation steps. This will be used to prioritize our security patching efforts. Produce this in tabular format." That should produce something like this output:

Web application vulnerability prompt output (source: Gartner)

Tools for AI Security Assistants

Schmidt listed some of the GenAI tools that security teams might use, ranging from chatbots to SecOps AI assistants, such as CrowdStrike Charlotte AI, Microsoft Copilot for Security, SentinelOne Purple AI and Splunk AI, and startups such as AirMDR, Crogl, Dropzone and Radiant Security (see Schmidt's slide below).

GenAI tools for possible cybersecurity use (source: Gartner)

Using Scary but Fun Stories to Aid Cybersecurity Training – Source: securityboulevard.com


Source: securityboulevard.com – Author: Steve Winterfeld Security experts have many fun arguments about our field. For example, while I believe War Games is the best hacker movie, opinions vary based on age and generation. Other never-ending debates include what the best hack is, the best operating system (though this is more of a religious debate), […]

The entry Using Scary but Fun Stories to Aid Cybersecurity Training – Source: securityboulevard.com was published first on CISO2CISO.COM & CYBER SECURITY GROUP.
