![AI prompt engineering for security](../themes/icons/grey.gif)
AI has been a major focus of the Gartner Security and Risk Management Summit in National Harbor, Maryland this week, and the consensus has been that while large language models (LLMs) have so far overpromised and under-delivered, there are still AI threats and defensive use cases that cybersecurity pros need to be aware of.
Jeremy D’Hoinne, Gartner Research VP for AI & Cybersecurity, told conference attendees that hacker uses of AI so far include improved phishing and social engineering, with deepfakes a particular concern.
But D’Hoinne and Director Analyst Kevin Schmidt agreed in a joint panel that no novel attack techniques have arisen from AI yet, only improvements on existing techniques such as business email compromise (BEC) and voice scams.
AI security tools likewise remain underdeveloped, with AI assistants perhaps the most promising
cybersecurity application so far, potentially able to help with patching, mitigations, alerts and interactive threat intelligence. D’Hoinne cautions that the tools should be used as an adjunct to security staffers, so they don’t lose their ability to think critically.
AI Prompt Engineering for Cybersecurity: Precision Matters
Using AI assistants and LLMs for cybersecurity use cases was the focus of a separate presentation by Schmidt, who cautioned that AI prompts need to be very specific for security uses to overcome the limitations of LLMs, and even then the answer may only get you 70% to 80% of the way toward your goal. Outputs need to be validated, and junior staff will require the oversight of senior staff, who can more quickly judge the significance of the output. Schmidt also cautioned that chatbots like ChatGPT should only be used with noncritical data.
Schmidt gave examples of good and bad AI security prompts for helping security operations teams.
“Create a query in my <name of SIEM> to identify suspicious logins” is too vague, he said.
He gave an example of a better way to craft a SIEM query: “Create a detection rule in <name of SIEM> to identify suspicious logins from multiple locations within the last 24 hours. Provide the <SIEM> query language and explain the logic behind it and place the explanations in tabular format.”
That prompt should produce something like the following output:
[caption id="attachment_75212" align="alignnone" width="300"]
![SIEM query AI prompt output](../themes/icons/grey.gif)
SIEM query AI prompt output (source: Gartner)[/caption]
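The detection logic such a rule encodes can be sketched in Python. A real SIEM would express this in its own query language (SPL, KQL and so on), and the event fields below are assumptions for illustration, not a vendor schema:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical login events; in practice the SIEM supplies these from its index.
events = [
    {"user": "alice", "src_country": "US", "time": datetime(2024, 6, 3, 9, 0)},
    {"user": "alice", "src_country": "RO", "time": datetime(2024, 6, 3, 9, 20)},
    {"user": "bob", "src_country": "US", "time": datetime(2024, 6, 3, 10, 0)},
]

def suspicious_logins(events, window=timedelta(hours=24), min_locations=2):
    """Flag users who logged in from multiple countries within the window."""
    cutoff = max(e["time"] for e in events) - window
    locations = defaultdict(set)
    for e in events:
        if e["time"] >= cutoff:
            locations[e["user"]].add(e["src_country"])
    return {user: sorted(locs) for user, locs in locations.items()
            if len(locs) >= min_locations}

print(suspicious_logins(events))  # flags alice, seen from two countries
```

The point of the more specific prompt is that it pins down the window (24 hours), the signal (multiple locations) and the output format, which is exactly what this sketch makes explicit.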
Analyzing firewall logs was another example. Schmidt gave the following as an example of an ineffective prompt: “Analyze the firewall logs for any unusual patterns or anomalies.”
A better prompt would be: “Analyze the firewall logs from the past 24 hours and identify any unusual patterns or anomalies. Summarize your findings in a report format suitable for a security team briefing.”
That produced the following output:
[caption id="attachment_75210" align="alignnone" width="300"]
![Firewall log prompt output](../themes/icons/grey.gif)
Firewall log prompt output (source: Gartner)[/caption]
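The kind of anomaly summary the sharper prompt asks for can be sketched as follows. The log lines and the deny-count threshold are illustrative assumptions; a real analysis would parse the firewall's actual log format:

```python
from collections import Counter

# Hypothetical parsed firewall records as (action, source IP) pairs.
log = [
    ("DENY", "203.0.113.7"), ("DENY", "203.0.113.7"), ("DENY", "203.0.113.7"),
    ("ALLOW", "198.51.100.2"), ("DENY", "198.51.100.2"),
]

def deny_outliers(log, threshold=3):
    """Summarize source IPs whose denied-connection count meets the threshold."""
    denies = Counter(ip for action, ip in log if action == "DENY")
    return [f"{ip}: {n} denied connections" for ip, n in denies.most_common()
            if n >= threshold]

for finding in deny_outliers(log):
    print(finding)
```

Bounding the time window and specifying a briefing-ready summary, as the better prompt does, maps directly onto choosing the data slice and output format here.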
Another example involved XDR tools. Instead of a weak prompt like “Summarize the top two most critical security alerts in a vendor’s XDR,” Schmidt recommended something along these lines: “Summarize the top two most critical security alerts in a vendor’s XDR, including the alert ID, description, severity and affected entities. This will be used for the monthly security review report. Provide the response in tabular form.”
That prompt produced the following output:
[caption id="attachment_75208" align="alignnone" width="300"]
![XDR alert prompt output](../themes/icons/grey.gif)
XDR alert prompt output (source: Gartner)[/caption]
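The tabular alert summary that prompt requests can be sketched like this. The alert records and field names are assumptions for illustration, not any vendor's XDR schema:

```python
# Hypothetical alert records with the fields the prompt asks for.
alerts = [
    {"id": "A-101", "description": "Credential dumping detected", "severity": 9, "entity": "host-12"},
    {"id": "A-102", "description": "Suspicious PowerShell activity", "severity": 7, "entity": "host-03"},
    {"id": "A-103", "description": "External port scan", "severity": 4, "entity": "fw-01"},
]

def top_alerts_table(alerts, n=2):
    """Return the n highest-severity alerts as rows of a review table."""
    top = sorted(alerts, key=lambda a: a["severity"], reverse=True)[:n]
    header = f"{'ID':<6} {'Severity':<9} {'Entity':<8} Description"
    rows = [f"{a['id']:<6} {a['severity']:<9} {a['entity']:<8} {a['description']}"
            for a in top]
    return [header] + rows

print("\n".join(top_alerts_table(alerts)))
```

Naming the exact fields (alert ID, description, severity, affected entities) and the purpose of the report is what lets the model, or this sketch, produce a table the review meeting can use directly.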
Other Examples of AI Security Prompts
Schmidt gave two more examples of good AI prompts, one on incident investigation and another on web application
vulnerabilities.
For security incident investigations, an effective prompt might be “Provide a detailed explanation of incident DB2024-001. Include the timeline of events, methods used by the attacker and the impact on the organization. This information is needed for an internal investigation report. Produce the output in tabular form.”
That prompt should lead to something like the following output:
[caption id="attachment_75206" align="alignnone" width="300"]
![Incident response prompt output](../themes/icons/grey.gif)
Incident response AI prompt output (source: Gartner)[/caption]
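The tabular incident timeline that prompt calls for can be sketched as below. The incident ID comes from the prompt above, but the events and phases are invented for illustration:

```python
# Hypothetical incident record; the event details are illustrative only.
incident = {
    "id": "DB2024-001",
    "events": [
        ("2024-06-01 02:14", "Initial access", "Phishing email opened by finance user"),
        ("2024-06-01 02:40", "Execution", "Macro dropped a loader on the workstation"),
        ("2024-06-01 03:05", "Impact", "Database credentials exfiltrated"),
    ],
}

def timeline_rows(incident):
    """Render the incident timeline as pipe-delimited rows for a report table."""
    rows = ["Time | Phase | Detail"]
    rows += [f"{t} | {phase} | {detail}" for t, phase, detail in incident["events"]]
    return rows

print("\n".join(timeline_rows(incident)))
```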
For web application vulnerabilities, Schmidt recommended the following approach: “Identify and list the top five vulnerabilities in our web application that could be exploited by attackers. Provide a brief description of each vulnerability and suggest mitigation steps. This will be used to prioritize our security patching efforts. Produce this in tabular format.”
That should produce something like this output:
[caption id="attachment_75205" align="alignnone" width="300"]
![Application vulnerability prompt output](../themes/icons/grey.gif)
Web application vulnerability prompt output (source: Gartner)[/caption]
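The prioritization that prompt drives can be sketched as a severity-ranked table. The findings below are illustrative (the category names loosely follow OWASP conventions, as an assumption), not output from a real scan:

```python
# Hypothetical vulnerability findings with suggested mitigations.
findings = [
    {"name": "Stored XSS", "severity": "High", "mitigation": "Encode output, set a CSP"},
    {"name": "SQL injection", "severity": "Critical", "mitigation": "Use parameterized queries"},
    {"name": "Broken access control", "severity": "High", "mitigation": "Enforce server-side authorization"},
]

SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def patch_priority(findings, top_n=5):
    """Rank findings by severity to order the patching queue."""
    ranked = sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])[:top_n]
    return [f"{f['name']} | {f['severity']} | {f['mitigation']}" for f in ranked]

for row in patch_priority(findings):
    print(row)
```

Stating the intended use ("prioritize our security patching efforts") is what justifies the severity ordering, which is the sorting step made explicit here.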
Tools for AI Security Assistants
Schmidt listed some of the GenAI tools that security teams might use, ranging from chatbots to SecOps AI assistants such as CrowdStrike Charlotte AI, Microsoft Copilot for Security, SentinelOne Purple AI and Splunk AI, as well as startups such as AirMDR, Crogl, Dropzone and Radiant Security (see Schmidt’s slide below).
[caption id="attachment_75202" align="alignnone" width="300"]
![GenAI security assistants](../themes/icons/grey.gif)
GenAI tools for possible cybersecurity use (source: Gartner)[/caption]