
AzzaSec Reveals Advanced Windows Ransomware Builder, Threatens Cybersecurity


Hacktivist group AzzaSec has announced the release of a Windows ransomware builder, posted to the group's Telegram channel on June 23, 2024. Built in .NET, the malware combines SHA-512 hashing and AES encryption and is advertised as fully undetectable (FUD), registering only a single detection on KleenScan. AzzaSec claims the ransomware can bypass major antivirus solutions, including Windows 10/11 Defender, Avast, Kaspersky, and AVG. Beyond its encryption capabilities, the builder includes anti-virtual-machine, anti-debugging, and anti-sandbox features, demonstrated in a demo video shared alongside the announcement. The video shows decryption keys and victim information being stored on a centralized command-and-control (C2) server.

AzzaSec Announces New Windows Ransomware Builder

AzzaSec Announces New Windows Ransomware Builder (Source: Dark Web)

Pricing for AzzaSec's ransomware varies from $300 for a single-use stub to a subscription costing up to $4,500 for six months. The source code for the Windows ransomware builder is also available for purchase at a steep $8,000.

The development of AzzaSec's ransomware marks a further step in the evolution of ransomware-as-a-service (RaaS). This model not only arms threat actors with turnkey tools but also commodifies cyber extortion, potentially increasing the frequency and impact of ransomware attacks globally. The group's announcement reflects a growing trend of malicious actors pairing sophisticated technology with monetization strategies to maximize their impact on unsuspecting victims. As cybersecurity defenses evolve, so do the tactics of those seeking illicit gains through digital means.

Features and Functionality of the Windows Ransomware Builder

In their Telegram post, AzzaSec described the ransomware's capabilities in detail. Developed in VB.NET and weighing in at 10 MB, the ransomware uses a unique encryption algorithm and boasts a fully undetectable structure, with a detection rate of only 1 out of 40 on KleenScan. AzzaSec claims it has been tested against security solutions including Windows Defender, Avast, Kaspersky, and AVG.

The ransomware works by connecting to a C2 server, where decryption keys and device information are stored, allowing the threat actors to monitor and control its impact remotely. It also includes anti-virtual-machine, anti-debugging, and anti-sandbox features, making it resilient against common security countermeasures.

AzzaSec also outlined its pricing: $300 for a single-use stub, escalating to $4,500 for a six-month subscription. For those seeking full control, the source code is available for $8,000, enabling other threat actors to customize and deploy the ransomware independently.

AzzaSec's emergence onto the ransomware scene is a reminder for organizations and individuals alike to upgrade their cybersecurity measures and remain vigilant against online threats. As ransomware-as-a-service models become more accessible, preemptive cybersecurity measures and incident response plans are essential defenses against these ever-present dangers.

Social Media Warning Labels, Should You Store Passwords in Your Web Browser?

By: Tom Eston
24 June 2024 at 00:00

In this episode of the Shared Security Podcast, the team debates the Surgeon General’s recent call for social media warning labels and explores the pros and cons. Scott discusses whether passwords should be stored in web browsers, potentially sparking strong opinions. The hosts also provide an update on Microsoft’s delayed release of CoPilot Plus PCs […]


Deconstructing Logon Session Enumeration

21 June 2024 at 14:18

Purple Teaming

How we define and create test cases for our purple team runbooks

Intro

In our purple team service, we take a depth-and-quality approach, running many functionally diverse test cases for a given technique. In this blog, I will describe our process for defining and implementing test cases for our purple team runbooks. The goal of this post is to give the community more insight into how we implement test cases for logon session enumeration, what the relevant preventative controls might be, and how this process can be applied to other techniques.

Defining Unique Test Cases

We wanted to develop a logical numbering system to separate test cases for each technique. After a couple of iterations of our purple team service, we started to deliberately select test cases and run variations based on three distinct categories:

  1. Distinct Procedures: Jared defines this as “a sequence of operations that, when combined, implement a technique or sub-technique.” We attempt to deconstruct tools that implement the technique to find functional differences, whether that tool is open-source or a Microsoft binary. This can require reverse engineering or reviewing source code to reveal what the tool is doing under the hood. It also might involve writing or altering existing tooling to meet your needs. An example of this can be found in part 1 of Jared’s blog On Detection: Tactical to Functional, where he reviews the source code of Mimikatz’s sekurlsa::logonPasswords module. If the tool implements a unique set of operations in the call graph, then we define that as a distinct procedure.
  2. Execution Modality: We then alter the execution modality, which changes how the set of functions is implemented. This is outlined in part 12 of Jared’s blog On Detection: Tactical to Functional: “one tool that is built into the operating system (Built-in Console Application), a tool that had to be dropped to disk (Third-Party Console Application), a tool that could run in PowerShell’s memory (PowerShell Script), a tool that runs in the memory of an arbitrary process (Beacon Object File), and a tool that can run via a proxy without ever touching the subject endpoint (Direct RPC Request)”. This variation helps us determine whether running the same distinct procedure through a different execution mechanism (Beacon Object File, unmanaged PowerShell, etc.), or implementing it in a different programming language (C, Python, PowerShell, etc.), changes whether your security controls detect or prevent it.
  3. Minor Variations: Finally, we introduce slight variations to alter the payload, target user, computer, or process, depending on the technique we are working on. In the case of logon session enumeration, we alter local vs. remote logon sessions and the machine we are targeting (e.g., file server, workstation). During purple team assessments, we often choose this variation based on the organization’s environmental factors. For other techniques, these environmental factors normally include choosing which account to Kerberoast or which process to inject into.

Defining test cases in this manner allows us to triangulate a technique’s coverage estimation rather than treat the techniques in the MITRE ATT&CK matrix as a bingo card where we run net session and net1 session, fill in the box for this technique, and move on to the next one. After running each test case during the purple team assessment, we look for whether the test case was prevented, detected, or observed (telemetry) by any security controls the organization may have.

Deconstructing Distinct Logon Session Enumeration Procedures

Let’s dive into logon session enumeration by deconstructing the functional differences between three distinct procedures. If you want to learn more (or want to apply this methodology yourself), you can find out more about the process we use to examine the function call stack of tools in Nathan’s Beyond Procedures: Digging into the Function Call Stack and Jared’s On Detection: Tactical to Functional series.

We can start by examining the three distinct procedures that SharpHound implements. Rohan blogged about the three different methods SharpHound uses. SharpHound can attempt to use all three depending on the context it’s running under and what arguments are passed to it. The implementation of each procedure can be found here: NetSessionEnum, NetWkstaEnum, and GetSubKeyNames in the SharpHoundCommon library. Matt also talks about this in his BOFHound: Session Integration blog.

Here is a breakdown of each of the three unique procedures implemented in SharpHound for remote session enumeration:

Distinct Procedure #1: Network Session Enumeration (NetSessionEnum)

NetSessionEnum is a Win32 API implemented in netapi32.dll. The image below shows where each tool is implemented in the function call stack:

NetSessionEnum Function Call Graph

This Win32 API returns a list of active remote (network) logon sessions. These two blogs (Netwrix and Compass Security) go into detail about which operating systems allow “Authenticated Users” to query logon sessions, and how to check and restrict remote access to this API by altering the security descriptor in the HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\DefaultSecurity\SrvsvcSessionInfo registry key. If we read Microsoft’s documentation on the RPC server, we see the MS-SRVS RPC server is only implemented via the \PIPE\srvsvc named pipe (RPC servers in general are also commonly exposed via TCP). As Microsoft’s documentation states, named pipes communicate over CIFS\SMB via port 445.
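To make the call concrete, here is a minimal sketch of invoking NetSessionEnum directly via ctypes; this is my own illustration rather than SharpHound's code, it assumes a Windows host, and the target server name is a placeholder. Information level 10 returns the client name and username for each session.

```python
import ctypes
from ctypes import wintypes

class SESSION_INFO_10(ctypes.Structure):
    _fields_ = [
        ("sesi10_cname", wintypes.LPWSTR),      # client computer name
        ("sesi10_username", wintypes.LPWSTR),   # user who established the session
        ("sesi10_time", wintypes.DWORD),        # seconds active
        ("sesi10_idle_time", wintypes.DWORD),   # seconds idle
    ]

def enum_sessions(server=r"\\fileserver01"):  # placeholder target server
    netapi32 = ctypes.WinDLL("netapi32")
    buf = ctypes.POINTER(SESSION_INFO_10)()
    read, total, resume = wintypes.DWORD(), wintypes.DWORD(), wintypes.DWORD()
    # prefmaxlen of MAX_PREFERRED_LENGTH (0xFFFFFFFF) lets the API size the buffer
    status = netapi32.NetSessionEnum(
        server, None, None, 10, ctypes.byref(buf),
        ctypes.c_ulong(0xFFFFFFFF),
        ctypes.byref(read), ctypes.byref(total), ctypes.byref(resume))
    if status != 0:  # NERR_Success == 0
        raise OSError(f"NetSessionEnum failed with status {status}")
    sessions = [(buf[i].sesi10_cname, buf[i].sesi10_username)
                for i in range(read.value)]
    netapi32.NetApiBufferFree(buf)  # free the API-allocated buffer
    return sessions

if __name__ == "__main__":
    for cname, username in enum_sessions():
        print(f"{username} @ {cname}")
```

Under the hood, this call crosses the same \PIPE\srvsvc named pipe described above, which is exactly why the function call graph matters more for detection than whichever tool wraps it.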

In our purple team service, we usually target the organization’s most active file server for two reasons. First, port 445 (SMB) will generally be open from everywhere on the internal network for this server. Second, this server has the most value to an attacker since it could contain hundreds or even thousands of user-to-machine mappings an attacker could use for “user hunting.”

Distinct Procedure #2: Interactive, Service, and Batch Logon Session Enumeration (NetWkstaUserEnum)

NetWkstaUserEnum is also a Win32 API implemented in netapi32.dll. Below is the breakdown of the function call stack and where each tool is implemented:

NetWkstaUserEnum Function Call Graph

As Microsoft documentation says: “This list includes interactive, service, and batch logons” and “Members of the Administrators, and the Server, System, and Print Operator local groups can also view information.” This API call has different permission requirements and returns a different set of information than the NetSessionEnum API call; however, just like NetSessionEnum, the RPC server is implemented only via the \PIPE\wkssvc named pipe. Again, this blog from Compass Security goes into more detail about the requirements.

Since this API, by default, requires administrator or other privileged rights on the target machine, we will again attempt to target file servers and will usually get an access-denied response when running this procedure. This raises a question for detection engineers: if someone attempts to enumerate sessions and fails, do we still have telemetry? Next, we will target a workstation on which we do have administrator rights to enumerate sessions, using this minor variation in a different test case.
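The call pattern mirrors the NetSessionEnum sketch above; a minimal variation (again my own illustration, with a placeholder target) swaps in NetWkstaUserEnum at information level 1, whose WKSTA_USER_INFO_1 structure returns the username, logon domain, and logon server.

```python
import ctypes
from ctypes import wintypes

class WKSTA_USER_INFO_1(ctypes.Structure):
    _fields_ = [
        ("wkui1_username", wintypes.LPWSTR),      # logged-on user
        ("wkui1_logon_domain", wintypes.LPWSTR),  # domain logged on to
        ("wkui1_oth_domains", wintypes.LPWSTR),   # other operating domains
        ("wkui1_logon_server", wintypes.LPWSTR),  # authenticating server
    ]

def enum_logged_on(server=r"\\workstation01"):  # placeholder target we admin
    netapi32 = ctypes.WinDLL("netapi32")
    buf = ctypes.POINTER(WKSTA_USER_INFO_1)()
    read, total, resume = wintypes.DWORD(), wintypes.DWORD(), wintypes.DWORD()
    status = netapi32.NetWkstaUserEnum(
        server, 1, ctypes.byref(buf), ctypes.c_ulong(0xFFFFFFFF),
        ctypes.byref(read), ctypes.byref(total), ctypes.byref(resume))
    if status != 0:  # expect 5 (ERROR_ACCESS_DENIED) without privileged rights
        raise OSError(f"NetWkstaUserEnum failed with status {status}")
    users = [(buf[i].wkui1_logon_domain, buf[i].wkui1_username)
             for i in range(read.value)]
    netapi32.NetApiBufferFree(buf)
    return users
```

Even the access-denied failure path is useful here: it generates the same \PIPE\wkssvc traffic a defender should be able to observe.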

Distinct Procedure #3: Interactive Session Enumeration (RegEnumKeyExW)

Note: I’m only showing the function call stack of RegEnumKeyExW; SharpHound first calls OpenRemoteBaseKey to get a handle to the remote key before calling RegEnumKeyExW. I have also left calls to API sets out of this graph.

RegEnumKeyExW is, again, a Win32 API implemented in advapi32.dll. Below is the breakdown of the function call stack and where each tool is implemented:

RegEnumKeyExW Function Call Graph

As Microsoft’s documentation says, remote use of this API “requires the Remote Registry service to be running on the remote computer.” Again, this blog from Compass Security goes into more detail about the requirements: the service is disabled by default on workstation operating systems like Windows 10 and 11, and set to trigger-start on server operating systems when a client interacts with the \PIPE\winreg named pipe. If the Remote Registry service is running (or triggerable), the HKEY_USERS hive can be queried for a list of subkeys. These subkeys contain the SIDs of users who are interactively logged on. Like NetWkstaUserEnum and NetSessionEnum, the RPC server is implemented only via the \PIPE\winreg named pipe.
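Python’s standard winreg module wraps the same underlying calls (RegConnectRegistry to bind to the remote hive, then RegEnumKeyExW via EnumKey), so a minimal sketch of this procedure might look like the following; the computer name is a placeholder, and the SID filtering heuristic is my own simplification.

```python
import winreg

def enum_interactive_sessions(computer=r"\\workstation01"):  # placeholder target
    # Binding to the remote HKEY_USERS hive talks to \PIPE\winreg and can
    # trigger-start the Remote Registry service on server operating systems
    hive = winreg.ConnectRegistry(computer, winreg.HKEY_USERS)
    sids, i = [], 0
    while True:
        try:
            name = winreg.EnumKey(hive, i)  # RegEnumKeyExW under the hood
        except OSError:  # no more subkeys
            break
        # Interactively logged-on domain users appear as S-1-5-21-... subkeys;
        # skip well-known SIDs and the *_Classes helper keys
        if name.startswith("S-1-5-21-") and not name.endswith("_Classes"):
            sids.append(name)
        i += 1
    winreg.CloseKey(hive)
    return sids

if __name__ == "__main__":
    for sid in enum_interactive_sessions():
        print(sid)  # resolve SIDs to usernames separately, e.g., via LDAP
```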

Putting it all Together with Test Cases

Now that we have a diverse set of procedures and tooling examples that use a variety of execution modalities, we can start creating test cases to run for this technique. Below, I have included an example set of test cases and associated numbering system using each of the three distinct procedures and altering the execution modality for each one.

You can also find a full TOML runbook for the examples below here: https://ghst.ly/session-enumeration-runbook. All of the test cases are free or open source and can be executed via an Apollo agent with the Mythic C2 framework.

For example, our numbering looks like: Test Case X.Y.Z

  • X — Distinct Procedure
  • Y — Execution Modality
  • Z — Minor Variation

A sample set of test cases we might include:

Network Session Enumeration (NetSessionEnum)

  • Test Case 1.0.0 — Enumerate SMB Sessions From Third-Party Utility On Disk (NetSess)
  • Test Case 1.1.0 — Enumerate SMB Sessions via Beacon Object File (BOF) — get-netsession
  • Test Case 1.2.0 — Enumerate SMB Sessions via PowerView’s Get-NetSession
  • Test Case 1.3.0 — Enumerate SMB Sessions via Proxied RPC

Interactive, Service, and Batch Logon Session Enumeration (NetWkstaUserEnum)

  • Test Case 2.0.0 — Enumerate Interactive, Service, and Batch Logon Sessions from BOF (netloggedon) — Server
  • Test Case 2.0.1 — Enumerate Interactive, Service, and Batch Logon Sessions from BOF (netloggedon) — Workstation
  • Test Case 2.1.0 — Enumerate Interactive, Service, and Batch Logon Sessions from Impacket (netloggedon.py)
  • Test Case 2.2.0 — Enumerate Interactive, Service, and Batch Logon Sessions via PowerView’s Get-NetLoggedOn

Interactive Session Enumeration (RegEnumKeyExW)

  • Test Case 3.0.0 — Enumerate Interactive Sessions via reg_query BOF (Server)
  • Test Case 3.0.1 — Enumerate Interactive Logon Sessions via reg_query BOF (Workstation)
  • Test Case 3.1.0 — Enumerate Interactive Sessions from Impacket (reg.py)

After executing each test case, we can determine whether it was prevented, detected, or observed. Tracking this information allows us to provide feedback on an organization’s controls and predict how likely they are to detect or prevent an adversary’s arbitrary choice of procedure or execution modality. We also space test cases about 10 minutes apart; name artifacts like files, registry keys, and processes after their corresponding test case number; and alternate the machine and source user we execute from, all to make finding observable telemetry easier. We may include or exclude certain test cases based on the organization’s security controls. For example, if they block and alert on all powershell.exe usage, we aren’t going to run 40 test cases across multiple techniques that attempt to call the PowerShell binary.

Conclusion

By researching and deconstructing each tool and looking at the underlying function call stacks, we found that regardless of which distinct procedure or execution modality was used, everything flowed through one of three RPC servers, each implemented over a named pipe. This allows us to triangulate detection coverage and helps determine whether a custom or vendor-based rule is looking for a brittle indicator or a tool-specific detail/toolmark.

We now have a fairly broad set of test cases for a runbook that accounts for a wide variety of attacker tradecraft for this technique. Knowing this as a blue teamer or detection engineer allows us to implement a much more comprehensive detection strategy for this particular technique, built around the three named pipes we discovered. This lets us write robust detection rules rather than looking for the string “Get-NetSession” in a PowerShell script. Would this produce a perfect detection for session enumeration? No. Does this cover every single way an attacker can determine where a user is logged in? No. Does deconstructing adversary tradecraft in this manner vastly improve our coverage for the technique? Absolutely.

In my next post, I will cover many log sources native to Windows (I’m counting Sysmon as native) and a couple of EDRs that allow us to detect logon session enumeration via named pipes (or TCP in some cases). Some of these sources you might be familiar with; others aren’t very well documented. Each of these log sources can be enabled and shipped to a centralized place like a SIEM. Each source has its own requirements, provides different context, and has its pros and cons for use in a detection rule.


EU Aims to Ban Math — ‘Chat Control 2.0’ Law is Paused but not Stopped

20 June 2024 at 12:43
“Oh, won’t somebody please think of the children?”

Ongoing European Union quest to break end-to-end encryption (E2EE) mysteriously disappears.


How to secure non-human identities? with Andrew Wilder and Amir Shaked

19 June 2024 at 03:38

This blog is based on our conversation with Andrew Wilder, Retained Chief Security Officer at Community Veterinary Partners and Amir Shaked, VP of R&D at Oasis Security. It covers the unique challenges of securing non-human identities.


Tile/Life360 Breach: ‘Millions’ of Users’ Data at Risk

13 June 2024 at 13:28
Life360 CEO Chris Hulls

Location tracking service leaks PII, because—incompetence? Seems almost TOO easy.


Ticketmaster Data Breach and Rising Work from Home Scams

By: Tom Eston
10 June 2024 at 00:00

In episode 333 of the Shared Security Podcast, Tom and Scott discuss a recent massive data breach at Ticketmaster involving the data of 560 million customers, the blame game between Ticketmaster and third-party provider Snowflake, and the implications for both companies. Additionally, they discuss Live Nation’s ongoing monopoly investigation. In the ‘Aware Much’ segment, the […]


Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20
Microsoft CEO Satya Nadella, with superimposed text: “Security”

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.


Digital natives are not cybersecurity natives

6 June 2024 at 03:59

At the TurkuSec meetup in April, I had the opportunity to share my insights on a pressing issue we’ve been researching lately at F-Secure: the cybersecurity challenges faced by digital natives. These are individuals who have grown up with fast internet and personal screens, making them uniquely vulnerable to online threats. Our research highlights some concerning …


One Phish Two Phish, Red Teams Spew Phish

4 June 2024 at 13:52

PHISHING SCHOOL

How to Give your Phishing Domains a Reputation Boost

“Armed with the foreknowledge of my own death, I knew the giant couldn’t kill me. All the same, I preferred to keep my bones unbroken” — Big Phish

When we send out our phishing emails, we are reckoning with giants. Spamhaus, SpamAssassin, SpamTitan, Barracuda, and many more giants wish to grind your bones to bake their bread. They are big. They are scary. But they don’t catch everything. Just like Edward Bloom learned, the best way to deal with giants is to make a good first impression.

What’s your name, giant? — Karl

That’s how we will deal with these giants as well. Let’s talk about proper etiquette when speaking with giants.

“He Ate My Dog”

The first opportunity a mail server has to reject your email is right at the beginning of an SMTP exchange. The server can run checks on the reputation of both your sending IP and the domain you claim in your FROM address. In general, the main controls that will get you blocked by a good secure email gateway (SEG) are some version of the following:

  • Does your sending IP show up on a known SPAM list like Spamhaus?
  • Is the geographical location of the sending IP in another country?
  • What’s the reputation/category of the sending domain?
  • What’s the age of the sending domain?
  • Does SPF fail?

He ate my dog

Of course, all of these checks are generally configurable, and not every SEG will perform all of them. These are just the controls I most often find to be the root cause of a failure at this stage in the phish’s lifecycle. None of them is really all that difficult to address or bypass, so by taking them into account, we can usually ensure our phishing campaigns are not blocked at this layer.

Bypassing Sender Reputation Checks

When we attempt to deliver our message, we will be connecting to a mail server for the target email’s domain, telling it who we are and who we want to send a message to, and then giving it the contents of the message. Here is an example of what that conversation looks like under the hood with data we control in blue and server responses in green:

SMTP Greetings

In general, IP address blocks are easy for us to troubleshoot, because if our sending IP is blocked, the server will typically just terminate the TCP connection and we won’t even be able to start this SMTP call and response. It’s pretty rare to be blocked by IP right off the bat; when it happens, it typically means you’re sending from an IP that is associated with sending a lot of SPAM, or the GeoIP result for your sending IP is in a blocked country or region. If that’s the case, just try sending from another IP.

The next opportunity for the server to block us is by the reputation of the domain we supply as the sending domain. Notice in our example that the sending domain is ‘contoso.com’ and is used in both the EHLO and MAIL FROM commands. These domains typically match, but they don’t have to. Feel free to tinker with mismatching these values to see how different mail servers respond.
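If you want to poke at this exchange yourself, Python’s smtplib exposes each step. A minimal sketch follows; the server hostname and addresses are placeholders, and nothing here should be pointed at infrastructure you aren’t authorized to test.

```python
import smtplib

# connect to the target domain's mail server (placeholder hostname)
server = smtplib.SMTP("mail.example-target.com", 25, timeout=10)
server.set_debuglevel(1)  # print the raw SMTP conversation to stderr

# EHLO: claim to be a mail server for the sending domain
print(server.ehlo("contoso.com"))

# MAIL FROM / RCPT TO: the envelope values the SEG checks; try mismatching
# the EHLO and MAIL FROM domains and watch how the responses change
print(server.mail("chris@contoso.com"))
print(server.rcpt("someone@example-target.com"))  # 250 means accepted so far

server.quit()
```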

You can think of the EHLO and MAIL FROM commands as saying “I’m a mail server for contoso.com, and I’m going to send an email from chris@contoso.com to one of your users”. At this point, the receiving server has an opportunity to accept or reject our claim by checking a few things:

  • Is contoso.com on a known SPAM list? (never going to let it through)
  • Is contoso.com explicitly whitelisted? (always going to let it through)
  • How old is the contoso.com domain?
  • Is contoso.com categorized? And if so, what category is it?
  • Has the server ever received emails from contoso.com before?
  • Does contoso.com’s MX record match your sending IP address?
  • Does a reverse DNS lookup of your sending IP resolve to <something>.contoso.com?
  • Does contoso.com publish an SPF record that includes your sending IP address?

Most mail servers will perform one or more of these checks to obtain hints about whether you can be trusted or not. Some of these hints, like reverse DNS records, will increase our reputation if they pass the check, but won’t really hurt our reputation too much if they fail. Others, like trying to spoof a domain from an IP that is not in that domain’s SPF record, will likely get us burned right away. Therefore, we should try to pass as many of these checks as we can, while avoiding techniques that could be clearly identified as a spoof.
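Several of these checks can be tested before a campaign ever goes out. Below is a rough pre-flight sketch, assuming the third-party dnspython package and placeholder values for the sending IP and domain; the SPF handling is deliberately naive (a real check would expand include and mx mechanisms).

```python
import socket
import dns.resolver  # third-party: pip install dnspython

SENDING_IP = "203.0.113.10"     # placeholder sending IP
SENDING_DOMAIN = "contoso.com"  # placeholder sending domain

# Reverse DNS: does the PTR for our IP land inside our sending domain?
try:
    ptr, _, _ = socket.gethostbyaddr(SENDING_IP)
    ok = ptr.endswith(SENDING_DOMAIN)
    print(f"PTR: {ptr} ({'OK' if ok else 'mismatch'})")
except socket.herror:
    print("PTR: none (a weak hint, rarely fatal on its own)")

# SPF: does the domain publish a v=spf1 TXT record mentioning our IP?
try:
    for rr in dns.resolver.resolve(SENDING_DOMAIN, "TXT"):
        txt = b"".join(rr.strings).decode()
        if txt.startswith("v=spf1"):
            found = SENDING_IP in txt  # naive literal match
            print(f"SPF: {txt} (IP {'found' if found else 'not found'})")
except dns.resolver.NoAnswer:
    print("SPF: no TXT records published")
```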

So, Which Domain Should I Use?

When choosing our sending domain, we have a few options, each with its own pros and cons. We can either spoof a domain that we do not own, or buy a domain. If we buy a domain, we can either get a brand new domain of our choosing, or buy an expired domain that was previously bought by someone else. No one method is inherently better than the others, but we should have a clear understanding of how to get the most out of each technique when making our choice.

Spoofed Domains

Pros

  • Free! ‘Nough said
  • Makes attribution difficult. You are basically framing someone else. (Note: this could be a con if you have to go through attribution and deconfliction with the blue team)
  • Probably already categorized and old enough to pass checks for newly registered domains
  • Successful spoofs can look very legitimate, and therefore boost success rates

Cons

  • Limited to existing domains that you can find
  • You may be able to use non-existent domains, but they are easy to identify and often blocked. It’s generally better to purchase in this case.
  • Most high-value domains have implemented email security settings like SPF that could make it difficult, if not impossible to spoof without detection
  • We can’t set up our own email security settings on the domain that might boost our reputation (no control and no proof-of-ownership)
  • We have no influence over domain category or reputation

Considerations When Spoofing Domains

Does our target mail server actually check SPF?

If not, we should be able to successfully spoof just about any domain. I’ve seen several cases where, for whatever reason, SPF checks were either turned off or misconfigured so that failing SPF did not result in rejected emails. It’s often worth a shot to try to deliver a few benign emails while intentionally failing SPF to see if the server sends back an error message or says it delivered them. While you won’t receive any bounce messages or responses from recipients (those would go to the mail server for your spoofed domain), you could try putting image links or tags for remote CSS resources in the emails that point to a web server you own and track requests for these resources as potential indicators that the emails were delivered.

Is there a similar registered domain that lacks an SPF record?

You may be able to find domains that have been registered that look very similar to your target organization’s domain, but have not been configured with any email security settings that would allow a mail server to confirm or deny a spoof attempt. In some cases, I’ve seen clients that buy up domains similar to their own to prevent typosquatting, but then forget to apply email settings to prevent spoofing. In other cases, I’ve seen opportunistic domain squatters who buy look-alike domains in case they may become valuable, and don’t bother to set up any DNS records for their domains. In either case, we can potentially spoof these domains and the mail server will have no way of knowing we don’t own them. While this might mean we are starting off with a generally low trust score, most mail servers will not outright reject messages just because the sending domain lacks a properly configured SPF record. That is because there are so many legitimate organizations that have never bothered to set one up.

Does your target organization have an expired domain listed in an ‘include’ statement in their SPF record?

Let’s say our target organization’s email domain is fabrikam.com, and we would like to spoof an internal email address (bigboss@fabrikam.com) while sending to our target users. Consider that fabrikam.com has the following SPF record as an example:

v=spf1 mx a ip4:12.34.45.56/28 include:sendgrid.net include:imexpired.net -all

This SPF record lists the hosts/IPs that are allowed to send emails from fabrikam.com. Here is a breakdown of what each piece means:

  • v=spf1: This is an SPF version 1 TXT record, used to distinguish SPF from other TXT records.
  • mx: The IP of the domain’s MX (mail server) record is trusted to send emails from fabrikam.com.
  • a: The IP from a forward DNS lookup on the domain itself is also trusted.
  • ip4:12.34.45.56/28: Any IP in this small range is trusted to send emails from fabrikam.com.
  • include:sendgrid.net: Any IPs or IP ranges listed in sendgrid.net’s SPF record are also trusted.
  • include:imexpired.net: Any IPs or IP ranges listed in imexpired.net’s SPF record are also trusted.
  • -all: Every other sending IP not matched by a previous directive is explicitly denied from sending emails claiming to be from fabrikam.com.

As this example demonstrates, the ‘include’ directive in SPF allows you to trust other domains to send on your domain’s behalf. For example, emailing services like SendGrid can be configured to pass SPF checks so that they can deliver marketing emails from fabrikam.com to its customers. In the case of our example, if our client previously trusted imexpired.net to send emails but the domain has since expired, we can buy imexpired.net and add our own server to its SPF record. We will then pass SPF checks for fabrikam.com and can spoof internal users.
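Hunting for this misconfiguration is easy to automate. Here is a rough sketch, again assuming dnspython, that pulls a target’s SPF record and flags include: domains that no longer resolve and so may be expired and available to register:

```python
import dns.resolver  # third-party: pip install dnspython

def spf_record(domain):
    """Return the domain's v=spf1 TXT record, or None."""
    try:
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass
    return None

def find_dangling_includes(domain):
    record = spf_record(domain)
    if not record:
        return []
    dangling = []
    for token in record.split():
        if token.startswith("include:"):
            included = token.split(":", 1)[1]
            try:
                dns.resolver.resolve(included, "SOA")
            except dns.resolver.NXDOMAIN:
                dangling.append(included)  # zone gone: possibly expired
            except Exception:
                pass  # timeouts/servfails are inconclusive; skip
    return dangling

print(find_dangling_includes("fabrikam.com"))  # placeholder target
```

A hit here is only a lead: you still need to confirm the domain is actually available at a registrar before celebrating.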

Can we bypass SPF protections with other technical tricks?

Depending on the receiving mail server’s configuration, there may be other ways we can spoof domains that we do not own. For example, some email servers do not require SPF alignment. This means we can have a mismatch between the domain we specify in the ‘MAIL FROM’ SMTP command and the ‘From’ field in the message content itself. In these cases, we can use either a domain we own and can pass SPF for, or any domain without an SPF record, in the ‘MAIL FROM’ command, and then choose any other domain for the email address that our target user sees in their mail client. Another trick we can sometimes use is to supply what’s known as the “null sender”. To do this, we simply specify “<>” as the sender. The null sender is used for bounce messages and is sometimes allowed by spam filters for troubleshooting.

SMTP is a purely text-based protocol. This means there is often ambiguity in determining the beginning and end of each section of data, and SMTP implementations often have parsing flaws that can be abused to bypass security features like SPF and DKIM. In 2020, Jianjun Chen et al. published a whitepaper demonstrating 18 such flaws they discovered, affecting even some big email providers like Google:

https://i.blackhat.com/USA-20/Thursday/us-20-Chen-You-Have-No-Idea-Who-Sent-That-Email-18-Attacks-On-Email-Sender-Authentication.pdf

https://www.usenix.org/system/files/sec20fall_chen-jianjun_prepub_0.pdf

They also published the tool they used to discover these vulnerabilities, as well as their research methodology. While they disclosed these vulnerabilities to the vendors they tested, they did not test every major SEG for all of these vulnerabilities, and you might find that some popular SEGs are still vulnerable ;)

In 2023, Marcello Salvati demonstrated a flaw in a popular email sending service that allowed anyone to pass SPF for millions of legitimate domains.

https://www.youtube.com/watch?v=NwnT15q_PS8

His DEF CON talk on the subject demonstrates his research and discovery process as well. I know there are similar flaws in some other major email sending services, but I will not be throwing them under the bus in this post. Happy hunting!

Purchasing Brand New Domains

Pros

  • We can set up SPF, DKIM, and DMARC to ensure we pass these checks
  • We control the MX record so we can receive mail at this domain
  • More granular creative control over the name

Cons

  • May take some time to build a reputation before sending a campaign

Purchasing Expired Domains

Pros

  • We can set up SPF, DKIM, and DMARC to ensure we pass these checks
  • We control the MX record so we can receive mail at this domain
  • More likely to already be categorized
  • You may have Wayback Machine results to help keep the current category
  • You might salvage some freebie social media accounts

Cons

  • Little control over the domain name. You can get keywords, but otherwise you just have to get lucky with which ones become available

Considerations for any Purchased Domain

The two major advantages of using domains we own for phishing are that we can specify the mail server and the mail security settings for the domain. This means we can actually have back-and-forth conversations with phishing targets if we need to, which opens up the possibility of more individualized pretexts. By setting up our own SPF, DKIM, and DMARC records, we can ensure that we pass these checks and give our emails a high probability of passing reputation-based checks.

The third benefit of owning a domain is that we can control more of the factors that determine how the domain gets categorized. Email gateways, web proxies, and DNS security appliances will often crawl new domains to determine what kind of content is posted on the domain’s main website, if one exists. They then decide whether to block or allow requests for that domain based on the type of site it is. Security products that include category checks like this are often configurable. For example, some organizations block social media sites while others allow employees to use social media at work. In general, most network administrators will allow certain common benign categories. A couple of classic desirable categories for red teamers are ‘healthcare’ and ‘finance’. These categories have additional benefits when used for command and control (C2) traffic, though for the purposes of delivering emails, we really just need to be categorized as anything that’s not sketchy. If your domain is not categorized at all, or is in a ‘bad’ category, then your campaign will be dead in the water, so here are some tips on getting off the naughty list.

Categorizing New Domains

.US.ORG Domains

One of my favorite shortcuts for domain categorization is to register domains under ‘us.org’ or similar parent domains. Because ‘us.org’ is a domain registrar, its Bluecoat category is ‘Web Hosting’, and it has been around for many years. The minute you buy one of these domains, you are automatically lumped into a generally benign and frequently allowed category.

Categorize with HumbleChameleon

While Humble Chameleon (HC) was designed to bypass multi-factor authentication on phishing assessments, it also has some great features for ‘hiding behind’ benign domains. You can set HC’s ‘primary_target’ option to point at a site you would like to mimic and let the transparent proxy do the rest.

Warning: any time your domain is crawled, your server will be forwarding traffic to the impersonated site. I have seen logs on my server where someone was scanning my site for WordPress vulnerabilities, thereby causing my server to send the same malicious traffic to the site I was impersonating. While the chances of being sued for this are pretty low, please be aware that this technique may put you in a bit of a gray area. Choose targets wisely and keep good logs.

Categorizing with Clones and Nginx/Apache

If you’ve never used httrack to clone a site, you should really do yourself a favor and explore this absolute classic! How classic you ask? Xavier Roche first released this website cloner back in 1998 and it’s still a great utility for quickly cloning a site. Httrack saves website resources in a directory structure that is ideal for hosting with a static Nginx or Apache web server. The whole setup process can be easily scripted with Ansible to stand up clones for domain categorization.

Categorizing with ChatGPT and S3

The more modern version of static site hosting is to ask an AI bot to write your website for you, and then serve the static content using cheap cloud services like S3 buckets and content delivery networks. If you get stuck, or don’t know how to automate this process, you can just ask the robot ;)

Categorizing Expired Domains

Whenever you’re considering an expired domain, check its category before you buy. It’s much easier to maintain a good category than to re-categorize a domain, so only buy one that’s already in a benign category. Once you buy it, you have two options to maintain the category: generate some new content, or re-host the site’s old content. If you want to generate new content, the AI route is probably the most efficient. However, my preferred approach is to just put the content of the old site back up on the Internet. If that content earned the original categorization, it should also keep the domain in the same category.

Categorizing with Wayback

If the expired domain you just purchased was indexed by the Wayback Machine, then you can download the old pages with a simple Ruby script:

https://github.com/hartator/wayback-machine-downloader

This script is insanely easy to install and use, and will generally get you a nice static site clone within minutes when paired with Nginx, Apache, etc. Keep in mind that you may want to manually modify site content that might still be under copyright or trademark from the prior owner.
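For reference, the basic install-and-run flow from the project’s README is a couple of one-liners; the domain here is a placeholder:

```
gem install wayback_machine_downloader
wayback_machine_downloader https://expired-example.com
```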

Social Media Account Bonus

For expired sites you’ve bought, have you ever set up a mail server and monitored for incoming emails? I have! And the results can be fascinating. What you might find is that the domain you just bought used to belong to a functioning company, and some of that company’s employees used their work email to sign up for services like social media sites. Now, the company went under, those employees don’t have access to their former work email, but you do. Once again, I need to stress that this is a legal gray area, so do what you can to give these people back their accounts, and change PII on any others you decide to keep as catphish accounts.

General Tips for any Domain

Send Some ‘Safe’ Emails First

Some SEGs will block emails from domains they have never seen before, regardless of reputation. Therefore, it’s generally a good idea to send some completely benign emails with your new domain before you send out any phishing pretexts. You can address these to ‘info@’ or similar generic mailboxes and keep the messages vague so they hopefully go unnoticed/ignored by any humans on the other end. We’re just trying to prime the SEG to determine that emails from our domain are safe and should be allowed in the future. I’ve also heard that adding hidden links to emails with HREFs that point to your domain will work against some SEGs, though I have not personally experimented with this technique.

Use Sendgrid, MailChimp, etc.

These services specialize in mass email delivery for their clients. They work tirelessly to keep their email servers off of block lists and help marketers deliver phishing campaigns… ahem I mean ‘advertisements’ to their targets… oops I mean ‘leads’. Once again, marketing tools are your friend when cold emailing. Keep in mind that these services don’t appreciate scammers and spammers like you, so don’t be surprised if you get too aggressive with your phishing and your account gets blocked. Yet another argument for keeping campaigns small and targeted.

Use your Registrar’s Email Service

Most domain registrars also offer hosting and email services. Some even provide a few mailboxes for free! Similar to email sending services, your registrar’s mail servers will often be delivering tons of legitimate messages for other customers and therefore have a good reputation from an IP perspective. If you take this route, be sure to let the registrar know that you are a penetration tester and which domains you will be using for what campaigns. Most I’ve worked with have an abuse email address and will honor your use case.

Use Gmail for the Long Con

Personal emails are often a blind spot for corporate email filters. Completely blocking inbound emails from Gmail, Yahoo, Outlook, etc. is a tough pill to swallow and most network administrators decide to allow them instead of answering constant requests from employees to put in an exception for this or that sender. In addition, it’s extremely difficult to apply reputation scores to individual senders. Therefore, if you can think of a pretext that would make sense to come from a Gmail account, you have a high probability of at least bypassing domain reputation checks. This can work really well for individualized campaigns where you actually have a back-and-forth conversation with the targets before link/payload delivery.

Set Up SPF, DKIM, and DMARC

For sending domains you own, it will always work in your favor to set up email security settings. Check the API documentation for your registrar and write a script to set these DNS settings.
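As a starting point, here is a sketch of the three TXT records such a script would publish, expressed as plain data; the IP, DKIM selector, truncated key, and reporting address are all placeholders you would substitute with your own values.

```python
# TXT records to create for a sending domain we own (values are examples;
# the DKIM public key comes from the keypair your mail server generates)
DOMAIN = "phish-example.com"  # placeholder

records = [
    # SPF: authorize only our mail server's IP to send for this domain
    (f"{DOMAIN}", "v=spf1 ip4:203.0.113.10 -all"),
    # DKIM: public key published under <selector>._domainkey
    (f"mail._domainkey.{DOMAIN}", "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."),
    # DMARC: quarantine failures, send aggregate reports to us
    (f"_dmarc.{DOMAIN}", "v=DMARC1; p=quarantine; rua=mailto:dmarc@phish-example.com"),
]

for name, value in records:
    # print zone-file-style lines; swap this for your registrar's real API
    print(f'{name} IN TXT "{value}"')
```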

Read the Logs!

When you do start sending emails with a domain, make sure to watch your SMTP logs carefully. They are often extremely helpful for identifying whether your messages are being blocked for reasons you control. For example, I know Mimecast will frequently block emails from new senders it hasn’t seen before, but will respond with a very helpful message explaining how to get off its ‘gray list’. The solution is simply to resend the email within the next 24 hours. If you aren’t watching the logs, you’ll stay blocked when you could easily have delivered your email.

Parting Wisdom

“What I recalled of Sunday School was that the more difficult something became, the more rewarding it was in the end.” — Big Phish



#Infosec2024: How Williams Racing Relies on Data Security for Peak Performance – Source: www.infosecurity-magazine.com


Source: www.infosecurity-magazine.com – Author: 1 Formula 1, the pinnacle of motorsport, is driven by data, and cybersecurity is key to protecting the data that fuels performance. The Williams Racing team holds and processes vast quantities of data to optimize their performance on the F1 circuit. Infosecurity spoke to key members of the F1 Team […]


#Infosec2024: Decoding SentinelOne’s AI Threat Hunting Assistant – Source: www.infosecurity-magazine.com


Source: www.infosecurity-magazine.com – Author: 1 Artificial intelligence (AI) has lowered the barrier to entry for both cyber attackers and cyber defenders. During Infosecurity Europe 2024, endpoint protection provider SentinelOne will showcase how Purple AI, its new assistant tool for cybersecurity professionals, can help speed up the work of skilled analysts and democratize threat hunting for […]


Enforce and Report on PCI DSS v4 Compliance with Rapid7

17 April 2024 at 09:00

The PCI Security Standards Council (PCI SSC) is a global forum that connects stakeholders from the payments and payment processing industries to craft and facilitate adoption of data security standards and relevant resources that enable safe payments worldwide.

According to the PCI SSC website, “PCI Security Standards are developed specifically to protect payment account data throughout the payment lifecycle and to enable technology solutions that devalue this data and remove the incentive for criminals to steal it. They include standards for merchants, service providers, and financial institutions on security practices, technologies and processes, and standards for developers and vendors for creating secure payment products and solutions.”

Perhaps the most recognizable standard from PCI, the Data Security Standard (PCI DSS), is a global standard that provides a baseline of technical and operational requirements designed to protect account data. In March 2022, PCI SSC published version 4.0 of the standard, which replaces version 3.2.1. The updated version addresses emerging threats and technologies and enables innovative methods to combat new threats. This post will cover the changes that came with version 4.0, along with a high-level overview of how Rapid7 helps teams ensure their cloud-based applications can effectively implement and enforce compliance.

What’s New With Version 4.0, and Why Is It Important Now?

So, why are we talking about the new standard nearly two years after it was published? Because when the standard was published, there was a two-year transition period for organizations to adopt the new version and implement the required changes that came with v4.0. During this transition period, organizations were given the option to assess against either PCI DSS v4.0 or PCI DSS v3.2.1.

For those that haven’t yet made the jump, the time is now: the transition period concluded on March 31, 2024, at which point version 3.2.1 was retired, and organizations seeking PCI DSS certification must adhere to the new requirements and best practices. It’s important to note that some requirements have been “future-dated.” For those requirements, organizations have been granted another full year, with the updates required by March 31, 2025.

The changes were driven by direct feedback from organizations across the global payments industry. According to PCI, more than 200 organizations provided feedback to ensure the standard continues to meet the complex, ever-changing landscape of payment security.

Key changes for this version update include:

Flexibility in How Teams Achieve Compliance / Customized Approach

A primary goal for PCI DSS v4.0 was to provide greater flexibility for organizations in how they achieve their security objectives. PCI DSS v4.0 introduces a new method – known as the Customized Approach – by which organizations can implement and validate PCI DSS controls. Previously, organizations had the option of implementing compensating controls; however, these are only applicable when a constraint – such as a legacy system or process – impacts the ability to meet a requirement.

PCI DSS v4.0 now provides organizations the means to meet a requirement’s objective through methods other than the stated requirement. Requirement 12.3.2 and Appendices D and E outline the customized approach and how to apply it. To support customers, Rapid7’s new PCI DSS v4.0 compliance pack provides a greater number of insights than previous iterations, which should increase visibility and refine the process of choosing how to mitigate and manage requirements.

A Targeted Approach to Risk Management

Alongside the customized approach, one of the most significant updates is the introduction of targeted risk analysis (TRA). TRA allows organizations to assess and respond to risks in the context of their specific operational environment. The PCI council has published guidance, “PCI DSS v4.x: Targeted Risk Analysis Guidance,” that outlines the two types of TRAs an entity can employ: the first addresses the frequency of performing a given control, and the second addresses any PCI DSS requirement for which an entity utilizes a customized approach.

To help understand and maintain a consolidated view of security risks in their cloud environments, Rapid7 customers can leverage InsightCloudSec’s Layered Context and the recently introduced Risk Score feature. Risk Score combines a variety of risk signals, assigning a higher score to resources that suffer from toxic combinations or multiple risk vectors, and holistically analyzes the risks that compound and increase the likelihood or impact of compromise.

Enhanced Validation Methods & Procedures

PCI DSS v4.0 improves the self-assessment questionnaire (SAQ) document and the Report on Compliance (RoC) template, increasing alignment between them and the information summarized in an Attestation of Compliance. This supports organizations in their efforts when self-attesting or working with assessors, and increases transparency and granularity.

New Requirements

PCI DSS v4.0 has brought with it a range of new requirements to address emerging threats. With modernization of network security controls, explicit guidance on cardholder data protections, and process maturity, the standard focuses on establishing sustainable controls and governance. While there are quite a few updates - which you can find detailed here on the summary of changes - let’s highlight a few of particular importance:

  • Multifactor authentication is now required for all access into the Cardholder Data Environment (CDE) - req. 8.5.1
  • Encryption of sensitive authentication data (SAD) - req. 3.3.3
  • New password requirements and updated password strength requirements: passwords must now be at least 12 characters long and meet updated complexity requirements - reqs. 8.3.6 and 8.6.3
  • Access roles and privileges are based on least privilege access (LPA), and system components operate using deny by default - req. 7.2.5
  • Audit log reviews are performed using automated mechanisms - req. 10.4.1.1

These controls make role-based access control, configuration management, risk analysis, and continuous monitoring foundational, helping organizations mature and achieve their security objectives. Rapid7 can help with implementing and enforcing these new controls through a host of solutions that offer PCI-related support – all of which have been updated to align with the new requirements.

How Rapid7 Supports Customers to Attain PCI DSS v4.0 Compliance

InsightCloudSec allows security teams to establish, continuously measure, and illustrate compliance against organizational policies. This is accomplished via compliance packs, which are sets of checks that can be used to continuously assess your entire cloud environment - whether single or multi-cloud. The platform comes out of the box with dozens of compliance packs, including a dedicated pack for the PCI DSS v4.0.


InsightCloudSec assesses your cloud environments in real time for compliance with the requirements and best practices outlined by PCI. It also enables teams to identify, assess, and act on noncompliant resources when misconfigurations are detected. If you so choose, you can make use of the platform’s native, no-code automation to remediate an issue the moment it’s detected, whether that means alerting relevant resource owners, adjusting the configuration or permissions directly, or even deleting the non-compliant resource altogether without any human intervention. Check out the demo to learn more about how InsightCloudSec helps continuously and automatically enforce cloud security standards.

InsightAppSec also enables measurement against PCI v4.0 requirements to help you obtain PCI compliance. It allows users to create a PCI v4.0 report to help prepare for an audit, assessment, or questionnaire around PCI compliance. The PCI report gives you the ability to uncover potential issues that will affect the outcome of any of these exercises. Crucially, the report allows you to take action and secure critical vulnerabilities on any assets that deal with payment card data. PCI compliance auditing comes out of the box and is simple to generate once you have completed a scan against which to run the report.


InsightAppSec achieves this coverage by cross-referencing and mapping our suite of 100+ attack modules against PCI requirements, identifying which attacks are relevant to particular requirements and then attempting to exploit your application with those attacks to find areas where it may be vulnerable. Those vulnerabilities are then packaged up in the PCI 4.0 report, where you can see them listed by PCI requirement. This provides crucial insight into any vulnerabilities you may have and enables management of those vulnerabilities in a simple format.

For InsightVM customers, an important change in the revision is the need to perform authenticated internal vulnerability scans for requirement 11.3.1.2. Previous versions of the standard allowed for internal scanning without the use of credentials, which is no longer sufficient. For more details see this blog post.

Rapid7 provides a wide array of solutions to assist you in your compliance and governance efforts. Contact a member of our team to learn more about any of these capabilities or sign up for a free trial.

What’s New in Rapid7 Products & Services: Q1 2024 in Review

4 April 2024 at 09:00

We kicked off 2024 with a continued focus on bringing security professionals (which if you're reading this blog, is likely you!) the tools and functionality needed to anticipate risks, pinpoint threats, and respond faster with confidence. Below we’ve highlighted some key releases and updates from this past quarter across Rapid7 products and services—including InsightCloudSec, InsightVM, InsightIDR, Rapid7 Labs, and our managed services.

Anticipate Imminent Threats Across Your Environment

Monitor, remediate, and takedown threats with Managed Digital Risk Protection (DRP)

Rapid7’s new Managed Digital Risk Protection (DRP) service provides expert monitoring and remediation of external threats across the clear, deep, and dark web to prevent attacks earlier.

Now available in our highest tier of Managed Threat Complete and as an add on for all other Managed D&R customers, Managed DRP extends your team with Rapid7 security experts to:

  • Identify the first signs of a cyber threat to prevent a breach
  • Rapidly remediate and takedown threats to minimize exposure
  • Protect against ransomware, phishing, credential leakage, and data leakage, and provide dark web monitoring

Read more about the benefits of Managed DRP in our blog here.


Ensure safe AI development in the cloud with Rapid7 AI/ML Security Best Practices

We’ve recently expanded InsightCloudSec’s support for GenAI development and training services (including AWS Bedrock, Azure OpenAI Service and GCP Vertex) to provide more coverage so teams can effectively identify, assess, and quickly act to resolve risks related to AI/ML development.

This expanded generative AI coverage enriches our proprietary compliance pack, Rapid7 AI/ML Security Best Practices, which continuously assesses your environment through event-driven harvesting to ensure your team is safely developing with AI in a manner that won’t leave you exposed to common risks like data leakage, model poisoning, and more.

As with all critical resources connected to your InsightCloudSec environment, these risks are enriched with Layered Context to automatically prioritize AI/ML risk based on exploitability and potential impact. They’re also continuously monitored for effective permissions and actual usage, so permissions can be rightsized to ensure alignment with least privilege access (LPA). In addition to this extensive visibility, InsightCloudSec offers native automation to alert on and even remediate risk across your environment without the need for human intervention.

Stay ahead of emerging threats with insights and guidance from Rapid7 Labs

In the first quarter of this year, Rapid7 initiated the Emergent Threat Response (ETR) process for 12 different threats, including (but not limited to):

  • Zero-day exploitation of Ivanti Connect Secure and Ivanti Policy Secure gateways, the former of which has historically been targeted by both financially motivated and state-sponsored threat actors in addition to low-skilled attackers.
  • Critical CVEs affecting outdated versions of Atlassian Confluence and VMware vCenter Server, both widely deployed products in corporate environments that have been high-value targets for adversaries, including in large-scale ransomware campaigns.
  • High-risk authentication bypass and remote code execution vulnerabilities in ConnectWise ScreenConnect, widely used software with potential for large-scale ransomware attacks, providing coverage before CVE identifiers were assigned.
  • Two authentication bypass vulnerabilities in JetBrains TeamCity CI/CD server that were discovered by Rapid7’s research team.

Rapid7’s ETR program is a cross-team effort to deliver fast, expert analysis alongside first-rate security content for the highest-priority security threats to help you understand any potential exposure and act quickly to defend your network. Keep up with future ETRs on our blog here.

Pinpoint Critical and Actionable Insights to Effectively and Confidently Respond

Introducing the newest tier of Managed Threat Complete

Since we released Managed Threat Complete last year, organizations all over the globe have unified their vulnerability management programs with their threat detection and response programs. Now, teams have a unified view into the full kill chain and a tailored service to turbocharge their program, mitigate the most pressing risks and eliminate threats.

Managed Threat Complete Ultimate goes beyond our previously available Managed Threat Complete bundles to include:

  • Managed Digital Risk Protection for monitoring and remediation of threats across the clear, deep, and dark web
  • Managed Vulnerability Management for clear guidance on remediating the highest-priority risks
  • Velociraptor, Rapid7’s leading open-source DFIR framework: access the same tool our Incident Response experts leverage on behalf of managed customers, from monitoring and hunting to in-depth investigations of potential threats
  • Ransomware Prevention for recognizing threats and stopping attacks before they happen with multi-layered prevention (coming soon - stay tuned)

Get to the data you need faster with new Log Search and Investigation features in InsightIDR

Our latest enhancements to Log Search and Investigations will help drive efficiency for your team and give you time back in your day-to-day—and when you really need it in the heat of an incident. Faster search times, easier-to-write queries, and intuitive recommendations will help you find event trends within your data and save you time without sacrificing results.

  • Triage investigations faster with log data readily accessible from the investigations timeline - with a click of the new “view log entry” button you’ll instantly see the context and log data behind an associated alert.
  • Create precise queries quickly with new automatic suggestions - as you type in Log Search, the query bar will automatically suggest the elements of LEQL that you can use in your query to get to the data you need—like users, IP addresses, and processes—faster.
  • Save time sifting through search results with new LEQL ‘select’ clause - define exactly what keys to return in the search results so you can quickly answer questions from log data and avoid superfluous information.

Easily view vital cloud alert context with Simplified Cloud Threat Alerts

This quarter we launched Simplified Cloud Threat Alerts within InsightIDR to make it easier to quickly understand what a cloud alert (like those from Amazon GuardDuty) means, a task that can be daunting for even the most experienced analysts due to the scale and complexity of cloud environments.

With this new feature, you can view details and known issues with the resources (e.g. assets, users, etc.) implicated in the alert and have clarity on the steps that should be taken to appropriately respond to the alert. This will help you:

  • Quickly understand what a given cloud resource is, its intended purpose, what applications it supports and who “owns” it.
  • Get a clear picture around what an alert means, what next steps to take to verify the alert, or how to respond if the alert is in fact malicious.
  • Prioritize response efforts based on potential impact with insight into whether the compromised resource is misconfigured, has active vulnerabilities, or has been recently updated in a manner that signals potential pre-attack reconnaissance.

A growing library of actionable detections in InsightIDR

In Q1 2024 we added 1,349 new detection rules. See them in-product or visit the Detection Library for descriptions and recommendations.

Stay tuned!

As always, we’re continuing to work on exciting product enhancements and releases throughout the year. Keep an eye on our blog and release notes as we continue to highlight the latest in product and service investments at Rapid7.


Securing the Next Level: Automated Cloud Defense in Game Development with InsightCloudSec

7 March 2024 at 13:04

Imagine the following scenario: You're about to enjoy a strategic duel on chess.com or dive into an intense battle in Fortnite, but as you log in, you find your hard-earned achievements, ranks, and reputation have vanished into thin air. This is not just a hypothetical scenario but a real possibility in today's cloud gaming landscape, where a single security breach can undo years of dedication and achievement.

Cloud gaming, powered by giants like AWS, is transforming the gaming industry, offering unparalleled accessibility and dynamic gaming experiences. Yet, with this technological leap forward comes an increase in cyber threats. The gaming world has already witnessed significant security breaches, such as the GTA 5 source code theft and Activision’s recurring data incidents, highlighting the lurking dangers in this digital arena.

In such a scenario, securing cloud-based games isn't just an additional feature; it's an absolute necessity. As we examine the intricate world of cloud gaming, the role of comprehensive security solutions becomes increasingly vital. In the subsequent sections, we will explore how Rapid7's InsightCloudSec can be instrumental in securing cloud infrastructure and CI/CD processes in game development, thereby safeguarding the integrity and continuity of our virtual gaming experiences.

Challenges in Cloud-Based Game Development

Picture this: You're a game developer, immersed in creating the next big title in cloud gaming. Your team is buzzing with creativity, coding, and testing. But then, out of the blue, you're hit by a cyberattack, much like the one that rocked CD Projekt Red in 2021. Imagine the chaos – months of hard work (e.g. Cyberpunk 2077 or The Witcher 3) locked up by ransomware, with all sorts of confidential data floating in the wrong hands. This scenario is far from fiction in today's digital gaming landscape.

What Does This Kind of Attack Really Mean for a Game Development Team?

The Network Weak Spot: It's like leaving the back door open while you focus on the front; hackers can sneak in through network gaps we never knew existed. That's what might have happened with CD Projekt Red. A more fortified network could have been their digital moat.

When Data Gets Held Hostage: It's one thing to secure your castle, but what about safeguarding the treasures inside? The CD Projekt Red incident showed us how vital it is to keep our game codes and internal documents under lock and key, digitally speaking.

A Safety Net Missing: Imagine if CD Projekt Red had a robust backup system. Even after the attack, they could have bounced back quicker, minimizing the damage. It's like having a safety net when you're walking a tightrope. You hope you won't need it, but you'll be glad it's there if you do.

This is where a solution like Rapid7's InsightCloudSec comes into play. It's not just about building higher walls; it's about smarter, more responsive defense systems. Think of it as having a digital watchdog that's always on guard, sniffing out threats, and barking alarms at the first sign of trouble.

With tools that watch over your cloud infrastructure, monitor every digital move in real time, and keep your compliance game strong, you're not just creating games; you're also playing the ultimate game of digital security – and winning.

Navigating Cloud Security in Game Development: An Artful Approach

In the realm of cloud-based game development, mastering AWS services' security nuances transcends mere technical skill – it's akin to painting a masterpiece. Let's embark on a journey through the essential AWS services like EC2, S3, Lambda, CloudFront, and RDS, with a keen focus on their security features – our guardians in the digital expanse.

Consider Amazon EC2 as the infrastructure's backbone, hosting the very servers that breathe life into games. Here, Security Groups act as discerning gatekeepers, meticulously managing who gets in and out. They're not just gatekeepers but stateful ones: they remember allowed connections, ensuring a seamless yet secure flow of return traffic.
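As a concrete sketch, the snippet below creates a tightly scoped Security Group with boto3; the VPC ID, protocol, and port are placeholders for whatever your game servers actually expose.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group for the game servers (VPC ID is a placeholder).
sg = ec2.create_security_group(
    GroupName="game-servers",
    Description="Ingress restricted to the game port only",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC
)

# Open only the game traffic port; security groups are stateful, so return
# traffic for these connections is allowed automatically.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "udp",
        "FromPort": 7777,  # hypothetical game port
        "ToPort": 7777,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "game clients"}],
    }],
)
```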

Amazon S3 stands as our digital vault, safeguarding data with precision-crafted bucket policies. These policies aren't just rules; they're declarations of trust, dictating who can glimpse or alter the stored treasures. History is littered with tales of those who faltered, so precision here is paramount.
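A minimal sketch of that digital vault with boto3, assuming a hypothetical bucket name: block every form of public access, then attach a policy that denies any request arriving without TLS.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "my-game-assets"  # hypothetical bucket

# Turn on all four public-access blocks for the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Deny any request that is not made over TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```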

Lambda functions emerge as the silent virtuosos of serverless architecture, empowering game backends with their scalable might. Yet, their power is wielded judiciously, guided by the principle of least privilege through meticulously assigned roles and permissions, minimizing the shadow of vulnerability.
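In practice, least privilege means the function's execution role can reach exactly the resources it needs and nothing more. A sketch, with a hypothetical role name and table ARN:

```python
import json
import boto3

iam = boto3.client("iam")

# Grant the function access to a single DynamoDB table and nothing else
# (role name and table ARN are placeholders).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/player-scores",
    }],
}
iam.put_role_policy(
    RoleName="game-backend-lambda-role",
    PolicyName="least-privilege-player-scores",
    PolicyDocument=json.dumps(policy),
)
```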

Amazon CloudFront, our swift courier, ensures game content flies across the globe securely and at breakneck speed. Coupled with AWS Shield (Advanced), it stands as a bulwark against DDoS onslaughts, guaranteeing that game delivery remains both rapid and impregnable.

Amazon RDS, the fortress for player data, automates the mundane yet crucial tasks – backups, patches, scaling – freeing developers to craft experiences. It whispers secrets only to those meant to hear, guarding data with robust encryption, both at rest and in transit.
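A sketch of provisioning that fortress with boto3; the identifiers and sizing are placeholders. Note that StorageEncrypted must be set at creation time, since encryption at rest cannot be enabled on an existing unencrypted instance.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="player-db",  # hypothetical identifier
    DBInstanceClass="db.t3.medium",
    Engine="postgres",
    MasterUsername="app_admin",
    ManageMasterUserPassword=True,     # let RDS generate and store the secret
    AllocatedStorage=100,
    StorageEncrypted=True,             # KMS-backed encryption at rest
    BackupRetentionPeriod=7,           # automated backups as the safety net
)
```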

Visibility and vigilance form the bedrock of our security ethos. With tools like AWS CloudTrail and CloudWatch, our gaze extends across every corner of our domain, ever watchful for anomalies, ready to act with precision and alacrity.

Encryption serves as our silent sentinel, a protective veil over data, whether nestled in S3's embrace or traversing the vastness to and from EC2 and RDS. It's our unwavering shield against the curious and the malevolent alike.

In weaving the security measures of these AWS services into the fabric of game development, we engage not in mere procedure but in the creation of a secure tapestry that envelops every facet of the development journey. In the vibrant, ever-evolving landscape of game creation, fortifying our cloud infrastructure with a security-first mindset is not just a technical endeavor – it's a strategic masterpiece, ensuring our games are not only a source of joy but bastions of privacy and security in the cloud.

Automated Cloud Security with InsightCloudSec

When it comes to deploying a game in the cloud, understanding and implementing automated security is paramount. This is where Rapid7's InsightCloudSec takes center stage, revolutionizing how game developers secure their cloud environments with a focus on automation and real-time monitoring.

Data Harvesting Strategies

InsightCloudSec differentiates itself through its innovative approach to data collection and analysis, employing two primary methods: API harvesting and Event Driven Harvesting (EDH). Initially, InsightCloudSec utilizes the API method, where it directly calls cloud provider APIs to gather essential platform information. This process enables InsightCloudSec to populate its platform with critical data, which is then unified into a cohesive dataset. For example, disparate storage solutions from AWS, Azure, and GCP are consolidated under a generic "Storage" category, while compute instances are unified as "Instances." This normalization allows for the creation of universal compliance packs that can be applied across multiple cloud environments, enhancing the platform's efficiency and coverage.
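A toy illustration of that normalization idea (not InsightCloudSec's actual data model): fold each provider's native resource type into one generic record, so a single compliance rule covers all three clouds.

```python
# Map provider-specific resource types onto generic categories.
GENERIC_TYPES = {
    ("aws", "s3_bucket"): "Storage",
    ("azure", "storage_account"): "Storage",
    ("gcp", "gcs_bucket"): "Storage",
    ("aws", "ec2_instance"): "Instance",
    ("azure", "virtual_machine"): "Instance",
    ("gcp", "compute_instance"): "Instance",
}

def normalize(provider: str, native_type: str, raw: dict) -> dict:
    """Fold a provider-specific resource into the unified dataset."""
    return {
        "type": GENERIC_TYPES[(provider, native_type)],
        "provider": provider,
        "name": raw["name"],
        "encrypted": raw.get("encrypted", False),
    }

# One rule now applies to S3 buckets, Azure storage accounts, and GCS buckets.
def violates_encryption_policy(resource: dict) -> bool:
    return resource["type"] == "Storage" and not resource["encrypted"]
```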

However, the real game-changer is Rapid7's implementation of EDH. Unlike the traditional API pull method, EDH leverages serverless functions within the customer's cloud environment to ingest security event data and configuration changes in real-time. This data is then pushed to the InsightCloudSec platform, significantly reducing costs and increasing the speed of data acquisition. For AWS environments, this means event information can be updated in near real-time, within 60 seconds, and within 2-3 minutes for Azure and GCP. This rapid update capability is a stark contrast to the hourly or daily updates provided by other cloud security platforms, setting InsightCloudSec apart as a leader in real-time cloud security monitoring.
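Conceptually, an EDH-style collector is just a small serverless handler that forwards events the moment they occur. The sketch below is a hypothetical AWS Lambda handler triggered by EventBridge; the ingestion URL is a placeholder, since the real EDH functions and endpoints are provisioned by InsightCloudSec rather than hand-written.

```python
import json
import os
import urllib.request

# Placeholder endpoint; real EDH targets are provisioned by the platform.
INGEST_URL = os.environ.get("INGEST_URL", "https://example.invalid/edh/events")

def handler(event, context):
    """Receive a CloudTrail change event via EventBridge and push it onward."""
    body = json.dumps({
        "source": event.get("source"),
        "detail_type": event.get("detail-type"),
        "detail": event.get("detail", {}),
    }).encode()
    req = urllib.request.Request(
        INGEST_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return {"status": resp.status}
```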


Automated Remediation with InsightCloudSec Bots

The integration of near-real-time event information through Event Driven Harvesting (EDH) with InsightCloudSec's advanced bot automation features equips developers with a formidable toolset for safeguarding cloud environments. This unique combination not only flags vulnerable configurations but also facilitates automatic remediation within minutes, a critical capability for maintaining a pristine cloud ecosystem. InsightCloudSec's bots go beyond mere detection; they proactively manage misconfigurations and vulnerabilities across virtual machines and containers, ensuring the cloud space is both secure and compliant.

The versatility of these bots is remarkable. Developers have the flexibility to define the scope of the bot's actions, allowing changes to be applied across one or multiple accounts. This granular control ensures that automated security measures are aligned with the specific needs and architecture of the cloud environment.

Moreover, the timing of these interventions can be finely tuned. Whether responding to a set schedule or reacting to specific events – such as the creation, modification, or deletion of resources – the bots are adept at addressing security concerns at the most opportune moments. This responsiveness is especially beneficial in dynamic cloud environments where changes are frequent and the security landscape is constantly evolving.

The actions undertaken by InsightCloudSec's bots are diverse and impactful. According to the extensive list of sample bots provided by Rapid7, these automated guardians can, for example (see the sketch after this list):

  • Automatically tag resources lacking proper identification, ensuring that all elements within the cloud are categorized and easily manageable
  • Enforce compliance by identifying and rectifying resources that do not adhere to established security policies, such as unencrypted databases or improperly configured networks
  • Remediate exposed resources by adjusting security group settings to prevent unauthorized access, a crucial step in safeguarding sensitive data
  • Monitor and manage excessive permissions, scaling back unnecessary access rights to adhere to the principle of least privilege, thereby reducing the risk of internal and external threats
  • And much more…
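To make the scope, trigger, and action model concrete, here is a hypothetical bot definition expressed as plain Python data; it mirrors the concepts described above, not InsightCloudSec's actual bot schema.

```python
# Hypothetical bot definition illustrating scope, trigger, condition, action.
CLOSE_OPEN_SSH_BOT = {
    "name": "close-open-ssh",
    "scope": ["aws-account-prod", "aws-account-dev"],  # accounts to act on
    "trigger": "resource_modified",                    # react to config changes
    "condition": {
        "resource_type": "SecurityGroup",
        "finding": "ingress allows 0.0.0.0/0 on port 22",
    },
    "actions": [
        "revoke_offending_ingress_rule",   # immediate remediation
        "tag_resource:remediated-by-bot",  # leave an audit trail
        "notify:security-channel",         # keep humans in the loop
    ],
}
```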

This automation, powered by InsightCloudSec, transforms cloud security from a reactive task to a proactive, streamlined process.

By harnessing the power of EDH for real-time data harvesting and leveraging the sophisticated capabilities of bots for immediate action, developers can ensure that their cloud environments are not just reactively protected but are also preemptively fortified against potential vulnerabilities and misconfigurations. This shift towards automated, intelligent cloud security management empowers developers to focus on innovation and development, confident in the knowledge that their infrastructure is secure, compliant, and optimized for the challenges of modern cloud computing.

Infrastructure as Code (IaC) Enhanced: Introducing InsightCloudSec's mimICS tool

In the dynamic arena of cloud security, particularly in the bustling sphere of game development, the wisdom of "an ounce of prevention is worth a pound of cure" holds unprecedented significance. This is where the role of Infrastructure as Code (IaC) becomes pivotal, and Rapid7's innovative tool, mimICS, elevates this approach to new heights.

mimICS, a cutting-edge component of the InsightCloudSec platform, is specifically designed to integrate seamlessly into any development pipeline, whether you prefer working with executable binaries or thrive in a containerized ecosystem. Its versatility allows it to be a perfect fit for various deployment strategies, making it an indispensable tool for developers aiming to embed security directly into their infrastructure deployment process.

The primary goal of mimICS is to identify vulnerabilities before the infrastructure is even created, thus embodying a proactive stance on security. This foresight is crucial in the realm of game development, where the stakes are high and the digital landscape is constantly shifting. By catching vulnerabilities at this nascent stage, developers can ensure that their games are built on a foundation of security, devoid of the common pitfalls that could jeopardize their work in the future.

Figure 2: Shift Left – Infrastructure as Code (IaC) Security

Upon completion of its analysis, mimICS presents its findings in a variety of formats suitable for any team's needs, including HTML, SARIF, and XML. This flexibility ensures that the results are not just comprehensive but also accessible, allowing teams to swiftly understand and address any identified issues. Moreover, these results are pushed to the InsightCloudSec platform, where they contribute to a broader overview of the security posture, offering actionable insights into potential misconfigurations.

But the capabilities of the InsightCloudSec platform extend even further. Within this sophisticated environment, developers can craft custom configurations, tailoring the security checks to fit the unique requirements of their projects. This feature is particularly valuable for teams looking to go beyond standard security measures, aiming instead for a level of infrastructure hardening that is both rigorous and bespoke. These custom configurations empower developers to establish a static level of security that is robust, nuanced, and perfectly aligned with the specific needs of their game-development projects.

By leveraging mimICS within the InsightCloudSec ecosystem, game developers can not only anticipate and mitigate vulnerabilities before they manifest but also refine their cloud infrastructure with precision-tailored security measures. This proactive and customized approach ensures that the gaming experiences they craft are not only immersive and engaging but also built on a secure, resilient digital foundation.

Figure 3: Misconfigurations and recommended remediations in the InsightCloudSec platform

In summary, Rapid7's InsightCloudSec offers a comprehensive and automated approach to cloud security, crucial for the dynamic environment of game development. By leveraging both API harvesting and innovative Event Driven Harvesting – along with robust support for Infrastructure as Code – InsightCloudSec ensures that game developers can focus on what they do best: creating engaging and immersive gaming experiences with the knowledge that their cloud infrastructure is secure, compliant, and optimized for performance.

In a forthcoming blog post, we'll explore the unique security challenges that arise when operating a game in the cloud. We’ll also demonstrate how InsightCloudSec can offer automated solutions to effortlessly maintain a robust security posture.
