
Intrusion Detection in Linux: Protecting Your System from Threats

24 June 2024 at 04:00

Safeguarding your Linux environment from potential threats is more critical than ever. Whether you’re managing a small server or an extensive network, having hands-on knowledge of intrusion detection systems (IDS) is essential. IDS tools play a vital role in maintaining the security and integrity of your system. This guide will walk you through the practical […]

The post Intrusion Detection in Linux: Protecting Your System from Threats appeared first on TuxCare.

The post Intrusion Detection in Linux: Protecting Your System from Threats appeared first on Security Boulevard.

AI Everywhere: Key Takeaways from the Gartner Security & Risk Management Summit 2024

21 June 2024 at 19:03

The Gartner Security & Risk Management Summit 2024 showcased the transformative power of artificial intelligence (AI) across various industries, with a particular focus on the cybersecurity landscape. As organizations increasingly adopt AI for innovation and efficiency, it is crucial to understand the opportunities and challenges that come with this technology. Here are the top three […]

The post AI Everywhere: Key Takeaways from the Gartner Security & Risk Management Summit 2024 first appeared on SlashNext.

The post AI Everywhere: Key Takeaways from the Gartner Security & Risk Management Summit 2024 appeared first on Security Boulevard.

Simplifying Azure Key Vault Updates With AppViewX Automation

21 June 2024 at 14:06

Azure Key Vault service offers a secure storage solution for cryptographic keys, API keys, passwords, and certificates in the cloud. However, managing this vault typically involves manual updates and additions by cloud administrators. Given the large volume of certificates and keys and the frequent updates they require, manual updates can become quite tedious and time-consuming. […]

The post Simplifying Azure Key Vault Updates With AppViewX Automation appeared first on Security Boulevard.

Python Developers Targeted Via Fake Crytic-Compilers Package

21 June 2024 at 03:00

As per recent reports, cybersecurity experts uncovered a troubling development on the Python Package Index (PyPI) – a platform used widely by developers to find and distribute Python packages. A malicious package named ‘crytic-compilers‘ was discovered, mimicking the legitimate ‘crytic-compile’ library developed by Trail of Bits. This fraudulent package was designed with sinister intent: to […]

The post Python Developers Targeted Via Fake Crytic-Compilers Package appeared first on TuxCare.

The post Python Developers Targeted Via Fake Crytic-Compilers Package appeared first on Security Boulevard.

The Impending Identity Crisis Of Machines: Why We Need To Secure All Non-Human Identities, From Genai To Microservices And IOT

The digital landscape is no longer solely populated by human actors. Lurking beneath the surface is a silent legion – non-human or machine identities . These non-human identities encompass computers, mobile devices, servers, workloads, service accounts, application programming interfaces (APIs), machine learning models, and the ever-expanding internet of things (IoT) devices. They are the backbone […]

The post The Impending Identity Crisis Of Machines: Why We Need To Secure All Non-Human Identities, From Genai To Microservices And IOT appeared first on Security Boulevard.

Chariot Continuous Threat Exposure Management (CTEM) Updates

17 June 2024 at 17:19

Our engineering team has been hard at work reworking our flagship Chariot platform to keep it the most comprehensive and powerful CTEM platform on the market. So what’s new? Here are several new features recently added to Chariot: 1. Unmanaged Platform Chariot, Praetorian’s Continuous Threat Exposure Management (CTEM) solution, is now available […]

The post Chariot Continuous Threat Exposure Management (CTEM) Updates appeared first on Praetorian.

The post Chariot Continuous Threat Exposure Management (CTEM) Updates appeared first on Security Boulevard.

Mapping Snowflake’s Access Landscape

13 June 2024 at 12:02

Attack Path Management

Because Every Snowflake (Graph) is Unique

Introduction

On June 2nd, 2024, Snowflake released a joint statement with Crowdstrike and Mandiant addressing reports of “[an] ongoing investigation involving a targeted threat campaign against some Snowflake customer accounts.” A SpecterOps customer contacted me about their organization’s response to this campaign and mentioned that there seems to be very little security-based information related to Snowflake. In their initial statement, Snowflake recommended the following steps for organizations that may be affected (or that want to avoid being affected, for that matter!):

  1. Enforce Multi-Factor Authentication on all accounts;
  2. Set up Network Policy Rules to only allow authorized users or only allow traffic from trusted locations (VPN, Cloud workload NAT, etc.); and
  3. Impacted organizations should reset and rotate Snowflake credentials.
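For the second and third recommendations, here is a minimal sketch of the corresponding Snowflake SQL; the policy name, IP range, and username are hypothetical:

CREATE NETWORK POLICY corp_only ALLOWED_IP_LIST = ('203.0.113.0/24');
ALTER ACCOUNT SET NETWORK_POLICY = corp_only;

-- Force a credential reset for a potentially impacted user
ALTER USER jdoe RESET PASSWORD;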

While these recommendations are a good first step, I wondered if there was anything else we could do once we better grasped Snowflake’s Access Control Model (and its associated Attack Paths) and better understood the details of the attacker’s activity on the compromised accounts. In this post, I will describe the high-level Snowflake Access Control Model, analyze the incident reporting released by Mandiant, and provide instructions on graphing the “access model” of your Snowflake deployment.

These recommendations focus on how organizations might prevent initial access to their Snowflake instance. However, I was curious about “post-exploitation” in a Snowflake environment. After a quick Google search, I realized there is very little threat research on Snowflake. My next thought was to check out Snowflake’s access control model to better understand the access landscape. I hoped that if I could understand how users are granted access to resources in a Snowflake account, I could start to understand what attackers might do once they are authenticated. I also thought we could analyze the existing attack paths to make recommendations to reduce the blast radius of a breach of the type Crowdstrike and Mandiant reported.

While we have not yet integrated Snowflake into BloodHound Community Edition (BHCE) or Enterprise (BHE), we believe there is value in taking a graph-centric approach to analyzing your deployment, as it can help you understand the impact of a campaign similar to the one described in the intro to this post.

Snowflake Access Control Model

My first step was to search for any documentation on Snowflake’s access control model. I was pleased to find a page providing a relatively comprehensive and simple-to-understand model description. They describe their model as a mix of Discretionary Access Control, where “each object has an owner, who can in turn grant access to that object,” and Role-based Access Control, where “privileges are assigned to roles, which are in turn assigned to users.” These relationships are shown in the image below:

https://docs.snowflake.com/en/user-guide/security-access-control-overview#access-control-framework

Notice that Role 1 “owns” Objects 1 and 2. Then, notice that two different privileges are granted from Object 1 to Role 2 and that Role 2 is granted to Users 1 and 2. Also, notice that Roles can be granted to other Roles, which means there is a nested hierarchy similar to groups in Active Directory. One thing that I found helpful was to flip the relationship of some of these “edges.” In this graphic, they are pointing toward the grant, but the direction of access is the opposite. Imagine that you are User 1, and you are granted Role 2, which has two Privileges on Object 1. Therefore, you have two Privileges on Object 1 through transitivity.

We have a general idea of how privileges on objects are granted, but what types of objects does Snowflake implement? They provide a graphic to show the relationship between these objects, which they describe as “hierarchical.”

https://docs.snowflake.com/en/user-guide/security-access-control-overview#securable-objects

Notice that at the top of the hierarchy, there is an organization. Each organization can have one or many accounts. For example, the trial I created to do this research has only one Account, but the client that contacted me has ~10. The Account is generally considered to be the nexus of everything. It is helpful to think of an Account as the equivalent of an Active Directory Domain. Within the Account are Users, Roles (Groups), Databases, Warehouses (virtual compute resources), and many other objects, such as Security Integrations. Within the Database context is a Schema, and within the Schema context are Tables, Views, Stages (temporary stores for loading/unloading data), etc.

As I began understanding the implications of each object and the types of privileges each affords, I started to build a model showing their possible relationships. In doing so, I found it helpful to start at the top of the hierarchy (the account) and work my way down with respect to integrating entity types into the model. This is useful because access to entities often depends on access to their parent. For example, a user can only interact with a schema if the user also has access to the schema’s parent database. This allows us to abstract away details and make educated inferences about lower-level access. Below, I will describe the primary objects that I consider in my model.

Account (think Domain)

The account is the equivalent of the domain. All objects exist within the context of the account. When you log into Snowflake, you log in as a user within a specific account. Most administrative privileges are privileges to operate on the account, such as CREATE USER, MANAGE GRANTS, CREATE ROLE, CREATE DATABASE, EXECUTE TASK, etc.

Users (precisely what you think they are)

Users are your identity in the Snowflake ecosystem. When you log into the system, you do so as a particular user, and you have access to resources based on your granted roles and the role’s granted privileges.

Roles (think Groups)

Roles are the primary object to which privileges are assigned. Users can be granted “USAGE” of a role, similar to being added as group members. Roles can also be granted to other roles, which creates a nested structure that facilitates granular control of privileges. There are five default admin roles. The first is ACCOUNTADMIN, which is the Snowflake equivalent of Domain Admin. The remaining four are ORGADMIN, SYSADMIN, SECURITYADMIN, and USERADMIN.

Warehouses

A Warehouse is “a cluster of compute resources… such as CPU, memory, and temporary storage” used to perform database-related operations in a Snowflake session. Operations such as retrieving rows from tables, updating rows in tables, and loading/unloading data from tables all require a warehouse.

Databases

A database is defined as “a logical grouping of schemas.” It is the container for information that we would expect attackers to target. While the database object itself does not contain any data, a user must have access to the database to access its subordinate objects (Schemas, Tables, etc.).

Privileges (think Access Rights)

Privileges define who can perform which operation on which resources. In our context, privileges are primarily assigned to roles. Snowflake supports many privileges, some of which apply in a global or account context (e.g., CREATE USER), while others are specific to an object type (e.g., CREATE SCHEMA on a Database). Users accumulate privileges through the Roles that they have been granted recursively.
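As a concrete illustration, GRANT statements tie these pieces together. A hypothetical example (all names invented):

GRANT USAGE ON DATABASE sales TO ROLE analyst;
GRANT CREATE SCHEMA ON DATABASE sales TO ROLE analyst;
GRANT ROLE analyst TO USER alice;

Here, alice holds both privileges on the sales database transitively through the analyst role.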

Access Graph

With this basic understanding of Snowflake’s access control model, we can create a graph model that describes the relationships between entities via privileges. For instance, we know that a user can be granted the USAGE privilege of a role. This is the equivalent of an Active Directory user being a MemberOf a group. Additionally, we find that a role can be granted USAGE of another role, similar to group nesting in AD. Eventually, we can produce this relatively complete initial model for the Snowflake “access graph.”
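Here is a minimal Cypher sketch of that model with hypothetical names; once the graph exists, effective access becomes a simple path query:

CREATE (u:User {name: 'ALICE'})-[:USAGE]->(r1:Role {name: 'ANALYST'}),
       (r1)-[:USAGE]->(r2:Role {name: 'DB_READER'}),
       (r2)-[:USAGE]->(d:Database {name: 'SALES'})

// Which databases can ALICE reach through any chain of role grants?
MATCH p = (:User {name: 'ALICE'})-[:USAGE*1..]->(db:Database)
RETURN db.name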

This model can help us better understand what likely happened during the incident. It can also help us better understand the access landscape of our Snowflake deployment, which can help us reduce the blast radius should an attacker gain access.

About the Incident

As more details have emerged, it has become clear that this campaign targeted customer credentials rather than Snowflake’s production environment. Later, on June 10th, Mandiant released a more detailed report describing some of the threat group’s activity discovered during the investigation.

Mandiant describes a typical scenario where threat actors compromise the computers of contractors that companies hire to build, manage, or administer their Snowflake deployment. In many cases, these contractors already have administrative privileges, so any compromise of their credentials can lead to detrimental effects. The existing administrative privileges indicate that the threat actor had no need to escalate privilege via an attack path or compromise alternative identities during this activity.

Mandiant describes the types of activity the attackers were observed performing. They appear primarily interested in enumerating database tables to find valuable information for exfiltration. An important observation is that, based on the reported activity, the compromised user seems to have admin or admin-adjacent privileges on the Snowflake account.

In this section, we will talk about each of the commands Mandiant reported: what they do and how we can understand them in the context of our graph.

As Mandiant describes, the first command is a Discovery command meant to list all the tables available. According to the documentation, a user requires at least the USAGE privilege on the Schema object that contains the table to execute this command directly. It is common for a production Snowflake deployment to have many databases, each with many schemas, so access to tables will likely be limited to most non-admins. We can validate this in the graph, though!
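Mandiant redacted the exact command, but based on the description it presumably took a form like:

SHOW TABLES IN DATABASE <Target Database>;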

https://docs.snowflake.com/en/user-guide/security-access-control-privileges#schema-privileges

Next, we see that they run the SELECT command. This indicates that they must have found one or more tables from the previous command that interested them. This command works similarly to the SQL query and returns the rows in the table. In this case, they are dumping the entire table. The privilege documentation states that a user must have the SELECT privilege on the specified table (<Target Table>) to execute this command. Additionally, the user must have the USAGE privilege on the parent database (<Target Database>) and schema (<Target Schema>).
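In other words, something of this shape, with placeholders mirroring the privilege requirements above:

SELECT * FROM <Target Database>.<Target Schema>.<Target Table>;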

https://docs.snowflake.com/en/user-guide/security-access-control-privileges#table-privileges

Like tables, stages exist within the schema context; thus, the requisite privilege, CREATE STAGE, exists at the schema level (aka <Redacted Schema>). The user would also require the USAGE privilege on the database (<Redacted Database>). Therefore, a user can have the ability to create a stage for one schema but not another. In general, this is a privilege that can be granted to a limited set of individuals, especially when it comes to sensitive databases/schemas.
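The stage-creation command presumably resembled:

CREATE TEMPORARY STAGE <Redacted Database>.<Redacted Schema>.<Redacted Stage>;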

https://docs.snowflake.com/en/user-guide/security-access-control-privileges#schema-privileges

Finally, the attackers call the COPY INTO command, which is a way to extract data from the Snowflake database. Obviously, Mandiant redacted the path, but one possible example would be to use the temporary stage to copy the data to an Amazon S3 bucket. In this case, the attacker uses the COPY INTO <location> variant, which requires the WRITE privilege. Of course, the attacker created the stage resource in the previous command, so they would likely have OWNERSHIP of the stage, granting them full control of the object.
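A sketch of this final step, again using the redacted values as placeholders:

COPY INTO @<Redacted Stage> FROM (SELECT * FROM <Target Table>);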

https://docs.snowflake.com/en/user-guide/security-access-control-privileges#stage-privileges

Build Your Own Graph

At this point, some of you might be interested in checking out your Snowflake Access Graph. This section walks through how to gather the necessary Snowflake data, stand up Neo4j, and build the graph. It also provides some sample Cypher queries relevant to Snowflake’s recommendations.

Collecting Data

The first step is to collect the graph-relevant data from Snowflake. The cool thing is that this is actually a relatively simple process. I’ve found that Snowflake’s default web client, Snowsight, does a fine job gathering this information. You can navigate to Snowsight once you’ve logged in by clicking on the Query data button at the top of the Home page.

Once there, you will have the opportunity to execute commands. This section will describe the commands that collect the data necessary to build the graph. My parsing script is built for CSV files that follow a specific naming convention. Once your command has returned results, click the download button (downward pointing arrow) and select the “Download as .csv” option.

The model supports Accounts, Applications, Databases, Roles, Users, and Warehouses. This means we will have to query those entities, which will serve as the nodes in our graph. Snowsight downloads each result set with a filename related to your account, but my parsing script expects the output of certain commands to be named in a specific way. The expected name is shared in the corresponding sections below.

I’ve found that I can query Applications, Databases, Roles, and Users as an unprivileged user. However, this is different for Accounts, which require ORGADMIN, and Warehouses, which require instance-specific access (e.g., ACCOUNTADMIN).

Applications
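The listing command here is presumably Snowflake’s standard SHOW statement; save the output as application.csv:

SHOW APPLICATIONS;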

Databases
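Likewise for databases, saving the output as database.csv:

SHOW DATABASES;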

Roles
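For roles, save the output as role.csv:

SHOW ROLES;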

Users
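For users, save the output as user.csv:

SHOW USERS;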

Warehouses
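For warehouses, save the output as warehouse.csv:

SHOW WAREHOUSES;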

Note: As mentioned above, users can only enumerate warehouses for which they have been granted privileges. One way to grant a non-ACCOUNTADMIN user visibility of all warehouses is to grant the MANAGE WAREHOUSES privilege.

Accounts

At this point, we have almost all the entity data we need. We have one final query that will allow us to gather details about our Snowflake account. This query can only be done by the ORGADMIN role. Assuming your user has been granted ORGADMIN, go to the top right corner of the browser and click on your current role. This will result in a drop-down that displays all of the roles that are effectively granted to your user. Here, you will select ORGADMIN, allowing you to run commands in the context of the ORGADMIN role.

Once complete, run the following command to list the account details.
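This is presumably the standard account listing command (its output matches the account properties ingested later); save the result as account.csv:

SHOW ACCOUNTS;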

Grants

Finally, we must gather information on privilege grants. These are maintained in the ACCOUNT_USAGE schema of the default SNOWFLAKE database. By default, these views are only available to the ACCOUNTADMIN role. Still, users not granted USAGE of the ACCOUNTADMIN role can be granted the necessary read access via the SECURITY_VIEWER database role. The following command does this (if run as ACCOUNTADMIN):

GRANT DATABASE ROLE snowflake.SECURITY_VIEWER TO ROLE <Role>;

Once you have the necessary privilege, you can query the relevant views and export them to a CSV file. The first view is grants_to_users, which maintains a list of which roles have been granted to which users. You can enumerate this list using the following command. Then save it to a CSV file and rename it grants_to_users.csv.

SELECT * FROM snowflake.account_usage.grants_to_users;

The final view is grants_to_roles, which maintains a list of all the privileges granted to roles. This glue ultimately allows users to interact with the different Snowflake entities. This view can be enumerated using the following command. The results should be saved as a CSV file named grants_to_roles.csv.

SELECT * FROM snowflake.account_usage.grants_to_roles WHERE GRANTED_ON IN ('ACCOUNT', 'APPLICATION', 'DATABASE', 'INTEGRATION', 'ROLE', 'USER', 'WAREHOUSE'); 

Setting up Neo4j

At this point, we have all the data we need to generate the Snowflake graph, but before we can ingest it, we need a Neo4j instance. The easiest way that I know of to do this is to use the BloodHound Community Edition docker-compose deployment option.

Note: While we won’t use BHCE specifically in this demo, the overarching docker-compose setup includes a Neo4j instance configured to support this example.

To do this, you must first install Docker on your machine. Once complete, download this example docker-compose yaml file I derived from the BHCE GitHub repository. Next, open docker-compose.yaml in a text editor and edit Line 51 to point to the folder on your host machine (e.g., /Users/jared/snowflake:/var/lib/neo4j/import/) where you wrote the Snowflake data files (e.g., grants_to_roles.csv). This will create a bind mount between your host and the container. You are now ready to start the container by executing the following command:

docker-compose -f /path/to/docker-compose.yaml up -d

This will cause Docker to download and run the relevant Docker containers. For this Snowflake graph, we will interact directly with Neo4j as this model has not been integrated into BloodHound. You can access the Neo4j web interface by browsing to 127.0.0.1:7474 and logging in using the default credentials (neo4j:bloodhoundcommunityedition).

Data Ingest

Once you’ve authenticated to Neo4j, it is time for data ingest. I originally wrote a PowerShell script that would parse the CSV files and handcraft Cypher queries to create the corresponding nodes and edges, but SadProcessor showed me a better way to approach ingestion. He suggested using the LOAD CSV clause. According to Neo4j, “LOAD CSV is used to import data from CSV files into a Neo4j database.” This dramatically simplifies ingesting your Snowflake data AND is much more efficient than my initial PowerShell script. This section describes the Cypher queries that I use to import Snowflake data. Before you begin, know that each command must be run individually. Additionally, these commands assume that you’ve named your files as suggested. Therefore, the file listing of the folder you specified in the Docker Volume (e.g., /Users/jared/snowflake) should look like this:

-rwx------@ 1 cobbler  staff    677 Jun 12 20:17 account.csv
-rwx------@ 1 cobbler  staff    227 Jun 12 20:17 application.csv
-rwx------@ 1 cobbler  staff    409 Jun 12 20:17 database.csv
-rwx------@ 1 cobbler  staff   8362 Jun 12 20:17 grants_to_roles.csv
-rwx------@ 1 cobbler  staff    344 Jun 12 20:17 grants_to_users.csv
-rwx------@ 1 cobbler  staff    114 Jun 12 20:17 integration.csv
-rwx------@ 1 cobbler  staff    895 Jun 12 20:17 role.csv
-rwx------@ 1 cobbler  staff  12350 Jun 12 20:17 table.csv
-rwx------@ 1 cobbler  staff    917 Jun 12 20:17 user.csv
-rwx------@ 1 cobbler  staff    436 Jun 12 20:17 warehouse.csv

Note: If you don’t have a Snowflake environment, but still want to check out the graph, you can use my sample data set by replacing file:/// with https://gist.githubusercontent.com/jaredcatkinson/c5e560f7d3d0003d6e446da534a89e79/raw/c9288f20e606d236e3775b11ac60a29875b72dbc/ in each query.

Ingest Accounts

LOAD CSV WITH HEADERS FROM 'file:///account.csv' AS line
CREATE (:Account {name: line.account_locator, created_on: line.created_on, organization_name: line.organization_name, account_name: line.account_name, snowflake_region: line.snowflake_region, account_url: line.account_url, account_locator: line.account_locator, account_locator_url: line.account_locator_url})

Ingest Applications

LOAD CSV WITH HEADERS FROM 'file:///application.csv' AS line
CREATE (:Application {name: line.name, created_on: line.created_on, source_type: line.source_type, source: line.source})

Ingest Databases

LOAD CSV WITH HEADERS FROM 'file:///database.csv' AS line
CREATE (:Database {name: line.name, created_on: line.created_on, retention_time: line.retention_time, kind: line.kind})

Ingest Integrations

LOAD CSV WITH HEADERS FROM 'file:///integration.csv' AS line
CREATE (:Integration {name: line.name, created_on: line.created_on, type: line.type, category: line.category, enabled: line.enabled})

Ingest Roles

LOAD CSV WITH HEADERS FROM 'file:///role.csv' AS line
CREATE (:Role {name: line.name, created_on: line.created_on, assigned_to_users: line.assigned_to_users, granted_to_roles: line.granted_to_roles})

Ingest Users

LOAD CSV WITH HEADERS FROM 'file:///user.csv' AS line
CREATE (:User {name: line.name, created_on: line.created_on, login_name: line.login_name, first_name: line.first_name, last_name: line.last_name, email: line.email, disabled: line.disabled, ext_authn_duo: line.ext_authn_duo, last_success_login: line.last_success_login, has_password: line.has_password, has_rsa_public_key: line.has_rsa_public_key})

Ingest Warehouses

LOAD CSV WITH HEADERS FROM 'file:///warehouse.csv' AS line
CREATE (:Warehouse {name: line.name, created_on: line.created_on, state: line.state, size: line.size})

Ingest Grants to Users

LOAD CSV WITH HEADERS FROM 'file:///grants_to_users.csv' AS usergrant
CALL {
WITH usergrant
MATCH (u:User) WHERE u.name = usergrant.GRANTEE_NAME
MATCH (r:Role) WHERE r.name = usergrant.ROLE
MERGE (u)-[:USAGE]->(r)
}

Ingest Grants to Roles

:auto LOAD CSV WITH HEADERS FROM 'file:///grants_to_roles.csv' AS grant
CALL {
WITH grant
MATCH (src) WHERE grant.GRANTED_TO = toUpper(labels(src)[0]) AND src.name = grant.GRANTEE_NAME
MATCH (dst) WHERE grant.GRANTED_ON = toUpper(labels(dst)[0]) AND dst.name = grant.NAME
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'USAGE' THEN [1] ELSE [] END | MERGE (src)-[:USAGE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'OWNERSHIP' THEN [1] ELSE [] END | MERGE (src)-[:OWNERSHIP]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'APPLYBUDGET' THEN [1] ELSE [] END | MERGE (src)-[:APPLYBUDGET]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'AUDIT' THEN [1] ELSE [] END | MERGE (src)-[:AUDIT]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'MODIFY' THEN [1] ELSE [] END | MERGE (src)-[:MODIFY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'MONITOR' THEN [1] ELSE [] END | MERGE (src)-[:MONITOR]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'OPERATE' THEN [1] ELSE [] END | MERGE (src)-[:OPERATE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'APPLY AGGREGATION POLICY' THEN [1] ELSE [] END | MERGE (src)-[:APPLY_AGGREGATION_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'APPLY AUTHENTICATION POLICY' THEN [1] ELSE [] END | MERGE (src)-[:APPLY_AUTHENTICATION_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'APPLY MASKING POLICY' THEN [1] ELSE [] END | MERGE (src)-[:APPLY_MASKING_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'APPLY PACKAGES POLICY' THEN [1] ELSE [] END | MERGE (src)-[:APPLY_PACKAGES_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'APPLY PASSWORD POLICY' THEN [1] ELSE [] END | MERGE (src)-[:APPLY_PASSWORD_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'APPLY PROTECTION POLICY' THEN [1] ELSE [] END | MERGE (src)-[:APPLY_PROTECTION_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'APPLY ROW ACCESS POLICY' THEN [1] ELSE [] END | MERGE (src)-[:APPLY_ROW_ACCESS_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'APPLY SESSION POLICY' THEN [1] ELSE [] END | MERGE (src)-[:APPLY_SESSION_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'ATTACH POLICY' THEN [1] ELSE [] END | MERGE (src)-[:ATTACH_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'BIND SERVICE ENDPOINT' THEN [1] ELSE [] END | MERGE (src)-[:BIND_SERVICE_ENDPOINT]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CANCEL QUERY' THEN [1] ELSE [] END | MERGE (src)-[:CANCEL_QUERY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE ACCOUNT' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_ACCOUNT]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE API INTEGRATION' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_API_INTEGRATION]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE APPLICATION' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_APPLICATION]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE APPLICATION PACKAGE' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_APPLICATION_PACKAGE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE COMPUTE POOL' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_COMPUTE_POOL]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE CREDENTIAL' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_CREDENTIAL]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE DATA EXCHANGE LISTING' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_DATA_EXCHANGE_LISTING]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE DATABASE' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_DATABASE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE DATABASE ROLE' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_DATABASE_ROLE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE EXTERNAL VOLUME' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_EXTERNAL_VOLUME]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE INTEGRATION' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_INTEGRATION]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE NETWORK POLICY' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_NETWORK_POLICY]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE REPLICATION GROUP' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_REPLICATION_GROUP]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE ROLE' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_ROLE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE SCHEMA' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_SCHEMA]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE SHARE' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_SHARE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE USER' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_USER]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'CREATE WAREHOUSE' THEN [1] ELSE [] END | MERGE (src)-[:CREATE_WAREHOUSE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'EXECUTE DATA METRIC FUNCTION' THEN [1] ELSE [] END | MERGE (src)-[:EXECUTE_DATA_METRIC_FUNCTION]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'EXECUTE MANAGED ALERT' THEN [1] ELSE [] END | MERGE (src)-[:EXECUTE_MANAGED_ALERT]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'EXECUTE MANAGED TASK' THEN [1] ELSE [] END | MERGE (src)-[:EXECUTE_MANAGED_TASK]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'EXECUTE TASK' THEN [1] ELSE [] END | MERGE (src)-[:EXECUTE_TASK]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'IMPORT SHARE' THEN [1] ELSE [] END | MERGE (src)-[:IMPORT_SHARE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'MANAGE GRANTS' THEN [1] ELSE [] END | MERGE (src)-[:MANAGE_GRANTS]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'MANAGE WAREHOUSES' THEN [1] ELSE [] END | MERGE (src)-[:MANAGE_WAREHOUSES]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'MANAGEMENT SHARING' THEN [1] ELSE [] END | MERGE (src)-[:MANAGEMENT_SHARING]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'MONITOR EXECUTION' THEN [1] ELSE [] END | MERGE (src)-[:MONITOR_EXECUTION]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'OVERRIDE SHARE RESTRICTIONS' THEN [1] ELSE [] END | MERGE (src)-[:OVERRIDE_SHARE_RESTRICTIONS]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'PURCHASE DATA EXCHANGE LISTING' THEN [1] ELSE [] END | MERGE (src)-[:PURCHASE_DATA_EXCHANGE_LISTING]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'REFERENCE USAGE' THEN [1] ELSE [] END | MERGE (src)-[:REFERENCE_USAGE]->(dst))
FOREACH (_ IN CASE WHEN grant.PRIVILEGE = 'USE ANY ROLE' THEN [1] ELSE [] END | MERGE (src)-[:USE_ANY_ROLE]->(dst))
} IN TRANSACTIONS

Once you finish executing these commands, you can validate that the data is in the graph by running a query. The query below returns any entity with a path to the Snowflake account.

MATCH p=()-[*1..]->(a:Account)
RETURN p

This is a common way to find admin users. While Snowflake has a few default admin Roles, such as ACCOUNTADMIN, ORGADMIN, SECURITYADMIN, SYSADMIN, and USERADMIN, granting administrative privileges to custom roles is possible.

Queries

Having a graph is great! However, the value is all about the questions you can ask. I’ve only been playing around with this Snowflake graph for a few days. Still, I created a few queries that will hopefully help you gather context around the activity reported in Mandiant’s report and assess your compliance with Snowflake’s recommendations.

Admins without MFA

Snowflake’s primary recommendation to reduce your exposure to this campaign and others like it is to enable MFA on all accounts. While achieving 100% coverage on all accounts may take some time, they also recommend enabling MFA on users who have been granted the ACCOUNTADMIN Role. Based on my reading of the reporting, the attackers likely compromised the credentials of admin users, so it seems reasonable to start with these highly privileged accounts.

There are two approaches to determining which users have admin privileges. The first is to assume that admins will be granted one of the default admin roles, as shown below:

MATCH p=((n:User WHERE n.ext_authn_duo = "false")-[:USAGE*1..]->(r:Role WHERE r.name CONTAINS "ADMIN"))
RETURN p

Here, we see seven users who have been granted USAGE of a role with the string “ADMIN” in its name. While this is a good start, the string “ADMIN” does not necessarily mean that the role has administrative privileges, and its absence does not mean that the role does not have administrative privileges. Instead, I recommend searching for admins based on their effective privileges.

This second query considers that admin privileges can be granted to custom roles. For example, the MANAGE_GRANTS privilege, shown below, “grants the ability to grant or revoke privileges on any object as if the invoking role were the owner of the object.” This means that if a user has this privilege, they can grant themselves or anyone access to any object they want.

MATCH p=((n:User WHERE n.ext_authn_duo = "false")-[:USAGE*1..]->(r:Role)-[:MANAGE_GRANTS]->(a:Account))
RETURN p

Here, we see five users not registered for MFA who have MANAGE_GRANTS over the Snowflake Account. Two users are granted USAGE of the ACCOUNTADMIN role, and the other three are granted USAGE of a custom role. Both ACCOUNTADMIN and the custom role are granted USAGE of the SECURITYADMIN role, which is granted MANAGE_GRANTS on the account.

Restated in familiar terms: two users are members of the ACCOUNTADMIN group, which is nested inside the SECURITYADMIN group, which has the SetDACL right on the Domain Head.

User Access to a Database

According to Mandiant, most of the attacker’s actions focused on data contained within database tables. While my graph does not currently support schema or table entities, it is important to point out that the documentation states that “operating on a table also requires the USAGE privilege on the parent database and schema.” This means that we can use the graph to understand which users have access to which database and then infer that they likely have access to the schema and tables within the database.

MATCH p=((u:User)-[:USAGE*1..]->(r:Role)-[:OWNERSHIP]->(d:Database WHERE d.name = "<DATABASE NAME GOES HERE>"))
RETURN p

Here, the Jared and SNOWFLAKE users have OWNERSHIP of the SNOWFLAKE_SAMPLE_DATA database via the ACCOUNTADMIN role.

This query shows all users that have access to a specified database. If you would like to check access to all databases, you can run this query:

MATCH p=((u:User)-[:USAGE*1..]->(r:Role)-[]->(d:Database))
RETURN p

Stale User Accounts

Another simple example is identifying user accounts that have never logged in. Pruning unused accounts might reduce the overall attack surface.

MATCH (n:User WHERE n.last_success_login = "")
RETURN n

Conclusion

I hope you found this overview useful and that the graph capability serves you well. I’m looking forward to your feedback regarding the graph! If you write a useful query, please share it, and I will add it to the post with credit. Additionally, if you think of ways to extend the graph, please let me know, and I’ll do my best to facilitate it.

Before I go, I want to comment on Snowflake’s recommendations in the aftermath of this campaign. As I mentioned, Snowflake’s primary recommendation is to enable MFA on all accounts. It is worth mentioning, in their defense, that Snowflake has always (at least since before this incident) recommended that MFA be enabled on any user granted the ACCOUNTADMIN role (the equivalent of Domain Admin).

That being said, the nature of web-based platforms means that if an attacker compromises a system with a Snowflake session, they likely can steal the session token and reuse it even if the user has MFA enabled. Austin Baker, who goes by @BakedSec on Twitter, pointed this out.

This indicates that we must look beyond how we stop attackers from getting access. We must understand the access landscape within our information systems. Ask yourself: can you say which users can use the DATASCIENCE Database in your Snowflake deployment? With this graph, that question is trivial to answer, but without one, we find that most organizations cannot answer it accurately. When nested groups (roles, in this case) are involved, it is very easy for intended access and effective access to diverge, and the divergence only grows over time. I think of it as entropy.

We must administer cloud accounts the same way we administer on-prem environments. You don’t browse the web with your Domain Administrator account. No, you have two accounts: one for administration and one for day-to-day usage. You might even have a system that is dedicated to administrative tasks. These same ideas should apply to cloud solutions like Snowflake. Are you analyzing the data in a table? Great, use your Database Reader account. Now you need to grant Luke a role so he can access a warehouse? Okay, hop on your Privileged Access Workstation and use your SECURITYADMIN account. The same Tier 0 concept applies in this context. I look forward to hearing your feedback!

UPDATE: Luke Jennings from Push Security added a new technique to the SaaS Attack Matrix called Session Cookie Theft. This technique shows one way that attackers, specifically those with access to the SaaS user’s workstation, can steal relevant browser cookies in order to bypass MFA. This does not mean that organizations should not strive to enable MFA on their users, especially admin accounts; however, it does demonstrate the importance of reducing attack paths within the SaaS application’s access control model. One way to think of it is that MFA is meant to make it more difficult for attackers to get in, but once they’re in, it is all about Attack Paths. The graph approach I demonstrate in this post is the first step to getting a handle on these Attack Paths and reducing the blast radius of a compromise.


Mapping Snowflake’s Access Landscape was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Mapping Snowflake’s Access Landscape appeared first on Security Boulevard.

Tile/Life360 Breach: ‘Millions’ of Users’ Data at Risk

13 June 2024 at 13:28
Life360 CEO Chris Hulls

Location tracking service leaks PII, because—incompetence? Seems almost TOO easy.

The post Tile/Life360 Breach: ‘Millions’ of Users’ Data at Risk appeared first on Security Boulevard.

“Mission Possible”: How DTEX is Supporting National Security from the Inside Out

12 June 2024 at 09:04

When considering the most significant cyber threats to the public sector, many immediately think of foreign adversaries breaching federal agencies. This perception is understandable, as nation-state cyber attacks often dominate headlines. However, the real threat might be closer to home. Imagine if the breach originated from within the agency itself – if a nation-state or … Continued

The post “Mission Possible”: How DTEX is Supporting National Security from the Inside Out appeared first on DTEX Systems Inc.

The post “Mission Possible”: How DTEX is Supporting National Security from the Inside Out appeared first on Security Boulevard.

Ticketmaster is Tip of Iceberg: 165+ Snowflake Customers Hacked

11 June 2024 at 11:15
Snowflake CISO Brad Jones

Not our fault, says CISO: “UNC5537” breached at least 165 Snowflake instances, including Ticketmaster, LendingTree and, allegedly, Advance Auto Parts.

The post Ticketmaster is Tip of Iceberg: 165+ Snowflake Customers Hacked appeared first on Security Boulevard.

Ghostwriter v4.2

10 June 2024 at 16:01

Ghostwriter v4.2: Project Documents & Reporting Enhancements

After April’s massive Ghostwriter v4.1 release, we received some great feedback and ideas. We got a little carried away working on these and created a release so big we had to call it v4.2. This release contains some fantastic changes and additions to the features introduced in April’s release. Let’s get to the highlights!

Improving Customizable Fields

Ghostwriter v4.1 introduced custom fields, and seeing the community use them so creatively was awesome. What we saw gave us some ideas for a few big improvements.

The rich text fields support the Jinja2 templating language, so loops quickly became a talking point. Looping over project objectives, findings, hosts, and other things to create dynamic lists, table rows, or sections is incredibly powerful, so we had to do it.

You can now use Jinja2-style expressions with the new li, tr, and p tags to create list items, table rows, and text sections. Here is an example of building a dynamic list inside Ghostwriter’s WYSIWYG editor.

Jinja2-style Loop in the WYSIWYG Editor

This screenshot includes examples of a few notable features. We’re using the new li tag with a for loop to create a bulleted list of findings. We have access to Jinja2 filters, including Ghostwriter’s custom filters, so we use the filter_severity filter to limit the loop to findings with a severity rating matching critical, high, or medium. The first and last bullets won’t be in the list in the final Word document.

The middle bullet will repeat for each finding to create our list. It includes the title and severity and uses the regex_search filter to pull the first sentence of the finding’s description. The use of severity_rt here is also worth a call-out. Some community members asked about nesting rich text fields inside of other rich text fields, like the pre-formatted severity_rt text for a finding. Not only can we use severity_rt inside this field, but we can also add formatting, like changing the text to bold.
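Pieced together from that description, the three bullets in the editor likely resembled the following; treat this as an illustrative sketch rather than an exact copy of the screenshot:

{%li for finding in findings|filter_severity(["Critical", "High", "Medium"]) %}
{{ finding.title }} ({{ finding.severity_rt }}): {{ finding.description|regex_search("^[^.]*\.") }}
{%li endfor %}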

Here is the above list rendered in a Word document. The pre-formatted finding.severity_rt appears with the proper color formatting and the bold formatting added in the WYSIWYG editor.

Output of the Jinja2 Loop in Microsoft Word

These additions introduced some complexity. A typo or syntax mistake could break the Jinja2 templating and prevent a report from generating, and the resulting error message wasn’t always helpful for tracking down the offending line of text. To help with this, Ghostwriter v4.2 validates your Jinja2 when you save your work, so you don’t need to worry about accidentally saving any invalid Jinja2 and causing a headache later on.

Validating the Jinja2 didn’t cover every possible issue, though. Even valid Jinja2 can fail to render if the report data causes an error (e.g., trying to divide by zero or referencing a non-existent value). To help with that, we significantly improved error handling to catch Jinja2 syntax and template errors at render time and surface more helpful information about them. When content prevents a report from generating, error messages point to the report’s section containing the problem.

In this example, a field called exec_summ contains this Jinja2 statement: {{ 100/0 }}. That’s valid Jinja2, but you can’t actually divide by zero, so rendering will always fail. With Ghostwriter v4.2’s error handling enhancements, the error message will direct us to the field name and include information we can use to track down the problem.

New Explicit Error Message for a Failed Report

More Reporting Improvements

Ghostwriter v4.2 also includes enhancements to the reporting engine. The most significant change is the introduction of project documents. You don’t always need to generate a full report with findings. You may need to generate other project-related documents before or after the assessment is done, like rules of engagement, statements of work, or attestations of testing. You could technically do this with older versions of Ghostwriter by generating a report with a different template, but that was not intuitive.

The project dashboard’s Reports tab is now called Reporting and contains an option to generate JSON, docx, or pptx reports with your project’s data. This works like the reports you’re familiar with but uses only the project data (e.g., no findings or report-specific values).

New Project Document Generation Under the Project Dashboard

We wanted to make it easier to generate project documents like this because custom fields open up many possibilities. For example, at SpecterOps, we provide clients with daily status reports after each day of testing. We can accomplish this task with a Daily Status Report field added to projects and the new project reporting feature. Consultants can now easily collaborate on updates to the daily status and then generate their daily status document from the same dashboard.

That’s not all, though. We realized one globally configured report filename would never apply to every possible document or report you might want to generate, so we made some changes. The biggest change is that your filename can now be configured using Jinja2-style templating with access to your project data, familiar filters, and a few additional values (e.g., `now` to include the current date and time). That means you can configure a filename like this that dynamically includes information about your project, client, and more:

{{now|format_datetime("Y-m-d")}} {{company_name}} — {{client.name}} {{project.project_type}} Report

That template would produce a filename like 2024-06-05 Acme Corp — SpecterOps Red Team Report.docx. That’s great for a default, but you’d probably be renaming documents often as you expanded the types of documents generated with Ghostwriter. To help with that, we made it possible to configure separate global defaults for reports and project documents.

Jinja2 Templating for Default Filenames for Report Downloads

We also added the option to override the filename per template! To revisit the daily status report example, your Word template for such a report can override the global filename template with a filename template like {{now|format_datetime("Y-m-d")}} {{company_name}} Daily Status Report.

Another enhancement for this feature is a third template type, Project DOCX. You can use this to flag Microsoft Word templates that are intended only for project documents. Templates flagged with this type will only appear in the dropdown menu on the project dashboard. This will keep them out of the list of templates for your reports.

Finally, Ghostwriter v4.2 also includes a feature proposed by community member Dominic Whewell, support for templating the document properties in your Word templates. If you set values like the document title, author(s), and company, you can now do that in your templates and let Ghostwriter fill in that information for you.

Jinja2 Templating in Document Properties

Background Tasks with Cron

Ghostwriter v4.2 also contains various bug fixes and quality-of-life improvements. There are too many to detail here, but one worth mentioning is support for cron expressions in scheduled background tasks. Previously, you could schedule tasks to run at certain time intervals, but you couldn’t control some of the finer points, like the day of the week.

You can now use cron to schedule a task on a more refined schedule, like only on a specific day of the week or only on weekdays. When scheduling a task, select Cron for the Schedule Type and provide a cron expression, e.g., 00 12 * * 1-5 to run the task every weekday at 12:00.

Using Cron to Schedule a Background Task

Check out the full CHANGELOG for all the details on this release.

Release Ghostwriter v4.2.0 · GhostManager/Ghostwriter


Ghostwriter v4.2 was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Ghostwriter v4.2 appeared first on Security Boulevard.

Is CVSS Alone Failing Us? Insights From Our Webinar With Verizon

10 June 2024 at 14:55
Is CVSS Alone Failing Us? Insights From Our Webinar With Verizon

In a recent webinar with Verizon, we discussed how organizations measure and prioritize their vulnerabilities. We reviewed insights from Verizon’s 2024 Data Breach Investigations Report, and double-clicked on data to answer several other key questions, such as: Is the Common Vulnerability Scoring System (CVSS) sufficient for prioritization? Does the Exploit Prediction Scoring System (EPSS) with CVSS …

Read More

The post Is CVSS Alone Failing Us? Insights From Our Webinar With Verizon appeared first on Security Boulevard.

How to Create a Cyber Risk Assessment Report

10 June 2024 at 06:30

In today's fast-paced digital landscape, conducting a cyber risk assessment is crucial for organizations to safeguard their assets and maintain a robust security posture. A cyber risk assessment evaluates potential threats and vulnerabilities, providing actionable insights to mitigate risks and protect sensitive information. As the control environment evolves rapidly, it is essential to regularly update these reports to reflect the latest changes and trends, ensuring that security professionals have accurate and current data to guide their decisions.

The post How to Create a Cyber Risk Assessment Report appeared first on Security Boulevard.

Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20
Microsoft CEO Satya Nadella, with superimposed text: “Security”

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.

The post Microsoft Recall is a Privacy Disaster appeared first on Security Boulevard.

7 Reasons Why You Need To Replace Your Microsoft CA

6 June 2024 at 02:57

Maintaining a robust and efficient Public Key Infrastructure (PKI) has never been more important for digital security. PKI is not only used to protect public-facing websites and applications but also to secure machine-to-machine communications across a wide range of enterprise use cases, ensuring data privacy and integrity. As the application of PKI expands, there is […]

The post 7 Reasons Why You Need To Replace Your Microsoft CA appeared first on Security Boulevard.

The Dual Edges of AI in Cybersecurity: Insights from the 2024 Benchmark Survey Report

4 June 2024 at 10:00

Artificial intelligence (AI) in cybersecurity presents a complex picture of risks and rewards. According to Hyperproof’s 5th annual benchmark report, AI technologies are at the forefront of both enabling sophisticated cyberattacks and bolstering defenses against them. This duality underscores the critical need for nuanced application and vigilant management of AI in cybersecurity risk management practices....

The post The Dual Edges of AI in Cybersecurity: Insights from the 2024 Benchmark Survey Report appeared first on Hyperproof.

The post The Dual Edges of AI in Cybersecurity: Insights from the 2024 Benchmark Survey Report appeared first on Security Boulevard.

Top 5 CVEs and Vulnerabilities of May 2024

3 June 2024 at 06:46

May brought a fresh batch of security headaches. This month, we’re focusing on critical vulnerabilities in widely used software like Apache, Gitlab, and Github. These flaws could allow attackers to...

The post Top 5 CVEs and Vulnerabilities of May 2024 appeared first on Strobes Security.

The post Top 5 CVEs and Vulnerabilities of May 2024 appeared first on Security Boulevard.
