AWS Security - flAWS Walkthrough

This is a bucket, but let's talk about a different kind of bucket today.  

For this blog post I will be discussing security revolving around Amazon Web Services.  These are not my original ideas; this post is a reference to the popular flAWS challenges written by Scott Piper (credit where credit is due).  I will be walking through several of the levels, experimenting with common mistakes made when configuring AWS environments.  

Level 1

Let's start with Level 1.  This level is buckets of fun.  Let's see if we can find the first sub-domain.  The site flaws.cloud is hosted on an S3 bucket, but what is an S3 bucket?

Amazon S3 is cloud storage for the Internet.  You can upload your data (photos, videos, documents, HTML, etc.).  To do this you must first create a bucket in one of the AWS Regions, and then you can upload any number of objects to that bucket.  S3 also provides a great way to host static sites, similar to hosting on other popular platforms such as GitHub Pages.  

Interestingly enough, when hosting a site on an S3 bucket, the bucket name (in this instance flaws.cloud) must match the domain name (flaws.cloud).  It should also be noted that S3 bucket names live in a global namespace, meaning two people cannot have buckets with the same name.  As a result, if you were able to create a bucket named apple.com, then Apple, the world's largest fruit vendor, would not be able to use that name for S3 website hosting.  

We can use some well-known Linux commands to determine if a site is hosted on an S3 bucket by running a DNS lookup on the domain name like this:  

dig +nocmd flaws.cloud any +multiline +noall +answer
# Returns: 
# flaws.cloud.			5 IN A 54.231.184.255

Using that IP address we can run an nslookup command to find out what domain names point to that address.

nslookup 54.231.184.255
# Returns:
# Non-authoritative answer: 
# 255.184.231.54.in-addr.arpa	name = 
#     s3-website-us-west-2.amazonaws.com

Take note of the us-west-2 in the output above.  This is the region in which the S3 bucket is hosted.  All S3 buckets, when configured for web hosting, are given an AWS domain of the form bucket-name.s3-website-region.amazonaws.com, so you can browse to them without setting up your own DNS.  In this case, flaws.cloud can also be visited at http://flaws.cloud.s3-website-us-west-2.amazonaws.com/.  Maybe the engineer misconfigured the permissions on who can see the S3 bucket.  What happens if we go straight to the S3 location? http://flaws.cloud.s3.amazonaws.com/
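Since the website endpoint follows a predictable pattern, we can sketch building it in the shell (the bucket and region values below are the ones we just found):

```shell
# S3 static-website endpoints follow the pattern
# <bucket>.s3-website-<region>.amazonaws.com
bucket="flaws.cloud"
region="us-west-2"
url="http://${bucket}.s3-website-${region}.amazonaws.com/"
echo "$url"
# http://flaws.cloud.s3-website-us-west-2.amazonaws.com/
```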

By browsing straight to the S3 bucket location we can see all of the contents that are currently residing in the bucket.  This is because the engineer responsible for setting up the S3 bucket did not set up the correct permissions.  Looking through the contents we see the file at the bottom named secret-dd02c7c.html.  Browsing to that file we receive the secret and the link to level 2.  
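When you request the raw bucket endpoint, the listing comes back as a ListBucketResult XML document.  As a quick sketch (run against a hard-coded sample response here, so no AWS access is needed; the real response contains more fields), you can pull out just the object keys with standard tools:

```shell
# A trimmed sample of the ListBucketResult XML an open bucket returns.
listing='<ListBucketResult><Contents><Key>index.html</Key></Contents><Contents><Key>secret-dd02c7c.html</Key></Contents></ListBucketResult>'

# Extract each <Key> element, then strip the tags to get bare file names.
keys=$(echo "$listing" | grep -o '<Key>[^<]*</Key>' | sed -e 's/<Key>//' -e 's/<\/Key>//')
echo "$keys"
# index.html
# secret-dd02c7c.html
```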

Level 2

AWS S3 buckets can be set up with many different permission settings and functionality.  As discussed in the previous level, developers can use S3 to host static files, and quite a few people accidentally open these buckets with permissions that are too loose.  Don't believe me?  Well, luckily a recent website used Shodan to search for these open buckets, and the results speak for themselves.  

By default, S3 buckets are private and secure when they are created.  To configure a bucket as a website you have to turn on "Static Website Hosting" and change the bucket policy to allow everyone s3:GetObject privileges, which is fine if you plan to publicly host the bucket as a web page.  However, the author of flaws.cloud went out of their way to additionally grant "Everyone" the "List" permission.  Unfortunately, in AWS permission settings "Everyone" doesn't mean everyone authenticated to your application.  It means everyone on the Internet.
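For reference, a minimal public-read bucket policy of the kind described above looks like this (bucket name filled in purely for illustration; note it grants only s3:GetObject, not List):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::flaws.cloud/*"
    }
  ]
}
```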

For level 2 we actually need to sign up for an AWS account, which can be done with a free tier account.  However, it should be noted that it does require a credit card, so that if you spin up some instances Amazon will be able to bill you for them.  You can sign up for a free account on the AWS website.  

To set up an AWS free tier account and retrieve the access key ID that will be needed later in the post, we can follow these instructions:

1. Open the IAM console.
2. In the navigation pane of the console, choose Users.
3. Choose your IAM user name (not the check box).
4. Choose the Security credentials tab and then choose Create access key.
5. To see the new access key, choose Show. Your credentials will look something like this:
	Access key ID: AKIAIOSFODNN7EXAMPLE
	Secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
6. To download the key pair, choose Download .csv file. Store the keys in a secure location.
7. Keep the keys confidential in order to protect your AWS account, and never email them. Do not share them outside your organization, even if an inquiry appears to come from AWS or Amazon.com. No one who legitimately represents Amazon will ever ask you for your secret key.

Once your account is set up you will need to obtain your own AWS key, and will also need to install the AWS Command Line Interface (CLI).  If you are on a Linux device you can install the AWS CLI quite easily.  

# For Debian and Ubuntu Devices
apt install python-pip	#python 2
apt install python3-pip	#python 3

# For CentOS and RHEL
yum install epel-release
yum install python-pip

# finally install the AWS CLI
pip install awscli --upgrade --user

# You can check the version by running
aws --version

Next we need to configure the AWS CLI with the account that was set up earlier in the blog post.  The AWS CLI tool will walk you through the setup if you run the following command.  

aws configure --profile user2
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: text
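Under the hood, aws configure simply writes these values to plain-text files in your home directory.  The profile above ends up looking roughly like this (keys shown are the example values from before, not real credentials):

```ini
# ~/.aws/credentials -- one section per profile
[user2]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY

# ~/.aws/config -- note the "profile " prefix in this file
[profile user2]
region = us-east-1
output = text
```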

Continuing on with the Level 2 challenge: if you still have the link to level 2 from earlier, you can now access that S3 bucket with your newly configured AWS CLI tool.  The permissions for this level's bucket are too loose, but you will need your AWS account to see what is inside of it.  This can be done by running the following command:

aws s3 --profile YOUR_ACCOUNT ls s3://level2.url

At this point you should have found the next level's URL at http://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/.  This will complete the level 2 challenge.  

Level 3

Alright, so for level 3 the developer decided to restrict who could see the static files in the S3 bucket.  Thus, the developer reconfigured the Amazon S3 bucket's permissions from Everyone to Any Authenticated AWS User.  The developer thinks that this will only allow users of their own account, but this thought process is in fact incorrect.  This permission setting allows anyone who has an AWS account to access the static files in the S3 bucket.  Much like the previous level, you should be using the AWS CLI tool to list the files in this bucket.  

Looking through the bucket, there are not many secret files here; however, there is an interesting directory named .git.  Git is a version control system for tracking changes in computer files and coordinating work on those files among multiple people.  Version control lets you roll back to previous iterations or changes easily by keeping track of all changes to the project.  First we need to download the entire folder structure, which can be done by using the following command:

aws s3 sync s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/ . --no-sign-request --region us-west-2

Now, this post is not going to be a full-length git tutorial, but I will walk through some of the basic commands that are useful for this scenario.  Looking through the basic git commands, we notice one named git log, which will list all of the previous changes that have been made.  Once we find a change that we would like to look at, we can issue git checkout <commit hash> in order to return to that previous iteration.  

git checkout f7cebc46b471ca9838a0bdd1074bb498a3f84c87
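To see why deleting a secret in a later commit doesn't actually remove it, here is a self-contained sketch using a throwaway repository (the file name and fake key are made up for the demo):

```shell
set -e
# Build a throwaway repo to show that "deleted" secrets live on in history.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "demo@example.com"
git config user.name "demo"

# Commit a fake credential, then "remove" it in a follow-up commit.
echo "aws_secret_access_key = FAKEEXAMPLEKEY" > access_keys.txt
git add access_keys.txt
git commit -qm "initial commit"
git rm -q access_keys.txt
git commit -qm "remove accidentally committed key"

# The file is gone from the working tree, but history still has it:
first=$(git rev-list --max-parents=0 HEAD)
git show "$first:access_keys.txt"
# aws_secret_access_key = FAKEEXAMPLEKEY
```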

Now you should have found the AWS key and secret that were residing in the previous git version.  This is a real issue.  So many developers accidentally upload their AWS keys and secrets to git that Amazon has released tools to help prevent developers from committing their secrets.  Returning to the challenge at hand, we can use the AWS key and secret by configuring a new profile to be used.  

aws configure --profile flaws

Use the found AWS key and secret when prompted.  Now that we have a profile with the compromised user's credentials, we can list the contents of the bucket by running the following command:

aws --profile flaws s3 ls

Look at that!  We have found the 4th level's URL in the S3 bucket: http://level4-1156739cfb264ced6de514971a4bef68.flaws.cloud/.  So what have we learned from the Level 3 challenge?  Many people accidentally leak their AWS keys and secrets through version control.  It is not as simple as removing the file from git, as previous versions will still have the changes recorded.  If an AWS key or secret might have been compromised, you should revoke it in AWS.  

What can we do to prevent this type of vulnerability?  Rotating keys often is a great practice to follow.  Rotating secrets and keys means that you revoke the old keys and generate new ones.  Thus, if an attacker or prying eyes were able to find previous keys, those keys will already have been invalidated.