I'm looking forward to demonstrating real-life cloud security issues next week at "An evening of AWS Security" at Photobox's new offices in London
AWS Lambda is a widely used cloud service which lets customers create event-driven, short-duration serverless functions. To find out whether a bad Lambda function could lead to AWS account takeover, I created some code to deploy one and uploaded it to GitHub.
The code returns the environment variables of the container in which the Lambda function is executing, simulating what could happen if the function were vulnerable to an injection attack or included a malicious dependency.
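A minimal sketch of what such a leaky handler might look like, assuming a Python runtime (the handler name and the filtering on the `AWS_` prefix are illustrative, not the exact code in my repository):

```python
import json
import os

def lambda_handler(event, context):
    """Illustrative vulnerable handler: it echoes back the execution
    environment, including the temporary credentials that the Lambda
    runtime injects as AWS_* environment variables."""
    leaked = {k: v for k, v in os.environ.items() if k.startswith("AWS_")}
    return {
        "statusCode": 200,
        "body": json.dumps(leaked),
    }
```

In a real Lambda container, the returned dictionary would include AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN for the function's execution role.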
An attacker can simply take these credentials and use the AWS command line interface (CLI) to perform any action within the AWS account that the Identity and Access Management (IAM) role assigned to the Lambda function allows. In my example code, that is access to AWS Secrets Manager. In the worst case, it could be access to IAM itself, allowing an attacker to set up their own administrator account.
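As a hedged illustration of how little work this takes, the helper below (hypothetical, not part of my repository) turns the leaked variables into the shell exports an attacker would paste before running the CLI; the three variable names are the ones the AWS CLI actually reads for credentials:

```python
def to_cli_exports(leaked: dict) -> str:
    """Turn variables leaked by the function into shell export lines.
    AWS_SESSION_TOKEN is required because a Lambda role only issues
    temporary credentials."""
    wanted = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN"]
    return "\n".join(f"export {k}={leaked[k]}" for k in wanted if k in leaked)
```

After pasting the exports, a command such as `aws secretsmanager list-secrets` runs with the Lambda role's permissions until the temporary credentials expire.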
So the answer is YES - if the AWS Lambda function is vulnerable, and the assigned role has excessive permissions, a bad Lambda function can lead to AWS account takeover.
To protect an AWS account from takeover due to bad Lambda functions, see my blog post "10 Steps to Lambda Security". Click on the Read More link below for screenshots and technical details.
Continuous cloud compliance is essential to maintain the security of applications and systems in the cloud. At DevSecCon London next week I'll be talking about my experiences in this area, and how an effective solution needs to include prevention, detection and remediation elements.
In my talk "A journey to continuous cloud compliance", I'll give a live demonstration of techniques and approaches with a system I've built in AWS using Capital One's open source Cloud Custodian project, combined with Lambda functions and other AWS services to provide customised notifications via email and Slack.
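To give a flavour of the approach, here is a sketch of the kind of Cloud Custodian policy involved. The policy name, email address and queue URL are placeholders, and this is not one of the exact policies from my demo; `global-grants` is Cloud Custodian's S3 filter for world-readable ACLs, and the `notify` action hands off to the c7n-mailer component via a queue:

```yaml
policies:
  - name: s3-bucket-made-public
    resource: s3
    description: Flag buckets with ACL grants to AllUsers or AuthenticatedUsers
    filters:
      - type: global-grants
    actions:
      - type: notify
        to:
          - security-team@example.com
        transport:
          type: sqs
          queue: https://sqs.eu-west-1.amazonaws.com/123456789012/custodian-notify
```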
Click on the read more link to see other examples of alerts and automated remediation.
Keys and secrets in code repositories have led to major data breaches and significant financial loss. An AWS secret key accidentally pushed to GitHub on a Friday reportedly led to a loss of $64,000 by Monday morning, as 244 virtual machines were spun up. The attacker who stole 57 million user and driver records from Uber appears to have made use of an AWS credential within a private GitHub repository with permissions to the S3 bucket used as a database backup.
Why do developers put keys and secrets in code repositories?
Developers and DevOps engineers want to automate application and infrastructure deployment, and the most straightforward way to do this can be to include the necessary keys and secrets in code. Sometimes this starts as an initial proof of concept but then ends up in production.
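The contrast is easy to show in a few lines. This hypothetical snippet (the variable names and secret value are invented for illustration) puts the proof-of-concept antipattern next to the safer habit of resolving the secret at runtime:

```python
import os

# The proof-of-concept antipattern: a credential baked into the source,
# which follows the file into version control. (Illustrative value only.)
DB_PASSWORD = "Sup3rS3cret!"

def get_db_password() -> str:
    """The safer habit: resolve the secret from the environment (or a
    secrets manager) at runtime, so the repository never contains it."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD not set; refusing to fall back to a hard-coded value")
    return password
```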
It's also easy to accidentally push a credential to a repository. I've done this myself with an Azure service principal credential. Fortunately it was a repository on a private network with limited access.
How can I discover keys and secrets in code repositories?
I've created a GitHub repository and deliberately included some keys and secrets. As it's a small repository, you can probably find them all manually. You can also scan it using a tool such as Gitrob. Click on the Read More link to find out more.
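The core of such a scan is pattern matching. A minimal sketch for one secret type: AWS access key IDs are 20 characters, a four-letter prefix such as AKIA (long-term) or ASIA (temporary) followed by 16 upper-case alphanumerics, which makes them easy to find with a regular expression. (Real scanners like Gitrob also walk the full Git history and look for many more patterns.)

```python
import re

# Matches AWS access key IDs: AKIA/ASIA prefix plus 16 upper-case alphanumerics.
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_aws_keys(text: str) -> list[str]:
    """Return candidate AWS access key IDs found in a blob of text,
    e.g. the contents of a file in a repository."""
    return AWS_KEY_RE.findall(text)
```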
Over the last few years there have been a large number of data breaches from S3 buckets.
Some of the data breaches were simply due to the S3 bucket being configured as public instead of private. AWS improved the S3 console recently, to clearly warn the user when a bucket or object is being made public. But I know some DevOps engineers who only ever use code and never log in to the console - so they would never see these warnings.
I find the Dow Jones case interesting because the misconfiguration arose from the use of "authenticated users" in the access control list. You might think that means an authenticated user of the same AWS account the S3 bucket resides in. In fact, it means any AWS account in the world.
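These dangerous grantees are identified by fixed group URIs in the ACL. A small sketch of how you might flag them, using the grant structure returned by boto3's `get_bucket_acl` (the function name is my own; only the two group URIs are AWS-defined):

```python
# The grantee group URIs AWS uses in S3 ACLs. "AuthenticatedUsers" means
# any signed-in AWS account anywhere, not just users of your own account.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTHENTICATED_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def risky_grants(grants: list[dict]) -> list[dict]:
    """Filter ACL grants (shaped like boto3 get_bucket_acl output) down
    to those exposing the bucket to the world or to any AWS account."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("URI") in (ALL_USERS, AUTHENTICATED_USERS)
    ]
```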
The Uber S3 bucket wasn't misconfigured as such. It appears that an attacker got hold of GitHub credentials and could therefore access private Git repositories, where they discovered an AWS key with rights to the S3 bucket.
Effectively protecting an organisation against cloud security incidents such as these requires an in-depth understanding of cloud security architecture and security expertise relating to cloud provider services, combined with a DevSecOps approach to developing, testing and deploying infrastructure code.
The talk I recently gave at Security BSides London is available on YouTube:
and the presentation can be seen here: