CI/CD Pipeline with AWS Code*
Using AWS Code* services (no GitHub Actions) to deploy to ECS
We have services deployed in ECS, forget about the rest. We made a change to our code, and we want to deploy it. Manual deployments are slow and prone to human error, so we want an entirely automated way to do it: a Continuous Integration / Continuous Delivery Pipeline (CI/CD pipeline).
Note: We're building this 100% in AWS, even using CodeCommit to store our git repos. You're probably more familiar with GitHub, GitLab or Bitbucket. I'll feature them in future issues, for now I wanted to show you how AWS does this.
We're going to use the following AWS services:
AWS CodeCommit: A fully-managed source control service that hosts Git repositories, allowing you to store and manage your app's source code. Think GitHub or Bitbucket, but done by AWS.
AWS CodeBuild: A fully-managed build service that compiles your app's source code, runs tests, and produces build artifacts.
AWS CodePipeline: A fully-managed continuous deployment service that helps you automate your release pipelines. You can orchestrate various stages, such as source code retrieval, build, and deployment, which are resolved by other services like CodeCommit, CodeBuild, and CodeDeploy.
AWS Code* CI/CD Pipeline to ECS
How to Build a CI/CD Pipeline Entirely in AWS
Step 1: Set up a git repo in CodeCommit
In case you don't know CodeCommit, it's basically AWS GitHub. You're not forced to use it, here's how to use GitHub and how to use Bitbucket. But let's stick to it for this guide, and see what happens.
Install Git (if not already installed):
Windows: https://gitforwindows.org/
MacOS: brew install git, or https://git-scm.com/download/mac
Linux: sudo apt-get install git or sudo yum install git
Open the CodeCommit dashboard in the AWS Management Console.
Click on "Create repository" and configure the repository settings, such as the name, description, and tags.
Go to the IAM console and click on your IAM user.
Click on the Security credentials tab, scroll down to HTTPS Git credentials for AWS CodeCommit and click Generate credentials.
Copy the username and password, or download them as a CSV.
Open your terminal and navigate to the directory containing your app's source code.
Go back to the CodeCommit console and click on your repository.
Copy the git clone command under Step 3: Clone the repository.
Open a terminal in a new directory, paste that command and run it. When it asks for your username and password, use the ones you copied or downloaded earlier.
Change directories to the directory the command just created, and copy all files and directories of your project into that directory.
Run git add ., git commit -m "Initial commit" and git push.
Step 2: Create a buildspec file
The buildspec.yml file is a way to use code to tell CodeBuild what it needs to do. This one just logs in to Amazon ECR (so we can then push the Docker image), then runs docker build and docker push. It's a little verbose maybe, and there are a few details like using docker buildx build so we can specify the target platform, but that's the gist of it.
Create a file called buildspec.yml with the following contents. Replace $ECR_REPOSITORY, $AWS_DEFAULT_REGION, and $CONTAINER_NAME with the appropriate values for your project.
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - REPOSITORY_URI=$(aws ecr describe-repositories --repository-names $ECR_REPOSITORY --query 'repositories[0].repositoryUri' --output text)
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin ${REPOSITORY_URI%%/*}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker buildx build --platform=linux/amd64 -t $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
      - docker tag $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:latest
      - echo Writing image definitions file...
      - printf '[{"name":"%s","imageUri":"%s"}]' $CONTAINER_NAME $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION > imagedefinitions.json

artifacts:
  files:
    - imagedefinitions.json
  discard-paths: yes
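If you want to sanity-check the imagedefinitions.json format before wiring up the pipeline, you can reproduce that last printf locally. The container name and image URI below are made-up placeholders; substitute your own values:

```shell
# Hypothetical values; in the real build these come from CodeBuild variables
CONTAINER_NAME=simple-aws-container
IMAGE_URI=123456789012.dkr.ecr.us-east-1.amazonaws.com/simple-aws-app:abc1234

# Same command the post_build phase runs
printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$IMAGE_URI" > imagedefinitions.json
cat imagedefinitions.json
```

The ECS deploy action reads this file to know which container in the task definition should get the new image URI.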
Step 3: Configure the CodeBuild project
In this step we're going to create the CodeBuild resource (called a project) and configure things like what kind of instance it runs on, what OS, what IAM Role it uses, etc. There are no details on what CodeBuild needs to do; that's all defined in the buildspec.yml file from the previous step.
Open the CodeBuild dashboard in the AWS Management Console.
Click on "Create build project" and configure the project settings:
Project name: Enter a unique name for the build project, such as SimpleAWSBuilder.
Source: Choose "CodeCommit" as the source provider. Then, select your git repo and the "master" branch.
Configure the environment settings for your build project:
Environment image: Select "Managed image."
Operating system: Choose "Amazon Linux 2."
Runtime(s): Select "Standard" and choose the latest available image version.
Image: Choose the latest one.
Check the Privileged checkbox.
Service role: Choose New service role and enter for Role name "SimpleAWSCodeBuildServiceRole".
For Buildspec, just leave the default settings. CodeBuild will use the buildspec.yml file you created earlier.
Click "Create build project" to create your new build project.
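If you prefer the CLI over console clicks, the same project can be sketched with aws codebuild create-project. The project name, repo URL, image tag, and role ARN below are assumptions; adjust them to your account and region:

```shell
# Sketch only: roughly equivalent to the console steps above.
# privilegedMode=true is what the "Privileged" checkbox sets; it's needed to run Docker in the build.
aws codebuild create-project \
  --name SimpleAWSBuilder \
  --source type=CODECOMMIT,location=https://git-codecommit.us-east-1.amazonaws.com/v1/repos/simple-aws-app \
  --artifacts type=NO_ARTIFACTS \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/amazonlinux2-x86_64-standard:4.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true \
  --service-role arn:aws:iam::123456789012:role/SimpleAWSCodeBuildServiceRole
```

Note that when the project is driven by CodePipeline (as in Step 5), CodePipeline overrides the source and artifacts settings anyway.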
Step 4: Give CodeBuild the necessary IAM permissions
CodeBuild is pushing a Docker image to our ECR registry, and it needs IAM permissions to do so (unless the ECR registry is public, which we probably don't want). The previous step created the CodeBuild project with an IAM Role, in this step we're adding an IAM Policy to that role, to give it those permissions. We're also adding permissions to publish logs to CloudWatch Logs.
Go to the IAM console and on the menu on the left click Roles.
Search for the role you just created for CodeBuild (the name should be SimpleAWSCodeBuildServiceRole) and click on the name.
Click Add permissions and click Attach policies. Click Create policy.
Click the JSON tab and replace the contents with the contents below. Replace your-account-id with the ID of your AWS Account, your-ecr-registry with the name of your ECR registry, and change us-east-1 if you're using a different region.
Click Next (Tags), Next (Review), give your policy a name such as "SimpleAWSCodeBuildPolicy" and click Create.
Go back to the Attach policies tab (you can close the current one), click the Refresh button on the right, and type "SimpleAWSCodeBuildPolicy" in the search box.
Click the checkbox on the left of the SimpleAWSCodeBuildPolicy policy and click Add permissions.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:CompleteLayerUpload",
                "ecr:DescribeRepositories",
                "ecr:InitiateLayerUpload",
                "ecr:PutImage",
                "ecr:UploadLayerPart"
            ],
            "Resource": "arn:aws:ecr:us-east-1:your-account-id:repository/your-ecr-registry"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:us-east-1:your-account-id:*"
        }
    ]
}
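The console clicks above boil down to two CLI calls, assuming you saved the JSON above locally as policy.json (the account ID is a placeholder):

```shell
# Create the policy from the JSON document above (saved locally as policy.json)
aws iam create-policy \
  --policy-name SimpleAWSCodeBuildPolicy \
  --policy-document file://policy.json

# Attach it to the CodeBuild service role created in Step 3
aws iam attach-role-policy \
  --role-name SimpleAWSCodeBuildServiceRole \
  --policy-arn arn:aws:iam::123456789012:policy/SimpleAWSCodeBuildPolicy
```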
Step 5: Create a pipeline in CodePipeline
CodePipeline doesn't actually execute any steps itself; it just coordinates them and calls other services to do the real work. Like a Project Manager! (kidding!!).
The change detection is handled by CodeCommit. We're linking our CodePipeline pipeline to our CodeCommit repository, and CodeCommit will publish an event to CloudWatch Events when there's a change, which CodePipeline is listening for so it can kick off the pipeline.
The build phase is handled by CodeBuild. CodePipeline just passes the values and tells it to do its thing.
The deployment phase is handled by ECS itself. It's just a rolling update: It creates a new task, and once it's working it kills the old one. We could do much fancier stuff here with CodeDeploy, if we wanted.
In the AWS Console go to the CodePipeline dashboard.
Click on "Create pipeline."
Configure the pipeline settings:
Pipeline name: Give your pipeline a unique name, such as SimpleAWSPipeline.
Service role: Leave at "New service role" to create a new IAM role for your pipeline. You can change the Role name if you want, or leave it as it is.
Click "Next."
Configure the source stage:
Source provider: Choose "AWS CodeCommit."
Repository name: Select the CodeCommit repository you created earlier.
Branch name: Select the "master" branch.
Change detection options: Choose "Amazon CloudWatch Events (recommended)" to automatically trigger the pipeline when there's a new commit.
Click "Next."
Configure the build stage:
Build provider: Choose "AWS CodeBuild".
Region: Select your region.
Project name: Choose the CodeBuild project you created earlier.
Click "Next."
Configure the deploy stage:
Deploy provider: Choose "Amazon ECS".
Region: Select your region.
Cluster name: Select your ECS cluster.
Service name: Select your ECS service.
Click "Next."
Review your pipeline settings and click "Create pipeline".
Step 6: Push a change and check the deployment
Let's turn the ignition key, and see if it blows up. We're going to push a simple change, such as a comment, and see CodePipeline do the magic.
Make a change to your code, such as adding a comment.
Run git add ., git commit -m "Pipeline test" and git push.
Watch CodePipeline to see the pipeline progress from detecting the change, running the CodeBuild step and deploying to ECS.
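You can watch the same progression from the CLI too. This assumes the pipeline name we picked earlier:

```shell
# Show the current state of each stage (Source, Build, Deploy)
aws codepipeline get-pipeline-state --name SimpleAWSPipeline

# List recent executions and whether they succeeded
aws codepipeline list-pipeline-executions --pipeline-name SimpleAWSPipeline --max-items 5
```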
Understanding AWS CodePipeline
Do we need all of that just for a CI/CD Pipeline?
Well, in AWS, yes we do. AWS favors solving problems by combining lots of specific services. Paraphrasing the Microservices Design article I wrote, they're like functional infrastructure microservices, where a working infrastructure is an emergent property. Quite complex, right? Well, that's the reason why I write Simple AWS!
Can't I just use GitHub Actions?
YES!!! You can! Just don't set long-lived AWS credentials as environment variables, do this instead.
Don't discount CodeDeploy though, everything it needs to do happens inside AWS, so it's pretty great at doing those things, like a Blue/Green deployment.
So, if I can use GitHub Actions, why are you even writing about this?
Well, GitHub Actions makes CI/CD as code much easier (we could have done all of this through a CloudFormation template of like 250 lines). But it's not entirely trivial. Here's a post on how the whole thing works. We'll do an issue on that in the future, where we'll also dive into CodeDeploy and deployment strategies.
By the way, if you want to use GitHub Actions because you want to be cloud agnostic, let me tell you you've got much bigger problems to deal with than a CI/CD Pipeline. And if you want to avoid vendor lock-in, trading lock-in with AWS (which you already have) for lock-in with GitHub doesn't solve that.
What is CI/CD anyway, and why do I need it?
Continuous Integration actually means integrating your code with other devs' code all the time. Technically, the only way to do that is with Trunk-Based Development (TBD). TBD is “A source-control branching model, where developers collaborate on code in a single branch called ‘trunk’, resist any pressure to create other long-lived development branches by employing documented techniques. They therefore avoid merge hell, do not break the build, and live happily ever after.” (from here). It's the opposite of any branch-based flow such as GitHub flow. Even if you're using branches (and technically not doing Continuous Integration), a CI/CD pipeline is still an extremely useful tool. Maybe it shouldn't be called CI/CD pipeline?
The CD part has two potential meanings: Continuous Delivery, which means our software is automatically built and made ready to be shipped to users (or installed in a prod environment) at the push of a button, or Continuous Deployment where there's no button and the deployment also happens automatically. Both are good, Deployment is better.
Why you need it is a very long discussion that I'll summarize like this: When building software, we're very often very wrong about what to build. For that reason, we want to shorten feedback cycles, so we can correct course faster and waste less time and money working on the wrong thing. The most valuable feedback is the actual user using the product, so we want to make small changes and get them in front of the user fast and often. The fastest way to do that without risking human errors is to automate that process. The tool that runs that process on automatic is called a CI/CD Pipeline.
By the way, that whole paragraph describes one big technical part of agility, as considered in the Agile Manifesto.
Best Practices for CI/CD Pipelines on AWS
Operational Excellence
Set up notifications: You can set up CloudWatch Events for CodePipeline events, and then maybe use SNS to get an email. Or use AWS Chatbot to get notified on Slack. Whatever you choose, stay on top of failed pipeline executions.
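As a sketch of the CloudWatch Events route: one rule matching failed executions, targeting an SNS topic you've already created and subscribed your email to (the rule name and topic ARN are placeholders):

```shell
# Fire only on failed pipeline executions
aws events put-rule \
  --name SimpleAWSPipelineFailed \
  --event-pattern '{"source":["aws.codepipeline"],"detail-type":["CodePipeline Pipeline Execution State Change"],"detail":{"state":["FAILED"]}}'

# Send matching events to an existing SNS topic
aws events put-targets \
  --rule SimpleAWSPipelineFailed \
  --targets 'Id=sns,Arn=arn:aws:sns:us-east-1:123456789012:simple-aws-alerts'
```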
Implement testing stages: Include automated testing stages in your pipeline to validate code changes and ensure the quality and stability of your application. Automate these tests, obviously.
Security
Limit IAM permissions: Apply the principle of least privilege for IAM roles and permissions. Grant only the necessary permissions for CodeBuild and CodePipeline to access resources, and avoid using overly permissive policies.
Use temporary credentials: The step by step proposes using long-lived credentials tied to your IAM User. There's a better way, with federated identities and temporary credentials.
Store secrets in Secrets Manager: Manage and secure sensitive data using AWS Secrets Manager. Integrate Secrets Manager with CodeBuild and CodePipeline to provide secure access to secrets during the deployment process. If you're hitting the pull limit on Docker Hub and need to create an account to increase the pull limit, this is how you'd store and pass those credentials.
Reliability
Retry actions: Configure CodePipeline to retry failed actions.
Use a deployment strategy that doesn't cause downtime: In our case, we're using a rolling update performed by the ECS service. In short, ECS creates one new task, waits for it to succeed the health checks (container, LB, etc), kills one old task, and repeats the process until there are no more old tasks. Blue/Green is another option (which you can do with CodeDeploy), where a new version is deployed while keeping the old version alive, and traffic is switched to the new version, and switched back to the old one if the new version fails. Take your pick, but automate it, and automate the rollbacks.
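The rolling-update knobs described above live in the ECS service's deployment configuration; here's a sketch of setting them (cluster and service names are placeholders):

```shell
# minimumHealthyPercent=100 keeps every old task alive until its replacement is healthy;
# maximumPercent=200 lets old and new tasks run side by side during the rollout.
# The circuit breaker rolls back automatically if the new tasks keep failing.
aws ecs update-service \
  --cluster simple-aws-cluster \
  --service simple-aws-service \
  --deployment-configuration "maximumPercent=200,minimumHealthyPercent=100,deploymentCircuitBreaker={enable=true,rollback=true}"
```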
Performance Efficiency
Optimize build times: Use caching and parallelization in CodeBuild to speed up build times, reducing overall pipeline execution time and improving resource utilization. This also improves the problem of developers switching context while they wait for the build to run.
Cost Optimization
Use on-demand pricing: Leverage on-demand pricing for CodeBuild to reduce costs and pay only for the build time you consume. Also, right-size your build instances. Don't spend too much time here though, you won't be paying a lot for CodeBuild.
Clean up old build artifacts: Implement a lifecycle policy in Amazon S3 to automatically delete old build artifacts.
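CodePipeline stores its artifacts in an S3 bucket it creates for you; a lifecycle rule like this one expires them automatically (the bucket name and the 30-day window are assumptions):

```shell
aws s3api put-bucket-lifecycle-configuration \
  --bucket codepipeline-us-east-1-123456789012 \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-artifacts",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 30}
    }]
  }'
```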
Resources
I touched a bit upon Trunk-Based Development, as opposed to branches (be it feature branches, GitHub flow, etc). Here's a really long and really complete post by Martin Fowler on different patterns for that.
Check out these great courses by Derek Morgan (fellow AWS Community Builder). Lots of them are FREE!
Here's a workshop on CI/CD for ECS, in case you want to dive deeper into this issue's topic.