If you're like me, when you learned about containers, you got SUPER excited. There is something about wrapping up your code in a nice little package where it has everything it needs to run in a happy little space where it's nice and warm. BUT: how am I supposed to get my happy warm little container out into the real world where it can actually do some good? That's a harder question to answer.
For the longest time, DevOps and product development teams were separated by a big ol' wall of specialization. As with most walls, the people on either side start doing things without telling the people on the other side, and pretty soon each side is driving its own agenda toward its own goals. That makes it too easy to toss your code over the wall to the DevOps team, where "they'll get to it when they get to it."
I'm not a big fan of setting up software shops using tautologies, so I thought that there must be a better way.
The end goal is to set it up so that when I check in code, it auto-builds and auto-deploys. That's the dream, right? That concept has a name: Continuous Integration and Deployment, or CI/CD. Making it work across lots of languages, products, and stacks is a lot harder than it sounds.
Happily, Docker has made this pretty easy for us. It's easy to bundle a container and let it run on a Docker host. The hard part is setting up an environment where those wheels are already greased and ready to go.
I've noticed that Docker has some adoption pain. At my last job, it was really difficult to build a container and a CI/CD environment. I know this is a common problem, so I built a CloudFormation script to help you. Don't worry, I'm going to walk you through how it works.
To get all of that stuff into a place where we can use it, we're going to leverage Amazon's Elastic Container Service (ECS), which means we need to build a couple of things first.
We also need to set something up that can take our code, transform it (think minification, transpiling, compiling), put it into a container, then use the new container to update our TaskDefinition.
All of this is a pain in the ass to build manually! You can stumble around the AWS console for weeks, and still not feel like you have a firm grasp on what you're actually setting up. Not only that, but once you do successfully build it, depending on how good you are at taking notes, you might not actually remember what you did.
This is what makes CloudFormation so powerful. You can build an environment and then repeat it as many times as you need to.
This particular advantage is powerful, because you can use it to set up all of your microservices. The script has been set up in such a way that all you have to do is give it access to your GitHub account, tell it which subnets and security groups to use, and it will auto-build all your stuff. Let me show you what I mean.
Let's start with our ECS resources.
Listener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- TargetGroupArn: !Ref TargetGroup
Type: forward
LoadBalancerArn: !Ref LoadBalancer
Port: 80
Protocol: HTTP
LoadBalancer:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: sample-ecs-load-balancer
Scheme: internet-facing
SecurityGroups:
- !Ref SecurityGroupId
Subnets:
- Fn::Select:
- 0
- !Ref SubnetIds
- Fn::Select:
- 1
- !Ref SubnetIds
Type: application
TargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
DependsOn: LoadBalancer
Properties:
VpcId: !Ref VpcId
Port: 80
Protocol: HTTP
Matcher:
HttpCode: 200-299
HealthCheckIntervalSeconds: 80
HealthCheckPath: "/"
HealthCheckProtocol: HTTP
HealthCheckTimeoutSeconds: 50
HealthyThresholdCount: 2
UnhealthyThresholdCount: 5
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: '60'
TargetType: ip
EcrRepository:
Type: AWS::ECR::Repository
Properties:
    RepositoryName: !Ref ECRRepoName
SampleCluster:
Type: AWS::ECS::Cluster
Properties:
ClusterName: SampleCluster
SampleService:
DependsOn:
- EcrRepository
- SampleTaskDefinition
- Listener
Type: AWS::ECS::Service
Properties:
Cluster: !Ref SampleCluster
DesiredCount: 0
HealthCheckGracePeriodSeconds: 30
LaunchType: FARGATE
LoadBalancers:
      - ContainerName: !Ref ECRRepoName
ContainerPort: 80
TargetGroupArn: !Ref TargetGroup
NetworkConfiguration:
AwsvpcConfiguration:
AssignPublicIp: ENABLED
SecurityGroups:
- !Ref SecurityGroupId
Subnets:
- Fn::Select:
- 0
- !Ref SubnetIds
- Fn::Select:
- 1
- !Ref SubnetIds
ServiceName: sample-service
TaskDefinition: !Ref SampleTaskDefinition
SampleTaskDefinition:
Type: AWS::ECS::TaskDefinition
DependsOn: EcrRepository
Properties:
ExecutionRoleArn: !Ref TaskExecutionRole
RequiresCompatibilities:
- FARGATE
NetworkMode: awsvpc
Cpu: 256
Memory: 0.5GB
ContainerDefinitions:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${ECRRepoName}"
        Name: !Ref ECRRepoName
Memory: 512
PortMappings:
- ContainerPort: 80
HostPort: 80
Protocol: tcp
TaskExecutionRole:
Type: AWS::IAM::Role
Properties:
RoleName:
!Sub
- ${repoName}-taskExecutionRole
- repoName: !Ref ECRRepoName
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
              - ecs-tasks.amazonaws.com
Action:
- sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
ContainerAgentRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- ecs.amazonaws.com
Action:
- sts:AssumeRole
Policies:
- PolicyName: code-pipeline-policy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- elasticloadbalancing:DescribeListeners
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTargetGroups
- elasticloadbalancing:DescribeTargetHealth
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeTargetGroupAttributes
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateRule
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:RegisterTargets
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:ModifyListener
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyRule
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:SetIpAddressType
- elasticloadbalancing:SetSecurityGroups
- elasticloadbalancing:SetRulePriorities
- elasticloadbalancing:SetSubnets
Resource: '*'
I know this seems like a lot, but I'm going to walk you through it.
The Listener is the logical resource that sits between the Load Balancer and the TargetGroup. We're using an Application Load Balancer in this example so we can put it in front of our containers. The Listener is what carries the payload from the ALB endpoint to the TargetGroup.
This is where the IP addresses get registered for forwarding requests from the Load Balancer to the actual container.
This should be relatively self-explanatory. In this example we're using an ALB as opposed to a Classic Load Balancer.
This is like a repo on DockerHub. The big difference here is that you're only allowed to store 1,000 versions of your image in this repo. That is a soft limit; if you want to store more versions than that, you can request a service limit increase. You could easily run into that limit if you rely heavily on Docker's tagging feature. There are plenty of organizations on DockerHub doing exactly that: multiple products under the same container name, distinguished only by tag.
In this example, however, we are just using the first 8 characters of our Git SHA as the tag, so CodePipeline knows what to push as "latest" out to our service. For this example, 1,000 should be more than enough.
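To make that concrete, here's the tag computation from the buildspec we'll see later, run against a hypothetical commit SHA (in a real build, CodeBuild supplies the value in CODEBUILD_RESOLVED_SOURCE_VERSION):

```shell
# Hypothetical 40-character commit SHA, as CodeBuild would provide it
CODEBUILD_RESOLVED_SOURCE_VERSION="3f7c2b9e4d1a8c5f6e0b2d4a9c8e7f6a5b4c3d2e"

# Keep the first 8 characters as the image tag
TAG=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | head -c 8)
echo "$TAG"   # 3f7c2b9e
```

Eight hex characters is short enough to read at a glance and long enough that collisions are vanishingly unlikely for a single repo.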
This is the cluster where this microservice will run. If you have other items that need to be part of your app, and they can be logically grouped, it would be a good idea to place them in the same cluster.
This is the part that scales. You can increase or decrease the desired, minimum, and maximum number of running tasks. In the script the DesiredCount is set to zero, and that's deliberate: the service can't depend on a TaskDefinition that isn't built yet, and the TaskDefinition can't be "done" until there's an artifact ready to run. Since we are providing code as the input to this whole process, not artifacts, setting the DesiredCount to anything greater than 0 would cause a circular dependency.
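Once the first image actually lands in the repo, you can flip the switch. One way (a sketch, using this template's resource names) is to raise DesiredCount and update the stack:

```yaml
SampleService:
  Type: AWS::ECS::Service
  Properties:
    # ...everything else as before...
    DesiredCount: 2  # raise from 0 once an image exists in the repo
```

Alternatively, `aws ecs update-service --cluster SampleCluster --service sample-service --desired-count 2` does the same thing without a stack update, though the stack will then drift from what CloudFormation thinks is deployed.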
Here is our container! Take note that the container port and the listener port both default to port 80. If you need something other than 80, you can change it here. I thought about making this a parameter instead, but that felt like too specific a question to ask at CloudFormation time. I'm not married to that idea, though; I can be convinced to make it a parameter, or you can just modify the script before you run it. So many possibilities. Imagine.
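If you do want it parameterized, a minimal sketch might look like this (ContainerPort is a hypothetical parameter name; you'd also substitute it for the hardcoded 80s in the Listener, TargetGroup, Service, and TaskDefinition):

```yaml
Parameters:
  ContainerPort:
    Type: Number
    Default: 80
    Description: Port your container listens on
Resources:
  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      Port: !Ref ContainerPort
      # ...rest unchanged...
```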
This is literally just naming a role and giving it the managed policy of AmazonECSTaskExecutionRolePolicy.
This is the role that your container runs with. The permissions it needs mostly have to do with the Elastic Load Balancer: it needs to be able to register itself as a target, accept traffic, and so on.
Ok, here is the CodePipeline/CodeBuild section.
Resources:
SourceArtifactStore:
Type: AWS::S3::Bucket
BuildArtifactStore:
Type: AWS::S3::Bucket
CodeBuildProject:
Type: AWS::CodeBuild::Project
Properties:
Artifacts:
Type: CODEPIPELINE
Name: buildOutput
NamespaceType: BUILD_ID
Source:
Type: CODEPIPELINE
ServiceRole: !GetAtt CodePipelineRole.Arn
Environment:
Type: LINUX_CONTAINER #the only allowed type.
ComputeType: BUILD_GENERAL1_SMALL #https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-cli
Image: aws/codebuild/docker:17.09.0 #https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
Name: !Ref CodeBuildProjectName
CodePipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
ArtifactStore:
Type: S3
Location: !Ref SourceArtifactStore
Name: !Ref CodePipelineName
RestartExecutionOnUpdate: true #just a preference
RoleArn: !GetAtt CodePipelineRole.Arn
Stages:
- Name: Source
Actions:
- ActionTypeId:
Category: Source
Owner: ThirdParty
Provider: GitHub
Version: 1
Configuration:
Owner: !Ref GitHubUsername
Repo: !Ref GitHubRepo
Branch: !Ref GitHubBranch
OAuthToken: !Ref GitHubOAuthToken
Name: Source
OutputArtifacts:
- Name: SourceArtifacts
- Name: Build
Actions:
- ActionTypeId:
Category: Build
Owner: AWS
Provider: CodeBuild
Version: 1
Configuration:
ProjectName: !Ref CodeBuildProjectName
Name: Build
InputArtifacts:
- Name: SourceArtifacts
OutputArtifacts:
- Name: BuildArtifacts
- Name: Deploy
Actions:
- ActionTypeId:
Category: Deploy
Owner: AWS
Provider: ECS
Version: 1
Configuration:
ClusterName: !Ref SampleCluster
ServiceName: !Ref SampleService
FileName: build.json
Name: deploy-to-ecs
InputArtifacts:
- Name: BuildArtifacts
CodePipelineRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- codepipeline.amazonaws.com
- codebuild.amazonaws.com
Action:
- sts:AssumeRole
Policies:
- PolicyName: code-pipeline-policy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- s3:GetObject
- s3:GetObjectVersion
- s3:GetBucketVersioning
- s3:PutObject
- s3:CreateBucket
- codedeploy:CreateDeployment
- codedeploy:GetApplicationRevision
- codedeploy:GetDeployment
- codedeploy:GetDeploymentConfig
- codedeploy:RegisterApplicationRevision
- codebuild:*
- elasticbeanstalk:CreateApplicationVersion
- elasticbeanstalk:DescribeApplicationVersions
- elasticbeanstalk:DescribeEnvironments
- elasticbeanstalk:DescribeEvents
- elasticbeanstalk:UpdateEnvironment
- autoscaling:DescribeAutoScalingGroups
- autoscaling:DescribeLaunchConfigurations
- autoscaling:DescribeScalingActivities
- autoscaling:ResumeProcesses
- autoscaling:SuspendProcesses
- cloudformation:GetTemplate
- cloudformation:DescribeStackResource
- cloudformation:DescribeStackResources
- cloudformation:DescribeStackEvents
- cloudformation:DescribeStacks
- cloudformation:UpdateStack
- ec2:DescribeInstances
- ec2:DescribeImages
- ec2:DescribeAddresses
- ec2:DescribeSubnets
- ec2:DescribeVpcs
- ec2:DescribeSecurityGroups
- ec2:DescribeKeyPairs
- elasticloadbalancing:DescribeLoadBalancers
- rds:DescribeDBInstances
- rds:DescribeOrderableDBInstanceOptions
- sns:ListSubscriptionsByTopic
- lambda:invokefunction
- lambda:listfunctions
- s3:ListBucket
- s3:GetBucketPolicy
- s3:GetObjectAcl
- s3:PutObjectAcl
- s3:DeleteObject
- ssm:GetParameters
- logs:*
- ecr:DescribeImages
- ecr:GetAuthorizationToken
- ecr:PutImage
- ecr:UploadLayerPart
- ecr:InitiateLayerUpload
- ecr:SetRepositoryPolicy
- ecr:CompleteLayerUpload
- ecr:BatchCheckLayerAvailability
- ecs:*
- iam:PassRole
Resource: '*'
This is just the S3 bucket that stores the code as it gets pulled from GitHub. You may have noticed that it doesn't have a name. S3 buckets have Miranda rights to being named. If you don't choose a name, one will be provided for you. I can count on one hand the number of times I've had to look at my SourceArtifactStore, and those times were always motivated by curiosity. That said, a naming convention for these kinds of buckets would certainly be a good idea. I just didn't want to endorse or suggest one. Feel free to make the choice that works best for you here.
Same as the SourceArtifactStore. This one I would recommend that you name, because debugging build output is almost always a worthwhile exercise. I didn't name it here mainly because any name I chose would break the template for anyone else who used it. I didn't put it in the parameters because it's just as easy for you to add a name yourself, and it's one less thing to cause analysis paralysis when you're filling out parameters.
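If you do want a predictable name, one option (a sketch; the convention is just a suggestion) is to derive it from the stack name, which keeps it unique per stack:

```yaml
BuildArtifactStore:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: !Sub "${AWS::StackName}-build-artifacts"
```

Keep in mind that bucket names must be globally unique and lowercase, so keep your stack name lowercase if you go this route.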
This is where things get interesting. This is the first thing we've done so far that actually moves and/or transforms code. Let's take a closer look.
CodeBuildProject:
Type: AWS::CodeBuild::Project
Properties:
Artifacts:
Type: CODEPIPELINE
Name: buildOutput
NamespaceType: BUILD_ID
Source:
Type: CODEPIPELINE
ServiceRole: !GetAtt CodePipelineRole.Arn
Environment:
Type: LINUX_CONTAINER #the only allowed type.
ComputeType: BUILD_GENERAL1_SMALL #https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-cli
Image: aws/codebuild/docker:17.09.0 #https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
Name: !Ref CodeBuildProjectName
So what is actually happening here? What we've created is a tiny little CodeBuild project. We're telling it we want its output sent down the CodePipeline, and we're harvesting the BuildId because we're going to use it when we tag our Docker image.
There is a ServiceRole here, too; we'll get to that later. Keep in mind that your build projects might be different from what I had foreseen. I selected the smallest kind available, which is BUILD_GENERAL1_SMALL. It has 3 GB of memory, 2 vCPUs, and 64 GB of disk, which should be sufficient for most build jobs. If you need a beefier resource, check out https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html, which lists the other, larger options you have.
Lastly, I selected the aws/codebuild/docker:17.09.0 image. This version supports multi-stage builds, and we need that: the whole point of using Docker in this project is that we can transform our code within a Docker container instead of needing a specialized build box. This is the magic that makes this project as simple and portable as it is.
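If you haven't used multi-stage builds before, here's a rough sketch for a Node front-end (the image names and commands are illustrative, not prescriptive): the first stage carries the whole build toolchain, and the second ships only the output.

```dockerfile
# Stage 1: build environment (the toolchain only lives here)
FROM node:8 AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build

# Stage 2: runtime image, containing only the built assets
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
```

The final image contains nothing from the builder stage except what was explicitly copied, which keeps it small and keeps compilers out of production.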
This is the cool CI/CD stuff we were talking about earlier. As you can see, there are three stages.
We are pulling code, transforming it and deploying it out to our service. As you can imagine all those actions require a fair number of permissions. I have a role built for you here with all of that work already done.
The Entire file: (which you can also reference here: s3://dan-ob-ecs-formation-us-east-1/ecs.yml)
Parameters:
CodePipelineName:
Type: String
GitHubOAuthToken:
Type: String
NoEcho: true
Description: Should have full repo permissions
GitHubRepo:
Type: String
Description: Your docker-ready GitHub repo.
GitHubBranch:
Type: String
Description: the branch you want this pipeline to watch
GitHubUsername:
Type: String
Description: your github username. If it's an org repo, use the organization name
CodeBuildProjectName:
Type: String
SecurityGroupId:
Type: String
Description: Security Group for your ECS cluster
SubnetIds:
Type: List<String>
Description: comma separated list of subnetIds (at least 2)
VpcId:
Type: String
ECRRepoName:
Type: String
Description: Name of container and the ECR Repo
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
-
Label:
default: Network Configuration
Parameters:
- VpcId
- SubnetIds
- SecurityGroupId
-
Label:
default: Source Control
Parameters:
- GitHubRepo
- GitHubUsername
- GitHubBranch
- GitHubOAuthToken
-
Label:
default: Continuous Integration and Deployment
Parameters:
- CodeBuildProjectName
- CodePipelineName
-
Label:
default: Elastic Container Service
Parameters:
- ECRRepoName
ParameterLabels:
VpcId:
default: Which VPC should this ECS Service be deployed to?
CodeBuildProjectName:
default: Name your CodeBuild Project
CodePipelineName:
default: Name your CodePipelineProject
GitHubRepo:
default: GitHub repo
GitHubUsername:
default: GitHub Username or Organization
GitHubOAuthToken:
default: GitHubOAuthToken
      ECRRepoName:
default: container repo and container name
Resources:
#################################################
# Codepipeline section
#################################################
SourceArtifactStore:
Type: AWS::S3::Bucket
BuildArtifactStore:
Type: AWS::S3::Bucket
CodeBuildProject:
Type: AWS::CodeBuild::Project
Properties:
Artifacts:
Type: CODEPIPELINE
Name: buildOutput
NamespaceType: BUILD_ID
Source:
Type: CODEPIPELINE
ServiceRole: !GetAtt CodePipelineRole.Arn
Environment:
Type: LINUX_CONTAINER #the only allowed type.
ComputeType: BUILD_GENERAL1_SMALL #https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-cli
Image: aws/codebuild/docker:17.09.0 #https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
Name: !Ref CodeBuildProjectName
CodePipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
ArtifactStore:
Type: S3
Location: !Ref SourceArtifactStore
Name: !Ref CodePipelineName
RestartExecutionOnUpdate: true #just a preference
RoleArn: !GetAtt CodePipelineRole.Arn
Stages:
- Name: Source
Actions:
- ActionTypeId:
Category: Source
Owner: ThirdParty
Provider: GitHub
Version: 1
Configuration:
Owner: !Ref GitHubUsername
Repo: !Ref GitHubRepo
Branch: !Ref GitHubBranch
OAuthToken: !Ref GitHubOAuthToken
Name: Source
OutputArtifacts:
- Name: SourceArtifacts
- Name: Build
Actions:
- ActionTypeId:
Category: Build
Owner: AWS
Provider: CodeBuild
Version: 1
Configuration:
ProjectName: !Ref CodeBuildProjectName
Name: Build
InputArtifacts:
- Name: SourceArtifacts
OutputArtifacts:
- Name: BuildArtifacts
- Name: Deploy
Actions:
- ActionTypeId:
Category: Deploy
Owner: AWS
Provider: ECS
Version: 1
Configuration:
ClusterName: !Ref SampleCluster
ServiceName: !Ref SampleService
FileName: build.json
Name: deploy-to-ecs
InputArtifacts:
- Name: BuildArtifacts
CodePipelineRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- codepipeline.amazonaws.com
- codebuild.amazonaws.com
Action:
- sts:AssumeRole
Policies:
- PolicyName: code-pipeline-policy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- s3:GetObject
- s3:GetObjectVersion
- s3:GetBucketVersioning
- s3:PutObject
- s3:CreateBucket
- codedeploy:CreateDeployment
- codedeploy:GetApplicationRevision
- codedeploy:GetDeployment
- codedeploy:GetDeploymentConfig
- codedeploy:RegisterApplicationRevision
- codebuild:*
- elasticbeanstalk:CreateApplicationVersion
- elasticbeanstalk:DescribeApplicationVersions
- elasticbeanstalk:DescribeEnvironments
- elasticbeanstalk:DescribeEvents
- elasticbeanstalk:UpdateEnvironment
- autoscaling:DescribeAutoScalingGroups
- autoscaling:DescribeLaunchConfigurations
- autoscaling:DescribeScalingActivities
- autoscaling:ResumeProcesses
- autoscaling:SuspendProcesses
- cloudformation:GetTemplate
- cloudformation:DescribeStackResource
- cloudformation:DescribeStackResources
- cloudformation:DescribeStackEvents
- cloudformation:DescribeStacks
- cloudformation:UpdateStack
- ec2:DescribeInstances
- ec2:DescribeImages
- ec2:DescribeAddresses
- ec2:DescribeSubnets
- ec2:DescribeVpcs
- ec2:DescribeSecurityGroups
- ec2:DescribeKeyPairs
- elasticloadbalancing:DescribeLoadBalancers
- rds:DescribeDBInstances
- rds:DescribeOrderableDBInstanceOptions
- sns:ListSubscriptionsByTopic
- lambda:invokefunction
- lambda:listfunctions
- s3:ListBucket
- s3:GetBucketPolicy
- s3:GetObjectAcl
- s3:PutObjectAcl
- s3:DeleteObject
- ssm:GetParameters
- logs:*
- ecr:DescribeImages
- ecr:GetAuthorizationToken
- ecr:PutImage
- ecr:UploadLayerPart
- ecr:InitiateLayerUpload
- ecr:SetRepositoryPolicy
- ecr:CompleteLayerUpload
- ecr:BatchCheckLayerAvailability
- ecs:*
- iam:PassRole
Resource: '*'
####################################
# ECS Section
####################################
Listener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- TargetGroupArn: !Ref TargetGroup
Type: forward
LoadBalancerArn: !Ref LoadBalancer
Port: 80
Protocol: HTTP
LoadBalancer:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: sample-ecs-load-balancer
Scheme: internet-facing
SecurityGroups:
- !Ref SecurityGroupId
Subnets:
- Fn::Select:
- 0
- !Ref SubnetIds
- Fn::Select:
- 1
- !Ref SubnetIds
Type: application
TargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
DependsOn: LoadBalancer
Properties:
VpcId: !Ref VpcId
Port: 80
Protocol: HTTP
Matcher:
HttpCode: 200-299
HealthCheckIntervalSeconds: 80
HealthCheckPath: "/"
HealthCheckProtocol: HTTP
HealthCheckTimeoutSeconds: 50
HealthyThresholdCount: 2
UnhealthyThresholdCount: 5
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: '60'
TargetType: ip
EcrRepository:
Type: AWS::ECR::Repository
Properties:
      RepositoryName: !Ref ECRRepoName
SampleCluster:
Type: AWS::ECS::Cluster
Properties:
ClusterName: SampleCluster
SampleService:
DependsOn:
- EcrRepository
- SampleTaskDefinition
- Listener
Type: AWS::ECS::Service
Properties:
Cluster: !Ref SampleCluster
DesiredCount: 0
HealthCheckGracePeriodSeconds: 30
LaunchType: FARGATE
LoadBalancers:
        - ContainerName: !Ref ECRRepoName
ContainerPort: 80
TargetGroupArn: !Ref TargetGroup
NetworkConfiguration:
AwsvpcConfiguration:
AssignPublicIp: ENABLED
SecurityGroups:
- !Ref SecurityGroupId
Subnets:
- Fn::Select:
- 0
- !Ref SubnetIds
- Fn::Select:
- 1
- !Ref SubnetIds
ServiceName: sample-service
TaskDefinition: !Ref SampleTaskDefinition
SampleTaskDefinition:
Type: AWS::ECS::TaskDefinition
DependsOn: EcrRepository
Properties:
ExecutionRoleArn: !Ref TaskExecutionRole
RequiresCompatibilities:
- FARGATE
NetworkMode: awsvpc
Cpu: 256
Memory: 0.5GB
ContainerDefinitions:
        - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${ECRRepoName}"
          Name: !Ref ECRRepoName
Memory: 512
PortMappings:
- ContainerPort: 80
HostPort: 80
Protocol: tcp
TaskExecutionRole:
Type: AWS::IAM::Role
Properties:
RoleName:
!Sub
- ${repoName}-taskExecutionRole
- repoName: !Ref ECRRepoName
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
                - ecs-tasks.amazonaws.com
Action:
- sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
ContainerAgentRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- ecs.amazonaws.com
Action:
- sts:AssumeRole
Policies:
- PolicyName: code-pipeline-policy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- elasticloadbalancing:DescribeListeners
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTargetGroups
- elasticloadbalancing:DescribeTargetHealth
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeTargetGroupAttributes
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateRule
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:RegisterTargets
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:ModifyListener
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyRule
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:SetIpAddressType
- elasticloadbalancing:SetSecurityGroups
- elasticloadbalancing:SetRulePriorities
- elasticloadbalancing:SetSubnets
Resource: '*'
I've got you covered here, too. Check this out.
version: 0.2
phases:
pre_build:
commands:
- $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
- CONTAINER_NAME="YOUR_CONTAINER_NAME"
- TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)
build:
commands:
- ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')
- docker build -t $CONTAINER_NAME:latest .
- docker tag $CONTAINER_NAME:latest $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$CONTAINER_NAME:$TAG
post_build:
commands:
- docker push $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$CONTAINER_NAME:$TAG
- printf '[{"name":"%s","imageUri":"%s:%s"}]' $CONTAINER_NAME $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$CONTAINER_NAME $TAG > build.json
artifacts:
files:
- build.json
discard-paths: yes
Let me walk you through this. This is the "build" section of the CodePipeline. This is where the artifacts actually get created from whatever your source code looked like when it left your dev's hands.
AWS CodeBuild projects look for a buildspec.yml file at the root of the source code directory. If you want to leverage the one I've provided here, just add it to the root of your project as buildspec.yml and you should be ready to go.
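The build.json that the post_build printf produces is the piece the ECS deploy action consumes: a JSON array mapping container names to image URIs. Here's that same printf with hypothetical values plugged in (a real build gets them from the pre_build and build phases):

```shell
# Hypothetical values; in CodeBuild these come from the pre_build/build phases
CONTAINER_NAME="actionbotui"
ACCOUNT_ID="123456789012"
AWS_DEFAULT_REGION="us-east-1"
TAG="3f7c2b9e"

# Same printf as the buildspec's post_build phase
printf '[{"name":"%s","imageUri":"%s:%s"}]' \
  "$CONTAINER_NAME" \
  "$ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$CONTAINER_NAME" \
  "$TAG"
# → [{"name":"actionbotui","imageUri":"123456789012.dkr.ecr.us-east-1.amazonaws.com/actionbotui:3f7c2b9e"}]
```

The "name" field has to match the container Name in your TaskDefinition, or the deploy stage won't know which container to update.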
Between this CloudFormation script and the buildspec.yml, you should have everything you need to get you started on leveraging ECS for your Docker projects. I say "get you started" because this is just the beginning. There is a lot we haven't talked about here in terms of unit testing, demo environments, blue-green deploys and much more.