Observian - All Things Cloud. Serverless. DevOps. Infrastructure as Code.

How do I deploy my Docker stuff??

Written by Dan Taylor | May 29, 2018 2:35:00 PM

Docker Deployment

DevOps is harder than it sounds

If you're like me, when you learned about containers, you got SUPER excited. There is something about wrapping up your code in a nice little package where it has everything it needs to run in a happy little space where it's nice and warm. But how am I supposed to get my happy, warm little container out into the real world where it can actually do some good? That's a harder question to answer.

I blame the culture, honestly

For the longest time, DevOps and Product Development teams were separated by a big ol' wall: a wall of specialization. As with most walls, the people on either side of it start to do things without telling the people on the opposing side, and pretty soon you have both sides driving different agendas and arriving at different goals. This makes it too easy to toss your code over the wall to the DevOps team and "they'll get to it when they get to it."

I'm not a big fan of setting up software shops using tautologies, so I thought that there must be a better way.

Continuous Integration and Deployment

The end goal is to set it up so that when I check in code, it auto-builds and auto-deploys. That's the dream, right? That concept has a name: Continuous Integration and Deployment, or CI/CD. Making that work across lots of languages, products, and stacks is a lot harder than it sounds.

Happily, Docker has made this pretty easy for us. It's easy to bundle a container and let it run on a Docker host. The problem is setting up an environment where those wheels are already greased and ready to go.

Here is one that is already built

I've noticed that Docker has some adoption pain. At my last job, it was really difficult to build a container and a CI/CD environment. I know this is a common problem, so I built a CloudFormation script to help you. Don't worry, I'm going to walk you through how it works.

Basically, we need four things to get started:
  1. A GitHub repo
  2. An AWS Account
  3. A VPC
  4. 2 subnets

To get all of that stuff into a place where we can use it, we need to utilize Amazon's Elastic Container Service (ECS), so we need to build a couple of things to leverage it.

  1. An ECR repository (ECR is the Elastic Container Registry: like a Docker repo, but AWS flavor)
  2. An ECS Cluster: this is just a "logical grouping of tasks and services," according to AWS. If your app has many micro-services, they'll probably all live on one cluster.
  3. An ECS Service (think Docker Swarm). This is where the scaling happens.
  4. An ECS Task Definition. This is the part where you actually define the container and all the run parameters. If you were remoting into the hardware that was running the container, the Task Definition is what would be sending the command-line parameters. You can define everything from the image to run ports to environment variables here.

We also need to set something up that can take our code, transform it (think minification, transpiling, compiling), put it into a container, then use the new container to update our TaskDefinition.

Here is the rub!

All of this is a pain in the ass to build manually! You can stumble around the AWS console for weeks, and still not feel like you have a firm grasp on what you're actually setting up. Not only that, but once you do successfully build it, depending on how good you are at taking notes, you might not actually remember what you did.

This is what makes CloudFormation so powerful. You can build an environment and then repeat it as many times as you need to.

This particular advantage has great power, because you can use it to set up all of your microservices. It's been set up in such a way that all you have to do is give the script access to your GitHub account, tell it which subnets and security groups to use, and it will auto-build all your stuff. Let me show you what I mean.
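If you'd rather launch the stack from the command line than the console, the AWS CLI can do it. Here's a sketch; the stack name, template path, and every parameter value are placeholders you'd swap for your own:

```shell
# Launch the pipeline stack. CAPABILITY_NAMED_IAM is required because the
# template creates a named IAM role.
aws cloudformation create-stack \
  --stack-name my-ecs-pipeline \
  --template-body file://ecs.yml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters \
    ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0 \
    ParameterKey=SubnetIds,ParameterValue='subnet-aaaa1111\,subnet-bbbb2222' \
    ParameterKey=SecurityGroupId,ParameterValue=sg-0123456789abcdef0 \
    ParameterKey=GitHubUsername,ParameterValue=my-github-user \
    ParameterKey=GitHubRepo,ParameterValue=my-docker-app \
    ParameterKey=GitHubBranch,ParameterValue=master \
    ParameterKey=GitHubOAuthToken,ParameterValue=xxxx \
    ParameterKey=CodeBuildProjectName,ParameterValue=my-docker-app-build \
    ParameterKey=CodePipelineName,ParameterValue=my-docker-app-pipeline \
    ParameterKey=ECRRepoName,ParameterValue=my-docker-app
```

Note the escaped commas in the SubnetIds value: that's how the CLI passes a list-typed parameter as a single value.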

Let's get started.

Let's start with our ECS resources.

ECS

Listener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - TargetGroupArn: !Ref TargetGroup
        Type: forward
    LoadBalancerArn: !Ref LoadBalancer
    Port: 80
    Protocol: HTTP
LoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Name: sample-ecs-load-balancer
    Scheme: internet-facing
    SecurityGroups:
      - !Ref SecurityGroupId
    Subnets:
      - Fn::Select:
          - 0
          - !Ref SubnetIds
      - Fn::Select:
          - 1
          - !Ref SubnetIds
    Type: application
TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  DependsOn: LoadBalancer
  Properties:
    VpcId: !Ref VpcId
    Port: 80
    Protocol: HTTP
    Matcher:
      HttpCode: 200-299
    HealthCheckIntervalSeconds: 80
    HealthCheckPath: "/"
    HealthCheckProtocol: HTTP
    HealthCheckTimeoutSeconds: 50
    HealthyThresholdCount: 2
    UnhealthyThresholdCount: 5
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: '60'
    TargetType: ip
EcrRepository:
  Type: AWS::ECR::Repository
  Properties:
    RepositoryName: actionbotui
SampleCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: SampleCluster
SampleService:
  DependsOn:
    - EcrRepository
    - SampleTaskDefinition
    - Listener
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref SampleCluster
    DesiredCount: 0
    HealthCheckGracePeriodSeconds: 30
    LaunchType: FARGATE
    LoadBalancers:
      - ContainerName: actionbotui
        ContainerPort: 80
        TargetGroupArn: !Ref TargetGroup
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: ENABLED
        SecurityGroups:
          - !Ref SecurityGroupId
        Subnets:
          - Fn::Select:
              - 0
              - !Ref SubnetIds
          - Fn::Select:
              - 1
              - !Ref SubnetIds
    ServiceName: sample-service
    TaskDefinition: !Ref SampleTaskDefinition
SampleTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  DependsOn: EcrRepository
  Properties:
    ExecutionRoleArn: !Ref TaskExecutionRole
    RequiresCompatibilities:
      - FARGATE
    NetworkMode: awsvpc
    Cpu: 256
    Memory: 0.5GB
    ContainerDefinitions:
      - Image:
          !Sub
            - "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${repoName}"
            - repoName: !Ref ECRRepoName
        Name: actionbotui
        Memory: 512
        PortMappings:
          - ContainerPort: 80
            HostPort: 80
            Protocol: tcp
TaskExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName:
      !Sub
        - ${repoName}-taskExecutionRole
        - repoName: !Ref ECRRepoName
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - ecs-tasks.amazonaws.com
          Action:
            - sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
ContainerAgentRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - ecs.amazonaws.com
          Action:
            - sts:AssumeRole
    Policies:
      - PolicyName: code-pipeline-policy
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - elasticloadbalancing:DescribeListeners
                - elasticloadbalancing:DescribeLoadBalancers
                - elasticloadbalancing:DescribeTargetGroups
                - elasticloadbalancing:DescribeTargetHealth
                - elasticloadbalancing:DescribeLoadBalancerAttributes
                - elasticloadbalancing:DescribeTargetGroupAttributes
                - elasticloadbalancing:CreateListener
                - elasticloadbalancing:CreateRule
                - elasticloadbalancing:CreateTargetGroup
                - elasticloadbalancing:RegisterTargets
                - elasticloadbalancing:DeregisterTargets
                - elasticloadbalancing:ModifyListener
                - elasticloadbalancing:ModifyLoadBalancerAttributes
                - elasticloadbalancing:ModifyRule
                - elasticloadbalancing:ModifyTargetGroup
                - elasticloadbalancing:ModifyTargetGroupAttributes
                - elasticloadbalancing:SetIpAddressType
                - elasticloadbalancing:SetSecurityGroups
                - elasticloadbalancing:SetRulePriorities
                - elasticloadbalancing:SetSubnets
              Resource: '*'

I know this seems like a lot, but I'm going to walk you through it.

Listener

This is a logical resource that sits between a TargetGroup and an Elastic Load Balancer. We're using an Application Load Balancer in this example so we can put it in front of our containers. The listener is what routes traffic from the ALB endpoint to the TargetGroup.

TargetGroup

This is where the IP addresses get registered for forwarding the request from the Load Balancer to the actual container.

LoadBalancer

This should be relatively self-explanatory. In this example we're using an ALB as opposed to a Classic Load Balancer.

ECRRepository

This is like a repo on DockerHub. The big difference here is that you're only allowed to store 1,000 versions of your image in this repo. That is a soft limit; if you want to store more versions than that, you can request a service limit increase. You could easily run into that limit if you're heavily relying on Docker's tagging feature. There are a lot of examples out there on DockerHub of organizations that do this. If you have multiple products under the same container name, but with different tags, that will come into play.

In this example, however, we are just using the last eight characters of our Git SHA as the tag, so CodePipeline knows what to push as "latest" out to our service. For this example, 1,000 should be more than enough.
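If you're wondering where that tag comes from, CodeBuild exposes the full commit SHA in the CODEBUILD_RESOLVED_SOURCE_VERSION environment variable, and the buildspec just slices it. A minimal sketch (the SHA below is a made-up stand-in so the snippet runs anywhere):

```shell
# Stand-in for the variable CodeBuild sets on every build.
CODEBUILD_RESOLVED_SOURCE_VERSION="3f786850e387550fdab836ed7e6dc881de23001b"

# Take the last eight characters of the SHA as the image tag.
IMAGE_TAG=$(printf '%s' "$CODEBUILD_RESOLVED_SOURCE_VERSION" | tail -c 8)
echo "$IMAGE_TAG"   # de23001b
```

You'd then tag the image as both $IMAGE_TAG and latest before pushing to ECR.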

Sample Cluster

This is the cluster where this microservice will run. If you have other items that need to be part of your app, and they can be logically grouped, it would be a good idea to place them in the same cluster.

Sample Service

This is the part that scales. You can adjust the desired, minimum, and maximum number of running tasks. In the script, the desired count is set to zero because of the way the dependencies line up: we can't make the service depend on a Task Definition that isn't built yet, and the Task Definition can't be "done" until the artifact is ready to run. Since we are providing code as an input to this whole process, and not artifacts, setting the DesiredCount to anything greater than 0 would cause a circular dependency.
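Once the pipeline has pushed a first image and updated the Task Definition, you bump the count yourself. A sketch using the AWS CLI, with the cluster and service names from this template:

```shell
# Scale the service up once an image actually exists in ECR.
aws ecs update-service \
  --cluster SampleCluster \
  --service sample-service \
  --desired-count 1
```

You can do the same thing from the ECS console by editing the service.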

TaskDefinition

Here is our container! Take note that the defaults for the container port and the listener port are both 80. If you need something other than port 80, you can change it here. I thought about making this a parameter instead, but that felt like too specific a question to ask at CloudFormation time. I'm not married to that idea, though; I can be convinced to make it a parameter to the CloudFormation script, or you can just modify the script when you run it. So many possibilities. Imagine.
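For what it's worth, making it a parameter would look something like this. This is a sketch, and the ContainerPort parameter is hypothetical, not part of the template above:

```yaml
Parameters:
  ContainerPort:
    Type: Number
    Default: 80
    Description: The port your container listens on

# ...and in the Task Definition, reference it instead of hard-coding 80:
      PortMappings:
        - ContainerPort: !Ref ContainerPort
          HostPort: !Ref ContainerPort
          Protocol: tcp
```

Remember that the Listener, the TargetGroup, and the service's LoadBalancers section also mention port 80, so you'd thread the same parameter through those as well.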

TaskExecutionRole

This is literally just naming a role and giving it the managed policy of AmazonECSTaskExecutionRolePolicy.

ContainerAgentRole

This is the role that your container runs with. The permissions it needs mostly have to do with the Elastic Load Balancer: it needs to be able to register itself as a target and accept traffic, etc.

CodePipeline

Ok, here is the CodePipeline/CodeBuild section.

Resources:
  SourceArtifactStore:
    Type: AWS::S3::Bucket
  BuildArtifactStore:
    Type: AWS::S3::Bucket
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
        Name: buildOutput
        NamespaceType: BUILD_ID
      Source:
        Type: CODEPIPELINE
      ServiceRole: !GetAtt CodePipelineRole.Arn
      Environment:
        Type: LINUX_CONTAINER #the only allowed type.
        ComputeType: BUILD_GENERAL1_SMALL #https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-cli
        Image: aws/codebuild/docker:17.09.0 #https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
      Name: !Ref CodeBuildProjectName
  CodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Type: S3
        Location: !Ref SourceArtifactStore
      Name: !Ref CodePipelineName
      RestartExecutionOnUpdate: true #just a preference
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
        - Name: Source 
          Actions:
            - ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Provider: GitHub
                Version: 1
              Configuration:
                Owner: !Ref GitHubUsername
                Repo: !Ref GitHubRepo
                Branch: !Ref GitHubBranch
                OAuthToken: !Ref GitHubOAuthToken
              Name: Source
              OutputArtifacts: 
                - Name: SourceArtifacts
        - Name: Build
          Actions:
            - ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName: !Ref CodeBuildProjectName
              Name: Build
              InputArtifacts: 
                - Name: SourceArtifacts
              OutputArtifacts:
                - Name: BuildArtifacts
        - Name: Deploy
          Actions:
            - ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: ECS
                Version: 1
              Configuration:
                ClusterName: !Ref SampleCluster
                ServiceName: !Ref SampleService
                FileName: build.json
              Name: deploy-to-ecs
              InputArtifacts:
                - Name: BuildArtifacts
  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - codepipeline.amazonaws.com
                - codebuild.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: code-pipeline-policy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:GetBucketVersioning
                  - s3:PutObject
                  - s3:CreateBucket
                  - codedeploy:CreateDeployment
                  - codedeploy:GetApplicationRevision
                  - codedeploy:GetDeployment
                  - codedeploy:GetDeploymentConfig
                  - codedeploy:RegisterApplicationRevision
                  - codebuild:*
                  - elasticbeanstalk:CreateApplicationVersion
                  - elasticbeanstalk:DescribeApplicationVersions
                  - elasticbeanstalk:DescribeEnvironments
                  - elasticbeanstalk:DescribeEvents
                  - elasticbeanstalk:UpdateEnvironment
                  - autoscaling:DescribeAutoScalingGroups
                  - autoscaling:DescribeLaunchConfigurations
                  - autoscaling:DescribeScalingActivities
                  - autoscaling:ResumeProcesses
                  - autoscaling:SuspendProcesses
                  - cloudformation:GetTemplate
                  - cloudformation:DescribeStackResource
                  - cloudformation:DescribeStackResources
                  - cloudformation:DescribeStackEvents
                  - cloudformation:DescribeStacks
                  - cloudformation:UpdateStack
                  - ec2:DescribeInstances
                  - ec2:DescribeImages
                  - ec2:DescribeAddresses
                  - ec2:DescribeSubnets
                  - ec2:DescribeVpcs
                  - ec2:DescribeSecurityGroups
                  - ec2:DescribeKeyPairs
                  - elasticloadbalancing:DescribeLoadBalancers
                  - rds:DescribeDBInstances
                  - rds:DescribeOrderableDBInstanceOptions
                  - sns:ListSubscriptionsByTopic
                  - lambda:invokefunction
                  - lambda:listfunctions
                  - s3:ListBucket
                  - s3:GetBucketPolicy
                  - s3:GetObjectAcl
                  - s3:PutObjectAcl
                  - s3:DeleteObject
                  - ssm:GetParameters
                  - logs:*
                  - ecr:DescribeImages
                  - ecr:GetAuthorizationToken
                  - ecr:PutImage
                  - ecr:UploadLayerPart
                  - ecr:InitiateLayerUpload
                  - ecr:SetRepositoryPolicy
                  - ecr:CompleteLayerUpload
                  - ecr:BatchCheckLayerAvailability
                  - ecs:*
                  - iam:PassRole
                Resource: '*'

SourceArtifactStore

This is just the S3 bucket that stores the code as it gets pulled from GitHub. You may have noticed that it doesn't have a name. S3 buckets have Miranda rights to being named. If you don't choose a name, one will be provided for you. I can count on one hand the number of times I've had to look at my SourceArtifactStore, and those times were always motivated by curiosity. That said, a naming convention for these kinds of buckets would certainly be a good idea. I just didn't want to endorse or suggest one. Feel free to make the choice that works best for you here.

BuildArtifactStore

Same as the SourceArtifactStore, except this one I would recommend that you name; debugging build output is almost always a worthwhile exercise. I didn't name it here mainly because any name I chose would break the template for anyone else who used it. I didn't put it in the parameters because it's just as easy for you to put a name in, and it doesn't give you analysis paralysis when you're filling out parameters.

CodeBuildProject

This is where things get interesting. This is the first thing we've done so far that actually moves and/or transforms code. Let's take a closer look.

 CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
        Name: buildOutput
        NamespaceType: BUILD_ID
      Source:
        Type: CODEPIPELINE
      ServiceRole: !GetAtt CodePipelineRole.Arn
      Environment:
        Type: LINUX_CONTAINER #the only allowed type.
        ComputeType: BUILD_GENERAL1_SMALL #https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-cli
        Image: aws/codebuild/docker:17.09.0 #https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
      Name: !Ref CodeBuildProjectName
Whaaaaat??

So what is actually happening here? What we've actually created is a tiny little CodeBuild project. We're telling it we want its output sent down the CodePipeline, and we're harvesting the BuildId because we're going to use it when we tag our Docker image.

There is a ServiceRole here, too; we'll get to that later. Keep in mind that your build projects might be different from what I had foreseen. I selected the smallest kind available, which is BUILD_GENERAL1_SMALL. It has 3GB of memory, 2 vCPUs, and 64GB of disk. This should be sufficient for most build jobs. If you need a beefier resource, check out https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html which lists the two other, larger options you have.

Lastly, I selected the aws/codebuild/docker:17.09.0 image. This version supports multi-stage builds, and we need that: the whole point of using Docker in this project is that we can transform our code within a Docker container instead of needing a specialized build box. This is the magic that makes this project as simple and portable as it is.
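If you haven't seen a multi-stage build, here's a sketch for a hypothetical Node app (the image names and commands are illustrative): the first stage has the whole toolchain, and only the built assets make it into the final image.

```dockerfile
# Stage 1: build inside a container that has the full toolchain.
FROM node:8 AS build
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Stage 2: ship only the built assets in a slim runtime image.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

The runtime image never sees npm or your source tree, which keeps it small and keeps your build dependencies out of production.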

CodePipelineProject

This is the cool CI/CD stuff we were talking about earlier. As you can see, there are three stages:

  • Source
  • Build
  • Deploy

We are pulling code, transforming it, and deploying it out to our service. As you can imagine, all those actions require a fair number of permissions. I have a role built for you here with all of that work already done.

The Entire file: (which you can also reference here: s3://dan-ob-ecs-formation-us-east-1/ecs.yml)

Parameters:
  CodePipelineName:
    Type: String
  GitHubOAuthToken:
    Type: String
    NoEcho: true
    Description: Should have full repo permissions
  GitHubRepo:
    Type: String
    Description: Your docker-ready GitHub repo.
  GitHubBranch:
    Type: String
    Description: the branch you want this pipeline to watch
  GitHubUsername:
    Type: String
    Description: your github username.  If it's an org repo, use the organization name
  CodeBuildProjectName:
    Type: String
  SecurityGroupId:
    Type: String
    Description: Security Group for your ECS cluster
  SubnetIds:
    Type: List<String>
    Description: comma separated list of subnetIds (at least 2)
  VpcId:
    Type: String
  ECRRepoName:
    Type: String
    Description: Name of container and the ECR Repo
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      -
        Label:
          default: Network Configuration
        Parameters:
          - VpcId
          - SubnetIds
          - SecurityGroupId
      -
        Label:
          default: Source Control
        Parameters:
          - GitHubRepo
          - GitHubUsername
          - GitHubBranch
          - GitHubOAuthToken
      -
        Label:
          default: Continuous Integration and Deployment
        Parameters:
          - CodeBuildProjectName
          - CodePipelineName
      -
        Label:
          default: Elastic Container Service
        Parameters:
          - ECRRepoName
    ParameterLabels:
      VpcId: 
        default: Which VPC should this ECS Service be deployed to?
      CodeBuildProjectName:
        default: Name your CodeBuild Project
      CodePipelineName:
        default: Name your CodePipelineProject
      GitHubRepo:
        default: GitHub repo
      GitHubUsername:
        default: GitHub Username or Organization
      GitHubOAuthToken:
        default: GitHubOAuthToken
      ECRRepoName:
        default: container repo and container name

Resources:
  #################################################
  # Codepipeline section
  #################################################
  SourceArtifactStore:
    Type: AWS::S3::Bucket
  BuildArtifactStore:
    Type: AWS::S3::Bucket
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
        Name: buildOutput
        NamespaceType: BUILD_ID
      Source:
        Type: CODEPIPELINE
      ServiceRole: !GetAtt CodePipelineRole.Arn
      Environment:
        Type: LINUX_CONTAINER #the only allowed type.
        ComputeType: BUILD_GENERAL1_SMALL #https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-cli
        Image: aws/codebuild/docker:17.09.0 #https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
      Name: !Ref CodeBuildProjectName
  CodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Type: S3
        Location: !Ref SourceArtifactStore
      Name: !Ref CodePipelineName
      RestartExecutionOnUpdate: true #just a preference
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
        - Name: Source 
          Actions:
            - ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Provider: GitHub
                Version: 1
              Configuration:
                Owner: !Ref GitHubUsername
                Repo: !Ref GitHubRepo
                Branch: !Ref GitHubBranch
                OAuthToken: !Ref GitHubOAuthToken
              Name: Source
              OutputArtifacts: 
                - Name: SourceArtifacts
        - Name: Build
          Actions:
            - ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName: !Ref CodeBuildProjectName
              Name: Build
              InputArtifacts: 
                - Name: SourceArtifacts
              OutputArtifacts:
                - Name: BuildArtifacts
        - Name: Deploy
          Actions:
            - ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: ECS
                Version: 1
              Configuration:
                ClusterName: !Ref SampleCluster
                ServiceName: !Ref SampleService
                FileName: build.json
              Name: deploy-to-ecs
              InputArtifacts:
                - Name: BuildArtifacts
  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - codepipeline.amazonaws.com
                - codebuild.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: code-pipeline-policy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:GetBucketVersioning
                  - s3:PutObject
                  - s3:CreateBucket
                  - codedeploy:CreateDeployment
                  - codedeploy:GetApplicationRevision
                  - codedeploy:GetDeployment
                  - codedeploy:GetDeploymentConfig
                  - codedeploy:RegisterApplicationRevision
                  - codebuild:*
                  - elasticbeanstalk:CreateApplicationVersion
                  - elasticbeanstalk:DescribeApplicationVersions
                  - elasticbeanstalk:DescribeEnvironments
                  - elasticbeanstalk:DescribeEvents
                  - elasticbeanstalk:UpdateEnvironment
                  - autoscaling:DescribeAutoScalingGroups
                  - autoscaling:DescribeLaunchConfigurations
                  - autoscaling:DescribeScalingActivities
                  - autoscaling:ResumeProcesses
                  - autoscaling:SuspendProcesses
                  - cloudformation:GetTemplate
                  - cloudformation:DescribeStackResource
                  - cloudformation:DescribeStackResources
                  - cloudformation:DescribeStackEvents
                  - cloudformation:DescribeStacks
                  - cloudformation:UpdateStack
                  - ec2:DescribeInstances
                  - ec2:DescribeImages
                  - ec2:DescribeAddresses
                  - ec2:DescribeSubnets
                  - ec2:DescribeVpcs
                  - ec2:DescribeSecurityGroups
                  - ec2:DescribeKeyPairs
                  - elasticloadbalancing:DescribeLoadBalancers
                  - rds:DescribeDBInstances
                  - rds:DescribeOrderableDBInstanceOptions
                  - sns:ListSubscriptionsByTopic
                  - lambda:invokefunction
                  - lambda:listfunctions
                  - s3:ListBucket
                  - s3:GetBucketPolicy
                  - s3:GetObjectAcl
                  - s3:PutObjectAcl
                  - s3:DeleteObject
                  - ssm:GetParameters
                  - logs:*
                  - ecr:DescribeImages
                  - ecr:GetAuthorizationToken
                  - ecr:PutImage
                  - ecr:UploadLayerPart
                  - ecr:InitiateLayerUpload
                  - ecr:SetRepositoryPolicy
                  - ecr:CompleteLayerUpload
                  - ecr:BatchCheckLayerAvailability
                  - ecs:*
                  - iam:PassRole
                Resource: '*'
  ####################################
  # ECS Section
  ####################################
  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
        - TargetGroupArn: !Ref TargetGroup
          Type: forward
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: sample-ecs-load-balancer 
      Scheme: internet-facing
      SecurityGroups:
        - !Ref SecurityGroupId
      Subnets:
        - Fn::Select:
            - 0
            - !Ref SubnetIds
        - Fn::Select:
            - 1
            - !Ref SubnetIds
      Type: application
  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    DependsOn: LoadBalancer
    Properties:
      VpcId: !Ref VpcId
      Port: 80
      Protocol: HTTP
      Matcher: 
        HttpCode: 200-299
      HealthCheckIntervalSeconds: 80
      HealthCheckPath: "/"
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 50
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 5
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: '60'
      TargetType: ip
  EcrRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: actionbotui
  SampleCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: SampleCluster
  SampleService:
    DependsOn: 
      - EcrRepository
      - SampleTaskDefinition
      - Listener
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref SampleCluster
      DesiredCount: 0
      HealthCheckGracePeriodSeconds: 30
      LaunchType: FARGATE
      LoadBalancers:
        - ContainerName: actionbotui
          ContainerPort: 80
          TargetGroupArn: !Ref TargetGroup
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          SecurityGroups:
            - !Ref SecurityGroupId
          Subnets:
            - Fn::Select:
                - 0
                - !Ref SubnetIds
            - Fn::Select:
                - 1
                - !Ref SubnetIds
      ServiceName: sample-service
      TaskDefinition: !Ref SampleTaskDefinition
  SampleTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    DependsOn: EcrRepository
    Properties:
      ExecutionRoleArn: !Ref TaskExecutionRole
      RequiresCompatibilities:
        - FARGATE
      NetworkMode: awsvpc
      Cpu: 256
      Memory: 0.5GB
      ContainerDefinitions:
        - Image:
            !Sub
              - "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${repoName}"
              - repoName: !Ref ECRRepoName
          Name: actionbotui
          Memory: 512
          PortMappings:
            - ContainerPort: 80
              HostPort: 80
              Protocol: tcp
  TaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName:
        !Sub
          - ${repoName}-taskExecutionRole
          - repoName: !Ref ECRRepoName
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
          - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
  ContainerAgentRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: code-pipeline-policy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - elasticloadbalancing:DescribeListeners
                  - elasticloadbalancing:DescribeLoadBalancers
                  - elasticloadbalancing:DescribeTargetGroups
                  - elasticloadbalancing:DescribeTargetHealth
                  - elasticloadbalancing:DescribeLoadBalancerAttributes
                  - elasticloadbalancing:DescribeTargetGroupAttributes
                  - elasticloadbalancing:CreateListener
                  - elasticloadbalancing:CreateRule
                  - elasticloadbalancing:CreateTargetGroup
                  - elasticloadbalancing:RegisterTargets
                  - elasticloadbalancing:DeregisterTargets
                  - elasticloadbalancing:ModifyListener
                  - elasticloadbalancing:ModifyLoadBalancerAttributes
                  - elasticloadbalancing:ModifyRule
                  - elasticloadbalancing:ModifyTargetGroup
                  - elasticloadbalancing:ModifyTargetGroupAttributes
                  - elasticloadbalancing:SetIpAddressType
                  - elasticloadbalancing:SetSecurityGroups
                  - elasticloadbalancing:SetRulePriorities
                  - elasticloadbalancing:SetSubnets
                Resource: '*' 
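Once you've saved the template, here's roughly how you'd launch the stack. This is a sketch, not gospel: the stack name, file name, and parameter values (ECRRepoName, SecurityGroupId, SubnetIds) are placeholders you'll swap for your own, and the command is echoed instead of executed so you can review it first.

```shell
# A sketch of launching this stack (all values here are placeholders).
# CAPABILITY_NAMED_IAM is needed because the template assigns RoleName itself.
STACK_NAME="docker-cicd"
PARAM_OVERRIDES="ECRRepoName=actionbotui SecurityGroupId=sg-0123abcd SubnetIds=subnet-aaa1,subnet-bbb2"

# Echoed rather than executed so you can review the command first
echo aws cloudformation deploy \
  --stack-name "$STACK_NAME" \
  --template-file template.yml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides $PARAM_OVERRIDES
```

Drop the `echo` when you're happy with it. If you forget the `--capabilities` flag, CloudFormation will refuse to create the named IAM roles.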

 

Now wait just a second, Dan. I saw what you did. What is with that build.json in the Deploy step of the CodePipeline?

I've got you covered here, too.  Check this out.

version: 0.2
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - CONTAINER_NAME="YOUR_CONTAINER_NAME"
      - TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)
  build:
    commands:
      - ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')
      - docker build -t $CONTAINER_NAME:latest .
      - docker tag $CONTAINER_NAME:latest $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$CONTAINER_NAME:$TAG
  post_build:
    commands:
      - docker push $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$CONTAINER_NAME:$TAG
      - printf '[{"name":"%s","imageUri":"%s:%s"}]' $CONTAINER_NAME $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$CONTAINER_NAME $TAG > build.json
artifacts:
  files:
    - build.json
  discard-paths: yes


Let me walk you through this. This is the "build" section of the CodePipeline, where the artifacts actually get created from whatever your source code looked like when it left your dev's hands.

 

pre_build

This is where you set up all the stuff that you're going to reference for the rest of the build.
  • the first command logs you into your Elastic Container Registry (ECR). Take note that it logs in using whatever your default region is. If you want a specific region instead, replace the $AWS_DEFAULT_REGION variable with that region's name
  • CONTAINER_NAME should match the container name you chose in the CloudFormation template above (actionbotui in this example)
  • the TAG variable here is the first 8 characters of your commit SHA from git (that's what head -c 8 grabs). This is what we're going to tag our images with. That way, we can roll back to a specific commit (manually) if we have to.
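If you want to see what that tag looks like outside of CodeBuild, here's a quick sketch with a made-up commit SHA standing in for the $CODEBUILD_RESOLVED_SOURCE_VERSION variable that CodeBuild sets for you:

```shell
# Hypothetical commit SHA; in a real build, CodeBuild sets this variable
CODEBUILD_RESOLVED_SOURCE_VERSION="a1b2c3d4e5f67890aabbccddeeff001122334455"

# Same command as the buildspec: keep the first 8 characters
TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)
echo "$TAG"   # a1b2c3d4
```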

build

  • ACCOUNT_ID is what makes this template portable across accounts. A lot of people hard-code this value to their own account, but that doesn't work if you have a multi-account strategy and want to use the same buildspec.yml (or whatever you name your file) everywhere
  • docker build -t, one of my favorite CLI commands. This is where the magic happens, and where your artifacts actually get built.
  • the docker tag command is where we actually use our git SHA to tag the image.
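Putting those pieces together, the local image name and its remote ECR name line up like this (the account ID, region, and tag values below are made up for illustration):

```shell
CONTAINER_NAME="actionbotui"    # hypothetical; match your CloudFormation template
ACCOUNT_ID="123456789012"       # hypothetical account ID
AWS_DEFAULT_REGION="us-west-2"  # hypothetical region
TAG="a1b2c3d4"                  # first 8 chars of a commit SHA

LOCAL_IMAGE="$CONTAINER_NAME:latest"
REMOTE_IMAGE="$ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$CONTAINER_NAME:$TAG"

# This is the mapping that `docker tag $LOCAL_IMAGE $REMOTE_IMAGE` creates
echo "$LOCAL_IMAGE -> $REMOTE_IMAGE"
```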

post_build

  • docker push pushes our freshly built image out to our ECR. Take note of the variables used for portability here. If you wanted to hard-code those for specificity you could. But again, multi-account strategies are a thing.
  • WTF is this printf line? This is the rub that can be difficult to wrap your mind around. Since we are using an ECS deployment, ECS needs to know the container name and tag of the image that needs to be deployed. The way AWS has decided to do that is to require a JSON file that contains this information. This printf command builds that file, and the artifacts section adds it to the outgoing artifacts of this pipeline step so the deployment step can reference it.
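To demystify that printf, here it is run with placeholder values (same ones as the earlier sketches). The resulting build.json is the little JSON array the ECS deploy action reads:

```shell
# Placeholder values mirroring the earlier build steps
CONTAINER_NAME="actionbotui"
ACCOUNT_ID="123456789012"
AWS_DEFAULT_REGION="us-west-2"
TAG="a1b2c3d4"

# Same printf as the buildspec: a one-element JSON array of name/imageUri
printf '[{"name":"%s","imageUri":"%s:%s"}]' $CONTAINER_NAME \
  $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$CONTAINER_NAME $TAG > build.json

cat build.json
# [{"name":"actionbotui","imageUri":"123456789012.dkr.ecr.us-west-2.amazonaws.com/actionbotui:a1b2c3d4"}]
```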

Add this .yml file to the root of your Docker project

AWS CodeBuild looks for a buildspec.yml file at the root of the source code directory. If you want to leverage the .yml I've provided here, just add it to the root of your project as buildspec.yml and you should be ready to go.



Conclusion

Between this CloudFormation script and the buildspec.yml, you should have everything you need to get started leveraging ECS for your Docker projects. I say "get started" because this is just the beginning. There is a lot we haven't talked about here in terms of unit testing, demo environments, blue-green deploys, and much more.