Deploying a Docker image with AWS CDK


In this article I will provide a basic template for cost-effectively deploying a Docker image to AWS using CDK (v2). At the time of this writing, running this example for a month costs quite a bit, around $60 USD. Bear that in mind as you follow along. You can find all of this code on GitHub:

I didn't find a comprehensive guide on this topic, so here I am trying to be the change I want to see in the world.

Let's get started.


Route53 Hosted Zone

I manually configure the Route53 zone and set up my domain's DNS to point there. Assuming you already have a zone configured, you can look it up directly in CDK.

const hostedZone = route53.HostedZone.fromHostedZoneAttributes(this, 'HostedZone', {
    zoneName: siteDomain,
    hostedZoneId: 'your-hosted-zone-id',
});

ECS Cluster Resources

Next we need to configure the ECS cluster. This requires making a VPC with a minimum of 2 availability zones (any fewer and CDK will get angry).

const vpc = new ec2.Vpc(this, 'HopefullyNominalVpc', { maxAzs: 2 });
const ecsCluster = new ecs.Cluster(this, 'HopefullyNominalCluster', {
    clusterName: 'HopefullyNominalCluster',
    vpc: vpc,
});


Docker Image Asset

So I am hosting an Express server in this Docker image. Locally, it exposes port 3000. For reasons that are irrelevant, my Dockerfile is a few directories up. You can just configure the image folder and CDK will look for the Dockerfile in that place and automatically build and upload it. Pretty cool, eh?
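For reference, here's a sketch of what a minimal Dockerfile for a Node/Express app might look like. The base image, paths, and start script are assumptions about your project, not the article's actual file. Note that curl needs to exist inside the image, since the container health check later in this article shells out to it:

```dockerfile
# Hypothetical Dockerfile — adjust paths and the entry point to your repo layout.
FROM node:18-slim

# The ECS container health check below uses curl, and slim images don't ship it.
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]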

Because I'm using a monorepo, I've got node_modules with the weight of a thousand suns scattered everywhere. IgnoreMode.DOCKER was crucial, as it enforces the ignore globs outlined in your .dockerignore file. Failing to add an ignore policy can lead to paths that are too long and/or giant image sizes.
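A sketch of the kind of .dockerignore that matters here (the exact entries depend on your monorepo layout; these are illustrative):

```
# .dockerignore — keep dependency and build output out of the image context
node_modules
**/node_modules
.git
**/dist
```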

const dockerAsset = new ecr_assets.DockerImageAsset(this, 'HopefullyNominalImage', {
    directory: '../../',
    file: 'Dockerfile',
    ignoreMode: IgnoreMode.DOCKER,
});

Fargate Service and Task

Alright, now the expensive part. Here we're going to define how many pods to spin up (desiredCount) and what the resource limits of those pods are (cpu and memoryLimitMiB). At the time of this writing, cpu: 256 and memoryLimitMiB: 512 will get you the cheapest configuration. Don't take my word for it though! This shit is expensive.

After defining our task and service, we wire up that Docker image and CDK does all the magic of building and deploying it straight to the ECS cluster. I also add some logging to CloudWatch here. It's totally optional, but if you want to retain your sanity for debugging purposes, it can be quite valuable.

A note about being cheap: If you're like me and you want only a single pod of the cheapest server type, you're gonna have a bad time unless you add a health check heartbeat. Why? Anecdotally, it seems like these pods sometimes "sleep". I don't see anything about that in their documentation, but after a while there is absolutely some kind of cold start problem. Keeping them alive with a heartbeat is a surefire way to make sure they're always ready for your requests.

Per the documentation, CMD-SHELL is required in the health check command. Don't try to use /bin/sh or whatever. I spent like an hour debugging that issue. Learn from my mistake!

const taskDef = new ecs.FargateTaskDefinition(this, 'HopefullyNominalTaskDef', {
    cpu: 256,
    memoryLimitMiB: 512,
    // Note: Fargate tasks always use the awsvpc network mode; it isn't configurable here.
});

const container = taskDef.addContainer('HopefullyNominalContainerImage', {
    image: ecs.ContainerImage.fromDockerImageAsset(dockerAsset),
    logging: new AwsLogDriver({
        streamPrefix: 'hopefullynominal',
        logRetention: RetentionDays.FIVE_DAYS,
    }),
    healthCheck: {
        command: [
            'CMD-SHELL',
            'curl -f http://localhost:3000/healthcheck || exit 1',
        ],
        interval: Duration.seconds(30),
        timeout: Duration.seconds(10),
        retries: 5,
    },
});

container.addPortMappings({
    containerPort: 3000,
    hostPort: 3000,
    protocol: ecs.Protocol.TCP,
});

const fargateService = new ecs.FargateService(this, 'HopefullyNominalService', {
    cluster: ecsCluster,
    taskDefinition: taskDef,
    serviceName: 'HopefullyNominalService',
    desiredCount: 1,
});

Exposing an actual endpoint

Next, we want to expose an actual endpoint. I create a TLS certificate for the subdomain on our hosted zone; that's the URL the server will ultimately run under. Then we add a load balancer, create an A record that binds our subdomain to the load balancer endpoint, and wire up the Fargate service to serve content.

The port settings here bind 443 (HTTPS) externally to port 3000 internally. It's extremely easy to make this connection secure, so we might as well add the 5 extra lines that get us that sweet, sweet encrypted connection.

// TLS certificate
const apiCertificate = new acm.Certificate(this, 'ApiCertificate', {
    domainName: 'api.' + siteDomain,
    validation: acm.CertificateValidation.fromDns(hostedZone),
});

const alb = new elbv2.ApplicationLoadBalancer(this, 'HopefullyNominalAlb', {
    vpc: vpc,
    internetFacing: true,
    loadBalancerName: 'HopefullyNominalLB',
});

new route53.ARecord(this, 'HopefullyNominalApiDnsRecord', {
    recordName: 'api.' + siteDomain,
    zone: hostedZone,
    target: route53.RecordTarget.fromAlias(new LoadBalancerTarget(alb)),
});

const listener = alb.addListener('listener', {
    open: true,
    port: 443,
    certificates: [apiCertificate],
    protocol: ApplicationProtocol.HTTPS,
});

listener.addTargets('service1', {
    targetGroupName: 'Service1Target',
    port: 3000,
    protocol: Protocol.HTTP,
    targets: [fargateService],
});


And that's it! You've got a Docker image running in an AWS Fargate cluster.
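To actually ship it, the usual CDK workflow applies. This assumes you've already bootstrapped your account/region and have Docker running locally, since CDK builds the image as part of the deploy:

```shell
# One-time per account/region: provision CDK's staging resources
npx cdk bootstrap

# Preview the CloudFormation changes, then build the image and deploy everything
npx cdk diff
npx cdk deploy
```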