How to deploy your Docker containers on Amazon ECS with ecs-cli

Bruno Batista
6 min read · Jul 5, 2020

I hope that my mouth
never goes silent
I also hope that this plane's turbines never fail me
I don't have everything figured out
nor my life sorted out
All I have is a smile
and I hope for one in return

Ever wondered how to deploy your Docker containers on Amazon ECS? Today I'm going to show you a quick way to do it via ecs-cli. So, without further ado…

But first, what's ECS really?

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. You can choose to run your ECS clusters using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Second, ECS is used extensively within Amazon to power services such as Amazon SageMaker, AWS Batch, Amazon Lex, and Amazon.com’s recommendation engine, ensuring ECS is tested extensively for security, reliability, and availability. —AWS ECS Website

Now that you know what ECS really is, let's get our hands dirty!

1. Create an IAM role to manage your ECS instances

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources —AWS IAM website.

Go to the IAM dashboard by clicking Services, then typing IAM in the search field.

Then you should see a console like the one below:

Click Roles, and on the next page click Create role. You should see something like this:

You'll see that AWS service is selected by default; leave it as it is. In the "Choose a use case" >> "Or select a service to view its use cases" section, select Elastic Container Service, and in the "Select your use case" section select EC2 Role for Elastic Container Service. Click next and you should see this:

If for some reason the AmazonEC2ContainerServiceforEC2Role policy isn't attached by default, simply type it in the search bar and then pick it from the list.

Click Next: Tags, then Next: Review and give your new role a name:
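If you prefer the terminal over the console, a rough AWS CLI equivalent would look like the sketch below (ecsInstanceRole is just an example name; the console wizard creates the matching instance profile for you automatically, while the CLI needs it spelled out):

$ aws iam create-role --role-name ecsInstanceRole --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
$ aws iam attach-role-policy --role-name ecsInstanceRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
$ aws iam create-instance-profile --instance-profile-name ecsInstanceRole
$ aws iam add-role-to-instance-profile --instance-profile-name ecsInstanceRole --role-name ecsInstanceRole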

Well done, that was the first step :)

2. Create a key pair (required if you intend to log in to your EC2 instances via SSH, optional if you don't)

Amazon EC2 uses public key cryptography to encrypt and decrypt login information. Public key cryptography uses a public key to encrypt a piece of data, and then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair. Public key cryptography enables you to securely access your instances using a private key instead of a password. — AWS EC2 website

Go to Services >> EC2

On the left sidebar click Key pairs then Create key pair:

And you should see something like:

Give it a name of your choice then click Create key pair

Once you're done, the browser will ask you to download it. Remember to keep it safe. If you lose this key pair you won't be able to attach a new one to already-running instances.

On your local machine, open the terminal, find the key pair and then give it the permissions required by AWS.

$ chmod 400 your_keypair_file.pem
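If you'd rather skip the console for this step, the same key pair can be created from the terminal (the key name is a placeholder):

$ aws ec2 create-key-pair --key-name your_keypair_name --query 'KeyMaterial' --output text > your_keypair_file.pem
$ chmod 400 your_keypair_file.pem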

You've completed another step. Today was a good day, huh?

3. Almost there…

First, make sure you have ecs-cli up and running on your local machine. Since this step depends on the OS you're working with, I'm going to leave you with the AWS tutorial on how to do it, but you've gotta promise me to come back, ok? Ok, I believe you, there you go:
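For reference, at the time of writing the Linux install boils down to downloading the binary and making it executable (check the AWS docs for macOS/Windows and for verifying the signature):

$ sudo curl -Lo /usr/local/bin/ecs-cli https://amazon-ecs-cli.s3.amazonaws.com/ecs-cli-linux-amd64-latest
$ sudo chmod +x /usr/local/bin/ecs-cli
$ ecs-cli --version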

Great job! Now that you have AWS's ecs-cli, it's time to set it up.

Let's create an IAM user to grant us access to AWS services via the CLI. Remember the IAM console from the first step? Get back there, then click Users >> Add user, and you should see something like:

Give the user a name, check Programmatic access in the "Select AWS access type" section, then click Next: Permissions. Select Attach existing policies directly and give the user AmazonECS_FullAccess.

Click Next: Tags, Next: Review then finally Create user and you should see something like:

Copy both the Access key ID and the Secret access key, keep them safe somewhere (ideally, ~/.aws/credentials file) and let's move on to the next step.
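If you do go the ~/.aws/credentials route, it's a plain INI file; a minimal example using AWS's documentation placeholder values looks like this:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRiCYEXAMPLEKEY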

4. And now, to the GRAND FINALE!

Set up a profile using the access and secret keys you just created:

$ ecs-cli configure profile --profile-name $PROFILE_NAME_OF_YOUR_CHOICE --access-key $ACCESS_KEY --secret-key $SECRET_KEY

Configure a cluster (this stores the cluster configuration locally; the cluster itself gets created by ecs-cli up further down):

$ ecs-cli configure --cluster $CLUSTER_NAME --default-launch-type $LAUNCH_TYPE --region $REGION --config-name $CONFIG_NAME_OF_YOUR_CHOICE

$LAUNCH_TYPE = EC2 (t2.micro by default, free tier eligible) or FARGATE

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. — AWS Fargate website
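Filled in with example values (the names and region below are placeholders), the configure command might look like:

$ ecs-cli configure --cluster my-demo-cluster --default-launch-type EC2 --region us-east-1 --config-name my-demo-config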

Create a stack (CloudFormation):

AWS CloudFormation provides a common language for you to model and provision AWS and third party application resources in your cloud environment. AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS and third party resources. — AWS CloudFormation Website

Remember the ECS instance role we created a few steps earlier? It's time to use it:

$ ecs-cli up --instance-role $YOUR_INSTANCE_ROLE --profile $YOUR_PROFILE
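In practice you'll usually also want to pass the key pair from step 2 and pin the instance count and type; something along these lines (the values are examples):

$ ecs-cli up --instance-role $YOUR_INSTANCE_ROLE --keypair your_keypair_name --size 1 --instance-type t2.micro --profile $YOUR_PROFILE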

Docker and docker-compose’s build issue

Note: out of the box, ecs-cli doesn't support docker-compose's build directive; also, the version specified in the compose file MUST be the string "1", "1.0", "2", "2.0", "3", or "3.0".
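For reference, an ECS-friendly compose file ends up looking roughly like this (the service, image, and port are placeholders); note the quoted version string and the image key instead of a build section:

version: "3"
services:
  web:
    image: your_dockerhub_user/your_app:latest
    ports:
      - "80:80"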

Create a Docker Hub account if you don't have one already. Then:

$ docker login

To work around the docker-compose build issue, set the DOCKERHUB_USER environment variable in your shell.
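For example (the username is a placeholder):

$ export DOCKERHUB_USER=your_dockerhub_username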

Use the gist below to rewrite your docker-compose file and to tag and upload your Docker image; you can also perform all of these steps manually if you want to.
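Done by hand, those steps amount to roughly the following (the image name and tag are placeholders), plus pointing the image key of your compose file at the pushed image instead of a build section:

$ docker build -t $DOCKERHUB_USER/your_app:latest .
$ docker push $DOCKERHUB_USER/your_app:latest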

From the gist, create a build_from_compose.py file in the root folder of your application, then run:

$ python build_from_compose.py

You should now see a docker-compose.yml-$RANDOMSTRING file. Open it, remove any volume references that might still exist, and finally deploy to ECS:

$ ecs-cli --profile $YOUR_PROFILE compose --file $GENERATED_DOCKER_COMPOSE_FILE service up

And that's pretty much it. If everything worked as expected, you should now see in your AWS account:

  • An ECS cluster
  • An ECS service (Status must be RUNNING) with the external IP of your container
  • An ECS Task (Status must be RUNNING)
  • A CloudFormation Stack
  • An EC2 instance
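You can also verify from the terminal: ecs-cli ps lists the running containers along with their state and port mappings, using the cluster configuration set up earlier:

$ ecs-cli ps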

ecs-cli command line reference: see the AWS documentation.

Note: If for some reason you're not able to access your newly created service, check the inbound rules of your security group (sg) and make sure the HTTP/TCP ports your container exposes are allowed.
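Opening a port can also be done from the terminal; a sketch with the AWS CLI (the group ID and port are placeholders) would be:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0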

That's it for today.

Thanks for stopping by!
