AWS CodeCommit, IAM, CodeBuild with awscli

This blog describes using awscli to set up CodeCommit, SSH keys, CodeBuild and IAM roles.

Now, an admin of an AWS account could allow a user to:

  • provide an SSH public key – easily uploaded to IAM by the awsadmin
  • be given the new project location, after the admin easily creates a project for them
  • git clone, always get the latest code – then make changes
  • upload a new packer file, ready for another cookie-cutter rebuild – with a new timestamp


Assumptions:

  • awscli is set up / AWS credentials are configured (see the quick check after this list)
  • an IAM user is already created (I’m using awsadmin), which has sufficient EC2 & CodeCommit privileges
  • git is installed
  • AWS region is ap-southeast-2 (Sydney)
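
A quick way to confirm the credentials, region and git are in place before starting (optional check):

aws sts get-caller-identity
aws configure get region
git --version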

Setup IAM

Create an SSH key

cd $HOME/.ssh
ssh-keygen -f YourSSHKeyName -t rsa -C "YourEmailAddress" -b 4096

Upload public key to AWS IAM and save output to $NEW_SSHPublicKeyId

NEW_SSHPublicKeyId=$(aws iam upload-ssh-public-key --user-name awsadmin --ssh-public-key-body "$(cat ~/.ssh/YourSSHKeyName.pub)" --output text --query 'SSHPublicKey.SSHPublicKeyId')
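
To confirm the upload worked and the key ID was captured (optional check):

echo $NEW_SSHPublicKeyId
aws iam list-ssh-public-keys --user-name awsadmin --output table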

Configure Linux to use CodeCommit as a git repo

Update your $HOME/.ssh/config file (the snippet below assumes you are still in $HOME/.ssh from the earlier cd)

echo "
Host git-codecommit.*
 User $NEW_SSHPublicKeyId
 IdentityFile ~/.ssh/YourSSHKeyName" >> config
chmod 600 config

Test the new ssh key works
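
A quick connectivity check against the regional CodeCommit endpoint (ap-southeast-2, per the assumptions above); you should see a message that you have successfully authenticated before the connection closes:

ssh git-codecommit.ap-southeast-2.amazonaws.com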



Create a CodeCommit repo

aws codecommit create-repository --repository-name testrepo
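
The create-repository output includes the clone URLs. You can also grab the SSH clone URL later with get-repository (NEW_REPO_URL is just a variable name I’m introducing here):

NEW_REPO_URL=$(aws codecommit get-repository --repository-name testrepo --query 'repositoryMetadata.cloneUrlSsh' --output text)
echo $NEW_REPO_URL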

Clone the empty repo

This creates a local directory with the git structure in place.

git clone ssh://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/testrepo

Make your buildspec.yml file – it must be named exactly that.

The packer template filename can be anything, but the buildspec.yml must reference it correctly.

buildspec.yml must contain the following. Cut/paste:

version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      # pin whichever Packer release you need (1.2.3 shown as an example)
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.2.3/packer_1.2.3_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      # jq download location at time of writing
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating amazon-linux_packer-template.json"
      - ./packer validate amazon-linux_packer-template.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, amazon-linux_packer-template.json"
      - ./packer build amazon-linux_packer-template.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

amazon-linux_packer-template.json – this is just a sample; make your own changes as needed.

 "variables": {
 "aws_region": "{{env `AWS_REGION`}}",
 "aws_ami_name": "amazon-linux_{{isotime \"02Jan2006\"}}"

"builders": [{
 "type": "amazon-ebs",
 "region": "{{user `aws_region`}}",
 "instance_type": "t2.micro",
 "ssh_username": "ec2-user",
 "ami_name": "{{user `aws_ami_name`}}",
 "ami_description": "Customized Amazon Linux",
 "associate_public_ip_address": "true",
 "source_ami_filter": {
 "filters": {
 "virtualization-type": "hvm",
 "name": "amzn-ami*-ebs",
 "root-device-type": "ebs"
 "owners": ["137112412989", "591542846629", "801119661308", "102837901569", "013907871322", "206029621532", "286198878708", "443319210888"],
 "most_recent": true

"provisioners": [
 "type": "shell",
 "inline": [
 "sudo yum update -y",
 "sudo /usr/sbin/update-motd --disable",
 "echo 'No unauthorized access permitted' | sudo tee /etc/motd",
 "sudo rm /etc/issue",
 "sudo ln -s /etc/motd /etc/issue",
 "sudo yum install -y elinks screen"

Commit to the CodeCommit repo

git add .
git commit -m "initial commit"
git push origin master


Create an IAM service role

We need a role for CodeBuild to run the job. Create the service role with a trust policy that lets CodeBuild assume it, add the HashiCorp Packer permissions as an inline policy, and then attach a managed service-role policy.

Create a permissions policy file called CodeBuild-IAM-policy.json. The Action list is the HashiCorp Packer minimum-permissions EC2 action set; I’ve added logs:CreateLogStream and logs:CreateLogGroup so CodeBuild can write its build logs. Add the Packer EC2 actions to the same Action array.

{
 "Version": "2012-10-17",
 "Statement": [{
  "Effect": "Allow",
  "Action": ["logs:CreateLogGroup", "logs:CreateLogStream"],
  "Resource": "*"
 }]
}

Create the role using the trust policy, add the inline permissions policy (the inline policy name is your choice), and then attach the ServiceRole policy

aws iam create-role --role-name codebuild-1stpacker-service-role --assume-role-policy-document file://codebuild-trust.json
aws iam put-role-policy --role-name codebuild-1stpacker-service-role --policy-name HashiCorpPacker --policy-document file://CodeBuild-IAM-policy.json
aws iam attach-role-policy --role-name codebuild-1stpacker-service-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole

Then get the role ARN – you’ll need it to update the create-project.json below

aws iam get-role --role-name codebuild-1stpacker-service-role --query "Role.Arn" --output text

Create a project

Create a create-project.json file. Update the serviceRole ARN (from the command above) and the source location to your repo’s HTTPS clone URL.

{
 "name": "AMI_Builder",
 "description": "AMI builder CodeBuild project",
 "source": {
  "type": "CODECOMMIT",
  "location": "https://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/testrepo",
  "buildspec": "buildspec.yml"
 },
 "artifacts": {
  "type": "NO_ARTIFACTS"
 },
 "environment": {
  "type": "LINUX_CONTAINER",
  "image": "aws/codebuild/ubuntu-base:14.04",
  "computeType": "BUILD_GENERAL1_SMALL"
 },
 "serviceRole": "arn:aws:iam::AWS_ACCT:role/codebuild-1stpacker-service-role"
}

Then create the project

aws codebuild create-project --cli-input-json file://create-project.json

Start a build

This takes around 5 minutes using a t2.micro and the default scripts.

aws codebuild start-build --project-name AMI_Builder
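
If you want to watch progress from the CLI rather than the console, something like this should work (BUILD_ID is just a shell variable I’m introducing):

BUILD_ID=$(aws codebuild list-builds-for-project --project-name AMI_Builder --sort-order DESCENDING --query 'ids[0]' --output text)
aws codebuild batch-get-builds --ids $BUILD_ID --query 'builds[0].buildStatus' --output text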

Check your new AMIs

 aws ec2 describe-images --owners $(aws sts get-caller-identity --output text --query 'Account') --query 'Images[?contains(Name, `amazon-linux`)]'


All done. A fairly simple, yet powerful and dynamic way to allow users/sysadmins to churn out new AMIs.

Next improvements would be to;

  • Trigger a CodePipeline build for code commits
  • SNS topic for email notifications of build status
  • Replace it all with CFN

Inspiration for this AWS CLI-focused blog came from the AWS DevOps Blog.


Are you AWS Well Architected?

What is Well-Architected?

AWS has developed and documented a set of design principles and published a whitepaper entitled “AWS Well-Architected Framework” (WAF). This document was updated in November 2017 and contains the design principles you should use as your foundation when deploying into the cloud.

The AWS Well-Architected Framework has 5 key pillars:

  • Operational Excellence – aligns operational practices with business objectives
  • Security – security in depth, traceability
  • Reliability – capacity, recovery and testing
  • Performance Efficiency – resource optimisation
  • Cost Optimization – consumption, analysis and focus on reducing costs

More recently, the need for workload specific WAFs has been identified. These workload specific WAFs are known as a Lens. Currently, Serverless Applications and High Performance Compute lenses are available.

Getting Well-Architected on AWS

By optimising your architecture around these pillars, you gain the benefits of a well-architected design in the cloud:

  • Stop guessing your capacity needs
  • Test systems at production scale
  • Automate for easier architectural experimentation
  • Allow for evolutionary architectures
  • Drive architectures using data
  • Improve through game days

How the AWS Well-Architected Framework works

Approved AWS Partners, like Bulletproof, offer a Well-Architected Review designed to educate customers on architectural best practices for building and operating reliable, secure, efficient, and cost-effective systems in the cloud, and to assess their current workloads.

AWS Partners use the AWS Well-Architected Framework design principles and tools to deliver a comprehensive review of a single workload. A workload is defined as a set of machines, instances, or servers that together enable the delivery of a service or application to internal or external customers. Examples include an eCommerce website, business application, web application, etc. Through this review and the guidance of the AWS Well-Architected Framework, the partner can provide the customer with an analysis of the workload through the lens of the WAF pillars, as well as a plan to address areas that do not comply with the framework’s recommendations.

The value of a Well Architected Review

A Well-Architected Review is under-priced for the value it delivers; for a small outlay you gain:

  • Understand potential impacts
  • Visibility of risks
  • Consistent approach to architecture review
  • Recommended remediation steps

Well-Architected Review

The review process usually consists of;

  • A one-day on-site review and architecture deep-dive with your stakeholders, run by AWS Certified Solutions Architects, to gather data on your workloads in your business/IT context
  • Analysis of the collected data, cross-analysis of the workload via the guidance offered by the Framework, and formulation of any recommended remediation, if applicable
  • A Well-Architected report, outlining all findings and recommendations

I love doing architecture reviews; they provide the opportunity to dive deep with customers and deliver real, actionable insights.

AWS ELB vs ALB cost comparison

A lot of decisions (even simple questions) are based on cost. I was recently asked about the new ALB costs.  So, here is a quick comparison (examples at bottom) of the AWS Classic Load Balancer (ELB) and the newer Application Load Balancer (ALB).

The hourly rate for an Application Load Balancer is 10% lower than that of a Classic Load Balancer, but that’s not the whole story.

For the Sydney region (ap-southeast-2), prices are:

  • Application Load Balancer – $0.0252 per hour
  • Classic Load Balancer – $0.028 per hour

AWS changed the way the ALB is billed by adding LCUs (Load Balancer Capacity Units), charged on the highest of the following dimensions:

  • New connections (per second) – each LCU provides 25 new connections per second (averaged over the hour)
  • Active connections (per minute) – each LCU provides 3,000 active connections per minute
  • Bandwidth – each LCU provides 2.22 Mbps (2,220 kbps), roughly 1 GB per hour, averaged over the hour

Estimated LB resource costs per month:

Application Load Balancer

$0.0252/h * 24 hours * 30 days = $18.14 per month + $0.008 per LCU per hour (see below)

Classic Load Balancer

$0.028 p/h * 24 hours * 30 days = $20.16 per month + $0.008 per GB of data processed

For the common bandwidth factor, the maths below gives an easy high-level side-by-side comparison. The ALB LCU charge uses the bandwidth dimension (kbps / 2,220) and the Classic ELB data charge is $0.008 per GB processed. A rough calculator you can reuse follows Example 3.

Example 1
Production environment (high usage) – bandwidth 2.5 Mbps (2,500 kbps), total data transferred 5,000 GB
ALB: 2500/2220 = 1.126 LCUs; 1.126 * 0.008 * 24 * 30 = $6.49 + $18.14 = $24.63 = 59% saving
ELB: 5000 * 0.008 = $40.00 + $20.16 = $60.16

Example 2
Test environment (medium usage) – bandwidth 200 kbps, total data transferred 250 GB
ALB: 200/2220 = 0.090 LCUs; 0.090 * 0.008 * 24 * 30 = $0.52 + $18.14 = $18.66 = 16% saving
ELB: 250 * 0.008 = $2.00 + $20.16 = $22.16

Example 3
Dev environment (low usage) – bandwidth 10 kbps, total data transferred 5 GB
ALB: 10/2220 = 0.0045 LCUs; 0.0045 * 0.008 * 24 * 30 = $0.03 + $18.14 = $18.17 = 10% saving
ELB: 5 * 0.008 = $0.04 + $20.16 = $20.20
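
To plug your own numbers into the same comparison, here is a small shell sketch using the Sydney prices above (lb_cost is just a helper name, and the ALB side only models the bandwidth LCU dimension):

# rough monthly ALB vs Classic ELB cost comparison
# usage: lb_cost <avg_bandwidth_kbps> <data_processed_GB_per_month>
lb_cost () {
  awk -v kbps="$1" -v gb="$2" 'BEGIN {
    hours = 24 * 30
    alb = 0.0252 * hours + (kbps / 2220) * 0.008 * hours   # hourly rate + bandwidth LCUs
    elb = 0.028  * hours + gb * 0.008                      # hourly rate + per-GB data processed
    printf "ALB: $%.2f  ELB: $%.2f  ALB saving: %.1f%%\n", alb, elb, (elb - alb) / elb * 100
  }'
}
lb_cost 2500 5000   # Example 1 – prints roughly ALB: $24.63  ELB: $60.16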


So, in nearly all cases an ALB is cheaper than a Classic ELB in a general side-by-side comparison.
But the best way to measure your costs is with resource tagging, billing reports and some real-world testing.

Amazon Alexa: Converting mp3 files

When working with Amazon Alexa audio/MP3 files and SSML, there are a few simple rules.

This blog assumes awscli, wget and ffmpeg are installed and configured.

Here are the must-haves for using MP3 audio in your Alexa skill:

  • publicly accessible file (S3 is a good location), see below for upload tips
  • HTTPS endpoint
  • MP3 format
  • shorter than 90 seconds
  • bit rate of 48 kbps
  • sample rate of 16,000 Hz

Now, let’s give it a try.

Download a sample audio file

We’ll use this Crowd Excited free sample

cd /tmp

Convert it to the right Alexa format

Use this ffmpeg command as an easy CLI converter, then use awscli to upload to an S3 bucket and set a public-read ACL on the object. Set $OUTFILE and $BUCKET to suit your own names.

OUTFILE=crowd-excited-alexa.mp3   # example output filename
BUCKET=your-bucket-name           # example bucket name
ffmpeg -i crowd-excited.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 $OUTFILE
aws s3 cp $OUTFILE s3://$BUCKET/ --storage-class REDUCED_REDUNDANCY
aws s3api put-object-acl --bucket $BUCKET --key $OUTFILE --acl public-read
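
To confirm the converted file meets the bit rate and sample rate requirements, ffprobe (installed alongside ffmpeg) can report them:

ffprobe -v error -show_entries stream=codec_name,sample_rate,bit_rate -of default=noprint_wrappers=1 $OUTFILE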

Extra Tip – Working with the Alexa SDK for Node.js

I got caught out when working with the Alexa SDK for Node.js. Make sure the audio tag string ends with a space and then the closing slash (" />", without quotes). This might go in your index.js:

// in index.js 
cheerAudio = `<audio src="" />`;


Docker Bridge Networking – a working example


Run through these commands to get familiar with Docker bridge networking. Some experience with Docker is assumed, and Docker must be installed.
This example creates an internal Docker bridge network; 10.0.0.0/24 is used as an example CIDR throughout.

#download alpine container image
docker pull alpine 
# list docker networks
docker network ls 
# create an internal docker network (example subnet)
docker network create myOverlayNet --internal --subnet 10.0.0.0/24
# list docker networks to check new network creation
docker network ls 
docker network inspect myOverlayNet

# Run a few containers in the new network
# -ti interactive terminal (detach with Ctrl-p Ctrl-q, leaving the container running)
# --name is the name for your new container, which can be resolved by DNS
# --network attaches the container to the named network
docker run -ti --name host1 --network myOverlayNet alpine
# Press Ctrl-p and Ctrl-q to exit the container and leave it running
docker run -ti --name host2 --network myOverlayNet --link host1 alpine
# Press Ctrl-p and Ctrl-q to exit the container and leave it running
# Run a new container outside of myOverlayNet
docker run -ti --name OutsideHost --hostname OutsideHost alpine
docker start host1 host2 OutsideHost
# check they are all running
docker ps -a
# inspect the network to ensure the new containers (host1 and host2) have IPs from the 10.0.0.0/24 subnet; OutsideHost will be missing (as expected)
docker network inspect myOverlayNet
# inspect the default bridge network, OutsideHost will be there
docker network inspect bridge


This demonstrates connectivity within the myOverlayNet network, and that containers on other networks – like the default bridge, where OutsideHost lives – have no access to it.

docker exec allows us to run a command inside a running container and see the output on stdout.

# should succeed, Ctrl-C to stop the ping
docker exec host1 ping host2
# should succeed, Ctrl-C to stop the ping
docker exec host2 ping host1

# should fail, Ctrl-C to stop the ping
docker exec host1 ping OutsideHost
# should fail, Ctrl-C to stop the ping
docker exec OutsideHost ping host1

Clean up

Clean up everything we created – the containers and the network.

# kill the running containers
docker kill host1 host2 OutsideHost
# remove containers
docker rm host1 host2 OutsideHost
# remove network
docker network rm myOverlayNet
# check docker networks
docker network ls
#check containers are removed
docker ps -a


Well done, all completed – you now understand Docker bridge networking a little better. Reach out via the comments or Twitter if you have any questions.
by @shallawell, inspired by @ArjenSchwarz

amzn2-linux + bees

Have you ever needed to load test your website to validate how many requests per second it can handle? Check out this repo for a Packer file to build an AWS EC2 AMI for use with bees-with-machine-guns.

AMI Features

  • built with Packer, so it’s repeatable
  • uses the latest Amazon Linux 2 AMI (at time of writing)
  • yum update --security on each build
  • AWS keys are variables, not hard-coded
  • bees-with-machine-guns (and dependencies) installed

Check it Out

Everything you need to load test your website, so go build some bees and attack!
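
Once the AMI is built and your bees config points at it, a typical bees-with-machine-guns session looks something like this (instance count, security group, key name and target URL are examples – check the repo README for the exact flags):

bees up -s 4 -g default -k mykey-apse2   # spin up 4 bee instances
bees report                              # confirm the bees are ready
bees attack -n 10000 -c 250 -u http://your-site.example.com/   # 10,000 requests, 250 concurrent
bees down                                # terminate the bees when finished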

Let me know how your website went.



awscli – Advanced Query Output

The awscli supports advanced JMESPath queries – the JMESPath documentation has good help and examples.

Use these combinations of awscli commands to generate the JSON output you need.

Let me know via comments or twitter if you need some help 🙂  HTH.

# @shallawell
# Program Name:
# Purpose: Demonstrate JMESPath Query examples
# version 0.1
# The awscli uses JMESPath Query expressions, rather than regex.

#Advanced JMESPath Query - good help and examples here.

# List all users, (basic query)
aws iam list-users --output text --query "Users[].UserName"
# List all users, NOT NULL
aws iam list-users --output text --query 'Users[?UserName!=`null`].UserName'
# list users STARTS_WITH "a"
aws iam list-users --output text --query 'Users[?starts_with(UserName, `a`) == `true`].UserName'
# list users whose UserName CONTAINS "ad"
aws iam list-users --output text --query 'Users[?contains(UserName, `ad`) == `true`].UserName'

# get the latest mysql engine version
aws rds describe-db-engine-versions \
--query 'DBEngineVersions[]|[?contains(Engine, `mysql`) == `true`].[Engine,DBEngineVersionDescription]' \
| sort -r -k 2 | head -1

easy – aws keypair

Create a new keypair for use with AWS EC2 instances.
HINT: it’s a good idea to include the region in your key names, as keypairs are region-specific.

# set these to suit (example values)
KEY=mykey-apse2
REGION=ap-southeast-2
aws ec2 create-key-pair --key-name=$KEY --region $REGION --query="KeyMaterial" --output=text > ~/.ssh/$KEY.pem
# read-only the key
chmod 400 ~/.ssh/$KEY.pem
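
Once an instance is launched with that key, connecting is the usual ssh (the login user depends on the AMI – ec2-user for Amazon Linux; the IP below is a placeholder):

ssh -i ~/.ssh/$KEY.pem ec2-user@<instance-public-ip>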

Ready for use (awscli must be installed, of course).