AWS CodeCommit, IAM, CodeBuild with awscli

This blog describes using awscli to set up CodeCommit, SSH keys, CodeBuild and IAM roles.

Now, an admin of an AWS account could allow a user to;

  • provide an SSH public key – easily uploaded to IAM by awsadmin
  • receive the new project location, after the admin easily creates a project for them
  • git clone, always getting the latest code – then make changes
  • upload a new Packer file, ready for another cookie-cutter rebuild – with a new timestamp


Assumptions;

  • awscli is set up and AWS credentials are configured
  • an IAM user is already created (I’m using the awsadmin role), which has sufficient EC2 & CodeCommit privileges
  • git is installed
  • the AWS region is ap-southeast-2 (Sydney)

Setup IAM

Create an SSH key

cd $HOME/.ssh
ssh-keygen -f YourSSHKeyName -t rsa -C "YourEmailAddress" -b 4096

Upload the public key to AWS IAM and save the new key ID in $NEW_SSHPublicKeyId

NEW_SSHPublicKeyId=$(aws iam upload-ssh-public-key --user-name awsadmin --ssh-public-key-body "$(cat ~/.ssh/YourSSHKeyName.pub)" --output text --query 'SSHPublicKey.SSHPublicKeyId')

Configure Linux to use CodeCommit as a git repo

Update your $HOME/.ssh/config file (still in the $HOME/.ssh directory):

echo "
Host git-codecommit.*
 User $NEW_SSHPublicKeyId
 IdentityFile ~/.ssh/YourSSHKeyName" >> config
chmod 600 config

Test that the new SSH key works


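The quickest check is an SSH connection to the CodeCommit Git endpoint. The endpoint name below assumes the ap-southeast-2 region used throughout this post:

```shell
# Attempt a non-interactive SSH connection to the Sydney CodeCommit endpoint.
# On success it prints a greeting naming your SSH key ID, then disconnects.
ssh -T git-codecommit.ap-southeast-2.amazonaws.com
```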

Create a CodeCommit repo

aws codecommit create-repository --repository-name testrepo

Clone the empty repo

This makes a local directory with a git structure

git clone ssh://
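The clone URL above was stripped; CodeCommit SSH clone URLs follow a documented pattern, so for the testrepo repository in ap-southeast-2 it can be derived like this (pattern assumed from the AWS CodeCommit docs):

```shell
# Build the CodeCommit SSH clone URL from region and repository name.
REGION=ap-southeast-2
REPO=testrepo
URL="ssh://git-codecommit.${REGION}.amazonaws.com/v1/repos/${REPO}"
echo "$URL"
```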

Make your buildspec.yml file – it must be named exactly this.

The Packer filename can be anything, but buildspec.yml must reference it correctly.

buildspec.yml must contain the following. Cut/paste;

version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o && unzip
      - echo "Installing jq..."
      - curl -qL -o jq && chmod +x ./jq
      - echo "Validating amazon-linux_packer-template.json"
      - ./packer validate amazon-linux_packer-template.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here:
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, amazon-linux_packer-template.json"
      - ./packer build amazon-linux_packer-template.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

amazon-linux_packer-template.json. This is just a sample; make your own changes as needed.

{
  "variables": {
    "aws_region": "{{env `AWS_REGION`}}",
    "aws_ami_name": "amazon-linux_{{isotime \"02Jan2006\"}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "region": "{{user `aws_region`}}",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "{{user `aws_ami_name`}}",
    "ami_description": "Customized Amazon Linux",
    "associate_public_ip_address": "true",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "amzn-ami*-ebs",
        "root-device-type": "ebs"
      },
      "owners": ["137112412989", "591542846629", "801119661308", "102837901569", "013907871322", "206029621532", "286198878708", "443319210888"],
      "most_recent": true
    }
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo yum update -y",
      "sudo /usr/sbin/update-motd --disable",
      "echo 'No unauthorized access permitted' | sudo tee /etc/motd",
      "sudo rm /etc/issue",
      "sudo ln -s /etc/motd /etc/issue",
      "sudo yum install -y elinks screen"
    ]
  }]
}

Commit to the CodeCommit repo

git add .
git commit -m "initial commit"
git push origin master


We need a role to run the build job; this is how.
Create an IAM service role for CodeBuild, then add the HashiCorp inline policy plus a service-role policy.

Create a policy file called CodeBuild-IAM-policy.json.
I’ve added logs:CreateLogStream and logs:CreateLogGroup to the HashiCorp policy.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
    ],
    "Resource": "*"
  }]
}

Create the role and attach the inline policy, and then the ServiceRole policy

aws iam create-role --role-name codebuild-1stpacker-service-role --assume-role-policy-document file://CodeBuild-IAM-policy.json
aws iam attach-role-policy --role-name codebuild-1stpacker-service-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
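One gotcha worth flagging: --assume-role-policy-document expects a trust policy (who can assume the role), not a permissions policy. A minimal sketch of a CodeBuild trust policy, assuming it is saved as a separate file such as CodeBuild-trust-policy.json:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "codebuild.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
```

The permissions policy can then be attached to the role separately with aws iam put-role-policy.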

Then get the ARN to update the script below

aws iam get-role --role-name codebuild-1stpacker-service-role --query "Role.Arn" --output text

Create a project

Create a create-project.json file, and update the ARN for the serviceRole you created above.

{
  "name": "AMI_Builder",
  "description": "AMI builder CodeBuild project",
  "source": {
    "type": "CODECOMMIT",
    "location": "",
    "buildspec": "buildspec"
  },
  "artifacts": {
    "type": "NO_ARTIFACTS"
  },
  "environment": {
    "type": "LINUX_CONTAINER",
    "image": "aws/codebuild/ubuntu-base:14.04",
    "computeType": "BUILD_GENERAL1_SMALL"
  },
  "serviceRole": "arn:aws:iam::AWS_ACCT:role/service-role/codebuild-1stpacker-service-role"
}

Then create the project

aws codebuild create-project --cli-input-json file://create-project.json

Start a build

This takes around 5 minutes using a t2.micro and the default scripts

aws codebuild start-build --project-name AMI_Builder

Check your new AMIs

 aws ec2 describe-images --owner $(aws sts get-caller-identity --output text --query 'Account') --query 'Images[?contains(Name, `amazon-linux`)]'


All done. A fairly simple, yet powerful and dynamic way to allow users/sysadmins to churn out new AMIs.

Next improvements would be to;

  • Trigger a CodePipeline build for code commits
  • SNS topic for email notifications of build status
  • Replace it all with CFN

Inspiration for this AWS CLI focused blog came from AWS DevOps Blog


Are you AWS Well Architected?

What is Well-Architected?

AWS has developed and documented a set of Design Principles and published a whitepaper entitled “AWS Well-Architected Framework” (WAF). This document was recently updated (November 2017) to contain the latest design principles that you should use as the foundation for deploying into the cloud.

The AWS Well-Architected Framework has 5 key pillars:

  • Operational Excellence – aligns operational practices with business objectives
  • Security – security in depth, traceability
  • Reliability – capacity, recovery and testing
  • Performance Efficiency – resource optimisation
  • Cost Optimization – consumption, analysis and focus on reducing costs

More recently, the need for workload specific WAFs has been identified. These workload specific WAFs are known as a Lens. Currently, Serverless Applications and High Performance Compute lenses are available.

Getting Well-Architected on AWS

By optimising your architecture around these pillars, you gain the benefits of a well-architected design in the cloud:

  • Stop guessing your capacity needs
  • Test systems at production scale
  • Automate for easier architectural experimentation
  • Allow for evolutionary architectures
  • Data-driven architectures
  • Improve through game days

How does the AWS Well-Architected Framework work?

Approved AWS Partners, like Bulletproof, offer a Well-Architected Review designed to educate customers on architectural best practices for building and operating reliable, secure, efficient, and cost-effective systems in the cloud, and to assess their current workloads.

AWS Partners use the AWS Well-Architected Framework design principles and tools to deliver a comprehensive review of a single workload. A workload is defined as a set of machines, instances, or servers that together enable the delivery of a service or application to internal or external customers. Examples include an eCommerce website, business application, web application, etc. Through this review and the guidance of the AWS Well-Architected Framework, the partner can provide the customer with an analysis of the workload through the lens of the WAF pillars, as well as a plan to address areas that do not comply with framework recommendations.

The value of a Well Architected Review

The value far exceeds the price. For a small outlay, you gain huge insights.

  • Understand potential impacts
  • Visibility of risks
  • Consistent approach to architecture review
  • Recommended remediation steps

Well-Architected Review

The review process usually consists of;

  • A one-day on-site review and architecture deep-dive with your stakeholders, by AWS Certified Professional Solutions Architects, to gather data on your workloads in your business/IT context
  • Analysis of the collected data, cross-analysis of the workload via the guidance offered by the Framework, and formulation of any recommended remediation, if applicable
  • A Well-Architected report, outlining all findings and recommendations

I love doing architecture reviews; they provide the opportunity to dive deep with customers and give real, actionable insights.

AWS ELB vs ALB cost comparison

A lot of decisions (even simple questions) are based on cost. I was recently asked about the new ALB costs.  So, here is a quick comparison (examples at bottom) of the AWS Classic Load Balancer (ELB) and the newer Application Load Balancer (ALB).

The Hourly rate for the use of an Application Load Balancer is 10% lower than the cost of a Classic Load Balancer, but that’s not the whole story.

For Sydney region (ap-southeast-2) prices are;

  • Application Load Balancers = $0.0252 per hour
  • Classic Load Balancers = $0.028 per hour

AWS have changed the way they bill the ALB by adding LCUs (Load Balancer Capacity Units), charged on the highest of the following dimensions:

  • New connections (per second) – Each LCU provides 25 new connections per second (averaged over the hour)
  • Active connections (per minute) – Each LCU provides 3,000 active connections per minute.
  • Bandwidth (Mbps) – Each LCU provides 2.22 Mbps (2,220 kbps) averaged over the hour

Estimated LB resource costs per month;

Application Load Balancer

$0.0252/h * 24 hours * 30 days = $18.14 per month + $0.008 per LCU per hour (see below)

Classic Load Balancer

$0.028 p/h * 24 hours * 30 days = $20.16 per month + $0.008 per GB of data processed

For the common bandwidth factor, the below math is an easy way to compare side by side (at a high level).

Example 1
Production environment (high usage) – bandwidth 2.5 Mbps (2,500 kbps), total data transferred 5,000 GB
ALB 2500/2220 = 1.1261 LCUs; 1.1261 * $0.008 * 720 h = $6.49 + $18.14 = $24.63 = ~59% saving
ELB 5000 GB * $0.008 = $40.00 + $20.16 = $60.16

Example 2
Test environment (medium usage) – bandwidth 200 kbps, total data transferred 250 GB
ALB 200/2220 = 0.0901 LCUs; 0.0901 * $0.008 * 720 h = $0.52 + $18.14 = $18.66 = ~16% saving
ELB 250 GB * $0.008 = $2.00 + $20.16 = $22.16


Example 3
Dev environment (low usage) – bandwidth 10 kbps, total data transferred 5 GB
ALB 10/2220 = 0.0045 LCUs; 0.0045 * $0.008 * 720 h = $0.03 + $18.14 = $18.17 = ~10% saving
ELB 5 GB * $0.008 = $0.04 + $20.16 = $20.20
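This side-by-side math can be wrapped in a small helper for reuse. The prices are the Sydney figures quoted earlier; the formulas assume the ALB's LCU charge is driven by the bandwidth dimension only, and that the ELB pays $0.008 per GB processed:

```shell
# Hypothetical monthly LB cost comparison helper.
# $1 = average bandwidth in kbps, $2 = GB of data processed per month
lb_compare() {
  awk -v kbps="$1" -v gb="$2" 'BEGIN {
    hours = 24 * 30
    alb = 0.0252 * hours + (kbps / 2220) * 0.008 * hours   # hourly rate + LCU charge
    elb = 0.028  * hours + gb * 0.008                      # hourly rate + data processed
    printf "ALB $%.2f vs ELB $%.2f\n", alb, elb
  }'
}
lb_compare 2500 5000   # Example 1 figures
```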


So, in all three cases, the ALB works out cheaper than the ELB in a general side-by-side comparison.
But, the best way to measure your costs, is with Resource Tagging and Billing Reports with some real world testing.

Amazon Alexa: Converting mp3 files

When working with Amazon Alexa audio/mp3 files and using SSML there are a few simple rules.

This blog assumes awscli, wget and ffmpeg are installed and configured.

Here are the must haves for using mp3 audio in your Alexa skill;

  • Public access file (S3 is a good location), see below for upload tips
  • HTTPS endpoint
  • MP3 format
  • less than 90 seconds long
  • 48 kbps bit rate
  • 16,000 Hz sample rate

Now, let’s give it a try.

Download a sample audio file

We’ll use this Crowd Excited free sample

cd /tmp

Convert it to the right Alexa format

Use this ffmpeg command as an easy CLI converter, then use awscli to upload to an S3 bucket and set a public-read ACL on the object. ($OUTFILE and $BUCKET are placeholders – set them first.)

ffmpeg -i crowd-excited.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 $OUTFILE
aws s3 cp $OUTFILE s3://$BUCKET/ --storage-class REDUCED_REDUNDANCY
aws s3api put-object-acl --bucket $BUCKET --key $OUTFILE --acl public-read

Extra Tip — Working with the Alexa SDK Node.js

I got caught out when working with the Alexa SDK for Node.js. Make sure the audio declaration string ends with a space and a slash (" /") before the closing bracket, i.e. a self-closing tag. This might go in your index.js

// in index.js 
cheerAudio = `<audio src="" />`;
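For context, that audio tag ultimately sits inside the SSML your skill returns; the bucket URL below is a hypothetical placeholder:

```xml
<speak>
  Get ready!
  <audio src="https://s3.amazonaws.com/your-bucket/crowd-excited-alexa.mp3" />
</speak>
```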


amzn2-linux + bees

Have you ever needed to load test your website to validate how many requests per second it can handle? Check out this repo for a Packer file that builds an AWS EC2 AMI for use with bees-with-machine-guns.

AMI Features

  • built with Packer, so it’s repeatable
  • uses the latest AWS Linux 2 AMI (at time of writing)
  • yum update --security on each build
  • AWS keys are variables, not hard-coded
  • bees-with-machine-guns (and dependencies) installed

Check it Out

Everything you need to load test your website, so go build some bees and attack!!

Let me know how your website went.




Snippets I’ve used or collected.

Configure awscli

Enter keys, region and output defaults.

$ aws configure
AWS Access Key ID [****************FEBA]:
AWS Secret Access Key [****************I4b/]:
Default region name [ap-southeast-2]:
Default output format [json]:

Add bash completion via Tab key

echo "complete -C aws_completer aws" >> ~/.bash_profile
source ~/.bash_profile

Other shell completion options

Supercharged AWSCLI: SAWS or AWS-shell

Both offer improvements over the standard bash completions above.

S3 Accelerate Transfer

If you need to sync a large number of small files to S3, increasing the following values in your ~/.aws/config file will speed up the sync process. (You also need to enable Transfer Acceleration on the bucket – see the next command.)

Modify .aws/config

[profile default]
s3 =
  max_concurrent_requests = 100
  max_queue_size = 10000
  use_accelerate_endpoint = true

Enable S3 bucket for Accelerate Transfer

The following example sets Status=Enabled to enable Transfer Acceleration on a bucket. Use Status=Suspended to suspend Transfer Acceleration. This incurs an additional cost, so it should be disabled once the bulk upload is completed.

 $ aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled

Set default output type (choose 1)


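The snippet for the heading above was lost; one way to set the default output type is via aws configure set (assumed awscli syntax – pick one of json, text or table):

```shell
# Set the default output format for awscli (choose one of json, text, table)
aws configure set default.output json
```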

Wait for an EBS snapshot to complete

echo "Waiting for EBS snapshot"
aws ec2 wait snapshot-completed --snapshot-ids snap-aabbccdd
echo "EBS snapshot completed"

Get the public IPs or DNS of EC2 instances

Uses jq for filtering output.

apt-get install jq
aws ec2 describe-instances --filters "Name=tag:Name,Values=mystack-inst*" | jq --raw-output '.Reservations[].Instances[].PublicIpAddress'

AWS Tags – jq improvements

With jq, AWSCLI JSON output can be hard to query, but you can map Tags into a normal object like this:

jq '<path to Tags> | map({"key": .Key, "value": .Value}) | from_entries'

Find untagged EC2 instances.

This cmd finds instances NOT tagged ‘owner’.

aws ec2 describe-instances --output text --query 'Reservations[].Instances[?!not_null(Tags[?Key == `owner`].Value)] | [].[InstanceId]'
Create a tag for an EC2 instance

 NB. Add a for loop to tag all instances.

aws ec2 create-tags --resources $i --tags Key=owner,Value=me
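The for loop hinted at above can be sketched as a small function that combines the untagged-instance query with create-tags (the owner=me tag values are the ones from the examples; adjust as needed):

```shell
# Tag every instance that lacks an `owner` tag. Wrapped in a function so it
# can be sourced and invoked when ready.
tag_untagged_instances() {
  for i in $(aws ec2 describe-instances --output text \
      --query 'Reservations[].Instances[?!not_null(Tags[?Key == `owner`].Value)] | [].[InstanceId]'); do
    aws ec2 create-tags --resources "$i" --tags Key=owner,Value=me
  done
}
```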

Easy way to rotate AWS access keys

We all know we should change passwords often, well same goes for access keys.

This handy INTERACTIVE bash script walks you through creating a new AWS Access Key and saving the .pem file in your .ssh directory, and gives you the option to delete the old keys.

You can download from my gitlab – here

Hope this helps someone 🙂

#!/bin/bash
# @shallawell
# Program Name:
# Purpose: Manage AWS access keys
# version 0.1

# new key content will be created in this file.
# set FILE to your output file location before running
# remove the old file first
rm -f $FILE
### create a key
echo -n "Do you want to create a new Access/Secret key. (y/n) [ENTER]: "
# get user input
read response2
if [ "$response2" == "y" ]; then
  echo "Ok.. Creating new keys !!!"
  aws iam create-access-key --output json | grep Access | tail -2 | tee -a $FILE
  # Alternative create key command
  #aws ec2 create-key-pair --key-name=$KEY --region $REGION --query="KeyMaterial" --output=text > ~/.ssh/$KEY.pem
  # read-only the key
  #chmod 400 ~/.ssh/$KEY.pem
  echo "key created."
  echo "REMEMBER: You should rotate keys at least once a year! Max of 2 keys per user."
  echo "$FILE created for Access and Secret Keys"
  echo "HINT: Run aws configure to update keys. (you just rotated your keys!)"
else
  echo "Not creating keys."
  exit 0
fi

### list a key, save to DELKEY var
# this command LISTS the access keys for the current user, sorts by CreateDate,
# and gets the OLDEST AccessKeyId. awk grabs the Access key (excludes date field)
DELKEY=$(aws iam list-access-keys \
--query 'AccessKeyMetadata[].[AccessKeyId,CreateDate]' --output text \
| sort -r -k 2 | tail -1 | awk '{print $1}')

echo "list-access-keys sorted to find OLDEST key."
echo -n "Key Found : $DELKEY. Do you want to delete this key. (y/n) [ENTER]: "
# get user input
read response
if [ "$response" == "y" ]; then
  echo "you said yes. Deleting key in 3 secs!!!"
  sleep 3
  echo "delete-access-key disabled, NO REAL DELETE OCCURRED"
  ### delete a key, uncomment to activate the delete function.
  #aws iam delete-access-key --access-key-id $DELKEY
  echo "deleted $DELKEY"
else
  echo "you said no. Not deleting."
fi

echo "done."