AWS CodeCommit, IAM, CodeBuild with awscli

This blog describes using the awscli to set up CodeCommit, SSH keys, CodeBuild and IAM roles.

Now, an admin of an AWS account could allow a user to;

  • provide an SSH public key – easily uploaded to IAM by awsadmin
  • receive the new project location, after the admin easily creates a project for them
  • git clone and always get the latest code – then make changes
  • upload a new Packer file, ready for another cookie-cutter rebuild – with a new timestamp


This assumes;

  • awscli is set up and AWS credentials are configured
  • an IAM user is already created (I'm using the awsadmin role), which has sufficient EC2 & CodeCommit privileges
  • git is installed
  • the AWS region is ap-southeast-2 (Sydney)

Setup IAM

Create an SSH key

cd $HOME/.ssh
ssh-keygen -f YourSSHKeyName -t rsa -C "YourEmailAddress" -b 4096

Upload public key to AWS IAM and save output to $NEW_SSHPublicKeyId

NEW_SSHPublicKeyId=$(aws iam upload-ssh-public-key --user-name awsadmin --ssh-public-key-body "$(cat ~/.ssh/YourSSHKeyName.pub)" --output text --query 'SSHPublicKey.SSHPublicKeyId')

Configure Linux to use CodeCommit as a git repo

Update your $HOME/.ssh/config file

echo "
Host git-codecommit.*
 User $NEW_SSHPublicKeyId
 IdentityFile ~/.ssh/YourSSHKeyName" >> config
chmod 600 config

Test the new SSH key works
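The test command itself is simple; assuming the Sydney endpoint that matches the region above, something like this should report a successful authentication (there is no shell, so the connection then closes):

ssh git-codecommit.ap-southeast-2.amazonaws.com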



Create a CodeCommit repo

aws codecommit create-repository --repository-name testrepo

Clone the empty repo

This makes a local directory with a git structure

git clone ssh://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/testrepo

Make your buildspec.yml file – it must be named exactly that.

The Packer filename can be anything, but buildspec.yml must reference it correctly.

buildspec.yml must contain the following (substitute the current download URLs for the Packer zip and the jq binary where marked). Cut/paste;

version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip <packer_zip_url> && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq <jq_binary_url> && chmod +x ./jq
      - echo "Validating amazon-linux_packer-template.json"
      - ./packer validate amazon-linux_packer-template.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, amazon-linux_packer-template.json"
      - ./packer build amazon-linux_packer-template.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

amazon-linux_packer-template.json – this is just a sample; make your own changes as needed.

 "variables": {
 "aws_region": "{{env `AWS_REGION`}}",
 "aws_ami_name": "amazon-linux_{{isotime \"02Jan2006\"}}"

"builders": [{
 "type": "amazon-ebs",
 "region": "{{user `aws_region`}}",
 "instance_type": "t2.micro",
 "ssh_username": "ec2-user",
 "ami_name": "{{user `aws_ami_name`}}",
 "ami_description": "Customized Amazon Linux",
 "associate_public_ip_address": "true",
 "source_ami_filter": {
 "filters": {
 "virtualization-type": "hvm",
 "name": "amzn-ami*-ebs",
 "root-device-type": "ebs"
 "owners": ["137112412989", "591542846629", "801119661308", "102837901569", "013907871322", "206029621532", "286198878708", "443319210888"],
 "most_recent": true

"provisioners": [
 "type": "shell",
 "inline": [
 "sudo yum update -y",
 "sudo /usr/sbin/update-motd --disable",
 "echo 'No unauthorized access permitted' | sudo tee /etc/motd",
 "sudo rm /etc/issue",
 "sudo ln -s /etc/motd /etc/issue",
 "sudo yum install -y elinks screen"

Commit to the CodeCommit repo

git add .
git commit -m "initial commit"
git push origin master


We need an IAM service role to run the build job; this is how.
Create an IAM service role for CodeBuild, then add the HashiCorp Packer inline policy plus a service-role policy.

Create a policy file called CodeBuild-IAM-policy.json for the inline policy.
I've added logs:CreateLogStream and logs:CreateLogGroup to the HashiCorp policy (merge the EC2 actions from the HashiCorp Packer policy into the Action list below).

 "Version": "2012-10-17",
 "Statement": [{
 "Effect": "Allow",
 "Action" : [
 "Resource" : "*"

Create the role with the trust policy, add the HashiCorp policy file as an inline policy, and then attach the ServiceRole policy

aws iam create-role --role-name codebuild-1stpacker-service-role --assume-role-policy-document file://CodeBuild-trust-policy.json
aws iam put-role-policy --role-name codebuild-1stpacker-service-role --policy-name HashiCorpPacker --policy-document file://CodeBuild-IAM-policy.json
aws iam attach-role-policy --role-name codebuild-1stpacker-service-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole

Then get the role ARN to update the create-project.json below

aws iam get-role --role-name codebuild-1stpacker-service-role --query "Role.Arn" --output text
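One way to capture the ARN in a variable and patch it straight into create-project.json (the sed pattern assumes the placeholder ARN used in the file below):

ROLE_ARN=$(aws iam get-role --role-name codebuild-1stpacker-service-role --query "Role.Arn" --output text)
sed -i "s|arn:aws:iam::AWS_ACCT:role/codebuild-1stpacker-service-role|$ROLE_ARN|" create-project.json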

Create a project

Create a create-project.json file, and update the ARN for the serviceRole you created above.

 "name": "AMI_Builder",
 "description": "AMI builder CodeBuild project",
 "source": {
 "type": "CODECOMMIT",
 "location": "",
 "buildspec": "buildspec"
 "artifacts": {
 "type": "NO_ARTIFACTS"
 "environment": {
 "image": "aws/codebuild/ubuntu-base:14.04",
 "computeType": "BUILD_GENERAL1_SMALL"
 "serviceRole": "arn:aws:iam::AWS_ACCT:role/service-role/codebuild-1stpacker-service-role"

Then create the project

aws codebuild create-project --cli-input-json file://create-project.json

Start a build

This takes around 5 minutes using a t2.micro and the default scripts.

aws codebuild start-build --project-name AMI_Builder
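If you want to watch progress from the CLI instead of the console, a rough sketch that captures the build ID and checks its status (re-run the second command until it reports SUCCEEDED):

BUILD_ID=$(aws codebuild start-build --project-name AMI_Builder --query 'build.id' --output text)
aws codebuild batch-get-builds --ids $BUILD_ID --query 'builds[0].buildStatus' --output text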

Check your new AMIs

 aws ec2 describe-images --owners $(aws sts get-caller-identity --output text --query 'Account') --query 'Images[?contains(Name, `amazon-linux`)]'


All done. A fairly simple, yet powerful and dynamic way to allow users/sysadmins to churn out new AMIs.

Next improvements would be to;

  • Trigger a CodePipeline build on code commits
  • Add an SNS topic for email notifications of build status
  • Replace it all with CFN

Inspiration for this AWS CLI focused blog came from the AWS DevOps Blog.


Amazon Alexa: Converting mp3 files

When working with Amazon Alexa audio/mp3 files and using SSML there are a few simple rules.

This blog assumes awscli, wget and ffmpeg are installed and configured.

Here are the must-haves for using mp3 audio in your Alexa skill;

  • a publicly accessible file (S3 is a good location – see below for upload tips)
  • an HTTPS endpoint
  • MP3 format
  • less than 90 seconds long
  • 48 kbps bit rate
  • 16000 Hz sample rate

Now, let’s give it a try.

Download a sample audio file

We’ll use this Crowd Excited free sample

cd /tmp
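Grab it with wget, saving it under the filename the ffmpeg command below expects (substitute the sample's download URL):

wget -O crowd-excited.mp3 "<sample-audio-url>"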

Convert it to the right Alexa format

Use this ffmpeg command as an easy CLI converter, then use awscli to upload to an S3 bucket and set a public-read ACL on the object. Set OUTFILE and BUCKET first (the values below are placeholders).

OUTFILE=crowd-excited-alexa.mp3   # converted output file – name it what you like
BUCKET=your-bucket-name           # your S3 bucket
ffmpeg -i crowd-excited.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 $OUTFILE
aws s3 cp $OUTFILE s3://$BUCKET/ --storage-class REDUCED_REDUNDANCY
aws s3api put-object-acl --bucket $BUCKET --key $OUTFILE --acl public-read

Extra Tip — Working with the Alexa SDK Node.js

I got caught out when working with the Alexa SDK for Node.js. Make sure the audio declaration string ends with a space and slash ” /” (without quotes) so the tag is self-closing, and that src points to the HTTPS URL of your uploaded mp3. This might go in your index.js

// in index.js 
cheerAudio = `<audio src="" />`;


Docker Bridge Networking – a working example


Run through these commands to get familiar with Docker bridge networking. Some experience with Docker is assumed, including that Docker is installed.
This example will create an internal Docker bridge network, using a specified IP CIDR (192.168.100.0/24 is used as the example below).

#download alpine container image
docker pull alpine 
# list docker networks
docker network ls 
# create an internal docker network (example subnet – use your own CIDR)
docker network create myOverlayNet --internal --subnet 192.168.100.0/24
# list docker networks to check new network creation
docker network ls 
docker network inspect myOverlayNet

# Run a few images in the new network 
# - d daemon mode
# --name is the name for your new container, which can be resolved by dns
# --network add to network
docker run -ti --name host1 --network myOverlayNet alpine
# Press Ctrl-p and Ctrl-q to exit the container and leave it running
docker run -ti --name host2 --network myOverlayNet --link host1 alpine
# Press Ctrl-p and Ctrl-q to exit the container and leave it running
# Run a new container outside of myOverlayNet
docker run -ti --name OutsideHost --hostname OutsideHost alpine
docker start host1 host2 OutsideHost
# check they are all running
docker ps -a
# inspect the network to ensure the new containers (host1 and host2) have IPs from the subnet above. OutsideHost will be missing (as expected)
docker network inspect myOverlayNet
# inspect the default bridge network, OutsideHost will be there
docker network inspect bridge


This demonstrates connectivity within the myOverlayNet network, and also that containers on other networks – like the default bridge where OutsideHost lives – don't have access.

docker exec allows us to run a command in a container and see the output on stdout.

# should succeed, Ctrl-C to stop the ping
docker exec host1 ping host2
# should succeed, Ctrl-C to stop the ping
docker exec host2 ping host1

# should fail, Ctrl-C to stop the ping
docker exec host1 ping OutsideHost
# should fail, Ctrl-C to stop the ping
docker exec OutsideHost ping host1

Clean up

Clean up everything we created – the containers and the network.

# kill the running containers
docker kill host1 host2 OutsideHost
# remove containers
docker rm host1 host2 OutsideHost
# remove network
docker network rm myOverlayNet
# check docker networks
docker network ls
#check containers are removed
docker ps -a


Well done, all completed, and you now understand Docker bridge networking a little better. Reach out via the comments or Twitter if you have any questions.
by @shallawell, inspired by @ArjenSchwarz


Snippets I’ve used or collected.

Configure awscli

Enter keys, region and output defaults.

$ aws configure
AWS Access Key ID [****************FEBA]:
AWS Secret Access Key [****************I4b/]:
Default region name [ap-southeast-2]:
Default output format [json]:

Add bash completion via Tab key

echo "complete -C aws_completer aws" >> ~/.bash_profile
source ~/.bash_profile

Other shell completion options

Supercharged AWSCLI: SAWS or aws-shell

Both offer improvements over the standard bash completions above.

S3 Accelerate Transfer

If you need to sync a large number of small files to S3, increasing the following values in your ~/.aws/config file will speed up the sync process. (You also need to enable Transfer Acceleration on the bucket – see the next command.)

Modify .aws/config

[profile default]
s3 =
  max_concurrent_requests = 100
  max_queue_size = 10000
  use_accelerate_endpoint = true

Enable S3 bucket for Accelerate Transfer

The following example sets Status=Enabled to enable Transfer Acceleration on a bucket. Use Status=Suspended to suspend Transfer Acceleration. This incurs an additional cost, so it should be disabled once the bulk upload is completed.

 $ aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled
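To confirm the bucket setting and then run the accelerated sync (the bucket and directory names below are placeholders):

aws s3api get-bucket-accelerate-configuration --bucket bucketname
aws s3 sync ./local-dir s3://bucketname/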

Set default output type (choose 1)
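One way, using aws configure set (json shown – swap in text or table if you prefer):

aws configure set default.output json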



echo "Waiting for EBS snapshot"
aws ec2 wait snapshot-completed --snapshot-ids snap-aabbccdd
echo "EBS snapshot completed"

Get the public IPs or DNS of EC2 instances

Uses jq for filtering output.

apt-get install jq
aws ec2 describe-instances --filters "Name=tag-value,Values=mystack-inst*" | jq --raw-output '.Reservations[].Instances[].PublicIpAddress'

AWS Tags – jq improvements

With jq, AWSCLI JSON output can be hard to query, but you can map the Tags array into a normal object like this:

jq '<path to Tags> | map({"key": .Key, "value": .Value}) | from_entries'
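For example, against describe-instances output (the instance ID is a placeholder), this turns the Tags array into a plain object you can index by key:

aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
 | jq '.Reservations[].Instances[].Tags | map({"key": .Key, "value": .Value}) | from_entries'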

Find untagged EC2 instances

This cmd finds instances NOT tagged 'owner'.

aws ec2 describe-instances --output text --query 'Reservations[].Instances[?!not_null(Tags[?Key == `owner`].Value)] | [].[InstanceId]'
Create a tag for EC2 instance

NB. Add a for loop to tag all instances (see the sketch below this command).

aws ec2 create-tags --resources $i --tags Key=owner,Value=me
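A minimal sketch of that loop, feeding the untagged-instance query from above into create-tags (the owner/me tag values are just examples):

for i in $(aws ec2 describe-instances --output text \
  --query 'Reservations[].Instances[?!not_null(Tags[?Key == `owner`].Value)] | [].[InstanceId]'); do
  aws ec2 create-tags --resources $i --tags Key=owner,Value=me
done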

cloud9 IDE

If you need a Linux virtual desktop for developers, then check out Cloud9.

Within 5 minutes, I had registered, logged in, viewed the sample code, deployed it along with an Apache web server, and run the code. Super easy!!

Some of the constraints of other solutions may be;

  • VMware View 6 (v1.0 Linux desktops)
  • Hardware purchases
  • Building and managing infrastructure
  • Short-cycle project work

Easy way to rotate AWS access keys

We all know we should change passwords often, well same goes for access keys.

This handy INTERACTIVE bash script walks you through creating a new AWS access key (with a commented-out alternative that creates an EC2 key pair and saves the .pem file in your .ssh directory), and gives you the option to delete the oldest key.

You can download it from my GitLab – here.

Hope this helps someone 🙂

#!/bin/bash
# @shallawell
# Program Name:
# Purpose: Manage AWS access keys
# version 0.1

# new key content will be created in this file (filename assumed for this example)
FILE=new-aws-access-key.txt
#remove the old file first
rm -f $FILE
### create a key
echo -n "Do you want to create a new Access/Secret key. (y/n) [ENTER]: "
#get user input
read response2
if [ "$response2" == "y" ]; then
echo "Ok.. Creating new keys !!!"
aws iam create-access-key --output json | grep Access | tail -2 | tee -a $FILE
#Alternative create key command (EC2 key pair instead of an IAM access key)
#aws ec2 create-key-pair --key-name=$KEY --region $REGION --query="KeyMaterial" --output=text > ~/.ssh/$KEY.pem
#readonly the key
#chmod 400 ~/.ssh/$KEY.pem
echo "key created."
echo "REMEMBER: You should rotate keys at least once a year! Max of 2 keys per user."
echo "$FILE created for Access and Secret Keys"
echo "HINT: Run aws configure to update keys. (you just rotated your keys!)"
else
echo "Not creating keys."
exit 0
fi

### list the keys, save the oldest to the DELKEY var
#this command LISTS the access keys for the current user, sorts by CreateDate,
#and keeps the OLDEST AccessKeyId. awk grabs the Access key (excludes the date field)
DELKEY=$(aws iam list-access-keys --output text \
--query 'AccessKeyMetadata[].[AccessKeyId,CreateDate]' \
| sort -r -k 2 | tail -1 | awk {'print $1'})

echo "list-access-keys sorted to find OLDEST key."
echo -n "Key Found : $DELKEY. Do you want to delete this key. (y/n) [ENTER]: "
#get user input
read response
if [ "$response" == "y" ]; then
echo "you said yes. Deleting Key in 3 secs!!!"
sleep 3
echo "delete-access-key disabled, NO REAL DELETE OCCURRED"
### delete a key, uncomment to activate the delete function.
#aws iam delete-access-key --access-key-id $DELKEY
echo "deleted $DELKEY"
else
echo "you said no. Not Deleting"
fi

echo "done."

GIT – setup a new private repo and pull,push code

This is suitable for GITLAB, but with a little modification would work with other GITs.

Install git on Ubuntu

apt-get install -y git

How to setup git

git config --global "name"
git config --global ""
git remote add origin http://IP/path/to/repository

How to create new PRIVATE repo (for GITLAB)

This needs to be done via the GitLab API. Grab your Personal Access Token from your GitLab Account Settings and use this bash script.

Get it here

#!/bin/bash
# @shallawell
# Program Name:
# Version = 0.0.2
# if the remote repo does not exist, use this script before you attempt to
# git push
# Can create a simple Gitlab repo
# start ===============================
#set vars (the TOKEN and REMOTE_GIT values below are placeholders – set your own)
TOKEN=YOURTOKENHERE
REMOTE_GIT=http://your-gitlab-server
REPO_NAME=$1
#Public or Private repo, set to true for PUBLIC REPO
PUBLIC=false

# test vars
if [ "$TOKEN" == "YOURTOKENHERE" ]; then
 echo "Error : update the git TOKEN variable."
 exit 0
fi

if [ ${#@} == 0 ]; then
 echo "Error : Repo name required."
 echo "Usage: $0 REPO_NAME"
 echo "You need to provide the repository/project name to create on the remote server."
 echo "Ensure TOKEN has been set."
 exit 1
fi

# main command
curl -H "Content-Type:application/json" $REMOTE_GIT/api/v3/projects?private_token=$TOKEN -d \
 "{ \"name\": \"$REPO_NAME\", \"public\": $PUBLIC }"
# end ===============================
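Assuming you saved the script as create-gitlab-repo.sh (a filename chosen for this example), usage might look like this, followed by wiring up the new remote and pushing:

./create-gitlab-repo.sh myNewRepo
git remote add origin http://your-gitlab-server/yourusername/myNewRepo.git
git push -u origin master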

How to Pull, Checkout and Push

git pull <repo-link>
git checkout -b master
<make your code changes>
<make changes to .gitignore to exclude files>
git add .
git commit -m "updated the code to/for ..."
git show
git push origin master

Other GIT handy commands

git fetch <repo-name>       # get changes, rather than pull whole code base
git init                    # initialize new local dir
git status                  # print status
git remote get-url origin   # check the origin

Nice list of Aliases for .bashrc

$ cat .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
 . /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
alias cp='cp -iv' # Preferred 'cp' implementation
alias mv='mv -iv' # Preferred 'mv' implementation
alias mkdir='mkdir -pv' # Preferred 'mkdir' implementation
alias ll='ls -FGlAhp' # Preferred 'ls' implementation
alias la='ll -FGlAhpa' # Preferred 'ls -a' implementation
alias less='less -FSRXc' # Preferred 'less' implementation
cd() { builtin cd "$@"; ll; } # Always list directory contents upon 'cd'
alias cd..='cd ../' # Go back 1 directory level (for fast typers)
alias ..='cd ../' # Go back 1 directory level
alias ...='cd ../../' # Go back 2 directory levels
alias .3='cd ../../../' # Go back 3 directory levels
alias .4='cd ../../../../' # Go back 4 directory levels
alias .5='cd ../../../../../' # Go back 5 directory levels
alias .6='cd ../../../../../../' # Go back 6 directory levels
alias findall='find / -name' # find on the whole filesystem
alias sudo="sudo " # A trailing space in value causes the next word to be checked for alias substitution when the alias is expanded

# lr: Full Recursive Directory Listing
# ------------------------------------------
alias lr='ls -R | grep ":$" | sed -e '\''s/:$//'\'' -e '\''s/[^-][^\/]*\//--/g'\'' -e '\''s/^/ /'\'' -e '\''s/-/|/'\'' | less'

# showa: to remind yourself of an alias (given some part of it)
# ------------------------------------------------------------
 showa () { /usr/bin/grep --color=always -i -a1 $@ ~/Library/init/bash/aliases.bash | grep -v '^\s*$' | less -FSRXc ; }

alias mypubip='curl -s https://checkip.amazonaws.com' # mypubip: Public facing IP Address (endpoint assumed; any what-is-my-IP service works)

alias openPorts='ss -t -a -l' # openPorts: List Listening TCP ports

# ii: display useful host related information
# -------------------------------------------------------------------
 RED='\033[0;31m' ; NC='\033[0m' # colour codes used by ii() (definitions assumed; not in the original snippet)
 ii() {
 echo -e "\nYou are logged on ${RED}$HOSTNAME$NC"
 echo -e "\nAdditional information:$NC " ; uname -a
 echo -e "\n${RED}Users logged on:$NC " ; w -h
 echo -e "\n${RED}Current date :$NC " ; date
 echo -e "\n${RED}Machine stats :$NC " ; uptime
 echo -e "\n${RED}Public facing IP Address :$NC " ; mypubip
 }

# httpDebug: Download a web page and show info on what took time
# -------------------------------------------------------------------
 httpDebug () { /usr/bin/curl $@ -o /dev/null -w "dns: %{time_namelookup} connect: %{time_connect} pretransfer: %{time_pretransfer} starttransfer: %{time_starttransfer} total: %{time_total}\n" ; }