Amazon Alexa: Converting mp3 files

When working with mp3 audio files and SSML in Amazon Alexa, there are a few simple rules.

This blog assumes awscli, wget and ffmpeg are installed and configured.

Here are the must-haves for using mp3 audio in your Alexa skill:

  • Publicly accessible file (S3 is a good location), see below for upload tips
  • HTTPS endpoint
  • MP3 format
  • less than 90 seconds long
  • 48 kbps bit rate
  • 16000 Hz sample rate
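A quick way to confirm a file meets the limits above is ffprobe, which ships with ffmpeg; a sketch (the file name is illustrative):

```shell
# Print bit_rate (bps), sample_rate (Hz) and duration (s) of the first audio stream
audio_specs() {
  ffprobe -v error -select_streams a:0 \
    -show_entries stream=bit_rate,sample_rate,duration \
    -of default=noprint_wrappers=1 "$1"
}
# e.g. audio_specs crowd-excited-alexa.mp3
```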

Now, let’s give it a try.

Download a sample audio file

We’ll use this Crowd Excited free sample

cd /tmp

Convert it to the right Alexa format

Use this ffmpeg command as an easy CLI converter, then use awscli to upload to an S3 bucket and set a public-read ACL on the object.

# illustrative names; change to suit
OUTFILE=crowd-excited-alexa.mp3
BUCKET=my-alexa-audio
ffmpeg -i crowd-excited.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 $OUTFILE
aws s3 cp $OUTFILE s3://$BUCKET/ --storage-class REDUCED_REDUNDANCY
aws s3api put-object-acl --bucket $BUCKET --key $OUTFILE --acl public-read
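Once the object is public, the HTTPS URL for your SSML can be assembled from the bucket and key; a sketch (the bucket and file names are illustrative, and the virtual-hosted S3 URL style is assumed):

```shell
# Build the public HTTPS URL and wrap it in an SSML audio tag
BUCKET=my-alexa-audio            # illustrative bucket name
OUTFILE=crowd-excited-alexa.mp3  # illustrative object key
AUDIO_URL="https://${BUCKET}.s3.amazonaws.com/${OUTFILE}"
echo "<speak><audio src=\"${AUDIO_URL}\" /></speak>"
```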

Extra Tip — Working with the Alexa SDK Node.js

I got caught out when working with the Alexa Node.js SDK. Make sure the audio declaration string ends with a space before the closing slash, i.e. ” />” (without quotes). This might go in your index.js:

// in index.js
const cheerAudio = `<audio src="" />`; // set src to your public HTTPS mp3 URL



AWS Account Number from awscli

An easy way to get your AWS account number from the awscli:

aws sts get-caller-identity --output text --query 'Account'

To put it to good use, combine it with the ec2 describe-images command to find all of YOUR AMI images:

aws ec2 describe-images --owners $(aws sts get-caller-identity --output text --query 'Account')
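As a side note, the account number is also the fifth `:`-separated field of any ARN, so if you already have an ARN on hand you can extract it without another API call (the ARN below is made up):

```shell
# Pull the account id out of an ARN with plain shell tools
ARN="arn:aws:iam::123456789012:user/alice"   # made-up example ARN
ACCOUNT_ID=$(echo "$ARN" | cut -d: -f5)
echo "$ACCOUNT_ID"
```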


Docker Bridge Networking – a working example


Run through these commands to get familiar with Docker bridge networking. Some experience with Docker is assumed, including that Docker is installed.
This example will create an internal Docker bridge network using a specified IP CIDR.

#download alpine container image
docker pull alpine 
# list docker networks
docker network ls 
# create docker network
docker network create myOverlayNet --internal --subnet 172.28.0.0/16 # example CIDR; pick one that does not clash with your networks
# list docker networks to check new network creation
docker network ls 
docker network inspect myOverlayNet

# Run a few containers in the new network
# -ti  interactive terminal (use Ctrl-p Ctrl-q to detach and leave it running)
# --name is the name for your new container, which can be resolved by DNS
# --network adds the container to the named network
docker run -ti --name host1 --network myOverlayNet alpine
# Press Ctrl-p and Ctrl-q to exit the container and leave it running
docker run -ti --name host2 --network myOverlayNet --link host1 alpine
# Press Ctrl-p and Ctrl-q to exit the container and leave it running
# Run a new container outside of myOverlayNet
docker run -ti --name OutsideHost --hostname OutsideHost alpine
docker start host1 host2 OutsideHost
# check they are all running
docker ps -a
# inspect the network to ensure the new containers (host1 and host2) have IPs from the subnet you specified; OutsideHost will be missing (as expected)
docker network inspect myOverlayNet
# inspect the default bridge network, OutsideHost will be there
docker network inspect bridge
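If the full inspect output is too noisy, a Go template can pull out just the container name and IP pairs; a sketch (network name as used in this walkthrough):

```shell
# List "name IPv4" pairs for every container attached to a network
container_ips() {
  docker network inspect "$1" \
    --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{printf "\n"}}{{end}}'
}
# e.g. container_ips myOverlayNet
```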


This demonstrates connectivity within the myOverlayNet network, and also that other networks, like the default bridge where OutsideHost lives, don’t have access.

docker exec allows us to run a command in a container and see the output on stdout.

# should succeed, Ctrl-C to stop the ping
docker exec host1 ping host2
# should succeed, Ctrl-C to stop the ping
docker exec host2 ping host1

# should fail, Ctrl-C to stop the ping
docker exec host1 ping OutsideHost
# should fail, Ctrl-C to stop the ping
docker exec OutsideHost ping host1

Clean up

Clean up everything we created: the containers and the bridge network.

# kill the running containers
docker kill host1 host2 OutsideHost
# remove containers
docker rm host1 host2 OutsideHost
# remove network
docker network rm myOverlayNet
# check docker networks
docker network ls
#check containers are removed
docker ps -a


Well done, all completed, and you now understand Docker bridge networking a little better. Reach out via the comments or Twitter if you have any questions.
by @shallawell, inspired by @ArjenSchwarz

amzn2-linux + bees

Have you ever needed to load test your website to validate how many requests per second it can handle? Check out this repo for a Packer file to build an AWS EC2 AMI for use with bees-with-machine-guns.

AMI Features

  • built with Packer, so it’s repeatable
  • uses the latest Amazon Linux 2 AMI (at time of writing)
  • yum update --security on each build
  • AWS keys are variables, not hard-coded
  • bees-with-machine-guns (and dependencies) installed

Check it Out

Everything you need to load test your website, so go build some bees and Attack!!

Let me know how your website went.



awscli – Advanced Query Output

Advanced JMESPath Query – good help and examples here.

Use these combinations of awscli commands to generate the JSON output you need.

Let me know via comments or twitter if you need some help 🙂  HTH.

# @shallawell
# Program Name:
# Purpose: Demonstrate JMESPath Query examples
# version 0.1
# The awscli uses JMESPath Query expressions, rather than regex.


# List all users, (basic query)
aws iam list-users --output text --query "Users[].UserName"
# List all users, NOT NULL
aws iam list-users --output text --query 'Users[?UserName!=`null`].UserName'
# list users STARTS_WITH "a"
aws iam list-users --output text --query 'Users[?starts_with(UserName, `a`) == `true`].UserName'
# list users CONTAINS "ad"
aws iam list-users --output text --query 'Users[?contains(UserName, `ad`) == `true`].UserName'

# get the latest mysql engine version
aws rds describe-db-engine-versions --output text \
--query 'DBEngineVersions[]|[?contains(Engine, `mysql`) == `true`].[Engine,DBEngineVersionDescription]' \
| sort -r -k 2 | head -1

easy – aws keypair

Create a new keypair for use with AWS EC2 instances.
HINT: It’s a good idea to name your keys with a region, as keypairs are region-specific.

REGION=ap-southeast-2   # illustrative values; change to suit
KEY=mykey-$REGION
aws ec2 create-key-pair --key-name=$KEY --region $REGION --query="KeyMaterial" --output=text > ~/.ssh/$KEY.pem
#readonly the key
chmod 400 ~/.ssh/$KEY.pem

Ready for use. (awscli must be installed.)

Enable IIS in Windows 10 with Powershell

SOMETIMES you need a local web server on your Windows 10 desktop.

To configure IIS on Windows 10 (or Windows 8), use these powershell commands to help get you started.

To enable (turn on) IIS

Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServerRole

To find out if IIS is running

Get-Service W3SVC

To restart IIS

Restart-Service W3SVC

Once you are done (testing maybe?), uninstall it. No-one wants to have their desktop compromised due to an IIS bug. 🙂

Disable-WindowsOptionalFeature -Online -FeatureName IIS-WebServerRole

Here is a PowerShell script that will download RSAT, then set up and enable IIS.

#download file function
function Download-File {
  param (
    [string]$url,
    [string]$saveAs
  )
  Write-Host "Downloading $url to $saveAs"
  $downloader = New-Object System.Net.WebClient
  $downloader.DownloadFile($url, $saveAs)
}

#download Windows Remote Server Admin Tool (allows Server Manager module)
$SaveAsFileName = "C:\temp\WindowsTH-RSAT_WS2016-x64.msu"
$File1DownloadPath = ""
Download-File $File1DownloadPath $SaveAsFileName
# install RSAT (.msu packages are installed with wusa.exe, not msiexec)
wusa.exe C:\temp\WindowsTH-RSAT_WS2016-x64.msu /quiet
# install IIS
Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServerRole
# check it is running
Get-Service W3SVC

Hope this helps.




Snippets I’ve used or collected.

Configure awscli

Enter keys, region and output defaults.

$ aws configure
AWS Access Key ID [****************FEBA]:
AWS Secret Access Key [****************I4b/]:
Default region name [ap-southeast-2]:
Default output format [json]:

Add bash completion via Tab key

echo "complete -C aws_completer aws" >> ~/.bash_profile
source ~/.bash_profile

Other shell completion options

Supercharged AWSCLI: SAWS or aws-shell

Both offer improvements over the standard bash completions above.

S3 Accelerate Transfer
If you need to sync a large number of small files to S3, increasing the following values in your ~/.aws/config file will speed up the sync process. (You also need to enable Transfer Acceleration on the bucket; see the next command.)

Modify .aws/config

[profile default]
s3 =
  max_concurrent_requests = 100
  max_queue_size = 10000
  use_accelerate_endpoint = true
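The same values can also be set from the command line with `aws configure set`, instead of editing ~/.aws/config by hand (a config-only sketch; requires awscli):

```shell
aws configure set default.s3.max_concurrent_requests 100
aws configure set default.s3.max_queue_size 10000
aws configure set default.s3.use_accelerate_endpoint true
```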

Enable S3 bucket for Accelerate Transfer

The following example sets Status=Enabled to enable Transfer Acceleration on a bucket; use Status=Suspended to suspend it. Acceleration incurs an additional cost, so it should be disabled once the bulk upload is completed.

 $ aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled

Set default output type (choose 1)

aws configure set default.output json   # or text, or table

Wait for an EBS snapshot to complete

echo "Waiting for EBS snapshot"
aws ec2 wait snapshot-completed --snapshot-ids snap-aabbccdd
echo "EBS snapshot completed"

Get the public IPs or DNS of EC2 instances

Uses jq for filtering the output.

apt-get install jq
aws ec2 describe-instances --filters "Name=tag:Name,Values=mystack-inst*" \
  | jq --raw-output ".Reservations[].Instances[].PublicIpAddress"

AWS Tags – jq improvements

With jq, AWSCLI JSON output can be hard to query, but you can map the Tags array into a normal object like this:

jq '<path to Tags> | map({"key": .Key, "value": .Value}) | from_entries'
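To see the mapping in action on a concrete Tags array (assumes jq is installed; the sample input mimics describe-instances output):

```shell
# Map [{"Key":...,"Value":...}] entries into a plain {key: value} object
tags_to_object() {
  jq "$@" 'map({"key": .Key, "value": .Value}) | from_entries'
}
# e.g.:
# echo '[{"Key":"owner","Value":"me"},{"Key":"env","Value":"dev"}]' | tags_to_object
# yields an object like {"owner":"me","env":"dev"}
```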

Find untagged EC2 instances.

This command finds instances NOT tagged with ‘owner’.

aws ec2 describe-instances   --output text    --query 'Reservations[].Instances[?!not_null(Tags[?Key == `owner`].Value)] | [].[InstanceId]'

Create a tag for EC2 instance

NB. $i below is an instance-id variable; add a for loop to tag all instances.

aws ec2 create-tags --resources $i --tags Key=owner,Value=me
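Putting the two snippets together, a hedged sketch of the loop the NB hints at: find every instance without an `owner` tag and tag it (requires a configured awscli; the tag value is illustrative):

```shell
# Tag every instance that lacks an `owner` tag
tag_untagged_instances() {
  for i in $(aws ec2 describe-instances --output text \
      --query 'Reservations[].Instances[?!not_null(Tags[?Key == `owner`].Value)] | [].[InstanceId]'); do
    aws ec2 create-tags --resources "$i" --tags Key=owner,Value=me
  done
}
# e.g. tag_untagged_instances
```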


update: More jq

Describe stacks by tags. Thanks to @arjenschwarz

aws cloudformation describe-stacks --region ap-southeast-2 --query 'Stacks[*].{name: StackName, tags: Tags}' | jq '.[] | if (.tags | from_entries | .template_suffix == "dev-yellow") then .name else "" end | select(. != "")'

Check out this blog on AWS Comprehend for a few more examples as well.


cloud9 IDE

If you need a Linux virtual desktop for developers, then check out Cloud9.

Within 5 minutes, I had registered, logged in, viewed the sample code, deployed it with an Apache web server, and run the code. Super easy!!

Some of the constraints of other solutions may be:

  • VMWare View 6 (v1.0 Linux desktops)
  • Hardware purchases
  • Building and managing infrastructure
  • Short-cycle project work