6 tips for AWS Well-Architected Cost Optimisation + Serverless bonuses

Here are 6 tips and tools for improving your Cost Optimisation strategy on AWS. Grab the free tools below to gain deeper cost insight into your account.

Trusted Advisor

The basic level of Trusted Advisor checks over 30 key metrics for Security and Service Limits, but the real gold is having Business or Enterprise support, which gives you access to a larger range of checks (including Cost Optimisation) and programmatic access to the Trusted Advisor API. You can then monitor Reserved Instance (RI) usage and under-utilised or idle EC2 or RDS resources. Try the AWS tools at https://github.com/aws/Trusted-Advisor-Tools, which use CloudWatch Events and Lambda to help automate best practices.
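With API access, pulling the list of cost checks is a one-liner. A hedged sketch (requires Business or Enterprise support; the support API lives in us-east-1):

aws support describe-trusted-advisor-checks --language en --region us-east-1 \
  --query 'checks[?category==`cost_optimizing`].[id,name]' --output table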

Billing Alarms

Billing alarms allow you to be notified proactively about cost increases. Create a billing alarm in the Billing console, set limits (create multiple alarms for different thresholds: $10, $50, $1000) and pair them with SNS/email notifications, so you are told when a threshold is breached. Grab this free CloudFormation template to create a quick billing alarm: https://gitlab.com/shallawell/cfn-billing-alarm
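If you prefer the CLI to the console, a minimal sketch (the SNS topic ARN is a placeholder; billing metrics only exist in us-east-1 and require billing alerts to be enabled first):

aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name billing-alarm-50usd \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum --period 21600 --evaluation-periods 1 \
  --threshold 50 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts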

Cost Explorer

Visualise your spend by resource. Use this AWS SAM (Serverless Application Model) function to generate a monthly Excel report and have it delivered to your inbox: https://github.com/aws-samples/aws-cost-explorer-report

Resource Tagging Strategy

Use a tagging strategy to assist cost allocation by enabling cost reporting by tag. Consider the Business Tags suggestions in the AWS Tagging Strategies documentation for some ideas. Used in conjunction with the Cost Explorer script above (update the Cost Tags), you can start to gain new insights.
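Tagging itself is a one-liner per resource. A hedged sketch (the instance ID and tag keys are hypothetical, and tags must also be activated as cost allocation tags in the Billing console before they show up in reports):

aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=CostCentre,Value=Marketing Key=Project,Value=Website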

AWS Simple Monthly Calculator

This handy calculator lets you estimate resource costs before deployment. It is useful in some scenarios, but it has limited (hence the name: simple) options for some resource types. It is good for a simple three-tier app with typical storage and data transfer, or when you just need a quick EC2 cost comparison to weigh up multiple instance types side by side.

AWS News

This is by far the easiest way to stay on top of new announcements and services (operational excellence) and another way to be cost aware. For example, M5 EC2 instances deliver 14% better price/performance than M4 on a per-core basis. If you are still using M4s, consider migrating to the newer, faster (performance efficiency), cheaper (cost optimisation) technology.

Got an RSS reader? No? Get one – and subscribe to AWS News. I use a browser-based one (do a search), so updates are only a click away.

HTH


2 handy AWS S3 public-read commands

I needed to host a simple public HTML file. S3 makes this easy.

Making a single file publicly readable in S3 is straightforward, without having to open up access to the whole bucket. This method also uses the newer ONEZONE_IA storage class to save a few cents.

It uses the s3 and s3api commands from awscli.

Update the FILE and BUCKET variables to make it yours. Tested on Ubuntu.

FILE=privacy_policy.html
BUCKET=my-awesome-bucket-name
aws s3 cp $FILE s3://$BUCKET/ --storage-class ONEZONE_IA
aws s3api put-object-acl --bucket $BUCKET --key $FILE --acl public-read
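A quick way to confirm the object really is public (hypothetical bucket name and region in the URL):

curl -I https://my-awesome-bucket-name.s3.ap-southeast-2.amazonaws.com/privacy_policy.html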

HTH

Streamlined Security for AWS Well-Architected solutions using Inspector & Guard Duty

Improving security is always a good idea. Some people still think only about static security, such as Security Groups, SSL or VPNs.

Consideration should also be given to application vulnerabilities. Adding a regular scanning process adds another layer of defence. Remember: test is best.

AWS Inspector

AWS Inspector has been around since October 2015, so it's not new, but its simple setup makes it easy to get started – even if you haven't tried it yet.

It uses an awsagent installed inside the AMI to allow the Inspector service to connect and perform the scan. Results are rated as High, Medium, Low and Informational severity via the Inspector console's Findings tab.

Benefits for each Well-Architected pillar are:

  • Improve Security, easy and early detection of security issues for your applications, a streamlined compliance approach and an overall better security position
  • Gain Performance Efficiency, integrate with DevOps through your CI/CD pipeline and increase your development agility
  • Ensure Reliability, as Inspector can be run it as often as needed, in parallel (but only 1 concurrent scan per agent), with repeatable and measurable results
  • Improve Operational Excellence, by using a managed service your operating burden is reduced, and you leverage the expertise of security experts through preconfigured tests. You also save time by not having to submit a request to AWS Support for approval to run vulnerability scans, which is otherwise required under the Acceptable Use Policy for Security.
  • Cost Optimisation, its cost is certainly justified in most cases compared with a roll-your-own solution

A Use Case study – Hardening a custom AMI to verify CIS Benchmark compliance

  • Build your own custom AMI
  • Patch and secure it using CIS Benchmark best practices.
  • Tag instances (I used InspectorReady:yes)
  • Download and install the Inspector agent with these commands
# install Inspector Agent software
wget https://d1wk0tztpsntt1.cloudfront.net/linux/latest/install 
sudo bash install 
sudo /opt/aws/awsagent/bin/awsagent status    # check agent status
  • Go to the Inspector console and create an IAM role for Inspector to use (if not already done)
  • Create/select targets for assessment based on the tag you created
  • Create an Assessment Template, adding multiple rules packages (the more you add, the longer the scan takes)
    • CIS Operating System Security Configuration Benchmarks-1.0
    • Security Best Practices-1.0
  • Run the Inspector assessment on the assessment targets (this can also be scripted – see the sketch after this list)
  • Review the Findings; they can be downloaded as CSV as well
  • Remediate High risk issues as a priority, using the Findings as a task list
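The same run can be scripted for a CI/CD pipeline. A hedged sketch (the template ARN is a placeholder – list yours with aws inspector list-assessment-templates):

aws inspector start-assessment-run --assessment-run-name nightly-scan \
  --assessment-template-arn arn:aws:inspector:ap-southeast-2:123456789012:target/0-xxxxxxxx/template/0-xxxxxxxx
aws inspector list-findings --max-results 50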

Using the Common Vulnerabilities and Exposures-1.1 rules package, a scan took around an hour to complete using a (poor choice!) t2.micro instance. A faster instance type (M5 or C5) would yield quicker results.

Inspector Findings on a new Bitnami WordPress install showed 30 High risk issues (52 findings overall).


Pricing is reasonable: 25 agent-runs x $0.30 = $7.50, and you get the first 250 agent-runs free in the initial 90 days as part of the AWS Free Tier. So it's basically free to give it a try and then start to incorporate it into your CI/CD pipeline.

AWS GuardDuty

Further complement your good work with Inspector by enabling AWS GuardDuty for all your accounts. Unlike Inspector, which checks for threats within the AMI or at the OS level via an agent, GuardDuty does the same for your AWS account activity – continuously and without agents. Within 30 minutes of enabling GuardDuty (and launching a new test instance with unrestricted ports), I checked the Findings and it had already flagged results.
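Enabling it from the CLI is a one-liner per region. A sketch:

aws guardduty create-detector --enable
DETECTOR=$(aws guardduty list-detectors --query 'DetectorIds[0]' --output text)
aws guardduty list-findings --detector-id $DETECTOR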

GuardDuty has associated costs, based on the number of CloudTrail events and GB of logs assessed. Once you sign up, you get 30 days free. Via the GuardDuty console you can check your Free Tier usage progress, as well as an estimate of the monthly costs you could expect after the free tier expires. This helps you make informed cost decisions.

Another good idea is to subscribe to a Vulnerability Notification list for your applications to ensure you are staying up to date with potential issues. It’s good practice to patch security often.

As a champion for Well-Architected systems, these tools tick all the boxes across the 5 pillars, not just Security.

HTH

3 tools for Well-Architected Security and Operational Pillars + bonuses

Everyone loves tools. Here are a few free ones to add to your toolkit.

Using checklists in life is a great way to keep track of the important (and sometimes less-so) things. The same applies when considering IT operations and security.

Below are a few things I have put on 'my list'. They also serve as a great guide for giving your AWS environment a friendly assessment of its readiness for operations and security, as well as providing a solid set of benchmark documentation for you to keep.

Use these as a guide depending on the size of your environment or company (enterprises usually have much more rigour than a start-up or smaller shop) and your complexity and compliance needs. Of course, you can always create your own to suit your requirements as well; review these for inspiration.

AWS Checklists

AWS_Security_Checklist_General.pdf

AWS_Auditing_Security_Checklist.pdf

AWS_Operational_Checklists.pdf

The Auditing Checklist can be used to help auditors of your environment understand how auditing in AWS can be achieved, considering controls such as the AWS Shared Responsibility Model.

Plus the bonuses

Using checklists for operational and security fitness is a key foundation of the Security and Operational Excellence pillars of an AWS Well-Architected environment, using the Well-Architected Framework as a basis.

CIS – AWS Foundations Benchmark

The Center for Internet Security (CIS) has released an extensive set of security recommendations specifically for AWS environments. Use this excellent AWS CIS Benchmark document to improve and validate your security posture. The guide also includes advanced techniques.

PCI Cloud Compliance Technical Workbook

If you operate in a more sensitive environment and need to meet compliance requirements, you might find it useful to also check out this handy technical workbook from Anitian. It outlines controls that can be used with AWS to achieve PCI-DSS compliance. Also check out the AWS Risk and Compliance Whitepaper for further compliance information on various standards.

HTH

AWS CodeCommit, IAM, CodeBuild with awscli

This blog describes using awscli to set up CodeCommit, SSH keys, CodeBuild and IAM roles.

Now, an admin of an AWS account could allow a user to:

  • provide an SSH public key – easily uploaded to IAM by the awsadmin
  • receive the new project location, after the admin easily creates a project for them
  • git clone to always get the latest code – then make changes
  • upload a new Packer file, ready for another cookie-cutter rebuild – with a new timestamp

Assumptions

  • awscli is set up / AWS credentials are configured
  • an IAM user is already created (I'm using awsadmin), with sufficient EC2 and CodeCommit privileges
  • git is installed
  • AWS region is ap-southeast-2 (Sydney)

Setup IAM

Create an SSH key

cd $HOME/.ssh
ssh-keygen -f YourSSHKeyName -t rsa -C "YourEmailAddress" -b 4096

Upload public key to AWS IAM and save output to $NEW_SSHPublicKeyId

NEW_SSHPublicKeyId=$(aws iam upload-ssh-public-key --user-name awsadmin --ssh-public-key-body "$(cat ~/.ssh/YourSSHKeyName.pub)" --output text --query 'SSHPublicKey.SSHPublicKeyId')

Configure Linux to use CodeCommit as a git repo

Update your $HOME/.ssh/config file:

echo "
Host git-codecommit.*.amazonaws.com
 User $NEW_SSHPublicKeyId
 IdentityFile ~/.ssh/YourSSHKeyName" >> config
chmod 600 config

Test that the new SSH key works:

ssh git-codecommit.ap-southeast-2.amazonaws.com

CodeCommit

Create a CodeCommit repo

aws codecommit create-repository --repository-name 1stPackerCodeCommitRepo

The response includes the repository metadata; the SSH clone URL will be ssh://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/1stPackerCodeCommitRepo

Clone the empty repo

This makes a local directory with a git structure

git clone ssh://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/1stPackerCodeCommitRepo

Make your buildspec.yml file – it must be named exactly this.

The Packer filename can be anything, but buildspec.yml must reference it correctly.

buildspec.yml must contain the following. Cut/paste:

---
version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.2.1/packer_1.2.1_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating amazon-linux_packer-template.json"
      - ./packer validate amazon-linux_packer-template.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, amazon-linux_packer-template.json"
      - ./packer build amazon-linux_packer-template.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

amazon-linux_packer-template.json – this is just a sample; make your own changes as needed.

{
  "variables": {
    "aws_region": "{{env `AWS_REGION`}}",
    "aws_ami_name": "amazon-linux_{{isotime \"02Jan2006\"}}"
  },

  "builders": [{
    "type": "amazon-ebs",
    "region": "{{user `aws_region`}}",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "{{user `aws_ami_name`}}",
    "ami_description": "Customized Amazon Linux",
    "associate_public_ip_address": "true",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "amzn-ami*-ebs",
        "root-device-type": "ebs"
      },
      "owners": ["137112412989", "591542846629", "801119661308", "102837901569", "013907871322", "206029621532", "286198878708", "443319210888"],
      "most_recent": true
    }
  }],

  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo yum update -y",
        "sudo /usr/sbin/update-motd --disable",
        "echo 'No unauthorized access permitted' | sudo tee /etc/motd",
        "sudo rm /etc/issue",
        "sudo ln -s /etc/motd /etc/issue",
        "sudo yum install -y elinks screen"
      ]
    }
  ]
}

Commit to CodeCommitRepo

git add .
git commit -m "initial commit"
git push origin master

CodeBuild

We need a role to run the job; here's how. Create an IAM service role for CodeBuild, then add the HashiCorp inline policy and the service-role policy from https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer/

Create a policy file called CodeBuild-IAM-policy.json. I've added logs:CreateLogStream and logs:CreateLogGroup to the HashiCorp policy.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CopyImage",
      "ec2:CreateImage",
      "ec2:CreateKeypair",
      "ec2:CreateSecurityGroup",
      "ec2:CreateSnapshot",
      "ec2:CreateTags",
      "ec2:CreateVolume",
      "ec2:DeleteKeypair",
      "ec2:DeleteSecurityGroup",
      "ec2:DeleteSnapshot",
      "ec2:DeleteVolume",
      "ec2:DeregisterImage",
      "ec2:DescribeImageAttribute",
      "ec2:DescribeImages",
      "ec2:DescribeInstances",
      "ec2:DescribeRegions",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeSnapshots",
      "ec2:DescribeSubnets",
      "ec2:DescribeTags",
      "ec2:DescribeVolumes",
      "ec2:DetachVolume",
      "ec2:GetPasswordData",
      "ec2:ModifyImageAttribute",
      "ec2:ModifyInstanceAttribute",
      "ec2:ModifySnapshotAttribute",
      "ec2:RegisterImage",
      "ec2:RunInstances",
      "ec2:StopInstances",
      "ec2:TerminateInstances",
      "logs:CreateLogStream",
      "logs:CreateLogGroup"
    ],
    "Resource": "*"
  }]
}
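One gotcha: create-role expects a trust policy (who may assume the role), not the permissions policy above. A minimal sketch of a trust document, saved as CodeBuild-trust-policy.json (the filename is my own choice):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "codebuild.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}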

Create the role with the trust policy, add the inline permissions policy, and then attach the ServiceRole policy:

aws iam create-role --role-name codebuild-1stpacker-service-role --assume-role-policy-document file://CodeBuild-trust-policy.json
aws iam put-role-policy --role-name codebuild-1stpacker-service-role --policy-name packer-build --policy-document file://CodeBuild-IAM-policy.json
aws iam attach-role-policy --role-name codebuild-1stpacker-service-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole

Then get the role ARN, which you'll need for the create-project.json below:

aws iam get-role --role-name codebuild-1stpacker-service-role --query "Role.Arn" --output text

Create a project

Create a create-project.json file, and update the serviceRole ARN with the one you retrieved above.

{
  "name": "AMI_Builder",
  "description": "AMI builder CodeBuild project",
  "source": {
    "type": "CODECOMMIT",
    "location": "https://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/1stPackerCodeCommitRepo",
    "buildspec": "buildspec.yml"
  },
  "artifacts": {
    "type": "NO_ARTIFACTS"
  },
  "environment": {
    "type": "LINUX_CONTAINER",
    "image": "aws/codebuild/ubuntu-base:14.04",
    "computeType": "BUILD_GENERAL1_SMALL"
  },
  "serviceRole": "arn:aws:iam::AWS_ACCT:role/codebuild-1stpacker-service-role"
}

Then create the project:

aws codebuild create-project --cli-input-json file://create-project.json

Start a build

This takes around 5 minutes using a t2.micro and the default scripts.

aws codebuild start-build --project-name AMI_Builder
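If you'd rather poll the build from the CLI than watch the console, a sketch that captures the build id when starting and then checks its status:

BUILD_ID=$(aws codebuild start-build --project-name AMI_Builder --query 'build.id' --output text)
aws codebuild batch-get-builds --ids $BUILD_ID --query 'builds[0].buildStatus' --output text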

Check your new AMIs (the Packer template names them amazon-linux_<date>):

aws ec2 describe-images --owner $(aws sts get-caller-identity --output text --query 'Account') --query 'Images[?contains(Name, `amazon-linux`)]'

Conclusion

All done. A fairly simple yet powerful and dynamic way to let users and sysadmins churn out new AMIs.

Next improvements would be to:

  • Trigger a CodePipeline build for code commits
  • SNS topic for email notifications of build status
  • Replace it all with CFN

Inspiration for this awscli-focused blog came from the AWS DevOps Blog.

Are you AWS Well Architected?

What is Well-Architected?

AWS has developed and documented a set of Design Principles and published a whitepaper entitled "AWS Well-Architected Framework" (WAF). This document was recently updated (November 2017) and contains the latest design principles to use as your foundation when deploying into the cloud.

The AWS Well-Architected Framework has 5 key pillars:

  • Operational Excellence – aligns operational practices with business objectives
  • Security – defence in depth, traceability
  • Reliability – capacity, recovery and testing
  • Performance Efficiency – resource optimisation
  • Cost Optimization – consumption, analysis and focus on reducing costs

More recently, the need for workload-specific WAF guidance has been identified. These workload-specific variants are known as Lenses. Currently, Serverless Applications and High Performance Computing (HPC) lenses are available.

Getting Well-Architected on AWS

By optimising your architecture around these pillars, you gain the benefits of a well-architected design in the cloud:

  • Stop guessing your capacity needs
  • Test systems at production scale
  • Automate for easier architectural experimentation
  • Allow for evolutionary architectures
  • Drive architectures using data
  • Improve through game days

How does the AWS Well-Architected Framework work?

Approved AWS Partners, like Bulletproof, offer a Well-Architected Review designed to educate customers on architectural best practices for building and operating reliable, secure, efficient and cost-effective systems in the cloud, and to assess their current workloads.

AWS Partners use the AWS Well-Architected Framework design principles and tools to deliver a comprehensive review of a single workload. A workload is defined as a set of machines, instances, or servers that together enable the delivery of a service or application to internal or external customers. Examples include an eCommerce website, a business application, a web application, etc. Through this review and the guidance of the AWS Well-Architected Framework, the partner can provide the customer with an analysis of the workload through the lens of the WAF pillars, as well as a plan to address areas that do not comply with the framework recommendations.

The value of a Well Architected Review

The value far exceeds the price: for a small outlay, you gain huge insights.

  • Understand potential impacts
  • Visibility of risks
  • Consistent approach to architecture review
  • Recommended remediation steps

Well-Architected Review

The review process usually consists of:

  • A one-day on-site review and architecture deep-dive with your stakeholders, led by AWS Certified Solutions Architect Professionals, to gather data on your workloads in your business/IT context
  • Analysis of the collected data, cross-analysis of the workload against the guidance offered by the Framework, and formulation of any recommended remediation, if applicable
  • A Well-Architected report, outlining all findings and recommendations

I love doing architecture reviews; they provide the opportunity to dive deep with customers and give real, actionable insights.

AWS ELB vs ALB cost comparison

A lot of decisions (even simple questions) are based on cost. I was recently asked about the new ALB costs, so here is a quick comparison (examples at the bottom) of the AWS Classic Load Balancer (ELB) and the newer Application Load Balancer (ALB).

The hourly rate for an Application Load Balancer is 10% lower than that of a Classic Load Balancer, but that's not the whole story.

For the Sydney region (ap-southeast-2), prices are:

  • Application Load Balancer = $0.0252 per hour
  • Classic Load Balancer = $0.028 per hour

AWS has changed the way the ALB is billed, adding LCUs (Load Balancer Capacity Units). The LCU charge uses the highest of the following dimensions:

  • New connections (per second) – each LCU provides 25 new connections per second, averaged over the hour
  • Active connections (per minute) – each LCU provides 3,000 active connections per minute
  • Bandwidth (Mbps) – each LCU provides 2.22 Mbps (2,220 kbps), averaged over the hour

Estimated monthly LB resource costs:

Application Load Balancer

$0.0252/h x 24 hours x 30 days = $18.14 per month, plus $0.008 per LCU per hour (see below)

Classic Load Balancer

$0.028/h x 24 hours x 30 days = $20.16 per month, plus $0.008 per GB of data processed

For the common bandwidth factor, the math below is an easy way to compare side by side (at a high level).


Example 1
Production environment (high usage) – bandwidth 2.5 Mbps (2,500 kbps), total data transferred 5,000 GB per month
ALB: 2500/2220 = 1.126 LCUs; 1.126 x $0.008 x 24 x 30 = $6.49; $6.49 + $18.14 = $24.63
ELB: 5,000 GB x $0.008/GB = $40.00; $40.00 + $20.16 = $60.16
ALB saving: roughly 59%

Example 2
Test environment (medium usage) – bandwidth 200 kbps, total data transferred 250 GB per month
ALB: 200/2220 = 0.090 LCUs; 0.090 x $0.008 x 24 x 30 = $0.52; $0.52 + $18.14 = $18.66
ELB: 250 GB x $0.008/GB = $2.00; $2.00 + $20.16 = $22.16
ALB saving: roughly 16%

Example 3
Dev environment (low usage) – bandwidth 10 kbps, total data transferred 5 GB per month
ALB: 10/2220 = 0.0045 LCUs; 0.0045 x $0.008 x 24 x 30 = $0.03; $0.03 + $18.14 = $18.17
ELB: 5 GB x $0.008/GB = $0.04; $0.04 + $20.16 = $20.20
ALB saving: roughly 10%

(Note: the LCU charge uses whichever dimension is highest; these examples assume bandwidth is the dominant dimension.)

Conclusion

So, in nearly all cases an ALB is cheaper than an ELB in a general side-by-side comparison.
But the best way to measure your costs is with resource tagging and billing reports, combined with some real-world testing.
HTH

Amazon Alexa: Converting mp3 files

When working with Amazon Alexa audio/mp3 files and using SSML, there are a few simple rules.

This blog assumes awscli, wget and ffmpeg are installed and configured.

Here are the must-haves for using mp3 audio in your Alexa skill:

  • Public access file (S3 is a good location), see below for upload tips
  • HTTPS endpoint
  • MP3 format
  • less than 90 seconds
  • 48 kbps
  • 16000 Hz

Now, let’s give it a try.

Download a sample audio file

We’ll use this Crowd Excited free sample

cd /tmp
wget http://www.pacdv.com/sounds/people_sound_effects/crowd-excited.mp3

Convert it to the right Alexa format

Use this ffmpeg command as an easy CLI converter, then use awscli to upload to an S3 bucket and set a public-read ACL on the object.

OUTFILE=crowd-excited-alexa.mp3
ffmpeg -i crowd-excited.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 $OUTFILE
BUCKET=bucketName
aws s3 cp $OUTFILE s3://$BUCKET/ --storage-class REDUCED_REDUNDANCY
aws s3api put-object-acl --bucket $BUCKET --key $OUTFILE --acl public-read
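Before uploading, you can sanity-check the output with ffprobe (ships with ffmpeg) to confirm the 48 kb/s bitrate and 16000 Hz sample rate:

ffprobe -hide_banner crowd-excited-alexa.mp3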

Extra Tip — Working with the Alexa SDK Node.js

I got caught out when working with the Alexa SDK for Node.js. Make sure the audio tag string ends with a space before the closing slash, i.e. " />" (without the quotes). This might go in your index.js:

// in index.js 
cheerAudio = `<audio src="https://s3.amazonaws.com/bucketName/crowd-excited-alexa.mp3" />`;

HTH

amzn2-linux + bees

Have you ever needed to load test your website to validate how many requests per second it can handle? Check out this repo, https://gitlab.com/shallawell/pack-bees, for a Packer template that builds an AWS EC2 AMI for use with bees-with-machine-guns.

AMI Features

  • built with Packer, so it's repeatable
  • uses the latest AWS Linux 2 AMI (at the time of writing)
  • yum update --security on each build
  • AWS keys are variables, not hard-coded
  • bees-with-machine-guns (and dependencies) installed

Check it Out

Everything you need to load test your website – so go build some bees and attack!
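A typical session looks like this (a hedged sketch based on the project README; the security group, key pair and target URL are placeholders):

bees up -s 4 -g public -k my-keypair                            # spin up 4 attack instances
bees attack -n 10000 -c 250 -u "http://your.website.example/"   # 10,000 requests, 250 concurrent
bees down                                                       # terminate the instances when done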

Let me know how your website went.

 

HTH