6 tips for AWS Well-Architected Cost Optimisation + Serverless bonuses

Here are six tips and tools for improving your Cost Optimisation strategies on AWS. Grab the free tools below to get deeper cost insight into your account.

Trusted Advisor

The basic level of Trusted Advisor checks over 30 key metrics for Security and Service Limits, but the real gold is in Business or Enterprise support, which gives you access to a larger range of checks (including Cost Optimisation) and programmatic access to the Trusted Advisor API. You can then monitor Reserved Instance (RI) usage and under-utilised or idle EC2 or RDS resources. Try the AWS tools at https://github.com/aws/Trusted-Advisor-Tools, which use CloudWatch Events and Lambda to help automate best practices.
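If you have Business or Enterprise support, you can also poke at the Trusted Advisor API straight from awscli (the support endpoint lives in us-east-1). A minimal sketch – the query and output format are just my own choices:

# list all Trusted Advisor checks (requires Business/Enterprise support)
aws support describe-trusted-advisor-checks --language en --region us-east-1 --query 'checks[].[id,name,category]' --output table
# pull the result of a single check using an id from the output above
aws support describe-trusted-advisor-check-result --check-id CHECK_ID --region us-east-1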

Billing Alarms

Billing alarms allow you to be proactive about cost increases. Create a billing alarm in the Billing console, set limits (create multiple alarms for different thresholds: $10, $50, $1000) and pair them with SNS/email notifications so you are told when they are breached. Grab this free CloudFormation template to create a quick billing alarm: https://gitlab.com/shallawell/cfn-billing-alarm
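You can also create the same alarm from the CLI. A minimal sketch, assuming billing alerts are enabled on the account, an existing SNS topic (the ARN below is a placeholder) and a $50 threshold; note that billing metrics only live in us-east-1:

aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name billing-alarm-50 \
  --namespace "AWS/Billing" --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum --period 21600 --evaluation-periods 1 \
  --threshold 50 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts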

Cost Explorer

Visualise your spend by resource. Use this AWS SAM (Serverless Application Model) function to generate a monthly Excel report and have it delivered to your inbox: https://github.com/aws-samples/aws-cost-explorer-report
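Cost Explorer also has a CLI if you just want the numbers. A minimal sketch pulling one month of spend grouped by service (the dates are placeholders – adjust to your own billing period):

aws ce get-cost-and-usage \
  --time-period Start=2018-03-01,End=2018-04-01 \
  --granularity MONTHLY --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE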

Resource Tagging Strategy

Use a tagging strategy to assist cost allocation by allowing cost reporting by tag. Consider the Business Tags suggestions on the AWS Tagging Strategies page for some ideas. Used in conjunction with the Cost Explorer script (update the Cost Tags), you can start to gain new insights.
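Once your cost allocation tags are activated in the Billing console, you can group Cost Explorer output by a tag key. A sketch assuming a tag key of CostCentre – swap in your own key and dates:

aws ce get-cost-and-usage \
  --time-period Start=2018-03-01,End=2018-04-01 \
  --granularity MONTHLY --metrics UnblendedCost \
  --group-by Type=TAG,Key=CostCentre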

AWS Simple Monthly Calculator

This handy calculator lets you estimate resource costs before deployment. It is useful in some scenarios, but it has limited (hence the name: simple) options for some resource types. It is good for a simple 3-tier app with typical storage and data transfer, or when you just need a quick EC2 cost comparison to compare multiple instance types side by side.

AWS News

This is by far the easiest way to stay on top of new announcements and services (operational excellence) and another way to be cost aware. For example, M5 EC2 instances deliver 14% better price/performance than M4 on a per-core basis. If you are still using M4s, consider migrating to the newer, faster (performance efficiency), cheaper (cost optimisation) technology.

Got an RSS reader? No? Get one, and subscribe to AWS News. I use a browser-based one (do a search), so updates are only a click away.

HTH


2 handy AWS S3 public-read commands

I needed to host a simple public html file. S3 is easy.

Making a file publicly readable in S3 is easy without having to open up access to the whole bucket. This method also uses the newer ONEZONE_IA storage class to save a few cents.

It uses the s3 and s3api commands from awscli.

Update the FILE and BUCKET variables to make it yours. Tested on Ubuntu.

FILE=privacy_policy.html
BUCKET=myAwesomeBucketName
# upload using the cheaper One Zone-IA storage class
aws s3 cp $FILE s3://$BUCKET/ --storage-class ONEZONE_IA
# then make just this object public-read
aws s3api put-object-acl --bucket $BUCKET --key $FILE --acl public-read
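To confirm the ACL took effect, you can read it back (a quick optional check):

aws s3api get-object-acl --bucket $BUCKET --key $FILE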

HTH

Getting my AWS Advanced Networking & Security Specialty Certifications

I was asked to write a short post about my experiences and what I can share with others about my recent AWS certification journey. Thanks to @bulletproofnet for their support along the way.

Over the past few months I have been working towards obtaining all of my AWS Specialty certifications. In January I completed the last of the #all5 resulting in Solution Architect Professional and DevOps Engineer Professional certs.

The AWS Security Specialty had just been released as a beta, so keeping up the momentum I scheduled my exam for late January. Being a beta exam, the results weren’t immediately available like other AWS exams; I had to wait until today to receive them. Another difference from other exams is that you normally receive a percentage score – for the beta it is simply ‘pass’. A fortnight or so later, I scheduled and completed the Networking exam. I wasn’t successful on my first attempt – another humbling experience. Back to the ‘books’ for a few more weeks, really cramming the knowledge. I felt I knew where I was lacking, so I focused on those areas. Second time’s a charm, they say – and I passed. It goes to show that persistence pays off.

A short background: I have around 5 years’ experience designing and implementing on AWS and 25 years in IT generally, covering everything from operations, engineering and architecture to consulting. I’ve learnt a thing or two about networking and security over the years within ‘traditional IT’, so I was comfortable with most general aspects I expected to face in the exam.

The exams are between 170 and 180 minutes and the questions are multiple choice, with multiple answers appearing to be correct. Choose the best fit, but consider what the question is asking for. Is it asking for Reliability or Cost Optimisation? They usually conflict when trying to build a solution but use the same technology. To attempt the Specialty exams, you must have an Associate-level certification first. I’d suggest at least one of the Professional certifications as well – it will give you additional confidence to attempt the Specialty exams.

Time is a factor in all of the exams but unlike the SA Pro Exam, I found I completed these exams with more time to spare against the clock.

Study techniques and material

  • I have consistently used acloud.guru for all of my structured learning. I completed the Advanced Networking and Security Specialty courses (thanks @KroonenburgRyan & team). These are great courses and cover a range of ever expanding topics. I can highly recommend them.
  • Exam blueprint and practice questions. None of the Advanced exams have practice exams, and the Security exam didn’t have practice questions as it was in beta (that will likely change). Do the questions early – before studying, even. This helped me focus on certain topics a little more. You can try them again after studying to measure your improvement.
  • I read nearly all of the suggested whitepapers (find them in the blueprints). That’s a lot of reading, and even where I skimmed through some because I felt I understood or knew the content, there is a lot of gold in there.
  • Get hands-on, tinker, build something. It’s usually only a few cents/dollars to try a new idea.
  • YouTube re:Invent videos. There are a couple of great collections: the Advanced Networking playlist, the Security and Compliance playlist and the SA Pro playlist. Pick and choose (there are so many great ones, but they can be up to an hour long) and watch as many as you can. There are days of content there.
  • If you have colleagues, talk about your AWS solutions and even AWS blogs. Challenge and test each other.  I’m constantly being challenged which makes me dive deep to reinforce that knowledge I’ve gained.

Overall, I would have studied for 50+ hours for each exam (even at double speed videos!), and the same for the Professional level exams.

Ok, the good stuff.

For Networking, focus on:

  • Deep DX, VLANs/802.1q, public and private VIFs, how to setup DX with 3rd party telcos.
  • Network redundancy and how VPNs, DX, BGP and inter-region traffic can work.
  • Solid routing skills covering VGW, route tables, BGP, AS-Path and MED for influencing traffic.
  • Deep knowledge of VPC and what NATGW, IGW, VGW, CGW are, limits of each and why you use them, redundancy and reliability are tested a lot.
  • Good understanding of VPNs, VPC peering, multi-VPC design and IP subnetting for VPCs.
  • Deep ELB knowledge, including cross-zone load balancing, ELB balancing algorithms, ELB security policies, HTTPS, headers and IP requirements.
  • Brush up on CloudHub – not used often (I’ve never used it) but serves a purpose.
  • Excellent knowledge of how DNS between the cloud and a corporate network works for multiple scenarios, including DNS forwarders and Route 53.
  • Data charges for network traffic over IGW, DX, S3 and Cloudfront. Check out this guide to AWS Data Charges in a nice diagram.
  • Learn to read VPC Flow Logs and understand why you might have ACCEPT and REJECT traffic. Understand your ‘firewall controls’ in AWS.
  • AWS Enhanced Networking, including how to use it (think instance types, ENAs, placement groups), what the limits are, speeds, and when to use it.
  • AWS support levels (Personal Health Dashboard, Trusted Advisor), automating incident response and troubleshooting network issues feature heavily as well.

For the Security Specialty, focus on:

  • Tested knowledge of ports or protocols. Think about VPN, SSL, Windows
  • Good understanding of IAM and STS.
  • Web attacks, SQL injection and DDoS defence techniques, detecting port scans, performing penetration scanning and know how deep packet inspection is done and the services you might use. Think about WAF and Shield, techniques to throttle traffic.
  • Know the AWS Shared Responsibility Model
  • Using Security Groups, NACLs, S3 ACLs to control access to resources.
  • Understand how you could control traffic for 169.254 networks, external networks, how you control outbound web filtering and restrict EC2 traffic at the host level
  • Know how auditing in AWS is achieved using different logging services, Cloudtrail and VPC Flow logs
  • Know the difference between Guard Duty, Inspector and Trusted Advisor security features. Check out my other blog for a quick run down on Guard Duty and Inspector for a 3 min refresher.
  • Thorough understanding of how you encrypt at rest and in transit, including when KMS, CloudHSM and other tools might be used for data encryption, and VPNs, bastions, ACM and SSL for securing traffic.
  • Workspaces is part of this exam, so understand how it is implemented
  • Security Incident Response in AWS

Conclusion

It has been a great experience overall. I feel humbled to be part of the first group to get the Security certification. In terms of difficulty compared to other exams, the Advanced Networking Specialty was second only to the Solution Architect Professional exam. The Security Specialty seemed a little easier (at least for me). Everyone comes to the exam with different experiences, so it may be different for you. My AWS transcript can be found here.

If you have the chance to give it a try – go for it. The exam is now out of beta and available generally. Good luck!

What’s next? Well, the Big Data Specialty, of course.

AWS CodeCommit, IAM, CodeBuild with awscli

This blog describes using awscli to set up CodeCommit, SSH keys, CodeBuild and IAM roles.

Now, an admin of an AWS account could allow a user to:

  • provide an SSH public key – easily uploaded to IAM by awsadmin
  • be given the new project location, after the admin easily creates a project for them
  • git clone to always get the latest code – then make changes
  • upload a new Packer file, ready for another cookie-cutter rebuild – with a new timestamp

Assumptions

  • awscli is setup / aws credentials configured
  • IAM user is already created (I’m using an awsadmin role), which has sufficient EC2 & CodeCommit privileges.
  • git is installed
  • AWS region is ap-southeast-2 (Sydney)

Setup IAM

Create a SSH key

cd $HOME/.ssh
ssh-keygen -f YourSSHKeyName -t rsa -C "YourEmailAddress" -b 4096

Upload public key to AWS IAM and save output to $NEW_SSHPublicKeyId

NEW_SSHPublicKeyId=$(aws iam upload-ssh-public-key --user-name awsadmin --ssh-public-key-body "$(cat ~/.ssh/YourSSHKeyName.pub)" --output text --query 'SSHPublicKey.SSHPublicKeyId')
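To double-check the upload, you can echo the captured key ID and list the SSH public keys attached to the user (a quick sanity check):

echo $NEW_SSHPublicKeyId
aws iam list-ssh-public-keys --user-name awsadmin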

Configure Linux to use CodeCommit as a git repo

update your $HOME/.ssh/config file

echo "
Host git-codecommit.*.amazonaws.com
 User $NEW_SSHPublicKeyId
 IdentityFile ~/.ssh/YourSSHKeyName" >> config
chmod 600 config

Test the new ssh key works

ssh git-codecommit.ap-southeast-2.amazonaws.com

CodeCommit

create a CodeCommit Repo

aws codecommit create-repository --repository-name 1stPackerCodeCommitRepo

The SSH clone URL returned is ssh://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/1stPackerCodeCommitRepo
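If you lose track of the clone URL later, you can pull it straight back out of CodeCommit:

aws codecommit get-repository --repository-name 1stPackerCodeCommitRepo --query 'repositoryMetadata.cloneUrlSsh' --output text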

Clone the empty repo

This makes a local directory with a git structure

git clone ssh://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/1stPackerCodeCommitRepo

Make your buildspec.yml file – it must be named this.

The Packer filename can be anything, but buildspec.yml must reference it correctly.

buildspec.yml must contain the following. Cut/paste:

---
version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.2.1/packer_1.2.1_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating amazon-linux_packer-template.json"
      - ./packer validate amazon-linux_packer-template.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI > aws_credentials.json
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, amazon-linux_packer-template.json"
      - ./packer build amazon-linux_packer-template.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

amazon-linux_packer-template.json – this is just a sample; make your own changes as needed.

{
 "variables": {
 "aws_region": "{{env `AWS_REGION`}}",
 "aws_ami_name": "amazon-linux_{{isotime \"02Jan2006\"}}"
 },

"builders": [{
 "type": "amazon-ebs",
 "region": "{{user `aws_region`}}",
 "instance_type": "t2.micro",
 "ssh_username": "ec2-user",
 "ami_name": "{{user `aws_ami_name`}}",
 "ami_description": "Customized Amazon Linux",
 "associate_public_ip_address": "true",
 "source_ami_filter": {
 "filters": {
 "virtualization-type": "hvm",
 "name": "amzn-ami*-ebs",
 "root-device-type": "ebs"
 },
 "owners": ["137112412989", "591542846629", "801119661308", "102837901569", "013907871322", "206029621532", "286198878708", "443319210888"],
 "most_recent": true
 }
 }],

"provisioners": [
 {
 "type": "shell",
 "inline": [
 "sudo yum update -y",
 "sudo /usr/sbin/update-motd --disable",
 "echo 'No unauthorized access permitted' | sudo tee /etc/motd",
 "sudo rm /etc/issue",
 "sudo ln -s /etc/motd /etc/issue",
 "sudo yum install -y elinks screen"
 ]
 }
 ]
}

Commit to CodeCommitRepo

git add .
git commit -m "initial commit"
git push origin master

CodeBuild

We need a role to run the build job; this is how.
Create an IAM service role for CodeBuild, then add the HashiCorp inline policy and a service-role policy, as per https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer/

Create a policy file called CodeBuild-IAM-policy.json.
I’ve added logs:CreateLogStream and logs:CreateLogGroup to the HashiCorp policy.

{
 "Version": "2012-10-17",
 "Statement": [{
 "Effect": "Allow",
 "Action" : [
 "ec2:AttachVolume",
 "ec2:AuthorizeSecurityGroupIngress",
 "ec2:CopyImage",
 "ec2:CreateImage",
 "ec2:CreateKeypair",
 "ec2:CreateSecurityGroup",
 "ec2:CreateSnapshot",
 "ec2:CreateTags",
 "ec2:CreateVolume",
 "ec2:DeleteKeypair",
 "ec2:DeleteSecurityGroup",
 "ec2:DeleteSnapshot",
 "ec2:DeleteVolume",
 "ec2:DeregisterImage",
 "ec2:DescribeImageAttribute",
 "ec2:DescribeImages",
 "ec2:DescribeInstances",
 "ec2:DescribeRegions",
 "ec2:DescribeSecurityGroups",
 "ec2:DescribeSnapshots",
 "ec2:DescribeSubnets",
 "ec2:DescribeTags",
 "ec2:DescribeVolumes",
 "ec2:DetachVolume",
 "ec2:GetPasswordData",
 "ec2:ModifyImageAttribute",
 "ec2:ModifyInstanceAttribute",
 "ec2:ModifySnapshotAttribute",
 "ec2:RegisterImage",
 "ec2:RunInstances",
 "ec2:StopInstances",
 "ec2:TerminateInstances",
 "logs:CreateLogStream",
 "logs:createLogGroup"
 ],
 "Resource" : "*"
 }]
}

Create the role with a trust policy that lets CodeBuild assume it, attach the inline permissions policy above, and then attach the managed service-role policy. Note that create-role expects a trust (assume-role) policy document, not the permissions policy – a minimal trust policy is shown below.
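A minimal sketch of that trust policy, saved here as codebuild-trust-policy.json (the filename is my own choice):

{
 "Version": "2012-10-17",
 "Statement": [{
 "Effect": "Allow",
 "Principal": { "Service": "codebuild.amazonaws.com" },
 "Action": "sts:AssumeRole"
 }]
}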

aws iam create-role --role-name codebuild-1stpacker-service-role --assume-role-policy-document file://codebuild-trust-policy.json
# attach the permissions policy above as an inline policy (the inline policy name is arbitrary)
aws iam put-role-policy --role-name codebuild-1stpacker-service-role --policy-name packer-build-policy --policy-document file://CodeBuild-IAM-policy.json
aws iam attach-role-policy --role-name codebuild-1stpacker-service-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole

Then get the ARN to update the create-project.json below:

aws iam get-role --role-name codebuild-1stpacker-service-role --query "Role.Arn" --output text

Create a project

Create a create-project.json file, and update the ARN for the serviceRole you created above.

{
 "name": "AMI_Builder",
 "description": "AMI builder CodeBuild project",
 "source": {
 "type": "CODECOMMIT",
 "location": "https://git-codecommit.ap-southeast-2.amazonaws.com/v1/repos/1stPackerCodeCommitRepo",
 "buildspec": "buildspec.yml"
 },
 "artifacts": {
 "type": "NO_ARTIFACTS"
 },
 "environment": {
 "type": "LINUX_CONTAINER",
 "image": "aws/codebuild/ubuntu-base:14.04",
 "computeType": "BUILD_GENERAL1_SMALL"
 },
 "serviceRole": "arn:aws:iam::AWS_ACCT:role/service-role/codebuild-1stpacker-service-role"
}

Then create the CodeBuild project:

aws codebuild create-project --cli-input-json file://create-project.json

Start a build

This takes around 5 minutes using a t2.micro and the default scripts.

aws codebuild start-build --project-name AMI_Builder
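You can keep an eye on the build from the CLI as well; a quick sketch (grab the build id from the first command and pass it to the second):

aws codebuild list-builds-for-project --project-name AMI_Builder
aws codebuild batch-get-builds --ids BUILD_ID --query 'builds[0].buildStatus'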

Check your new AMIs

 aws ec2 describe-images --owners $(aws sts get-caller-identity --output text --query 'Account') --query 'Images[?contains(Name, `amazon-linux`)]'

Conclusion

All done. A fairly simple, yet powerful and dynamic way to allow users/sysadmins to churn out new AMIs.

Next improvements would be to:

  • Trigger a CodePipeline build for code commits
  • SNS topic for email notifications of build status
  • Replace it all with CFN

Inspiration for this AWS CLI focused blog came from the AWS DevOps Blog.

Amazon Alexa: Converting mp3 files

When working with audio/MP3 files and SSML in Amazon Alexa skills, there are a few simple rules.

This blog assumes awscli, wget and ffmpeg are installed and configured.

Here are the must-haves for using MP3 audio in your Alexa skill:

  • Publicly accessible file (S3 is a good location – see below for upload tips)
  • HTTPS endpoint
  • MP3 format
  • Less than 90 seconds
  • 48 kbps bit rate
  • 16,000 Hz sample rate

Now, let’s give it a try.

Download a sample audio file

We’ll use this Crowd Excited free sample

cd /tmp
wget http://www.pacdv.com/sounds/people_sound_effects/crowd-excited.mp3

Convert it to the right Alexa format

Use this ffmpeg command for an easy CLI conversion, then use awscli to upload to an S3 bucket and set a public-read ACL on the object.

OUTFILE=crowd-excited-alexa.mp3
ffmpeg -i crowd-excited.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 $OUTFILE
BUCKET=bucketName
aws s3 cp $OUTFILE s3://$BUCKET/ --storage-class REDUCED_REDUNDANCY
aws s3api put-object-acl --bucket $BUCKET --key $OUTFILE --acl public-read
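To sanity-check the result before wiring it into your skill, you can inspect the converted file’s bit rate and sample rate with ffprobe (it ships with ffmpeg) and confirm the object is publicly reachable over HTTPS (the URL below assumes a us-east-1 bucket – adjust the endpoint for other regions):

ffprobe -hide_banner $OUTFILE
curl -I https://s3.amazonaws.com/$BUCKET/$OUTFILE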

Extra Tip — Working with the Alexa SDK Node.js

I got caught out when working with the Alexa SDK for Node.js. Make sure the audio tag string ends with a space before the closing slash: " /" (without the quotes). This might go in your index.js.

// in index.js 
cheerAudio = `<audio src="https://s3.amazonaws.com/bucketName/crowd-excited-alexa.mp3" />`;

HTH

Docker Bridge Networking – a working example

Setup

Run through these commands to get familiar with Docker bridge networking. Some experience with Docker is assumed, including that Docker is installed.
This example will create an internal Docker bridge network using the specified CIDR of 10.0.0.0/16.

#download alpine container image
docker pull alpine 
# list docker networks
docker network ls 
# create docker network
docker network create myOverlayNet --internal --subnet 10.0.0.0/16
# list docker networks to check new network creation
docker network ls 
docker network inspect myOverlayNet

# Run a few images in the new network 
# -ti interactive terminal (detach later with Ctrl-p Ctrl-q to leave the container running)
# --name is the name for your new container, which can be resolved by dns
# --network add to network
docker run -ti --name host1 --network myOverlayNet alpine
# Press Ctrl-p and Ctrl-q to exit the container and leave it running
docker run -ti --name host2 --network myOverlayNet --link host1 alpine
# Press Ctrl-p and Ctrl-q to exit the container and leave it running
# Run a new container outside of myOverlayNet
docker run -ti --name OutsideHost --hostname OutsideHost alpine
docker start host1 host2 OutsideHost
# check they are all running
docker ps -a
# inspect the network to ensure the new containers (host1 and host2) have IPs. They will likely be 10.0.0.2 and 10.0.0.3. OutsideHost will be missing (as expected)
docker network inspect myOverlayNet
# inspect the default bridge network, OutsideHost will be there
docker network inspect bridge
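If you just want each container's address rather than the full network inspect output, docker inspect with a Go template works too; a quick sketch:

# print the IP address(es) of host1 across the networks it is attached to
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' host1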

Testing

This demonstrates the ability to connect within the myOverlayNet network, and also how containers on other networks – like the default bridge, where OutsideHost lives – don’t have access.

docker exec allows us to run a command in a container and see the output on stdout.

# should succeed, Ctrl-C to stop the ping
docker exec host1 ping host2
# should succeed, Ctrl-C to stop the ping
docker exec host2 ping host1

# should fail, Ctrl-C to stop the ping
docker exec host1 ping OutsideHost
# should fail, Ctrl-C to stop the ping
docker exec OutsideHost ping host1

Clean up

Clean up everything we created – the containers and the network.

# kill the running containers
docker kill host1 host2 OutsideHost
# remove containers
docker rm host1 host2 OutsideHost
# remove network
docker network rm myOverlayNet
# check docker networks
docker network ls
#check containers are removed
docker ps -a

Conclusion

Well done – all completed, and you now understand Docker bridge networking a little better. Reach out via the comments or Twitter if you have any questions.
by @shallawell, inspired by @ArjenSchwarz

AWSCLI tips

Snippets I’ve used or collected.


Configure awscli

Enter keys, region and output defaults.

$ aws configure
AWS Access Key ID [****************FEBA]:
AWS Secret Access Key [****************I4b/]:
Default region name [ap-southeast-2]:
Default output format [json]:

Add bash completion via Tab key

echo "complete -C aws_completer aws" >> ~/.bash_profile
source ~/.bash_profile

Other shell completion options

Supercharged awscli: SAWS or aws-shell

Both offer improvements over the standard bash completions above.


S3 Transfer Acceleration

If you need to sync a large number of small files to S3, increasing the following values in your ~/.aws/config file will speed up the sync process. (You also need to enable Transfer Acceleration on the bucket – see the next command.)

Modify .aws/config

[default]
...
s3 =
  max_concurrent_requests = 100
  max_queue_size = 10000
  use_accelerate_endpoint = true

Enable S3 bucket for Accelerate Transfer

The following example sets Status=Enabled to enable Transfer Acceleration on a bucket. Use Status=Suspended to suspend Transfer Acceleration. This incurs an additional cost, so it should be disabled once the bulk upload is completed.

 $ aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled
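You can confirm the bucket’s current acceleration status at any time:

aws s3api get-bucket-accelerate-configuration --bucket bucketname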


Set default output type (choose 1)

[default]
output=text|json|table

Wait

#!/bin/bash
echo "Waiting for EBS snapshot"
aws ec2 wait snapshot-completed --snapshot-ids snap-aabbccdd
echo "EBS snapshot completed"

Get the public IPs or DNS of EC2 instances

Uses jq for filtering output.

sudo apt-get install jq
aws ec2 describe-instances --filters "Name=tag-value,Values=mystack-inst*" | jq --raw-output '.Reservations[].Instances[].PublicIpAddress'

AWS Tags – jq improvements

AWSCLI JSON output can be hard to query, but with jq you can map Tags into a normal object like this:

jq '<path to Tags> | map({"key": .Key, "value": .Value}) | from_entries'

Find untagged EC2 instances.

This command finds instances NOT tagged with ‘owner’.

aws ec2 describe-instances   --output text    --query 'Reservations[].Instances[?!not_null(Tags[?Key == `owner`].Value)] | [].[InstanceId]'

Create a tag for EC2 instance

NB. Add a for loop to tag all instances – see the sketch below.

aws ec2 create-tags --resources $i --tags Key=owner,Value=me
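Putting the two commands together, a rough sketch that finds every instance missing an ‘owner’ tag and tags it (assumes a default region is configured – review before running against a real account):

for i in $(aws ec2 describe-instances --output text --query 'Reservations[].Instances[?!not_null(Tags[?Key == `owner`].Value)] | [].[InstanceId]'); do
  aws ec2 create-tags --resources $i --tags Key=owner,Value=me
done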

 

Update: more jq

Describe stacks by tags. Thanks to @arjenschwarz.

aws cloudformation describe-stacks --region ap-southeast-2 --query 'Stacks[*].{name: StackName, tags: Tags}' | jq '.[] | if (.tags | from_entries | .template_suffix == "dev-yellow") then .name else "" end | select(. != "")'

Check out this blog on AWS Comprehend for a few more examples as well.

 

Cloud9 IDE – http://c9.io

If you need a Linux virtual desktop for developers, then check out http://c9.io

Within 5 minutes, I had registered, logged in, viewed the sample code, deployed it with an Apache web server and run the code. Super easy!

Some of the constraints of other solutions may be:

  • VMware View 6 (v1.0 Linux desktops)
  • Hardware purchases
  • Building and managing infrastructure
  • Short-cycle project work

Easy way to rotate AWS access keys

We all know we should change passwords often – well, the same goes for access keys.

This handy INTERACTIVE bash script walks you through creating a new AWS access key and gives you the option to delete the old keys. (A commented-out alternative in the script creates an EC2 key pair instead and saves the .pem file in your .ssh directory.)

You can download it from my GitLab – here.

Hope this helps someone 🙂

#!/bin/bash
# @shallawell
# Program Name: aws-iam-access-keys.sh
# Purpose: Manage AWS access keys
# version 0.1

# new key content will be created in this file.
FILE=new_aws_key.txt
#remove the old file first
rm -f $FILE
### create a key
echo -n "Do you want to create a new Access/Secret key. (y/n) [ENTER]: "
#get user input
read response2
if [ "$response2" == "y" ]; then
echo "OK... Creating new keys!!!"
aws iam create-access-key --output json | grep Access | tail -2 | tee -a $FILE
#Alternative create key command
#KEY=myIndiaAWSKeytest
#REGION=ap-south-1
#aws ec2 create-key-pair --key-name=$KEY --region $REGION --query="KeyMaterial" --output=text > ~/.ssh/$KEY.pem
#readonly the key
#chmod 400 ~/.ssh/$KEY.pem
echo "key created."
echo "REMEMBER: You should rotate keys at least once a year! Max of 2 keys per user."
echo "$FILE created for Access and Secret Keys"
echo "HINT: Run aws configure to update keys. (you just rotated your keys!)"
else
echo "Not creating keys."
exit 0
fi

### list a key, save to DELKEY var
#this command LISTS the access keys for current user, sorts by CreateDate,
#gets the latest AccessKeyId result. awk grabs the Access key (excludes date field)
DELKEY=$(aws iam list-access-keys \
--query 'AccessKeyMetadata[].[AccessKeyId,CreateDate]' \
--output text | sort -r -k 2 | tail -1 | awk '{print $1}')

echo "list-Access-key sorted to find OLDEST key."
echo -n "Key Found : $DELKEY. Do you want to delete this key. (y/n) [ENTER]: "
#get user input
read response
if [ "$response" == "y" ]; then
echo "you said yes. Deleting key in 3 secs!!!"
sleep 3
echo "delete-access-key disabled, NO REAL DELETE OCCURRED"
### delete a key, uncomment to activate the delete function.
#aws iam delete-access-key --access-key-id $DELKEY
echo "deleted $DELKEY"
else
echo "you said no. Not Deleting"
fi

echo "done."
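One extra safety step worth considering before deleting an old key: mark it Inactive first, confirm nothing breaks, and only then delete it. A sketch using the same $DELKEY variable from the script above:

# disable the old key without deleting it
aws iam update-access-key --access-key-id $DELKEY --status Inactive
# once you are confident nothing depends on it, delete it
# aws iam delete-access-key --access-key-id $DELKEY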