autozane

Automating AWS CloudWatch Logs on Ubuntu

The AWS CloudWatch Logs agent acts much like a Logstash agent on your EC2 instances: it can be configured to capture log entries and send them to CloudWatch. There are a lot of customization options with AWS CloudWatch Logs, such as how log entries are formatted, how log groups are named, and so on. In this post we will automate the installation of the AWS CloudWatch Logs agent on an Ubuntu instance using PackerIO. The example service whose logs we will capture is an Aptly API server. PackerIO will automate the installation and configuration of the agent for us, and Terraform will be used to configure the IAM Role and Instance Profile needed to interact with the CloudWatch and Logs services on AWS. This comes from a working deployment strategy for an Aptly server; here we will focus on just the awslogs agent install and the required Terraform.
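For a sense of what that Terraform sets up, here is a minimal sketch of the equivalent AWS CLI calls (the role and profile names are illustrative assumptions, not the post's actual resource names):

    # Role that EC2 instances can assume.
    aws iam create-role --role-name aptly-awslogs \
        --assume-role-policy-document \
        '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

    # Permissions the awslogs agent needs to create groups/streams and push events.
    aws iam put-role-policy --role-name aptly-awslogs --policy-name awslogs-access \
        --policy-document \
        '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["logs:CreateLogGroup","logs:CreateLogStream","logs:PutLogEvents","logs:DescribeLogStreams"],"Resource":"arn:aws:logs:*:*:*"}]}'

    # Wrap the role in an instance profile the instance can launch with.
    aws iam create-instance-profile --instance-profile-name aptly-awslogs
    aws iam add-role-to-instance-profile --instance-profile-name aptly-awslogs \
        --role-name aptly-awslogs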

Let's start with the PackerIO configuration.
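As a taste of what the Packer shell provisioner ends up running, here is a minimal sketch of an awslogs install script for Ubuntu (the region, log path, and log group name are illustrative assumptions):

    # Agent config: tail the Aptly API log into its own log group.
    printf '%s\n' \
        '[general]' \
        'state_file = /var/awslogs/state/agent-state' \
        '' \
        '[/var/log/aptly-api.log]' \
        'file = /var/log/aptly-api.log' \
        'log_group_name = aptly-api' \
        'log_stream_name = {instance_id}' \
        'datetime_format = %Y-%m-%d %H:%M:%S' > /tmp/awslogs.conf

    # Fetch the AWS-published installer and run it non-interactively.
    curl -sO https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
    sudo python ./awslogs-agent-setup.py -n -r us-east-1 -c /tmp/awslogs.conf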

»

Managing Terraform Versions on Jenkins

When leveraging Terraform to code your infrastructure, you will notice that new versions of Terraform are released at a fast cadence. In this post I will show you how to manage Terraform upgrades on a per-Jenkins-job basis. This lets you run multiple versions of Terraform on your Jenkins system and gives you the flexibility to control when to move a given Terraform state to a newer version of Terraform.

We will be leveraging this great open-source wrapper: tfenv. Thank you kamatama41 for sharing this.

To get this installed on a Jenkins system, I have provided a small Chef snippet:
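The snippet essentially clones tfenv and pre-installs the Terraform versions the jobs will need; a rough shell equivalent, with illustrative paths and versions, looks like this:

    # Install tfenv system-wide and put its shims on the PATH.
    git clone https://github.com/kamatama41/tfenv.git /opt/tfenv
    ln -s /opt/tfenv/bin/* /usr/local/bin/

    # Pre-install the Terraform versions Jenkins jobs may pin to.
    tfenv install 0.9.11
    tfenv install 0.11.14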

This will set up a couple of versions of Terraform for our job to work with. Here is the example Jenkins Terraform job. We will leverage some Terraform code I put on GitHub that creates an AWS S3 bucket and an S3 bucket policy.
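Inside the job, pinning a version is then a one-liner at the top of the build step; a sketch of a Jenkins "Execute shell" step (version number assumed):

    # Pin this job's Terraform version, then run the usual workflow.
    tfenv use 0.9.11
    terraform version        # sanity check: should report 0.9.11
    terraform init
    terraform plan -out=tf.plan
    terraform apply tf.plan

tfenv will also honor a .terraform-version file checked into the repository, which keeps the pin in source control rather than in the job definition.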

»

Terraform Recipe: Continuous Integration on Amazon Web Services

Most Terraform use cases involve relatively static infrastructure. In this recipe I will explore a more dynamic use case: leveraging Terraform to run continuous code integration into Amazon Web Services (AWS). Below is a visual of what is achieved, followed by a description of each step in what I call the "Infrastructure Time Line".

  1. We start off with a code release that is already running in an Autoscaling Group (ASG) from a previous deployment. Its image is referenced as Amazon Machine Image-x (AMI-x).

  2. Provision your new code into a new AMI via PackerIO (I will cover this specific step in a future post, stay tuned). The resulting AMI is referred to as AMI-y. We then pass AMI-y as a variable to Terraform, which is leveraged to deploy a new ASG running AMI-y alongside the currently running ASG (see the sketch after this list).

  3. Terraform waits for the new ASG's instances to come up healthy.
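A minimal sketch of the hand-off between steps 2 and 3, assuming a Packer template named app.json and a Terraform variable named ami_id (both illustrative):

    # Build the new image and capture the resulting AMI-y id from
    # Packer's machine-readable output.
    packer build -machine-readable app.json | tee build.log
    AMI_Y=$(grep 'artifact,0,id' build.log | cut -d, -f6 | cut -d: -f2)

    # Deploy a new ASG running AMI-y alongside the current one.
    terraform apply -var "ami_id=${AMI_Y}"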

»

Leave a message at the beep, I mean in the queue


Let's say you don't have an AWS Direct Connect or a VPN connection from your AWS account to your on-premises datacenter, but you do have some processed data that needs to be sent back to your on-premises Hadoop cluster upon completion. This requires you to somehow initiate a process (in this example, a Hadoop DistCp) from the on-premises side. How do we know when the processed data is ready? How do we know when to start the data copy? Just leave a message!

Here is an example of a small AWS SQS consumer application that works with Hadoop and AWS S3 to copy processed data from S3 to your local Hadoop cluster.

In this example scenario, we have Amazon EMR processing data that is output to S3. The last step in the EMR workflow posts a message to an AWS SQS queue. The body of this message contains the S3 location of the processed data.
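A minimal sketch of the consumer loop, assuming jq is available and the message body is simply the S3 path of the finished dataset (the queue URL and HDFS path are illustrative):

    QUEUE_URL="https://sqs.us-east-1.amazonaws.com/111111111111/emr-output"

    while true; do
        # Long-poll the queue for a single message.
        MSG=$(aws sqs receive-message --queue-url "$QUEUE_URL" \
                  --max-number-of-messages 1 --wait-time-seconds 20)
        [ -z "$MSG" ] && continue

        S3_PATH=$(echo "$MSG" | jq -r '.Messages[0].Body')
        RECEIPT=$(echo "$MSG" | jq -r '.Messages[0].ReceiptHandle')

        # Copy the processed data down to the local cluster; skip the
        # delete on failure so the message gets retried.
        hadoop distcp "s3a://${S3_PATH}" hdfs:///data/incoming/ || continue

        # Acknowledge only after a successful copy.
        aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$RECEIPT"
    done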

»

Use Bash Utilities to Update aws/credentials for AssumeRole

Some AWS assume-role bash-fu that can come in handy.

# Call STS, flatten the JSON with tr, and append a ready-to-use
# profile block to ~/.aws/credentials.
aws sts assume-role --role-arn <ROLEARN> --role-session-name <ROLESESSIONNAME> |
    tr '{}' ',,' |
    awk -F: '
        BEGIN { RS = ","; print "[PROFILENAME]" }
        /:/ { gsub(/"/, "", $2) }
        /AccessKeyId/ { print "aws_access_key_id = " $2 }
        /SecretAccessKey/ { print "aws_secret_access_key = " $2 }
        /SessionToken/ { print "aws_session_token = " $2 }
    ' >> ~/.aws/credentials
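Once appended, the profile works like any other, e.g. aws s3 ls --profile PROFILENAME. Keep in mind the credentials are temporary and expire (after one hour by default).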

Or, if you don't want to touch your ~/.aws/credentials file:

# Assume the role and stash the raw JSON response.
aws sts assume-role --role-arn arn:aws:iam::1111111111111:role/role-test \
    --role-session-name "RoleSessionTest" > awscre

# Pull each credential out by key name rather than relying on the
# order the keys happen to appear in.
cred() { grep -w "$1" awscre | awk '{print $2}' | sed 's/"//g;s/,//'; }

export AWS_ACCESS_KEY_ID=$(cred AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(cred SecretAccessKey)
export AWS_SESSION_TOKEN=$(cred SessionToken)
export AWS_SECURITY_TOKEN="$AWS_SESSION_TOKEN"  # legacy name some tools still read
»

Older Posts