AWS Partner Network (APN) Blog

Learn about the New Sumo Logic App for AWS Lambda

by Kate Miller | on | in APN Competency Partner, APN Partner Highlight, APN Technology Partners, AWS Lambda | | Comments

Today, Advanced all-in APN Technology Partner Sumo Logic, also an AWS Big Data and Security Competency Partner, made its App for AWS Lambda generally available and introduced a purpose-built Lambda function, which is immediately available for launch in the AWS Blueprint library.

My colleague Bryan Liston wrote a great blog post about the launch over on the AWS Compute Blog. Click here to check it out, and learn more about how the Sumo Logic App for AWS Lambda works.

Our friends over at Sumo Logic also issued a press release on the launch, which you can read here.

Want to learn more about Sumo Logic’s journey as an APN Partner? Watch the company’s AWS Partner Success video below:

AWS Sample Integrations for Atlassian Bitbucket Pipelines

by Josh Campbell | on | in APN Competencies, APN Competency Partner, APN Partner Highlight, AWS Partner Solutions Architect (SA) Guest Post, DevOps on AWS | | Comments

Today, APN Partner and AWS DevOps Competency Partner Atlassian announced the beta of Bitbucket Pipelines, which allows customers to trigger build, test, and deploy actions every time they commit code to Bitbucket Cloud. Bitbucket is a source code control service that hosts Git and Mercurial repositories. Here at AWS, we have built a number of sample integrations to demonstrate how customers can use this new feature to deploy their code changes from Bitbucket to update their running applications on AWS.

As an example, customers can now deploy functions to AWS Lambda, Docker containers to Amazon EC2 Container Service, or an application to AWS Elastic Beanstalk, all by simply pushing code to a Bitbucket repository. Take a look at sample configuration scripts on Bitbucket to get an idea of how you can take advantage of this new Bitbucket functionality and automate code deployments to your AWS applications.

Using Atlassian Bitbucket Pipelines with AWS

You can easily enable Bitbucket Pipelines on a Bitbucket repository by choosing a new icon in the menu bar.  The commits page in your repository will also have a new column called “Builds” where you can see the result of the Pipelines actions that were run on that commit. The Pipelines page shows further information about the commits.

Once you enable Bitbucket Pipelines, you’ll need to include a YAML configuration file called bitbucket-pipelines.yml that details the actions to take for your branches. The configuration file describes a set of build steps to take for each branch in Bitbucket. It provides the flexibility to limit build steps to certain branches or take different actions for specific branches. For example, you might want a deployment to AWS Lambda step to be taken only when a commit is made on the “master” branch.

Under the hood, Bitbucket Pipelines uses a Docker container to perform the build steps. You can specify any Docker image that is accessible by Bitbucket, including private images if you specify credentials to access them. The container starts up and then runs the build steps in the order specified in your configuration file. One thing to note is that creating your own Docker image with all required tools and libraries for your build steps helps speed up build time.

The build steps specified in the configuration file are nothing more than shell commands that get executed on the Docker image. Therefore, you can run scripts, in any language supported by the Docker image you choose, as part of the build steps. These scripts can be stored either directly in your repository or in an Internet-accessible location such as Amazon S3. To support the launch of Bitbucket Pipelines, AWS has published sample scripts, using Python and the boto3 SDK, that help you get started on integrating with several AWS services, including AWS Lambda, AWS Elastic Beanstalk, AWS CodeDeploy, and AWS CloudFormation. You can try these samples out with three easy steps:

  1. Copy the Python script into your repository.
  2. Incorporate the bundling and execution of the script in your YAML configuration file.
  3. Configure any environment variables required by the scripts.
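To make the pattern concrete, here’s a minimal sketch (not one of the published AWS samples) of a Python script that a build step could run to push new code to an existing Lambda function with boto3. The function name, region, and archive path are placeholders, and AWS credentials are assumed to come from the environment variables you configure in step 3:

# deploy_lambda.py - illustrative only; the official AWS samples may differ.
# Pushes the zip archive produced earlier in the build step to an existing
# Lambda function. Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# environment variables configured in the Bitbucket Pipelines settings.
import boto3

FUNCTION_NAME = "my-function"    # placeholder function name
ZIP_PATH = "build/function.zip"  # placeholder path to the build artifact

def main():
    client = boto3.client("lambda", region_name="us-east-1")
    with open(ZIP_PATH, "rb") as artifact:
        response = client.update_function_code(
            FunctionName=FUNCTION_NAME,
            ZipFile=artifact.read(),
            Publish=True,  # publish a new version for every commit
        )
    print("Deployed version:", response["Version"])

if __name__ == "__main__":
    main()

A step in your bitbucket-pipelines.yml could then simply install boto3 and run a script like this after producing the zip archive.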

Detailed instructions on how to use these samples are specified in the README file in their repositories.  More information on using Bitbucket Pipelines can be found in Atlassian’s official documentation.

Check out Atlassian’s blog post on the Bitbucket Pipelines beta here.

 

How Can APN Partners Help You Migrate or Deploy XenApp & XenDesktop to the Cloud?

by Kate Miller | on | in APN Consulting Partners, APN Technology Partners, Networking | | Comments

Are you interested in moving your Citrix XenApp, XenDesktop and/or NetScaler workloads to the AWS Cloud?

We’ve assembled an AWS Accelerator with Advanced APN Technology Partner Citrix to help you plan and execute a successful trial migration while using your existing licenses. As my colleague Jeff Barr explains on the AWS Blog, “The migration process makes use of the new Citrix Lifecycle Management (CLM) tool. CLM includes a set of proven migration blueprints that will help you to move your existing deployment to AWS. You can also deploy the XenApp and XenDesktop Service using Citrix Cloud, and tap CLM to manage your AWS-based resources.”

There are a number of APN Partners who can help you work through the blueprint and tailor it to the needs of your organization. The AWS Accelerator Launch Services Partners include Accenture, Booz Allen Hamilton, CloudNation, Cloudreach, Connectria, Equinix (EPS Cloud), REAN Cloud, and SSI-Net. Our Launch Direct Connect partner is Level 3.

To learn more about the AWS Accelerator and how you can get started, visit the AWS Blog.

Announcing AWS Marketplace Metering Service Availability in EU (Ireland) Region

by Brad Lyman | on | in APN Consulting Partners, APN Technology Partners, AWS Marketplace | | Comments

Brad Lyman is a Sr. Product Manager – Technical Products at AWS Marketplace.

Earlier this year, AWS Marketplace announced the availability of AWS Marketplace Metering Service (MMS), which supports new pricing dimensions such as users, hosts, and data. After a seller integrates their product with AWS Marketplace Metering Service, that product will emit an hourly record capturing the usage of any single pricing dimension. The AWS Marketplace team is pleased to announce that these products are now available in the EU (Ireland) Region. Customers can now easily subscribe to software priced by users, hosts, or data, deploy into a data center in the EU, and only pay for what they use.

What does this mean for APN Partners who sell, or want to sell, on AWS Marketplace?

This release increases the number of customers you can reach with AWS Marketplace Metering Service. Sellers on AWS Marketplace can now make their products available to customers who use the EU (Ireland) Region for data sovereignty, latency reduction, or multi-region redundancy.

How do I get started?

If you already use AWS Marketplace to sell a product using the AWS Marketplace Metering Service, you can contact the AWS Marketplace Seller Ops team to make your product available in the EU (Ireland) Region. To register a new product that uses the AWS Marketplace Metering Service, take the following steps:

  • If you’re a new seller on AWS Marketplace, register by visiting AWS Marketplace, or go directly to: https://aws.amazon.com/marketplace/management/register/.
  • Next, you will have to register your product for a new type of usage dimension. On the form, indicate the dimension you want to meter.
  • Next, you’ll be able to acquire the specific productCode and usageDimension. When you register your product, you can download an SDK to help you format your metering records. You will have the option of downloading an SDK for any of the languages the AWS SDK supports.
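The SDK you download from the Seller Portal is the authoritative reference for formatting metering records, but as a rough illustration of the hourly record described above, a boto3 sketch might look like the following (the product code and dimension name are placeholders for the values assigned at registration):

# meter_usage_sketch.py - illustrative only; use the productCode and
# usageDimension values assigned when you registered your product.
from datetime import datetime, timezone

import boto3

def emit_hourly_record():
    client = boto3.client("meteringmarketplace", region_name="eu-west-1")
    response = client.meter_usage(
        ProductCode="your-product-code",  # placeholder
        Timestamp=datetime.now(timezone.utc),
        UsageDimension="users",           # placeholder dimension (users, hosts, data, ...)
        UsageQuantity=42,                 # usage measured over the past hour
        DryRun=False,
    )
    return response["MeteringRecordId"]

if __name__ == "__main__":
    print("Metering record:", emit_hourly_record())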

To learn more about selling software on any of the new consumption dimensions, including the steps to modify your product to use the AWS Marketplace Metering Service, visit the AWS Marketplace Seller Portal.

Upcoming Partner Webinars: Getting Started on APN Marketing Central

by Kate Miller | on | in APN Launches, AWS Marketing | | Comments

We recently announced a new marketing benefit for Standard tier and above APN Partners, APN Marketing Central, that provides marketing tools and resources that enable you to generate demand for your solutions on AWS. With APN Marketing Central, you can gain access to self-service marketing tools that allow you to quickly co-brand and launch solution-based campaigns.

To help you get started on APN Marketing Central, we are hosting three upcoming webinars. In these webinars, you’ll learn more about APN Marketing Central and how to get started with your first campaign.

 

Getting Started on APN Marketing Central – EMEA

Wednesday, May 18 @ 7am PDT, 10am EDT, 2pm GMT, 3pm BST, 4pm CEST

Register here »

Getting Started on APN Marketing Central – NAMER/LATAM

Wednesday, May 18 @ 10am PDT, 1pm EDT, 2pm BRT

Register here »

Getting Started on APN Marketing Central – APAC

Thursday, May 19 @ 9am SGT, 11am AEST

Register here »

 

To learn more about APN Marketing Central, click here.

 

APN Consulting Partner Case Study Spotlight

by Kate Miller | on | in APN Competencies, APN Competency Partner, APN Consulting Partners, APN Partner Highlight | | Comments

It’s no secret that the AWS partner ecosystem drives customer success across verticals and industries. Today, I want to call out a few great recent wins from some of our top APN Consulting Partners. Enjoy!

 

Amazon Aurora Migration 

Customer: Large renewable energy company

APN Partner: ClearScale, a Premier Consulting Partner and holder of multiple AWS Competencies, including Big Data, Marketing & Commerce, Mobile, and DevOps

Read more here >>


Direct Mortgage Co. Cloud Migration

Customer: Direct Mortgage Co.

APN Partner: REAN Cloud, a Premier Consulting Partner, AWS MSP, and holder of multiple AWS Competencies, including Life Sciences and DevOps

Read more here >>


DocuTAP – An Advanced HIPAA/HITECH Urgent Care Solution in the Cloud

Customer: DocuTAP

APN Partner: Connectria, an Advanced Consulting Partner, AWS MSP, and AWS Healthcare Competency Partner

Read more here >>


SCOR Velogica on AWS 

Customer: SCOR Velogica

APN Partner: 2nd Watch, a Premier Consulting Partner, AWS MSP, and holder of multiple AWS Competencies, including Microsoft SharePoint, Big Data, Life Sciences, Marketing & Commerce, and DevOps

Read more here >>


VSCO | Artifact Uprising

Customer: VSCO

APN Partner: DataPipe, a Premier Consulting Partner, AWS MSP, and holder of multiple AWS Competencies, including Microsoft SharePoint, Microsoft Exchange, and Oracle

Read more here >>

 

To read about more partner wins, visit our AWS Partner Success page.

Key Metrics for Amazon Aurora

by Kate Miller | on | in Amazon Aurora, Database, Partner Guest Post | | Comments

This is a guest post by John Matson of Datadog. An expanded version of this post is available on the Datadog blog. Datadog is an Advanced APN Technology Partner, and is a Certified AWS MSP Technology Partner.

Amazon Aurora is a MySQL-compatible database offered on Amazon RDS (Relational Database Service). In addition to a number of performance benefits, Aurora provides valuable metrics that are not available for other RDS database engines.

In this article we’ll highlight a few key metrics that can give you a detailed view of your database’s performance.

There are three ways to access metrics from Aurora: you can collect standard RDS metrics through Amazon CloudWatch, detailed system-level metrics via enhanced RDS monitoring, and numerous MySQL-specific metrics from the database engine. Standard RDS metrics are reported at one-minute intervals; the other metrics can be collected at higher time resolution. The nuts-and-bolts section of this post discusses how to collect all these metrics.

Selected query metrics

 

Metric description | CloudWatch name | MySQL name
Queries | Queries (per second) | Queries (count)
Reads | SelectThroughput (per second) | Com_select + Qcache_hits (count)
Writes | DMLThroughput (per second) | Com_insert + Com_update + Com_delete (count)
Read query latency, in milliseconds | SelectLatency | n/a
Write query latency, in milliseconds | DMLLatency | n/a

The first priority in monitoring is making sure that work is being done as expected. In the case of a database, that means monitoring how queries are being executed.

You can monitor total query throughput as well as the read/write breakdown by collecting metrics directly from CloudWatch or by summing native MySQL metrics from the database engine. In MySQL, reads increment one of two status variables (Com_select or Qcache_hits), depending on whether or not the read is served from the query cache. A write increments one of three status variables depending on whether it is an INSERT, UPDATE, or DELETE.
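If you want to pull these counters programmatically rather than from the mysql prompt, the following rough sketch (assuming the PyMySQL client library and placeholder connection details) samples the cumulative status variables twice, a minute apart, and reports read and write rates:

# query_throughput_sketch.py - rough illustration; the endpoint and
# credentials are placeholders for your Aurora instance.
import time

import pymysql

READ_VARS = ("Com_select", "Qcache_hits")
WRITE_VARS = ("Com_insert", "Com_update", "Com_delete")

def fetch_counters(conn):
    names = READ_VARS + WRITE_VARS
    placeholders = ", ".join(["%s"] * len(names))
    with conn.cursor() as cur:
        cur.execute("SHOW GLOBAL STATUS WHERE Variable_name IN (%s)" % placeholders, names)
        return {name: int(value) for name, value in cur.fetchall()}

def main():
    conn = pymysql.connect(
        host="instance-name.xxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        user="yourusername",
        password="yourpassword",
        port=3306,
    )
    first = fetch_counters(conn)
    time.sleep(60)  # sample the cumulative counters a minute apart
    second = fetch_counters(conn)

    reads = sum(second[v] - first[v] for v in READ_VARS)
    writes = sum(second[v] - first[v] for v in WRITE_VARS)
    print("reads/min: %d  writes/min: %d" % (reads, writes))

if __name__ == "__main__":
    main()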

In CloudWatch, all reads and writes are rolled into SelectThroughput and DMLThroughput, respectively, and their latencies are reported in the valuable Aurora-only metrics SelectLatency and DMLLatency.

For a deeper look into query performance, the performance schema stores lower-level statistics from the database server. More about the performance schema below.

Selected resource metrics

Metric description | CloudWatch name | Enhanced monitoring name | MySQL name
Read I/O operations per second | ReadIOPS | diskIO.readIOsPS | n/a
Write I/O operations per second | WriteIOPS | diskIO.writeIOsPS | n/a
Percent CPU utilized | CPUUtilization | cpuUtilization.total | n/a
Available RAM in gigabytes | FreeableMemory | memory.free | n/a
Network traffic to Aurora instance | NetworkReceiveThroughput (MB/s) | network.rx (packets) | n/a
Network traffic from Aurora instance | NetworkTransmitThroughput (MB/s) | network.tx (packets) | n/a
Open database connections | DatabaseConnections | n/a | Threads_connected
Failed connection attempts | LoginFailures (per second) | n/a | Aborted_connects (count)

As Baron Schwartz, co-author of High Performance MySQL, notes, a database needs four fundamental resources: CPU, memory, disk, and network. Metrics on all four fundamental resources are available via CloudWatch.

RDS now also offers enhanced monitoring that exposes detailed system-level metrics. With additional configuration, users can monitor load, disk I/O, processes, and more with very high time resolution.
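Enhanced monitoring data is delivered as JSON to CloudWatch Logs (in a log group named RDSOSMetrics, with one log stream per instance), so you can also read it yourself. The sketch below, which simply grabs the most recently active stream, shows the general idea; see the AWS documentation for the full record format:

# enhanced_monitoring_sketch.py - reads the latest enhanced monitoring sample
# from the RDSOSMetrics log group; stream selection here is simplified (in
# practice the stream name is the DB instance's resource ID).
import json

import boto3

def latest_os_metrics():
    logs = boto3.client("logs", region_name="us-east-1")
    streams = logs.describe_log_streams(
        logGroupName="RDSOSMetrics", orderBy="LastEventTime", descending=True, limit=1
    )["logStreams"]
    events = logs.get_log_events(
        logGroupName="RDSOSMetrics",
        logStreamName=streams[0]["logStreamName"],
        limit=1,
        startFromHead=False,
    )["events"]
    return json.loads(events[0]["message"])

if __name__ == "__main__":
    sample = latest_os_metrics()
    print("cpu total %:", sample["cpuUtilization"]["total"])
    print("free memory:", sample["memory"]["free"])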

Disk I/O metrics

The CloudWatch metrics ReadIOPS and WriteIOPS track how much your database is interacting with backing storage. If your storage volumes cannot keep pace with the volume of requests, you will start to see I/O operations queuing up, as reflected in the DiskQueueDepth metric.

CPU metrics

High CPU utilization is not necessarily a bad sign. But if your database is performing poorly while metrics for IOPS and network are in normal ranges, and while the instance appears to have sufficient memory, the CPUs of your chosen instance type may be the bottleneck.

Memory metrics

Databases perform best when most of the working set of data can be held in memory. For this reason, you should monitor FreeableMemory to ensure that your database instance is not memory-constrained.

Network metrics

For Aurora, the NetworkReceiveThroughput and NetworkTransmitThroughput metrics track only network traffic to and from clients, not traffic between the database instances and storage volumes.

Connection metrics

Aurora has a configurable connection limit, which can be checked or modified by navigating to the RDS console and selecting the parameter group that your RDS instance belongs to.
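If you prefer to check the limit from a script, a sketch along these lines reads the max_connections parameter with boto3 (the parameter group name is a placeholder; for Aurora the default value is a formula based on the instance class memory):

# connection_limit_sketch.py - illustrative only.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

paginator = rds.get_paginator("describe_db_parameters")
for page in paginator.paginate(DBParameterGroupName="my-aurora-parameter-group"):
    for param in page["Parameters"]:
        if param["ParameterName"] == "max_connections":
            print("max_connections:", param.get("ParameterValue", "<default formula>"))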

If your server reaches its connection limit and starts to refuse connections, it will increment the CloudWatch metric LoginFailures, as well as the similar MySQL metric Aborted_connects and the more specific MySQL Connection_errors_max_connections counter.

Collecting Aurora metrics

As mentioned at the outset, Aurora users can access metrics from Amazon CloudWatch and many more from the MySQL-compatible database engine. Below we’ll show you how to collect both CloudWatch and engine metrics for a comprehensive view. To collect and correlate all your metrics, you can use a monitoring tool that integrates both with CloudWatch and with the database instance itself. The final part of this post details how to monitor Aurora with Datadog, which will also allow you to monitor the new suite of RDS enhanced metrics. To monitor enhanced metrics on another platform, consult the AWS documentation.

Collecting CloudWatch metrics

Below we’ll walk through two ways of retrieving metrics from CloudWatch:

  • Using the AWS Management Console
  • Using the command line interface

Using the AWS Console

The AWS Console allows you to view recent metrics and set up simple alerts on metric thresholds. In the CloudWatch console, select RDS from the list of services and click on “Per-Database Metrics” to see your available metrics:

Just select the checkbox next to the metrics you want to visualize, and they will appear in the graph at the bottom of the console.

Using the command line interface

To query RDS metrics from the command line, you need to install the CloudWatch command line tool. You can then view your metrics with simple queries like this one to check the SelectLatency metric across a one-hour window:

mon-get-stats SelectLatency \
  --namespace="AWS/RDS" \
  --dimensions="DBInstanceIdentifier=instance-name" \
  --statistics Maximum \
  --start-time 2016-02-18T17:00:00 \
  --end-time 2016-02-18T18:00:00

Full documentation for the mon-get-stats command is available here.
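If you prefer a general-purpose SDK to the legacy CLI tool, a roughly equivalent query with boto3 (instance name and time window are placeholders) looks like this:

# cloudwatch_query_sketch.py - the same SelectLatency query expressed with boto3.
from datetime import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="SelectLatency",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "instance-name"}],
    StartTime=datetime(2016, 2, 18, 17, 0),
    EndTime=datetime(2016, 2, 18, 18, 0),
    Period=60,  # one datapoint per minute
    Statistics=["Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])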

Collecting database engine metrics

To get a deeper look at Aurora performance you will often need metrics from the database instance itself. Here we cover three methods of metric collection:

  • Querying server status variables
  • Querying the performance schema and sys schema
  • Using the MySQL Workbench GUI

Connecting to your RDS instance

The design of RDS means that you cannot directly access the machines running your database, as you could if you manually installed MySQL or MariaDB on a standalone server. That said, you can connect to the database using standard tools, provided that the security group for your Aurora instance allows it.

If Aurora accepts traffic only from inside its security group, you can launch an EC2 instance in that security group, and then apply a second security group rule to the EC2 instance to accept inbound SSH traffic. By SSHing to the EC2 instance, you can then connect to Aurora using the mysql command line tool:

mysql -h instance-name.xxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u yourusername -p

Your instance’s endpoint (ending in rds.amazonaws.com) can be found in the RDS console.

Querying server status variables

Once you connect to your database instance, you can query any of the hundreds of metrics available, known as server status variables. To check metrics on connection errors, for instance:

mysql> SHOW GLOBAL STATUS LIKE '%Connection_errors%';

Querying the performance schema and sys schema

Server status variables largely capture high-level server activity. To collect metrics at the query level—for instance, to link latency or error metrics to individual queries—you can use the performance schema, which captures detailed statistics on server events.

Enabling the performance schema

Set the performance_schema parameter to 1 in the Aurora instance’s parameter group using the AWS console. This change requires an instance reboot.
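If you’d rather script the change than click through the console, a minimal sketch with boto3 (the parameter group name is a placeholder) looks like this; the reboot is still required because performance_schema is a static parameter:

# enable_performance_schema_sketch.py - illustrative only.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_parameter_group(
    DBParameterGroupName="my-aurora-parameter-group",  # placeholder
    Parameters=[
        {
            "ParameterName": "performance_schema",
            "ParameterValue": "1",
            "ApplyMethod": "pending-reboot",  # applied at the next instance reboot
        }
    ],
)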

Once the performance schema is enabled, server metrics will be stored in tables in the performance_schema database, which can be queried with ordinary SELECT statements.
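For example, to see which statement digests have accumulated the most execution time, you could run something like the sketch below (again assuming PyMySQL and placeholder connection details; the timer columns are reported in picoseconds):

# digest_latency_sketch.py - top statements by total execution time.
import pymysql

QUERY = """
    SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT / 1e12 AS total_latency_s
      FROM performance_schema.events_statements_summary_by_digest
     ORDER BY SUM_TIMER_WAIT DESC
     LIMIT 10
"""

conn = pymysql.connect(
    host="instance-name.xxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="yourusername",
    password="yourpassword",
    port=3306,
)
with conn.cursor() as cur:
    cur.execute(QUERY)
    for digest_text, count, total_latency_s in cur.fetchall():
        print("%10d calls  %10.2f s  %s" % (count, total_latency_s, (digest_text or "")[:60]))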

Using the sys schema

Though you can query the performance schema directly, it is usually easier to extract meaningful metrics from the tables in the sys schema.

To install the sys schema, first clone the GitHub repo to a machine that can connect to your Aurora instance and position yourself within the newly created directory:

$ git clone https://github.com/mysql/mysql-sys
$ cd mysql-sys

Then, create an Aurora-compatible file for the sys schema:

$ ./generate_sql_file.sh -v 56 -b -u CURRENT_USER

Finally, load the file into Aurora, using the filename returned in the step above:

$ mysql -h instance-name.xxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u yourusername -p < gen/sys_1.5.0_56_inline.sql

Now you can connect to Aurora using the mysql command line tool to access the sys schema’s many tables and functions. For instance, to summarize all the statements executed, along with their associated latencies, you would run:

mysql> select * from sys.user_summary_by_statement_type;

Using the MySQL Workbench GUI

MySQL Workbench is a free application for managing and monitoring MySQL databases. It provides a high-level performance dashboard, as well as a simple interface for browsing performance metrics (using the views provided by the sys schema).

If you have configured an EC2 instance to communicate with Aurora, you can connect MySQL Workbench to your Aurora instance via SSH tunneling:

You can then view recent metrics on the performance dashboard or click through the statistics available from the sys schema:

Monitor Aurora Using Datadog

You’ve now seen that you can easily collect metrics from CloudWatch and from the database engine itself for ad hoc performance checks. For a more comprehensive view of your database’s health and performance, however, you need a monitoring system that can correlate CloudWatch metrics with database engine metrics, that lets you see historical trends with full granularity, and that provides flexible visualization and alerting functionality. This post will show you how to connect Aurora to Datadog in two steps:

  • Connect Datadog to CloudWatch
  • Integrate Datadog with Aurora’s database engine

You can also use Datadog to collect, graph, and alert on the new enhanced monitoring metrics that are available for RDS. Full instructions are available in this post.

Connect Datadog to CloudWatch

To start monitoring metrics from RDS, you just need to configure our CloudWatch integration. Create a new user via the IAM console in AWS and grant that user read-only permissions to these three services, at a minimum:

  1. EC2
  2. CloudWatch
  3. RDS

You can attach managed policies for each service by clicking on the name of your user in the IAM console and selecting “Permissions”.

Once these settings are configured within AWS, create access keys for your read-only user and enter those credentials in the Datadog app.
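If you prefer to script that setup, here’s a minimal sketch with boto3; the user name is a placeholder, and the three AWS managed policies shown are one reasonable way to satisfy the read-only requirement:

# datadog_readonly_user_sketch.py - illustrative only.
import boto3

iam = boto3.client("iam")
USER = "datadog-readonly"  # placeholder user name

iam.create_user(UserName=USER)
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
    "arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
    "arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess",
):
    iam.attach_user_policy(UserName=USER, PolicyArn=policy_arn)

# These keys are what you enter in the Datadog app.
key = iam.create_access_key(UserName=USER)["AccessKey"]
print("AccessKeyId:", key["AccessKeyId"])
print("SecretAccessKey:", key["SecretAccessKey"])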

Integrate Datadog with Aurora’s database engine

To access all the metrics available for Aurora, you can monitor the database instance itself in addition to collecting standard metrics from CloudWatch.

Installing the Datadog Agent on EC2

Datadog’s Agent integrates seamlessly with MySQL and compatible databases to gather and report key performance metrics. Where the same metrics are available through the Datadog Agent and through standard CloudWatch metrics, the higher-resolution Agent metrics should be preferred. Installing the Agent usually requires just a single command.

Because you cannot install anything on an RDS database instance, you must run the Agent on another machine, such as an EC2 instance in the same security group.

Configuring the Agent for RDS

Complete instructions for capturing Aurora metrics with the Agent are available here. Experienced Datadog users will note that monitoring Aurora is just like monitoring MySQL locally, with two small configuration exceptions:

  1. Provide the Aurora instance endpoint as the server name (e.g., instance_name.xxxxxxx.us-east-1.rds.amazonaws.com) instead of localhost
  2. Tag your Aurora metrics with the DB instance identifier (dbinstanceidentifier:instance_name) to separate database metrics from the host-level metrics of your EC2 instance

Unifying your metrics

Once you set up the Agent, all the metrics from your database instance will be uniformly tagged with dbinstanceidentifier:instance_name for easy retrieval, whether those metrics come from CloudWatch or from the database engine itself.

View your Aurora dashboard

Once you have integrated Datadog with RDS, a comprehensive dashboard called “Amazon – RDS (Aurora)” will appear in your list of integration dashboards. The dashboard gathers the key metrics highlighted at the start of this post and more.

You can filter your RDS metrics by selecting a particular dbinstanceidentifier in the upper left of the dashboard.

Enhanced monitoring dashboard

If you have set up enhanced monitoring for Aurora, you can also access a specialized RDS Enhanced Metrics dashboard in Datadog.

Monitor all the things

Monitoring Amazon Aurora gives you critical visibility into your database’s health and performance. Plus, Datadog integrates with 100+ infrastructure technologies, so you can correlate Aurora performance with metrics and events from the rest of your stack. If you don’t yet have a Datadog account, you can sign up for a free trial here.


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Taking NAT to the Next Level in AWS CloudFormation Templates

by Santiago Cardenas | on | in AWS CloudFormation, AWS Partner Solutions Architect (SA) Guest Post, NAT, Networking | | Comments

Santiago Cardenas is a Partner Solutions Architect (SA) at AWS.  

So you’re already creating your cloud infrastructure using AWS CloudFormation to automate your deployments through infrastructure-as-code. As you design the virtual network, you’re probably already using public and private subnets within your Amazon Virtual Private Cloud (Amazon VPC), given the best practices of subnet isolation for front-end and back-end resources. A public subnet includes a direct route to the Internet through the Internet gateway that is attached to the VPC. A private subnet, on the other hand, has no such route and must hop through another device, such as a Network Address Translation (NAT) instance, to get to the Internet. Typically, you’ll use a Linux instance that is configured to act as a NAT device by configuring IPv4 forwarding and using iptables for IP masquerading. The challenge, however, is ensuring that these instances are highly available, scale with traffic, and don’t become the bottleneck or single point of failure.

Cue in NAT gateways!

A managed NAT gateway has built-in redundancy at the Availability Zone level and a simple management console and API. In addition to this and other benefits, it can be easily configured through AWS CloudFormation, making it a great choice for new and existing templates.

A typical CloudFormation template that uses NAT instances will have the following resources of interest:

{
    ...
    "Resources" : {
        ...
        "NAT1EIP" : {
            "Type" : "AWS::EC2::EIP",
            "Properties" : {
                "Domain" : "vpc",
                "InstanceId" : {
                    "Ref" : "NAT1"
                }
            }
        },
        "NAT1" : {
            "Type" : "AWS::EC2::Instance",
            "DependsOn" : "VPCGatewayAttachment",
            "Properties" : {
                "ImageId" : {
                    "Fn::FindInMap" : [
                        "AWSRegionArchNatAMI",
                        {
                            "Ref" : "AWS::Region"
                        },
                        {
                            "Fn::FindInMap" : [
                                 "AWSInstanceType2Arch",
                                 {
                                     "Ref" : "NATInstanceType"
                                 },
                                 "Arch"
                            ]
                        }
                    ]
                },
                "InstanceType" : {
                    "Ref" : "NATInstanceType"
                },
                "Tags" : [
                    {
                        "Key" : "Name",
                        "Value" : "NAT1"
                    }
                ],
                "NetworkInterfaces" : [
                    {
                        "GroupSet" : [
                            {
                                "Ref" : "NATSecurityGroup"
                            }
                        ],
                        "AssociatePublicIpAddress" : "true",
                        "DeviceIndex" : "0",
                        "DeleteOnTermination" : "true",
                        "SubnetId" : {
                            "Ref" : "PublicSubnetAZ1"
                        }
                    }
                ],
                "KeyName"           : {
                    "Ref" : "KeyPairName"
                },
                "SourceDestCheck"   : "false"
           }
       },
       "PrivateRoute1" : {
           "Type" : "AWS::EC2::Route",
           "Properties" : {
               "RouteTableId" : {
                   "Ref" : "PrivateRouteTable"
               },
               "DestinationCidrBlock" : "0.0.0.0/0",
               "InstanceId" : {
                   "Ref" : "NAT1"
               }
           }
       },
       ...
   }
   ...
}

This example snippet includes the resources that are directly involved in a NAT instance deployment:

  • The Elastic IP address that is to be attached to the NAT instance.
  • The NAT instance itself, coming from an Amazon Linux NAT AMI (Amazon Machine Image).
  • The route to the Internet via the NAT instance. This route is later added to the route table associated with the private subnets in the same Availability Zone.

We also recommend that your architecture include at least two Availability Zones, which means that you would include the code above at least twice in your CloudFormation template (once per Availability Zone).

Modifying your CloudFormation template to discontinue the use of NAT instances and consume NAT gateways is straightforward. You would:

  • Allocate an Elastic IP address. However, it would not be directly assigned to an instance.
  • Create a NAT gateway resource.
  • Create a route to the Internet, but via the NAT gateway instead of going through a NAT instance. As in the code for NAT instances, this route would then be associated with the route table for the private subnets in the same Availability Zone.

The updated example would look something like this:

{
    ...
    "Resources" : {
        ...
        "NATGateway1EIP" : {
            "Type" : "AWS::EC2::EIP",
            "Properties" : {
                "Domain" : "vpc"
            }
        },
        "NATGateway1" : {
            "Type" : "AWS::EC2::NatGateway",
            "DependsOn" : "VPCGatewayAttachment",
            "Properties" : {
                "AllocationId" : {
                    "Fn::GetAtt" : [
                        "NATGateway1EIP",
                        "AllocationId"
                    ]
                },
                "SubnetId" : {
                    "Ref" : "PublicSubnetAZ1"
                }
            }
        },
        "PrivateRoute1" : {
            "Type" : "AWS::EC2::Route",
            "Properties" : {
                "RouteTableId" : {
                    "Ref" : "PrivateRouteTable1"
                },
                "DestinationCidrBlock" : "0.0.0.0/0",
                "NatGatewayId" : {
                    "Ref" : "NATGateway1"
                }
            }
        },
        ...
    }
    ...
}

As you can see from the updated example, changing your CloudFormation templates to use a NAT gateway is a modest update that improves the architecture of the VPC while removing the burden of configuring, monitoring, and scaling NAT instances. For more information about the NAT gateway resource type, see the AWS CloudFormation User Guide.

There are a few additional things to keep in mind about NAT gateways at this time:

  • They are available in most (but not all) regions. For supported regions, see the NAT gateway section of the Amazon VPC Pricing page.
  • They only handle outbound traffic.
  • They don’t support complex iptables rules.
  • If you used the NAT instance as a bastion, you will need to stand up a separate bastion now. This is a better practice anyway, because it properly separates instance roles and duties, thereby allowing finer-grained security control.

For further reading, have you seen our recent posts on Amazon VPC for on-premises network engineers? Check out part one and part two.

Last Chance to Be the First to Take Our New Data Warehousing on AWS Course!

by Kate Miller | on | in AWS Training and Certification, Big Data | | Comments

Next week, AWS will be delivering a private beta of the new Data Warehousing on AWS course, which will be publicly released later this year. Data Warehousing on AWS is a three-day course for database architects, database administrators, database developers, data analysts and scientists, and anybody else who needs to understand how to implement a data warehousing solution in AWS.

This instructor-led preview class is scheduled May 10-12 in Seattle, WA.

Learn more and register here >>

For more information on AWS Training and Certification for APN Partners, click here.

2016 AWS Partner Summit Singapore – Recap

by Kate Miller | on | in APN Competencies, APN Consulting Partners, APN Partner Highlight, APN Technology Partners, ASEAN, AWS Events, AWS for Windows, AWS IoT, AWS Marketing, AWS Marketplace, AWS Training and Certification, Big Data, Big Data Competency, Cloud Managed Services, DevOps on AWS, Migration, Networking, SaaS on AWS | | Comments

Last Friday, the AWS ASEAN partner team hosted the AWS Partner Summit – Singapore. The event was intended to provide partners with valuable insight into key trends on AWS, and to help partners connect both with one another and with AWS team members. I was fortunate enough to be a part of the event, and had a great opportunity to connect with a number of consulting and technology partners from across the region. Similar to the Partner Summit – Sydney, the event featured guest speakers from AWS, APN member firms, and AWS customer firms.

Partner Keynote

Cam McNaught, Head of ASEAN Channels & Alliances, kicked off the event. He was joined by Terry Wise, WW VP of Channels & Alliances, AWS, who spoke to partners about some of the keys to partner success we’ve seen on AWS.

Terry also discussed key trends representing a large opportunity for Partners on AWS, and Premier Consulting Partner BlazeClan took the stage to discuss a successful cloud migration the company completed for a large company in the region.

Sessions

Three sessions were hosted across different topics:

  • Succeeding with AWS by Delivering High Value Services, presented by Darren Thayre, AWS Professional Services
  • Building a Big Data and Analytics Practice – Partner Opportunities with AWS, presented by Craig Stires, Head of Big Data and Analytics, APAC
  • Beyond the Fridge: Every Thing Will Be Connected, presented by Dr. Werner Vogels, CTO, Amazon.com

The sessions featured guest speakers from Seaco and Trax. One of my favorite quotes came from ‘Succeeding with AWS by Delivering High Value Services’, when Brandon O’Neill, Director of IT Services, Seaco, explained what the company was looking for in a partner: “We were looking for a mentor to help guide us onto the cloud.”

AWS Partner Recognition

The ASEAN team recognized three APN Partners in three different categories: AWS Rising Star Partner, AWS Innovation Partner, and AWS Partner of the Year ASEAN 2015. Congratulations to all three winners!

Networking

I loved one of the main approaches the team took to bring partners together. Throughout the event, folks had the opportunity to grab coffee…but they couldn’t grab just a single coffee. Coffees were paired together, and partners were meant to grab two coffees and then ‘partner up’, share the coffees, and strike up a conversation. The team also set up networking booths in the main hall for partners to learn more about key AWS services and programs, including AWS Partner Marketing and AWS Training & Certification.

 

More Information

Did you miss the Partner Summit? We live-tweeted the day from @AWS_Partners – follow us and revisit the event minute by minute!