AWS Blog

Now Available – I3 Instances for Demanding, I/O Intensive Applications

by Jeff Barr | in Amazon EC2, Launch

On the first day of AWS re:Invent I published an EC2 Instance Update and promised to share additional information with you as soon as I had it.

Today I am happy to be able to let you know that we are making six sizes of our new I3 instances available in fifteen AWS regions! Designed for I/O intensive workloads and equipped with super-efficient NVMe SSD storage, these instances can deliver up to 3.3 million IOPS at a 4 KB block size and up to 16 GB/second of sequential disk throughput. This makes them a great fit for any workload that requires high throughput and low latency, including relational databases, NoSQL databases, search engines, data warehouses, real-time analytics, and disk-based caches. When compared to the I2 instances, I3 instances deliver storage that is less expensive and denser, with the ability to deliver substantially more IOPS and more network bandwidth per CPU core.

The Specs
Here are the instance sizes and the associated specs:

Instance Name | vCPU Count | Memory    | Instance Storage (NVMe SSD) | Price/Hour
i3.large      | 2          | 15.25 GiB | 0.475 TB                    | $0.15
i3.xlarge     | 4          | 30.5 GiB  | 0.950 TB                    | $0.31
i3.2xlarge    | 8          | 61 GiB    | 1.9 TB                      | $0.62
i3.4xlarge    | 16         | 122 GiB   | 3.8 TB (2 disks)            | $1.25
i3.8xlarge    | 32         | 244 GiB   | 7.6 TB (4 disks)            | $2.50
i3.16xlarge   | 64         | 488 GiB   | 15.2 TB (8 disks)           | $4.99

The prices shown are for On-Demand instances in the US East (Northern Virginia) Region; see the EC2 pricing page for more information.

I3 instances are available in On-Demand, Reserved, and Spot form in the US East (Northern Virginia), US West (Oregon), US West (Northern California), US East (Ohio), Canada (Central), South America (São Paulo), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Sydney), and AWS GovCloud (US) Regions. You can also use them as Dedicated Hosts and as Dedicated Instances.

These instances support Hardware Virtualization (HVM) AMIs only, and must be run within a Virtual Private Cloud. In order to benefit from the performance made possible by the NVMe storage, you must run one of the following operating systems:

  • Amazon Linux AMI
  • RHEL – 6.5 or better
  • CentOS – 7.0 or better
  • Ubuntu – 16.04 or 16.10
  • SUSE 12
  • SUSE 11 with SP3
  • Windows Server 2008 R2, 2012 R2, and 2016

The I3 instances offer up to 8 NVMe SSDs. In order to achieve the best possible throughput and to get as many IOPS as possible, you can stripe multiple volumes together, or spread the I/O workload across them in another way.
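For example, on Linux you could build a RAID 0 array across the NVMe devices with mdadm. This is a minimal sketch; the device names, array name, and mount point below are assumptions that will vary by instance size and AMI:

# Stripe the four NVMe instance store devices of an i3.8xlarge into one array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Create a file system on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data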

Each vCPU (Virtual CPU) is a hardware hyperthread on an Intel E5-2686 v4 (Broadwell) processor running at 2.3 GHz. The processor supports the AVX2 instructions, along with Turbo Boost and NUMA.

Go For Launch
The I3 instances are available today in fifteen AWS regions and you can start to use them right now.

Jeff;

 

Launch: AWS Elastic Beanstalk launches support for Custom Platforms

by Tara Walker | in AWS Elastic Beanstalk, Launch

There is excitement in the air! I am thrilled to announce that customers can now create custom platforms in AWS Elastic Beanstalk. With this latest release of the AWS Elastic Beanstalk service, developers and system administrators can create and manage their own custom Elastic Beanstalk platform images, giving them complete control over the instance configuration. As you know, AWS Elastic Beanstalk is a service for deploying and scaling web applications and services on common web platforms. You upload your code, and the service automatically handles the deployment, capacity provisioning, load balancing, and auto-scaling.

Previously, AWS Elastic Beanstalk provided a set of pre-configured platforms built around various programming languages, Docker containers, and web containers. Elastic Beanstalk would take the selected configuration and provision the software stack and resources needed to run the targeted application on one or more Amazon EC2 instances. With this latest release, you can instead create a platform from your own customized Amazon Machine Image (AMI). The custom image can be built from one of the supported operating systems: Ubuntu, RHEL, or Amazon Linux. To simplify the creation of these specialized Elastic Beanstalk platforms, machine images are created using the Packer tool. Packer is an open source tool that runs on all major operating systems and is used for creating machine and container images for multiple platforms from a single configuration.

Custom platforms allow you to manage and enforce standardization and best practices across your Elastic Beanstalk environments. For example, you can now create your own platforms on Ubuntu or Red Hat Enterprise Linux and customize your instances with languages and frameworks not currently supported by Elastic Beanstalk, such as Rust or Sinatra.

Creating a Custom Platform

To create your custom platform, you start with a Packer template. After the Packer template is created, you create a platform definition file (platform.yaml), which defines the builder type for the platform, along with platform hooks and script files. With these files in hand, you create a zip archive, called a platform definition archive, to package the files, associated scripts, and any additional items needed to build your Amazon Machine Image (AMI). A basic folder structure for building a platform definition archive looks as follows:

|-- builder                Contains files used by Packer to create the custom platform
|-- custom_platform.json   Packer template
|-- platform.yaml          Platform definition file
|-- ReadMe.txt             Describes the sample
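If you assemble the archive by hand, zipping the contents of that folder is enough; this is just an illustration, and the archive name is arbitrary:

zip -r my-custom-platform.zip builder custom_platform.json platform.yaml ReadMe.txt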

The best way to take a deeper look at the new custom platform feature of Elastic Beanstalk is to put it to the test and build a custom AMI and platform using Packer. To start the journey, I am going to build a custom Packer template. I go to the Packer site, download the Packer tool, and ensure that the binary is in my environment path.

Now let’s build the template. The Packer template is a configuration file in JSON format used to define the image we want to build. I will open up Visual Studio and use it as the IDE to create a new JSON file for my Packer template.

The Packer template format has a set of keys designed for the configuration of various components of the image. The keys are:

  • variables (optional): one or more key/value strings defining user variables
  • builders (required): array that defines the builders used to create machine images and configuration of each
  • provisioners (optional): array defining provisioners to be used to install and configure software for the machine image
  • description (optional): string providing a description of template
  • min_packer_version (optional): string of minimum Packer version that is required to parse the template.
  • post-processors (optional): array defining post-processing steps to take once image build is completed

If you want a great example of a Packer template that can be used to create a custom image for a custom Elastic Beanstalk platform, the Elastic Beanstalk documentation has samples of valid Packer templates for your review.

In the template, I will add a provisioner to run a build script to install Node with information about the script location and the command(s) needed to execute the script. My completed JSON file, tara-ebcustom-platform.json, looks as follows:
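As a rough sketch, a template along these lines might look like the following (this is not the actual file; the AMI ID, region, and instance type are placeholder assumptions):

{
  "variables": {
    "platform_name": "tara-ebcustom-platform"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ec2-user",
      "ami_name": "{{user `platform_name`}}-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "builder",
      "destination": "/tmp/"
    },
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo {{ .Path }}",
      "script": "builder/eb_builder.sh"
    }
  ]
}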

Now that I have my template built, I will validate the template with Packer on the command line.
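The validation step itself is a single command (assuming the template is in the current directory):

packer validate tara-ebcustom-platform.json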

 

What is cool is that my Packer template validation fails because, in the template, I specify a script, eb_builder.sh, that is located in a builder folder. However, I have not yet created the builder folder or the shell script noted in my Packer template. Confused about why I am happy that validation failed? I consider this great news because I can catch errors in my template, and missing files needed to build my machine image, before uploading anything to the Elastic Beanstalk service. Now I will fix these errors by creating the folder and the builder script.

Using the sample scripts provided in the Elastic Beanstalk documentation, I build my Dev folder with the structure noted above. Within the context of Elastic Beanstalk custom platform creation, the scripts used from the sample are called platform hooks. Platform hooks are run during lifecycle events and in response to management operations.

An example of the builder script used in my custom platform implementation is shown below:
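As a rough sketch, a builder script for a Node.js platform on Amazon Linux might do something like the following (package sources and paths are assumptions):

#!/bin/bash
# eb_builder.sh - provision the image for the custom Node.js platform
set -e

# Install nginx (reverse proxy) and Node.js (application runtime)
yum install -y nginx
curl --silent --location https://rpm.nodesource.com/setup_4.x | bash -
yum install -y nodejs

# Move the platform hooks and scripts staged by the Packer file provisioner
# into the locations that Elastic Beanstalk expects on the image
mkdir -p /opt/elasticbeanstalk
cp -r /tmp/builder/hooks /opt/elasticbeanstalk/hooks
chmod -R +x /opt/elasticbeanstalk/hooks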

My builder folder structure holds the builder script, platform hooks, and other scripts, referred to as platform scripts, used to build the custom platform. Platform scripts are shell scripts that you can use to get environment variables and other information in platform hooks. The platform hooks are located in a subfolder of my builder folder and follow the structure shown below:
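Roughly, the hooks are grouped by lifecycle event, like this (a sketch based on the documented hook categories):

hooks/
|-- appdeploy
|   |-- pre
|   |-- enact
|   |-- post
|-- configdeploy
|   |-- pre
|   |-- enact
|   |-- post
|-- restartappserver
|-- preinit
|-- postinit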

All of these items (Packer template, platform.yaml, builder script, platform hooks, setup and config files, and platform scripts) make up the platform definition contained in the builder folder you see below.

I will leverage the platform.yaml provided in the sample .yaml file and change it as appropriate for my Elastic Beanstalk custom platform implementation. The result is the following completed platform.yaml file:

version: "1.0"

provisioner:
  type: packer
  template: tara-ebcustom-platform.json
  flavor: amazon

metadata:
  maintainer: TaraW
  description: Tara Sample NodeJs Container.
  operating_system_name: Amazon linux
  operating_system_version: 2016.09.1
  programming_language_name: ECMAScript
  programming_language_version: ECMA-262
  framework_name: NodeJs
  framework_version: 4.4.4
  app_server_name: "none"
  app_server_version: "none"

option_definitions:
  - namespace: "aws:elasticbeanstalk:container:custom:application"
    option_name: "NPM_START"
    description: "Default application startup command"
    default_value: "node application.js"

Now, I will validate my Packer template again on the command line.

 

All that is left for me is to create the platform using the EB CLI. This functionality is available with EB CLI version 3.10.0 or later; you can install the EB CLI by following the installation instructions in the Elastic Beanstalk developer guide.
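If you already use pip, one common way to get a recent version is:

pip install --upgrade awsebcli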

To use the EB CLI to create a custom platform, I select the folder containing the files extracted from the platform definition archive. Within the context of that folder, I need to perform the following steps:

  1. Use the EB CLI to initialize the platform repository and follow the prompts
    • eb platform init or ebp init
  2. Launch the Packer environment with the template and scripts
    • eb platform create or ebp create
  3. Validate that an IAM role was successfully created for the instance. This instance profile role is created automatically during the platform creation process.
    • aws-elasticbeanstalk-custom-platform-ec2-role
  4. Verify status of platform creation
    • eb platform status or ebp status

I will now go to the command line and initialize the platform by running the eb platform init command.

The next step is to create the custom platform using the EB CLI, so I’ll run the shortened command, ebp create, in my platform folder.
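Putting the EB CLI steps together, the whole flow from inside my platform folder looks roughly like this:

# Initialize the platform repository (prompts for region and platform name)
eb platform init

# Launch the Packer builder environment and create the first platform version
ebp create

# Check on the build while Packer runs
ebp status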

Success! A custom Elastic Beanstalk platform has been created, and we can deploy this platform for our web solution. It is important to remember that when you create a custom platform, you launch a single-instance environment without an EIP that runs Packer; you can reuse this environment for multiple platforms, as well as multiple versions of each platform. Also note that custom platforms are region-specific, so if you use Elastic Beanstalk in multiple regions, you must create your platforms separately in each region.

Deploying Custom Platforms

With the custom platform now created, you can deploy an application either via the AWS CLI or via the AWS Elastic Beanstalk Console. Creating an environment with an existing custom platform is only available in the new environment wizard.

On the Create a new environment page, choose the Custom Platform radio option under Platform, and then select the custom platform you previously created from the list of available custom platforms.

Additionally, the EB CLI can be used to deploy the latest version of your custom platform. Using the command line to deploy the previously created custom platform would look as follows:

  • eb deploy -p tara-ebcustom-platform

Summary

You can get started building your own custom platforms for Elastic Beanstalk today. To learn more about Elastic Beanstalk or custom platforms, visit the AWS Elastic Beanstalk product page or the Elastic Beanstalk developer guide.

 

Tara

 

 

AWS Marketplace Adds Healthcare & Life Sciences Category

by Ana Visneski | in AWS Marketplace, Launch

Wilson To and Luis Daniel Soto are our guest bloggers today, telling you about a new industry vertical category that is being added to the AWS Marketplace. Check it out!

-Ana


AWS Marketplace is a managed and curated software catalog that helps customers innovate faster and reduce costs by making it easy to discover, evaluate, procure, immediately deploy, and manage third-party software solutions. To continue supporting our customers, we’re now adding a new industry vertical category: Healthcare & Life Sciences.


This new category brings together best-of-breed software tools and solutions from our growing vendor ecosystem that have been adapted to, or built from the ground up to, serve the healthcare and life sciences industry.

Healthcare
Within the AWS Marketplace HCLS category, you can find solutions for clinical information systems, population health and analytics, and health administration and compliance services. Some offerings include:

  1. Allgress GetCompliant HIPAA Edition – Reduce the cost of compliance management and adherence by providing compliance professionals improved efficiency by automating the management of their compliance processes around HIPAA.
  2. ZH Healthcare BlueEHS – Deploy a customizable, ONC-certified EHR that empowers doctors to define their clinical workflows and treatment plans to enhance patient outcomes.
  3. Dicom Systems DCMSYS CloudVNA – DCMSYS Vendor Neutral Archive offers a cost-effective means of consolidating disparate imaging systems into a single repository, while providing enterprise-wide access and archiving of all medical images and other medical records.

Life Sciences

  1. National Instruments LabVIEW – Graphical system design software that provides scientists and engineers with the tools needed to create and deploy measurement and control systems through simple yet powerful networks.
  2. NCBI Blast – Analysis tools and datasets that allow users to perform flexible sequence similarity searches.
  3. Acellera AceCloud – Innovative tools and technologies for the study of biophysical phenomena. Acellera leverages the power of AWS Cloud to enable molecular dynamics simulations.

Healthcare and life sciences companies deal with huge amounts of data, and many of their data sets are some of the most complex in the world. From physicians and nurses to researchers and analysts, these users are typically hampered by their current systems. Their legacy software cannot let them efficiently store or effectively make use of the immense amounts of data they work with. And protracted and complex software purchasing cycles keep them from innovating at speed to stay ahead of market and industry trends. Data analytics and business intelligence solutions in AWS Marketplace offer specialized support for these industries, including:

  • Tableau Server – Enable teams to visualize across costs, needs, and outcomes at once to make the most of resources. The solution helps hospitals identify the impact of evidence-based medicine, wellness programs, and patient engagement.
  • TIBCO Spotfire and JasperSoft – TIBCO provides technical teams powerful data visualization, data analytics, and predictive analytics for Amazon Redshift, Amazon RDS, and popular database sources via AWS Marketplace.
  • Qlik Sense Enterprise – Qlik enables healthcare organizations to explore clinical, financial, and operational data through visual analytics to discover insights that lead to improvements in care, reduced costs, and higher value delivered to patients.

With more than 5,000 listings across more than 35 categories, AWS Marketplace simplifies software licensing and procurement by enabling customers to accept user agreements, choose pricing options, and automate the deployment of software and associated AWS resources with just a few clicks. AWS Marketplace also simplifies billing for customers by delivering a single invoice detailing business software and AWS resource usage on a monthly basis.

With AWS Marketplace, we can help drive operational efficiencies and reduce costs in these ways:

  • Easily bring in new solutions to solve increasingly complex issues and gain quick insight into the huge amounts of data users handle.
  • Healthcare data will be more actionable. We offer pay-as-you-go solutions that make it considerably easier and more cost-effective to ingest, store, analyze, and disseminate data.
  • Deploy healthcare and life sciences software with 1-Click ease — then evaluate and deploy it in minutes. Users can now speed up their historically slow cycles in software procurement and implementation.
  • Pay only for what’s consumed — and manage software costs on your AWS bill.
  • In addition to the already secure AWS Cloud, AWS Marketplace offers industry-leading solutions to help you secure operating systems, platforms, applications and data that can integrate with existing controls in your AWS Cloud and hybrid environment.

Click here to see the current list of vendors in our new Healthcare & Life Sciences category.

Come on In
If you are a healthcare ISV and would like to list and sell your products on AWS, visit our Sell in AWS Marketplace page.

– Wilson To and Luis Daniel Soto

Introducing Allgress Regulatory Product Mapping

by Ana Visneski | in AWS Marketplace

This guest post is brought to you by Benjamin Andrew  and Tim Sandage.

-Ana


It’s increasingly difficult for organizations within regulated industries (such as government, financial, and healthcare) to demonstrate compliance with security requirements. The burden to comply is compounded by the use of legacy security frameworks and a lack of understanding of which services enable appropriate threat mitigations. It is further complicated by security responsibilities in relation to cloud computing, Internet of Things (IoT), and mobile applications.

Allgress helps minimize this burden by helping enterprise security and risk professionals assess, understand, and manage corporate risk. Allgress and AWS are working to offer a way to establish clear mappings from AWS services and 3rd party software solutions in AWS Marketplace to common security frameworks. The result for regulated customers within the AWS Cloud will be minimized business impact, increased security effectiveness, and reduced risk.

The name of this new solution is the Allgress Marketplace Regulatory Product Mapping Tool (RPM). Allgress designed this tool specifically for customers deployed within AWS who want to reduce the complexity, increase the speed, and shorten the time frame of achieving compliance, including compliance with legislation such as Sarbanes-Oxley, HIPAA, and FISMA. Allgress RPM is designed to achieve these results by letting customers quickly map their regulatory security frameworks (such as ISO, NIST, and PCI-DSS controls) to AWS services, solutions in AWS Marketplace, and APN technology partner solutions. The tool even guides customers through the compliance process, providing focused content every step of the way.

Here are the four simple steps to get a regulatory assessment:

  1. If you’re a new user, you can log in to the tool as a guest; registration is not required. If you’re an existing user, you can log in using your username and password to return to a saved assessment:


  2. Once you’ve logged in, you can select your Regulatory Security Framework (e.g., FedRAMP or PCI). After you’ve selected your framework, you have two explorer options: Coverage Overview and Product Explorer (explained in detail below).

The Coverage Overview includes three use cases: AWS customer controls review, regulatory requirement mapping, and gap-assessment planning. The Product Explorer tool provides detailed control coverage for the AWS services selected and/or all available AWS Marketplace vendor solutions.

  3. You can select Coverage Overview to review AWS Inherited, Shared, Operational, and AWS Marketplace control mappings.

Coverage overview – This view breaks down security frameworks into four categories:

  1. AWS Inherited Controls — Controls that you fully inherit from AWS.
  2. AWS Shared Controls — AWS provides the control implementation for the infrastructure, and you provide your own control implementation within your use of AWS services (e.g., fault tolerance).
  3. Operational Controls – These are procedural controls that AWS or an AWS consulting partner can help you implement within your AWS environment.
  4. AWS Marketplace Controls — These are technical controls that can be implemented (partially or fully) with an AWS technology partner and vendors from AWS Marketplace.

Note: Features in this tool include the ability to zoom into the controls using your mouse. With point-and-click ease, you can zoom in at the domain (Control Family) level, or into individual controls:


  4. An additional capability within RPM is Product Explorer, which identifies solutions in AWS Marketplace that can partially or fully implement the requirements of a security control. The screen below illustrates the 327 controls for FedRAMP Moderate, as well as several solutions available from software vendors on AWS Marketplace that can help remediate the control requirements.


The Product Explorer page has several capabilities to highlight both service and control association:

  1. At the top of the page you can remove controls that do not currently have associated mapping.
  2. You can also zoom into Domains, Sub-domains, and Controls.
  3. You can select single products or multiple products with quick view options.
  4. You can select single or multiple products, and then select Product Cart to review detailed control implementations.


Product Explorer Note: Non-associated controls have been removed in order to clearly see potential product mappings.


Product Explorer — Zoom function for a specific control (e.g. AU-11) identifies all potential AWS services and associated products which can be leveraged for control implementation.


Product Explorer – Single product control coverage view. For a detailed view, you can click on the Product Cart and view detailed implementation notes.


Product Explorer – You can also add multiple services and solutions into a product cart and then connect to Marketplace for each software vendor solution available through our public managed software catalog.


More about Allgress RPM
The AWS services, consulting partners, and technology vendors that Allgress RPM is designed to map have all demonstrated technical proficiency as security solutions and can address security controls across multiple regulated industries. At launch, RPM includes 10 vendors, all of whom have deep experience working with regulated customers to deliver mission-critical workloads and applications on AWS. You can reach Allgress here.

View more Security solutions in AWS Marketplace. Please note that many of the products available in AWS Marketplace offer free trials. You can request free credits here: AWS Marketplace – Get Infrastructure Credits.

We wish to thank our launch partners, who worked with AWS and the Allgress team to map their products and services: Allgress, Alert Logic, Barracuda, Trend Micro, Splunk, Palo Alto Networks, OKTA, CloudCheckr, Evident.io and CIS (Center for Internet Security).


-Benjamin Andrew and Tim Sandage.

Amazon Chime – Unified Communications Service

by Jeff Barr | in Amazon Chime, Announcements, Launch

If your working day is anything like mine, you probably spend a lot of time communicating with your colleagues. Every day, I connect with and collaborate with people all over the world. Some of them are sitting in their office in front of their PCs; others are on the go and using their phones to connect and to communicate. We chat informally, we meet on regular schedules, we exchange documents and images, and we share our screens.

For many years, most “business productivity” tools have been anything but. Many of these tools support just one or two modes of communication or styles of collaboration and can end up getting in the way. Licensing and training costs and a lack of support for collaboration that crosses organizational boundaries don’t make things any better.

Time to change that…

Introducing Amazon Chime
Today I would like to tell you about Amazon Chime. This is a new unified communication service that is designed to make meetings easier and more efficient than ever before. Amazon Chime lets you start high-quality audio and video meetings with a click. Once you are in the meeting you can chat, share content, and share screens in a smooth experience that spans PC and Mac desktops, iOS devices, and Android devices.

Because Amazon Chime is a fully managed service, there’s no upfront investment, software deployment, or ongoing maintenance. Users simply download the Amazon Chime app and start using it within minutes.

Let’s take a quick look at some of the most important features of Amazon Chime:

On-Time Meetings – You no longer need to dial in to meetings. There’s no need to enter long meeting identifiers or equally long passwords. Instead, Amazon Chime will alert you when the meeting starts, and allow you to join (or to indicate that you are running behind) with a single click or tap.

Meeting Roster – Instead of endless “who just joined” queries, Amazon Chime provides a visual roster of attendees, late-comers, and those who skipped out entirely. It also provides broadly accessible mute controls in case another participant is typing or their dog is barking.

Broad Access – Amazon Chime was built for mobile use, with apps that run on PCs and mobile devices. Even better, Amazon Chime allows you to join a meeting from one device and then seamlessly switch to another.

Easy Sharing – Collaborating is a core competency for Amazon Chime. Meeting participants can share their screens as desired, with no need to ask for permission. Within Amazon Chime‘s chat rooms, participants can work together and create a shared history that is stored in encrypted fashion.

Clear Calls – Amazon Chime delivers high quality, noise-cancelled audio and crisp, clear HD video that works across all user devices and with most conference room video systems.

Amazon Chime in Action
Let’s run through the most important aspects of Amazon Chime, starting with the main screen:

I can click on Meetings and then schedule a meeting in my Outlook calendar or my Google calendar:

Outlook scheduling makes use of the Amazon Chime add-in; I was prompted to install it when I clicked on Schedule with Outlook. I simply set up an invite as usual:

Amazon Chime lets me know when the meeting is starting:

I simply click on Answer and choose my audio option:

And my meeting is under way. I can invite others, share my screen or any desired window, use my webcam, and so forth:

I have many options that I can change while the meeting is underway:

Amazon Chime also includes persistent, 1 to 1 chat and chat rooms. Here’s how I create a new chat room:

After I create it I can invite my fellow bloggers and we can have a long-term, ongoing conversation.

As usual, I have only shown you a few of the features! To get started, visit the Amazon Chime site and try it out for yourself.

Amazon Chime Editions
Amazon Chime is available in three editions:

  • Basic Edition is available at no charge. It allows you to attend meetings, make 1 to 1 video calls, and to use all Amazon Chime chat features.
  • Plus Edition costs $2.50 per user per month. It allows user management of entire email domains, supports 1 GB of message retention per user, and connects to Active Directory.
  • Pro Edition costs $15.00 per user per month. It allows hosting of meetings of up to 100 people.

Amazon Chime Pro is free to try for 30 days, with no credit card required. After 30 days, you can continue to use Amazon Chime Basic for free, for as long as you’d like, or you can purchase Amazon Chime Pro for $15.00 per user per month. There is no upfront commitment, and you can change or cancel your subscription at any time.

Available Now
Amazon Chime is available now and you can sign up to start using it today!

Jeff;

 

Amazon EBS Update – New Elastic Volumes Change Everything

by Jeff Barr | in Amazon EC2, Amazon Elastic Block Store, Launch

It is always interesting to speak with our customers and to learn how the dynamic nature of their business and their applications drives their block storage requirements. These needs change over time, creating the need to modify existing volumes to add capacity or to change performance characteristics. Today’s 24×7 operating models leave no room for downtime; as a result, customers want to make changes without going offline or otherwise impacting operations.

Over the years, we have introduced new EBS offerings that support an ever-widening set of use cases. For example, we introduced two new volume types in 2016 – Throughput Optimized HDD (st1) and Cold HDD (sc1). Our customers want to use these volume types as storage tiers, modifying the volume type to save money or to change the performance characteristics, without impacting operations.

In other words, our customers want their EBS volumes to be even more elastic!

New Elastic Volumes
Today we are launching a new EBS feature we call Elastic Volumes and making it available for all current-generation EBS volumes attached to current-generation EC2 instances. You can now increase volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect.

This new feature will greatly simplify (or even eliminate) many of your planning, tuning, and space management chores. Instead of a traditional provisioning cycle that can take weeks or months, you can make changes to your storage infrastructure instantaneously, with a simple API call.

You can address the following scenarios (and many more that you can come up with on your own) using Elastic Volumes:

Changing Workloads – You set up your infrastructure in a rush and used the General Purpose SSD volumes for your block storage. After gaining some experience you figure out that the Throughput Optimized volumes are a better fit, and simply change the type of the volume.

Spiking Demand – You are running a relational database on a Provisioned IOPS volume that is set to handle a moderate amount of traffic during the month, with a 10x spike in traffic  during the final three days of each month due to month-end processing.  You can use Elastic Volumes to dial up the provisioning in order to handle the spike, and then dial it down afterward.

Increasing Storage – You provisioned a volume for 100 GiB and an alarm goes off indicating that it is now at 90% of capacity. You increase the size of the volume and expand the file system to match, with no downtime, and in a fully automated fashion.

Using Elastic Volumes
You can manage all of this from the AWS Management Console, via API calls, or from the AWS Command Line Interface (CLI).
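For example, here is a rough sketch of a volume modification made from the CLI (the volume ID is a placeholder):

# Convert a volume to a 400 GiB Provisioned IOPS (io1) volume with 20,000 IOPS
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 \
    --volume-type io1 --size 400 --iops 20000

# Follow the modification as it moves through the modifying and optimizing states
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0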

To make a change from the Console, simply select the volume and choose Modify Volume from the Action menu:

Then make any desired changes to the volume type, size, and Provisioned IOPS (if appropriate). Here I am changing my 75 GiB General Purpose (gp2) volume into a 400 GiB Provisioned IOPS volume, with 20,000 IOPS:

When I click on Modify I confirm my intent, and click on Yes:

The volume’s state reflects the progress of the operation (modifying, optimizing, or complete):

The next step is to expand the file system so that it can take advantage of the additional storage space. To learn how to do that, read Expanding the Storage Space of an EBS Volume on Linux or Expanding the Storage Space of an EBS Volume on Windows. You can expand the file system as soon as the state transitions to optimizing (typically a few seconds after you start the operation). The new configuration is in effect at this point, although optimization may continue for up to 24 hours. Billing for the new configuration begins as soon as the state turns to optimizing (there’s no charge for the modification itself).
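On Linux, the expansion is typically a command or two; this is a sketch, and the device, partition, and mount point are assumptions that depend on your layout and file system:

# Grow the partition to use the newly added space (partition 1 of /dev/xvdf here)
sudo growpart /dev/xvdf 1

# Then grow the file system - ext2/3/4:
sudo resize2fs /dev/xvdf1
# ...or XFS (pass the mount point):
sudo xfs_growfs /data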

Automatic Elastic Volume Operations
While manual changes are fine, there’s plenty of potential for automation. Here are a couple of ideas:

Right-Sizing – Use a CloudWatch alarm to watch for a volume that is running at or near its IOPS limit. Initiate a workflow and approval process that could provision additional IOPS or change the type of the volume. Or, publish a “free space” metric to CloudWatch and use a similar approval process to resize the volume and the filesystem.

Cost Reduction – Use metrics or schedules to reduce IOPS or to change the type of a volume. Last week I spoke with a security auditor at a university. He collects tens of gigabytes of log files from all over campus each day and retains them for 60 days. Most of the files are never read, and those that are can be scanned at a leisurely pace. They could address this use case by creating a fresh General Purpose volume each day, writing the logs to it at high speed, and then changing the type to Throughput Optimized.

As I mentioned earlier, you need to resize the file system in order to be able to access the newly provisioned space on the volume. In order to show you how to automate this process, my colleagues built a sample that makes use of CloudWatch Events, AWS Lambda, EC2 Systems Manager, and some PowerShell scripting. The rule matches the modifyVolume event emitted by EBS and invokes the logEvents Lambda function:

The function locates the volume, confirms that it is attached to an instance that is managed by EC2 Systems Manager, and then adds a “maintenance tag” to the instance:

from __future__ import print_function
import boto3

ec2 = boto3.client('ec2')
ssm = boto3.client('ssm')

# Tag applied to instances whose volumes were just modified
tags = [{'Key': 'maintenance', 'Value': 'true'}]

def lambda_handler(event, context):
    # The event's resource ARN ends with .../volume-id
    volume = [event['resources'][0].split('/')[1]]
    attach = ec2.describe_volumes(VolumeIds=volume)['Volumes'][0]['Attachments']
    if attach:
        instance = attach[0]['InstanceId']
        # Confirm that the instance is managed by EC2 Systems Manager
        filters = [{'key': 'InstanceIds', 'valueSet': [instance]}]
        info = ssm.describe_instance_information(
            InstanceInformationFilterList=filters)['InstanceInformationList']
        if info:
            # Tag the instance so the resize script can find it later
            ec2.create_tags(Resources=[instance], Tags=tags)
            print('{} instance {} has been tagged for maintenance'.format(
                info[0]['PlatformName'], instance))

Later (either manually or on a schedule), EC2 Systems Manager is used to run a PowerShell script on all of the instances that are tagged for maintenance. The script looks at the instance’s disks and partitions, and resizes all of the drives (filesystems) to the maximum allowable size. Here’s an excerpt:

foreach ($DriveLetter in $DriveLetters) {
    $Error.Clear()
    # Determine the largest size supported by the (newly expanded) underlying volume
    $SizeMax = (Get-PartitionSupportedSize -DriveLetter $DriveLetter).SizeMax
    # Grow the partition (and file system) to use all of the available space
    Resize-Partition -DriveLetter $DriveLetter -Size $SizeMax
}

Available Today
The Elastic Volumes feature is available today and you can start using it right now!

To learn about some important special cases and a few limitations on instance types, read Considerations When Modifying EBS Volumes.

Jeff;

PS – If you would like to design and build cool, game-changing storage services like EBS, take a look at our EBS Jobs page!

 

AWS Direct Connect Update – Link Aggregation Groups, Bundles, and re:Invent Recap

by Jeff Barr | in Amazon VPC, AWS Direct Connect

AWS Direct Connect helps our large-scale customers to create private, dedicated network connections to their office, data center, or colocation facility. Our customers create 1 Gbps and 10 Gbps connections in order to reduce their network costs, increase data transfer throughput, and to get a more consistent network experience than is possible with an Internet-based connection.

Today I would like to tell you about a new Link Aggregation feature for Direct Connect. I’d also like to tell you about our new Direct Connect Bundles and to tell you more about how we used Direct Connect to provide a first-class customer experience at AWS re:Invent 2016.

Link Aggregation Groups
Some of our customers would like to set up multiple connections (generally known as ports) between their location and one of the 46 Direct Connect locations. Some of them would like to create a highly available link that is resilient in the face of network issues outside of AWS; others simply need more data transfer throughput.

In order to support this important customer use case, you can now purchase up to 4 ports and treat them as a single managed connection, which we call a Link Aggregation Group or LAG. After you have set this up, traffic is load-balanced across the ports at the level of individual packet flows. All of the ports are active simultaneously, and are represented by a single BGP session. Traffic across the group is managed via Dynamic LACP (Link Aggregation Control Protocol – or ISO/IEC/IEEE 8802-1AX:2016). When you create your group, you also specify the minimum number of ports that must be active in order for the connection to be activated.

You can order a new group with multiple ports and you can aggregate existing ports into a new group. Either way, all of the ports must have the same speed (1 Gbps or 10 Gbps).

All of the ports in a group will connect to the same device on the AWS side. You can add additional ports to an existing group as long as there’s room on the device (this information is now available in the Direct Connect Console). If you need to expand an existing group and the device has no open ports, you can simply order a new group and migrate your connections.
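You can also drive this from the AWS CLI; here is a rough sketch (the location code is a placeholder, and the option names are assumptions worth checking against the Direct Connect CLI reference for your CLI version):

# Create a LAG with two new 10 Gbps connections at a Direct Connect location
aws directconnect create-lag --lag-name my-lag --location EqDC2 \
    --number-of-connections 2 --connections-bandwidth 10Gbps

# List your LAGs and their member connections
aws directconnect describe-lags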

Here’s how you can make use of link aggregation from the Console. First, creating a new LAG from scratch:

And second, creating a LAG from existing connections:


Link Aggregation Groups are now available in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions and you can create them today. We expect to make them available in the remaining regions by the end of this month.

Direct Connect Bundles
We announced some powerful new Direct Connect Bundles at re:Invent 2016. Each bundle is an advanced, hybrid reference architecture designed to reduce complexity and to increase performance. Here are the new bundles:

Level 3 Communications Powers Amazon WorkSpaces – Connects enterprise applications, data, user workspaces, and end-point devices to offer reliable performance and a better end-user experience:

SaaS Architecture enhanced by AT&T NetBond – Enhances quality and user experience for applications migrated to the AWS Cloud:

Aviatrix User Access Integrated with Megaport DX – Supports encrypted connectivity between AWS Cloud Regions, between enterprise data centers and AWS, and VPN access to AWS:

Riverbed Hybrid SDN/NFV Architecture over Verizon Secure Cloud Interconnect – Allows enterprise customers to provide secure, optimized access to AWS services in a hybrid network environment:

Direct Connect at re:Invent 2016
In order to provide a top-notch experience for attendees and partners at re:Invent, we worked with Level 3 to set up a highly available and fully redundant set of connections. This network was used to support breakout sessions, certification exams, the hands-on labs, the keynotes (including the live stream to over 25,000 viewers in 122 countries), the hackathon, bootcamps, and workshops. The re:Invent network used four 10 Gbps connections, two each to US West (Oregon) and US East (Northern Virginia):

It supported all of the re:Invent venues:

Here are some video resources that will help you to learn more about how we did this, and how you can do it yourself:

Jeff;

Amazon Rekognition Update – Estimated Age Range for Faces

by Jeff Barr | in Amazon Rekognition, Launch

Amazon Rekognition is one of our artificial intelligence services. In addition to detecting objects, scenes, and faces in images, Rekognition can also search and compare faces. Behind the scenes, Rekognition uses deep neural network models to analyze billions of images daily (read Amazon Rekognition – Image Detection and Recognition Powered by Deep Learning to learn more).

Amazon Rekognition returns an array of attributes for each face that it locates in an image. Today we are adding a new attribute, an estimated age range. This value is expressed in years and is returned as a pair of integers. Age ranges can overlap: the face of a 5-year-old might have an estimated range of 4 to 6, while the face of a 6-year-old might have an estimated range of 4 to 8. You can use this new attribute to power public safety applications, collect demographics, or assemble a set of photos that span a desired time frame.
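Here is a quick sketch of reading the new attribute with the AWS SDK for Python (the bucket and object names are placeholders):

import boto3

rekognition = boto3.client('rekognition')

# Request the full set of facial attributes, which now includes AgeRange
response = rekognition.detect_faces(
    Image={'S3Object': {'Bucket': 'my-photo-bucket', 'Name': 'jeff.jpg'}},
    Attributes=['ALL'])

for face in response['FaceDetails']:
    age = face['AgeRange']
    print('Estimated age range: {} to {} years'.format(age['Low'], age['High']))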

In order to have some fun with this new feature (I am writing this post on a Friday afternoon), I dug into my photo archives and asked Rekognition to estimate my age. Here are the results.

Let’s start at the beginning! I was probably about 2 years old here:

This picture was taken at my grandmother’s house in the spring of 1966:

I was 6 years old; Rekognition estimated that I was between 6 and 13:

My first official Amazon PR photo from 2003 when I was 43:

That’s a range of 17 years and my actual age was right in the middle.

And my most recent (late 2015) PR photo, age 55:

Again a fairly wide range, and I’m right in the middle of it! In general, the actual age for each face will fall somewhere within the indicated range, but you should not count on it falling precisely in the middle.

This feature is available now and you can start using it today.

Jeff;

 

More Amazon Wind and Solar Farms are Live!

by Ana Visneski | in Guest Post, Science

We’re kicking off the New Year with some great news on the AWS sustainability front – three additional wind and solar projects went live at the end of 2016 and are now delivering energy onto the electric grid that powers AWS data centers!

As a quick recap, at re:Invent 2016, Vice President and Distinguished Engineer James Hamilton announced on the main stage that we had exceeded our goal of being powered by 40% renewable energy by the end of 2016, and thanks to the commitment of the AWS team and our great energy partners, we set a new goal to be at 50% by the end of 2017.

In addition to Amazon Wind Farm Fowler Ridge in Benton County, Indiana, which went into production in early 2016, three new projects came online in December, including:

Amazon Wind Farm US East – We first announced the partnership with Avangrid Renewables (then called Iberdrola Renewables) for Amazon Wind Farm US East in July of last year to begin construction of the wind farm. It is the first commercial-scale wind farm in North Carolina and one of the first in the southeastern United States, spanning Pasquotank and Perquimans counties in North Carolina.

Amazon Solar Farm US East – AWS teamed up with Community Energy in June 2015 to construct the Amazon Solar Farm US East in Accomack County, Virginia, which will generate approximately 170,000 megawatt hours of solar power annually. We have five additional solar farms under construction in Virginia and expect them to go online in 2017.

Amazon Wind Farm US Central – In November 2015, we partnered with EDP Renewables to construct the 100 megawatt wind farm in Paulding County, Ohio, which will generate approximately 320,000 megawatt hours of wind energy annually. It will be followed by Amazon Wind Farm US Central 2 (also in Ohio), which will launch in 2017.

So far AWS has announced a total of 10 renewable energy projects and these wind and solar farms are expected to produce 2.6 million megawatt hours of energy — enough energy to power over 240,000 U.S. homes annually!

To follow our march towards our long-term goal of 100% renewable energy, be sure to check out the AWS & Sustainability web page.

Beyond the sustainability initiatives focused on powering the AWS global infrastructure, Amazon is investing in several other clean energy activities across the company. Some of our other projects include Amazon Wind Farm Texas – a 253MW wind farm in Scurry County, Texas — green rooftops, and the District Energy Project that uses recycled energy for heating Amazon offices in Seattle. For more information on Amazon’s sustainability initiatives, visit www.amazon.com/sustainability.

AWS Online Tech Talks – February 2017

by Tara Walker | in Webinars

The New Year is underway, so there is no better time to dive into learning more about the latest AWS services. Each month, we have a series of webinars targeting best practices and new service features in the AWS Cloud.

 

February Online Tech Talks (formerly known as Monthly Webinar Series)

I am excited to share the webinar schedule for the month of February. Remember, all webinars noted are free, but they may fill up quickly, so be sure to register ahead of time. Webinars are typically one hour in length, and the scheduled times are in the Pacific Time (PT) time zone.

 

Webinars featured this month are as follows:

Tuesday, February 14

Mobile

10:30 AM – 11:30 AM: Test your Android App with Espresso and AWS Device Farm

 

Wednesday, February 15

Big Data

9:00 AM – 10:00 AM: Amazon Elasticsearch Service with Elasticsearch 5 and Kibana 5

Mobile

12:00 Noon – 1:00 PM: Deep Dive on AWS Mobile Hub for Enterprise Mobile Applications

 

Thursday, February 16

Security

9:00 AM – 10:00 AM: DNS DDoS mitigation using Amazon Route 53 and AWS Shield

 

Tuesday, February 21

Databases

09:00 AM – 10:00 AM: Best Practices for NoSQL Workloads on Amazon EC2 and Amazon EBS

10:30 AM – 11:30 AM: Consolidate MySQL Shards Into Amazon Aurora Using AWS Database Migration Service

IoT

12:00 Noon – 1:00 PM: Getting Started with AWS IoT

 

Wednesday, February 22

IoT

10:30 AM – 11:30 AM: Best Practices with IoT Security

Databases

12:00 Noon – 1:00 PM: Migrate from SQL Server or Oracle into Amazon Aurora using AWS Database Migration Service

 

Thursday, February 23

Enterprise / Training

8:00 AM – 9:00 AM: How to Prepare for AWS Certification and Advance your Career

Storage

10:30 AM – 11:30 AM: Deep Dive on Elastic File System

Databases

12:00 Noon – 1:00 PM: Optimize MySQL Workloads with Amazon Elastic Block Store

 

Friday, February 24

Big Data

9:00 AM – 10:00 AM: Deep Dive of Flink & Spark on Amazon EMR

10:30 AM – 11:30 AM: Deep Dive on Amazon Redshift

 

The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These technical sessions are led by AWS solutions architects and engineers and feature live demonstrations & customer examples. You can check out the AWS online series here and the AWS on-demand webinar series on the AWS YouTube channel.

Tara W.