AWS Blog

IPv6 Support Update – CloudFront, WAF, and S3 Transfer Acceleration

by Jeff Barr | in Amazon CloudFront, Amazon S3, AWS Web Application Firewall

As a follow-up to our recent announcement of IPv6 support for Amazon S3, I am happy to be able to tell you that IPv6 support is now available for Amazon CloudFront, Amazon S3 Transfer Acceleration, and AWS WAF and that all 50+ CloudFront edge locations now support IPv6. We are enabling IPv6 in a phased rollout that starts today and will extend across all of the networks over the next few weeks.

CloudFront IPv6 Support
You can now enable IPv6 support for individual Amazon CloudFront distributions. Viewers and networks that connect to a CloudFront edge location over IPv6 will automatically be served content over IPv6. Those that connect over IPv4 will continue to work as before. Connections to your origin servers will be made using IPv4.

Newly created distributions are automatically enabled for IPv6; you can modify an existing distribution by checking Enable IPv6 in the console or setting it via the CloudFront API.
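Here's a minimal sketch of the API route, using the AWS SDK for Python (Boto3); the distribution ID is a placeholder, and the pattern (fetch the configuration and its ETag, flip IsIPV6Enabled, write it back) is the standard read-modify-write cycle for CloudFront updates:

import boto3

cloudfront = boto3.client('cloudfront')
dist_id = 'EDFDVBD6EXAMPLE'  # placeholder distribution ID

# Fetch the current configuration along with its ETag (required for updates)
response = cloudfront.get_distribution_config(Id=dist_id)
config = response['DistributionConfig']
config['IsIPV6Enabled'] = True

# Write the modified configuration back, passing the ETag as IfMatch
cloudfront.update_distribution(DistributionConfig=config, Id=dist_id, IfMatch=response['ETag'])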

Here are a couple of important things to know about this new feature:

  • Alias Records – After you enable IPv6 support for a distribution, the DNS entry for the distribution will be updated to include an AAAA record. If you are using Amazon Route 53 and an alias record to map all or part of your domain to the distribution, you will need to add an AAAA alias to the domain (see the sketch after this list).
  • Log Files – If you have enabled CloudFront Access Logs, IPv6 addresses will start to show up in the c-ip field; make sure that your log processing system knows what to do with them.
  • Trusted Signers – If you make use of Trusted Signers in conjunction with an IP address whitelist, we strongly recommend using an IPv4-only distribution for Trusted Signer URLs that have an IP whitelist and a separate IPv4/IPv6 distribution for the actual content. This model sidesteps an issue that would arise if the signing request arrived over IPv4 and was signed as such, only to have the request for the content arrive via a different, IPv6 address that is not on the whitelist.
  • CloudFormation – CloudFormation support is in the works. With today’s launch, distributions that are created from a CloudFormation template will not be enabled for IPv6. If you update an existing stack, the setting will remain as-is for any distributions referenced in the stack.
  • AWS WAF – If you use AWS WAF in conjunction with CloudFront, be sure to update your WebACLs and your IP rulesets as appropriate in order to whitelist or blacklist IPv6 addresses.
  • Forwarded Headers – When you enable IPv6 for a distribution, the X-Forwarded-For header that is presented to the origin will contain an IPv6 address. You need to make sure that the origin is able to process headers of this form.
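Here's the Route 53 sketch promised in the Alias Records item, again in Boto3. The hosted zone ID, domain name, and distribution domain name are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID that Route 53 uses for all CloudFront alias targets:

import boto3

route53 = boto3.client('route53')

route53.change_resource_record_sets(
    HostedZoneId='Z1234567EXAMPLE',  # placeholder zone ID for example.com
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'www.example.com.',
                'Type': 'AAAA',
                'AliasTarget': {
                    'HostedZoneId': 'Z2FDTNDATAQYW2',  # fixed zone ID for CloudFront aliases
                    'DNSName': 'd111111abcdef8.cloudfront.net.',
                    'EvaluateTargetHealth': False,
                },
            },
        }],
    },
)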

To learn more, read IPv6 Support for Amazon CloudFront.

AWS WAF IPv6 Support
AWS WAF helps you to protect your applications from application-layer attacks (read New – AWS WAF to learn more).

AWS WAF can now inspect requests that arrive via IPv4 or IPv6 addresses. You can create web ACLs that match IPv6 addresses, as described in Working with IP Match Conditions.
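As a sketch of what this looks like in code (Boto3; the IP set ID and the CIDR range are placeholders), every change to a WAF IP set is bracketed by a change token:

import boto3

waf = boto3.client('waf')

# Every mutating WAF call requires a fresh change token
token = waf.get_change_token()['ChangeToken']

# Insert an example IPv6 CIDR range into an existing IP match set
waf.update_ip_set(
    IPSetId='example-ip-set-id',  # placeholder
    ChangeToken=token,
    Updates=[{
        'Action': 'INSERT',
        'IPSetDescriptor': {'Type': 'IPV6', 'Value': '2001:db8::/32'},
    }],
)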

All existing WAF features will work with IPv6 and there will be no visible change in performance. IPv6 addresses will appear in the Sampled Requests collected and displayed by WAF.

S3 Transfer Acceleration IPv6 Support
This important new S3 feature (read AWS Storage Update – Amazon S3 Transfer Acceleration + Larger Snowballs in More Regions for more info) now has IPv6 support. To use it, simply switch to the new dual-stack endpoint for your uploads by changing:

https://BUCKET.s3-accelerate.amazonaws.com

to

https://BUCKET.s3-accelerate.dualstack.amazonaws.com

Here’s some code that uses the AWS SDK for Java to create a client object and enable dual-stack transfer:

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;

AmazonS3Client s3 = new AmazonS3Client();
s3.setS3ClientOptions(S3ClientOptions.builder().enableDualstack().setAccelerateModeEnabled(true).build());

Most applications and network stacks will prefer IPv6 automatically, and no further configuration should be required. You should plan to take a look at the IAM policies for your buckets in order to make sure that they will work as expected in conjunction with IPv6 addresses.
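For example, a bucket policy that restricts access with the aws:SourceIp condition key will stop matching clients that arrive over IPv6 unless the list includes IPv6 ranges. Here's an illustrative fragment (the bucket name and CIDR ranges are placeholders) that covers both address families:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowFromCorpNetwork",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::BUCKET/*",
    "Condition": {
      "IpAddress": {
        "aws:SourceIp": ["192.0.2.0/24", "2001:db8::/32"]
      }
    }
  }]
}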

To learn more, read about Making Requests to Amazon S3 over IPv6.

Don’t Forget to Test
As a reminder, if IPv6 connectivity to any AWS region is limited or non-existent, IPv4 will be used instead. Also, as I noted in my earlier post, a client system can be configured to support IPv6 yet be connected to a network that is not configured to route IPv6 packets to the Internet. Therefore, we recommend some application-level testing of end-to-end connectivity before you switch to IPv6.

Jeff;

 

New – CloudWatch Plugin for collectd

by Jeff Barr | in Amazon CloudWatch, Amazon EC2

You have had the power to store your own business, application, and system metrics in Amazon CloudWatch for quite some time (see New – Custom Metrics for Amazon CloudWatch to learn more). As I wrote way back in 2011 when I introduced this feature, “You can view graphs, set alarms, and initiate automated actions based on these metrics, just as you can for the metrics that CloudWatch already stores for your AWS resources.”

Today we are simplifying the process of collecting statistics from your systems and getting them into CloudWatch with the introduction of a new CloudWatch plugin for collectd. By combining collectd’s ability to gather many different types of statistics with the CloudWatch features for storage, display, and alarming, you can become better informed about the state and performance of your EC2 instances, your on-premises hardware, and the applications running on them. The plugin is being released as an open source project and we are looking forward to your pull requests.

The collectd daemon is written in C for performance and portability. It supports over one hundred plugins, allowing you to collect statistics on Apache and Nginx web server performance, memory usage, uptime, and much more.

Installation and Configuration
I installed and configured collectd and the new plugin on an EC2 instance in order to see it in action.

To get started I created an IAM Policy with permission to write metrics data to CloudWatch:
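A minimal policy for this purpose needs little more than the PutMetricData action; since PutMetricData does not support resource-level permissions, the Resource element is a wildcard:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "cloudwatch:PutMetricData",
    "Resource": "*"
  }]
}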

Then I created an IAM Role that allows EC2 (and hence the collectd code running on my instance) to use my Policy:
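The Role carries the policy above, and its trust relationship simply allows EC2 to assume the Role, along these lines:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}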

If I had been planning to use the plugin to collect statistics from my on-premises servers, or if my EC2 instances had already been running, I could have skipped these steps and created an IAM user with the appropriate permissions instead. Had I done that, I would have had to put the user’s credentials on the servers or instances.

With the Policy and the Role in place, I launched an EC2 instance and selected the Role.

I logged in and installed collectd:

$ sudo yum -y install collectd

Then I fetched the plugin and the install script, made the script executable, and ran it:

$ wget https://raw.githubusercontent.com/awslabs/collectd-cloudwatch/master/src/setup.py
$ chmod a+x setup.py
$ sudo ./setup.py

I answered a few questions and the setup ran without incident, starting up collectd after configuring it:

Installing dependencies ... OK
Installing python dependencies ... OK
Copying plugin tar file ... OK
Extracting plugin ... OK
Moving to collectd plugins directory ... OK
Copying CloudWatch plugin include file ... OK

Choose AWS region for published metrics:
  1. Automatic [us-east-1]
  2. Custom
Enter choice [1]: 1

Choose hostname for published metrics:
  1. EC2 instance id [i-057d2ed2260c3e251]
  2. Custom
Enter choice [1]: 1

Choose authentication method:
  1. IAM Role [Collectd_PutMetricData]
  2. IAM User
Enter choice [1]: 1

Choose how to install CloudWatch plugin in collectd:
  1. Do not modify existing collectd configuration
  2. Add plugin to the existing configuration
Enter choice [2]: 2
Plugin configuration written successfully.
Stopping collectd process ... NOT OK
Starting collectd process ... OK
$

With collectd running and the plugin installed and configured, the next step was to decide on the statistics of interest and configure the plugin to publish them to CloudWatch (note that there is a per-metric cost so this is an important step).

The file /opt/collectd-plugins/cloudwatch/config/blocked_metrics contains a list of metrics that have been collected but not published to CloudWatch:

$ cat /opt/collectd-plugins/cloudwatch/config/blocked_metrics
# This file is automatically generated - do not modify this file.
# Use this file to find metrics to be added to the whitelist file instead.
cpu-0-cpu-user
cpu-0-cpu-nice
cpu-0-cpu-system
cpu-0-cpu-idle
cpu-0-cpu-wait
cpu-0-cpu-interrupt
cpu-0-cpu-softirq
cpu-0-cpu-steal
interface-lo-if_octets-
interface-lo-if_packets-
interface-lo-if_errors-
interface-eth0-if_octets-
interface-eth0-if_packets-
interface-eth0-if_errors-
memory--memory-used
load--load-
memory--memory-buffered
memory--memory-cached

I was interested in memory consumption so I added one line to /opt/collectd-plugins/cloudwatch/config/whitelist.conf:

memory--memory-.*

The collectd configuration file (/etc/collectd.conf) contains additional settings for collectd and the plugins. I did not need to make any changes to it.

I restarted collectd so that it would pick up the change:

$ sudo service collectd restart

I exercised my instance a bit in order to consume some memory, and then opened up the CloudWatch Console to locate and display my metrics:

This screenshot includes a preview of an upcoming enhancement to the CloudWatch Console; don’t worry if yours doesn’t look as cool (stay tuned for more information on this).
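You can also confirm that the metrics are arriving without opening the console. Here's a quick Boto3 sketch; it assumes the plugin's default namespace of collectd (check your plugin configuration if you have customized it):

import boto3

cloudwatch = boto3.client('cloudwatch')

# List the metrics that the plugin has published so far
for metric in cloudwatch.list_metrics(Namespace='collectd')['Metrics']:
    print(metric['MetricName'], metric['Dimensions'])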

If I had been monitoring a production instance, I could have installed one or more of the collectd plugins. Here’s a list of what’s available on the Amazon Linux AMI:

$ sudo yum list | grep collectd
collectd.x86_64                        5.4.1-1.11.amzn1               @amzn-main
collectd-amqp.x86_64                   5.4.1-1.11.amzn1               amzn-main
collectd-apache.x86_64                 5.4.1-1.11.amzn1               amzn-main
collectd-bind.x86_64                   5.4.1-1.11.amzn1               amzn-main
collectd-curl.x86_64                   5.4.1-1.11.amzn1               amzn-main
collectd-curl_xml.x86_64               5.4.1-1.11.amzn1               amzn-main
collectd-dbi.x86_64                    5.4.1-1.11.amzn1               amzn-main
collectd-dns.x86_64                    5.4.1-1.11.amzn1               amzn-main
collectd-email.x86_64                  5.4.1-1.11.amzn1               amzn-main
collectd-generic-jmx.x86_64            5.4.1-1.11.amzn1               amzn-main
collectd-gmond.x86_64                  5.4.1-1.11.amzn1               amzn-main
collectd-ipmi.x86_64                   5.4.1-1.11.amzn1               amzn-main
collectd-iptables.x86_64               5.4.1-1.11.amzn1               amzn-main
collectd-ipvs.x86_64                   5.4.1-1.11.amzn1               amzn-main
collectd-java.x86_64                   5.4.1-1.11.amzn1               amzn-main
collectd-lvm.x86_64                    5.4.1-1.11.amzn1               amzn-main
collectd-memcachec.x86_64              5.4.1-1.11.amzn1               amzn-main
collectd-mysql.x86_64                  5.4.1-1.11.amzn1               amzn-main
collectd-netlink.x86_64                5.4.1-1.11.amzn1               amzn-main
collectd-nginx.x86_64                  5.4.1-1.11.amzn1               amzn-main
collectd-notify_email.x86_64           5.4.1-1.11.amzn1               amzn-main
collectd-postgresql.x86_64             5.4.1-1.11.amzn1               amzn-main
collectd-rrdcached.x86_64              5.4.1-1.11.amzn1               amzn-main
collectd-rrdtool.x86_64                5.4.1-1.11.amzn1               amzn-main
collectd-snmp.x86_64                   5.4.1-1.11.amzn1               amzn-main
collectd-varnish.x86_64                5.4.1-1.11.amzn1               amzn-main
collectd-web.x86_64                    5.4.1-1.11.amzn1               amzn-main

Things to Know
If you are using version 5.5 or newer of collectd, four metrics are now published by default:

  • df-root-percent_bytes-used – disk utilization
  • memory--percent-used – memory utilization
  • swap--percent-used – swap utilization
  • cpu--percent-active – cpu utilization

You can remove these from the whitelist.conf file if you don’t want them to be published.

The primary repositories for the Amazon Linux AMI, Ubuntu, RHEL, and CentOS currently provide older versions of collectd; be aware of this change in the default behavior if you install version 5.5 or newer from a custom repo or build it from source.

Lots More
There’s quite a bit more than I had time to show you. You can install more plugins and then configure whitelist.conf to publish even more metrics to CloudWatch. You can create CloudWatch Alarms, set up Custom Dashboards, and more.

To get started, visit AWS Labs on GitHub and download the CloudWatch plugin for collectd.

Jeff;

 

Running Express Applications on AWS Lambda and Amazon API Gateway

by Jeff Barr | in Amazon API Gateway, AWS Lambda

Express is a web framework for Node.js. It simplifies the development of web sites, web applications, and APIs. In a serverless environment, most or all of the backend logic is run on-demand in a stateless fashion (see Mike Roberts’ article on Serverless Architectures for a more detailed introduction). AWS Lambda, when used in conjunction with the new Amazon API Gateway features that I blogged about earlier this month (API Gateway Update – New Features Simplify API Development), allows existing Express applications to be run in serverless fashion. When you use API Gateway, you have the opportunity to take advantage of additional features such as Usage Plans, which allow you to build a developer ecosystem around your APIs, and caching, which allows you to build applications that are responsive and cost-effective.

In order to help you to migrate your Express applications to Lambda and API Gateway, we have created the aws-serverless-express package. This package contains a working example that you can use as a starting point for your own work.

I have two resources that you can use to migrate your Express code and applications to API Gateway and Lambda:

Jeff;

AWS Week in Review – September 27, 2016

by Jeff Barr | in Webinars

Fourteen (14) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.

Monday, September 26

Tuesday, September 27

Wednesday, September 28

Thursday, September 29

Friday, September 30

Saturday, October 1

Sunday, October 2

New & Notable Open Source

  • dynamodb-continuous-backup sets up continuous backup automation for DynamoDB.
  • lambda-billing uses NodeJS to automate billing to AWS tagged projects, producing PDF invoices.
  • vyos-based-vpc-wan is a complete Packer + CloudFormation + Troposphere powered setup of AMIs to run VyOS IPSec tunnels across multiple AWS VPCs, using BGP-4 for dynamic routing.
  • s3encrypt is a utility that encrypts and decrypts files in S3 with KMS keys.
  • lambda-uploader helps to package and upload Lambda functions to AWS.
  • AWS-Architect helps to deploy microservices to Lambda and API Gateway.
  • awsgi is a WSGI gateway for API Gateway and Lambda proxy integration.
  • rusoto is an AWS SDK for Rust.
  • EBS_Scripts contains some EBS tricks and triads.
  • landsat-on-aws is a web application that uses Amazon S3, Amazon API Gateway, and AWS Lambda to create an infinitely scalable interface to navigate Landsat satellite data.

New SlideShare Presentations

New AWS Marketplace Listings

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Prepare for re:Invent 2016 – Attend our Upcoming Webinars!

by Jeff Barr | in AWS re:Invent, Webinars

We are 60 days away from AWS re:Invent 2016! I’m already working on a big pile of blog posts and my colleagues are working night and day to make sure that you have plenty of opportunities to learn about our services. With two new venues, hands-on labs, certification exams, a full day of additional content, the re:Source Mini Con (full-day technical deep dives), and twice as many breakout sessions as last year, I am confident that you will have plenty to do during the day. After that, we have some great after-hours experiences on the agenda, including the famous wing eating contest, an expanded Pub Crawl, a Harley Ride, the re:Play party, and the re:Invent 5k (that’s kilometers, not kilobytes).

In order to make sure that you are fully prepared before you arrive in Las Vegas, we have put together three webinars. These are free and optional, but I’d recommend attending in order to make sure that you are positioned to make the most of your trip.

Here’s what we have:

How to Reserve Your Seats – We expect to have over 450 breakout sessions spread across 20 or so tracks. This year, based on your feedback, you will be able to reserve your seats ahead of time. That way you can make better plans and remove some uncertainty. In the webinar, you’ll learn how to reserve seats and build your calendar using our online tool. Here are the webinar sessions (all times are PT):

Deep Dive on AWS re:Invent 2016 Breakout Sessions – Each of the tracks that I mentioned above will contain sessions at the introductory, advanced, and expert levels. This webinar will help you to learn more about the tracks and the sessions so that you can start to prepare your schedule. Again, the sessions, with times in PT:

Know Before You Go – The webinar will provide you with the big picture: keynotes, breakout sessions, training and certification, re:Invent Central, networking opportunities, and our popular after-hours activities.

Jeff;

 

PS – As we get closer to the big event, I know that many past attendees and AWS Partners will be publishing how-to guides of their own. Time permitting (hah!) I’ll put together a round-up of these posts.

 

 

New P2 Instance Type for Amazon EC2 – Up to 16 GPUs

by Jeff Barr | in Amazon EC2, Launch

I like to watch long-term technology and business trends and watch as they shape the products and services that I get to use and to write about. As I was preparing to write today’s post, three such trends came to mind:

  • Moore’s Law – Coined in 1965, Moore’s Law postulates that the number of transistors on a chip doubles every year.
  • Mass Market / Mass Production – Because all of the technologies that we produce, use, and enjoy every day consume vast numbers of chips, there’s a huge market for them.
  • Specialization – Due to the previous trend, even niche markets can be large enough to be addressed by purpose-built products.

As the industry pushes forward in accord with these trends, a couple of interesting challenges have surfaced over the past decade or so. Again, here’s a quick list (yes, I do think in bullet points):

  • Speed of Light – Even as transistor density increases, the speed of light imposes scaling limits (as computer pioneer Grace Hopper liked to point out, electricity can travel slightly less than 1 foot in a nanosecond).
  • Semiconductor Physics – Fundamental limits in the switching time (on/off) of a transistor ultimately determine the minimum achievable cycle time for a CPU.
  • Memory Bottlenecks – The well-known von Neumann Bottleneck imposes limits on the value of additional CPU power.

The GPU (Graphics Processing Unit) was born of these trends, and addresses many of the challenges! Processors have reached the upper bound on clock rates, but Moore’s Law gives designers more and more transistors to work with. Those transistors can be used to add more cache and more memory to a traditional architecture, but the von Neumann Bottleneck limits the value of doing so. On the other hand, we now have large markets for specialized hardware (gaming comes to mind as one of the early drivers for GPU consumption). Putting all of this together, the GPU scales out (more processors and parallel banks of memory) instead of up (faster processors and bottlenecked memory). Net-net: the GPU is an effective way to use lots of transistors to provide massive amounts of compute power!

With all of this as background, I would like to tell you about the newest EC2 instance type, the P2. These instances were designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

New P2 Instance Type
This new instance type incorporates up to 8 NVIDIA Tesla K80 Accelerators, each running a pair of NVIDIA GK210 GPUs. Each GPU provides 12 GiB of memory (accessible via 240 GB/second of memory bandwidth), and 2,496 parallel processing cores. They also include ECC memory protection, allowing them to fix single-bit errors and to detect double-bit errors. The combination of ECC memory protection and double precision floating point operations makes these instances a great fit for all of the workloads that I mentioned above.

Here are the instance specs:

Instance Name   GPU Count   vCPU Count   Memory    Parallel Processing Cores   GPU Memory   Network Performance
p2.xlarge       1           4            61 GiB    2,496                       12 GiB       High
p2.8xlarge      8           32           488 GiB   19,968                      96 GiB       10 Gigabit
p2.16xlarge     16          64           732 GiB   39,936                      192 GiB      20 Gigabit

All of the instances are powered by an AWS-specific version of Intel’s Broadwell processor, running at 2.7 GHz. The p2.16xlarge gives you control over C-states and P-states, and can turbo boost up to 3.0 GHz when running on 1 or 2 cores.

The GPUs support CUDA 7.5 and above, OpenCL 1.2, and the GPU Compute APIs. The GPUs on the p2.8xlarge and the p2.16xlarge are connected via a common PCI fabric. This allows for low-latency, peer to peer GPU to GPU transfers.

All of the instances make use of our new Enhanced Network Adapter (ENA – read Elastic Network Adapter – High Performance Network Interface for Amazon EC2 to learn more) and can, per the table above, support up to 20 Gbps of low-latency networking when used within a Placement Group.

Having a powerful multi-vCPU processor and multiple, well-connected GPUs on a single instance, along with low-latency access to other instances with the same features creates a very impressive hierarchy for scale-out processing:

  • One vCPU
  • Multiple vCPUs
  • One GPU
  • Multiple GPUs in an instance
  • Multiple GPUs in multiple instances within a Placement Group

P2 instances are VPC only, require the use of 64-bit, HVM-style, EBS-backed AMIs, and you can launch them today in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions as On-Demand Instances, Spot Instances, Reserved Instances, or Dedicated Hosts.

Here’s how I installed the NVIDIA drivers and the CUDA toolkit on my P2 instance, after first creating, formatting, attaching, and mounting (to /ebs) an EBS volume that had enough room for the CUDA toolkit and the associated samples (10 GiB is more than enough):

$ cd /ebs
$ sudo yum update -y
$ sudo yum groupinstall -y "Development tools"
$ sudo yum install -y kernel-devel-`uname -r`
$ wget http://us.download.nvidia.com/XFree86/Linux-x86_64/352.99/NVIDIA-Linux-x86_64-352.99.run
$ wget http://developer.download.nvidia.com/compute/cuda/7.5/Prod/local_installers/cuda_7.5.18_linux.run
$ chmod +x NVIDIA-Linux-x86_64-352.99.run
$ sudo ./NVIDIA-Linux-x86_64-352.99.run
$ chmod +x cuda_7.5.18_linux.run
$ sudo ./cuda_7.5.18_linux.run   # Don't install the driver; just install CUDA and the samples
$ sudo nvidia-smi -pm 1
$ sudo nvidia-smi -acp 0
$ sudo nvidia-smi --auto-boost-permission=0
$ sudo nvidia-smi -ac 2505,875

Note that NVIDIA-Linux-x86_64-352.99.run and cuda_7.5.18_linux.run are interactive programs; you need to accept the license agreements, choose some options, and enter some paths. That's how I set up the CUDA toolkit and the samples when I ran cuda_7.5.18_linux.run.

P2 and OpenCL in Action
With everything set up, I took this Gist and compiled it on a p2.8xlarge instance:

[ec2-user@ip-10-0-0-242 ~]$ gcc test.c -I /usr/local/cuda/include/ -L /usr/local/cuda-7.5/lib64/ -lOpenCL -o test

Here’s what it reported:

[ec2-user@ip-10-0-0-242 ~]$ ./test
1. Device: Tesla K80
 1.1 Hardware version: OpenCL 1.2 CUDA
 1.2 Software version: 352.99
 1.3 OpenCL C version: OpenCL C 1.2
 1.4 Parallel compute units: 13
2. Device: Tesla K80
 2.1 Hardware version: OpenCL 1.2 CUDA
 2.2 Software version: 352.99
 2.3 OpenCL C version: OpenCL C 1.2
 2.4 Parallel compute units: 13
3. Device: Tesla K80
 3.1 Hardware version: OpenCL 1.2 CUDA
 3.2 Software version: 352.99
 3.3 OpenCL C version: OpenCL C 1.2
 3.4 Parallel compute units: 13
4. Device: Tesla K80
 4.1 Hardware version: OpenCL 1.2 CUDA
 4.2 Software version: 352.99
 4.3 OpenCL C version: OpenCL C 1.2
 4.4 Parallel compute units: 13
5. Device: Tesla K80
 5.1 Hardware version: OpenCL 1.2 CUDA
 5.2 Software version: 352.99
 5.3 OpenCL C version: OpenCL C 1.2
 5.4 Parallel compute units: 13
6. Device: Tesla K80
 6.1 Hardware version: OpenCL 1.2 CUDA
 6.2 Software version: 352.99
 6.3 OpenCL C version: OpenCL C 1.2
 6.4 Parallel compute units: 13
7. Device: Tesla K80
 7.1 Hardware version: OpenCL 1.2 CUDA
 7.2 Software version: 352.99
 7.3 OpenCL C version: OpenCL C 1.2
 7.4 Parallel compute units: 13
8. Device: Tesla K80
 8.1 Hardware version: OpenCL 1.2 CUDA
 8.2 Software version: 352.99
 8.3 OpenCL C version: OpenCL C 1.2
 8.4 Parallel compute units: 13

As you can see, I have a ridiculous amount of compute power available at my fingertips!

New Deep Learning AMI
As I said at the beginning, these instances are a great fit for machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

In order to help you to make great use of one or more P2 instances, we are launching a Deep Learning AMI today. Deep learning has the potential to generate predictions (also known as scores or inferences) that are more reliable than those produced by less sophisticated machine learning, at the cost of a more complex and more computationally intensive training process. Fortunately, the newest generations of deep learning tools are able to distribute the training work across multiple GPUs on a single instance as well as across multiple instances, each containing multiple GPUs.

The new AMI contains the following frameworks, each installed, configured, and tested against the popular MNIST database:

MXNet – This is a flexible, portable, and efficient library for deep learning. It supports declarative and imperative programming models across a wide variety of programming languages including C++, Python, R, Scala, Julia, Matlab, and JavaScript.

Caffe – This deep learning framework was designed with expression, speed, and modularity in mind. It was developed at the Berkeley Vision and Learning Center (BVLC) with assistance from many community contributors.

Theano – This Python library allows you to define, optimize, and evaluate mathematical expressions that involve multi-dimensional arrays.

TensorFlow – This is an open source library for numerical calculation using data flow graphs (each node in the graph represents a mathematical operation; each edge represents multidimensional data communicated between them).

Torch – This is a GPU-oriented scientific computing framework with support for machine learning algorithms, all accessible via LuaJIT.

Consult the README file in ~ec2-user/src to learn more about these frameworks.

AMIs from NVIDIA
You may also find the following AMIs to be of interest:

Jeff;

EC2 Reserved Instance Update – Convertible RIs and Regional Benefit

by Jeff Barr | in Amazon EC2

We launched EC2 Reserved Instances almost eight years ago. The model that we originated in 2009 provides you with two separate benefits: capacity reservations and a significant discount on the use of specific instances in an Availability Zone. Over time, based on customer feedback, we have refined the model and made additional options available, including Scheduled Reserved Instances, the ability to modify Reserved Instances, and the ability to buy and sell Reserved Instances (RIs) on the Reserved Instance Marketplace.

Today we are enhancing the Reserved Instance model once again. Here’s what we are launching:

Regional Benefit – Many customers have told us that the discount is more important than the capacity reservation, and that they would be willing to trade it for increased flexibility. Starting today, you can choose to waive the capacity reservation associated with a Standard RI, run your instance in any AZ in the Region, and have your RI discount automatically applied.

Convertible Reserved Instances – Convertible RIs give you even more flexibility and offer a significant discount (typically 45% compared to On-Demand). They allow you to change the instance family and other parameters associated with a Reserved Instance at any time. For example, you can convert C3 RIs to C4 RIs to take advantage of a newer instance type, or convert C4 RIs to M4 RIs if your application turns out to need more memory. You can also use Convertible RIs to take advantage of EC2 price reductions over time.

Let’s take a closer look…

Regional Benefit
Reserved Instances (either Standard or Convertible) can now be set to apply automatically across all Availability Zones in a region, broadening the application of your RI discounts. When this benefit is used, capacity is not reserved, since selecting an Availability Zone is required to provide a capacity reservation. In dynamic environments where you frequently launch, use, and then terminate instances, this new benefit will expand your options and reduce the amount of time you spend seeking optimal alignment between your RIs and your instances. It can be of considerable value in horizontally scaled architectures that use instances launched via Auto Scaling and connected via Elastic Load Balancing.

After you click on Purchase Reserved Instances in the AWS Management Console, clicking on Search will display RIs that have this new benefit.

You can check Only show offerings that reserve capacity if you want to shop for RIs that apply to a single Availability Zone and also reserve capacity.

Convertible RIs
Perhaps you, like many of our customers, purchase RIs to get the best pricing for your workloads. However, if you don’t have a good understanding of your long-term requirements, you may be able to make use of our new Convertible RIs. If your needs change, you simply exchange your Convertible Reserved Instances for other ones. You can change to Convertible RIs that have a new instance type, operating system, or tenancy without resetting the term. Also, there’s no fee for making an exchange, and you can do so as often as you like.

When you make the exchange, you must acquire new RIs that are of equal or greater value than those you started with; in some cases you’ll need to make a true-up payment in order to balance the books. The exchange process is based on the list value of each Convertible RI; this value is simply the sum of all payments you’ll make over the remaining term of the original RI.

You can shop for a Convertible RI by setting the Offering Class to Convertible before clicking on Search.

The Convertible RIs offer capacity assurance, are typically priced at a 45% discount when compared to On-Demand, and are available for all current EC2 instance types with a three-year term. All three payment options (No Upfront, Partial Upfront, and All Upfront) are available.

Available Now
All of the purchasing and exchange options that I described above can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the Reserved Instance APIs (DescribeReservedInstances, PurchaseReservedInstances, ModifyReservedInstances, and so forth).
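As a quick illustration, here's a Boto3 sketch that searches for Convertible offerings; the instance type and platform are just examples:

import boto3

ec2 = boto3.client('ec2')

# Look for Convertible offerings for a particular instance type and platform
offerings = ec2.describe_reserved_instances_offerings(
    InstanceType='c4.large',
    OfferingClass='convertible',
    ProductDescription='Linux/UNIX',
)

for offering in offerings['ReservedInstancesOfferings'][:5]:
    print(offering['ReservedInstancesOfferingId'],
          offering['Duration'], offering['FixedPrice'])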

Convertible RIs and the regional benefit are available in all public AWS Regions, excluding AWS GovCloud (US) and China (Beijing), which are coming soon.

Jeff;

 

Welcome to the Newest AWS Community Heroes (Fall 2016)

by Jeff Barr | in AWS Community Heroes

I would like to extend a very warm welcome to the newest AWS Community Heroes:

  • Cyrus Wong
  • Paul Duvall
  • Vit Niennattrakul
  • Habeeb Rahman
  • Francisco Edilton
  • Jeevan Dongre

The Heroes share their knowledge and demonstrate their enthusiasm for AWS via social media, blog posts, user groups, and workshops. Let’s take a look at their bios to learn more.

Cyrus Wong
Based in Hong Kong, Cyrus is a Data Scientist in the IT Department of the Hong Kong Institute of Vocational Education. He actively promotes the use of AWS at live events and via social media, and has received multiple awards for his AWS-powered Data Science and Machine Learning Projects.

Cyrus provides professional AWS training to students in Hong Kong, with an eye toward certification. One of his most popular blog posts is How to get all AWS Certifications in Asia, where he recommends watching the entire set of re:Invent videos at a 2.0 to 2.5x speedup!

You can connect with Cyrus on LinkedIn or at a meeting of the AWS Hong Kong User Group.

Paul Duvall
As co-founder and CTO of Stelligent (an AWS Advanced Consulting Partner), Paul has been using AWS to implement Continuous Delivery Systems since 2009.

Based in Northern Virginia, he’s an AWS Certified SysOps Administrator and an AWS Certified Solutions Architect, and has been designing, implementing, and managing software and systems for over 20 years. Paul has written over 30 articles on AWS, automation, and DevOps and is currently writing a book on Enterprise DevOps in AWS.

You can connect with Paul on LinkedIn, follow him on Twitter, or read his posts on the Stelligent Blog.

Vit Niennattrakul
Armed with a Ph.D. in time series data mining and passionate about machine learning, artificial intelligence, and natural language processing, Vit is a consummate entrepreneur who has already founded four companies including Dailitech, an AWS Consulting Partner. They focus on cloud migration and cloud-native applications, and have also created cloud-native solutions for their customers.

Shortly after starting to use AWS in 2013, Vit decided that it could help to drive innovation in Thailand. In order to make this happen, he founded the AWS User Group Thailand and has built it up to over 2,000 members.

 

Habeeb Rahman
Based in India, Habeeb is interested in cognitive science and leadership, and works on application delivery automation at Citrix. Before that, he helped to build AWS-powered SaaS infrastructure at Apigee, and held several engineering roles at Cable & Wireless.

After presenting at AWS community meetups and conferences, Habeeb helped to organize the AWS User Group in Bangalore and is actively pursuing his goal of making it the best user group in India for peer learning.

You can connect with Habeeb on LinkedIn or follow him on Twitter.

Francisco Edilton
As a self-described “full-time geek,” Francisco likes to study topics related to cloud computing, and is also interested in the stock market, travel, and food. He brings over 15 years of network security and Linux server experience to the table, and is currently deepening his knowledge of AWS by learning about serverless computing and data science.

Francisco works for TDSIS, a Brazilian company that specializes in cloud architecture, software development, and network security, and helps customers of all sizes to make the move to the cloud. On the AWS side, Francisco organizes regular AWS Meetups in São Paulo, Brazil, writes blog posts, and posts code to his GitHub repo.

Jeevan Dongre
As a DevOps Engineer based in India, Jeevan has built his career around application development, e-commerce, and product development. His passions include automation, cloud computing, and the management of large-scale web applications.

Back in 2011, Jeevan and several other like-minded people formed the Bengaluru AWS User Group in order to share and develop AWS knowledge and skills. The group is still going strong and Jeevan expects it to become the premier group for peer-to-peer learning.

You can connect with Jeevan on LinkedIn or follow him on Twitter.

Welcome
Please join me in offering a warm welcome to our newest AWS Community Heroes!

Jeff;

First Annual Alexa Prize – $2.5 Million to Advance Conversational AI

by Jeff Barr | in Alexa

My family and my friends love the Amazon Echo in our kitchen! In the past week we have asked for jokes, inquired about the time of the impending sunset, played music, and checked on the time for the next Seattle Seahawks game. Many of our guests already know how to make requests of Alexa. The others learn after hearing an example or two, and quickly take charge.

Every evening we ask Alexa for the time of sunset, subtract 10 minutes to account for the Olympic Mountains on the horizon, and plan our walk accordingly!

While Alexa is pretty cool as-is, we are highly confident that it can be a lot cooler. We want our customers to be able to hold lengthy, meaningful conversations with their Alexa-powered devices. Imagine the day when Alexa is as fluent as LCARS, the computer in Star Trek!

Alexa Prize
In order to help make conversational Artificial Intelligence (AI) a reality, I am happy to announce the first annual Alexa Prize. This annual university competition is aimed at advancing the field of conversational AI, with Amazon investing up to 2.5 million dollars in the first year.

Teams of university students (each led by a faculty sponsor) can use the Alexa Skills Kit (ASK) to build a “socialbot” that is able to converse with people about popular topics and news events. Participants will have access to a corpus of digital content from multiple sources, including the Washington Post, which has agreed to make its corpus available to the students for non-commercial use.

Millions of Alexa customers will initiate conversations with the socialbots on topics such as celebrity gossip, scientific breakthroughs, sports, and technology (to name a few). After each conversation concludes, Alexa users will provide feedback that will help the students to improve their socialbot. This feedback will also help Amazon to select the socialbots that will advance to the final phase.

Apply Now
Teams have until October 28, 2016 to apply. Up to 10 teams will be sponsored by Amazon and will receive a $100,000 stipend, Alexa-enabled devices, free AWS services, and support from the Alexa team; other teams may also be invited to participate.

On November 14, we’ll announce the selected teams and the competition will begin.

In November 2017, the competition will conclude at AWS re:Invent. At that time, the team behind the best-performing socialbot will be awarded a $500,000 prize, with an additional $1,000,000 awarded to their university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans for 20 minutes.

To learn more, read the Alexa Prize Rules and the Alexa Prize FAQ, and visit the Alexa Prize page. This contest is governed by the Alexa Prize Rules.

Jeff;

Coming in 2017 – New AWS Region in France

by Jeff Barr | in Announcements

As cloud computing becomes the new normal for organizations all over the world and as our customer base becomes larger and more diverse, we will continue to build and launch additional AWS Regions.

Bonjour la France
I am happy to announce that we will be opening an AWS Region in Paris, France in 2017. The new Region will give AWS partners and customers the ability to run their workloads and store their data in France.

This will be the fourth AWS Region in Europe. We currently have two other Regions in Europe, EU (Ireland) and EU (Frankfurt), and an additional Region in the UK is expected to launch in the coming months. Together, these Regions will provide our customers with a total of 10 Availability Zones (AZs) and allow them to architect highly fault-tolerant applications while storing their data in the EU.

Today’s announcement means that our global infrastructure now comprises 35 Availability Zones across 13 geographic regions worldwide, with another five AWS Regions (and 12 Availability Zones) in France, Canada, China, Ohio, and the United Kingdom coming online throughout the next year (see the AWS Global Infrastructure page for more info).

As always, we are looking forward to serving new and existing French customers and working with partners across Europe. Of course, the new Region will also be open to existing AWS customers who would like to process and store data in France.

To learn more about the AWS France Region feel free to contact our team in Paris at aws-in-france@amazon.com.


Coming in 2017 – A New AWS Region in France

I am happy to announce that we will be opening a new AWS Region in Paris, France in 2017. This new Region will give AWS partners and customers the ability to run their workloads and store their data in France.

This will be the fourth Region in Europe. We currently have two other Regions in Europe, EU (Ireland) and EU (Frankfurt), and an additional Region will open in the United Kingdom in the coming months. That will bring the total number of Availability Zones (AZs) in Europe to ten, allowing customers to design fault-tolerant applications and store their data within the European Union.

This announcement means that our global infrastructure now comprises 35 Availability Zones across 13 regions worldwide, with five more AWS Regions (and 12 Availability Zones) coming online next year in France, Canada, China, Ohio, and the United Kingdom (for more information, visit the AWS Global Infrastructure page).

As always, we look forward to meeting the needs of our current and future French customers and to working with our partners across Europe. Of course, this new Region will also be available to any AWS customer who wants to process and store data in France.

To learn more about the AWS Region in France, you can contact our teams in Paris at aws-in-france@amazon.com.


Jeff;