AWS Official Blog
-
AWS Week in Review – January 5, 2015
Let’s take a quick look at what happened in AWS-land last week:
Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.
— Jeff;
-
Now Available – New C4 Instances
Late last year I told you about the New Compute-Optimized EC2 Instances and asked you to stay tuned for additional pricing and technical information. I am happy to announce that we are launching these instances today in seven AWS Regions!
The New C4 Instance Type
The new C4 instances are based on the Intel Xeon E5-2666 v3 (code name Haswell) processor. This custom processor, optimized for EC2, runs at a base speed of 2.9 GHz, and can achieve clock speeds as high as 3.5 GHz with Intel® Turbo Boost (complete specifications are available here). These instances are designed to deliver the highest level of processor performance on EC2. Here’s the complete lineup:

Instance Name | vCPU Count | RAM | Network Performance | Dedicated EBS Throughput | Linux On-Demand Price
c4.large | 2 | 3.75 GiB | Moderate | 500 Mbps | $0.116/hour
c4.xlarge | 4 | 7.5 GiB | Moderate | 750 Mbps | $0.232/hour
c4.2xlarge | 8 | 15 GiB | High | 1,000 Mbps | $0.464/hour
c4.4xlarge | 16 | 30 GiB | High | 2,000 Mbps | $0.928/hour
c4.8xlarge | 36 | 60 GiB | 10 Gbps | 4,000 Mbps | $1.856/hour

The prices listed above are for the US East (Northern Virginia) and US West (Oregon) regions; the instances are also available in the Europe (Ireland), Asia Pacific (Tokyo), US West (Northern California), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions. For more pricing information, take a look at the EC2 Pricing page.
As I noted in my original post, EBS Optimization is enabled by default for all C4 instance sizes. This feature provides 500 Mbps to 4,000 Mbps of dedicated throughput to EBS above and beyond the general purpose network throughput provided to the instance, and is available to you at no extra charge. Like the existing C3 instances, the new C4 instances also provide Enhanced Networking for higher packet per second (PPS) performance, lower network jitter, and lower network latency. You can also run two or more C4 instances within a placement group in order to arrange for low-latency connectivity within the group.
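If you launch your instances programmatically, here is a minimal sketch using the AWS SDK for Java (the AMI ID and placement group name are placeholders) that creates a placement group and launches a pair of C4 instances into it:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreatePlacementGroupRequest;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Placement;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class LaunchC4Cluster {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Create a placement group so that the C4 instances get low-latency,
        // high-throughput connectivity to each other.
        ec2.createPlacementGroup(new CreatePlacementGroupRequest()
                .withGroupName("c4-cluster")            // placeholder name
                .withStrategy("cluster"));

        // Launch two c4.8xlarge instances into the group. EBS Optimization is
        // on by default for C4, so there is nothing extra to request here.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-12345678")            // placeholder HVM AMI
                .withInstanceType("c4.8xlarge")
                .withMinCount(2)
                .withMaxCount(2)
                .withPlacement(new Placement().withGroupName("c4-cluster"));

        RunInstancesResult result = ec2.runInstances(request);
        for (Instance instance : result.getReservation().getInstances()) {
            System.out.println("Launched " + instance.getInstanceId());
        }
    }
}
```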
c4.8xlarge Goodies
EC2 uses virtualization technology to provide secure compute, network, and block storage resources that are easy to manage through Web APIs. For a compute-optimized instance family like C4, our goal is to provide as much of the performance of the underlying hardware as safely possible, while still providing virtualized I/O with very low jitter. We are always working to make our systems more efficient, and through that effort we are able to deliver more cores in the form of 36 vCPUs on the c4.8xlarge instance type (some operating systems have a limit of 32 vCPUs and may not be compatible with the c4.8xlarge instance type; for more information, refer to our documentation on Operating System Support).

Like earlier Intel processors, the Intel Xeon E5-2666 v3 in the C4 instances supports Turbo Boost. This technology allows the processor to run faster than the rated speed (2.9 GHz) as long as it stays within its design limits for power consumption and heat generation. The effect depends on the number of cores in use and the exact workload, and can boost the clock speed to as high as 3.5 GHz under optimal conditions. In general, workloads that use just a few cores are the most likely to benefit from this speedup. Turbo Boost is enabled by default and your applications can benefit from it with no effort on your part.
Here’s an inside look at an actual Haswell microarchitecture die (this photo is of a version of the die that is similar to, but not an exact match for, the one used in the C4 instances). The cache is in the middle, flanked to the top and the bottom by the CPU cores:
If your workload is able to take advantage of all of those cores, you’ll get the rated 2.9 GHz speed, with help from Turbo Boost whenever the processor decides that it is able to raise the clock speed without exceeding any of the processor’s design limits for heat generation and dissipation.
In some cases, your workload might not need all 18 of the cores (each of which runs two hyperthreads, for a total of 36 vCPUs on c4.8xlarge). To tune your application for better performance, you can manage the power consumption on a per-core basis. This is known as C-state management, and gives you control over the sleep level that a core may enter when idle. Let’s say that your code needs just two cores. Your operating system can set the other 16 cores to a state that draws little or no power, thereby creating some thermal headroom that will give the remaining cores an opportunity to Turbo Boost. You also have control over the desired performance (CPU clock frequency); this is known as P-state management. You should consider changing C-state settings to decrease CPU latency variability (cores in a sleep state consume less power, but deeper sleep states require longer to become active when needed) and consider changing P-state settings to adjust the variability in CPU frequency in order to best meet the needs of your application. Please note that C-state and P-state management requires operating system support and is currently available only when running Linux.
You can use the turbostat command (available on the Amazon Linux AMI) to display the processor frequency and C-state information.
Helpful resources for C-State and P-State management include Jeremy Eder’s post on processor.max_cstate, intel_idle.max_cstate and /dev/cpu_dma_latency, Dell’s technical white paper, Controlling Processor C-State Usage in Linux, and the discussion of Are hardware power management features causing latency spikes in my application? You should also read our new documentation on Processor State Control.
Intel® Xeon® Processor (E5-2666 v3) in Depth
The Intel Haswell microarchitecture is a notable improvement on its predecessors. It is better at predicting branches and more efficient at prefetching instructions and data. It can also do a better job of taking advantage of opportunities to execute multiple instructions in parallel. This improves performance on integer math and on branches. This new processor also incorporates Intel’s Advanced Vector Extensions 2 (AVX2). AVX2 supports 256-bit integer vectors and can process 32 single-precision or 16 double-precision floating-point operations per cycle. It also supports instructions for packing and extracting bit fields, decoding variable-length bit streams, gathering bits, arbitrary precision arithmetic, endian conversion, hashing, and cryptography. The AVX2 instructions and the updated microarchitecture can double the floating-point performance for compute-intensive workloads. The improvements to the microarchitecture can boost the performance of existing applications by 30% or more. In order to take advantage of these new features, you will need to use a development toolchain that knows how to generate code that makes use of these new instructions; see the Intel Developer Zone article, Write your First Program with Haswell new Instructions, for more info.

Launch a C4 Instance Today
As I noted earlier, the new C4 instances are available today in seven AWS regions (and coming soon to the others). You can launch them as On-Demand Instances, purchase Reserved Instances, or access them via the Spot Market. You can also launch applications from the AWS Marketplace on C4 instances in any Region where they are supported.

We are always interested in hearing from our customers. If you have feedback on the C4 instance type and would like to share it with us, please send it to ec2-c4-feedback@amazon.com.
– Jeff;
-
Data Encryption Made Easier – New Encryption Options for Amazon RDS
Encryption of stored data (often referred to as “data at rest”) is an important part of any data protection plan. Today we are making it easier for you to encrypt data at rest in Amazon Relational Database Service (RDS) database instances running MySQL, PostgreSQL, and Oracle Database.
Before today’s release you had the following options for encryption of data at rest:
- RDS for Oracle Database – AWS-managed keys for Oracle Enterprise Edition (EE).
- RDS for SQL Server – AWS-managed keys for SQL Server Enterprise Edition (EE).
In addition to these options, we are adding the following options to your repertoire:
- RDS for MySQL – Customer-managed keys using AWS Key Management Service (KMS).
- RDS for PostgreSQL – Customer-managed keys using AWS Key Management Service (KMS).
- RDS for Oracle Database – Customer-managed keys for Oracle Enterprise Edition using AWS CloudHSM.
For all of the database engines and key management options listed above, encryption (AES-256) and decryption are applied automatically and transparently to RDS storage and to database snapshots. You don’t need to make any changes to your code or to your operating model in order to benefit from this important data protection feature.
Let’s take a closer look at all three of these options!
Customer-Managed Keys for MySQL and PostgreSQL
We launched the AWS Key Management Service last year at AWS re:Invent. As I noted at the time, KMS provides you with seamless, centralized control over your encryption keys. It was designed to help you implement key management at enterprise scale, with the ability to create and rotate keys, establish usage policies, and audit key usage (visit the AWS Key Management Service (KMS) home page for more information).

You can enable this feature and start to use customer-managed keys for your RDS database instances running MySQL or PostgreSQL with a couple of clicks when you create a new database instance. Turn on Enable Encryption and either choose the default (AWS-managed) key or create your own using KMS and select it from the dropdown menu:
That’s all it takes to start using customer-managed encryption for your MySQL or PostgreSQL database instances. To learn more, read the documentation on Encrypting RDS Resources.
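If you create your database instances programmatically, here is a minimal sketch using the AWS SDK for Java. The instance identifier, credentials, and KMS key ARN are placeholders; if you omit KmsKeyId, the default RDS key is used instead:

```java
import com.amazonaws.services.rds.AmazonRDS;
import com.amazonaws.services.rds.AmazonRDSClientBuilder;
import com.amazonaws.services.rds.model.CreateDBInstanceRequest;

public class CreateEncryptedMySqlInstance {
    public static void main(String[] args) {
        AmazonRDS rds = AmazonRDSClientBuilder.defaultClient();

        // StorageEncrypted turns on encryption at rest; KmsKeyId selects the
        // customer-managed key. Leave KmsKeyId out to use the default RDS key.
        CreateDBInstanceRequest request = new CreateDBInstanceRequest()
                .withDBInstanceIdentifier("encrypted-mysql")     // placeholder
                .withEngine("mysql")
                .withDBInstanceClass("db.m3.medium")
                .withAllocatedStorage(100)
                .withMasterUsername("admin")
                .withMasterUserPassword("ChangeMe12345")         // placeholder
                .withStorageEncrypted(true)
                .withKmsKeyId("arn:aws:kms:us-east-1:111122223333:key/your-key-id"); // placeholder

        rds.createDBInstance(request);
        System.out.println("Encrypted DB instance creation started");
    }
}
```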
Customer-Managed Keys for Oracle Database
AWS CloudHSM is a service that helps you to meet stringent compliance requirements for cryptographic operations and storage of encryption keys by using single tenant Hardware Security Module (HSM) appliances within the AWS cloud.

CloudHSM is now integrated with Amazon RDS for Oracle Database. This allows you to maintain sole and exclusive control of the encryption keys in CloudHSM instances when encrypting RDS database instances using Oracle Transparent Data Encryption (TDE).
You can use the new CloudHSM CLI tools to configure groups of HSM appliances in order to ensure that RDS and other applications that use CloudHSM keep running as long as one HSM in the group is available. For example, the CLI tools allow you to clone keys from one HSM to another.
To learn how to use Oracle TDE in conjunction with a CloudHSM, please read our new guide to Using AWS CloudHSM with Amazon RDS.
Available Now
These features are available now and you can start using them today!

— Jeff;
-
ClassicLink – Private Communication Between Classic EC2 Instances & VPC Resources
Amazon Virtual Private Cloud lets you create and run a logically isolated section of the Cloud. Running within a VPC combines the benefits of the cloud with the flexibility of a network topology designed to fit the unique needs of your in-house IT department. For example:
- Isolation – You can create a network and exercise fine-grained control over internal and external connectivity.
- Flexibility – You have full control over the IP address range, routing, subnets, and ACLs.
- Features – Certain AWS features, including Enhanced Networking and the new T2 instances, are available only within a VPC. The powerful C3 instances can make use of Enhanced Networking when run within a VPC.
- Private Communication – You can connect to your existing on-premises or colo’ed infrastructure using AWS Direct Connect and a VPN connection.
You define a virtual network by specifying an IP address range using a CIDR block, partitioning the range into one or more subnets, and setting up Access Control Lists (ACLs) to allow network traffic to flow between the subnets. After you define your virtual network, you can launch Amazon Elastic Compute Cloud (EC2) instances, Amazon Relational Database Service (RDS) DB instances, Amazon ElastiCache nodes, and other AWS resources, each on a designated subnet.

Up until now, EC2 instances that were not running within a VPC (commonly known as EC2-Classic) had to use public IP addresses or tunneling to communicate with AWS resources in a VPC. They could not take advantage of the higher throughput and lower latency connectivity available for inter-instance communication. This model also resulted in additional bandwidth charges and had some undesirable security implications.
Hello, ClassicLink
In order to allow EC2-Classic instances to communicate with these resources, we are introducing a new feature known as ClassicLink. You can now enable this feature for any or all of your VPCs and then put your existing Classic instances into VPC security groups.

ClassicLink will allow you to learn about and adopt VPC features, even if you are currently making good use of EC2-Classic. For example, you can use a new Amazon RDS T2 Instance (available only within a VPC) to launch a cost-effective DB instance that can easily accommodate bursts of traffic and queries. You can also take advantage of the additional control and flexibility available to you when you make use of VPC security groups. For example, you can make use of outbound traffic filtering rules and you can change the security groups associated with a running instance.
Enabling & Using ClassicLink
You can enable ClassicLink on a per-VPC basis. Simply open up the VPC tab of the AWS Management Console, select the desired VPC, right-click, and choose Enable ClassicLink:

Now you can link any or all of your EC2 instances to the VPC by right-clicking and choosing Link to VPC from the ClassicLink menu:
Choose the appropriate security group and you will be good to go:
The new setting takes effect immediately; the instance is now part of the chosen group(s). You can remove the security group(s) from the instance at a later time if you no longer have a need for private communication from the EC2-Classic instance to the AWS resources in the VPC.
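If you would rather script these two steps, here is a minimal sketch using the AWS SDK for Java; the VPC, instance, and security group IDs are placeholders:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.AttachClassicLinkVpcRequest;
import com.amazonaws.services.ec2.model.EnableVpcClassicLinkRequest;

public class EnableClassicLink {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Step 1: enable ClassicLink on the VPC.
        ec2.enableVpcClassicLink(new EnableVpcClassicLinkRequest()
                .withVpcId("vpc-1a2b3c4d"));                     // placeholder

        // Step 2: link an EC2-Classic instance to the VPC, placing it in one
        // of the VPC's security groups so private communication is allowed.
        ec2.attachClassicLinkVpc(new AttachClassicLinkVpcRequest()
                .withInstanceId("i-1a2b3c4d")                    // placeholder
                .withVpcId("vpc-1a2b3c4d")
                .withGroups("sg-1a2b3c4d"));                     // placeholder
    }
}
```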
Cost and Availability
ClassicLink is accessible from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, and the AWS SDKs. To learn more, click here.

ClassicLink is available at no charge. If you are currently running in EC2-Classic and have been looking for an easy way to start taking advantage of VPC resources, please take a closer look.
— Jeff;
-
New – Cross-Account Access in the AWS Management Console
Many AWS customers use separate AWS accounts (usually in conjunction with Consolidated Billing) for their development and production resources. This separation allows them to cleanly separate different types of resources and can also provide some security benefits.
Today we are making it easier for you to work productively within a multi-account (or multi-role) AWS environment by making it easy for you to switch roles within the AWS Management Console. You can now sign in to the console as an IAM user or via federated Single Sign-On and then switch the console to manage another account without having to enter (or remember) another user name and password.
This feature is built around IAM roles. As you may recall, roles allow you (or your AWS administrator) to define a set of permissions to access some AWS resources. The roles are not attached to a particular IAM user or group. Instead, applications or services can programmatically assume a role by requesting temporary security credentials and using them to make AWS requests.
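To make the role mechanism concrete, here is a minimal sketch (using the AWS SDK for Java; the account ID and role name are placeholders) of how a program obtains temporary credentials for a role via AWS STS. The Console's role switching builds on the same idea:

```java
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
import com.amazonaws.services.securitytoken.model.Credentials;

public class AssumeProductionRole {
    public static void main(String[] args) {
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.defaultClient();

        // Ask STS for temporary credentials for the Production role in the
        // other account (account ID and role name are placeholders).
        Credentials creds = sts.assumeRole(new AssumeRoleRequest()
                .withRoleArn("arn:aws:iam::111122223333:role/Production")
                .withRoleSessionName("production-session"))
            .getCredentials();

        // These temporary credentials can now be used to sign AWS requests
        // made against the other account.
        System.out.println("Access key: " + creds.getAccessKeyId());
        System.out.println("Expires at: " + creds.getExpiration());
    }
}
```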
I’m going to cover this feature from the user’s point of view. Another post on the AWS Security Blog will take a look at it from the administrator’s point of view.
Cross-Account Access in Action
Let’s assume that my administrator has set up a pair of IAM roles, creatively named Development and Production. I sign in to the Console as usual:

When I click on my user name I see that there’s a new Switch Role option:
When I choose it, the Console provides a handy summary of this new feature:
It also lets me enter the information needed to switch roles (they can be from the same AWS account or different AWS accounts). The Display Name is auto-generated based on the Account and the Role, but can be customized as desired:
After I do this, the Console assumes the new role (Production in this case). Note that the menu is highlighted with the color that I chose to associate with the role:
In order to simplify the process of switching to a new role, the IAM Console will create a customized role-switching URL for any role that enables cross-account access:
Activating this URL initiates the role-switching process and allows me to make the same customizations that I described earlier.
I can switch to the role of my choice (the Console will remember up to five roles). I can also switch back to the identity that I used to sign in to the Console:
You will need to set up the proper IAM roles and groups in order to make use of this feature. After you decide exactly what you want to do (always important when working with accounts and permissions), you can implement this feature in a couple of minutes.
Learn More
This feature is available now and you can start using it today. To learn more, read about Cross-Account Access in the AWS Console.

— Jeff;
-
New SQS Client Library for Java Messaging Service (JMS)
The Java Message Service (JMS) allows a pair of cooperating Java applications to create, send, receive, and read messages. The loosely coupled nature of JMS allows one part of the application to operate asynchronously with respect to the other.
Until now, you needed to stand up, maintain, and scale a multi-instance JMS server cluster (for high availability) if you wanted to make use of JMS within an application. Today we are launching a client library that implements the JMS 1.1 specification and uses Amazon Simple Queue Service (SQS) as the JMS provider. You can now count on the scale, low cost, and high availability of SQS and forget about running your own JMS cluster!
Getting Started
If you have an existing application that makes use of the JMS API, you can move it to SQS quickly and easily. Start by opening up the AWS Management Console and creating a queue:

You can also create a queue using the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, or the CreateQueue function.

Download the Amazon SQS Java Messaging Library JAR File and update your application’s CLASSPATH as appropriate. Then, configure your application’s connection factory to target the queue that you created.
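Here is a minimal "hello world" sketch of the producer side. It assumes a recent version of the Amazon SQS Java Messaging Library (where the SQSConnectionFactory constructor takes a ProviderConfiguration and an SQS client) and a queue named MyQueue that already exists:

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class SqsJmsQuickStart {
    public static void main(String[] args) throws Exception {
        // The connection factory wraps a regular SQS client; "MyQueue" is the
        // queue created earlier in the console (a placeholder name here).
        SQSConnectionFactory factory = new SQSConnectionFactory(
                new ProviderConfiguration(),
                AmazonSQSClientBuilder.defaultClient());

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Standard JMS from here on: look up the queue and send a message.
        Queue queue = session.createQueue("MyQueue");
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("Hello from JMS over SQS"));

        connection.close();
    }
}
```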
Interesting Use Cases
Here are some of the ways that you can put this new library to use:
- Remove JMS Provider Cluster – You no longer need to run a JMS cluster. In addition to the reduction in hardware overhead (fewer servers to buy and maintain), you may no longer need to pay licensing and support fees for commercial software. Further, because SQS scales as needed, you don’t need to add hardware when the message rate or message size increases.
- Modularize Monolithic Java Apps – You can modularize and modernize monolithic Java apps without having to stand up a JMS cluster. You can move to an architecture that is modern and scalable while taking advantage of time-tested Java APIs.
- Load Test Producers and Consumers – You can load test your custom producer and consumer clients at production scale without having to create a correspondingly large (yet temporary) JMS cluster. This will also allow you to gain experience with SQS and allow you to observe its scale and performance in comparison to your existing vendor-provided middleware.
- Overflow Handling – With some extra logic on the producer and the consumer, you can use your existing JMS cluster for steady-state processing, backed by a new SQS queue that handles the extra load at peak times.
Learn More
To learn more, take a look at Using JMS with Amazon SQS in the SQS Documentation. The documentation includes sample code and full configuration information.

— Jeff;
-
New – EC2 Spot Instance Termination Notices
When potential users of AWS ask me about ways that it differs from their existing on-premises systems, I like to tell them about EC2 Spot Instances and the EC2 Spot Market. When they learn that they can submit bids for spare EC2 instances at the price of their choice, their eyes widen and they start to think about the ways that they can put this unique, powerful, and economical feature to use in their own applications.
Before we dive in, let’s review the life cycle of a Spot Instance:
- You (or an application running on your behalf) submit a bid to run a desired number of EC2 instances of a particular type. The bid includes the price that you are willing to pay to use each instance for an hour (there’s a code sketch of this step after the list).
- When your bid price exceeds the current Spot price (which varies based on supply and demand), your instances are run.
- When the current Spot price rises above your bid price, your Spot instances are reclaimed by AWS so that the capacity can be given to another customer.
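Here is a minimal sketch of step 1, using the AWS SDK for Java; the bid price, AMI ID, and instance type are illustrative placeholders:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.LaunchSpecification;
import com.amazonaws.services.ec2.model.RequestSpotInstancesRequest;

public class SubmitSpotBid {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Bid $0.05 per hour for a single c4.large; the AMI ID is a placeholder.
        RequestSpotInstancesRequest request = new RequestSpotInstancesRequest()
                .withSpotPrice("0.05")
                .withInstanceCount(1)
                .withLaunchSpecification(new LaunchSpecification()
                        .withImageId("ami-12345678")
                        .withInstanceType("c4.large"));

        ec2.requestSpotInstances(request);
    }
}
```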
New Spot Instance Termination Notice
Today we are improving the reclamation process with the addition of a two-minute warning, formally known as a Spot Instance Termination Notice. Your application can use this time to save its state, upload final log files, or remove itself from an Elastic Load Balancer. This change will allow more types of applications to benefit from the scale and low price of Spot Instances.

The Termination Notice is accessible to code running on the instance via the instance’s metadata at http://169.254.169.254/latest/meta-data/spot/termination-time. This field will become available when the instance has been marked for termination (step 3, above), and will contain the time when a shutdown signal will be sent to the instance’s operating system. At that time, the Spot Instance Request’s bid status will be set to marked_for_termination. The bid status is accessible via the DescribeSpotInstanceRequests API for use by programs that manage Spot bids and instances.

We recommend that interested applications poll for the termination notice at five-second intervals (there’s a polling sketch after the timeline below). This will give the application almost two full minutes to complete any desired processing before it is reclaimed. Here’s a timeline to help you to understand the termination process (the “+” indicates a time relative to the start of the timeline):
- +00:00 – Your Spot instance is marked for termination because the current Spot price has risen above the bid price. The bid status of your Spot Instance Request is set to marked_for_termination and the /spot/termination-time metadata is set to a time precisely two minutes in the future.
- Between +00:00 and +00:05 – Your instance (assuming that it is polling at five-second intervals) learns that it is scheduled for termination.
- Between +00:05 and +02:00 – Your application makes all necessary preparation for shutdown. It can checkpoint work in progress, upload final log files, and remove itself from an Elastic Load Balancer.
- +02:00 – The instance’s operating system will be told to shut down and the bid status will be set to instance-terminated-by-price (be sure to read the documentation on Tracking Spot Requests with Bid Status Codes before writing code that depends on the values in this field).
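Here is the polling sketch referenced above. It uses nothing but the Java standard library; checkpointing, log upload, and ELB removal are left as comments since they are specific to your application:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SpotTerminationWatcher {
    private static final String NOTICE_URL =
            "http://169.254.169.254/latest/meta-data/spot/termination-time";

    public static void main(String[] args) throws Exception {
        while (true) {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(NOTICE_URL).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);

            // The metadata item does not exist (HTTP 404) until the instance
            // has been marked for termination; once present, its body holds
            // the time at which the shutdown signal will be sent.
            if (conn.getResponseCode() == 200) {
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    System.out.println("Shutting down at " + reader.readLine());
                }
                // Checkpoint work, upload final logs, and remove the instance
                // from its Elastic Load Balancer here.
                break;
            }
            Thread.sleep(5000);   // poll at the recommended five-second interval
        }
    }
}
```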
Spot Instances in Action
Many AWS customers are making great use of Spot Instances and I’d like to encourage you to do the same! Here are a couple of examples:
- Last November, AWS Partner Cycle Computing announced that they had used Spot Instances to launch a 70,000 core compute environment across three AWS Regions at a total cost of $5,594. This cluster was used to run one million simulations of a new head design for a Western Digital hard drive.
- AWS customer Novartis used 10,600 Spot Instances (about 87,000 cores) to conduct 39 years of computational chemistry in 9 hours at a cost of $4,232. During that time they screened 10 million compounds against a common cancer target.
- AWS customer Honda Motors’ use of a combination of Spot and On-Demand Instances resulted in a cost savings of 70% when compared to an earlier implementation that used only On-Demand Instances. To learn more about this and other HPC use cases, watch the Finding High Performance in the Cloud for HPC panel session from AWS re:Invent.
Available Now
This feature is available now and you can start using it today! There is no charge for the HTTP requests that you will use to retrieve the instance metadata or for the calls to the DescribeSpotInstanceRequests API.

— Jeff;
-
AWS GovCloud (US) Update – Glacier, VM Import, CloudTrail, and More
I am pleased to be able to announce a set of updates and additions to AWS GovCloud (US). We are making a number of new services available including Amazon Glacier, AWS CloudTrail, and VM Import. We are also enhancing the AWS Management Console with support for Auto Scaling and the Service Limits Report. As you may know, GovCloud (US) is an isolated AWS Region designed to allow US Government agencies and customers to move sensitive workloads into the cloud. It adheres to the U.S. International Traffic in Arms Regulations (ITAR) as well as the Federal Risk and Authorization Management Program (FedRAMP). AWS customers host a wide variety of web and enterprise applications in GovCloud (US). They also run HPC workloads and count on the cloud for storage and disaster recovery.
Let’s take a look at the new features!
Amazon Glacier
Glacier is a secure and durable storage service designed for data archiving and online backup. With prices that start at $0.013 per gigabyte per month in this Region, you can store any amount of data and retrieve it within hours. Glacier is ideal for digital media archives, financial and health care records, and long-term database backups. It is also a perfect place to store data that must be retained for regulatory compliance. You can store data directly in a Glacier vault or you can make use of lifecycle rules to move data from Amazon Simple Storage Service (S3) to Glacier.

AWS CloudTrail
AWS CloudTrail records calls made to the AWS APIs and publishes the resulting log files to S3. The log files can be used as a compliance aid, allowing you to demonstrate that AWS resources have been managed according to rules and regulatory standards (see my blog post, AWS CloudTrail – Capture AWS API Activity, for more information). You can also use the log files for operational troubleshooting and to identify activities on AWS resources which failed due to inadequate permissions. As you can see from the blog post, you simply enable CloudTrail from the Console and point it at the S3 bucket of your choice. Events will be delivered to the bucket and stored in encrypted form, typically within 15 minutes after they take place. Within the bucket, events are organized by AWS Account Id, Region, Service Name, Date, and Time:

Our white paper, Security at Scale: Logging in AWS, will help you to understand how CloudTrail works and how to put it to use in your organization.
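If you prefer to set this up programmatically rather than from the Console, here is a minimal sketch using the AWS SDK for Java; the trail and bucket names are placeholders, and the bucket must already exist with a policy that allows CloudTrail to write to it:

```java
import com.amazonaws.services.cloudtrail.AWSCloudTrail;
import com.amazonaws.services.cloudtrail.AWSCloudTrailClientBuilder;
import com.amazonaws.services.cloudtrail.model.CreateTrailRequest;
import com.amazonaws.services.cloudtrail.model.StartLoggingRequest;

public class EnableCloudTrail {
    public static void main(String[] args) {
        AWSCloudTrail cloudTrail = AWSCloudTrailClientBuilder.defaultClient();

        // Create a trail that delivers log files to an existing S3 bucket,
        // then start logging. Both names are placeholders.
        cloudTrail.createTrail(new CreateTrailRequest()
                .withName("my-trail")
                .withS3BucketName("my-cloudtrail-bucket"));

        cloudTrail.startLogging(new StartLoggingRequest()
                .withName("my-trail"));
    }
}
```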
VM Import
VM Import allows you to import virtual machine images from your existing environment for use on Amazon Elastic Compute Cloud (EC2). This allows you to build on your existing investment in images that meet your IT security, configuration management, and compliance requirements.

You can import VMware ESX and VMware Workstation VMDK images, Citrix Xen VHD images, and Microsoft Hyper-V VHD images for Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, CentOS 5.1-6.5, Ubuntu 12.04, 12.10, 13.04, 13.10, and Debian 6.0.0-6.0.8, 7.0.0-7.2.0.
Console Updates
The AWS Management Console in the GovCloud Region now supports Auto Scaling and the Service Limits Report.

Auto Scaling allows you to build systems that respond to changes in demand by scaling capacity up or down as needed.
The Service Limits Report makes it easy for you to view and manage the limits associated with your AWS account. It includes links that let you make requests for increases in a particular limit with a couple of clicks:
All of these new features are operational now and are available to GovCloud users today!
— Jeff;
-
AWS Week in Review – December 29, 2014
Let’s take a quick look at what happened in AWS-land last week:
Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.
— Jeff;
-
Welcome New AWS Community Heroes
Earlier this year we welcomed the first AWS Community Heroes. As I wrote at the time, these hard-working folks were selected for the program because they routinely deliver high-quality, impactful, developer-focused activities to the AWS Community.
More Heroes
Today I would like to welcome eight new heroes!
Java developer Satoshi Yokota is the Founder & CEO of Classmethod, a writer for Developers.IO, and a founding member of JAWS-UG. In 2010 he participated in the kickoff meeting of JAWS-UG and has helped to expand it to more than 40 branches. JAWS-UG currently features more than 100 leaders, over 100 meetup events, and over 1,000 members.
Brazilian developer Heitor Vital has been working with mobile games/applications and web development for over 10 years. He blogs on developer-oriented AWS topics and recently earned an Executive MBA. As CTO of Site Blindado SA, his work focuses on security, cloud computing, and infrastructure.
Based in Berlin, Chad Fowler is CTO of Wunderlist. As a leading voice on the topic of cloud-based deployment techniques, Chad coined the term “immutable infrastructure.” Chad has been host and organizer of many technology conferences including the International Ruby Conference & Railsconf.
Norm Driskell is a public sector digital leader. He focuses on digital transformation, transparency, and open source. From his base in London, Norm founded the AWS UK User Group in 2012, with events every two months and sponsorship from leading tech and media companies.
Victor Oliveira is a founding partner of Concrete Solutions. As Director of Engineering for this Brazilian company, he runs Cloud Computing and Agile practices. Concrete Solutions was the first AWS Partner in Latin America, due in large part to Victor’s efforts. Today he helps clients turn infrastructure into code while also keeping a watchful eye out for optimizations that can reduce costs without sacrificing quality.
After leading technical operations at Pinterest, Ryan Park became Principal Engineer for Runscope. There, he leads the design and development effort for their AWS-powered debugging and testing service. Ryan speaks on the topic of application design and is a mentor for companies in the 500 Startups incubator.
As a founding member of the engineering team at Bizo (recently acquired by LinkedIn), Larry Ogrodnek began using AWS way back in 2008! He adopted additional AWS services as they came along, and enjoys shutting down homegrown alternatives in the process. Larry is the co-founder and organizer of the Advanced AWS Meetup Group in San Francisco.
Lynn Langit consults on big data and cloud architecture from her base in Southern California. She has designed production-quality AWS solutions and also delivers technical content (also related to AWS) at developer conferences all over the world. Lynn creates technical AWS screencasts and is the primary courseware author for Teaching Kids Programming.

Welcome!
Please join me in welcoming these new AWS Community Heroes to the roster!

— Jeff;