AWS Official Blog

  • AWS Webinars – September, 2015

    by Jeff Barr | in Webinars

    I have always advised my family, friends, colleagues, and audiences to plan to spend some time learning something new every day. If you don’t take responsibility for your own continued education, you can soon find that you have fallen far behind your peers!

    If you are interested in staying current with AWS, I have a couple of recommendations. You can read this blog, follow me (and @awscloud) on Twitter, check in with the AWS What’s New page from time to time, and make use of our self-paced labs.

    When you are ready for a more in-depth look at a certain topic or service, I would strongly advise you to attend one or more of our webinars!

    Every month my colleagues assemble a lineup of sessions that are hand-selected to provide you with a detailed look at a subject that they believe will be of value to you. The webinars are presented by senior members of the team including evangelists, product managers, and solution architects.

    With that said, I would like to introduce you to our September 2015 webinars (all times are Pacific).

    There’s no charge and no obligation, of course! Please feel free to sign up, and to share this post with your friends and colleagues.

    Jeff;

    PS – Please feel free to suggest new topics by leaving a comment on this post.

  • New AWS Quick Start – Magento for E-Commerce

    by Jeff Barr | in Quick Start

    Magento is a very popular open-source content management system for e-commerce sites. Sellers and developers appreciate its open architecture, flexibility, extensibility (hundreds of extensions), and back-end workflows that can be tailored to fit the unique needs of each customer.

    Magento Community Edition (Magento CE) and Magento Enterprise Edition (Magento EE) are popular among our customers. Some of these deployments have been launched on AWS by way of partners such as Anchor Hosting, Elastera, Tenzing, Razorfish, and Optaros. Others have been launched via the AWS Marketplace (a search for Magento returns more than 20 listings). And still others have been launched in do-it-yourself form.

    Today we are publishing a new Magento Quick Start Reference Deployment. This 29-page document will show you how to build an AWS cluster that runs version 1.9.2 of Magento Community Edition. It walks you through best practices, provides cost estimates, and outlines the recommended set of AWS components.

    Using the AWS CloudFormation template referenced in the Quick Start, you can launch Magento into a new or existing Virtual Private Cloud (Amazon VPC). The template will create (if requested) the VPC, along with the necessary EC2 instances (auto scaled instances for the web server and a NAT instance for SSH connectivity), an Amazon Relational Database Service (RDS) instance running MySQL, and Elastic Load Balancing. It will also create the requisite IAM roles and security groups, and configure Auto Scaling to add more EC2 instances when traffic rises and remove them when it subsides.
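
    If you prefer to launch the stack from code, here is a minimal sketch using the new AWS SDK for C++ (introduced later in this roundup). The stack name, template URL, and parameter name and value are placeholders; substitute the actual template link and parameters from the Quick Start guide:

    #include <aws/cloudformation/CloudFormationClient.h>
    #include <aws/cloudformation/model/CreateStackRequest.h>
    #include <aws/cloudformation/model/Parameter.h>
    #include <iostream>

    using namespace Aws::CloudFormation;
    using namespace Aws::CloudFormation::Model;

    int main()
    {
        CloudFormationClient cfnClient;

        CreateStackRequest createStackRequest;
        createStackRequest.SetStackName("MagentoQuickStart");  // placeholder name
        // Placeholder URL -- use the template link from the Quick Start guide.
        createStackRequest.SetTemplateURL("https://example.com/magento-master.template");

        // Quick Start templates take parameters; a key pair is one typical example.
        Parameter keyName;
        keyName.SetParameterKey("KeyPairName");   // hypothetical parameter name
        keyName.SetParameterValue("my-key-pair");
        createStackRequest.AddParameters(keyName);

        auto outcome = cfnClient.CreateStack(createStackRequest);
        if(outcome.IsSuccess())
        {
            std::cout << "Stack creation started, stack ID: "
                      << outcome.GetResult().GetStackId();
        }
        else
        {
            std::cout << "CreateStack failed: " << outcome.GetError().GetMessage();
        }
        return 0;
    }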


    The Quick Start also includes a pointer to some sample data that you can download from the Magento site!

    Jeff;


  • Welcome to our New Colleagues at Elemental

    by Jeff Barr | in Announcements

    Earlier today we announced that we had reached an agreement to acquire Elemental Technologies of Portland, Oregon. Elemental has pioneered a number of software-based solutions for multiscreen content delivery and powers many of the world’s most innovative app-delivered video offerings and new services like 4K TV. Elemental customers use its software to process and deliver video streams that are customized for a wide variety of different formats, displays, devices, data rates, and environments.

    We have been working with Elemental to address shared customers in the Media & Entertainment (M&E) business for the past four years. During this time we have been impressed by their penchant for moving fast and their long-term vision for software-defined video.  We quickly realized that we could work together to create solutions that spanned the entire video pipeline.

    Today’s announcement will allow us to work even more closely together to provide media companies with a family of on-premises, hybrid, and cloud-based solutions for Internet-based video delivery. Of course, I’ll do my best to learn about and then share information about any and all new offerings as they become available. Perhaps I’ll even make a trip to Portland to interview the Elemental folks for an AWS Podcast episode.

    On behalf of the entire AWS team I would like to extend a warm welcome to our new colleagues!

    Jeff;

  • Amazon S3 Update – CloudTrail Integration

    by Jeff Barr | in Amazon S3, CloudTrail

    You can now use AWS CloudTrail to track bucket-level operations on your Amazon Simple Storage Service (S3) buckets.  The tracked operations include creation and deletion of buckets, modifications to access controls, changes to lifecycle policies, and changes to cross-region replication settings.

    AWS CloudTrail records API activity in your AWS account and delivers the resulting log files to a designated S3 bucket. You can look up API activity related to creating, deleting, and modifying your S3 resources using the CloudTrail Console, including access to 7 days of historical data. You can also create Amazon CloudWatch Alarms to look for specific API activities and receive email notifications when they occur.

    Effective today, we are logging actions on S3 buckets to CloudTrail in all AWS Regions. If you have already enabled CloudTrail, you do not need to take any further action in order to take advantage of this new feature. If you are not using CloudTrail, you can turn it on with a couple of clicks; read my introductory post, AWS CloudTrail – Capture API Activity, to learn more.

    You can use the log files in many different ways. For example, you can use them as supporting evidence if you need to demonstrate compliance with internal or external policies. Let’s say that you store some important files in an S3 bucket. You can set up a CloudWatch Alarm that will fire if someone else in your organization makes changes to the bucket’s access control policy. This will allow you to verify that the change is in compliance with your policies and to take immediate corrective action if necessary.
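
    For example, here is a sketch of how you might create such an alarm with the AWS SDK for C++. It assumes that you have already connected CloudTrail to CloudWatch Logs with a metric filter that publishes a custom metric (the metric and namespace names here are hypothetical) whenever a bucket's access control policy changes, and that the SNS topic ARN is one of your own:

    #include <aws/monitoring/CloudWatchClient.h>
    #include <aws/monitoring/model/PutMetricAlarmRequest.h>
    #include <iostream>

    using namespace Aws::CloudWatch;
    using namespace Aws::CloudWatch::Model;

    int main()
    {
        CloudWatchClient cwClient;

        PutMetricAlarmRequest alarmRequest;
        alarmRequest.SetAlarmName("S3BucketPolicyChanged");
        // Hypothetical custom metric published by a CloudWatch Logs metric filter
        // that matches bucket policy / ACL change events in the CloudTrail logs.
        alarmRequest.SetNamespace("CloudTrailMetrics");
        alarmRequest.SetMetricName("S3PolicyChangeCount");
        alarmRequest.SetStatistic(Statistic::Sum);
        alarmRequest.SetPeriod(300);        // evaluate in five-minute windows
        alarmRequest.SetEvaluationPeriods(1);
        alarmRequest.SetThreshold(1.0);     // fire on the first change
        alarmRequest.SetComparisonOperator(ComparisonOperator::GreaterThanOrEqualToThreshold);
        // Placeholder SNS topic that notifies the responsible team.
        alarmRequest.AddAlarmActions("arn:aws:sns:us-east-1:123456789012:security-alerts");

        auto outcome = cwClient.PutMetricAlarm(alarmRequest);
        if(!outcome.IsSuccess())
        {
            std::cout << "PutMetricAlarm failed: " << outcome.GetError().GetMessage();
        }
        return 0;
    }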

    You can also monitor creation and deletion of buckets, updates to lifecycle policies, and changes to the cross-region replication settings.

    Jeff;

  • Introducing the AWS SDK for C++

    by Jeff Barr | in Developers

    My colleague Jonathan Henson sent me a guest post to introduce a brand-new AWS SDK!

    — Jeff;


    After a long effort, we are proud to announce an open-source C++ SDK for scaling your native applications with Amazon Web Services.

    The AWS SDK for C++ is a modern C++ interface with lightweight dependencies. We designed it to be fully functioning, with both low-level and high-level interfaces. However, we also wanted it to have as few dependencies as possible and to be platform-independent. At the moment, it includes support for Windows, OS X, Linux, and mobile platforms.

    This SDK has been specifically designed with game developers in mind, but we have also worked hard to maintain an interface that will work for systems engineering tasks, as well as other projects that simply need the efficiency of native code.

    Features

    • Works with the Standard Template Library (STL).
    • Custom memory management support.
    • C++11 features used and supported.
    • Builds with CMake and your native compiler toolchain.
    • Lightweight dependencies.
    • Exception-safe.
    • Extensive, configurable logging.
    • Default credentials providers.
    • Identity management through Amazon Cognito Identity.
    • High-level Amazon S3 interface through TransferClient.
    • Uses native OS APIs for cryptographic and HTTP support.

    Code Samples

    Storing a value in an Amazon DynamoDB table:

    #include <aws/dynamodb/DynamoDBClient.h>
    #include <aws/dynamodb/model/PutItemRequest.h>
    #include <aws/dynamodb/model/AttributeValue.h>
    #include <iostream>

    using namespace Aws::DynamoDB;
    using namespace Aws::DynamoDB::Model;

    int main()
    {
        // The client picks up credentials and region from the default providers.
        DynamoDBClient dynamoDbClient;

        PutItemRequest putItemRequest;
        putItemRequest.WithTableName("TestTableName");
        // Ask DynamoDB to report the capacity consumed by this call.
        putItemRequest.SetReturnConsumedCapacity(ReturnConsumedCapacity::TOTAL);

        // The hash key attribute for the item.
        AttributeValue hashKeyAttribute;
        hashKeyAttribute.SetS("SampleHashKeyValue");
        putItemRequest.AddItem("HashKey", hashKeyAttribute);

        // A simple string value stored alongside the key.
        AttributeValue valueAttribute;
        valueAttribute.SetS("SampleValue");
        putItemRequest.AddItem("Value", valueAttribute);

        auto putItemOutcome = dynamoDbClient.PutItem(putItemRequest);
        if(putItemOutcome.IsSuccess())
        {
            std::cout << "PutItem succeeded; consumed capacity units: "
                      << putItemOutcome.GetResult().GetConsumedCapacity().GetCapacityUnits();
        }
        else
        {
            std::cout << "PutItem failed with error " << putItemOutcome.GetError().GetMessage();
        }
        return 0;
    }

    Downloading a file from Amazon Simple Storage Service (S3):

    #include <aws/s3/S3Client.h>
    #include <aws/s3/model/GetObjectRequest.h>
    #include <fstream>
    #include <iostream>

    using namespace Aws::S3;
    using namespace Aws::S3::Model;

    static const char* ALLOCATION_TAG = "GetObjectSample";      // tag for the SDK's memory tracking
    static const char* DOWNLOADED_FILENAME = "downloaded_file"; // local destination for the object

    int main()
    {
        S3Client s3Client;
        GetObjectRequest getObjectRequest;
        getObjectRequest.SetBucket("sample_bucket");
        getObjectRequest.SetKey("sample_key");
        // Stream the object directly into a local file instead of buffering it in memory.
        getObjectRequest.SetResponseStreamFactory(
            [](){
                return Aws::New<Aws::FStream>(ALLOCATION_TAG, DOWNLOADED_FILENAME,
                    std::ios_base::out | std::ios_base::in | std::ios_base::trunc);
            });
        auto getObjectOutcome = s3Client.GetObject(getObjectRequest);
        if(getObjectOutcome.IsSuccess())
        {
            std::cout << "File downloaded from S3 to location " << DOWNLOADED_FILENAME;
        }
        else
        {
            std::cout << "File download failed from S3 with error "
                      << getObjectOutcome.GetError().GetMessage();
        }
        return 0;
    }

    It’s that simple! Download the code from GitHub today and start scaling your C++ applications with the power of Amazon Web Services.

    Status
    We are launching the AWS SDK for C++ in its current experimental state while we gather feedback from users and the open-source community to harden the APIs. We are also adding support for individual services as we become more confident that the client generator can properly support each protocol. Support for more services is coming in the near future. We invite our customers to follow along with our progress and to join the development effort by submitting pull requests and sending us feedback and ideas via GitHub Issues.

    Jonathan Henson, Software Development Engineer (SDE)

  • New – Resource-Oriented Bidding for EC2 Spot Instances

    by Jeff Barr | in Amazon EC2

    Earlier this year we introduced the EC2 Spot fleet API. As I noted in my earlier post, this API allows you to launch and manage an entire fleet of Spot instances with one request. You specify the fleet’s target capacity and a bid price per hour, and tell Spot what instance type(s) you would like to launch. Spot fleet will find the lowest-priced spare EC2 capacity available, and then work to maintain the requested target capacity.

    Today we are making the Spot fleet API even more powerful with the addition of bidding weights. This new feature allows you to create and place bids that are better aligned with the instance types and Availability Zones that are the best fit for your application. Until now, each call to RequestSpotFleet included a single bid price, expressed on a per-instance basis. This was simple to use, but the simplicity ruled out certain desirable configurations. For example, there was no way to launch a fleet that had at least 488 GiB of memory spread across two or more R3 (Memory Optimized) instances, or at least 128 vCPUs spread across a combination of C3 and C4 (both Compute Optimized) instances.

    New Resource-Oriented Bidding
    Our new resource-oriented bidding model will allow you to make Spot fleet requests of this type. Think of each instance in a fleet as having a number of “capacity units” of some resource that affects how many instances you need to create a fleet of the proper size. In my examples above, the resources might be GiBs of RAM or vCPUs. They could also represent EBS-optimized bandwidth, compute power, networking performance, or another (perhaps more abstract) application-specific unit. You can now request a certain number of capacity units in your Spot fleet, and you can indicate how many such units each instance type contributes to the total.

    As a result, you can now use resources of multiple instance types, possibly spread across multiple Availability Zones, without having to be concerned about the instance types that are actually provisioned. Each call to RequestSpotFleet already includes one or more launch specifications, each one requesting instances of a particular type, running a specified AMI. You can now include one or both of the following values in each launch specification:

    WeightedCapacity – How many capacity units an instance of the specified type contributes to the fleet. This value is multiplied by the bid price per unit when Spot fleet makes bids for Spot instances on your behalf. You can use this, for example, to indicate that you are willing to pay twice as much for an r3.xlarge instance with 30.5 GiB of memory as for an r3.large instance with 15.25 GiB of memory.

    SpotPrice – An override of the bid price per unit specified in the request. You can use this value to introduce a bias into the selection process, allowing you to express a preference (for or against) particular instance types, Availability Zones, or subnets.

    Here’s a launch specification that would represent the memory-centric example that I introduced earlier (I have omitted the other values for clarity):

    Instance Type   Weighted Capacity
    r3.large        15.25
    r3.xlarge       30.5
    r3.2xlarge      61
    r3.4xlarge      122
    r3.8xlarge      244

    You would then specify a Target Capacity of 488 (representing the desired fleet capacity in GiB of memory) in your call to RequestSpotFleet, along with a Bid Price that represents the price (per GiB-hour) that you are willing to pay for the capacity. In this example, you are indicating that you are willing to pay 16 times as much for an r3.8xlarge instance as for an r3.large. (There’s a code sketch of this request after the examples below.)

    EC2 will use this information to build the fleet using the most economical combination of available instance types, looking for instances that have the lowest Spot price per unit. This could be as simple as one of the following, using a single instance type:

    • 2 x r3.8xlarge
    • 4 x r3.4xlarge
    • 8 x r3.2xlarge
    • 16 x r3.xlarge
    • 32 x r3.large

    Or something more complex and heterogeneous, such as:

    • 1 x r3.8xlarge and 2 x r3.4xlarge
    • 2 x r3.4xlarge and 8 x r3.xlarge
    • 8 x r3.xlarge and 16 x r3.large
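
    Here is a rough sketch of what a request like this might look like in code, using the AWS SDK for C++. The AMI ID, IAM fleet role, and bid price are placeholders, and the structure reflects my reading of the SDK rather than an official sample:

    #include <aws/ec2/EC2Client.h>
    #include <aws/ec2/model/RequestSpotFleetRequest.h>
    #include <aws/ec2/model/SpotFleetRequestConfigData.h>
    #include <aws/ec2/model/SpotFleetLaunchSpecification.h>
    #include <iostream>
    #include <utility>

    using namespace Aws::EC2;
    using namespace Aws::EC2::Model;

    int main()
    {
        EC2Client ec2Client;

        SpotFleetRequestConfigData config;
        config.SetTargetCapacity(488);      // desired fleet capacity, in GiB of memory
        config.SetSpotPrice("0.005");       // placeholder bid, per GiB-hour
        config.SetIamFleetRole("arn:aws:iam::123456789012:role/my-fleet-role"); // placeholder ARN

        // One launch specification per instance type; WeightedCapacity is the
        // number of capacity units (here, GiB of memory) the type contributes.
        const std::pair<InstanceType, double> types[] = {
            { InstanceType::r3_large,   15.25 },
            { InstanceType::r3_xlarge,  30.5  },
            { InstanceType::r3_2xlarge, 61.0  },
            { InstanceType::r3_4xlarge, 122.0 },
            { InstanceType::r3_8xlarge, 244.0 }
        };
        for(const auto& t : types)
        {
            SpotFleetLaunchSpecification spec;
            spec.SetImageId("ami-12345678");  // placeholder AMI
            spec.SetInstanceType(t.first);
            spec.SetWeightedCapacity(t.second);
            config.AddLaunchSpecifications(spec);
        }

        RequestSpotFleetRequest request;
        request.SetSpotFleetRequestConfig(config);

        auto outcome = ec2Client.RequestSpotFleet(request);
        if(outcome.IsSuccess())
        {
            std::cout << "Fleet requested, ID: "
                      << outcome.GetResult().GetSpotFleetRequestId();
        }
        else
        {
            std::cout << "RequestSpotFleet failed: " << outcome.GetError().GetMessage();
        }
        return 0;
    }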

    Over time, as prices change and instances are interrupted due to rising prices, replacement instances will be launched as needed. This example assumes that your application is able to sense the instance type (and the amount of memory available to it) and to adjust accordingly. Note that the fleet might be overprovisioned by a maximum of one instance in order to meet your target capacity using the available resources. In the example above, this would happen if you requested a fleet with 512 GiB of memory. It could also happen if you make a small request and the cheapest price (per unit) is available on a large instance.

    About Those Units
    The units are arbitrary, and need not map directly to a physical attribute of the instance. Suppose you did some benchmarking and measured the transaction rate (in TPS) for a number of different instance types over time. You could then request a fleet capable of processing the desired number of transactions per second, while knowing that EC2 will give you the mix of instance types that is the most economical at any given point in time. As I have pointed out in the past, the Spot mechanism sits at the intersection of technology and business, and gives you the power to build systems and to write code that improves the bottom-line economics of your business! There’s a lot of room to be creative and innovative (and to save up to 90% over On-Demand prices) here.

    You can also use this mechanism to prioritize specific Availability Zones by specifying a higher WeightedCapacity value in the desired zone. In this case, your launch specification would include two or more entries for the same instance type, with distinct Availability Zones and weights.

    Requests can be submitted using the AWS Command Line Interface (CLI) or via calls to RequestSpotFleet.

    Available Now
    This new functionality is available now and you can start using it today in all public AWS regions where Spot is available.

    Jeff;

    PS – For more information about Spot instances, take a look at two of my recent posts: Focusing on Spot Instances and Building Price-Aware Applications.

  • Subscribe to AWS Public IP Address Changes via Amazon SNS

    by Jeff Barr | in Amazon EC2, Amazon Simple Notification Service, AWS Lambda

    Last year we announced that the AWS Public IP Address Ranges Were Available in JSON Form. This was a quick, Friday afternoon post that turned out to be incredibly popular! Many AWS users are now polling this file on a regular basis and using it to manage their on-premises firewall rules or to track the growth of the AWS network footprint over time.  If you are using AWS Direct Connect, you can use the file to update your route tables to reflect the prefixes that are advertised for Direct Connect public connections.

    Today we are making it even easier for you to make use of this file. You can now subscribe to an Amazon Simple Notification Service (SNS) topic and receive notifications when the file is updated. Your code can then retrieve the file, parse it, and make any necessary updates to your local environment.

    Simply subscribe to the topic arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged and confirm the subscription in the usual way; you can use any protocol supported by SNS.
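
    If you would rather subscribe from code, here is a minimal sketch using the AWS SDK for C++; the email endpoint is a placeholder, and the client must be configured for the US East (Northern Virginia) Region, where the topic lives:

    #include <aws/sns/SNSClient.h>
    #include <aws/sns/model/SubscribeRequest.h>
    #include <iostream>

    using namespace Aws::SNS;
    using namespace Aws::SNS::Model;

    int main()
    {
        SNSClient snsClient;  // assumes a client configuration pointing at us-east-1

        SubscribeRequest subscribeRequest;
        subscribeRequest.SetTopicArn("arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged");
        subscribeRequest.SetProtocol("email");           // any SNS-supported protocol works here
        subscribeRequest.SetEndpoint("me@example.com");  // placeholder address

        auto outcome = snsClient.Subscribe(subscribeRequest);
        if(outcome.IsSuccess())
        {
            std::cout << "Subscription requested; watch for the confirmation message.";
        }
        else
        {
            std::cout << "Subscribe failed: " << outcome.GetError().GetMessage();
        }
        return 0;
    }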

    You will receive a notification that looks like this each time the IP addresses are changed:

    {
      "create-time":"yyyy-mm-ddThh:mm:ss+00:00",
      "synctoken":"0123456789",
      "md5":"6a45316e8bc9463c9e926d5d37836d33",
      "url":"https://ip-ranges.amazonaws.com/ip-ranges.json"
    }
    

    You can also build an AWS Lambda function that responds to the changes.

    In either case, your app will be responsible for fetching the file, parsing the JSON, and extracting the desired information. To learn more about the file, read about AWS IP Address Ranges.
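
    As a starting point, here is a sketch that parses a locally downloaded copy of ip-ranges.json using the JSON utilities in the SDK core and prints the EC2 prefixes; the local file name and the choice of service filter are just for illustration:

    #include <aws/core/utils/json/JsonSerializer.h>
    #include <fstream>
    #include <iostream>
    #include <sstream>

    using namespace Aws::Utils::Json;

    int main()
    {
        // Assumes the file has already been fetched, e.g.:
        //   curl -O https://ip-ranges.amazonaws.com/ip-ranges.json
        std::ifstream file("ip-ranges.json");
        std::stringstream buffer;
        buffer << file.rdbuf();

        JsonValue doc(Aws::String(buffer.str().c_str()));
        if(!doc.WasParseSuccessful())
        {
            std::cout << "Failed to parse ip-ranges.json";
            return 1;
        }

        // Each entry in "prefixes" carries "ip_prefix", "region", and "service" fields.
        auto prefixes = doc.GetArray("prefixes");
        for(unsigned i = 0; i < prefixes.GetLength(); ++i)
        {
            if(prefixes[i].GetString("service") == "EC2")
            {
                std::cout << prefixes[i].GetString("ip_prefix") << std::endl;
            }
        }
        return 0;
    }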

    If you build something useful (environment updates) and/or cool (an intriguing visualization) that you would like to share with the other readers of this blog, please feel free to leave me a comment!

    Jeff;

    PS – My count shows 13,065,550 IP addresses in the EC2 range as of August 25, 2015.

  • AWS Week in Review – August 24, 2015

    by Jeff Barr | in Week in Review

    Let’s take a quick look at what happened in AWS-land last week:

    Monday, August 24
    Tuesday, August 25
    Wednesday, August 26
    Thursday, August 27
    Friday, August 28

    New & Notable Open Source

    New Customer Success Stories

    New SlideShare Presentations

    New YouTube Videos

    Upcoming Events

    Upcoming Events at the AWS Loft (San Francisco)

    Upcoming Events at the AWS Loft (New York)

    Help Wanted

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    Jeff;

  • Amazon Underground – New Business Model for Android Apps

    by Jeff Barr | in Games, Mobile Development

    My friends and family members who build apps tell me that there’s a huge hurdle to cross on the road to monetization. Users are willing and eager to download new games and tools, but can be reluctant to pay to do so and expect a lot for free. While some apps make good use of In-App Purchasing (IAP) as a monetization vehicle and optimize for the (reported) 2% to 10% of the user base, many developers struggle to build an audience and a sustainable business model.

    We aim to change things with the new Amazon Underground app for Android phones. This app builds upon the regular Amazon mobile shopping app, providing users with access to over ten thousand dollars’ worth of apps, games, and in-app purchases that are actually free. Underground apps and games are also available automatically on Kindle Fire HD and Fire HDX tablets.

    As an app developer, you get paid $0.002 (one fifth of a cent) for every minute that a customer is using your Amazon Underground app; an app that accumulates 100,000 customer-minutes in a month, for example, would earn $200. You can now focus on building apps that engage your users over the long term. You can build up long-term story lines, roll out additional content over time, and count on a continued revenue stream that is based on actual usage.

    To learn more, read this blog post. Then register for a free developer account, review the eligibility and submission checklist, migrate your app to Amazon Underground, and submit it to the Amazon Appstore.

    Jeff;


  • Building Price-Aware Applications Using EC2 Spot Instances

    by Jeff Barr | in Amazon EC2, EC2 Spot Instances

    Last month I began writing what I hope to be a continuing series of posts about EC2 Spot Instances by talking about some Spot Instance Best Practices. Today I spoke to two senior members of the EC2 Spot Team to learn how to build price-aware applications using Spot Instances. I met with Dmitry Pushkarev (Head of Tech Development) and Joshua Burgin (General Manager) and would like to recap our conversation in interview form!

    Me: What does price really mean in the Spot world?

    Joshua: Price and price history are important considerations when building Spot applications. Using price as a signal about availability helps our customers to deploy applications in the most available capacity pools, reduces the chance of interruption and improves the overall price-performance of the application.

    Prices for instances on the Spot Market are determined by supply and demand. A low price means that there is more capacity in the pool than demand. Consistently low prices and low price variance mean that the pool is consistently underutilized. This is often the case for older generations of instances such as m1.small, c1.xlarge, and cc2.8xlarge.


    Me: How do our customers build applications that are at home in this environment?

    Dmitry: It is important to architect your application for fault tolerance and to make use of historical price information. There are probably as many placement strategies as there are customers, but generally we see two very successful use patterns: one is choosing capacity pools (instance type and availability zone) with low price variance and the other is to distribute capacity across multiple capacity pools.

    There is a good analogy with the stock market – you can either search for a “best performing” capacity pool and periodically revisit your choice, or diversify your capacity across multiple uncorrelated pools and greatly reduce your exposure to the risk of interruption.


    Me: Tell me a bit more about these placement strategies.

    Joshua: The idea here is to analyze the recent Spot price history in order to find pools with consistently low price variance. One way to do this is to order capacity pools by the length of time that has elapsed since the Spot price last exceeded your preferred bid – the maximum amount you’re willing to pay per hour. Even though past performance certainly doesn’t guarantee future results, it is a good starting point. This strategy can be used to place bids on instances for dev environments and long-running analysis jobs. It is also good for adding supplemental capacity to Amazon EMR clusters. We also recommend that our customers revisit their choices over time in order to ensure that they continue to use the pools that provide them with the most benefit.

    Me: How can our customers access this price history?

    Dmitry: It’s available through the console as well as programmatically through the SDKs and the AWS Command Line Interface (CLI).

    We’ve also created a new web-based Spot Bid Advisor that can be accessed from the Spot page. This tool presents the relevant statistics averaged across multiple Availability Zones, making it easy to find instance types with low price volatility. You can choose the region, operating system, and bid price (25%, 50%, or 100% of On-Demand) and then view the historical frequency of being outbid over the last week or month.

    Another example can be found in the aws-spot-labs repo on GitHub. The get_spot_duration.py script demonstrates how Spot price information can be obtained programmatically and used to order instance types and Availability Zones based on the duration since the price last exceeded your preferred bid price.
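
    For a flavor of what this looks like in code, here is a sketch along the same lines using the AWS SDK for C++. The bid price is a placeholder, and the one-week window and Linux/UNIX product filter are just example choices:

    #include <aws/ec2/EC2Client.h>
    #include <aws/ec2/model/DescribeSpotPriceHistoryRequest.h>
    #include <aws/core/utils/DateTime.h>
    #include <cstdlib>
    #include <iostream>

    using namespace Aws::EC2;
    using namespace Aws::EC2::Model;

    int main()
    {
        EC2Client ec2Client;

        DescribeSpotPriceHistoryRequest request;
        request.AddInstanceTypes(InstanceType::r3_large);
        request.AddProductDescriptions("Linux/UNIX");
        // Look at the last week of history.
        request.SetStartTime(Aws::Utils::DateTime(
            Aws::Utils::DateTime::Now().Millis() - 7LL * 24 * 3600 * 1000));

        const double preferredBid = 0.02;  // placeholder bid, in dollars per hour

        auto outcome = ec2Client.DescribeSpotPriceHistory(request);
        if(outcome.IsSuccess())
        {
            // Results come back newest-first, so the first price that exceeds the
            // bid tells us how long this pool has stayed below our preferred price.
            for(const auto& price : outcome.GetResult().GetSpotPriceHistory())
            {
                if(std::atof(price.GetSpotPrice().c_str()) > preferredBid)
                {
                    std::cout << price.GetAvailabilityZone()
                              << " last exceeded the bid at "
                              << price.GetTimestamp().ToGmtString("%Y-%m-%dT%H:%M:%SZ");
                    break;
                }
            }
        }
        else
        {
            std::cout << "DescribeSpotPriceHistory failed: "
                      << outcome.GetError().GetMessage();
        }
        return 0;
    }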


    Me: Ok, and then I pick one of the top instance pools and periodically revisit my choice?

    Dmitry: Yes, that’s a great way to get started. As you get more comfortable with Spot, the typical next step is to start using multiple pools at the same time and to distribute capacity equally among them. Because capacity pools are physically separate, prices often do not correlate among them, and it’s very rare that more than one capacity pool will experience a price increase within a short period of time.

    This will reduce the impact of interruptions and give you plenty of time to restore the desired level of capacity.

    Joshua: Distributing capacity this way also improves long-term price/performance: if capacity is distributed evenly across multiple instance types and/or availability zones then the hourly price is averaged across multiple pools which results in really good overall price performance.


    Me: Ok, sounds great.  Now let’s talk about the second step, bidding strategies.

    Joshua: It is important to place a reasonable bid at a price that you are willing to pay. It’s better to achieve higher availability by carefully selecting multiple capacity pools and distributing your application across the instances therein than by placing unreasonably high Spot bids. When you see increasing prices within a capacity pool, this is a sign that demand is increasing. You should start migrating your workload to less expensive pools or shut down idle instances in high-priced pools in order to avoid getting interrupted.

    Me: Do you often see our customers use more sophisticated bidding tactics?

    Dmitry: For many of our customers the ability to leverage Spot is an important competitive advantage and some of them run their entire production stacks on it – which certainly requires additional engineering to hit their SLA. One interesting way to think about Spot is to view it as a significant reward for engineering applications that are “cloud friendly.” By that I mean fault tolerant by design, flexible, and price aware. Being price aware allows the application to deploy itself to the pools with the most spare capacity available. Startups in particular often get very creative with how they use Spot, which allows them to scale faster and spend less on compute infrastructure.

    Joshua: Tools like Auto Scaling, Spot fleet, and Elastic MapReduce offer Spot integration and allow our customers to use multiple capacity pools simultaneously without adding significant development effort.


    Stay tuned for even more information about Spot Instances! In the meantime, please feel free to leave your own tips (and questions) in the comments.

    Jeff;