AWS Blog

CloudWatch Update – Jump From Metrics to Associated Logs

by Jeff Barr | in Amazon CloudWatch

A few years ago I showed you how to Store and Monitor OS & Application Log Files with Amazon CloudWatch. Many AWS customers now create filters for their logs, publish the results as CloudWatch metrics, and then raise alarms when something is amiss. For example, they watch their web server logs for 404 errors that denote bad inbound links and 503 errors that can indicate an overload condition.
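
As a sketch of that pattern (the log group name, filter name, and namespace below are hypothetical), the 404 filter described above could be defined with a CloudWatch Logs metric filter, assuming the standard space-delimited Apache access log format:

```python
# Hypothetical names throughout; this builds the parameters for a
# CloudWatch Logs metric filter that counts 404 responses in an Apache
# access log. With boto3, the dict would be passed as:
#   boto3.client("logs").put_metric_filter(**metric_filter)
metric_filter = {
    "logGroupName": "apache/access_log",  # hypothetical log group
    "filterName": "NotFoundErrors",
    # Space-delimited pattern: match lines whose status field is 404
    "filterPattern": "[ip, id, user, timestamp, request, status_code=404, size]",
    "metricTransformations": [
        {
            "metricName": "404Count",
            "metricNamespace": "WebServer",  # hypothetical namespace
            "metricValue": "1",              # each matching line adds 1
        }
    ],
}
```

An alarm on the resulting 404Count metric then completes the logs-to-alarms direction described above.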

While the monitoring and alarming are both wonderful tools for summarizing vast amounts of log data, sometimes you need to head in the opposite direction. Starting from an overview, you need to be able to quickly locate the log file entries that were identified by the filters and caused the alarms to fire. If you, like many of our customers, are running hundreds or thousands of instances and monitoring multiple types of log files, this can be considerably more difficult than finding a needle in a haystack.

Today we are launching a new CloudWatch option that will (so to speak) reduce the size of the haystack and make it a lot easier for you to find the needle!

Let’s say that I am looking at this graph and want to understand the spike in the ERROR metric at around 17:00:

I narrow down the time range with a click and a drag:

Then I click on the logs icon (it, along with the other icons, appears only when the mouse is over the graph), and select the log file of interest (ERROR):

CloudWatch opens in a second tab, with a view that shows the desired log files, pre-filtered to the desired time span. I can then expand an entry to see what’s going on (these particular errors were manufactured for demo purposes; they are not very exciting or detailed):

This feature works great for situations where filters on log files are used to publish metrics. However, what if I am looking at some CloudWatch system metrics that are not associated with a particular log file? I can follow the steps above, but select View logs in this time range from the menu:

I can see all of my CloudWatch Log Groups, filtered for the time range in the graph:

At this point I can use my knowledge of my application’s architecture to guide my decision-making and to help me to choose a Log Group to investigate. Once again, the events in the log group will be filtered so that only those in the time frame of interest will be visible. If a chart contains metrics in the Lambda namespace, links to the log group will be displayed even if no metric filters are in effect.
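
The pre-filtering that the console performs can be reproduced with the CloudWatch Logs API, which expresses the time span as millisecond epoch timestamps. A minimal sketch (the times and log group name are illustrative):

```python
from datetime import datetime, timezone

# The spike in the example above was at around 17:00; narrow to a
# half-hour window around it (all values illustrative).
start = datetime(2016, 11, 1, 16, 45, tzinfo=timezone.utc)
end = datetime(2016, 11, 1, 17, 15, tzinfo=timezone.utc)

# CloudWatch Logs expects millisecond epoch timestamps
start_ms = int(start.timestamp() * 1000)
end_ms = int(end.timestamp() * 1000)

# The console does the equivalent of:
# boto3.client("logs").filter_log_events(
#     logGroupName="apache/error_log",   # hypothetical log group
#     startTime=start_ms,
#     endTime=end_ms,
#     filterPattern="ERROR",
# )
```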

This new feature is available now and you can start using it today!

Jeff;

AWS Week in Review – October 31, 2016

by Jeff Barr | in Week in Review

Over 25 internal and external contributors helped out with pull requests and fresh content this week! Thank you all for your help and your support.

Monday, October 31

Tuesday, November 1

Wednesday, November 2

Thursday, November 3

Friday, November 4

Saturday, November 5

Sunday, November 6

New & Notable Open Source

New Customer Success Stories

  • Apposphere – Using AWS and bitfusion.io from the AWS Marketplace, Apposphere can scale 50 to 60 percent month-over-month while keeping customer satisfaction high. Based in Austin, Texas, the Apposphere mobile app delivers real-time leads from social media channels.
  • CADFEM – CADFEM uses AWS to make complex simulation software more accessible to smaller engineering firms, helping them compete with much larger ones. The firm specializes in simulation software and services for the engineering industry.
  • Mambu – Using AWS, Mambu helped one of its customers launch the United Kingdom’s first cloud-based bank, and the company is now on track for tenfold growth, giving it a competitive edge in the fast-growing fintech sector. Mambu is an all-in-one SaaS banking platform for managing credit and deposit products quickly, simply, and affordably.
  • Okta – Okta uses AWS to get new services into production in days instead of weeks. Okta creates products that use identity information to grant people access to applications on multiple devices at any time, while still enforcing strong security protections.
  • PayPlug – PayPlug is a startup, created in 2013, that developed an online payment solution. It differentiates itself by the simplicity of its services and its ease of integration on e-commerce websites.
  • Rent-A-Center – Rent-A-Center is a leading renter of furniture, appliances, and electronics to customers in the United States, Canada, Puerto Rico, and Mexico. Rent-A-Center uses AWS to manage its new e-commerce website, scale to support a 1,000 percent spike in site traffic, and enable a DevOps approach.
  • UK Ministry of Justice – By going all in on the AWS Cloud, the UK Ministry of Justice (MoJ) can use technology to enhance the effectiveness and fairness of the services it provides to British citizens. The MoJ is a ministerial department of the UK government. MoJ had its own on-premises data center, but lacked the ability to change and adapt rapidly to the needs of its citizens. As it created more digital services, MoJ turned to AWS to automate, consolidate, and deliver constituent services.

New SlideShare Presentations

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

MLB Statcast – Featuring David Ortiz, Joe Torre, and Dave O’Brien

by Jeff Barr | in Case Studies

One of my colleagues has been rooting for the Red Sox since she was in her teens. She recently had the opportunity to work with Red Sox legend and Hank Aaron award winner David Ortiz (aka “Big Papi”), Hall of Famer Joe Torre, and announcer Dave O’Brien (ESPN and NESN) to produce a fun video to commemorate Big Papi’s retirement from Major League Baseball. As you can see, he takes the idea of the quantified self very seriously, and is now measuring just about every aspect of his post-baseball life using AWS-powered Statcast in order to keep his competitive edge:

MLB Statcast uses high-resolution cameras and radar equipment to track the position of the ball (20,000 metrics per second) and the players (30 metrics per second) on the field, generating 7 terabytes of data per game, all stored in and processed by the AWS Cloud. The application uses multiple AWS services including Amazon CloudFront, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), Amazon ElastiCache, Amazon Simple Storage Service (S3), AWS Direct Connect, and AWS Lambda; here’s a big-picture look at the architecture:

To learn more, read our case study: Major League Baseball Fields Big Data, and Excitement, with AWS.

If you are coming to AWS re:Invent, get your shoulders into shape and come visit our Statcast-equipped batting cage! You will be able to measure your metrics and see how you’d do in the big leagues.

Jeff;

CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks

by Jeff Barr | in AWS CodePipeline, AWS CloudFormation

When I begin to write about a new way for you to become more productive by using two AWS services together, I think about a 1980s TV commercial for Reese’s Peanut Butter Cups! The intersection of two useful services or two delicious flavors creates a new one that is even better.

Today’s chocolate / peanut butter intersection takes place where AWS CodePipeline meets AWS CloudFormation. You can now use CodePipeline to build a continuous delivery pipeline for CloudFormation stacks. When you practice continuous delivery, each code change is automatically built, tested, and prepared for release to production. In most cases, the continuous delivery release process includes a combination of manual and automatic approval steps. For example, code that successfully passes through a series of automated tests can be routed to a development or product manager for final review and approval before it is pushed to production.

This important combination of features allows you to use the infrastructure as code model while gaining all of the advantages of continuous delivery. Each time you change a CloudFormation template, CodePipeline can initiate a workflow that will build a test stack, test it, await manual approval, and then push the changes to production. The workflow can create and manipulate stacks in many different ways:

As you will soon see, the workflow can take advantage of advanced CloudFormation features such as the ability to generate and then apply change sets (read New – Change Sets for AWS CloudFormation to learn more) to an operational stack.

The Setup
In order to learn more about this feature, I used a CloudFormation template to set up my continuous delivery pipeline (this is yet another example of infrastructure as code). This template (available here and described in detail here) sets up a full-featured pipeline. When I use the template to create my pipeline, I specify the name of an S3 bucket and the name of a source file:

The SourceS3Key points to a ZIP file in a bucket that has S3 versioning enabled. This file contains the CloudFormation template (I am using the WordPress Single Instance example) that will be deployed via the pipeline that I am about to create. It can also contain other deployment artifacts such as configuration or parameter files; here’s what mine looks like:
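
As a sketch of what that source artifact contains (the template file name and parameter values below are assumptions; the two configuration file names match the ones used later in the pipeline), the ZIP could be assembled like this:

```python
import io
import json
import zipfile

# Stand-in template body; the real file is the WordPress Single Instance
# sample template.
template_body = "AWSTemplateFormatVersion: '2010-09-09'\n"

# One configuration file per stage; parameter values are assumptions.
configs = {
    "test-stack-configuration.json": {"Parameters": {"InstanceType": "t2.micro"}},
    "prod-stack-configuration.json": {"Parameters": {"InstanceType": "m4.large"}},
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("wordpress-single-instance.yaml", template_body)  # hypothetical name
    for name, body in configs.items():
        zf.writestr(name, json.dumps(body, indent=2))

# This buffer is what gets uploaded (as a new version) to the S3 object
# named by SourceS3Key.
members = zipfile.ZipFile(io.BytesIO(buf.getvalue())).namelist()
```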

The entire continuous delivery pipeline is ready just a few seconds after I click on Create Stack. Here’s what it looks like:

The Action
At this point I have used CloudFormation to set up my pipeline. With the stage set (so to speak), now I can show you how this pipeline makes use of the new CloudFormation actions.

Let’s focus on the second stage, TestStage. Triggered by the first stage, this stage uses CloudFormation to create a test stack:

The stack is created using parameter values from the test-stack-configuration.json file in my ZIP. Since you can use different configuration files for each CloudFormation action, you can use the same template for testing and production.

After the stack is up and running, the ApproveTestStack step is used to await manual approval (it says “Waiting for approval above.”). Playing the role of the dev manager, I verify that the test stack behaves and performs as expected, and then approve it:

After approval, the DeleteTestStack step deletes the test stack.

Now we are just about ready to deploy to production. ProdStage creates a CloudFormation change set and then submits it for manual approval. This stage uses the parameter values from the prod-stack-configuration.json file in my ZIP. I can use the parameters to launch a modestly sized test environment on a small EC2 instance and a large production environment from the same template.

Now I’m playing the role of the big boss, responsible for keeping the production site up and running. I review the change set in order to make sure that I understand what will happen when I deploy to production. This is the first time that I am running the pipeline, so the change set indicates that an EC2 instance and a security group will be created:

And then I approve it:

With the change set approved, it is applied to the existing production stack in the ExecuteChangeSet step. Applying the change to an existing stack keeps existing resources in play where possible and avoids a wholesale restart of the application. This is generally more efficient and less disruptive than replacing the entire stack. It keeps in-memory caches warmed up and avoids possible bursts of cold-start activity.
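
Behind the console, the two ProdStage steps map onto the CloudFormation change set APIs. A sketch of the equivalent calls (stack, change set, bucket, and parameter values are all hypothetical):

```python
# Parameters for the two ProdStage actions, expressed as the boto3 calls
# that CodePipeline makes on my behalf. All names here are hypothetical.
create_params = {
    "StackName": "prod-stack",
    "ChangeSetName": "pipeline-changeset",
    "TemplateURL": "https://s3.amazonaws.com/example-bucket/wordpress-single-instance.yaml",
    "Parameters": [
        {"ParameterKey": "InstanceType", "ParameterValue": "m4.large"},
    ],
}

# Executed only after the manual approval step succeeds; it targets the
# same stack and change set created above.
execute_params = {
    "StackName": "prod-stack",
    "ChangeSetName": "pipeline-changeset",
}

# cfn = boto3.client("cloudformation")
# cfn.create_change_set(**create_params)    # review happens here
# cfn.execute_change_set(**execute_params)  # after approval
```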

Implementing a Change
Let’s say that I decide to support HTTPS. In order to do this, I need to add port 443 to my application’s security group. I simply edit the CloudFormation template, put it into a fresh ZIP, and upload it to S3. Here’s what I added to my template:

      - CidrIp: 0.0.0.0/0
        FromPort: '443'
        IpProtocol: tcp
        ToPort: '443'
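
For context, here is approximately where that fragment lands in the template: a new entry in the security group’s ingress list, alongside the existing HTTP rule. The surrounding resource name and port 80 rule are reproduced from memory and may differ slightly from the actual WordPress sample template:

```yaml
WebServerSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Enable HTTP and HTTPS access
    SecurityGroupIngress:
      - CidrIp: 0.0.0.0/0
        FromPort: '80'
        IpProtocol: tcp
        ToPort: '80'
      - CidrIp: 0.0.0.0/0      # the new HTTPS rule
        FromPort: '443'
        IpProtocol: tcp
        ToPort: '443'
```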

Then I return to the Console and see that CodePipeline has already detected the change and set the pipeline into motion:

The pipeline runs again, I approve the test stack, and then inspect the change set, confirming that it will simply modify an existing security group:

One quick note before I go. The CloudFormation template for the pipeline creates an IAM role and uses it to create the test and deployment stacks (this is a new feature; read about the AWS CloudFormation Service Role to learn more). For best results, you should delete the stacks before you delete the pipeline. Otherwise, you’ll need to re-create the role in order to delete the stacks.

There’s More
I’m just about out of space and time, but I’ll briefly summarize a couple of other aspects of this new capability.

Parameter Overrides – When you define a CloudFormation action, you may need to exercise additional control over the parameter values that are defined for the template. You can do this by opening up the Advanced pane and entering any desired parameter overrides:

Artifact References – In some situations you may find that you need to reference an attribute of an artifact that was produced by an earlier stage of the pipeline. For example, suppose that an early stage of your pipeline copies a Lambda function to an S3 bucket and calls the resulting artifact LambdaFunctionSource. Here’s how you would retrieve the bucket name and the object key from the artifact using a parameter override:

{
  "BucketName" : { "Fn::GetArtifactAtt" : ["LambdaFunctionSource", "BucketName"]},
  "ObjectKey" : { "Fn::GetArtifactAtt" : ["LambdaFunctionSource", "ObjectKey"]}
}

Access to JSON Parameter – You can use the new Fn::GetParam function to retrieve a value from a JSON-formatted file that is included in an artifact.
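
As an illustration (the file and key names are hypothetical), a parameter override that reads a value from a config.json file carried in the LambdaFunctionSource artifact could look like this:

```json
{
  "DatabasePassword" : { "Fn::GetParam" : ["LambdaFunctionSource", "config.json", "DatabasePassword"]}
}
```

Fn::GetParam takes the artifact name, the JSON file within that artifact, and the key to read.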

Note that Fn::GetArtifactAtt and Fn::GetParam are designed to be used within the parameter overrides.

S3 Bucket Versioning – The first step of my pipeline (the Source action) refers to an object in an S3 bucket. With versioning enabled on the bucket, I simply upload a new version of my template after each change:

If I am using S3 as my source, I must use versioning (uploading a new object over the existing one is not supported). I can also use AWS CodeCommit or a GitHub repo as my source.
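
Enabling versioning on the source bucket is a one-time setup step. A sketch with boto3 (the bucket name is hypothetical):

```python
# One-time setup: turn on versioning for the source bucket so that each
# upload of the source ZIP becomes a new, trackable version.
# The bucket name is hypothetical.
versioning_request = {
    "Bucket": "example-pipeline-source-bucket",
    "VersioningConfiguration": {"Status": "Enabled"},
}

# boto3.client("s3").put_bucket_versioning(**versioning_request)
```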

Create Pipeline Wizard
I started out this blog post by using a CloudFormation template to create my pipeline. I can also click on Create pipeline in the console and build my initial pipeline (with source, build, and beta deployment stages) using a wizard. The wizard now allows me to select CloudFormation as my deployment provider. I can create or update a stack or a change set in the beta deployment stage:

Available Now
This new feature is available now and you can start using it today. To learn more, read Continuous Delivery with AWS CodePipeline in the CodePipeline documentation.

Jeff;

New – Sending Metrics for Amazon Simple Email Service (SES)

by Jeff Barr | in Amazon CloudWatch, Amazon Kinesis, Amazon SES

Amazon Simple Email Service (SES) focuses on deliverability – getting email through to the intended recipients. In my launch blog post (Introducing the Amazon Simple Email Service), I noted that several factors influence delivery, including the level of trust that you have earned with multiple Internet Service Providers (ISPs) and the number of complaints and bounces that you generate.

Today we are launching a new set of sending metrics for SES. There are two aspects to this feature:

Event Stream – You can configure SES to publish a JSON-formatted record to Amazon Kinesis Firehose each time a significant event (sent, rejected, delivered, bounced, or complaint generated) occurs.

Metrics – You can configure SES to publish aggregate metrics to Amazon CloudWatch. You can add one or more tags to each message and use them as CloudWatch dimensions. Tagging messages gives you the power to track deliverability based on campaign, team, department, and so forth. You can then use this information to fine-tune your messages and your email strategy.
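
A sketch of how a tagged message might be sent (the configuration set name, tag names, and addresses are all assumptions; the tags are what become CloudWatch dimensions):

```python
# Parameters for a tagged SES send; with boto3 this dict would be passed
# as boto3.client("ses").send_email(**send_params). All names hypothetical.
send_params = {
    "Source": "sender@example.com",
    "Destination": {"ToAddresses": ["recipient@example.com"]},
    "Message": {
        "Subject": {"Data": "Hello"},
        "Body": {"Text": {"Data": "Hello from SES"}},
    },
    "ConfigurationSetName": "my-metrics-config",  # routes events and metrics
    "Tags": [  # each tag can be used as a CloudWatch dimension
        {"Name": "campaign", "Value": "fall-launch"},
        {"Name": "team", "Value": "marketing"},
    ],
}
```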

To learn more, read about Email Sending Metrics on the Amazon Simple Email Service Blog.

Jeff;

New Amazon Linux Container Image for Cloud and On-Premises Workloads

by Jeff Barr | in Amazon EC2, Amazon Linux AMI, EC2 Container Service

The Amazon Linux AMI is designed to provide a stable, secure, high-performance execution environment for applications running on EC2. With limited remote access (no root login and mandatory SSH key pairs) and a very small number of non-critical packages installed, the AMI has a very respectable security profile.

Many of our customers have asked us to make this Linux image available for use on-premises, often as part of their development and testing workloads.

Today I am happy to announce that we are making the Amazon Linux Container Image available for cloud and on-premises use. The image is available from the EC2 Container Registry (read Pulling an Image to learn how to access it). It is built from the same source code and packages as the AMI and will give you a smooth path to container adoption. You can use it as-is or as the basis for your own images.

In order to test this out, I launched a fresh EC2 instance, installed Docker, and then pulled and ran the new image. Then I installed cowsay and lolcat (and the dependencies) and created the image above.
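
For the “basis for your own images” case, a minimal Dockerfile might look like the sketch below. The registry URI is illustrative (the exact one is listed in Pulling an Image), and the installed packages are just examples:

```dockerfile
# Base the image on the Amazon Linux Container Image pulled from ECR.
# The registry URI below is illustrative; see "Pulling an Image" for the
# exact repository for your Region.
FROM 137112412989.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest

# Bring packages up to date, then layer on whatever your image needs
RUN yum -y update && yum clean all

CMD ["/bin/bash"]
```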

To learn more about this image, read Amazon Linux Container Image.

You can also use the image with Amazon EC2 Container Service. To learn more, read Using Amazon ECR Images with Amazon ECS.

Jeff;

New – Product Support Connection for AWS Marketplace Customers

by Jeff Barr | in AWS Marketplace

There are now over 2,700 software products listed in AWS Marketplace. Tens of thousands of AWS customers routinely find, buy, and start using offerings from more than 925 Independent Software Vendors (ISVs).

Today we are giving you, as a consumer of software through AWS Marketplace, the ability to selectively share your contact information (name, title, phone number, email address, location, and organization) with software vendors in order to simplify and streamline your requests for product support. The vendors can programmatically access this information and store it within their own support systems so that they can easily verify your identity and provide you with better support.

This is an opt-in program. Sellers can choose to participate, and you can choose to share your contact information.

Product Support Connection as a Buyer
In order to test out this feature I launched Barracuda Web Application Firewall (WAF). After selecting my options and clicking on the Accept Software Terms & Launch with 1-click button, I was given the option to share my contact details:

Then I entered my name and other information:

I have the option to enter up to 5 contacts for each subscription at this point. I can also add, change, or delete them later if necessary:

If products that I already own are now enabled for Product Support Connection, I can add contact details here as well.

Product Support Connection as a Seller
If I am an ISV and want to participate in this program, I contact the AWS Marketplace Seller & Catalog Operations Team (aws-marketplace-seller-ops@amazon.com). The team will enroll me in the program and provide me with access to a secure API that I can use to access contact information. Per the terms of the program, I must register each contact in my support or CRM system within one business day, and use it only for support purposes. To learn more, read AWS Marketplace Product Support Connection Helps Software Vendors Provide More Seamless Product Support on the AWS Partner Network Blog.

Getting Started
When I am searching for products in AWS Marketplace, I can select Product Support Connection as a desired product attribute:

As part of today’s launch, I would like to thank the following vendors who worked with us to shape this program and to add the Product Support Connection to their offerings:

  • Barracuda – Web Application Firewall (WAF), NextGen Firewall F-Series, Load Balancer ADC, Email Security Gateway, Message Archiver.
  • Chef – Chef Server, Chef Compliance.
  • Matillion – Matillion ETL for Redshift.
  • Rogue Wave – OpenLogic Enhanced Support for CentOS (6 & 7, Standard & Security Hardened).
  • SoftNAS – SoftNAS Cloud (Standard & Express).
  • Sophos – Sophos UTM Manager 4, Sophos UTM 9.
  • zData – Greenplum database.
  • Zend – PHP, Zend Server.
  • Zoomdata – Zoomdata.

We’re looking forward to working with other vendors who would like to follow their lead!

To get started, contact the AWS Marketplace Seller & Catalog Operations Team at aws-marketplace-seller-ops@amazon.com.

Jeff;

Amazon CloudWatch Update – Extended Metrics Retention & User Interface Update

by Jeff Barr | in Amazon CloudWatch

Amazon CloudWatch is a monitoring service for your AWS resources and for the applications that you run on AWS. It collects and tracks metrics, monitors log files, and allows you to set alarms and respond to changes in your AWS resources.

Today we are launching several important enhancements to CloudWatch:

  • Extended Metrics Retention – CloudWatch now stores all metrics for 15 months.
  • Simplified Metric Selection – The CloudWatch Console now makes it easier for you to find and select the metrics of interest.
  • Improved Metric Graphing – Graphing of selected metrics is now easier and more flexible.

Let’s take a look!

Extended Metrics Retention
When we launched CloudWatch back in 2009 (New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch), system metrics were stored for 14 days. Later, when we gave you the ability to publish your own metrics to CloudWatch, the same retention period applied. Many AWS customers would like to access and visualize data over longer periods of time. They want to detect and understand seasonal factors, observe monthly growth trends, and perform year-over-year analysis.

In order to support these use cases (and many others that you’ll undoubtedly dream up), CloudWatch now stores all metrics for 15 months at no extra charge. In order to keep the overall volume of data reasonable, historical data is stored at a lower level of granularity, as follows:

  • One minute data points are available for 15 days.
  • Five minute data points are available for 63 days.
  • One hour data points are available for 455 days (15 months).
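
The schedule above can be sketched as a small helper that reports which granularities are still retained for a data point of a given age:

```python
def retained_granularities(age_days):
    """Return the data-point granularities still available for a metric
    data point that is age_days old, per the retention schedule above."""
    schedule = [
        (15, "one-minute"),
        (63, "five-minute"),
        (455, "one-hour"),
    ]
    return [name for limit, name in schedule if age_days <= limit]

# A two-week-old point still has full one-minute resolution; a
# one-year-old point is only available at one-hour resolution.
```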

So that you can understand the value of this new feature first-hand, three months of retained metrics are available immediately. Over the course of the next 12 months, metrics will continue to be retained until a full 15 months of history is stored.

As a first-hand illustration of the value of extended metrics retention, this long-term view of the Duration of my AWS Lambda functions tells me that something unusual is happening every two weeks:

Simplified Metric Selection
In order to make it easier for you to find and select the metrics that you would like to examine and graph, the CloudWatch Console now includes a clutter-free, card-style view with a clear demarcation between AWS metrics and custom metrics:

The next step is to search for metrics of interest. For example, I’m interested in metrics with “CPU” in the name:

From there I can drill down, select a metric, and graph it. Here’s the CPU Utilization metric across all of my EC2 instances:

I can also show multiple metrics on the same graph:

This is not as illuminating as I would like; I’ll show you how to fix that in just a minute.

Improved Metric Graphing
With metrics now stored for 15 months and easier to find and to select, the next step is to make them easier to comprehend. This is where the new tabs (Graphed metrics and Graph options) come into play.

The Graphed metrics tab gives me detailed control over each metric:

The Actions column on the right lets me create alarms, duplicate a metric, or delete it from the graph.

I can click on any of the entries in the Statistic column to make a change:

The ability to duplicate a metric and then change a statistic lets me compare, for example, maximum and average CPU Utilization (I also changed the Period to 6 hours):

Now let’s take a look at the Y Axis control on the Graph options tab. In the last section I graphed seven EC2 metrics at the same time. Due to the variation in range, the graph was not as informative as it could be. I can now choose independent ranges for the left and right Y axes, and then assign metrics to the left or to the right:

I can edit the label for a metric by clicking on it:

I can also choose to set ranges for the left and the right Y axes:

I can use the new date picker to choose the time frame of interest to me. I can look at metrics for a desired absolute date range, or relative to the current date. Here’s how I choose an absolute date range:

And here’s how I choose a relative one:

I can rename a graph by clicking on the current name and entering a new one:

After I have fine-tuned the graph to my liking I can add it to an existing dashboard or create a new one:

Here’s one of my existing dashboards with the new graph at the bottom:

Available Now
The extended metric storage and all of the features that I described above are now available in the US East (Northern Virginia), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), South America (São Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), and Asia Pacific (Sydney) Regions.

As I noted before, there is no extra charge for the extended metric storage.

Jeff;

AWS Week in Review – October 24, 2016

by Jeff Barr | in Week in Review

Another busy week in AWS-land! Today’s post included submissions from 21 internal and external contributors, along with material from my RSS feeds, my inbox, and other things that come my way. To join in the fun, create (or find) some awesome AWS-related content and submit a pull request!

Monday, October 24

Tuesday, October 25

Wednesday, October 26

Thursday, October 27

Friday, October 28

Saturday, October 29

Sunday, October 30

New & Notable Open Source

  • aws-git-backed-static-website is a Git-backed static website generator powered entirely by AWS.
  • rds-pgbadger fetches log files from an Amazon RDS for PostgreSQL instance and generates a beautiful pgBadger report.
  • aws-lambda-redshift-copy is an AWS Lambda function that automates the copy command in Redshift.
  • VarnishAutoScalingCluster contains code and instructions for setting up a shared, horizontally scalable Varnish cluster that scales up and down using Auto Scaling groups.
  • aws-base-setup contains starter templates for developing AWS CloudFormation-based AWS stacks.
  • terraform_f5 contains Terraform scripts to instantiate a Big IP in AWS.
  • claudia-bot-builder creates chat bots for Facebook, Slack, Skype, Telegram, GroupMe, Kik, and Twilio and deploys them to AWS Lambda in minutes.
  • aws-iam-ssh-auth is a set of scripts used to authenticate users connecting to EC2 via SSH with IAM.
  • go-serverless sets up a go.cd server for serverless application deployment in AWS.
  • awsq is a helper script to run batch jobs on AWS using SQS.
  • respawn generates CloudFormation templates from YAML specifications.

New SlideShare Presentations

New Customer Success Stories

  • AbemaTV – AbemaTV is an Internet media-services company that operates one of Japan’s leading streaming platforms, FRESH! by AbemaTV. The company built its microservices platform on Amazon EC2 Container Service and uses an Amazon Aurora data store for its write-intensive microservices (such as timelines and chat) and a MySQL database on Amazon RDS for the remaining microservices APIs. By using AWS, AbemaTV has been able to quickly deploy its new platform at scale with minimal engineering effort.
  • Celgene – Celgene uses AWS to enable secure collaboration between internal and external researchers, allow individual scientists to launch hundreds of compute nodes, and reduce the time it takes to do computational jobs from weeks or months to less than a day. Celgene is a global biopharmaceutical company that creates drugs that fight cancer and other diseases and disorders. Celgene runs its high-performance computing research clusters, as well as its research collaboration environment, on AWS.
  • Under Armour – Under Armour can scale its Connected Fitness apps to meet the demands of more than 180 million global users, innovate and deliver new products and features more quickly, and expand internationally by taking advantage of the reliability and high availability of AWS. The company is a global leader in performance footwear, apparel, and equipment. Under Armour runs its growing Connected Fitness app platform on the AWS Cloud.

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

AWS Hot Startups – October 2016 – Optimizely, Touch Surgery, WittyFeed

by Jeff Barr | in Startups

I’m pleased to share yet another month of hot startups, courtesy of Tina Barr!

Jeff;

Check out this month’s AWS-powered startups:

  • Optimizely – Providing web and mobile A/B testing for the world’s leading brands.
  • Touch Surgery – Building technologies for the global surgical community.
  • WittyFeed – Creating viral content.

Optimizely (San Francisco)
Optimizely is one of the world’s leading experience optimization platforms, providing website and mobile A/B testing and personalization for leading brands. The platform’s easy-to-use products coupled with the high speed of deployment allows organizations to run experiments and make data-driven decisions. Thousands of customers worldwide are using Optimizely to deliver better experiences to their audiences across a variety of channels. To date, those customers have created and delivered over 275 billion optimized visitor experiences.

In today’s digital world, it is essential for companies to provide personalized experiences for their consumers. In Optimizely’s words, “Being personal is no longer optional.” Their web personalization products allow companies to deliver targeted content to their customers in real time by using browsing behavior, demographic information, contextual clues, and 1st- and 3rd-party data. This in turn drives revenue and keeps customers coming back. Companies don’t have to rely on a team of analysts or engineers to know the impact of their campaigns either – Optimizely has developed an industry-leading Stats Engine to support experimentation and decision-making on their terms.

Optimizely relies heavily on AWS to host the core parts of the data infrastructure that powers experimentation, targeting, and personalization across different channels. They use services like Amazon S3, Amazon EC2, Amazon EMR, Amazon RDS, Amazon Redshift, Amazon DynamoDB, and Amazon ElastiCache to rapidly and reliably scale out infrastructure for supporting business growth. Optimizely also uses AWS security services such as AWS Identity and Access Management (IAM) roles, Amazon Virtual Private Cloud (VPC), and AWS CloudTrail for auditing in order to enhance customer trust in the product.

Check out the Optimizely blog to keep up with the latest news!

Touch Surgery (London)
Founded by four surgeons, Touch Surgery is a technology startup with a mission to scale global surgery by using the latest technologies to train surgeons and standardize surgical knowledge. The company is transforming professional healthcare training through the delivery of a unique platform that links mobile apps with a powerful data back-end. The team behind Touch Surgery has spent years working with leading surgical minds from around the world to map and share the language of surgery.

Touch Surgery uses cognitive mapping techniques coupled with cutting-edge AI and 3D rendering technology to codify surgical procedures. They have partnered with leaders in Virtual Reality and Augmented Reality to work toward a vision of advancing surgical care in the operating room (OR). Future surgeons can practice anytime, anywhere by downloading the app (iOS & Android). The app allows users to learn and practice over 50 surgical procedures, evaluate and measure progress, and connect with physicians across the world. Touch Surgery also offers a variety of simulations in specialties such as neurosurgery, orthopedics, plastics, and more.

Touch Surgery’s back-end is fully hosted on AWS. They make use of many Amazon EC2 instances, as well as Amazon S3 and Amazon Elasticsearch Service. AWS has allowed them to scale with the use of Amazon RDS, and they now have over 1 million users and are recording vast amounts of usage data to power their data analytics product. Touch Surgery is also able to deploy different environments with relative ease to help with testing and increase the speed of delivery.

Touch Surgery is always looking for more talent. For more info check out https://www.touchsurgery.com/jobs/.

WittyFeed (India)
WittyFeed is a platform connecting social media influencers with creative writers who have a passion for sharing entertaining and attention-grabbing stories. Launched in 2014 by cofounders Vinay Singhal (CEO), Shashank Vaishnav (CTO), and Parveen Singhal (COO), WittyFeed has become India’s largest viral content company, generating over 60 stories per day and attracting 75 million unique visitors every month. They have over 100 writers constantly producing content, and an in-house team of editors ensures that the content goes viral and reaches the right audience. The WittyFeed team is passionate about helping publishers grow their pages and is currently creating tools that will tell them not only when to post, but also what type of content will reach a broad audience.

WittyFeed is easy to use and allows readers to personalize their feeds to get the most relevant content. Their focus is on photo-stories, ‘listicles’, and ‘charticles’ in categories such as Technology, History, Sports, and of course, Hilarious. WittyFeed also makes it easy for users to engage and interact with each other with their new live-commenting feature, which allows readers to leave comments without having to sign into a personal account.

Being a content company means that WittyFeed has to handle heavy traffic, a large user base, and a lot of data. With AWS they have a highly scalable infrastructure that can handle sudden surges and adapt to changing needs in a short period of time. WittyFeed uses a broad range of services including Amazon Kinesis Firehose, Amazon Redshift, and Amazon RDS. These services allow WittyFeed to handle over 100 million visitors and serve over 500 million page views with almost zero downtime, and have become the central pillars of its infrastructure.

Start exploring WittyFeed today!

Tina Barr