AWS Partner Network (APN) Blog

An Introduction to AWS CodeBuild

by Kate Miller | in DevOps on AWS, Guest Post, Partner Guest Post, re:Invent 2016

This is a guest post by Harlen Bains, Stelligent. Stelligent is an AWS DevOps Competency Partner.

At re:Invent 2016, AWS announced a new service called AWS CodeBuild. This managed build service can be used to compile your source code, run unit tests, and produce deployable application artifacts. In short, CodeBuild eliminates the need to provision, manage, scale, and maintain build servers. With support for common build tools like Maven and Gradle, CodeBuild is ready to go out of the box. It provides built-in support for programming platforms such as Java, Ruby, Python, Go, Docker, Node.js, and Android, and can be customized for others. As with most AWS services, you pay for only what you use, and the service scales automatically based on your demand (for more information, see AWS CodeBuild pricing).

In this post we are going to explore the basics of CodeBuild and then learn how to use the service to build a Java application. In Figure 1, you see how CodeBuild fits into the build stage of a typical deployment pipeline. The pipeline itself can be orchestrated with a tool like AWS CodePipeline – which will be discussed in our next post.

Figure 1 – How CodeBuild fits into a deployment pipeline

Before we start building our own artifacts we need to understand two key CodeBuild concepts: Build Environment and the Build Spec File.

Build Environment – The operating system, programming language runtime, and tools that CodeBuild uses to run a build are specified by a Docker image. A customer can either use their own Docker image or use one of the many optimized images that are provided by AWS.

Build Spec File – A required YAML file that contains build commands and related settings that CodeBuild uses for a build.

In addition, CodeBuild relies on the following services:

Amazon S3, GitHub, and AWS CodeCommit – Used to store the source code and build spec

IAM – Securely provides access to AWS resources

CloudWatch Logs – Used to store and access logs for each CodeBuild run

Although CodeBuild is usually used as the build component of AWS CodePipeline, it can also replace the build step of Jenkins and other CI tools. Cross-tool support is provided through the AWS SDKs for CodeBuild.

By default, CodeBuild allows up to 60 minutes for a build, but this can be set to any amount of time between 5 minutes and 8 hours. Keep in mind that you will also be charged for any other AWS resources provisioned as part of your builds, such as S3 storage, KMS keys, and CloudWatch Logs. In Figure 2, you see how CodeBuild can be executed via the AWS Management Console, AWS CLI, AWS SDKs, and AWS CodePipeline.

Figure 2 – Different ways to access CodeBuild
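If you invoke CodeBuild from your own tooling, starting a build is a single API call. The following is a minimal sketch using the AWS SDK for Python (boto3); the project name and region are illustrative:

import boto3

# Create a CodeBuild client (region is illustrative)
codebuild = boto3.client('codebuild', region_name='us-east-1')

# Start a build for an existing project; the per-build timeout
# override is optional and bounded by the 5-minute/8-hour limits
response = codebuild.start_build(
    projectName='CodeBuildTomcat',
    timeoutInMinutesOverride=120
)

print('Build id:', response['build']['id'])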

Who Should Use CodeBuild?

While CodeBuild can be seamlessly integrated with AWS CodePipeline, you can also integrate it with many other tools in your development ecosystem. The following scenarios are ideal candidates for using CodeBuild:

  • Are you still building JARs in your IDE?
  • Are you looking to remove the need to set up a separate Jenkins environment to build artifacts?
  • Do you want to reduce the amount of dedicated build infrastructure you maintain?
  • Are you trying to move to a hosted Continuous Integration (CI) server but it’s difficult to get new services approved within your organization?
  • Is your organization uncomfortable having a third party run their builds, but comfortable with running those builds in AWS?

Let’s Get CodeBuild-ing! (Tutorial)

Let’s take a look at what we want to do here. Starting with AWS-provided sample application source code, we are going to use CodeBuild to create a deployable Java artifact. Figure 3 shows how this looks as a process.

Figure 3 – CodeBuild process

Step 1 – Setup

Download this GitHub package: https://github.com/stelligent/aws-codedeploy-sample-tomcat (as shown in Figure 3) and upload the zip to an S3 bucket. Be sure that S3 versioning is enabled for the bucket.

Take note of the S3 bucket name and the name of the zip file. You will need these later.
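If you prefer to script this step, the same setup can be done with the AWS SDK for Python (boto3). This is a sketch; the bucket name is a placeholder, and in regions other than us-east-1 you would also pass a CreateBucketConfiguration:

import boto3

s3 = boto3.client('s3')
bucket = 'my-codebuild-source-bucket'   # placeholder; must be globally unique

# Create the bucket and enable versioning, as the tutorial requires
s3.create_bucket(Bucket=bucket)
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={'Status': 'Enabled'}
)

# Upload the sample application zip downloaded from GitHub
s3.upload_file('aws-codedeploy-sample-tomcat-master.zip',
               bucket,
               'aws-codedeploy-sample-tomcat-master.zip')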

Step 2 – Create Project

Launch CodeBuild by going to https://console.aws.amazon.com/codebuild/ and click Create project (or Get Started and then Create project if this is your first time using CodeBuild). Once you’ve clicked this, you will see a page where you can enter the settings for your build project.

Step 3 – Configure Project

1. Let’s begin configuring the project. There are a few steps to this:

a. Project Configuration – Add a project name. Pick one that makes sense. This is what you will see on your CodeBuild page and in your logs.

b. Source: What to Build – Add the source for your code. Here we select S3 as the source and then enter the name of the S3 bucket we created in Step 1. Enter aws-codedeploy-sample-tomcat-master.zip as the S3 object key. Although we are using S3 in this example, we could also use GitHub or CodeCommit.

c. Environment: How to Build – Next, we tell CodeBuild how to build the project. The first step is to select Use an image managed by AWS CodeBuild for the Environment image*. This way we do not need to create our own Docker image. Next, select the Operating system*. Here we use Ubuntu, and then we select Java as our runtime, since the source we are trying to build is a Java project.

In the version selection, we tell CodeBuild what version of Java we want to use. We are using aws/codebuild/java:openjdk-8.

For the Build specification section, select Insert build commands. Under the Build command* section, type:

cd aws-codedeploy-sample-tomcat-master && mvn package && zip -r MavenTomcatApp.zip *

We use this build command because CodeBuild extracts the zip file from S3 into a folder with the same name as the archive. Once in that folder, we run mvn package to build the project and then zip up the results.

buildspec.yml

While the above example demonstrates running a simple build command, you can also provide a more expressive description of a build using a buildspec.yml file. In this YAML file you can configure the commands that run in each of the build phases, along with the name and type of the artifact file(s). The buildspec.yml file needs to reside in the root directory of the source repository. A snippet of a buildspec.yml is shown below:

version: 0.1
phases:
  pre_build:
    commands:
      - echo Entering pre_build phase...
  build:
    commands:
      - echo Entering build phase...
  post_build:
    commands:
      - echo Entering post_build phase...
artifacts:
  files:
    - 'target/SampleMavenTomcatApp.war'

 

d. Artifacts: Where to Put the Artifacts from this Build Project – Now that we have told CodeBuild where to find the source code and how to build the artifact, we are going to tell it where to store the built artifact (i.e. a WAR file, in this case).

Under the Output Files section, enter:

aws-codedeploy-sample-tomcat-master/target/SampleMavenTomcatApp.war, aws-codedeploy-sample-tomcat-master/MavenTomcatApp.zip

We are going to store the artifact in S3, with the name CodeBuildTomcat, and in the same bucket we used to store the source code.

e. Service Role – Create a new Service Role in IAM for this CodeBuild project.

f. Click Continue

2. On the review page, verify your settings, and then click Save and Build.

3. On the Start new build page, just click Start build.

4. On the next page, wait for the build to succeed.

5. Once it has completed, open the S3 bucket you specified for the artifact and verify that it contains the SampleMavenTomcatApp.war file.
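If you’d rather automate project creation than click through the console, the steps above map to a single create_project call. Here is a boto3 sketch that mirrors the console settings; the bucket name, account ID, and role ARN are placeholders:

import boto3

codebuild = boto3.client('codebuild', region_name='us-east-1')

codebuild.create_project(
    name='CodeBuildTomcat',
    source={
        'type': 'S3',
        # bucket/key of the source zip uploaded in Step 1
        'location': 'my-codebuild-source-bucket/aws-codedeploy-sample-tomcat-master.zip',
        # inline equivalent of the console's Insert build commands option;
        # a buildspec.yml in the source root would work the same way
        'buildspec': (
            'version: 0.2\n'
            'phases:\n'
            '  build:\n'
            '    commands:\n'
            '      - cd aws-codedeploy-sample-tomcat-master && '
            'mvn package && zip -r MavenTomcatApp.zip *\n'
            'artifacts:\n'
            '  files:\n'
            '    - aws-codedeploy-sample-tomcat-master/target/SampleMavenTomcatApp.war\n'
            '    - aws-codedeploy-sample-tomcat-master/MavenTomcatApp.zip\n'
        )
    },
    environment={
        'type': 'LINUX_CONTAINER',
        'image': 'aws/codebuild/java:openjdk-8',   # AWS-managed Java image
        'computeType': 'BUILD_GENERAL1_SMALL'
    },
    artifacts={
        'type': 'S3',
        'location': 'my-codebuild-source-bucket',  # output bucket
        'name': 'CodeBuildTomcat'
    },
    # role created in step e; the ARN is a placeholder
    serviceRole='arn:aws:iam::123456789012:role/CodeBuildServiceRole',
    timeoutInMinutes=60
)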

CodeBuild Phases

As CodeBuild builds artifacts, it goes through several distinct phases that are logged in Amazon CloudWatch Logs (and accessible from the CodeBuild console). Each of these phases is described in more detail below, and a sketch of how to inspect them programmatically follows the list:

    • SUBMITTED – The initial phase, indicating that the build process has been initiated and the configuration received
    • PROVISIONING – CodeBuild launches a build container using the specified Docker image
    • DOWNLOAD_SOURCE – Downloads the source from S3, CodeCommit, or GitHub
    • INSTALL – Installs the source onto the container
    • PRE_BUILD – Runs any actions that need to occur prior to the build
    • BUILD – Executes the commands defined in the build specification
    • POST_BUILD – Runs any cleanup actions
    • UPLOAD_ARTIFACTS – Uploads the build artifacts to S3
    • FINALIZING – Completes the build process
    • COMPLETED – The build process is complete
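You can watch these phases programmatically as well. Here is a small boto3 sketch; the project name is illustrative:

import boto3

codebuild = boto3.client('codebuild', region_name='us-east-1')

# Start a build and fetch its phase-by-phase progress
build_id = codebuild.start_build(projectName='CodeBuildTomcat')['build']['id']
build = codebuild.batch_get_builds(ids=[build_id])['builds'][0]

for phase in build.get('phases', []):
    print(phase['phaseType'],
          phase.get('phaseStatus', 'IN_PROGRESS'),   # absent while running
          phase.get('durationInSeconds', '-'))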

6. Congratulations! Now you have used CodeBuild to build a sample application.

Conclusion

Now that you have successfully created an artifact with CodeBuild, the next step is to use it with the rest of the AWS Developer Tools suite. To see how to do this, stay tuned for tomorrow’s post, Deploy to Production using CodeBuild and the Developer Tools Suite, which demonstrates how to integrate CodeBuild with AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline by automating the provisioning with AWS CloudFormation.

Case Study: ‘Industrial Strength’ Software using Amazon Web Services Storage

by Kate Miller | in Partner Guest Post, re:Invent 2016, Storage

This is a guest post by Harley Puckett, Program Director for IBM Spectrum Protect development. IBM is an AWS Technology Partner.

As Amazon Web Services (AWS) continues to grow, software developers are discovering new ways to take advantage of its capabilities. IBM is no exception. My team makes IBM Spectrum Protect, formerly known as Tivoli Storage Manager, a leading backup and recovery solution for large and mid-sized organizations. We recently delivered AWS integration to our customers that is easy to use, fast, and efficient. If you’re writing storage-intensive software for use with AWS, this post may help you learn from our experiences and achieve fast results.

Before we began coding, we studied AWS documentation and met with several experts. We built prototypes using both Amazon S3 and OpenStack Swift APIs. We received multiple assurances from experts that our design was on the right track.  Most of our initial design was fine, but there were a few surprises. Our prototype was efficient, but didn’t deliver the scalable high performance that customers expect from IBM. We found ourselves back at the drawing board, with the goal of optimizing performance for heavy workloads while keeping administration simple.

Optimizing writes

When you write to ordinary disk blocks, there is very little abstraction between what you write and where it gets written.  When you write to object storage, there is more prep work to be done because you’re really talking to a higher level API over the network, authenticating, and getting confirmation that the object has been written. We know more operations are needed to write an object than a block.  The questions are, does it matter for your application, and what can you do about it? Under normal circumstances, IBM Spectrum Protect can manage dozens of concurrent backup streams.  Each stream of data expects to quickly complete its work. As we scaled up, multi-part writes and small delays due to network overhead accumulated, and we found our daily workload limits to be below our design goals.  We wanted IBM customers to be able to use AWS storage for all backup workloads, so we refactored our design to optimize write processing.

Figure 1:  Write optimized local cache design in IBM Spectrum Protect v7.1.7

In our improved design, we split write requests into two processes: writes to a cache and writes to Amazon S3. Our backup process now writes to a local object cache. A new process reads from the local cache and writes to Amazon S3. This change enables the Amazon S3 writer to stream data to AWS without impacting the backup streams. As an added bonus, decoupling the backup streams from the object storage writer gave us the flexibility to test various object sizes until we found the optimal size for our workload. In combination, these changes enable us to write to the cloud at nearly the same speed as file transfers.
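IBM’s implementation is proprietary, so the following is only a toy sketch of the decoupled pattern described above: backup streams write to a local cache (modeled here as an in-memory queue), while a separate writer drains the cache and streams objects to Amazon S3. All names are hypothetical:

import threading
import queue
import boto3

s3 = boto3.client('s3')
local_cache = queue.Queue()   # stand-in for the on-disk object cache

def backup_stream(data_chunks):
    # Backup streams finish quickly because they touch only the local cache
    for key, data in data_chunks:
        local_cache.put((key, data))

def s3_writer(bucket):
    # A separate worker streams cached objects to S3, so network latency
    # never stalls the backup streams
    while True:
        key, data = local_cache.get()
        s3.put_object(Bucket=bucket, Key=key, Body=data)
        local_cache.task_done()

threading.Thread(target=s3_writer, args=('my-backup-bucket',),
                 daemon=True).start()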

IBM Spectrum Protect cloud storage pools using Amazon S3 can support all workloads: application, database, VM, and file backups, with a daily ingest rate of up to 56 terabytes of changed client data in a typical 8-hour backup window, helping organizations of all sizes to use AWS storage for backup data.

Optimizing reads

When performing a restore, backup software is read-intensive. Restore requests are often time-sensitive; data owners and application users are waiting, and often running low on patience. A slow restore can result in extended downtime, and in pay-per-use cloud environments, an inefficient read operation may result in higher costs.

 

Figure 2:  Read optimized data space reduction built into IBM Spectrum Protect v7.1.7

We were able to significantly improve read performance, and therefore reduce the time needed to perform a restore, by reducing the amount of data needing to be retrieved from Amazon S3. This space-saving technology is integral to IBM Spectrum Protect.

Over the years, we’ve learned that multiple space saving techniques deliver the best results for business data. Mixed workloads can present a number of space saving challenges, so it’s good to enable more than one technique.  Efficiency capabilities built into IBM Spectrum Protect include:

  • Deduplication – Local IBM Spectrum Protect backup servers keep track of duplicate data objects, and only send/retrieve unique data to and from Amazon S3. The behavior is similar to deduplication appliances, except no special hardware is required at the local site or on AWS to take advantage of it.
  • Compression – Yann Collet’s LZ4 compression algorithm achieves additional efficiency savings for most data and, for our use case, was the fastest lossless compression algorithm we tested (see the sketch after this list).
  • Incremental ‘forever’ backups just store changed data, reducing the amount of data needing to be deduplicated and compressed.
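To make these techniques concrete, here is a toy illustration of deduplication and LZ4 compression in Python, using SHA-256 fingerprints as the dedup index and the third-party lz4 package; IBM’s production implementation is, of course, far more sophisticated:

import hashlib
import lz4.frame   # pip install lz4; LZ4 is the algorithm cited above

seen_fingerprints = set()   # stand-in for the backup server's dedup index

def prepare_chunk(chunk):
    # Deduplication: only previously unseen chunks are sent to Amazon S3
    fingerprint = hashlib.sha256(chunk).hexdigest()
    if fingerprint in seen_fingerprints:
        return None   # duplicate: store a reference, not the data
    seen_fingerprints.add(fingerprint)
    # Compression: LZ4 trades a little ratio for very high speed
    return fingerprint, lz4.frame.compress(chunk)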

Faster restores can enable more business critical workloads to take advantage of AWS storage, and meet their recovery time objectives.

Optimizing administration

To optimize administration, we decided to provide a guided setup with intelligent presets, so backup administrators can be more confident in their choices.

Figure 3:  Guided Storage Pool set up in IBM Spectrum Protect v7.1.7

Step-by-step instructions clearly communicate required tasks and whether they have been completed. When cloud storage is selected, IBM Spectrum Protect automatically turns on encryption so data sent to the cloud has end-to-end security, following AWS security best practices.  When Amazon S3 is selected, IBM Spectrum Protect automatically selects the closest AWS region, but also provides the option for other AWS regions to be selected from a drop-down list box.  The guided setup automatically enables features for optimized product efficiency and performance, further streamlining the process.  Once configured, administrators can see data moving to Amazon S3 at a glance from the IBM Spectrum Protect Operations Center.

Figure 4:  Managing Amazon S3 storage from IBM Spectrum Protect v7.1.7

By streamlining the process to set up and verify AWS cloud storage pools, IBM Spectrum Protect backup administrators can master hybrid cloud data protection in minutes, even if they don’t have deep cloud expertise.

If your software is storage-intensive and you want to support cloud storage, take time to optimize for reads, writes, and ease of administration.

Good luck and good programming.

See for yourself how easy it is to add AWS storage to IBM Spectrum Protect.  The following short video shows the steps.

Visit me at AWS re:Invent in Las Vegas, Nov. 28-Dec. 2, in the IBM booth, #434; or check out IBM Spectrum Protect at http://www.ibm.com/systems/storage/spectrum/protect/.

At re:Invent 2016, AWS and Richard Spurlock, CEO and Founder of Cobalt Iron, an IBM Spectrum Protect user, spoke at session STG-212 on Wednesday at 11:00, entitled ‘Three Customer Viewpoints: Private Equity, Managed Services, and Government – How These Customers Transformed Business Operations through Storage’. Visit the AWS YouTube page later this week to find a recording of that presentation.

About the author

Harley Puckett is the Program Director for IBM Spectrum Protect development. He regularly presents at client briefings and user conferences. Harley spent 6 ½ years as an Executive Storage Software Consultant and manager of the Client Workshop Program in the Tucson Executive Briefing Center. He was the Solutions Architect for IBM’s Global Archive Solutions Center in Guadalajara, Mexico. Prior to that he spent 9 ½ years as a senior development manager for IBM Tivoli Storage Manager (TSM). Harley has been working on storage management at IBM for over 25 years. The posting on this site is my own and doesn’t necessarily represent IBM’s positions, strategies or opinions.

LinkedIn:  https://www.linkedin.com/in/harleypuckett


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Congratulations to our Premier Consulting Partners – Eight New Premier Partners Announced at re:Invent 2016

by Kate Miller | in APN Consulting Partners, APN Partner Highlight, Premier Partners, re:Invent 2016

Reaching the APN Premier tier is an enormous achievement. APN Premier Partners have deep experience on AWS, are consistently raising the bar in their AWS-based practice, and are constantly looking for new ways to drive customer success. We also find that Premier Partners go above and beyond in their AWS Training & Certification. I’ve often been told by Premier Partners that having a deep bench of AWS Trained & Certified individuals changes the conversation they have with customers. BlazeClan, for instance, has told us:

 

“When initially engaging with a customer, it not only helps for us to be able to tell the customer how many Associate and Professional Certified resources we have on the team, but it changes the entire conversation with the customer. When our AWS Certified resources engage with the customer, they have a different level of conversation. And it brings a different level of credibility to our company.” – Varoon Rajani, Co-Founder & CEO, BlazeClan; read the full BlazeClan case study here

We are very proud of our AWS Premier Partners, and at the Global Partner Summit at re:Invent 2016, we announced that eight more APN Consulting Partners have officially earned Premier tier status.

Learn more about our new Premier Partners:

We also had the pleasure of recognizing 11 Premier Partners who’ve been in the Premier tier for five years – congratulations to these firms!

Learn more about these Premier Partners here:

Congratulations to all of our 55 Premier Partners!

Just Launched: Canonical Enterprise Support for Ubuntu on AWS Marketplace

by Kate Miller | in APN Partner Highlight, APN Technology Partners, AWS Marketplace, re:Invent 2016, SaaS on AWS

This is a guest post from Udi Nachmany, Head of Public Cloud at Canonical. Canonical is an Advanced APN Technology Partner. 


Ubuntu has long been popular with users of AWS, due to its stability, regular cadence of releases, and scale-out-friendly usage model. Canonical, an Advanced APN Technology Partner, optimizes, builds, and regularly publishes the latest Ubuntu images to the Amazon EC2 console and AWS Marketplace to provide an optimal Ubuntu experience for developers using AWS Cloud services. At AWS re:Invent 2016, Canonical will augment that experience with the added stability, security, and efficiency enterprise users require by launching its enterprise support package for Ubuntu, Ubuntu Advantage, on AWS Marketplace.

Ubuntu Advantage Virtual Guest is designed for virtualized enterprise workloads on AWS, which use official Ubuntu images. It is the professional package of tooling, technology, and expertise from Canonical, and helps organizations around the world manage their Ubuntu deployments. Ubuntu Advantage Virtual Guest includes:

  • Access to Landscape (SaaS version), the systems management tool for using Ubuntu at scale
  • Canonical Livepatch Service, which allows you to apply critical kernel patches without rebooting on Ubuntu 16.04 LTS images using the Linux 4.4 kernel
  • Up to 24×7 telephone and web support
  • Access to the Canonical Knowledge Hub, and regular security bug fixes

The added benefits of accessing Ubuntu Advantage through the AWS Marketplace SaaS subscription model are hourly pricing rates based on your actual Ubuntu usage on AWS, and centralized billing through your existing AWS Marketplace account. Ubuntu Enterprise Support is available in two tiers: Standard and Advanced. You can learn about the difference in support levels here.

At re:Invent, you will also be able to learn more about Canonical’s innovations around software operations, containers, and the Internet of Things (IoT). Nearly all Canonical technologies such as Juju, LXD, and Snaps, as well as the Canonical distribution of Kubernetes, can be used and deployed in production with your Amazon EC2 credentials today.  What’s more, these technologies are supported with professional SLAs from Canonical.

We are also actively innovating around containers with our machine container solution LXD, which provides the density and efficiency of containers with the manageability and security of virtual machines. We are also partnering with Docker, the Cloud Native Computing Foundation (CNCF), and others around process container orchestration. All of this and much more can be deployed through Juju, our open source service modeling platform for operating complex, interlinked, dynamic software stacks known as Big Software.

Snaps are a new packaging format used to securely package software as an app, making updates and rollbacks a breeze. Canonical’s Ubuntu Core is an open source, Snap-enabled production operating system that powers virtually anything, including robots, drones, industrial IoT gateways, network equipment, digital signage, mobile base stations, and fridges.

At re:Invent 2016, we will be talking to Ubuntu users about all these innovations and more. Come visit us at booth 2341 in Hall D.


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Running Tableau Online on AWS

by Kate Miller | in APN Technology Partners, Partner Guest Post, re:Invent 2016

The following post is a guest post from Nick Brisoux, Product Manager – Cloud at Tableau. Tableau is an Advanced APN Technology Partner and an AWS Big Data Competency Partner. 

Tableau is a self-service visual analytics platform that helps people see and understand their data. Our customers use Tableau Desktop to create charts, graphs, maps and statistical analysis and share their creations and insights with their colleagues. Tableau Online is our cloud analytics solution that allows them to do this easily. We announced the product in July 2013, and it has become the fastest growing product in the history of the company.

Back in 2013, to launch Tableau Online, we started with our own data center in California and later added one in Dublin, Ireland. As more and more users moved to Tableau Online – and with our growth accelerating faster than ever – we quickly realized the challenge ahead of us: how to scale Tableau Online in the data center world. To meet the growth of Tableau Online, this year we decided to move to AWS so we could get out of the business of managing data centers and focus on what we do best: helping users see and understand their data.

Even while planning and evaluating what the effort would be to migrate to AWS, we could clearly see the benefits our customers would have by making Tableau Online available on AWS.

  • Focus: With AWS managing the undifferentiated, heavy lifting around infrastructure, we could focus our development resources and cycles on delivering more goodies to our customers faster.
  • Speed: Amazon Redshift is one of the most popular databases with Tableau Online, and having Tableau Online on AWS could improve visualization loading times and data refresh speed, especially for AWS data sources in the same region.
  • Global: We could deploy Tableau Online in other AWS regions faster and cheaper than we could with traditional physical data centers. This would allow us to expand Tableau Online globally much faster.
  • Availability: Redundant Availability Zones in each AWS Region could help us run in a high availability configuration and eliminate potential single-point-of-failure scenarios that we currently need to mitigate.

In early summer we started working on a proof of concept to deploy Tableau Online in AWS, and we launched in October, a much shorter turnaround compared to the launch of our own data center in Europe.

Building the deployment

Our primary objective for the proof of concept was to get a Tableau Online deployment in the Northern Virginia (us-east-1) Region and validate our assumption that being closer to users would improve performance. For our PoC, we also had these additional goals:

  • Leverage AWS services wherever possible
  • Leverage existing automation and operational tools
  • Design for high availability and disaster recovery
  • Integrate with our existing shared services (e.g. Identity management)
  • Get the environment ready within a month of the decision to launch

We decided to take a staged approach by breaking down the deployment into its major operational readiness stages. Figure 1 shows the major components of our architecture after many stages of evolution in making Tableau Online available on AWS.

Figure 1 – Tableau Online on AWS Architecture

Stage 1: Setting up the AWS environment

The first step for us was to create the network infrastructure connecting our existing shared services to our new AWS environment. We were quickly able to create this link via IPsec tunnels through AWS VPN gateways. Then we created production and QA VPCs in a subset of AWS regions and connected them accordingly. We also divided each VPC into logical subnets, and chose a large CIDR range for our external subnet to minimize the need for customers to adjust their IP whitelisting in the future.
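Tableau hasn’t published its exact configuration, but the building blocks described here are standard Amazon EC2 networking APIs. Here is a hedged boto3 sketch of this kind of setup; all CIDR ranges and names are illustrative:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Production VPC with a large CIDR range, so customer IP whitelists
# rarely need to change as the deployment grows
vpc_id = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']['VpcId']

# Logical subnets, e.g. an external subnet and an internal one
ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.0.0/20')    # external
ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.16.0/20')   # internal

# VPN gateway that terminates the IPsec tunnels back to shared services
vgw_id = ec2.create_vpn_gateway(Type='ipsec.1')['VpnGateway']['VpnGatewayId']
ec2.attach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw_id)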

Stage 2: Deploying Tableau Online on AWS

We decided to deploy the backgrounder processes first, since:

  • Much of the work in Tableau Online is done by various Tableau Server background processes
  • These can be deployed outside of the main application stack
  • We could test them with our current production/QA data center deployment

Based on our experience deploying the Tableau Server backgrounder process, we created AMIs for the various Tableau Server processes and leveraged AWS services. Once we broke Tableau Server into several components, we could quickly and easily deploy instances of these in each VPC, utilizing multiple Availability Zones for redundancy.

Once we were satisfied with our deployment of Tableau Server on AWS, we ran our existing full functional test pass to certify that we were ready to onboard customers. At this point we were ready to do our performance testing.

Stage 3: Performance Testing

Given that one of our major goals was deploying as close to the user as possible, we were eager to see how this affected performance, and the improvements for a user in the same AWS region as Tableau Online are impressive. The chart below highlights the dramatic performance improvement for Tableau Online in the Northern Virginia region (us-east-1) versus Tableau Online in our current west coast data center, when connecting to an Amazon Redshift cluster in the Northern Virginia region (us-east-1).

The chart shows two metrics:

  1. The time it takes to publish content to Online
  2. The time it takes for a user to see their dashboard rendered in their browser.

Stage 4: Building Disaster Recovery

An operational readiness requirement for us is to have a proven disaster recovery story. We accomplished this by repeatedly building out and tearing down a full deployment: exercising our failover handling and data recovery procedures for starting new backgrounder instances, validating resilience to Postgres failover with Amazon RDS, and validating Windows DFSR replication, failover, and consistency. This was fully automated, resulting in a significant reduction of existing operational work.

Launch!

When we started on this journey, we identified the key objectives outlined at the beginning of this post, and we were able to meet them. This October we launched a production instance of Tableau Online on AWS and started accepting new customer signups.

In addition to the performance benefits customers will see, we also identified others:

  • Instead of running and managing our own Postgres, load balancing, and caching layers, all of which required hand-crafted replication and deployment, we were able to leverage AWS services such as Amazon RDS for PostgreSQL, Elastic Load Balancing, and Amazon ElastiCache. With Amazon RDS, customers get redundancy (including replication and failover) for high availability of the Postgres service, on demand.
  • Instead of the CDN used for our current data center deployments, we adopted Amazon CloudFront, which was simple to use and gave us faster initial page load times, since static content is served more effectively by CloudFront.
  • Cost savings are important, but operational flexibility is a game changer, and AWS allowed us to deliver on this front by providing various ways to use these managed services through SDKs, APIs, and deployment tools like AWS CloudFormation, with which we can onboard customers faster than ever.
  • Underpinning Tableau Online is our Tableau Server technology, and our work in bringing Tableau Online to AWS has made it easier for our customers to deploy Tableau Server on AWS. For customers looking to move Tableau Server to AWS, we have made available CloudFormation scripts for automated deployments and best practices around Amazon EC2 sizing.

Next Up

Tableau recently concluded its annual user conference where we announced our new AWS us-east-1 deployment. Many of our existing Tableau Online customers have already reached out to us so they can take advantage of the new pod. Tableau is a Diamond level Sponsor at the AWS re:Invent 2016 conference, and we look forward to seeing you there.

Want to learn more about Tableau’s journey on AWS? Check out our AWS Partner video, featuring Ashley Kramer, Director of Product Management at Tableau!


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Learn about the VMware Cloud on AWS Partner Program, Coming in 2017

by Kate Miller | in APN Consulting Partners, APN Launches, re:Invent 2016

Earlier this year, VMware and Amazon Web Services (AWS) announced a strategic alliance to build and deliver a seamlessly integrated hybrid offering that will give customers the full software-defined data center (SDDC) experience from the leader in the private cloud, running on the world’s most popular, trusted, and robust public cloud. VMware Cloud™ on AWS will enable customers to run applications across VMware vSphere®-based private, public, and hybrid cloud environments.

Delivered, sold, and supported by VMware as an on-demand, elastically scalable service, VMware Cloud on AWS allows VMware customers to use their existing VMware software and tools to leverage AWS’s global footprint and breadth of services, including storage, databases, analytics, and more. For more information on VMware Cloud on AWS, visit VMware Cloud on AWS.

Today, VMware and AWS are announcing that we are working on a joint initiative, the VMware Cloud on AWS Partner Program, that will be launched in 2017. The program will provide support for APN Partners that help customers deploy and operate VMware workloads on AWS.

If you’re interested in this program and want to stay informed as more information becomes available, please submit your interest at https://aws.amazon.com/partners/vmware.

Connect with Customers through the AWS Partner Solutions Finder

by Kate Miller | in APN Competency Partner, APN Consulting Partners, APN Launches, APN Program News, APN Technology Partners, re:Invent 2016

Our top priority is to ensure that we are helping you connect with customers whose business needs you can meet on AWS. Customers have told us that they often look for APN Partners who are focused on delivering services and solutions for very specific use cases within their industry, and often seek APN Partners with a presence and focus in particular regions.

Today, I’m excited to announce the launch of the AWS Partner Solutions Finder (PSF), a new web-based tool meant to help customers easily filter, find, and connect with APN Partners to meet specific business needs.

What is the AWS Partner Solutions Finder?

Built based on customer and partner feedback, the PSF is a whole new way to connect customers and partners. Say you’re a Consulting Partner focused on the Financial Services industry, and you hold the AWS Financial Services Competency. As customers come to the PSF and filter by ‘Financial Services’, your firm may appear higher in the search results, along with other AWS Competency holders. Customers can continue to filter by use case, location, and products, to find exactly what they need.

With the AWS Partner Solutions Finder, customers can also:

  • Easily identify authorized AWS Resellers and validated AWS Managed Service Providers
  • Quickly find APN Partners who hold an AWS Competency and/or AWS Service Delivery Program distinction
  • Learn about different APN Partners at a glance, with data that is verified by AWS
  • Seamlessly get in touch with an APN Partner

Are Customers Seeing Your Updated Information?

If you are an APN Partner at the Standard tier or higher, it is important that your Partner Detail page is up-to-date in the Partner Solutions Finder. AWS Customers will benefit from learning more about your company and AWS offerings. The Alliance Lead of your APN account can update your information in the APN Portal by following these steps:

  1. Alliance lead must log in to the APN Portal
  2. Click “Manage Directory Listing” located on the left navigation pane
  3. Click “Edit” to modify content

To visit the AWS Partner Solutions Finder, click here.

To hear from AWS leadership about the PSF, watch our video below, featuring Terry Wise, Worldwide VP of Channels & Alliances, AWS, and Mike Clayville, Global VP of Commercial Sales & Business Development, AWS:

Introducing the AWS Financial Services Competency

by Kate Miller | in APN Competency Partner, APN Consulting Partners, APN Launches, APN Program News, APN Technology Partners, AWS Competencies, Financial Services, re:Invent 2016

According to a series of new IDC Financial Insights Worldwide IT Spending Guides, financial services IT spending will reach almost $480 billion worldwide in 2016, with a five-year compound annual growth rate (CAGR) of 4.2%. And more and more Financial Services firms are looking to meet and exceed their business needs by embracing a cloud-based strategy. “This year, we are excited to introduce a different view of U.S. financial services spending on 3rd Platform technologies – cloud, mobile and big data/analytics,” said Karen Massey, Senior Research Analyst, IDC Financial Insights. “These foundational components of digital transformation (DX) have significantly impacted IT budgets in 2016. Forward thinking financial institutions understand the value of leveraging these technologies to better meet the demands of performance, efficiency, compliance, and competition.”[1]

Through the AWS Competency Program, customers are able to easily find and connect with APN Partners who can help them take advantage of AWS. Today, I’m excited to tell you about one of two new AWS Competencies we launched at the AWS Global Partner Summit at re:Invent, the AWS Financial Services Competency.

Announcing the AWS Financial Services Competency

The AWS Financial Services Competency is part of the AWS Competency Program, joining the many other vertical and business application Competencies available to our customers today. Its purpose is to help customers identify and connect with industry-leading Consulting and Technology APN Partners with solutions for banking and payments, capital markets, and insurance.

To become an AWS Financial Services Competency Partner, APN Partners must demonstrate expertise on AWS within the Financial Services industry and meet a number of requirements, such as providing use case-specific public customer references and successfully completing an audit of their Financial Services solution or practice. Our top priority is to ensure we’ve set a high bar for membership in the AWS Financial Services Competency, so that we can enable our Financial Services customers to easily find APN Partners with proven experience to help them meet their specialized use cases on AWS. To learn more about the requirements to apply, click here.

We have deep confidence in the abilities of our Financial Services Competency Partners to help Financial Services customers meet their needs on AWS. “Accenture is proud to be one of the first APN Partners to achieve AWS Financial Services Competency status,” says Cathinka Wahlstrom, Senior Managing Director – Financial Services, North America. “Our financial services customers need to stay agile, at the same time ensuring they are operating in a secure and compliant environment. Accenture is dedicated to helping customers in the financial services industry achieve their business goals by leveraging the security and scalability of the AWS Cloud.”

As an AWS Financial Services Competency Partner, you become eligible for a number of business, technical, and marketing benefits, including:

  • The AWS Financial Services Competency logo to use in your marketing materials
  • Public designation as an AWS Financial Services Competency Partner throughout the AWS website
  • Designation as an AWS Financial Services Competency Partner in the AWS Partner Solutions Finder
  • Preferred access to Market Development Funds
  • Selective eligibility for inclusion in future AWS announcements regarding AWS Competencies and Financial Services on AWS
  • Selective eligibility for inclusion in APN partner spotlights (such as quotes, testimonials, videos, etc.)
  • Selective eligibility for AWS Competency subject matter webinars, roadshows, and events

Financial Services Categories and Launch Partners

Congratulations to our launch AWS Financial Services Competency Partners in the following categories:

Technology Partners:

Risk Management – Solutions that help financial institutions identify, model, and assess risk; ensure monitoring of and compliance with industry regulations; or assist in surveillance or fraud monitoring. Examples include companies in the market risk, credit risk, regulatory risk, compliance risk, fraud & cybersecurity, and operational risk spaces.

  • FICO
  • FIS-Prophet
  • NICE Systems

Core Systems – Transaction processing systems for banking/mortgage/payments, capital markets/brokerage/asset management, or the property & casualty insurance industry. Examples include core banking systems, core trading systems, core insurance systems, trade processing systems, and reconciliation systems.

  • Avoka
  • Calypso
  • Corezoid
  • EIS Group
  • Guidewire
  • Mambu
  • Moven

Data Management – Platforms providing market and reference data, or data processing, pricing, or financial analytics solutions.

  • IHS Markit

Consulting Partners:

  • 2nd Watch
  • Accenture
  • Capgemini
  • Cloud Technology Partners
  • Cloudreach
  • Cognizant
  • Infosys
  • REAN Cloud
  • Sopra Steria
  • Wipro

Learn More

Hear from two of our launch AWS Financial Services Competency Partners, EIS and IHS Markit, as they discuss the trends they’ve observed as their customers migrate to AWS, and the value of the AWS Financial Services Competency:

EIS:

IHS Markit:

Want to learn more about the different Financial Services Partner Solutions? Click here.


[1] Financial Services IT Spending to Reach $480 Billion Worldwide in 2016, According to IDC Financial Insights, 27 April 2016, http://www.idc.com/getdoc.jsp?containerId=prUS41216616

Introducing the AWS IoT Competency

by Kate Miller | in APN Consulting Partners, APN Launches, APN Technology Partners, AWS Competencies, IoT, re:Invent 2016

Are you an APN Partner who has built an innovative Internet of Things (IoT) solution that helps customers take advantage of AWS for IoT? Does your firm focus on providing services to Enterprise customers as they look to build IoT applications on AWS?

The opportunity for APN Partners in the IoT space is enormous. IDC predicts that the worldwide Internet of Things (IoT) market will grow from $692.6 billion in 2015 to $1.46 trillion in 2020 with a compound annual growth rate (CAGR) of 16.1%. The installed base of IoT endpoints will grow from 12.1 billion in 2015 to more than 30 billion in 2020.[1] We see more and more customers around the globe looking to build innovative IoT applications on AWS. And our goal is to help you connect with customers whose business goals you can help meet.

The AWS Competency Program is all about helping customers find the right APN Partners to engage to meet their specific business needs. Today, I’m excited to tell you about one of two new AWS Competencies we launched at the AWS Global Partner Summit at re:Invent, the AWS IoT Competency.

Announcing the AWS IoT Competency

The AWS IoT Competency showcases industry-leading AWS Consulting and Technology Partners that provide proven technology and/or implementation capabilities for a variety of use cases including (though not limited to) intelligent factories, smart cities, energy, automotive, transportation, and healthcare.

To become an AWS IoT Competency Partner, you must meet a number of requirements, such as providing use case-specific public customer references, and you must successfully complete a third-party audit of your IoT solution or practice. We’ve set a high bar to become an IoT Competency Partner, as we want to ensure that we are helping customers easily identify and connect with AWS IoT Partners who have proven the maturity of their IoT solution/practice on AWS. To learn more about the requirements to apply, click here.

As an AWS IoT Competency Partner, you become eligible for a number of business, technical, and marketing benefits, including:

  • The AWS IoT Competency logo to use in your marketing materials
  • Public designation as an AWS IoT Competency Partner throughout the AWS website
  • Designation as an AWS IoT Competency Partner in the AWS Partner Solutions Finder
  • Preferred access to Market Development Funds
  • Selective eligibility for inclusion in future AWS announcements regarding AWS Competencies and IoT on AWS
  • Selective eligibility for inclusion in APN partner spotlights (such as quotes, testimonials, videos, etc.)
  • Selective eligibility for AWS Competency subject matter webinars, roadshows, and events

IoT Categories and Launch Partners

Congratulations to our launch AWS IoT Competency Partners in the following categories:

Technology Partners

Edge: Partners who provide hardware and software ingredients used to build IoT devices, or finished products used in IoT solutions or applications. Examples include sensors, microprocessors and microcontrollers, operating systems, secure communication modules, and evaluation and demo kits.

  • Intel
  • Microchip Technology

Gateway: Partners who provide data aggregation hardware and/or software that connects edge devices to the cloud, provides on-premises intelligence, and connects to enterprise information technology ("IT") systems. Examples include hardware gateways, software components to translate protocols, and platforms running on premises to support local decision making.

  • MachineShop

Platform Providers: ISVs who’ve developed a cloud-based platform to acquire, analyze, and act on IoT data. Examples include device management systems, visualization tools, predictive maintenance applications, data analytics, and machine learning software.

  • Bsquare Corporation
  • C3 IoT
  • Splunk
  • PTC
  • Thinglogix

Connectivity: Partners who provide systems to manage wide-area connectivity for edge and gateway devices. Examples include device and subscription management platforms, billing and rating systems, device provisioning systems, and Mobile Network Operators (MNOs) and Mobile Virtual Network Operators (MVNOs).

  • Amdocs, Inc.
  • Asavie
  • Eseye
  • SORACOM

Congratulations to our launch AWS IoT Consulting Competency Partners!

  • Accenture
  • Aricent
  • Cloud Technology Partners
  • Luxoft
  • Mobiquity, Inc.
  • Solstice
  • Sturdy
  • Trek10

Learn More

Hear from two of our launch AWS IoT Competency Partners, C3 IoT and MachineShop, as they discuss why they work with AWS, and the value of the AWS IoT Competency for customers:

C3 IoT:

MachineShop:

Want to learn more about the different IoT Partner Solutions? Click here.


[1] “Worldwide Internet of Things Forecast Update, 2016-2020”, IDC Market Forecast, May 2016. https://www.idc.com/getdoc.jsp?containerId=US40755516

Build Your Public Sector Practice on AWS – Introducing the AWS Public Sector Program

by Kate Miller | in APN Launches, APN Program News, APN Technology Partners, Public Sector, re:Invent 2016

We have a robust Public Sector customer base on AWS. And there’s an enormous opportunity for AWS Partner Network (APN) firms to build a thriving practice to help Public Sector customers take advantage of AWS. We’d like to help by providing you with the guidance and resources you need, specific to your areas of focus, at every step along your journey on AWS.

To that end, we are excited to announce the launch of the AWS Public Sector Program, which recognizes APN Partners with solutions and experience delivering government, education, and nonprofit customer missions globally.

What is the AWS Public Sector Program?

The AWS Public Sector Program helps you build and accelerate your AWS Public Sector business through alignment with our public sector sales, marketing, partner, and bid teams. Our goal is to help enable you to build a successful business focused on serving public sector customers. We know that you have specific business needs depending on the specific public sector use cases that are your main areas of focus, and we’ve structured the AWS Public Sector Program to be able to provide specific resources, benefits, and guidance in the areas of Government, Education, and Nonprofit.

Every APN Partner’s journey is unique, and there are additional programs within the broader AWS Public Sector Program that we recommend you take advantage of along the way to differentiate your business and areas of expertise to customers. These include the Authorized Government Reseller program, which enables qualified AWS Channel Reseller partners to resell AWS services to public sector end customers; the AWS GovCloud (US) Skill Program, which provides customers with the ability to readily identify APN Partners with experience supporting regulated workloads in the AWS GovCloud (US) Region; and the AWS Government Competency Program, which helps customers connect to vetted APN Partners who provide solutions to—and/or have deep experience working with—government customers to deliver mission-critical workloads and applications on AWS.

In addition to being able to differentiate your firm’s area of expertise through specific programs, there are a number of other benefits you receive as an AWS Public Sector Partner, including the ability to:

  • Identify your public sector solution, and receive an ‘Authorized Public Sector Partner’ APN logo
  • Feature your company in the APN Partner Solutions Finder as a Government, Education, and/or Nonprofit solution provider to stand out to prospective customers
  • Access self-service public sector marketing campaigns to co-brand and promote your AWS solution through APN Marketing Central
  • Develop a business plan to expand your public sector customer base and become eligible for additional APN Partner funding benefits
  • Build your partnership with our sales, solution architect, marketing, bid and proposal teams to expand your AWS public sector expertise

What value do our launch Public Sector Partners find in being a part of the program? Let’s hear from a few of our launch Partners, SmartSimple (an Advanced APN Technology Partner), Smartronix (a Premier APN Consulting Partner), and Blackboard (an APN Technology Partner), and get their perspectives:

“Working with the AWS team has been a truly collaborative experience. They’ve opened doors for us, shared unique growth opportunities and industry-leading tools,” says Mike Reid, Co-founder and Chief Operating Officer of SmartSimple. “Having access to the services of the world’s most trusted cloud service provider has enabled us to offer best-in-class security and technology applications, giving our clients peace of mind. AWS has been a great partner, and they’ve been integral to helping us build our brand as we continue to grow. SmartSimple became an AWS Advanced Technology Partner more than 2 years ago. Since then we’ve been awarded statuses as an AWS SaaS Partner, Government Competency Partner, and are now honored to be recognized as a Public Sector Partner.”

 

“Smartronix is privileged to be part of the AWS Public Sector Partner program. Since 2009, Smartronix has delivered mission critical solutions on AWS for our government customers. This designation reflects our strong commitment to supporting the transformation efforts of our Government clients as they achieve real value in cloud solutions,” says Robert Groat, EVP, Technology and Strategy, Smartronix. “AWS has provided foundational technologies that enable our customers to save money, deliver innovative citizen centric services, and achieve the highest levels of security for critical government workloads.”

 

“By joining the Amazon Web Services (AWS) Partner Network (APN), Blackboard is building on an existing strong relationship with AWS that delivers world-class hosting solutions for our clients,” says Katie Blot, Chief Strategy Officer for Blackboard. “In collaboration with AWS, Blackboard is committed to providing educational institutions with tools and resources needed for a complete cloud services and cloud management portfolio—giving them fast, flexible and secure access to our solutions.”

How Can I Join?

Are you ready to get started with the AWS Public Sector Program?

Our goal is to ensure that we’re helping AWS customers connect with Public Sector Partners with proven experience on AWS. What follows are the requirements to join the program and become an official AWS Public Sector Partner.

You must be at the Standard tier or above, have a dedicated public sector practice web page on your website, provide public sector practice customer references, and demonstrate a clear AWS go-to-market plan with partner manager approval.

Read more about the requirements here.

What’s the Impact for Customers?

Customers are now able to easily identify AWS Public Sector Partners by specifically searching APN Partners in the categories of Government, Education, or Nonprofit in the new AWS Partner Solutions Finder. Customers are able to differentiate these partners by their listing in the AWS Public Sector Partner Program, and through their use of the ‘Authorized Public Sector Partner’ logo.

If you’d like to read more about the AWS Public Sector Program launch, visit the AWS Governments, Education, and Nonprofits Blog for a unique perspective about the value of the Program for customers.