AWS Partner Network (APN) Blog

Don’t Miss Our Newest AWS Partner Success Videos

by Kate Miller | in APN Competency Partner, APN Consulting Partners, APN Partner Highlight, APN Partner Success Stories, APN Technology Partners, AWS Competencies

Curious to learn how some of our top AWS Partner Network (APN) Consulting and Technology Partners have grown their businesses on AWS?

We’ve recently published seven new AWS Partner Success videos. The APN Partners featured in our newest videos collectively hold a number of AWS Competencies, including the Big Data, DevOps, Government, Healthcare, Life Sciences, Marketing & Commerce, Microsoft SharePoint, Microsoft Exchange, Migration, Mobile, and Security Competencies. Take a look!

Okta
Advanced APN Technology Partner; AWS Security Competency Partner

BlazeMeter
APN Technology Partner; AWS DevOps Competency Partner

Cloud Technology Partners
AWS Premier Consulting Partner; AWS DevOps Competency Partner; AWS Migration Competency Partner

Cloudreach
AWS Premier Consulting Partner; AWS Managed Service Provider; AWS DevOps Competency Partner; AWS Migration Competency Partner

REAN Cloud
AWS Premier Consulting Partner; AWS Managed Service Provider; AWS DevOps Competency Partner; AWS Migration Competency Partner; AWS Life Sciences Competency Partner; AWS Government Competency Partner

Slalom
AWS Premier Consulting Partner; AWS DevOps Competency Partner; AWS Big Data Competency Partner; AWS Migration Competency Partner; AWS Microsoft SharePoint and Microsoft Exchange Competency Partner; AWS Mobile Competency Partner

Logicworks
AWS Premier Consulting Partner; AWS Managed Service Provider; AWS DevOps Competency Partner; AWS Migration Competency Partner; AWS Healthcare Competency Partner; AWS Marketing & Commerce Competency Partner

Check out all of our Partner Success videos here!

AWS Networking for Developers

by Nick Matthews | in AWS Partner Solutions Architect (SA) Guest Post, Networking

This post is co-authored with Mark Stephens, Partner Solutions Architect, AWS

If you’re a developer who is new to AWS, there’s a good chance you haven’t needed to set up or configure many networks. As a developer who needs to work with infrastructure, you might run into concepts that you aren’t familiar with and that can take some time to understand. The AWS console gives you full control to set up your environment securely, which means that understanding some key concepts is important for setting up that infrastructure in the best way possible for your use case. In this post, we’ll introduce you to key network infrastructure concepts to help get you started down the right path.

The following diagram illustrates the AWS networking concepts we’ll discuss in this blog post:

Figure 1: AWS networking environment

AWS networking concepts

The first AWS networking concept you should be familiar with is Amazon Virtual Private Cloud (Amazon VPC). Amazon VPC allows you to provision logically isolated sections of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You can think of a VPC as the high-level container for your infrastructure.
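
If you’d like to experiment with these concepts as you read, here is a minimal sketch using boto3, the AWS SDK for Python. The region and CIDR block are arbitrary choices for illustration, not recommendations:

    # Sketch: create a VPC, the high-level container for your infrastructure.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")  # an RFC 1918 range
    vpc_id = vpc["Vpc"]["VpcId"]
    print("Created VPC:", vpc_id)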

Subnets and routing

Within a VPC, there is another construct called a subnet: a logical grouping of IP addresses that is a subsection of a larger network. You can create private and public subnets. The key difference between the two is that a public subnet has a route to an Internet gateway and a private subnet does not. For example, you don’t want someone to gain direct access to your database server, so you’d put it in a private subnet. Your web server does need Internet connectivity, so you’d place it in a public subnet. Each subnet lives in a single Availability Zone, so if you want your infrastructure to be highly available, you will want to have subnets in multiple Availability Zones. The subnet size determines how many hosts can live in that subnet.
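
To make this concrete, here is a hedged boto3 sketch that creates two subnets in different Availability Zones. The VPC ID is a hypothetical placeholder, and whether each subnet ends up public or private is decided later by its route table:

    # Sketch: carve two subnets out of a VPC, one per Availability Zone.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    vpc_id = "vpc-1234567890abcdef0"  # hypothetical ID for illustration

    web_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.0.0/24", AvailabilityZone="us-east-1a"
    )
    db_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1b"
    )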

Routes and route tables determine which subnets and applications can reach the Internet or connect back to your on-premises systems. We recommend that you layer your architecture and isolate your database tier in a private subnet. You can use private subnets and network address translation (NAT) instances to ensure that your instances have Internet access but are not vulnerable to outside attacks. This is similar to what network teams do on premises with demilitarized zones (DMZs) and firewalls. Using different route tables to create private and public subnets helps ensure that your API hosts can communicate with your database but not with any bad actors.

Each subnet is assigned a route table, which defines whether that subnet can reach things like VPNs or the Internet. The route table is a list of IP destinations that specify where traffic should go. You specify the IP destinations by using Classless Inter-Domain Routing (CIDR) notation; for example, a route for 0.0.0.0/0 matches all traffic and often points to an Internet gateway. We’ll discuss CIDR ranges in more detail later in this post. Let’s say you want to create a VPN connection and send traffic back to your on-premises data center. You would create a virtual private gateway and attach it to your VPC. You would then create a VPN connection and a customer gateway. After you download the configuration for your customer gateway from the list of common options and apply it to your device, you would have a connection to AWS. The next step would be to choose which subnets should be allowed to use that VPN connection. In the route table for these subnets, you would create a route for your on-premises IP range (for example, 10.0.0.0/8). What’s important is that you can create different levels of access based on subnets. Individual subnets can have Internet access only, access to both the Internet and the VPN, or even access to other network resources like VPC peering connections or Amazon S3 endpoints.
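
As an illustration of how a subnet becomes “public,” here is a boto3 sketch that creates an Internet gateway, adds a 0.0.0.0/0 route, and associates the route table with a subnet. The IDs are hypothetical placeholders:

    # Sketch: a route table with a default route to an Internet gateway.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    vpc_id = "vpc-1234567890abcdef0"        # hypothetical IDs
    subnet_id = "subnet-1234567890abcdef0"

    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(
        RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id
    )

    # The association is what makes subnet_id a public subnet.
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)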

Subnets are important because they define what type of network access an instance has and which Availability Zone the instance lives in. Network access control lists (ACLs) are another concept specific to subnets. Network ACLs are a broad-stroke security feature that you can use in addition to security groups (virtual firewalls that control traffic) to define which IP addresses and ports are allowed in and out of a subnet. Many customers put instances with the same security requirements and level of trust in the same subnet, since it’s easier to segment applications when each lives in its own subnet.

If you want to separate your development, staging, and production environments, you can use either subnets or VPCs to segment them. VPCs provide a higher level of segmentation, but you may have to re-create infrastructure like bastion hosts, security groups, and VPNs in each one. Subnets offer sufficient segmentation as long as security groups are properly managed.

Often, choosing subnet sizes and VPC sizes can be a challenge for developers who aren’t used to working with IP addresses and subnets. Choosing CIDR ranges and subnet sizes is an arcane but necessary art. Let’s start with a few basics.

IP addressing 101

A CIDR range is a way of notating which IP address space is being used. An IP address is 32 bits long, split into four 8-bit octets like x.x.x.x. Subnets differentiate between the network part of the address and the host part of the address. Devices that route packets in the network are only aware of where sets of addresses (the network) are. In subnet notation, we use a 1 to define network and a 0 to define host. A common subnet is a /24, which means that the network will keep track of the first 24 bits (or 3 octets). This can also be notated as 255.255.255.0, or you can think of it as network.network.network.host. For example, 10.1.2.3/24 means 10.1.2 is the network, and the host was assigned the .3 address.  Since 2 to the 8th power is 256, there are 256 addresses for the hosts. The network uses a few addresses—on premises, it’s typically 2 (first and last), and on AWS it’s 5 (first 4, last). Subnet calculators are really useful if doing lots of binary math isn’t your thing, or until you get the hang of it.
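
If the binary math feels abstract, Python’s standard ipaddress module can check it for you. This snippet reproduces the /24 arithmetic above:

    # Verify the /24 math from the paragraph above.
    import ipaddress

    net = ipaddress.ip_network("10.1.2.0/24")
    print(net.netmask)            # 255.255.255.0
    print(net.num_addresses)      # 256, i.e., 2 to the 8th power
    print(net.num_addresses - 5)  # 251 usable on AWS (AWS reserves 5)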

There are also public and private addresses. An Internet standard (RFC 1918) defined three private IP address ranges that would never be used on the Internet and are reserved for private use: 10.0.0.0/8, 192.168.0.0/16, and 172.16.0.0/12. Many organizations choose to use the 10.0.0.0/8 space (addresses starting with 10) since it’s the largest. Since we’ll need to divide the 10.x.x.x space up further, we’ll need to split it into subnetworks. This is where we need to think strategically: we have to figure out how many different networks we’ll need, and how many hosts will be in each network.

Designing subnet space

The first goal of IP addressing is to prevent any overlapping address space. If every new VPC uses the same 172.16.0.0/16 or 10.0.0.0/16 space, life becomes very difficult when those resources need to talk to one another, share any resources, or connect to a shared service. The second goal is to not run out of addresses for any given application in a subnet. You will want to use RFC 1918 space (10.0.0.0/8, 192.168.0.0/16, or 172.16.0.0/12) in your VPC, and you don’t want to overlap with your on-premises networks.

If you have lots of unused address space, the primary goal of not overlapping address ranges will be simple to achieve. Otherwise, the art is finding a balance between subnet and network sizes so applications don’t run out of IP address space, and the organization doesn’t run out of network address space to use. If you’ve either been allocated a small address space or know that address space is limited, you’ll need to do some binary math, which we’ll illustrate here.

Figure 2: A subnetting example. In this example we end up with 32 subnets, each with a /24 (255.255.255.0) netmask. This makes the entire VPC CIDR a /19 netmask (255.255.224.0).
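
You can reproduce the Figure 2 math with the same ipaddress module: dividing a /19 into /24 subnets yields 2^(24-19) = 32 subnets of 256 addresses each.

    # Verify the Figure 2 example: a /19 VPC split into /24 subnets.
    import ipaddress

    vpc_cidr = ipaddress.ip_network("10.0.0.0/19")
    subnets = list(vpc_cidr.subnets(new_prefix=24))
    print(vpc_cidr.netmask)   # 255.255.224.0
    print(len(subnets))       # 32
    print(subnets[0])         # 10.0.0.0/24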

Securing your instances

Security groups are virtual firewalls at the instance level. You can use security groups to set rules on inbound and outbound connections. Security groups track the state of connections and remember where a connection came from. For example, if you allow all outbound traffic but deny inbound traffic, the only packets allowed into your instance are inbound replies to those outbound connections. An example security group for Secure Shell (SSH) might allow TCP port 22 and a source IP range of 167.165.32.0/24. Security group changes take effect immediately. By default, security groups allow all outbound traffic and deny all inbound traffic.
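
The SSH example above looks like this as a boto3 sketch (the security group ID is a hypothetical placeholder):

    # Sketch: allow inbound SSH (TCP 22) from one /24 source range only.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.authorize_security_group_ingress(
        GroupId="sg-1234567890abcdef0",  # hypothetical ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "167.165.32.0/24"}],
        }],
    )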

You can use network ACLs in addition to security groups. Network ACLs allow you to define a security policy for an entire subnet. They use an ordered list of rules with IP address and protocol information, such as TCP or UDP port numbers, to allow or deny traffic. Network ACLs are stateless, which means they don’t remember what traffic has come in or gone out. Each network ACL has separate rules for inbound and outbound traffic, and by default, network ACLs allow all traffic. For example, if you wanted to block all traffic to or from 192.168.1.0/24, you would put a DENY statement higher in the list than the ALLOW statement in both the inbound and outbound directions.
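
Here is a hedged sketch of that DENY-before-ALLOW example. The network ACL ID is a hypothetical placeholder, and an ALLOW rule is assumed to exist at a higher rule number, such as 100:

    # Sketch: block 192.168.1.0/24 in both directions with rule number 50.
    # Network ACLs are stateless, so the deny is added for inbound
    # (Egress=False) and outbound (Egress=True) traffic separately.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    for egress in (False, True):
        ec2.create_network_acl_entry(
            NetworkAclId="acl-1234567890abcdef0",  # hypothetical ID
            RuleNumber=50,       # evaluated before an ALLOW at, say, 100
            Protocol="-1",       # all protocols
            RuleAction="deny",
            Egress=egress,
            CidrBlock="192.168.1.0/24",
        )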

CIDR ranges are something you also use with security groups and network ACLs. For example, on a security group, you can limit the range of IPs that can access an Amazon EC2 host.  A CIDR range of 192.168.70.0/24 will allow or deny IPs in the range 192.168.70.0 – 192.168.70.255. CIDR ranges are a powerful way to control access to your company’s public IP range.

Let’s talk about private subnets. An instance in a private subnet can generally reach the Internet, but the Internet can’t directly reach the instance. When you have a host in a private subnet that needs access to the Internet, you use something called a NAT gateway. Since the private subnet has no route to an Internet gateway, you need another way to get out. The sole responsibility of the NAT gateway is to translate packets from a private subnet into public traffic and then back again to the private address. It’s a security best practice to put instances that only connect to other internal instances, like databases, in a private subnet.
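
Here is a sketch of that pattern with boto3. The IDs are hypothetical; note that the NAT gateway itself lives in a public subnet, and in practice you would wait for it to become available before adding the route:

    # Sketch: give a private subnet outbound Internet access via a NAT gateway.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    eip = ec2.allocate_address(Domain="vpc")  # NAT gateways need an Elastic IP
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-1234567890abcdef0",  # a PUBLIC subnet, hypothetical ID
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]

    # Default route for the PRIVATE subnet's route table (hypothetical ID).
    ec2.create_route(
        RouteTableId="rtb-1234567890abcdef0",
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )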

Automation

What’s great is that we can automate everything we’ve discussed in this blog post, because AWS builds automation into its services. Many people call this “infrastructure as code.” By using AWS services such as AWS CloudFormation and AWS OpsWorks, or solutions like Chef or Puppet, you can fully automate the build of your stack and infrastructure. This includes your VPC, subnets, security groups, Amazon EC2 servers, and databases. Using automation to deploy infrastructure reduces errors, saves time, and makes it easier to track changes, just like code commits. How many times have you set up a server only to forget how you configured that tedious bit months later? With automation, you have code that describes your setup.
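
As a taste of infrastructure as code, here is a minimal sketch that expresses a VPC and subnet as an AWS CloudFormation template and launches it with boto3. The stack name and CIDR blocks are illustrative:

    # Sketch: launch a small network stack from an inline template.
    import json
    import boto3

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppVpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": "10.0.0.0/16"},
            },
            "PublicSubnet": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {
                    "VpcId": {"Ref": "AppVpc"},
                    "CidrBlock": "10.0.0.0/24",
                },
            },
        },
    }

    cfn = boto3.client("cloudformation", region_name="us-east-1")
    cfn.create_stack(StackName="dev-network", TemplateBody=json.dumps(template))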

Since your setup is now code, you can version your CloudFormation, Chef, or Puppet scripts just like your application code. Want to test out a new setup? Just modify your infrastructure scripts and spin up a new version of your infrastructure. Something in the new setup not working right? Just revert to the previous version. It’s all in version control alongside your application code.

You can also use your code control system to keep track of who made changes and why. Curious why that security group rule was added? Now you can check the commit history and comments for that change. AWS also offers the AWS CloudTrail service for monitoring your deployments, and the AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

The AWS Quick Start team recently published an AWS CloudFormation template that automates building a VPC infrastructure, based on some of the concepts we discussed in this blog post. This configurable infrastructure includes multiple Availability Zones, public and private subnets, and managed NAT gateways (or NAT instances, when the deployment region doesn’t support NAT gateways). You can use the Quick Start as a baseline for your own AWS environment, scale it up or down by adding or removing subnets and Availability Zones, and create additional private subnets with network ACLs. For more information, see Building a Modular and Scalable Virtual Network Architecture with Amazon VPC.

Automation and the larger philosophy behind it is called DevOps, and you can read more about it on our DevOps page.

Monitoring

Now that you have set up your infrastructure, you’ll want to monitor its performance and watch for security issues such as vulnerabilities and misconfigurations. There are many AWS Partners that can help with these challenges. For example, Alert Logic, an AWS Security Competency Partner, has a product called Cloud Insight that will monitor your infrastructure and deliver continuous vulnerability and configuration management insight. Cloud Insight is available in AWS Marketplace. Dome9 is another APN Partner and AWS Security Competency Partner that will monitor and manage your security groups, and their products are also available in AWS Marketplace.

These products can help you shift from being reactive to proactive about the security and operation of your infrastructure.

We hope that this blog post provided some insights into the network that sits below your applications. If you have any questions or would like to see additional topics or details on networking in the cloud, we’d like to hear from you.


Want more information about networking on AWS? Read Part 1 & Part 2 of an in-depth discussion of Amazon VPC for on-premises network engineers. 

Delivering Critical Insights Faster with AWS and Self-Service Analytics – A Webinar with AWS Big Data Competency Partner Alteryx

by Kate Miller | in APN Technology Partners, AWS Competencies, Big Data, Big Data Competency, Third Party Webinars

Are you ready to put the power of self-service data analytics into the hands of your business analysts? Don’t miss an upcoming webinar from AWS Big Data Competency Partner Alteryx and AWS, featuring joint customer Deep Root Analytics, to learn how you can empower your analysts to leverage all of your data to uncover business insights more quickly. Deep Root Analytics, a political media analytics company, manages and analyzes an ever-growing list of data sources – including Amazon Redshift – to project political voter turnout and predict vote choice. To do this, Deep Root’s analysts must first access and acquire the data, and then build complex data blending and analysis workflows to turn a variety of unlinked data sources into a single, actionable database of information. Only then can they decide which voters to speak with, with what message, and through what media.

Register for the upcoming webinar, “Data Analysis to Predict Voter Turnout and Outcome”, and learn how to:

  • Quickly blend and analyze data from all sources
  • Apply predictive and geo-spatial analytics to Big Data
  • Enable data analysts with the cloud computing power of AWS
  • Empower the organization at large with analytic visualizations from Tableau (another AWS Big Data Competency Partner)

This live webinar will be held on August 16, 2016 at 9:00am PDT. It will feature Danielle Mendheim, Database Analyst with Deep Root Analytics, and a demonstration of how AWS and Alteryx can deliver very fast self-service analytics when used in conjunction.

Click here to register >>


(Note: The register link will take you to a third-party site, off the APN Blog. If you register for the webinar, you are registering with a third party, not AWS.)

How AWS Partner Medidata Solutions Approaches GxP Compliance in the Cloud

by Christopher Crosbie | in APN Consulting Partners, APN Technology Partners, AWS Competencies, AWS Partner Solutions Architect (SA) Guest Post, Healthcare, Life Sciences, Partner Guest Post

Special thanks to Jordin Green, Global Head of Industry Marketing & Global Healthcare and Global Life Sciences Marketing Lead at AWS, and Gretchen Murphy, PR Manager at Medidata Solutions.

AWS Life Sciences Competency Partner Medidata Solutions is a leading SaaS company solely focused on helping life sciences companies conduct faster, safer, less expensive, and more insightful clinical research. Medidata’s cloud technology platform uses AWS to power clinical trials for more than 600 life sciences customers. The company has been very successful at educating its customers on the benefits of using AWS commercial off-the-shelf (COTS) services as an underlying platform within their GxP SaaS solutions. Given the highly regulated nature of clinical trials and the confidential patient data that is collected, it is essential for Medidata Solutions and its customers to be GxP compliant.

GxP requirements apply to organizations that make regulated food and medical products such as pharmaceuticals, medical devices, and mobile medical applications.  GxP is a general term for Good (Something) Practice.  The “x” simply represents a variable. For example, GCP is an acronym for Good Clinical Practice, a set of standards for clinical trials.

Medidata Solutions – and its customers – consider AWS services to be COTS, and therefore the GxP controls are relatively detached from the actual AWS service components. For Medidata Solutions, the majority of its procedures for application design, development, testing, and so on stayed consistent when it adopted AWS services.

Medidata Solutions’ clients assess cloud-based software that operates with AWS services in much the same way as on-premises software built on top of COTS operating systems, databases, and storage hardware.

The company has found that the majority of GxP clients tend to focus their validation of necessary controls at the software and logical layers of the stack (i.e., the customer responsibility portion of the AWS Shared Responsibility Model). However, for those determined sponsors that did dive deep into how AWS services were being leveraged, it became clear that the resilience provided by AWS Availability Zones (AZs) and AWS services such as Amazon Simple Storage Service (Amazon S3) demonstrated a clear benefit for the availability and durability of Medidata’s clinical cloud platform. Customer discussions about validating AWS services often shifted, for example, from backup to availability. This change in viewpoint is a major step forward for GxP systems, since it establishes that GxP systems built with AWS services can be architected for high availability and continue operating even if a single system component or AWS service is temporarily unavailable.

Now that members of the APN, such as Medidata Solutions, have shown customers these enhanced architectures, sponsors of GxP systems have started to demand them from all their suppliers. Since customer obsession and customer trust are core leadership principles of the Amazonian team culture, we wanted to provide more guidance for sponsors that need to make quality assessments of systems they choose to build on AWS. To meet this customer need, AWS created three key enablers for building and moving GxP systems onto AWS:

  1. AWS’s ISO 9001 certification directly supports customers who develop, migrate, and operate their quality-controlled IT systems on the AWS Cloud. Under a Non-Disclosure Agreement (NDA), customers and partners can leverage these compliance reports as evidence to validate their own ISO 9001 programs.
  2. The AWS Quality Manual, available upon request, is for AWS customers who have a Non-Disclosure Agreement and are in the process of performing a supplier assessment of AWS’s quality and security management controls.
  3. In cooperation with Lachman Consultants, a multidisciplinary team of highly experienced FDA and pharmaceutical industry experts, we’ve developed and published a new whitepaper, “Considerations for Using AWS Products in GxP Systems.”

Tony Hewer, Senior Director of Quality & Regulatory Affairs at Medidata Solutions, tells us, “These items have positioned Medidata Solutions extremely well in the eyes of our customers. The materials are very insightful and enable us to put a very large check in the box that confirms we’ve performed the quality activity.”

To learn more about how your company can work with AWS to build or consult on GxP compliant workloads, be sure to check out the AWS Compliance page and watch a previously recorded webinar on Next Generation GxP Systems in the Cloud.

For those that want to dive into the details of starting a GxP partner practice on AWS, make sure to watch the APN Webcast series on the APN Portal.

  • Overview – An overview of GxP compliance on AWS
  • Quality Systems – Management responsibility, personnel, audits, purchasing, and recordkeeping.
  • System Development Life Cycle – Development, validation, and operations.
  • Regulatory Affairs – Regulatory submissions, health authority inspections, and personal data privacy controls for research participants.

Upcoming AWS Partner Webinars – August

by Kate Miller | in AWS Marketing, Big Data, Security

We have four upcoming AWS Partner Webinars featuring a select number of AWS Partners.

These webinars feature technical sessions led by AWS solutions architects and engineers, live demonstrations, customer examples, and Q&A with AWS experts.

Upcoming Webinars

Intro to CloudPassage’s Automatic Security Scaling

August 24, 2016 | 1:00 – 2:00 PM PDT

CloudPassage® Halo® provides virtually instant visibility and continuous protection for services in any combination of on-premises deployments and cloud infrastructures, like AWS. Organizations that are transitioning to the AWS Cloud are finding that automating and scaling legacy security services for comprehensive workload security requires a versatile and elastic solution. Join AWS, Xero (a New Zealand-based accounting software company), and CloudPassage to learn about best practices for migrating your security workloads to the cloud.

Register here >>


Fortinet Automates Migration onto Layered Secure Workloads

August 18, 2016 | 10:00 – 11:00 AM PDT

Fortinet provides a comprehensive security solution for your hybrid workloads, allowing you to effectively secure your workloads with simplified, automated migration. A primary consideration for many of today’s organizations is how to securely migrate their data and workloads to the cloud. Multi-layered protection should be in place at all points along the path of data: entering, exiting, and within the cloud. Join Fortinet and AWS to learn how you can enable robust and effective security for your AWS Cloud-based applications and workloads.

Register here >>


Qubole: Unlocking Self-Service Big Data Analytics on AWS 

August 17, 2016 | 10:00 – 11:00 AM PDT

Data-driven organizations want to be able to store, access, and analyze large amounts of data from diverse sources and make it easily accessible to deliver actionable insights for their users. A solution for customers is to optimize scaling and create a unified interface to simplify analysis. Qubole helps customers simplify their big data analytics with speed and scalability using Qubole Data Services (QDS) on the AWS Cloud. Join Qubole and AWS to discuss how Auto Scaling and Amazon EC2 Spot pricing can enable customers to efficiently turn data into insights. We’ll also discuss best practices for migrating from an on-premises big data architecture to the AWS Cloud.

Register here >>


Ask.com: Running Elastic Data Warehouse with Snowflake

August 16, 2016 | 10:00 – 11:00 AM PDT

Data-driven organizations can be challenged to meet new and growing business intelligence requirements when they’re constrained by a lack of scalability and performance in existing data warehouse platforms. A solution for customers is a data warehouse that scales for real-time demands and uses resources in a more optimized and cost-effective manner. Join Snowflake, AWS, and Ask.com to learn how Ask.com enhanced BI service levels and decreased expenses while meeting demand to collect, store, and analyze over a terabyte of data per day on AWS. Snowflake Computing delivers a fast and flexible elastic data warehouse solution, built on top of the elasticity, flexibility, and resiliency of AWS, that reduces complexity and overhead.

Register here >>


Did you know that you can watch past webinars on demand? Visit the AWS Partner Webinar Series page to check them out.

DevOps and Cloud Migrations – A Perspective from AWS DevOps and Migration Competency Partner Pythian

by Kate Miller | in APN Consulting Partners, APN Partner Highlight, APN Technology Partners, DevOps on AWS, Migration, Partner Guest Post

The following is a guest post from our friends at AWS DevOps & AWS Migration Competency Partner Pythian. 

It’s no secret that the commoditization of computing resources, and cloud computing in particular, has removed a significant barrier to entry for organizations seeking platform scalability, elasticity, and rapid maturation. Cloud computing has created an opening for organizations of all sizes to compete on level ground, while aligning their costs of operations with demand and effectively supporting a more sustainable financial model. The broad service offering and the reliable DevOps tools offered by AWS, such as AWS CloudFormation, AWS OpsWorks, AWS CodeDeploy, AWS CodePipeline, AWS CodeCommit, AWS Elastic Beanstalk, and AWS Config, support automated, DevOps-style migrations of platforms of all sizes and complexities.

Migrating a platform to the cloud can be tackled in numerous ways. Whether the migration is approached via a refactoring strategy or a traditional re-host strategy, the primary driver is to enable speed, agility, and the freedom to experiment in a cost-efficient manner for your organization. Regardless of which route is chosen, all of them require strategic planning and precise execution. If not planned well or done incorrectly, a migration could result in downtime, or could prevent your organization from taking full advantage of what the cloud has to offer.

Migrations require careful, granular planning, and a deep, cross-discipline understanding of platform intricacies. They also require the involvement and cooperation of a wide range of domain experts — from platform architects to operations experts, to developer resources — and a high degree of orchestration and focus on maintaining team coherence.

To resolve these challenges, bridge the various disciplines, and realign previously separate interests, advanced technology teams have been evolving their DevOps practices.

AWS provides deep guidance on what DevOps is, why it matters, and how to adopt a DevOps model. AWS defines DevOps as the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market. Adopting a DevOps model can have a fundamental impact on the success of your migration to the cloud.

DevOps supports an experiment-driven, fast-learning, flexible, and high-velocity engineering culture. It is designed to automate the repeatable parts and sequences of an application’s, and its supporting infrastructure’s, lifecycle (like build, test, deploy, scale, failover, recover, etc.) to make changes to the resulting software and production services as close to zero-cost as possible. The core DevOps principles of shared responsibility, agility, transparency, and lowered risks through heavy reliance on automation have proven to be crucial for handling complex projects requiring constant and precise risk management, such as enterprise platform migrations.

Early adoption of and commitment to DevOps practices enables low-risk execution of AWS migrations. DevOps also brings about a number of short- and long-term business benefits, such as repeatability, auditability, significantly lower deployment risks, and faster iteration cycles – all crucial to employee retention, improved operational excellence, a better security posture, and improved competitive advantage.

DevOps teams that have migrated their infrastructure and workloads to the cloud using automation have enabled their organizations to run more experiments at significantly lower cost than previously possible, and to iterate at a more rapid pace than ever before.

Today, the most advanced teams in the world release each developer’s changes automatically to huge environments many times per day. Deploying more often and in smaller increments allows for significant risk reduction at the time of deployment. These smaller change deltas, coupled with fast feedback loops enabled by rigorous automated testing of each change increment, reduce the time it takes to find and correct bugs and further reduces deployment risks.

The Road to Maturity: Elements of a Mature DevOps Practice

We’ve learned a lot helping numerous customers migrate to AWS and build mature DevOps practices within their organization, and we want to share some best practices with you. Below is a brief overview of the environment we suggest you create, and the technologies your organization should embrace, to ensure ongoing improvements to your software and services to achieve your business goals.

CULTURAL SHIFT

  • Create a culture of experimentation: Every product organization must make ongoing tradeoffs between velocity, performance, reliability, cost efficiency, and security, so it’s imperative to have blameless, respectful, and truthful discussions about decisions and outcomes. Adopting DevOps requires a deep commitment to continuous improvement throughout the organization. Continuous experimentation with a deep commitment to constant learning leads to continuous improvement, also known as “Kaizen.” The successful organization will learn to seek answers and limit the risk of bad assumptions. AWS Head of Enterprise Strategy Stephen Orban has also released a great series of blog posts on creating a culture of experimentation, which you can read here.
  • Get executive attention: The modern executive team should concern itself with improving its engineering leverage, and must be willing to seek help through the inherent disruption.
  • Be customer focused: Commercial innovation is judged by the market. Empathy for the customer is a major differentiator.
  • Embrace tool-driven collaboration: Mature DevOps teams are characterized by deep, efficient, tool-driven collaboration between skilled, cross-functional individuals working as part of a team, or teams, who execute a product development and operations lifecycle.
  • Adopt data-driven decision making: Success comes from making good decisions over time. Business strategy and financial planning cycles must adapt to the reality of continually evolving markets by constantly seeking alignment with product and technology objectives — and vice versa. This requires an even closer, data-driven decision cycle at all levels.
  • Get Lean and Agile: Adopting Lean principles and Agile methodologies is crucial to delivering DevOps in a highly productive, scalable, and efficient manner.

TECHNICAL PRACTICES

  • Take an infrastructure-as-code approach: All development efforts should be cloud native, incorporate an infrastructure-as-code approach using tools like AWS CloudFormation, and avoid dependence on manual operations by leveraging appropriate public, private and/or hybrid cloud infrastructure. This approach allows you to automate manual and repetitive tasks and eliminate human errors as you configure, maintain, scale, and recover from faults, and ultimately fully automate the end-to-end infrastructure lifecycle.
  • Automate application deployments: Application deployments should be guided by the principle of delivering value to the end user as quickly as possible. To attain high velocity, deployments must be automated, undergo rigorous automated testing, and upon successfully passing the tests, deployed to production. The entire software deployment pipeline should be optimized for defect detection, speed of delivery, and rapid feedback, with Continuous Integration (CI) and Continuous Delivery (CD) as the goal.
  • Optimize orchestration: Design, coordinate, and optimize the full infrastructure and application lifecycle processes by creating fast feedback loops, eliminating repetitive steps to optimize the speed and quality of new application capabilities.
  • Integrate operational visibility early: Integrate operational visibility services as part of the development process, not as an afterthought.  Select and configure tools to monitor KPIs to maintain performance, detect issues, and provide alerts when those issues arise. Use pre-built operational visibility platforms like Pythian’s OpsViz stack, Amazon CloudWatch, Datadog, Splunk, Loggly, or Sumo Logic for general data aggregation, dashboards, triggering of actionable alerting, as well as reporting. Drive development efficiency and service quality using APM tools, like New Relic and AppDynamics.
  • Create a CI/CD environment: With Continuous Integration (CI), you have an automated software integration and unit test environment that enables continuous validation of development changes.  Continuous Deployment (CD) gives you an automated production test, release, and rollback environment that enables frequent, high-quality production releases. When combined with a culture of experimentation, continuous delivery equals competitive advantage.

Pythian’s DevOps team, through an unshakeable dedication to end-to-end automation, continuous testing, and the application of modern software development practices to cloud migrations, has been able to consistently deliver predictable results to our customers throughout the lifetime of migration projects. Learn more about what we’ve done for customers on AWS by reading about our customer TinyCo.

How to List Your Product in AWS Marketplace

by Suney Sharma | in APN Technology Partners, AWS Marketplace, AWS Partner Solutions Architect (SA) Guest Post

The following is a guest post from Suney Sharma, a Partner Solutions Architect at AWS. 

Are you planning to launch an AWS Marketplace offering for your product?

If you are an independent software vendor (ISV), the AWS Marketplace offers you an opportunity to capitalize on the widespread migration to cloud computing, simplify and shorten your sales cycle, and grow your business. Many APN Partners sell their solutions on AWS Marketplace, as it is a great channel with which to reach customers who are running their workloads on AWS. In this post, we walk through an overview of AWS Marketplace, followed by some key considerations and best practices as you get ready to list your product in AWS Marketplace. We focus on bundling your product as an AMI (Amazon Machine Image). We also offer tips for applications that have a complex multi-node architecture.

What is AWS Marketplace?

AWS Marketplace is an online store where customers can find, buy, and deploy software that runs on AWS. As with AWS services, customers pay only for what they use. AWS Marketplace helps enable you, as an ISV, to reduce the cost of customer acquisition and deliver your solution on a global scale. To date, AWS Marketplace has over 2,700 listed offerings from more than 925 software vendors. AWS customers use 205 million hours a month of Amazon Elastic Compute Cloud (Amazon EC2) resources for AWS Marketplace products. These numbers are a strong statement of the value of AWS Marketplace for both customers and partners.

AWS customers subscribe to products in AWS Marketplace based on their business needs. They can then launch the software in their AWS accounts. AWS meters and charges customers based on usage, and the usage charges are reflected in the monthly AWS bill. This method of software delivery enables customers to link software costs to actual usage instead of being bound by a fixed cost license.

AWS Marketplace also takes into account that different products might have different metrics for metering usage. For instance, some products may need to be billed by the number of provisioned users while others might be billed by amount of data transfer. To support additional billing options, AWS Marketplace now supports multiple dimensions for usage metering: Users, Hosts, Bandwidth, and Data.  For example, by using the AWS Marketplace Metering Service, a seller can charge a customer every hour based on the number of provisioned users or the amount of data transferred. Find out more about the AWS Marketplace Metering Service.
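
To make the metering model concrete, here is a hedged boto3 sketch of a seller reporting usage for a “Users” dimension. The product code and dimension name come from your own Marketplace listing; the values here are placeholders:

    # Sketch: report one hour of usage to the AWS Marketplace Metering Service.
    import datetime
    import boto3

    metering = boto3.client("meteringmarketplace", region_name="us-east-1")
    metering.meter_usage(
        ProductCode="example1product2code3",   # placeholder product code
        Timestamp=datetime.datetime.utcnow(),
        UsageDimension="Users",                # a dimension from your listing
        UsageQuantity=42,
        DryRun=True,  # validate the call without actually reporting usage
    )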

How Products are Listed in AWS Marketplace

Sellers can bundle their product in an AMI, which is listed in AWS Marketplace. Customers use these AMIs to launch Amazon EC2 instances in their AWS account. AWS charges customers based on their usage of the product. See the AWS Marketplace Seller Guide for more details.

Along with the AMI, the seller provides product metadata in a form called the Product Load Form. The form captures a lot of information about the product, including:

  • Types of instances that the customer can select.
  • Product charges applicable to each instance (in addition to Amazon EC2 charges).
  • AWS regions in which the product will be available.
  • Additional information such as product description, links to support information, and the seller’s end user license agreement (EULA).

The AWS Marketplace team uses this information to provide a product description on the AWS Marketplace website. Customers who wish to purchase the product do so by subscribing to the AMI and accepting the seller’s EULA.

The customer is then taken to the launch page within the AWS Marketplace website. The AMI is used to launch Amazon EC2 instances in the customer’s AWS account, and the software’s necessary API calls are granted on a least-privilege basis. This means that the software has limited privileges in the customer’s AWS account and to the resources provisioned within it.

Support for Clusters and AWS Resources

In the flow explained above, the customer runs the product on one or more Amazon EC2 instances. However, some products might have more complex needs; for example, they might need to:

  • Run on a cluster of instances or meet high availability (HA) requirements.
  • Access AWS resources like Elastic Load Balancing or Auto Scaling groups.
  • Use specific network and security configurations.

An example could be a product with a three-tier architecture (web, app, and database), which is deployed on different Amazon EC2 instances. Each tier may need to interact with different AWS services, which will need to be provisioned along with the instances. For example, Auto Scaling groups and Elastic Load Balancers may be required for the web and app layers, and the database layer may need access to an Amazon S3 bucket for backups.

For such products, AWS Marketplace provides support for clusters and AWS resources. When a customer subscribes to such a product in AWS Marketplace, all resources, including Amazon EC2 instances, required AWS services, and dependencies get installed and configured automatically.

Under the hood, this is a simple execution of an AWS CloudFormation template provided by the seller. When the customer chooses Subscribe on the AWS Marketplace page, the AWS CloudFormation console window is displayed and the seller-provided template is executed. It is important to remember that the template is executed in the customer’s AWS account and relies on AWS Identity and Access Management (IAM) roles and policies to provide access to AWS resources in the account. This is a powerful capability because customers can securely deploy sophisticated software in their accounts without having to understand the underlying setup complexity.

For these types of software, the seller provides the AWS Marketplace team with an AMI and the AWS CloudFormation templates required to set up and install the product. AWS CloudFormation manages the installation and setup.

Best Practices for Creating AMIs

AWS provides detailed documentation around creating generic AMIs in the Amazon EC2 user guides for Linux and Windows.

The best practices mentioned in these documents apply to AWS Marketplace AMIs. There are some additional considerations that we’d like to highlight.

Customer experience considerations:

  • Your AMI dictates the customer experience; therefore, you should take appropriate care that it conforms to your customer experience requirements.
  • Launching an AMI performs an unassisted installation of your software. Since a good customer experience is of paramount importance, we recommend that you have clear documentation around setup, usage, and best practices to assist the customer.

Security considerations:

  • User accounts and passwords: For the security of the customer, you should not hard-code cleartext passwords in the AMI. As a best practice, you can either:
    • Generate a password at boot time or use passwords that are specific to the customer environment, such as instance IDs (see the Tips section), or
    • Force the customer to change the password at first login.
  • If any new vulnerability affecting your AMI is discovered, you will need to modify your AMI to ensure that the customer is not affected.
  • In some cases you may have no control over the network and security group configurations for the Amazon EC2 instances the AMI is deployed on. In such cases it is a good idea to disable or uninstall unnecessary services and packages.

Development and Test considerations:

  • Make sure you have a well-defined, repeatable strategy to build, update, and test your AMIs. (Note: No test or sandbox capabilities exist for AWS Marketplace.)
  • Your AMI should provide the customer with an image that is secure and performant per your requirements, which means you may need to include these aspects in your testing.

For additional details, see the Security Guidance for AMI Developers.

Best Practices for Using AWS CloudFormation for AWS Marketplace

As mentioned earlier, in cases where a product needs to be deployed on multiple nodes and also interact with other AWS services like Elastic Load Balancing, Amazon S3, etc., the seller can provide AWS CloudFormation templates along with the AMI.

Some points to consider while planning this approach:

  • Documentation: Clear documentation with snapshots of the setup screens will be of great value in unassisted installations. This could be in the form of setup guides, user guides, etc.
  • Customer input: A multi-node deployment will need to capture some information from the customer such as security group configuration, network details, instance types, instance count, etc. The AWS CloudFormation templates provide parameters to capture customer input, and description fields that provide information about the required input. For example, you could use parameters to define the source IPs in a security group configuration and provide a description to assist the customer.
  • Region of launch: Since the customer will have the choice of launching the application in different AWS Regions, you can use the Mappings section in the AWS CloudFormation template to point to the relevant AMI in the region.
  • Outputs: The Outputs section of the template can be used to provide more information about the deployed stack. This can include things like IP addresses of the configured nodes.
  • AWS Identity and Access Management (IAM) roles: In most cases, the Amazon EC2 instances will need to interact with other AWS services like Amazon S3, Amazon SNS, etc. Use of access/secret keys in the AMI is forbidden; as a security best practice, you should use IAM roles instead. These roles should have the least privileges required for interacting with each service. The good news is that IAM roles can be created in the Resources section of the AWS CloudFormation template. If you choose to create a role using this method, it’s a good idea to include details in the documentation.
  • Successful completion: Your documentation should clearly articulate how the customer will know that the installation is complete. This could be something like a login page displayed on one of the IPs provided in the Outputs section. The documentation should also provide information on what the user should do if the installation fails. Remember that the customer is charged for the AWS resources even if your product is not correctly installed.
  • If you create or edit security group rules, please ensure that SSH/RDP (ports 22/3389) is not open to 0.0.0.0/0 by default.
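
A skeletal template tying several of the points above together might look like the following sketch, expressed as a Python dict for readability. The AMI IDs, default CIDR block, and resource names are placeholders, and the security group form shown assumes a default VPC:

    # Sketch: a seller template with Parameters (customer input), Mappings
    # (per-region AMIs), and Outputs.
    import json

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Parameters": {
            "AllowedCidr": {
                "Type": "String",
                "Default": "203.0.113.0/24",
                "Description": "CIDR block allowed to access the application",
            }
        },
        "Mappings": {
            "RegionMap": {
                "us-east-1": {"AMI": "ami-11111111"},  # placeholder AMI IDs
                "eu-west-1": {"AMI": "ami-22222222"},
            }
        },
        "Resources": {
            "AppSecurityGroup": {
                "Type": "AWS::EC2::SecurityGroup",
                "Properties": {
                    "GroupDescription": "Customer-scoped access",
                    "SecurityGroupIngress": [{
                        "IpProtocol": "tcp",
                        "FromPort": 443,
                        "ToPort": 443,
                        "CidrIp": {"Ref": "AllowedCidr"},  # customer input
                    }],
                },
            },
            "AppInstance": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": {
                        "Fn::FindInMap": ["RegionMap", {"Ref": "AWS::Region"}, "AMI"]
                    },
                    "InstanceType": "m4.large",
                    "SecurityGroups": [{"Ref": "AppSecurityGroup"}],
                },
            },
        },
        "Outputs": {
            "AppAddress": {
                "Description": "Address of the configured node",
                "Value": {"Fn::GetAtt": ["AppInstance", "PublicIp"]},
            }
        },
    }

    print(json.dumps(template, indent=2))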

Additional Tips for Your AWS Marketplace Deployments

  • For unique passwords to be generated at run time, you can use the instance IDs of the created instances; the instance metadata service can be queried for the instance ID (see the sketch after this list). Customers can then be required to change the password when they log in for the first time.
  • AWS CloudFormation does not allow you to create a variable number of instances. If your product requires the customer to select the number of nodes, you could create an Auto Scaling group where the group size is captured as input in the Parameters section.
  • If you need to create new security groups as a part of the deployment, request customer input for the CIDR block/IP address allowed to access the application and use the same for security group configuration.
  • For complex configuration management tasks like editing files, configuring inter-node communication and complex deployment tasks, consider using AWS OpsWorks. AWS CloudFormation supports creation of AWS OpsWorks stacks, layers, and applications in the Resources section. The relevant recipes can be stored in a publicly readable Amazon S3 bucket.
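
Here is a brief sketch of the first tip in the list above: querying the instance metadata service for the instance ID and using it as the initial password. The metadata endpoint is real; the password scheme itself is illustrative, and the customer should still be forced to change it at first login.

    # Sketch: derive a per-instance initial password from the instance ID.
    import urllib.request

    METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"

    with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
        instance_id = resp.read().decode()

    initial_password = instance_id  # e.g., "i-1234567890abcdef0"
    print("Initial password (require a change at first login):", initial_password)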

Conclusion

In this post, we provided a high-level view of how to list products in AWS Marketplace. We also shared some important considerations when creating AMIs and AWS CloudFormation-based offerings. To conclude, remember that customer experience and security should be your top considerations when you evaluate different strategies to list your product in AWS Marketplace.

Don’t Miss Upcoming AWS Partner ConneXions Events Across Australia

by Kate Miller | in AWS Events, AWS Marketing

The following is a guest post from the AWS ANZ Partner Marketing team. 


Whether you’re a new AWS Partner Network (APN) Partner looking for more of an introduction to AWS and the APN, or an established APN Partner who wants to learn more about utilising APN programs and resources, we welcome you to join us at one of our upcoming AWS Partner ConneXions events for an interactive afternoon with the AWS team. We’ll provide an AWS update on the latest technology trends and cloud solutions, plus an overview of the various programs closely tied to APN Partner success.

This will be a great opportunity for you to engage in person with AWS team members to discuss how APN programs, resources, and tools can be leveraged to build a thriving AWS offering and practice. You’ll also have the opportunity to network with other APN Partners to hear their stories and to share best practices.

The events will be held in Sydney, Brisbane, Melbourne, and Perth. Learn more here.

Introducing AWS Business Builder – Build, Market, and Deliver Software Solutions on AWS

by Kate Miller | in APN Technology Partners, AWS Competencies, AWS Marketing

The following is a guest post from AWS Partner Marketing. 

Are you a software company looking to build your business on AWS?

AWS provides a broad portfolio of go-to-market resources to help software companies of all sizes connect with customers and grow their businesses. The new AWS Business Builder program is designed to simplify the process for you and help you understand what go-to-market resources are available, determine which resources best suit your business needs, and provide information on how you can get started.

AWS Business Builder organizes the available AWS go-to-market resources into the categories of build, market, and deliver in order to clearly outline the resources available at each phase of your solution development journey. Below, we walk you through the different categories and how to get started.

Build: Leverage the AWS Partner Network (APN) to build a global software business

Many of the tools and resources offered through AWS Business Builder are benefits offered through the APN. The APN is the global partner program for AWS, focused on helping partners build a successful AWS-based business. Whether you’re in the emerging stages of building a solution on AWS, or are a more established APN Technology Partner seeking to deepen your relationship with AWS, the AWS Partner Network can help enable you to build your business globally. You can leverage tools and resources such as: AWS Quick Start reference deployments, which help you rapidly deploy fully functional software on the AWS Cloud, following AWS best practices; the AWS SaaS Partner Program, which provides you with support as you build, launch, and grow SaaS solutions on AWS; and Innovation Sandbox Usage Credits for qualified APN Partners, to cover the AWS usage costs when you’re developing new software and solution offerings on AWS.

If you’re not already an APN Partner, click here to learn how to join.

Market: Expand reach and generate qualified leads

AWS Business Builder includes a variety of programs and resources to help you expand your reach and generate leads. For example, AWS offers programs for you to generate demand with AWS customers at AWS Events and through digital promotion on the AWS website and in the APN Partner Directory. In addition, if you’re a Standard tier or higher APN Partner, APN Marketing Central provides you with tools and resources to market your solutions on AWS through co-branded, solution-based campaigns and access to marketing agency services. And through the AWS Competency Program, your firm can differentiate itself to customers by showcasing expertise and proven customer success in a particular solution area.

Deliver: Accelerate customer time to value

AWS Business Builder also provides resources that enable you to deliver your solution on AWS. AWS Marketplace is a low-friction sales channel where more than 1 million active AWS customers can discover, subscribe to, deploy, and manage application solutions. If you’re an APN Technology Partner, you can use Free Trials for Proof of Concepts and GTM Programs to promote your software products on AWS Marketplace. In addition, as a qualifying APN Partner, AWS enables you to easily run free-trial campaigns by providing AWS credits to cover the cost of the underlying AWS usage required to run the free trials. And AWS may provide you with Proof of Concept (POC) usage credits and/or professional services funding to facilitate new strategic or enterprise customer projects utilizing your AWS-based solution.

Want to Explore AWS Business Builder?

Learn more about AWS Business Builder here.

Register for the 2016 AWS New York Summit

by Kate Miller | in AWS Events

Last week, we published a recap of the AWS Santa Clara Summit highlighting some of the great sessions and activities that took place during the event.

Were you unable to attend the AWS Santa Clara Summit? Fear not. The AWS New York Summit is on August 10 – 11, and affords you another great opportunity to take advantage of a large number of sessions, activities, and networking opportunities.

Register today and reserve your seat to hear keynote speaker Dr. Werner Vogels, Amazon CTO, highlighting the newest AWS services and customer stories. You can also take advantage of the opportunity to get hands-on training with AWS, attend numerous breakout sessions, and meet fellow AWS Partners and Customers in the Expo Hall.

If you can’t make it to the event, you can sign up to view the live stream of the keynote here.