AWS Partner Network (APN) Blog

Building a Customer-Centric Sales Practice as an APN Partner

by Lauren Small | in APN Launches

Interested in building a customer-centric sales practice on AWS?

Join us on November 15th for a webinar for APN Partners to learn how to build a successful sales-focused relationship with AWS. AWS provides customers with broad and deep cloud computing infrastructure and tools, and APN Consulting and Technology Partners provide customers with value-added services that can help them meet their business needs on AWS.

This webinar, hosted by Mike Clayville, AWS Global VP of Sales & Business Development, and Walter Rogers, Founder & CEO at Cloud Coaching International, will dive into the benefits to APN Partners of building an AWS-based business and how to deliver great customer outcomes.

Join us to learn:

  • How successful APN Partners are providing solutions and value-added services on AWS to meet customers’ business needs
  • How APN Partners can effectively engage with the AWS sales team
  • The importance of taking a customer-centric approach, and how to incorporate customer obsession into your own sales process

Register now and join us on November 15th at 9:15am PST. We hope to see you there!

How We Built a SaaS Solution on AWS, by CrowdTangle

by Kate Miller | in Amazon DynamoDB, Amazon RDS, Amazon Redshift, APN Partner Highlight, APN Technology Partners, AWS Elastic Beanstalk, Database, SaaS on AWS, Startups

The following is a guest post from Matt Garmur, CTO at CrowdTangle, a startup and APN Technology Partner who makes it easy for you to keep track of what’s happening on social media. Enjoy!


Horses were awesome.

If you had a messenger service 150 years ago, using horses was so much better than the alternative, walking. Sure, you had to hire people to take care of horses, feed them, and clean up after them, but the speed gains you got were easily worth the cost. And over time, your skills at building a business let you create systems that could handle each of these contingencies extremely efficiently.

And then cars came around, and you were out of luck.

Not immediately, of course. The first car on the street didn’t put you out of business. Even as cars got more mainstream, you still had the benefit of experience over startup car services. But once the first company grew up that was built with the assumption that cars existed, despite all your knowledge, you were in big trouble.

At CrowdTangle, we build some of the best tools in the world for helping people keep track of what’s happening on social media. We have a team of engineers and account folks helping top media companies, major league sports teams, and others find what they care about in real time (and we’re hiring!). Importantly, we started our company in 2011, which meant that AWS had been around for 5 years, and we could, and did, confidently build our entire business around the assumption that it would exist.

AWS was our car.

It may seem like an exaggeration, but it’s not. We were able to build an entirely different type of organization on AWS than we could have built five years prior. Specifically, it has impacted us in four critical ways: business model, hiring, projections, and speed, which of course are all different ways of saying, “cost,” and thus, “survival.”

First is the business model. When we started developing our company, we didn’t consider producing physical media to hold our software, nor did we consider installing it on-premises. By making our model Software as a Service (SaaS), we got a lot of immediate benefits: we were able to let users try our product with no more effort than going to a website; we could push features and fixes dozens of times a day; and we could know that everyone would get the same controlled experience. But by taking on the hosting ourselves, we would have needed a significant capital outlay at the start simply to deliver our product. Having AWS to build on, without those initial costs, made SaaS a viable option for our growing startup.

Next is hiring. AWS has Amazon Relational Database Service (Amazon RDS), a managed database service, which means I don’t need to hire a DBA, since it’s coder-ready (and runs on Intel Xeon E5s, so we’re certainly not sacrificing quality). AWS has Elastic Beanstalk, a service that makes it simple for us to deploy our application on AWS, which means I can set up separate environments for front- and back-end servers and scale them independently at the push of a button. Amazon DynamoDB, the company’s managed NoSQL database service, relieves me of the need to keep four full-time engineers on staff just to keep my database ring up and running. We keep terabytes of real-time data, get single-digit-millisecond response times, and from my perspective, it takes care of itself. My team can stay focused on what matters, driving the growth of our business, because we don’t need to spend a single hire on keeping the lights on.

Third is projections. If you’re in the horse world, your purchasing model for computers is to run as close to capacity as possible until it’s clear you need a capital outlay. Then you research the new machine, contact your supplier, spend a lot of money at once, wait for shipping, install it, and when it goes out of service, try to resell it and recover some of the cost. In the car world, if I think we might need more machinery, even for a short time, I request an instance, have it available immediately, and start paying pennies or dollars by the hour. If I’m done with that instance? Terminate and I stop paying for it. If I need a bigger instance? I simply provision a bigger instance on the spot.

Finally, I want to talk about speed. Because of our choice to build our solution on AWS, we have a lean team that can provision resources faster, and can constantly work on fun projects rather than having to focus on simple maintenance. Not only can we move quickly on the scoped projects, but we can do cheap R&D for the moonshots. Every new project could be a bust or our next million-dollar product, but they start the same — have an idea, clone an existing environment, put your project branch on it, trot it out for clients to play with, and spin it down when done.

We recently decided that an aggregation portion of our system was slower than we liked, and we researched moving it to Amazon Redshift. To do so, we spun up a small Redshift instance (note: no projections), did initial testing, then copied our entire production database into Redshift (note: R&D speed). “Production” testing proved the benefits, so now we have an entire secondary Amazon Kinesis-Redshift managed pipeline for our system (note: no hiring, despite adding systems), and the speed increase has opened the door for new products that weren’t possible for us under the prior method. How much would that experimentation cost in the horse world? What would it have taken to execute? Would any of those projects have been small enough to be worth taking a chance on? We place small bets all the time, and that’s what helps us remain a leader in our field.

Your next competitor will have grown up in the age of cars. How can you compete when you have horses?

To learn more about CrowdTangle, click here.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Building Your Data Lake on AWS

by Kate Miller | in APN Competency Partner, APN Consulting Partners, APN Partner Highlight, AWS Competencies, AWS Webinars, Big Data, Big Data Competency

The following is a guest post from Ken Chestnut, Global Ecosystem Lead, Big Data, AWS

In my opinion, there’s never been a better time to take advantage of data and analytics. With people, businesses, and things moving online and getting instrumented, organizations have an unprecedented opportunity to discover new insights and deliver business results. However, with this opportunity comes complexity, and traditional data management tools and techniques aren’t enough to fully realize the potential of data.

Why? Because data has traditionally been stored and managed in relational databases, organizations have had to predetermine which questions they wanted answered and force data into columns and rows accordingly. And because traditional storage and compute options have historically been expensive, organizations were further constrained by the amount of data they could afford to analyze.

With greater agility, affordability, and the ability to decouple storage and compute, more organizations are turning to the cloud and using Data Lakes as a different approach to managing and analyzing data. By using a Data Lake, organizations no longer need to worry about structuring or transforming data before storing it, and they can rapidly analyze data to quickly discover new business insights.

To discuss the benefits of architecting a Data Lake on AWS, tomorrow (Nov. 3rd) we are hosting a webinar with three of our AWS Big Data Competency Consulting Partners: 47Lining, Cloudwick, and NorthBay.

In this webinar, these partners will share their customer success and best practices for implementing a Data Lake on AWS. You can register here.

47Lining

47Lining was chosen by Howard Hughes Corporation to develop a Managed Enterprise Data Lake based on Amazon S3 that fuses on-premises and third-party data, enabling the company to answer its most interesting business questions. You can learn more about how 47Lining helped Howard Hughes and how they can help your organization rapidly uncover new business insights by downloading the company’s eBook here.

Cloudwick

When a major healthcare company needed an AWS-based Big Data solution that enabled them to ingest data quicker and leverage near real-time analytics, they chose Cloudwick to architect a Data Lake on AWS. To learn more about how Cloudwick helped this organization and can help yours, read the company’s eBook here.

NorthBay

NorthBay helped Eliza Corporation architect a Data Lake on AWS that enabled them to manage an ever-increasing volume and variety of data while maintaining HIPAA compliance. Download the company’s eBook here to learn more about how NorthBay helped Eliza obfuscate protected data and how they can help you solve your most complex big data challenges. You can learn more about how Eliza Corporation moved healthcare data to the cloud here.

Learn about all of our AWS Big Data Competency Partners by visiting us here.

Please contact us at aws-bigdata-partner-segment@amazon.com with any questions, comments, or feedback.

We look forward to seeing you tomorrow.

UPDATE, 11/11 – Watch the webinar on demand.

AWS Marketplace Product Support Connection Helps Software Vendors Provide More Seamless Product Support

by Kate Miller | in APN Technology Partners, AWS Marketplace

The following is a guest post from our AWS Marketplace team.

Timely, high-quality software support is a critical part of business-to-business and business-to-consumer software sales. To help ensure that software vendors on AWS Marketplace have the tools to easily support their end customers, AWS Marketplace today released AWS Marketplace Product Support Connection (PSC), a new program that gives vendors more visibility into the end customer in order to more easily provide product support. Using PSC, customers can choose to share information such as name, organization, and email address with software vendors through the AWS Marketplace website.

Customers can share their contact data directly on the AWS Marketplace website during or after the subscription process, without needing to go to a separate website to register for support. AWS Marketplace then shares the provided data, along with details such as product code and subscription information, with software vendors via an API. The data that customers share through the program lets vendors keep customer contact information up to date in their support systems, enabling vendors to quickly verify and access customers’ identities upon receiving a support request.

If you are an AWS Marketplace software vendor and would like to enroll your products in PSC, you will need to integrate with the API, provide a writeup of the support processes you plan to follow under PSC, and ensure that you meet a few program requirements. To get started, please log in to the AWS Marketplace Management Portal to learn more.

AWS Marketplace is launching this new feature with nine vendors who provided feedback on the program design and integrated early. We’d like to thank Barracuda, Chef, Matillion, Rogue Wave, SoftNAS, Sophos, zData, Zend, and Zoomdata for their commitment to providing high-quality product support.

AWS Marketplace offers more than 2,700 software listings from more than 925 independent software vendors. If you are a U.S.- or E.U.-based software vendor and would like to list your product for sale on AWS Marketplace, please visit our Sell on AWS Marketplace page to get started.

How Does Premier APN Partner BlazeClan Help Customers Innovate on AWS?

by Kate Miller | in APN Competency Partner, APN Consulting Partners, APN Partner Highlight, APN Partner Success Stories, AWS Case Study, AWS Competencies, Big Data Competency, Cloud Managed Services, Premier Partners

BlazeClan is a born-in-the-cloud consulting company that provides customers around the globe with advisory, cloud migration, Big Data and analytics, product development, and cloud managed services. The company is dedicated to helping customers embrace agility and automation to transform their businesses on the cloud. Headquartered in India, BlazeClan also has a strong presence in ASEAN, Europe, and North America.

BlazeClan’s journey on AWS began as a startup in India in 2011. “We became a customer of AWS when we started our organization,” says Varoon Rajani, Co-Founder & CEO, BlazeClan. As a startup, the BlazeClan team found great benefit in not having to invest in infrastructure, and being able to move quickly on AWS. The team became deeply versed in product development on AWS, and identified an opportunity to help other companies with similar use cases. “When we started out, AWS was a real differentiator for us, both because of the cost-effectiveness of the platform and the speed to market building on AWS afforded us. We built a deep understanding of how AWS worked and helped us. And over time, we started helping some of our friends in the startup space migrate to AWS. It was a time when the startup market in India was growing quickly, and whenever we discussed our use case on AWS, we saw a lot of similar need across startups,” explains Rajani. “We thought, ‘if we’re facing certain challenges and are able to address those using AWS, then there are likely tens of thousands of companies around the world who could benefit from AWS, and we could help them.’”

Read More >>

First AWS Certification Study Guide – Now Available!

by AWS Training and Certifications Team | in AWS Training and Certification

As an APN Partner, becoming AWS Certified is a crucial step in your journey on AWS.

If you’re currently studying for the AWS Certified Solutions Architect – Associate exam, then be sure to check out our new, AWS-authored study guide available on Amazon.com in hardcopy or Kindle e-reader formats.

This official study guide, written by AWS subject matter experts, covers exam concepts, and provides key review on exam topics. Individuals studying for the Solutions Architect – Associate exam can leverage the study guide and complementary online study tools to prepare for their exam through expert advice, real-world knowledge, key exam essentials, chapter review questions, and much more.

Attending re:Invent this year? Purchase a study guide today to prepare for your Solutions Architect – Associate exam. Onsite exam seats are still available!

Want to learn more about AWS Training, and recommendations for training courses to complete based on your role? Download the APN Partner Learning Plan.

AWS for Windows – Upcoming Opportunities to Learn and Engage

by Lauren Small | in APN Webcast, AWS for Windows

Hundreds of thousands of companies choose AWS to run their Windows workloads for reasons that include the breadth of services we offer, our highly available architecture, security and compliance, and last but not least, our partner ecosystem.

We are now offering prospective and existing APN Partners several ways to learn more about the opportunity to complement their existing solution areas by building an AWS practice around Windows-based workloads.

These include 1-hour onboarding webinars for new partners, half-day immersion events in cities around the world, and full-day technical bootcamps.

AWS for Windows Onboarding Webinar

In this session you can learn how your company can benefit from building and growing a solution practice around Amazon Web Services (AWS) for Windows to address the business needs of your customers.  This session is focused on AWS Partner Network (APN) Partners new to AWS and will help you to:

  • Understand the opportunity and offerings around AWS for Windows
  • Learn about programs that can help customers plan and execute migration strategies
  • Get an overview of the AWS Partner Network (APN) and how to onboard your company
  • Discover tools and programs for learning, accreditation, and certification on Windows workloads on AWS

We will be offering two sessions live on Wednesday, October 19th.

Click one of the following links to register for the October 19 sessions.

AWS for Windows In-Person Events

Join us in Irvine, CA, and Seattle, WA, to go deeper on the resources, APN Partner opportunities and benefits, and how you can leverage your existing knowledge of Microsoft technologies such as Windows Server, SQL Server, Exchange, SharePoint, and Dynamics to grow your business. We will also explore more deeply how you can build your business and support more customers on the AWS Cloud. These events run from 10:00am – 1:30pm local time.

Take a look at the following list of cities and dates to find a training time that works for you. Then, just click the hyperlink to register. Registration is FREE, but seating capacity is limited.

Coming Soon:

  • 15-Nov – Dallas
  • 16-Nov – Austin
  • 14-Dec – Boston
  • 12-Jan – Chicago
  • 3-Feb – Atlanta
  • 23-Feb – Washington, D.C.

AWS for Windows Technical Bootcamps

AWS is running a series of Level 200-300 technical bootcamps for APN Consulting Partners at the Standard tier or higher. Learn more about these free sessions in this recent blog post.

Join us at the AWS Global Partner Summit at re:Invent!

by Lauren Small | in re:Invent 2016

Are you an APN Partner? With AWS re:Invent 2016 less than two months away, we want to extend to you an invitation to join us at the Global Partner Summit. This complimentary event is hosted on Tuesday, November 29th.

At the Global Partner Summit, you’ll get the opportunity to hear from AWS and APN leadership about the future of the business and what’s next for the APN, learn about the successes of fellow APN Partners, participate in sessions focused on developing new skills, and discover insights into growing your cloud practice on AWS.

There is a wide range of topics relevant for all professionals.

Global Partner Summit tracks include:

  • Business Track
  • Segment Track
  • ISV/Technical Track
  • 300-Level Technical Track
  • 400-Level Technical Track
  • Global Partner Summit Chalk Talks

 Learn more about the Global Partner Summit from Kelly Hartman, Global Head of APN Partner Programs.

We look forward to seeing you there.

Register Now >>

AWS Quick Starts for Atlassian Tools

by Kate Miller | in APN Technical Content Launch, APN Technology Partners, AWS Competencies, DevOps on AWS

This is a guest post from Shiva Narayanaswamy, GSI Partner Solution Architect with AWS. 

We find that enterprise customers using Atlassian tools in their DevOps toolchain are increasingly moving parts (or all!) of their team development environments to AWS to take advantage of a wide range of cloud benefits, including scalability and high availability.

Today, in collaboration with Advanced Technology and DevOps Competency Partner Atlassian, we released two new Quick Start guides to help customers deploy JIRA Software Data Center and Bitbucket Data Center on AWS. The Quick Starts provide customers with a fully automated, best-practices reference architecture to rapidly deploy JIRA Software and Bitbucket on AWS.

Jeff Barr has published this post on the AWS Blog introducing the Quick Start guides. And, we’re just getting started!  Look out for two new Quick Starts around the re:Invent timeframe.

Automating ECR Authentication on Marathon with the Amazon ECR Credential Helper

by Kate Miller | in Amazon ECR, APN Technology Partners, AWS Partner Solutions Architect (SA) Guest Post, Containers

This is a guest post from Erin McGill and Brandon Chavis, Partner Solution Architects with AWS. 

Are you running the Datacenter Operating System (DC/OS) on AWS and want to leverage the Amazon EC2 Container Registry (Amazon ECR) without managing Docker registry credentials or scheduling a periodic job to authenticate with ECR on your DC/OS hosts? In this blog post, we’ll show you how to use Marathon, a native, production-grade container orchestrator for DC/OS, to automate authentication with ECR. This method uses the ECR Credential Helper to pull and run Docker images seamlessly, without scheduled re-authentication tasks or storing Docker credentials on the Marathon agents.

To learn more about DC/OS on AWS, check out our previous blog post.

Authenticating to EC2 Container Registry

To pull an image from an ECR-hosted private repository, you must first obtain a valid login token for Docker to use. Because Docker doesn’t use IAM directly, you first call the aws ecr get-login command from the AWS Command Line Interface (AWS CLI) to request a temporary login token. This command returns a docker login command that you can use to authenticate with ECR:

docker login -u AWS -p temp-password -e none https://aws_account_id.dkr.ecr.region.amazonaws.com
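
If you are performing this flow manually, a common pattern is to evaluate the output of aws ecr get-login directly so that the returned docker login command runs in one step. This is only a sketch; the region value is a placeholder for your own region:

# Request a temporary token and execute the returned docker login command in one step
$(aws ecr get-login --region us-east-1)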

This temporary token lasts for 12 hours. When the token expires, you’ll need to request a new one.  We can streamline this process and remove the need to either manually re-authenticate or write a program to call aws ecr get-login by using the Amazon ECR Docker Credential Helper.

The ECR Credential Helper is a tool that makes it easier to use Amazon ECR, and it is based on Docker credential helpers, a suite of programs that allow you to use external credential stores for your Docker credentials. When you use the ECR Credential Helper, you no longer need to schedule a job to get temporary tokens and store those secrets on the hosts. The helper can get IAM permissions from your AWS credentials, such as an EC2 instance IAM role, so no authentication credentials are stored in the Docker configuration file. The configuration file simply tells Docker to use the credential helper, and the helper obtains an ECR authorization token that Docker uses for each call to ECR.

Creating a Docker container to run the ECR Helper

On AWS, DC/OS runs on CoreOS, a lightweight host system, and uses Docker containers for all applications, so nothing is installed on the host. To adhere to the CoreOS model, we developed a solution to use a Docker container that compiles the ECR credential helper binary and puts the binary file and a compressed TAR credential file on the host. After running the container, the agents will be able to automate authentication with ECR and pull containers from the private repositories.

To use this solution, create an empty directory called aws-ecr-helper. Within that directory, create a folder named .docker. Create a Docker configuration file called config.json and save it in the new, empty .docker folder. The config.json file consists of a single line:

{ "credsStore": "ecr-login" }

Following the documentation on how to use a private Docker registry with Marathon, create a compressed TAR file that includes the .docker folder and its contents:

tar -czf docker.tar.gz .docker

A Dockerfile is a file that contains all the commands to create a Docker image. Using a Dockerfile, you can create an image to:

  1. Get a zipped archive of the ECR Credential Helper repository.
  2. Unzip the repository archive.
  3. Create the ECR Credential Helper binary.
  4. Place the binary on the host system.
  5. Place the docker.tar.gz file on the host system.

The Dockerfile we used looks like this:

# Because the Amazon ECR Helper is a go binary, we are using 
# the official golang docker image as the base image
FROM golang:1.6

RUN apt-get update && apt-get install -y \
    unzip \
    && rm -rf /var/lib/apt/lists/*

# Marathon requires a gzipped credential file - this compressed tarball contains 
# .docker/config.json

# The JSON file contains the following single line: { "credsStore": "ecr-login" }
# There is no secret or confidential account information needed

COPY docker.tar.gz /tmp/

# Creating the necessary github directories and pulling a zip of the master branch 
# from the repository using the wget command to avoid installing the Git client

RUN mkdir -p src/github.com/awslabs/amazon-ecr-credential-helper/ && \
    wget -O src/github.com/awslabs/amazon-ecr-credential-helper/master.zip \
    --no-check-certificate https://github.com/awslabs/amazon-ecr-credential-helper/archive/master.zip

# Setting the new working directory, expanding the ECR Helper code, and cleanup

WORKDIR /go/src/github.com/awslabs/amazon-ecr-credential-helper/
RUN unzip master.zip && \
    mv amazon-ecr-credential-helper-master/* . && \
    rm -rf amazon-ecr-credential-helper-master && \
    rm -f master.zip

# Compile the binary with make - the binary will be created in the
# /go/src/github.com/awslabs/amazon-ecr-credential-helper/bin/local
# directory inside the container
#
# To ensure that the host has the necessary files after the container runs and is removed,
# the user has to mount 2 volumes from the host:
#  - a host directory mapped to the container's /go/src/github.com/awslabs/amazon-ecr-credential-helper/bin/local directory
#  - a host directory mapped to the container's /data directory

CMD /usr/bin/make && cp /tmp/docker.tar.gz /data

Save the Dockerfile in the same directory as the docker.tar.gz file.

The aws-ecr-helper directory now contains:

  • The .docker folder, which contains the config.json file
  • The docker.tar.gz file created from the .docker folder
  • The Dockerfile from the example above

Build the container using the command:

docker build -t marathon-ecr-helper .

Note: If you previously built this Docker image on the same host, run the docker build command with the --no-cache option to ensure that the build pulls the latest master branch of the ECR helper.
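
For example, a rebuild that bypasses the cache looks like this:

docker build --no-cache -t marathon-ecr-helper .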

Running the Amazon ECR Credential Helper

To test that the Docker image compiles the binary successfully, run the following docker run command on your build host:

docker run -it --rm -v /opt/mesosphere/bin:/go/src/github.com/awslabs/amazon-ecr-credential-helper/bin/local/ -v /etc:/data marathon-ecr-helper

This command compiles the ECR Credential Helper and places the resulting binary and the compressed TAR credential file on the host. The -v flag bind-mounts a host directory into the container. In this case, there are two mount points:

-v /opt/mesosphere/bin:/go/src/github.com/awslabs/amazon-ecr-credential-helper/bin/local/

and:

-v /etc:/data

The first mount from the host has to be a directory in the PATH environment variable of the Marathon process owner. In our example, we used /opt/mesosphere/bin. In the DC/OS documentation for using a private Docker registry, the example location for the compressed credential file is /etc, so we used this location as well.

Run the container with the -it --rm flags to view what the container is doing and to remove the container after its job is finished. The example command outputs the following to the screen:

. ./scripts/shared_env && ./scripts/build_binary.sh  ./bin/local
Built ecr-login

You can see what the container is executing, any errors that occurred, and a notification that the build is complete and successful.

The image is now ready to be tagged and pushed to a repository. Tag the image by using the docker tag command:

docker tag marathon-ecr-helper public-repo/marathon-ecr-helper:latest

Then push the image to the registry:

docker push public-repo/marathon-ecr-helper:latest

You should store this Docker image in a public repository so that Marathon doesn’t need to authenticate in order to pull the ECR Credential Helper image. When the image is in the repository, you can create an application within Marathon that pulls the image and runs the container, placing the helper binary and the necessary configuration on the Marathon agent nodes.

If you want to use the ECR Credential Helper on your development machine, ensure that the config.json file is present and that the binary is in a directory that is in your PATH environment variable.
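
As a rough sketch, assuming you already have the compiled helper binary available locally (the upstream project names it docker-credential-ecr-login) and that you want to keep it under ~/bin, the setup on a development machine could look like this; adjust the paths for your own environment:

# Copy the helper binary onto the PATH and tell Docker to use it (paths are examples)
mkdir -p ~/bin ~/.docker
cp ./bin/local/docker-credential-ecr-login ~/bin/
echo '{ "credsStore": "ecr-login" }' > ~/.docker/config.json
export PATH="$PATH:$HOME/bin"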

Preparing the DC/OS stack with CloudFormation

If you are not already running DC/OS or want to launch a new DC/OS test environment, first download the CloudFormation template; we will use it to launch the DC/OS cluster in this example. If you are already running DC/OS launched from a CloudFormation template, you’ll need to update your stack with these changes to use the automated solution presented in this blog post.

To access ECR with DC/OS on AWS, you need to make sure that your Marathon agent nodes can access the ECR service and that the CoreOS version can support Docker credential helpers.

The IAM instance profiles for the EC2 instances need to contain read-only permissions for ECR, so we’ve modified the CFN template by adding these ECR permissions to the EC2 IAM Roles:

"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:BatchGetImage"

To use the compiled ECR Credential Helper, we also need to modify the version of CoreOS in the CloudFormation template. Docker credential helper support was introduced in Docker version 1.11. As of this writing, Docker version 1.11 is available in the Beta CoreOS release. You can choose the tab for the Beta channel on the CoreOS EC2 page to find the AMI ID for the region where you want to launch DC/OS. You will replace the existing AMI IDs with the new Beta Channel AMI ID in RegionToAmi of the Mappings section in the CloudFormation template.

In our example, we select 2 public agents and 2 private agents to run in our DC/OS cluster.
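
If you prefer the AWS CLI to the console for launching the modified template, the call is sketched below. The parameter keys shown here (KeyName, SlaveInstanceCount, PublicSlaveInstanceCount) are assumptions; check the template you downloaded for the exact parameter names it defines:

# Launch the modified DC/OS template; CAPABILITY_IAM is required because the stack creates IAM roles
aws cloudformation create-stack \
    --stack-name dcos-ecr-demo \
    --template-body file://dcos-modified.json \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=KeyName,ParameterValue=my-keypair \
                 ParameterKey=SlaveInstanceCount,ParameterValue=2 \
                 ParameterKey=PublicSlaveInstanceCount,ParameterValue=2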

Creating the ECR Credential Helper application in Marathon

Once the stack has the correct permissions and is running with the correct version of CoreOS, you can log in to the DC/OS stack and create a Marathon application for the ECR Credential Helper containers.

The Marathon application consists of the following code:

  1 {
  2   "id": "/aws-ecr-helper",
  3   "cmd": null,
  4   "cpus": 1,
  5   "mem": 256,
  6   "disk": 0,
  7   "instances": 0,
  8   "acceptedResourceRoles": [
  9         "*",
 10         "slave_public"
 11   ],
 12   "container": {
 13     "type": "DOCKER",
 14     "volumes": [
 15       {
 16         "containerPath": "/data",
 17         "hostPath": "/etc",
 18         "mode": "RW"
 19       },
 20       {
 21         "containerPath": "/go/src/github.com/awslabs/amazon-ecr-credential-helper/bin/local/",
 22         "hostPath": "/opt/mesosphere/bin/",
 23         "mode": "RW"
 24       }
 25     ],
 26     "docker": {
 27       "image": "public-repo/marathon-ecr-helper:latest",
 28       "network": "HOST",
 29       "privileged": false,
 30       "parameters": [],
 31       "forcePullImage": true
 32     }
 33   },
 34   "portDefinitions": [
 35     {
 36       "port": 10000,
 37       "protocol": "tcp",
 38       "labels": {}
 39     }
 40   ]
 41 }

Let’s break down the configuration and identify the important sections of code.

Line 2 identifies the name you give the application in Marathon.

Line 7 tells Marathon to launch 0 Docker instances for this application.

Lines 8-11 set the accepted resource roles to both the asterisk (*) and “slave_public”, so the Docker container for the credential helper will be deployed to Marathon workers that are available both inside and outside the environment. This ensures that by the time you deploy your test web container, the ECR Credential Helper container will already have been deployed to the agent that runs it.

Lines 15-19 and 20-24 show the two mount points we will be using when running this container. The containerPath is the path within the Docker container, and the hostPath is the directory path on the agent node.

The first entry mounts /etc from the host into the container at the /data directory. After the Docker container runs, the docker.tar.gz file is copied to the /data location. Once the container finishes running its command, the TAR file will be in /etc on the host.

The second entry mounts /opt/mesosphere/bin/ from the host into the container at the /go/src/github.com/awslabs/amazon-ecr-credential-helper/bin/local/ location. When the container runs, it compiles the Go code into a binary. When the container has completed its job, the binary will be left on the host at /opt/mesosphere/bin/, where Docker on the agent can use it to authenticate with ECR when pulling images.

Lines 26-32 define the repository and the image to launch as well as any parameters or specifications for the running container.

Now that you’ve created the Marathon application for the ECR Credential Helper, you can scale up from 0 instances (line 7 in the above JSON document) to have Marathon launch the containers. In our example, we launched the DC/OS stack with the private agent node count set to 2 and the public agent node count set to 2, so we should scale the application up to 4: one for each agent node launched. The container spins up, places the compiled binary and compressed TAR file, and then stops. Once the container has been run on all your agents, you can scale the ECR Credential Helper application back down to 0. There is no need to run the application again until you need to replace an agent or scale up your DC/OS cluster.
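
If you manage the cluster with the DC/OS command line instead of the web UI, scaling the helper application up and back down can be sketched like this (the application ID matches the one defined in the JSON above; this assumes the DC/OS CLI with the Marathon subcommand installed):

# Run the helper once on each of the four agents, then scale the application back to zero
dcos marathon app update /aws-ecr-helper instances=4
# ...after the container has run on every agent:
dcos marathon app update /aws-ecr-helper instances=0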

Deploying a ‘Hello World!’ container from ECR

To test that you can pull from a private repository, you can create a simple container based on the official Nginx container. If you do not already have an ECR repository to push to, either create one in the console or use the AWS CLI command aws ecr create-repository. Save the URI for the created repository; you will use it when tagging and pushing the sample container image.

$ aws ecr create-repository --repository-name marathon-nginx-example
{
    "repository": {
        "registryId": "aws_account_id",
        "repositoryName": "marathon-nginx-example",
        "repositoryArn": "arn:aws:ecr:region:aws_account_id:repository/marathon-nginx-example",
        "repositoryUri": "aws_account_id.dkr.ecr.region.amazonaws.com/marathon-nginx-example"
    }
}

Create an index.html page for the new container:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to Nginx - in a Docker container!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Hello World!</h1>
<p>This is an image pulled from ECR</p>

</body>
</html>

The Dockerfile to place the new index.html page inside the container:

# This is for an nginx test
FROM nginx:latest

RUN apt-get -y update
COPY index.html /usr/share/nginx/html/index.html

EXPOSE 80

To build the Docker image, use the command:

docker build -t marathon-nginx-example .

Next, if you have the ECR Credential Helper and proper configuration on your development machine, you can push the image to an ECR repository called marathon-nginx-example. Tag the image and upload it to your private ECR repository:

docker tag marathon-nginx-example aws_account_id.dkr.ecr.region.amazonaws.com/marathon-nginx-example:latest

docker push aws_account_id.dkr.ecr.region.amazonaws.com/marathon-nginx-example:latest

Your modified Nginx container is now in ECR. You will configure Marathon to pull the new image from the private repository and run the web server.  To do this, you’ll need to create an application configuration for the new Nginx container. It needs to expose port 80 on the agent, so you can view the modified index page, and it needs to use the compressed configuration file that was placed on the host by the Docker container for ECR Credential Helper, so Marathon knows to use the ECR Credential Helper binary. Here’s the application definition that will pull the image and run the newly created Nginx container:

{
  "id": "/marathon-nginx-example",
  "cmd": null,
  "cpus": 1,
  "mem": 256,
  "disk": 0,
  "instances": 1,
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": " aws_account_id.dkr.ecr.region.amazonaws.com/marathon-nginx-example",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80,
          "servicePort": 0,
          "protocol": "tcp",
          "name": "nginx",
          "labels": {
            "VIP_0": "80"
          }
        }
      ],
      "privileged": true,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "portDefinitions": [
    {
      "port": 10003,
      "protocol": "tcp",
      "labels": {}
    }
  ],
  "uris": [
    "file:///etc/docker.tar.gz"
  ],
  "fetch": [
    {
      "uri": "file:///etc/docker.tar.gz",
      "extract": true,
      "executable": false,
      "cache": false
    }
  ]
}

This example configuration pulls the new image that you committed to the ECR; specifies the public agents so that when you scale your application up, it deploys to publicly available EC2 instances; bridges port 80 on the host to port 80 on the container instance; and uses the URI to fetch the compressed configuration file from where the ECR Credential Helper placed it.

You can now scale up the application and wait for it to be launched on the public agents. To view the new page, get the DNS host name for the public agent ELB load balancer that was created when you launched the DC/OS stack. You can find it in the Outputs section of your CloudFormation stack.
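
One way to retrieve that DNS name and test the page from the command line is sketched below; the stack name is a placeholder, and the exact output key depends on your stack, so list all outputs and pick the public agent load balancer entry:

# List the stack outputs, then request the page through the public agent ELB
aws cloudformation describe-stacks --stack-name dcos-ecr-demo \
    --query 'Stacks[0].Outputs' --output table
curl http://<public-agent-elb-dns-name>/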

When you open a new web page using the DNS name of the public agent ELB load balancer, you should see the modified Nginx welcome page, with the “Hello World!” heading and the note that the image was pulled from ECR.

There it is! You just deployed a Docker container from a private repository without having to store and manage access and secret keys, user names and passwords, or create a scheduled job on each host.

To recap, we created a Docker image that compiles the ECR Docker Credential Helper and places the compiled binary and compressed configuration TAR file on a DC/OS host. We then pushed this image to a public repository. Next, we modified the DC/OS CloudFormation template to include a Beta version of the CoreOS AMI that includes Docker 1.11 (which allows us to use Docker credential helpers) and added IAM policies to allow the DC/OS agents to perform specific actions in ECR.

We then launched the modified CloudFormation template, created an application in Marathon to pull the credential-helper image from the public repository, and scheduled the container on the DC/OS agents.

Finally, to test that the compiled binary is in place and works as expected, we created a sample Nginx Docker image with a modified index.html that we then pushed to a private ECR repository and launched on the DC/OS agents.

To learn more about ECR, visit https://aws.amazon.com/ecr/

To learn more about DC/OS, visit https://dcos.io/