Google Research Blog
The latest news from Research at Google
Announcing the Google Cloud Platform Research Credits Program
Thursday, April 26, 2018
Posted by Steven Butschi, Head of Higher Education, Google
Scientists across nearly every discipline are researching ever larger and more complex data sets, using tremendous amounts of compute power to learn, make discoveries and build new tools that few could have imagined only a few years ago. Traditionally, this kind of research has been limited by the availability of resources, with only the largest universities or industry partners able to successfully pursue these endeavors. However, the power of cloud computing has been removing obstacles that many researchers used to face, enabling projects that use machine learning tools to understand and address student questions and that study robotic interactions with humans, among many more.
In order to ensure that more researchers have access to powerful cloud tools, we’re launching Google Cloud Platform (GCP) research credits, a new program aimed at supporting faculty in qualified regions who want to take advantage of GCP’s compute, analytics, and machine-learning capabilities for research. Higher education researchers can use GCP research credits in a multitude of ways; below are just three examples to illustrate how GCP can help propel your research forward.
Andrew V. Sutherland, a computational number theorist and Principal Research Scientist at the Massachusetts Institute of Technology, is one of a growing number of academic researchers who has already made the transition and benefited from GCP. His team moved his extremely large database to GCP because “we are mathematicians who want to focus on our research, and not have to worry about hardware failures or scaling issues with the website.”
Ryan Abernathey, Assistant Professor of Earth and Environmental Sciences, Ocean and Climate Physics at the Lamont-Doherty Earth Observatory at Columbia University, used Google Cloud credits through an NSF partnership and, with his team, developed an open-source platform to manage the complex data sets of climate science. The platform, called Pangeo, can run Earth System Modeling simulations on petabytes of high-resolution, three-dimensional data. “This is the future of what day-to-day science research computing will look like,” he predicts.
At the Stanford Center for Genomics and Personalized Medicine (SCGPM), researchers using GCP and BigQuery can now run hundreds of genomes through a variant analysis pipeline and get query results quickly. Mike Snyder, director of SCGPM, notes, “We’re entering an era where people are working with thousands or tens of thousands or even million genome projects, and you’re never going to do that on a local cluster very easily. Cloud computing is where the field is going.”
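To give a flavor of what querying variant data with BigQuery can look like, here is a minimal, hypothetical sketch using the BigQuery Python client. The project, dataset, and table names are placeholders for illustration; this is not SCGPM’s actual pipeline.

```python
# Hypothetical sketch: count variants per chromosome in a BigQuery table of
# genomic variants. Project, dataset and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-research-project")
query = """
    SELECT reference_name, COUNT(*) AS variant_count
    FROM `my-research-project.genomics.variants`
    GROUP BY reference_name
    ORDER BY variant_count DESC
"""
for row in client.query(query).result():
    print(row.reference_name, row.variant_count)
```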
The GCP research credits program is open to faculty doing cutting-edge research in eligible countries. We’re eager to hear how we can help accelerate your progress. If you’re interested, you can learn more on our website or apply now.
Consistent Hashing with Bounded Loads
Monday, April 03, 2017
Posted by Vahab Mirrokni, Principal Scientist, Morteza Zadimoghaddam, Research Scientist, NYC Algorithms Team
Running a large-scale web service, such as content hosting, necessarily requires load balancing: distributing clients uniformly across multiple servers such that none get overloaded. Further, it is desirable to find an allocation that does not change very much over time in a dynamic environment in which both clients and servers can be added or removed at any time. In other words, we need the allocation of clients to servers to be consistent over time.
In collaboration with Mikkel Thorup, a visiting researcher from the University of Copenhagen, we developed a new efficient allocation algorithm for this problem with tight guarantees on the maximum load of each server, and studied it theoretically and empirically. We then worked with our Cloud team to implement it in Google Cloud Pub/Sub, a scalable event streaming service, and observed substantial improvement in the uniformity of the load allocation (in terms of the maximum load assigned to servers) while maintaining consistency and stability objectives. In August 2016 we described our algorithm in the paper “Consistent Hashing with Bounded Loads”, and shared it on arXiv for potential use by the broader research community.
Three months later, Andrew Rodland from Vimeo informed us that he had found the paper, implemented it in haproxy (a widely-used piece of open source software), and used it for their load balancing project at Vimeo. The results were dramatic: applying these algorithmic ideas helped them decrease the cache bandwidth by a factor of almost 8, eliminating a scaling bottleneck. He recently summarized this story in a blog post detailing his use case. Needless to say, we were excited to learn that our theoretical research was not only put into application, but also that it was useful and open-sourced.
Background
While the concept of consistent hashing has been developed in the past to deal with load balancing in dynamic environments, a fundamental issue with all the previously developed schemes is that, in certain scenarios, they may result in sub-optimal load balancing on many servers.
Additionally, both clients and servers may be added or removed periodically, and with such changes, we do not want to move too many clients. Thus, while the dynamic allocation algorithm has to always ensure a proper load balancing, it should also aim to minimize the number of clients moved after each change to the system. Such allocation problems become even more challenging when we face hard constraints on the capacity of each server - that is, each server has a capacity that the load may not exceed. Typically, we want capacities close to the average loads.
In other words, we want to simultaneously achieve both uniformity and consistency in the resulting allocations. There is a vast amount of literature on solutions in the much simpler case where the set of servers is fixed and only the client set is updated, but in this post we discuss solutions that are relevant in the fully dynamic case where both clients and servers can be added and removed.
The Algorithm
We can think about the servers as bins and clients as balls, borrowing the notation of well-studied balls-to-bins stochastic processes. The uniformity objective encourages all bins to have a load roughly equal to the average density (the number of balls divided by the number of bins). For some parameter ε, we set the capacity of each bin to either the floor or the ceiling of the average load times (1+ε). This extra capacity allows us to design an allocation algorithm that meets the consistency objective in addition to the uniformity property.
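For concreteness, if there are m balls (clients) and n bins (servers), one way to write the capacity used above is:

```latex
c \;=\; \left\lfloor (1+\varepsilon)\,\tfrac{m}{n} \right\rfloor
\quad\text{or}\quad
c \;=\; \left\lceil (1+\varepsilon)\,\tfrac{m}{n} \right\rceil ,
```

with the mix of floors and ceilings chosen so that the total capacity across all bins is at least m, which is what guarantees that every ball can find a bin with spare capacity.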
Imagine a given range of numbers overlaid on a circle. We apply a hash function to balls and a separate hash function to bins to obtain numbers in that range that correspond to positions on that circle. We then start allocating balls in a specific order independent of their hash values (let’s say based on their ID). Then each ball is moved clockwise and is assigned to the first bin with spare capacity.
Consider the example above where 6 balls and 3 bins are assigned using two separate hash functions to random locations on the circle. For the sake of this instance, assume the capacity of each bin is set to 2. We start allocating balls in the increasing order of their ID values. Ball number 1 moves clockwise, and goes to bin C. Ball number 2 goes to A. Balls 3 and 4 go to bin B. Ball number 5 goes to bin C. Then ball number 6 moves clockwise and hits bin B first. However bin B has capacity 2 and already contains balls 3 and 4. So ball 6 keeps moving to reach bin C but that bin is also full. Finally, ball 6 ends up in bin A that has a spare slot for it.
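To make the walk-clockwise allocation concrete, here is a minimal Python sketch of the scheme described above. The hash function, ring size, and names are illustrative choices for this post; this is not the Cloud Pub/Sub or haproxy implementation.

```python
# Minimal sketch of consistent hashing with bounded loads, following the
# balls-and-bins description above. The hash function, ring size and names
# are illustrative; this is not the Cloud Pub/Sub or haproxy implementation.
import bisect
import hashlib
import math


def ring_position(key, ring_size=2**32):
    """Hash a ball or bin ID to a position on the circle."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % ring_size


class BoundedLoadRing:
    def __init__(self, bins, epsilon=0.3):
        self.epsilon = epsilon
        # Sort bins by their position on the circle.
        self.bins = sorted(bins, key=ring_position)
        self.positions = [ring_position(b) for b in self.bins]

    def capacity(self, num_balls):
        # Each bin may hold at most ceil((1 + epsilon) * average load) balls.
        average = num_balls / len(self.bins)
        return math.ceil((1 + self.epsilon) * average)

    def assign(self, balls):
        """Allocate balls in increasing ID order; each ball starts at its hash
        position and moves clockwise to the first bin with spare capacity."""
        cap = self.capacity(len(balls))
        loads = {b: 0 for b in self.bins}
        assignment = {}
        for ball in sorted(balls):
            start = bisect.bisect_left(self.positions, ring_position(ball))
            for step in range(len(self.bins)):
                candidate = self.bins[(start + step) % len(self.bins)]
                if loads[candidate] < cap:
                    loads[candidate] += 1
                    assignment[ball] = candidate
                    break
        return assignment


# Example: 6 balls and 3 bins, as in the walkthrough above (the actual
# positions will differ, since the hash function here is arbitrary).
ring = BoundedLoadRing(["A", "B", "C"], epsilon=0.5)
print(ring.assign(range(1, 7)))
```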
Upon any update in the system (ball or bin insertion/deletion), the allocation is recomputed to keep the uniformity objective. The art of the analysis is to show that a small update (a small number of insertions and deletions) results in minor changes in the state of the allocation, and therefore the consistency objective is met. In our paper we show that every ball removal or insertion in the system results in O(1/ε²) movements of other balls. The most important thing about this upper bound is that it is independent of the total number of balls or bins in the system: if the number of balls or bins is doubled, this bound will not change. Having an upper bound independent of the number of balls or bins introduces room for scalability, as the consistency objective is not violated if we move to bigger instances. Simulations of the number of movements (relocations) per update, when an update occurs on a bin/server, are shown below.
The red curve shows the average number of movements and the blue bars indicate the variance for different values of ε (the x-axis). The dashed curve is the upper bound suggested by our theoretical results, which fits nicely as a prediction of the actual number of movements. Furthermore, for any value of ε, we know the load of each bin is at most (1+ε) times the average load. Below we see the load distribution of bins for ε=0.1, ε=0.3 and ε=0.9.
The distribution of loads for several values of ε. The load distribution is nearly uniform, covering the whole range of loads from 0 to (1+ε) times the average, with many bins having a load equal to (1+ε) times the average.
As one can see there is a tradeoff — a lower ε helps with uniformity but not with consistency, while larger ε values help with consistency. A lower ε will ensure that many loads will be equal to the hard capacity limit of (1+ε) times the average, and the rest have a decaying distribution.
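As a rough illustration of this tradeoff, one can reuse the sketch above to count how many balls change bins when a server is removed, for a few values of ε. The numbers such a toy simulation produces are illustrative only and are not the figures reported in the paper.

```python
# Rough simulation in the spirit of the experiment above: drop one bin,
# recompute the allocation, and count how many balls ended up on a different
# bin (this count includes the balls that were on the removed bin itself).
def count_relocations(num_balls=1000, num_bins=20, epsilon=0.3):
    bins = ["server-%d" % i for i in range(num_bins)]
    balls = list(range(num_balls))
    before = BoundedLoadRing(bins, epsilon).assign(balls)
    after = BoundedLoadRing(bins[:-1], epsilon).assign(balls)
    return sum(1 for b in balls if before[b] != after[b])


for eps in (0.1, 0.3, 0.9):
    print("epsilon=%.1f relocations=%d" % (eps, count_relocations(epsilon=eps)))
```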
When providing content hosting services, one must be ready to face a variety of instances with different characteristics. This consistent hashing scheme is ideal for such scenarios as it performs well even for worst-case instances.
While our internal results are exciting, we are even more pleased that the broader community found our solution useful enough to open-source, allowing anyone to use this algorithm. If you are interested in further details of this research, please see the paper on arXiv, and stay tuned for more research from the NYC Algorithms Team!
Acknowledgements: We would like to thank Alex Totok, Matt Gruskin, Sergey Kondratyev and Haakon Ringberg from the Google Cloud Pub/Sub team, and of course Mikkel Thorup for his invaluable contributions to this paper.
Reproducible Science: Cancer Researchers Embrace Containers in the Cloud
Tuesday, September 06, 2016
Posted by Dr. Kyle Ellrott, Oregon Health & Science University, Dr. Josh Stuart, University of California Santa Cruz, and Dr. Paul Boutros, Ontario Institute for Cancer Research
Today we hear from the principal investigators of the ICGC-TCGA DREAM Somatic Mutation Calling Challenges about how they are encouraging cancer researchers to make use of Docker and Google Cloud Platform to gain a deeper understanding of the complex genetic mutations that occur in cancer, while doing so in a reproducible way.
– Nicole Deflaux and Jonathan Bingham, Google Genomics
Today’s genomic analysis software tools often give different answers when run in different computing environments - that’s like getting a different diagnosis from your doctor depending on which examination room you’re sitting in.
Reproducible science matters, especially in cancer research where so many lives are at stake. The Cancer Moonshot has called for the research world to 'Break down silos and bring all the cancer fighters together'. Portable software “containers” and cloud computing hold the potential to help achieve these goals by making scientific data analysis more reproducible, reusable and scalable.
Our team of researchers from the Ontario Institute for Cancer Research, University of California Santa Cruz, Sage Bionetworks and Oregon Health & Science University is pushing the frontiers by encouraging scientists to package up their software in reusable Docker containers and make use of cloud-resident data from the Cancer Cloud Pilots funded by the National Cancer Institute.
In 2014 we initiated the ICGC-TCGA DREAM Somatic Mutation Calling (SMC) Challenges, for which Google provided credits on Google Cloud Platform. The first result of this collaboration was the DREAM-SMC DNA challenge, a public challenge that engaged cancer researchers from around the world to find the best methods for discovering DNA somatic mutations. By the end of the challenge, over 400 registered participants competed by submitting 3,500 open-source entries for 14 test genomes, providing key insights into the strengths and limitations of the current mutation detection methods.
The SMC-DNA challenge enabled comparison of results, but it did little to facilitate the exchange of cross-platform software tools. Accessing extremely large genome sequence input files and shepherding complex software pipelines created a “double whammy” to discourage data sharing and software reuse.
How can we overcome these barriers?
Exciting developments have taken place in the past couple of years that may annihilate these last barriers. The availability of cloud technologies and containerization can serve as the vanguards of reproducibility and interoperability.
Thus, a new way of creating open DREAM challenges has emerged: rather than encouraging the status quo where participants run their own methods themselves on their own systems, and the results cannot be verified, the new challenge design requires participants to submit open-source code packaged in Docker containers so that anyone can run their methods and verify the results. Real-time leaderboards show which entries are winning and top performers have a chance to claim a prize.
Working with Google Genomics and Google Cloud Platform, the DREAM-SMC organizers are now using cloud and containerization technologies to enable portability and reproducibility as a core part of the DREAM challenges. The latest SMC installments, the SMC-Het Challenge and the SMC-RNA Challenge, have implemented this new plan:
SMC-Het Challenge: Tumour biopsies are composed of many different cell types in addition to tumour cells, including normal tissue and infiltrating immune cells. Furthermore, the tumours themselves are made of a mixture of different subpopulations, all related to one another through cell division and mutation. Critically, each sub-population can have distinct clinical outcomes, with some more resistant to treatment or more likely to metastasize than others. The goal of the SMC-Het Challenge is to identify the best methods for predicting tumor subpopulations and their “family tree” of relatedness from genome sequencing data.
SMC-RNA Challenge: The alteration of RNA production is a fundamental mechanism by which cancer cells rewire cellular circuitry. Genomic rearrangements in cancer cells can produce fused protein products that can bestow Frankenstein-like properties. Both RNA abundances and novel fusions can serve as the basis for clinically-important prognostic biomarkers. The SMC-RNA Challenge will identify the best methods to detect such rogue expressed RNAs in cancer cells.
Ultimately, the success will be gauged by the amount of serious participation in these latest competitions. So far, the signs are encouraging. SMC-Het, which focuses on a very new research area, launched in November 2015 and has already enlisted 18 teams contributing over 70 submissions. SMC-RNA just recently launched and will run until early 2017, with several of the world leaders in the field starting to prepare entries. What’s great about the submissions being packaged in containers is that even after the challenges end, the tested methods can be applied and further adapted by anyone around the world.
Thus, the moon shot need not be a lucky solo attempt made by one hero in one moment of inspiration. Instead, the new informatics of clouds and containers will enable us to combine intelligence so we can build a series of bridges from here to there.
To participate in the DREAM challenges, visit the SMC-Het and SMC-RNA Challenge sites.
Genomic Data Processing on Google Cloud Platform
Tuesday, April 05, 2016
Posted by Dr. Stacey Gabriel, Director of the Genomics Platform at the Broad Institute of MIT and Harvard
Today we hear from the Broad Institute of MIT and Harvard about how their researchers and software engineers are collaborating closely with the Google Genomics team on large-scale genomic data analysis. They’ve already reduced the time and cost for whole genome processing by several fold, helping researchers think even bigger. Broad’s open source tools, developed in close collaboration with Google Genomics, will also be made available to the wider research community.
– Jonathan Bingham, Product Manager, Google Genomics
Dr. Stacey Gabriel, Director of the Genomics Platform at the Broad Institute
As one of the largest genome sequencing centers in the world, the Broad Institute of MIT and Harvard generates a lot of data. Our DNA sequencers produce more than 20 Terabytes (TB) of genomic data per day, and they run 365 days a year. Moreover, our rate of data generation is not only growing, but accelerating: our output increased more than two-fold last year, and nearly two-fold the previous year. We are not alone in facing this embarrassment of riches; across the whole genomics community, the rate of data production is doubling about every eight months with no end in sight.
Here at Broad, our team of software engineers and methods developers have spent the last year working to re-architect our production sequencing environment for the cloud. This has been no small feat, especially as we had to build the plane while we flew it! It required an entirely new system for developing and deploying pipelines (which we call Cromwell), as well as a new framework for wet lab quality control that uncouples data generation from data processing.
Last summer Broad and Google announced a collaboration to develop a safe, secure and scalable cloud computing infrastructure capable of storing and processing enormous datasets. We also set out to build cloud-supported tools to analyze such data and unravel long-standing mysteries about human health. Our engineers collaborate closely; we teach them about genomic data science and genomic data engineering, and they teach us about cloud computing and distributed systems. To us, this is a wonderful model for how a basic research institute can productively collaborate with industry to advance science and medicine. Both groups move faster and go further by working together.
As of today, the largest and most important of our production pipelines, the Whole Genome Sequencing Pipeline, has been completely ported to the Google Cloud Platform (GCP). We are now beginning to run production jobs on GCP and will be switching over entirely this month. This switch has proved to be a very cost-effective decision. While the conventional wisdom is that public clouds can be more expensive, our experience is that cloud is dramatically cheaper. Consider the curve below that my colleague Kristian Cibulskis recently showed at GCP NEXT:
Out of the box, the cost of running the Genome Analysis Toolkit (GATK) best practices pipeline on a 30X-coverage whole genome was roughly the same as the cost of our on-premise infrastructure. Over a period of a few months, however, we developed techniques that allowed us to really reduce costs: We learned how to parallelize the computationally intensive steps like aligning DNA sequences against a reference genome. We also optimized for GCP’s infrastructure to lower costs by using features such as Preemptible VMs. After doing these optimizations, our production whole genome pipeline was about 20% the cost of where we were when we started, saving our researchers millions of dollars, all while reducing processing turnaround time eight-fold.
There is a similar story to be told on storage of the input and output data. Google Cloud Storage Nearline is a medium for storing DNA sequence alignments and raw data. Like most people in genomics, we access genetic variants data every day, but raw DNA sequences only a few times per year, such as when there is a new algorithm that requires raw data or a new assembly of the human genome. Nearline’s price/performance tradeoff is well-suited to data that’s infrequently accessed. By using Nearline, along with some compression tricks, we were able to reduce our storage costs by greater than 50%.
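As a hedged illustration of this storage-class choice (not Broad’s actual setup; the project, bucket and file names below are placeholders), archiving rarely-accessed raw data to a Nearline bucket with the Cloud Storage Python client might look like this:

```python
# Hypothetical example: create a Nearline bucket for raw sequence data and
# archive a file into it. Project, bucket and file names are placeholders.
from google.cloud import storage

client = storage.Client(project="my-genomics-project")
bucket = client.bucket("raw-sequence-archive")
bucket.storage_class = "NEARLINE"  # cheaper per GB, for infrequently accessed data
client.create_bucket(bucket)

# Upload a raw data file (e.g. an unaligned BAM) into the archive bucket.
bucket.blob("runs/2016-04/sample-001.bam").upload_from_filename("sample-001.bam")
```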
Altogether, we estimate that, by using GCP services for both compute and storage, we will be able to lower the total cost of ownership for storing and processing genomic data significantly relative to our on premise costs. Looking forward, we also see advantages for data sharing, particularly for large multi-group genome projects. An environment where the data can be securely stored and analyzed will solve problems of multiple groups copying and paying for transmission and storage of the same data.
Porting the GATK whole genome pipeline to the cloud is just the starting point. During the coming year, we plan to migrate the bulk of our production pipelines to the cloud, including tools for arrays, exomes, cancer genomes, and RNA-seq. Moreover, our non-exclusive relationship with Google is founded on the principle that our groups can leverage complementary skills to make products that can not only serve the needs of Broad, but also help serve the needs of researchers around the world. Therefore, as we migrate each of our pipelines to the cloud to meet our own needs, we also plan to make them available to the greater genomics community through a Software-as-a-Service model.
This is an exciting time for us at Broad. For more than a decade we have served the genomics community by acting as a hub for data generation; now, we are extending this mission to encompass not only sequencing services, but also data services. We believe that by expanding access to our tools and optimizing our pipelines for the cloud, we will enable the community to benefit from the enormous effort we have invested. We look forward to expanding the scope of this mission in the years to come.
Machine Learning in the Cloud, with TensorFlow
Wednesday, March 23, 2016
Posted by Slaven Bilac, Software Engineer, Google Research
At Google, researchers collaborate closely with product teams, applying the latest advances in Machine Learning to existing products and services, such as speech recognition in the Google app, search in Google Photos and the Smart Reply feature in Inbox by Gmail, in order to make them more useful. A growing number of Google products are using TensorFlow, our open source Machine Learning system, to tackle ML challenges, and we would like to enable others to do the same.
Today, at GCP NEXT 2016, we announced the alpha release of Cloud Machine Learning, a framework for building and training custom models to be used in intelligent applications.
Machine Learning projects can come in many sizes, and as we’ve seen with our open source offering TensorFlow, projects often need to scale up. Some small tasks are best handled with a local solution running on one’s desktop, while large scale applications require both the scale and dependability of a hosted solution. Google Cloud Machine Learning aims to support the full range and provide a seamless transition from local to cloud environment.
The Cloud Machine Learning offering allows users to run custom distributed learning algorithms based on TensorFlow. In addition to the deep learning capabilities that power Cloud Translate API, Cloud Vision API, and Cloud Speech API, we provide easy-to-adopt samples for common tasks like linear regression/classification with very fast convergence properties (based on the SDCA algorithm) and building a custom image classification model with a few hundred training examples (based on the DeCAF algorithm).
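For a rough sense of the kind of linear classification task mentioned above, here is a small TensorFlow sketch written against the later 1.x Estimator API. It is not the Cloud Machine Learning sample code, and it uses the estimator’s default optimizer rather than SDCA; the synthetic data and names are illustrative only.

```python
# Illustrative only: a tiny TensorFlow (1.x Estimator API) linear classifier
# on synthetic data. Not the Cloud Machine Learning samples; no SDCA here.
import numpy as np
import tensorflow as tf

# Synthetic two-feature data: label is 1 when the features sum to more than 1.
x_train = np.random.rand(1000, 2).astype(np.float32)
y_train = (x_train.sum(axis=1) > 1.0).astype(np.int32)

feature_columns = [tf.feature_column.numeric_column("x", shape=[2])]
classifier = tf.estimator.LinearClassifier(feature_columns=feature_columns)

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": x_train}, y=y_train, batch_size=32, num_epochs=None, shuffle=True)
classifier.train(input_fn=train_input_fn, steps=500)

eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": x_train}, y=y_train, num_epochs=1, shuffle=False)
print(classifier.evaluate(input_fn=eval_input_fn))
```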
We are excited to bring the best of Google Research to Google Cloud Platform. Learn more about this release and more from GCP NEXT 2016 on the Google Cloud Platform blog.