Google Research Blog
The latest news from Research at Google
Seminal Ideas from 2007
Wednesday, September 06, 2017
Posted by Anna Ukhanova, Technical Program Manager, Google Research Europe
It is not every day that we have the chance to pause and reflect on how previous work has led to current successes, how it influenced other advances, and how we might reinterpret it in today’s context. That’s what the
ICML Test-of-Time Award
is meant to achieve, and this year it was given to
Sylvain Gelly
, now a researcher on the
Google Brain team
in our
Zurich office
, and
David Silver
, now at
DeepMind
and lead researcher on
AlphaGo
, for their 2007 paper
Combining Online and Offline Knowledge in UCT
. This paper presented new approaches for incorporating knowledge, either learned offline or generated online on the fly, into a search algorithm to improve its effectiveness.
The
Game of Go
is an ancient Chinese board game enjoyed by millions of players worldwide. Since the success of
Deep Blue
in the game of Chess in the late ’90s, Go has been considered the next benchmark for machine learning and games: it has simple rules, can be efficiently simulated, and progress can be measured objectively. However, the vast search space of possible moves made building an ML system that plays Go well a considerable challenge. Over the last two years, DeepMind’s
AlphaGo
has pushed the limit of what is possible with machine learning in games, bringing many
innovations and technological advances
in order to successfully defeat some of the best players in the world [
1
], [
2
], [
3
].
A little more than 10 years before the success of AlphaGo, the classical
tree search
techniques that were so successful in Chess still dominated computer Go programs, but reached only weak amateur level against human players. Thanks to
Monte-Carlo Tree Search
— a (then) new type of search algorithm based on sampling possible outcomes of the game from a position, and incrementally improving the
search tree
from the results of those simulations — computers were able to search much deeper in the game. This matters because it reduced the amount of hand-crafted human knowledge the programs required, something that is very hard to get right. Any knowledge that a human expert either cannot express or did not think to include can create errors in the computer’s evaluation of a position and lead to blunders
*
.
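The four-phase loop of Monte-Carlo Tree Search (selection, expansion, simulation, backpropagation) can be sketched in a few dozen lines. The following is a minimal illustrative Python sketch on a toy Nim game, not the MoGo implementation; the Nim class, the exploration constant, and the iteration count are stand-ins for illustration:

```python
import math
import random

class Nim:
    """Toy game for illustration: players alternately remove 1-3 counters;
    whoever takes the last counter wins."""
    def __init__(self, counters=10, just_moved=2):
        self.counters = counters
        self.just_moved = just_moved  # player (1 or 2) who made the last move

    def legal_moves(self):
        return list(range(1, min(3, self.counters) + 1))

    def play(self, move):
        return Nim(self.counters - move, 3 - self.just_moved)

    def random_playout_result(self):
        """Finish the game with uniformly random moves; 1.0 if player 1 wins."""
        state = self
        while state.counters > 0:
            state = state.play(random.choice(state.legal_moves()))
        return 1.0 if state.just_moved == 1 else 0.0

class Node:
    """Search-tree node: tracks visit counts and accumulated rewards."""
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried = state.legal_moves()
        self.visits = 0
        self.wins = 0.0  # playouts won by the player who moved into this node

    def uct_child(self, c=1.4):
        # UCB1 selection: exploit high win rates, keep exploring rare moves.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=5000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: create one child for a not-yet-tried move.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play a random game from the new position.
        result = node.state.random_playout_result()
        # 4. Backpropagation: update every node on the path to the root.
        while node is not None:
            node.visits += 1
            node.wins += result if node.state.just_moved == 1 else 1.0 - result
            node = node.parent
    # Recommend the most-visited move, the standard robust choice.
    return max(root.children, key=lambda ch: ch.visits).move
```

With enough iterations the visit counts concentrate on the optimal move, which is how the incremental tree improves on pure random sampling.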
In 2007, Sylvain and David augmented the Monte Carlo Tree Search techniques by exploring two types of knowledge incorporation: (i) online, where the decision for the next move is taken from the current position, using compute resources at the time when the next move is needed, and (ii) offline, where the learning process happens entirely before the game starts, and is summarized into a model that can be applied to all possible positions of a game (even though not all possible positions have been seen during the learning process). This ultimately led to the computer program
MoGo
, which showed an improvement in performance over previous Go algorithms.
For the online part, they adapted the simple idea that some actions don’t necessarily depend on each other. For example, if you need to book a vacation, the choice of the hotel, flight and car rental is obviously dependent on the choice of your destination. However, once given a destination, these things can be chosen (mostly) independently of each other. The same idea can be applied to Go, where some moves can be estimated partially independently of each other to get a very quick, albeit imprecise, estimate. Of course, when time is available, the exact dependencies are also analysed.
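This statistic-sharing idea, the rapid action value estimation (RAVE) heuristic, can be sketched as a blend of two win-rate estimates per move: the exact per-position one and the quick, move-shared one. The mixing schedule below is one commonly used form, not necessarily MoGo’s exact tuning, and the counts are hypothetical:

```python
import math

def rave_value(uct_wins, uct_visits, amaf_wins, amaf_visits, k=1000):
    """Blend the exact per-position estimate (UCT) with the cheap, shared
    all-moves-as-first estimate (AMAF): trust AMAF when a move has few real
    visits, and shift toward UCT as evidence accumulates."""
    beta = math.sqrt(k / (3 * uct_visits + k))  # weight on the AMAF estimate
    uct = uct_wins / uct_visits if uct_visits else 0.0
    amaf = amaf_wins / amaf_visits if amaf_visits else 0.0
    return (1.0 - beta) * uct + beta * amaf
```

An unvisited move falls back entirely on the shared estimate (`beta == 1`), giving the very quick, albeit imprecise, evaluation described above; as real visits accumulate, the exact dependencies take over.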
For offline knowledge incorporation, they explored the impact of learning an approximation of the position value with the computer playing against itself using
reinforcement learning
, and added that knowledge to the tree search algorithm. They also looked at how expert play patterns, based on human knowledge of the game, could be used in a similar way. That offline knowledge was used in two places: first, it helped focus the program on moves that looked similar to good moves it had learned offline; second, it helped simulate more realistic games when the program estimated the value of a given position.
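Seeding the search with offline knowledge can be sketched as initializing a node’s statistics with “virtual” visits drawn from the learned evaluation. This is a minimal illustration of the prior idea, not MoGo’s actual mechanism; the function names and the prior weight `n_prior` are hypothetical:

```python
from types import SimpleNamespace

def node_with_prior(prior_value, n_prior=50):
    """Create node statistics seeded as if the position had already been
    visited n_prior times with average reward prior_value (the offline
    estimate); real simulations then gradually override it."""
    return SimpleNamespace(visits=n_prior, wins=prior_value * n_prior)

def record_playout(node, result):
    """Fold one simulation result (0.0 = loss, 1.0 = win) into the stats."""
    node.visits += 1
    node.wins += result

def value(node):
    """Current win-rate estimate, blending prior and real playouts."""
    return node.wins / node.visits

node = node_with_prior(0.7)       # offline model rates this move highly
for _ in range(50):
    record_playout(node, 0.0)     # online simulations disagree
```

The search starts from the offline estimate but is corrected by evidence: after the 50 losing playouts above, the estimate has moved halfway from the prior toward the observed results.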
These improvements led to strong results on the smaller version of the game of Go (9x9), including a win over a professional player in an exhibition game, and to a stronger amateur level on the full game (19x19). In the years since 2007, rapid advances (almost on a monthly basis) from researchers all over the world enabled the development of algorithms culminating in AlphaGo (which itself introduced many innovations).
Importantly, these algorithms and techniques are not limited to applications towards games, but also enable improvements in many domains. The contributions introduced by David and Sylvain in their collaboration 10 years ago were an important piece to many of the improvements and advancements in machine learning that benefit our lives daily, and we offer our sincere congratulations to both authors on this well-deserved award.
*
As a side note, that’s why
machine learning
as a whole is such a powerful tool: replacing expert knowledge with algorithms that can more fully explore potential outcomes.
Google at ICML 2017
Sunday, August 06, 2017
Posted by Christian Howard, Editor-in-Chief, Research Communications
Machine learning (ML) is a key strategic focus at Google, with highly active groups pursuing research in virtually all aspects of the field, including deep learning and more classical algorithms, exploring theory as well as application. We utilize scalable tools and architectures to build machine learning systems that enable us to solve deep scientific and engineering challenges in areas of language, speech, translation, music, visual processing and more.
As a leader in ML research, Google is proud to be a Platinum Sponsor of the thirty-fourth
International Conference on Machine Learning
(ICML 2017), a premier annual event supported by the
International Machine Learning Society
taking place this week in Sydney, Australia. With over 130 Googlers attending the conference to present publications and host workshops, we look forward to our continued collaboration with the larger ML research community.
If you're attending ICML 2017, we hope you'll visit the Google booth and talk with our researchers to learn more about the exciting work, creativity and fun that goes into solving some of the field's most interesting challenges. Our researchers will also be available to talk about and demo several recent efforts, including the technology behind
Facets
, neural audio synthesis with
Nsynth
, a Q&A session on the
Google Brain Residency program
and much more. You can also learn more about our research being presented at ICML 2017 in the list below (Googlers highlighted in
blue
).
ICML 2017 Committees
Senior Program Committee includes:
Alex Kulesza
,
Amr Ahmed
,
Andrew Dai
,
Corinna Cortes
,
George Dahl
,
Hugo Larochelle
,
Matthew Hoffman
,
Maya Gupta
,
Moritz Hardt
,
Quoc Le
Sponsorship Co-Chair:
Ryan Adams
Publications
Robust Adversarial Reinforcement Learning
Lerrel Pinto,
James Davidson
,
Rahul Sukthankar
,
Abhinav Gupta
Tight Bounds for Approximate Carathéodory and Beyond
Vahab Mirrokni
,
Renato Leme
, Adrian Vladu, Sam Wong
Sharp Minima Can Generalize For Deep Nets
Laurent Dinh, Razvan Pascanu,
Samy Bengio
, Yoshua Bengio
Geometry of Neural Network Loss Surfaces via Random Matrix Theory
Jeffrey Pennington
,
Yasaman Bahri
Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena
,
Christopher Olah
,
Jon Shlens
Learning Deep Latent Gaussian Models with Markov Chain Monte Carlo
Matthew D. Hoffman
On the Expressive Power of Deep Neural Networks
Maithra Raghu, Ben Poole, Surya Ganguli, Jon Kleinberg,
Jascha Sohl-Dickstein
AdaNet: Adaptive Structural Learning of Artificial Neural Networks
Corinna Cortes
,
Xavi Gonzalvo
,
Vitaly Kuznetsov
,
Mehryar Mohri
, Scott Yang
Learned Optimizers that Scale and Generalize
Olga Wichrowska
, Niru Maheswaranathan, Matthew Hoffman, Sergio Gomez, Misha Denil, Nando de Freitas,
Jascha Sohl-Dickstein
Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP
Satyen Kale
, Zohar Karnin, Tengyuan Liang, David Pal
Algorithms for ℓp Low-Rank Approximation
Flavio Chierichetti,
Sreenivas Gollapudi
,
Ravi Kumar
,
Silvio Lattanzi
,
Rina Panigrahy
, David Woodruff
Consistent k-Clustering
Silvio Lattanzi
,
Sergei Vassilvitskii
Input Switched Affine Networks: An RNN Architecture Designed for Interpretability
Jakob Foerster,
Justin Gilmer
,
Jan Chorowski
,
Jascha Sohl-Dickstein
,
David Sussillo
Online and Linear-Time Attention by Enforcing Monotonic Alignments
Colin Raffel
,
Thang Luong
,
Peter Liu
,
Ron Weiss
,
Douglas Eck
Gradient Boosted Decision Trees for High Dimensional Sparse Output
Si Si
, Huan Zhang, Sathiya Keerthi, Dhruv Mahajan, Inderjit Dhillon, Cho-Jui Hsieh
Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control
Natasha Jaques
,
Shixiang Gu
,
Dzmitry Bahdanau
,
Jose Hernandez-Lobato, Richard E Turner,
Douglas Eck
Uniform Convergence Rates for Kernel Density Estimation
Heinrich Jiang
Density Level Set Estimation on Manifolds with DBSCAN
Heinrich Jiang
Maximum Selection and Ranking under Noisy Comparisons
Moein Falahatgar, Alon Orlitsky, Venkatadheeraj Pichapati,
Ananda Suresh
Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders
Cinjon Resnick
,
Adam Roberts
,
Jesse Engel
,
Douglas Eck
,
Sander Dieleman, Karen Simonyan,
Mohammad Norouzi
Distributed Mean Estimation with Limited Communication
Ananda Suresh
,
Felix Yu
,
Sanjiv Kumar
,
Brendan McMahan
Learning to Generate Long-term Future via Hierarchical Prediction
Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin,
Honglak Lee
Variational Boosting: Iteratively Refining Posterior Approximations
Andrew Miller, Nicholas J Foti,
Ryan Adams
RobustFill: Neural Program Learning under Noisy I/O
Jacob Devlin, Jonathan Uesato,
Surya Bhupatiraju
, Rishabh Singh, Abdel-rahman Mohamed, Pushmeet Kohli
A Unified Maximum Likelihood Approach for Estimating Symmetric Properties of Discrete Distributions
Jayadev Acharya, Hirakendu Das, Alon Orlitsky,
Ananda Suresh
Axiomatic Attribution for Deep Networks
Ankur Taly
,
Qiqi Yan
,
Mukund Sundararajan
Differentiable Programs with Neural Libraries
Alex L Gaunt, Marc Brockschmidt, Nate Kushman,
Daniel Tarlow
Latent LSTM Allocation: Joint Clustering and Non-Linear Dynamic Modeling of Sequence Data
Manzil Zaheer,
Amr Ahmed
, Alex Smola
Device Placement Optimization with Reinforcement Learning
Azalia Mirhoseini
,
Hieu Pham
,
Quoc Le
,
Benoit Steiner
,
Mohammad Norouzi
,
Rasmus Larsen
,
Yuefeng Zhou
,
Naveen Kumar
,
Samy Bengio
,
Jeff Dean
Canopy — Fast Sampling with Cover Trees
Manzil Zaheer, Satwik Kottur,
Amr Ahmed
, Jose Moura, Alex Smola
Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
Junhyuk Oh, Satinder Singh,
Honglak Lee
, Pushmeet Kohli
Probabilistic Submodular Maximization in Sub-Linear Time
Serban Stan,
Morteza Zadimoghaddam
, Andreas Krause, Amin Karbasi
Deep Value Networks Learn to Evaluate and Iteratively Refine Structured Outputs
Michael Gygli,
Mohammad Norouzi
,
Anelia Angelova
Stochastic Generative Hashing
Bo Dai,
Ruiqi Guo
,
Sanjiv Kumar
, Niao He, Le Song
Accelerating Eulerian Fluid Simulation With Convolutional Networks
Jonathan Tompson
, Kristofer D Schlachter, Pablo Sprechmann, Ken Perlin
Large-Scale Evolution of Image Classifiers
Esteban Real
,
Sherry Moore
,
Andrew Selle
,
Saurabh Saxena
,
Yutaka Leon Suematsu
,
Jie Tan
,
Quoc Le
,
Alexey Kurakin
Neural Message Passing for Quantum Chemistry
Justin Gilmer
,
Samuel Schoenholz
,
Patrick Riley
, Oriol Vinyals,
George Dahl
Neural Optimizer Search with Reinforcement Learning
Irwan Bello
,
Barret Zoph
,
Vijay Vasudevan
,
Quoc Le
Workshops
Implicit Generative Models
Organizers include:
Ian Goodfellow
Learning to Generate Natural Language
Accepted Papers include:
Generating High-Quality and Informative Conversation Responses with Sequence-to-Sequence Models
Louis Shao
,
Stephan Gouws
,
Denny Britz
,
Anna Goldie
,
Brian Strope
,
Ray Kurzweil
Lifelong Learning: A Reinforcement Learning Approach
Accepted Papers include:
Bridging the Gap Between Value and Policy Based Reinforcement Learning
Ofir Nachum
,
Mohammad Norouzi
,
Kelvin Xu
,
Dale Schuurmans
Principled Approaches to Deep Learning
Organizers include:
Robert Gens
Program Committee includes:
Jascha Sohl-Dickstein
Workshop on Human Interpretability in Machine Learning (WHI)
Organizers include:
Been Kim
ICML Workshop on TinyML: ML on a Test-time Budget for IoT, Mobiles, and Other Applications
Invited speakers include:
Sujith Ravi
Deep Structured Prediction
Organizers include:
Gal Chechik
,
Ofer Meshi
Program Committee includes:
Vitaly Kuznetsov
,
Kevin Murphy
Invited Speakers include:
Ryan Adams
Accepted Papers include:
Filtering Variational Objectives
Chris J Maddison
,
Dieterich Lawson
,
George Tucker
,
Mohammad Norouzi
, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh
REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models
George Tucker
, Andriy Mnih, Chris J Maddison,
Dieterich Lawson
,
Jascha Sohl-Dickstein
Machine Learning in Speech and Language Processing
Organizers include:
Tara Sainath
Invited speakers include:
Ron Weiss
Picky Learners: Choosing Alternative Ways to Process Data
Invited speakers include:
Tomer Koren
Organizers include:
Corinna Cortes
,
Mehryar Mohri
Private and Secure Machine Learning
Keynote Speakers include:
Ilya Mironov
Reproducibility in Machine Learning Research
Invited Speakers include:
Hugo Larochelle
,
Francois Chollet
Organizers include:
Samy Bengio
Time Series Workshop
Organizers include:
Vitaly Kuznetsov
Tutorial
Interpretable Machine Learning
Presenters include:
Been Kim
ICML 2016 & Research at Google
Monday, June 20, 2016
Posted by Afshin Rostamizadeh, Research Scientist
This week, New York hosts the
2016 International Conference on Machine Learning
(ICML 2016), a premier annual Machine Learning event supported by the
International Machine Learning Society
(IMLS). Machine Learning is a key focus area at Google, with highly active research groups exploring virtually all aspects of the field, including deep learning and more classical algorithms.
We work on an extremely wide variety of machine learning problems that arise from a broad range of applications at Google. One particularly important setting is that of large-scale learning, where we utilize scalable tools and architectures to build machine learning systems that work with large volumes of data that often preclude the use of standard single-machine training algorithms. In doing so, we are able to solve deep scientific problems and engineering challenges, exploring theory as well as application, in areas of language, speech, translation, music, visual processing and more.
As Gold Sponsor, Google has a strong presence at ICML 2016 with many Googlers publishing their research and hosting workshops. If you’re attending, we hope you’ll visit the Google booth and talk with our researchers to learn more about the exciting work, creativity and fun that goes into solving interesting ML problems that impact millions of people. You can also learn more about our research being presented at ICML 2016 in the list below (Googlers highlighted in
blue
).
ICML 2016 Organizing Committee
Area Chairs include:
Corinna Cortes
,
John Blitzer
,
Maya Gupta
,
Moritz Hardt
,
Samy Bengio
IMLS
Board Members include:
Corinna Cortes
Accepted Papers
ADIOS: Architectures Deep In Output Space
Moustapha Cisse, Maruan Al-Shedivat,
Samy Bengio
Associative Long Short-Term Memory
Ivo Danihelka (Google DeepMind)
,
Greg Wayne
(Google DeepMind)
,
Benigno Uria
(Google DeepMind)
,
Nal Kalchbrenner
(Google DeepMind)
,
Alex Graves
(Google DeepMind)
Asynchronous Methods for Deep Reinforcement Learning
Volodymyr Mnih
(Google DeepMind)
,
Adria Puigdomenech Badia
(Google DeepMind)
, Mehdi Mirza,
Alex Graves
(Google DeepMind)
,
Timothy Lillicrap
(Google DeepMind)
,
Tim Harley
(Google DeepMind)
,
David Silver
(Google DeepMind)
,
Koray Kavukcuoglu
(Google DeepMind)
Binary embeddings with structured hashed projections
Anna Choromanska,
Krzysztof Choromanski
, Mariusz Bojarski, Tony Jebara,
Sanjiv Kumar
, Yann LeCun
Discrete Distribution Estimation Under Local Privacy
Peter Kairouz,
Keith Bonawitz
,
Daniel Ramage
Dueling Network Architectures for Deep Reinforcement Learning
(Best Paper Award recipient)
Ziyu Wang
(Google DeepMind)
,
Nando de Freitas
(Google DeepMind)
,
Tom Schaul
(Google DeepMind)
,
Matteo Hessel
(Google DeepMind)
,
Hado van Hasselt
(Google DeepMind)
,
Marc Lanctot
(Google DeepMind)
Exploiting Cyclic Symmetry in Convolutional Neural Networks
Sander Dieleman
(Google DeepMind)
,
Jeffrey De Fauw
(Google DeepMind)
,
Koray Kavukcuoglu
(Google DeepMind)
Fast Constrained Submodular Maximization: Personalized Data Summarization
Baharan Mirzasoleiman,
Ashwinkumar Badanidiyuru
, Amin Karbasi
Greedy Column Subset Selection: New Bounds and Distributed Algorithms
Jason Altschuler, Aditya Bhaskara,
Gang Fu
,
Vahab Mirrokni
,
Afshin Rostamizadeh
,
Morteza Zadimoghaddam
Horizontally Scalable Submodular Maximization
Mario Lucic, Olivier Bachem,
Morteza Zadimoghaddam
, Andreas Krause
Continuous Deep Q-Learning with Model-based Acceleration
Shixiang Gu,
Timothy Lillicrap
(Google DeepMind)
,
Ilya Sutskever
,
Sergey Levine
Meta-Learning with Memory-Augmented Neural Networks
Adam Santoro
(Google DeepMind)
, Sergey Bartunov,
Matthew Botvinick
(Google DeepMind)
,
Daan Wierstra
(Google DeepMind)
,
Timothy Lillicrap
(Google DeepMind)
One-Shot Generalization in Deep Generative Models
Danilo Rezende
(Google DeepMind)
,
Shakir Mohamed
(Google DeepMind)
,
Daan Wierstra
(Google DeepMind)
Pixel Recurrent Neural Networks
(Best Paper Award recipient)
Aaron Van den Oord
(Google DeepMind)
,
Nal Kalchbrenner
(Google DeepMind)
,
Koray Kavukcuoglu
(Google DeepMind)
Pricing a low-regret seller
Hoda Heidari,
Mohammad Mahdian
,
Umar Syed
,
Sergei Vassilvitskii
, Sadra Yazdanbod
Primal-Dual Rates and Certificates
Celestine DĂĽnner,
Simone Forte
, Martin Takac, Martin Jaggi
Recommendations as Treatments: Debiasing Learning and Evaluation
Tobias Schnabel, Thorsten Joachims, Adith Swaminathan, Ashudeep Singh,
Navin Chandak
Recycling Randomness with Structure for Sublinear Time Kernel Expansions
Krzysztof Choromanski
,
Vikas Sindhwani
Train faster, generalize better: Stability of stochastic gradient descent
Moritz Hardt
, Ben Recht,
Yoram Singer
Variational Inference for Monte Carlo Objectives
Andriy Mnih
(Google DeepMind)
,
Danilo Rezende
(Google DeepMind)
Workshops
Abstraction in Reinforcement Learning
Organizing Committee:
Daniel Mankowitz,
Timothy Mann
(Google DeepMind)
, Shie Mannor
Invited Speaker:
David Silver
(Google DeepMind)
Deep Learning Workshop
Organizers:
Antoine Bordes, Kyunghyun Cho, Emily Denton,
Nando de Freitas
(Google DeepMind)
, Rob Fergus
Invited Speaker:
Raia Hadsell
(Google DeepMind)
Neural Networks Back To The Future
Organizers:
LĂ©on Bottou, David Grangier, Tomas Mikolov,
John Platt
Data-Efficient Machine Learning
Organizers:
Marc Deisenroth,
Shakir Mohamed
(Google DeepMind)
, Finale Doshi-Velez, Andreas Krause, Max Welling
On-Device Intelligence
Organizers:
Vikas Sindhwani
,
Daniel Ramage
,
Keith Bonawitz
, Suyog Gupta, Sachin Talathi
Invited Speakers:
Hartwig Adam
,
H. Brendan McMahan
Online Advertising Systems
Organizing Committee:
Sharat Chikkerur,
Hossein Azari
, Edoardo Airoldi
Opening Remarks:
Hossein Azari
Invited Speakers:
Martin Pál
,
Todd Phillips
Anomaly Detection 2016
Organizing Committee:
Nico Goernitz, Marius Kloft,
Vitaly Kuznetsov
Tutorials
Deep Reinforcement Learning
David Silver
(Google DeepMind)
Rigorous Data Dredging: Theory and Tools for Adaptive Data Analysis
Moritz Hardt
, Aaron Roth
ICML 2015 and Machine Learning Research at Google
Sunday, July 05, 2015
Posted by Corinna Cortes, Head, Google Research NY
This week, Lille, France hosts the
2015 International Conference on Machine Learning
(ICML 2015), a premier annual Machine Learning event supported by the
International Machine Learning Society
(IMLS). As a leader in Machine Learning research, Google will have a strong presence at ICML 2015, with many Googlers publishing work and hosting workshops. If you’re attending, we hope you’ll visit the Google booth and talk with the Googlers to learn more about the hard work, creativity and fun that goes into solving interesting ML problems that impact millions of people. You can also learn more about our research being presented at ICML 2015 in the list below (Googlers highlighted in
blue
).
Google is a Platinum Sponsor of ICML 2015.
ICML Program Committee
Area Chair -
Corinna Cortes
&
Samy Bengio
IMLS Board Member -
Corinna Cortes
Papers:
Learning Program Embeddings to Propagate Feedback on Student Code
Chris Piech,
Jonathan Huang
, Andy Nguyen, Mike Phulsuksombati, Mehran Sahami, Leonidas Guibas
BilBOWA: Fast Bilingual Distributed Representations without Word Alignments
Stephan Gouws
, Yoshua Bengio,
Greg Corrado
An Empirical Exploration of Recurrent Network Architectures
Rafal Jozefowicz
, Wojciech Zaremba,
Ilya Sutskever
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe
,
Christian Szegedy
DRAW: A Recurrent Neural Network For Image Generation
Karol Gregor
,
Ivo Danihelka
,
Alex Graves
,
Danilo Rezende
,
Daan Wierstra
Variational Inference with Normalizing Flows
Danilo Rezende
,
Shakir Mohamed
Structural Maxent Models
Corinna Cortes
, Vitaly Kuznetsov,
Mehryar Mohri
,
Umar Syed
Weight Uncertainty in Neural Network
Charles Blundell
,
Julien Cornebise
,
Koray Kavukcuoglu
,
Daan Wierstra
MADE: Masked Autoencoder for Distribution Estimation
Mathieu Germain,
Karol Gregor
, Iain Murray, Hugo Larochelle
Fictitious Self-Play in Extensive-Form Games
Johannes Heinrich,
Marc Lanctot
,
David Silver
Universal Value Function Approximators
Tom Schaul
,
Daniel Horgan
,
Karol Gregor
,
David Silver
Workshops:
Extreme Classification: Learning with a Very Large Number of Labels
Samy Bengio
- Organizing Committee
Machine Learning for Education
Jonathan Huang
- Organizing Committee
Workshop on Machine Learning Open Source Software 2015: Open Ecosystems
Ian Goodfellow
- Program Committee
Machine Learning for Music Recommendation
Philippe Hamel
- Invited Speaker
Large-Scale Kernel Learning: Challenges and New Opportunities
Poster -
Just-In-Time Kernel Regression for Expectation Propagation
Wittawat Jitkrittum, Arthur Gretton,
Nicolas Heess
,
S.M. Ali Eslami
, Balaji Lakshminarayanan, Dino Sejdinovic, Zoltan Szabo
European Workshop on Reinforcement Learning (EWRL)
RĂ©mi Munos
- Organizing Committee
David Silver
- Keynote
Workshop on Deep Learning
Geoff Hinton
- Organizer
Tara Sainath
,
Oriol Vinyals
,
Ian Goodfellow
,
Karol Gregor
- Invited Speakers
Poster -
A Neural Conversational Model
Oriol Vinyals
,
Quoc Le
Oral Presentation -
Massively Parallel Methods for Deep Reinforcement Learning
Arun Nair
,
Praveen Srinivasan
,
Sam Blackwell
,
Cagdas Alcicek
,
Rory Fearon
,
Alessandro De Maria
,
Vedavyas Panneershelvam
,
Mustafa Suleyman
,
Charles Beattie
,
Stig Petersen
,
Shane Legg
,
Volodymyr Mnih
,
Koray Kavukcuoglu
,
David Silver