Google Research Blog
The latest news from Research at Google
Google Computer Science Capacity Awards
Monday, March 16, 2015
By Maggie Johnson, Director of Education and University Relations and Chris Busselle, Google.org
One of Google's goals is to surface successful strategies that support the expansion of high-quality Computer Science (CS) programs at the undergraduate level. Innovation in teaching and technology, combined with better engagement of women and underrepresented minority students, is necessary to create inclusive, sustainable, and scalable educational programs.
To address issues arising from the dramatic increase in undergraduate CS enrollments, we recently launched the Computer Science Capacity Awards program. For this three-year program, select educational institutions were invited to contribute proposals for innovative, inclusive, and sustainable approaches to address current scaling issues in university CS educational programs.
Today, after an extensive proposal review process, we are pleased to announce the recipients of the Capacity Awards program:
Carnegie Mellon University - Professor Jacobo Carrasquel
Alternate Instructional Model for Introductory Computer Science Classes
CMU will develop a new instructional model consisting of two optional mini lectures per week given by the instructor, and problem-solving sessions with flexible group meetings that are coordinated by undergraduate and graduate teaching assistants.
Duke University - Professor Jeffrey Forbes
North Carolina State University - Professor Kristy Boyer
University of North Carolina - Professor Ketan Mayer-Patel
Research Triangle Peer Teaching Fellows: Scalable Evidence-Based Peer Teaching for Improving CS Capacity and Diversity
The project hopes to increase CS retention and diversity by developing a highly scalable, effective, evidence-based peer training program across three universities in the North Carolina Research Triangle.
Mount Holyoke College - Professor Heather Pon-Barry
MaGE (Megas and Gigas Educate): Growing Computer Science Capacity at Mount Holyoke College
Mount Holyoke’s MaGE program includes a plan to grow enrollment in introductory CS courses, particularly for women and other underrepresented groups. The program also includes a plan of action for CS students to educate, mentor, and support others in inclusive ways.
George Mason University - Professor Jeff Offutt
SPARC: Self-PAced Learning increases Retention and Capacity
George Mason University wants to replace the traditional course model for CS-1 and CS-2 with an innovative teaching model of self-paced introductory programming courses. Students will periodically demonstrate competency with practical skills demonstrations similar to those used in martial arts.
Rutgers University - Professor Andrew Tjang
Increasing the Scalability and Diversity in the Face of Large Growth in Computer Science Enrollment
Rutgers’ program addresses scalability issues with technology tools, as well as collaborative spaces. It also emphasizes outreach to Rutgers’ women’s college and includes original research on success in CS programs to create new courses that cater to the changing environment.
University of California, Berkeley - Professor John DeNero
Scaling Computer Science through Targeted Engagement
Berkeley’s program plans to increase Software Engineering and UI Design enrollment by 500 total students/year, as well as increase the number of women and underrepresented minority CS majors by a factor of three.
Each of the selected schools brings a unique and innovative approach to addressing current scaling issues, and we are excited to collaborate on concrete strategies for building sustainable and inclusive educational programs. Stay tuned over the coming year as we report on the recipients' progress and share results with the broader CS education community.
Announcing the Google MOOC Focused Research Awards
Monday, March 09, 2015
Posted by Maggie Johnson, Director of Education and University Relations, and Aimin Zhu, University Relations Manager, APAC
Last year, Google and Tsinghua University hosted the 2014 APAC MOOC Focused Faculty Workshop, an event designed to share, brainstorm and generate ideas aimed at fostering MOOC innovation. As a result of the ideas generated at the workshop, we solicited proposals from the attendees for research collaborations that would advance important topics in MOOC development.
After expert reviews and committee discussions, we are pleased to announce the following recipients of the MOOC Focused Research Awards. These awards cover research exploring new interactions to enhance the learning experience, personalized learning, online community building, interoperability of online learning platforms, and education accessibility:
“MOOC Visual Analytics” - Michael Ginda, Indiana University, United States
“Improvement of students’ interaction in MOOCs using participative networks” - Pedro A. Pernías Peco, Universidad de Alicante, Spain
“Automated Analysis of MOOC Discussion Content to Support Personalised Learning” - Katrina Falkner, The University of Adelaide, Australia
“Extending the Offline Capability of Spoken Tutorial Methodology” - Kannan Moudgalya, Indian Institute of Technology Bombay, India
“Launching the Pan Pacific ISTP (Information Science and Technology Program) through MOOCs” - Yasushi Kodama, Hosei University, Japan
“Fostering Engagement and Social Learning with Incentive Schemes and Gamification Elements in MOOCs” - Thomas Schildhauer, Alexander von Humboldt Institute for Internet and Society, Germany
“Reusability Measurement and Social Community Analysis from MOOC Content Users” - Timothy K. Shih, National Central University, Taiwan
In order to further support these projects and foster collaboration, we have begun pairing the award recipients with Googlers pursuing online education research as well as product development teams.
Google is committed to supporting innovation in online learning at scale, and we congratulate the recipients of the MOOC Focused Research Awards. It is our belief that these collaborations will further develop the potential of online education, and we are very pleased to work with these researchers to jointly push the frontier of MOOCs.
A step closer to quantum computation with Quantum Error Correction
Wednesday, March 04, 2015
Posted by Julian Kelly, Rami Barends, and Austin Fowler, Quantum Electronics Engineers
Computer scientists have dreamt of large-scale quantum computation since at least 1994 -- the hope is that quantum computers will be able to process certain calculations much more quickly than any classical computer, helping to solve problems ranging from complicated physics or chemistry simulations to optimization problems to accelerating machine learning tasks.
One of the primary challenges is that quantum memory elements (“qubits”) have always been too prone to errors. They’re fragile and easily disturbed -- any fluctuation or noise from their environment can introduce memory errors, rendering the computations useless. As it turns out, getting even just a small number of qubits together to repeatedly perform the required quantum logic operations and still be nearly error-free is just plain hard. But our team has been developing the quantum logic operations and qubit architectures to do just that.
In our paper “State preservation by repetitive error detection in a superconducting quantum circuit”, published in the journal Nature, we describe a superconducting quantum circuit with nine qubits where, for the first time, the qubits are able to detect and effectively protect each other from bit errors. This quantum error correction (QEC) can overcome memory errors by applying a carefully choreographed series of logic operations on the qubits to detect where errors have occurred.
Photograph of the device containing nine quantum bits (qubits). Each qubit interacts with its neighbors to protect them from error.
So how does QEC work? In a classical computer, we can monitor bits directly to detect errors. However, qubits are much more fickle -- measuring a qubit directly will collapse its entanglement and superposition states, removing the quantum elements that make it useful for computation.
To get around this, we introduce additional ‘measurement’ qubits, and perform a series of quantum logic operations that look at the ‘measurement’ and ‘data’ qubits in combination. By looking at the state of these pairwise combinations (using quantum XOR gates), and performing some careful cross-checking, we can pull out just enough information to detect errors without altering the information in any individual qubit.
The basics of error correction. ‘Measurement’ qubits can detect errors on ‘data’ qubits through the use of quantum XOR gates.
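The pairwise-XOR idea above has a classical analogue that is easy to sketch: in a bit-flip repetition code, each check computes the XOR of two adjacent data bits, and the pattern of checks that fire localizes a single flipped bit without reading any data bit on its own. This is a minimal classical illustration of the parity-check logic, not a simulation of the superconducting circuit:

```python
# Classical sketch of the bit-flip repetition code's parity checks.
# Real qubits require quantum gates; here we only show how pairwise
# XOR ("syndrome") measurements localize a single bit-flip error.

def measure_syndrome(data):
    """XOR each adjacent pair of data bits (the role of the measurement qubits)."""
    return [data[i] ^ data[i + 1] for i in range(len(data) - 1)]

def locate_error(syndrome):
    """Infer which single data bit flipped from which checks fired."""
    fired = [i for i, s in enumerate(syndrome) if s]
    if not fired:
        return None                                  # no error detected
    if len(fired) == 2:
        return fired[1]                              # interior bit between both checks
    return 0 if fired[0] == 0 else fired[0] + 1      # boundary bit

# Encode logical 0 in five data bits, flip bit 2, then detect and correct it.
data = [0, 0, 0, 0, 0]
data[2] ^= 1                                         # simulated bit-flip error
pos = locate_error(measure_syndrome(data))
if pos is not None:
    data[pos] ^= 1                                   # apply the correction
print(data)  # [0, 0, 0, 0, 0]
```

The same structure explains why more qubits help: a longer chain of checks distinguishes more error locations, which mirrors the five-qubit versus nine-qubit comparison described below.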
We’ve also shown that storing information in five qubits works better than just storing it in one, and that with nine qubits the error correction works even better. That’s a key result -- it shows that the quantum logic operations are trustworthy enough that by adding more qubits, we can detect more complex errors that otherwise may cause algorithmic failure.
While the basic physical processes behind quantum error correction are feasible, many challenges remain, such as improving the logic operations behind error correction and testing protection from phase-flip errors. We’re excited to tackle these challenges on the way towards making real computations possible.
Large-Scale Machine Learning for Drug Discovery
Monday, March 02, 2015
Posted by Patrick Riley and Dale Webster, Google Research, and Bharath Ramsundar, Google Research Intern and Stanford Ph.D. candidate
Discovering new treatments for human diseases is an immensely complicated challenge. Even after extensive research to develop a biological understanding of a disease, an effective therapeutic that can improve quality of life must still be found. This process often takes years of research, requiring the creation and testing of millions of drug-like compounds in an effort to find just a few viable drug treatment candidates. These high-throughput screens are often automated in sophisticated labs and are expensive to perform.
Recently, deep learning with neural networks has been applied in virtual drug screening [1, 2, 3], which attempts to replace or augment the high-throughput screening process with the use of computational methods in order to improve its speed and success rate [4].
Traditionally, virtual drug screening has used only the experimental data from the particular disease being studied. However, as the volume of experimental drug screening data across many diseases continues to grow, several research groups have demonstrated that data from multiple diseases can be leveraged with multitask neural networks to improve virtual screening effectiveness.
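The multitask idea can be sketched in a few lines: a shared trunk learns one representation from all of the screening data, and a separate output head per disease/assay makes that task's active/inactive prediction, so every task's data trains the shared weights. The toy model below is a hypothetical illustration with random, untrained weights, not the paper's actual network:

```python
# Toy multitask network: one shared hidden layer, one logistic output
# head per task. Weights are random and untrained -- this only shows
# the architecture, not the paper's model.
import math
import random

random.seed(0)

def linear(weights, bias, x):
    """Dense layer: one row of weights per output unit."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def relu(v):
    return [max(0.0, u) for u in v]

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def make_layer(n_out, n_in):
    w = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

n_features, n_hidden, n_tasks = 8, 4, 3
shared_w, shared_b = make_layer(n_hidden, n_features)       # shared trunk
heads = [make_layer(1, n_hidden) for _ in range(n_tasks)]   # one head per assay

def predict(x):
    h = relu(linear(shared_w, shared_b, x))                 # shared representation
    return [sigmoid(linear(w, b, h)[0]) for w, b in heads]  # per-task probability

probs = predict([random.random() for _ in range(n_features)])
print(probs)  # one active/inactive probability per task
```

In training, gradients from every task's examples would flow into `shared_w`, which is how data from one disease can improve predictions for another.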
In collaboration with the Pande Lab at Stanford University, we’ve released a paper titled "Massively Multitask Networks for Drug Discovery", investigating how data from a variety of sources can be used to improve the accuracy of determining which chemical compounds would be effective drug treatments for a variety of diseases. In particular, we carefully quantified how the amount and diversity of screening data from a variety of diseases with very different biological processes can be used to improve the virtual drug screening predictions.
Using our large-scale neural network training system, we trained at a scale 18x larger than previous work, with a total of 37.8M data points across more than 200 distinct biological processes. Because of our large scale, we were able to carefully probe the sensitivity of these models to a variety of changes in model structure and input data. In the paper, we examine not just the performance of the model but why it performs well and what we can expect for similar models in the future. The data in the paper represents more than 50M total CPU hours.
This graph shows a measure of prediction accuracy (ROC AUC, the area under the receiver operating characteristic curve) for virtual screening on a fixed set of 10 biological processes as more datasets are added.
One encouraging conclusion from this work is that our models are able to utilize data from many different experiments to increase prediction accuracy across many diseases. To our knowledge, this is the first time the effect of adding additional data has been quantified in this domain, and our results suggest that even more data could improve performance even further.
Machine learning at scale has significant potential to accelerate drug discovery and improve human health. We look forward to continued improvement in virtual drug screening and its increasing impact in the discovery process for future drugs.
Thank you to our other collaborators David Konerding (Google), Steven Kearnes (Stanford), and Vijay Pande (Stanford).
References:
1. Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Jörg Kurt Wegner, Hugo Ceulemans, Sepp Hochreiter. Deep Learning as an Opportunity in Virtual Screening. Deep Learning and Representation Learning Workshop, NIPS 2014.
2. George E. Dahl, Navdeep Jaitly, Ruslan Salakhutdinov. Multi-task Neural Networks for QSAR Predictions. arXiv preprint arXiv:1406.1231, 2014.
3. Junshui Ma, Robert P. Sheridan, Andy Liaw, George Dahl, Vladimir Svetnik. Deep Neural Nets as a Method for Quantitative Structure-Activity Relationships. Journal of Chemical Information and Modeling, 2015.
4. Peter Ripphausen, Britta Nisius, Lisa Peltason, Jürgen Bajorath. Quo Vadis, Virtual Screening? A Comprehensive Survey of Prospective Applications. Journal of Medicinal Chemistry, 2010, 53 (24), 8461-8467.