Google Research Blog
The latest news from Research at Google
Word of Mouth: Introducing Voice Search for Indonesian, Malaysian and Latin American Spanish
Wednesday, March 30, 2011
Posted by Linne Ha, International Program Manager
Read more about the launch of Voice Search in Latin American Spanish on the Google América Latina blog.
Today we are excited to announce the launch of Voice Search in Indonesian, Malaysian, and Latin American Spanish, bringing Voice Search to over two dozen languages and accents since our first launch in November 2008. This accomplishment would not have been possible without the help of local users in each region. Let me explain:
In 2010 we launched Voice Search in Dutch, the first language where we used the “word of mouth” project, a crowd-sourcing effort to collect the most accurate voice data possible. The traditional method of acquiring voice samples is to license the data from companies that specialize in the distribution of speech and text databases. However, from day one we knew that the best data for building accurate Voice Search acoustic models would come from the people who would use Voice Search once it launched - our users.
Since then, in each country we have found small groups of people who were avid fans of Google products and were part of a large social network, either in local communities or online. We gave them phones and asked them to collect voice samples from their friends and family. Everyone was required to sign a consent form, and all voice samples were anonymized. When possible, they also helped to test early versions of Voice Search as the product got closer to launch.
Building a speech recognizer involves much more than localizing the user interface. We require thousands of hours of raw data to capture regional accents and idiomatic speech in all sorts of recording environments, mimicking daily-life use cases. For instance, when developing Voice Search for Latin American Spanish, we paid particular attention to Mexican and Argentinean Spanish, two accents that differ from each other more than any other pair of widely used accents in South and Central America. Samples collected in these countries were important bookends for building a version of Voice Search that would work across the whole of Latin America. We also chose key countries such as Peru, Chile, Costa Rica, Panama and Colombia to bridge the divergent accent varieties.
As an International Program Manager at Google, I have been fortunate enough to travel around the world and meet many of our local Google users. They often have great suggestions for the products that they love, and word of mouth was created with the vision that our users could participate in developing the product. These Voice Search launches would not have been possible without the help of our users, and we’re excited to be able to work together on the product development with the people who will ultimately use our products.
Reading tea leaves in the tourism industry: A Case Study in the Gulf Oil Spill
Thursday, March 24, 2011
Posted by Hyunyoung Choi and Paul Liu, Senior Economists
A few years ago, our in-house economists, Hal Varian and Hyunyoung Choi, demonstrated how to “predict the present” with monthly visitor arrivals to Hong Kong. We took this idea further to see if search queries could predict the future. If users start to research their travel plans weeks or months in advance, then intuitively shouldn’t we be able to extend “predicting the present” into “predicting the future”? We decided to test it out by focusing on a region whose tourism had recently been severely impacted: Florida’s gulf coast.
With the travel industry still recovering from a deep recession, the Gulf oil spill had the potential to do significant economic damage. Our case study on the spill yielded useful insight into people’s future travel plans: we found that travel search queries were good predictors of trips to Florida, and to destinations within Florida, about four weeks later.
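The lagged-predictor idea described above can be sketched as a simple regression: fit trips in week t+4 against query volume in week t, then forecast ahead. This is a minimal illustration with synthetic weekly series, not the study’s actual data or model:

```python
import numpy as np

# Synthetic weekly data standing in for the real series: travel-related
# query volume, and trip counts that follow the queries with a 4-week lag
# (the lead time reported in the post).
rng = np.random.default_rng(0)
queries = 100 + 10 * np.sin(np.arange(30) / 3) + rng.normal(0, 1, 30)
trips = 2.0 * np.roll(queries, 4) + rng.normal(0, 1, 30)

LAG = 4
x = queries[:-LAG]   # query volume in week t
y = trips[LAG:]      # trips observed in week t + 4

# Ordinary least squares: trips[t+4] ≈ a * queries[t] + b
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# Forecast trips four weeks ahead from this week's query volume.
forecast = a * queries[-1] + b
```

With real data one would of course validate the lag and the fit out of sample; the point here is only the mechanics of using today’s queries as a leading indicator.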
The results surprised us. Google Insights for Search suggested that, at least with respect to hotel bookings (using data from Smith Travel Research, Inc.), the aggregate effect of the oil spill on Florida travel was modest, since travelers tended to shift their destinations from the affected regions on the west coast to the east coast or central regions of the state. In particular, hotel bookings for affected areas along the Gulf coast were 4.25% lower than predicted, while bookings for unaffected areas along the Atlantic coast were 4.89% higher than predicted.
You can read the full case study here, or try your own hand at predicting the future!
Games, auctions and beyond
Wednesday, March 16, 2011
Posted by Yossi Matias, Senior Director, Head of Israel R&D Center
In an effort to advance the understanding of market algorithms and Internet economics, Google has launched an academic research initiative focused on the underlying aspects of online auctions, pricing, game-theoretic strategies, and information exchange. Twenty professors from three leading Israeli academic institutions - the Hebrew University, Tel Aviv University, and the Technion - will receive a Google grant to conduct research for three years.
In the past two decades, we have seen the Internet grow from a scientific network to an economic force that positively affects the global economy. E-commerce, online advertising, social networks and other new online business models present fascinating research questions and topics of study that can have a profound impact on society.
Consider online advertising, which is based on principles from algorithmic game theory and online auctions. The Internet has enabled advertising that is more segmented and measurable, making it more efficient than traditional advertising channels, such as newspaper classifieds, radio spots, and television commercials. These measurements have led to better pricing models, which are based on online real-time auctions. The original Internet auctions were designed by the industry, based on basic economic principles which have been known and appreciated for forty years.
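The classic economic principle behind these real-time auctions is the sealed-bid second-price (Vickrey) auction, in which the highest bidder wins but pays only the second-highest bid, making truthful bidding a dominant strategy. A minimal sketch:

```python
def second_price_auction(bids):
    """Run a sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder name -> bid amount.
    Returns (winner, price): the highest bidder wins and pays the
    second-highest bid, which removes the incentive to shade bids.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

winner, price = second_price_auction({"alice": 3.0, "bob": 5.0, "carol": 4.0})
# bob wins and pays 4.0 (carol's bid)
```

Production ad auctions (e.g. generalized second-price with quality scores) are considerably more elaborate; this shows only the underlying mechanism the post refers to.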
As the Internet grows, online advertising is becoming more sophisticated, with developments such as ad-exchanges, advertising agencies which specialize in online markets, and new analytic tools. Optimizing value for advertisers and publishers in this new environment may benefit from a better understanding of the strategies and dynamics behind online auctions, the main driving tool of Internet advertising.
These grants will foster collaboration and interdisciplinary research by bringing together world renowned computer scientists, engineers, economists and game theorists to analyze complex online auctions and markets. Together, they will help bring this area of study into mainstream academic scientific research, ultimately advancing the field to the benefit of the industry at large.
The professors who received research grants include:
Hebrew University: Danny Dolev, Jeffrey S. Rosenschein, Noam Nisan (Computer Science and Engineering); Liad Blumrosen, Alex Gershkov, Eyal Winter (Economics); Michal Feldman and Ilan Kremer (Business). The last six are also members of the Center for the Study of Rationality.
Tel Aviv University: Yossi Azar, Amos Fiat, Haim Kaplan, and Yishay Mansour (Computer Science); Zvika Neeman (Economics); Ehud Lehrer and Eilon Solan (Mathematics); and Gal Oestreicher (Business).
Technion: Seffi Naor (Computer Science); Ron Lavi (Industrial Engineering); Shie Mannor and Ariel Orda (Electrical Engineering).
In addition to providing the funds, Google will offer support by inviting the researchers to seminars, workshops, faculty summits and brainstorming events. The results of this research will be published for the benefit of the Internet industry as a whole, and will contribute to the evolving discipline of market algorithms.
Large Scale Image Annotation: Learning to Rank with Joint Word-Image Embeddings
Thursday, March 10, 2011
Posted by Jason Weston and Samy Bengio, Research Team
In our paper, we introduce a generic framework to find a joint representation of images and their labels, which can then be used for various tasks, including image ranking and image annotation.
We focus on the task of automatically assigning annotations (text labels) to images given only the pixel representation of the image, i.e., with no known metadata. This is achieved with a learning algorithm: the computer learns to predict annotations for new images from annotated training images. Such training datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. In this paper, we propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our system learns an interpretable model, in which annotations with alternate wordings ("president obama" or "barack"), different languages ("tour eiffel" or "eiffel tower"), or similar concepts ("toad" or "frog") are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts a similar one.
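The retrieval step this describes - embed the image, then rank annotations by similarity in the joint space - can be sketched as follows. The matrices and labels here are made-up stand-ins for the learned parameters, not the paper’s actual model:

```python
import numpy as np

D = 8  # embedding dimension; the paper learns a low-dimensional joint space
rng = np.random.default_rng(1)

# Hypothetical parameters standing in for the learned model: one embedding
# vector per annotation, and a linear map from image features into the space.
annotations = ["eiffel tower", "tour eiffel", "frog", "toad"]
W_labels = np.zeros((4, D))
W_labels[0, 0] = 1.0                               # "eiffel tower"
W_labels[1] = W_labels[0]; W_labels[1, 1] = 0.05   # alternate wording: nearby
W_labels[2, 2] = 1.0                               # "frog"
W_labels[3] = W_labels[2]; W_labels[3, 3] = 0.05   # similar concept: nearby
W_image = rng.normal(size=(D, 16))                 # maps 16-d image features

def annotate(image_features, k=2):
    """Rank annotations by dot-product similarity to the embedded image."""
    z = W_image @ image_features      # embed the image into the joint space
    scores = W_labels @ z             # similarity to each annotation vector
    return [annotations[i] for i in np.argsort(-scores)[:k]]

# An image whose features land on the "eiffel tower" embedding retrieves
# both wordings of that annotation, illustrating the interpretability claim.
img = np.linalg.pinv(W_image) @ W_labels[0]
top = annotate(img)
```

The actual system learns both maps from ranked training losses; this sketch only shows why nearby annotation vectors yield interchangeable predictions.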
Our system is trained on ~10 million images with ~100,000 possible annotation types; it annotates a single new image in ~0.17 seconds (not including feature processing) and consumes only 82MB of memory. Our method outperforms all the methods we tested against while running faster and consuming less memory, making it possible to host such a system on a laptop or mobile device.
Building resources to syntactically parse the web
Wednesday, March 09, 2011
Posted by Slav Petrov and Ryan McDonald, Research Team
One major hurdle in organizing the world’s information is building computer systems that can understand natural, or human, language. A key step toward such understanding is automatically determining the syntactic and semantic structure of text.
This analysis is an extremely complex inferential process. Consider, for example, the sentence "A hearing is scheduled on the issue today." A syntactic parser needs to determine that "is scheduled" is a verb phrase, that "hearing" is its subject, that the prepositional phrase "on the issue" modifies "hearing", and that "today" is an adverb modifying the verb phrase. Humans, of course, do this all the time without realizing it. For computers it is non-trivial, because it requires a fair amount of background knowledge, typically encoded in a rich statistical model. Consider "I saw a man with a jacket" versus "I saw a man with a telescope". In the former, we know that a jacket is something people wear, not a mechanism for viewing them; so syntactically, the "jacket" must be a property of the "man" and not of the verb "saw" - I did not see the man by using a jacket to view him. In the latter, we know that a telescope is something we can view people with, so it can also be a property of the verb. Of course, the sentence remains ambiguous: maybe the man is carrying the telescope.
Linguistically inclined readers will of course notice that this parse tree has been simplified by omitting empty clauses and traces.
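The analysis described above can be written down as a dependency structure in which each word points to its head. A sketch, with illustrative relation labels rather than any particular treebank’s scheme:

```python
# Dependency analysis of "A hearing is scheduled on the issue today",
# stored as (word, head_index, relation) with head 0 meaning the root.
# The relation labels are illustrative, not an exact annotation scheme.
parse = [
    ("A",         2, "det"),    # 1: determiner of "hearing"
    ("hearing",   4, "subj"),   # 2: subject of "scheduled"
    ("is",        4, "aux"),    # 3: auxiliary of "scheduled"
    ("scheduled", 0, "root"),   # 4: main verb
    ("on",        2, "prep"),   # 5: "on the issue" modifies "hearing"
    ("the",       7, "det"),    # 6: determiner of "issue"
    ("issue",     5, "pobj"),   # 7: object of the preposition
    ("today",     4, "tmod"),   # 8: temporal modifier of the verb
]

def dependents(head):
    """Return the words attached to the given 1-based head index."""
    return [word for word, h, _ in parse if h == head]
```

Reading the structure back out, `dependents(4)` gives the words hanging off the main verb ("hearing", "is", "today"), and `dependents(2)` shows that both the determiner and the prepositional phrase attach to "hearing", exactly the attachment decision discussed above.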
Computer programs that can analyze the syntactic structure of language are fundamental to improving the quality of many tools millions of people use every day, including machine translation, question answering, information extraction, and sentiment analysis. Google itself is already using syntactic parsers in many of its projects. For example, this paper describes a system where a syntactic dependency parser is used to make translations more grammatical between languages with different word orderings. This paper uses the output of a syntactic parser to help determine the scope of negation within sentences, which is then used downstream to improve a sentiment analysis system.
To further this work, Google is pleased to announce a gift to the Linguistic Data Consortium (LDC) to create new annotated resources that can facilitate research progress in the area of syntactic parsing. The primary purpose of the gift is to generate data sets that language technology researchers can use to evaluate the robustness of new parsing methods in several web domains, such as blogs and discussion forums. The goal is to move parsing beyond its current focus on carefully edited text such as print news (for which annotated resources already exist) to domains with larger stylistic and topical variability (where spelling errors and grammatical mistakes are more common).
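Parser robustness on such data sets is typically scored by attachment accuracy against the gold annotation. A minimal sketch of unlabeled attachment score (UAS), using made-up head lists rather than any real corpus:

```python
def unlabeled_attachment_score(gold_heads, pred_heads):
    """Fraction of words whose predicted head matches the gold head."""
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return correct / len(gold_heads)

# Hypothetical 8-word sentence: the parser misattaches one word's head
# (word 5 hung on the verb instead of the noun it modifies).
gold = [2, 4, 4, 0, 2, 7, 5, 4]
pred = [2, 4, 4, 0, 4, 7, 5, 4]
uas = unlabeled_attachment_score(gold, pred)  # 7/8 = 0.875
```

Comparing scores like this on edited news text versus blogs and forums is what makes the robustness gap measurable.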
The Linguistic Data Consortium is a non-profit organization that produces and distributes linguistic data to researchers, technology developers, universities and university libraries. The LDC is hosted by the University of Pennsylvania and directed by Mark Liberman, Christopher H. Browne Distinguished Professor of Linguistics.
The LDC is the leader in building linguistic data resources and will annotate several thousand sentences with syntactic parse trees like the one shown in the figure. The annotation will be done manually by specially trained linguists, who will also have access to machine analyses and can correct the errors those systems make. Once the annotation is completed, the corpus will be released to the research community through the LDC catalog. We look forward to seeing what the LDC produces and what the natural language processing research community can do with this rich annotation resource.