Google Research Blog
The latest news from Research at Google
51 Languages in Google Translate
Monday, August 31, 2009
Posted by Franz Och, Principal Scientist
Are you using Google Translate to access the world's information? It can help you find and translate local restaurant and hotel reviews into your language when planning a vacation abroad, allow you to read the Spanish or French editions of Google News, communicate with people who speak different languages using Google Translate chat bots, and more. We're constantly working to improve translation quality, so if you haven't tried it recently, you may be pleasantly surprised by what it can do now.
We're especially excited to announce that we've added 9 new languages to Google Translate: Afrikaans, Belarusian, Icelandic, Irish, Macedonian, Malay, Swahili, Welsh, and Yiddish, bringing the number of languages we support from 42 to 51. Since we can translate between any two of these languages, we offer translation for 2550 language pairs!
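The pair count follows from simple combinatorics: with n mutually translatable languages, every ordered (source, target) combination is a distinct translation direction, giving n × (n − 1) pairs. A quick sketch of the arithmetic:

```python
# Number of directed translation pairs among n mutually translatable
# languages: each of the n source languages can be paired with any of
# the n - 1 other languages as a target.
def language_pairs(n: int) -> int:
    return n * (n - 1)

print(language_pairs(51))  # 2550 pairs after this launch
print(language_pairs(42))  # 1722 pairs before it
```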
How do we decide which languages to add to Google Translate? Our goal is to provide automatic translation for as many languages as possible. So internally we've been collecting data and building systems for more than 100 languages. Whenever a set of languages meets our quality bar we consider it for our next language launch. We've found that one of the most important factors in adding new languages to our system is the ability to find large amounts of translated documents from which our system automatically learns how to translate. As a result, the set of languages that we've been able to develop is more closely tied to the size of the web presence of a language and less to the number of speakers of the language.
We're very happy that our technology allows us to produce machine translation systems for languages that often don't get the attention they deserve. For many of the newly supported languages, ours is the only mature and freely available translation system. While translation quality in these languages will be noticeably rougher than for languages we've supported longer, like French or Spanish, it is most often good enough to give a basic understanding of the text, and you can be sure that the quality will get better over time.
Remember, you can also use Google Translate from inside other Google products. For example, you can translate e-mails within Gmail, translate web pages using Google Toolbar, translate RSS news feeds from around the world in Google Reader, and translate documents in Google Docs. (The new languages aren't available in these products yet but will be soon!) And if you're translating content into other languages, you can use our technology within Google Translator Toolkit to help you translate faster and better. In the future, expect to find our translation technology in more places, making it increasingly simple to get access to information no matter what language it is written in.
On the predictability of Search Trends
Monday, August 17, 2009
Posted by Yossi Matias, Niv Efron, and Yair Shimshoni, Google Labs, Israel.
Since launching Google Trends and Google Insights for Search, we've been providing daily insight into what the world is searching for. An understanding of search trends can be useful for advertisers, marketers, economists, scholars, and anyone else interested in knowing more about their world and what's currently top-of-mind.
As many have observed, the trends of some search queries are quite seasonal and have repeated patterns. For instance, the search trends for the query "ski" hit their peak during the winter seasons in the US and Australia. The search trends for basketball correlate with annual league events and are consistent year-over-year. When looking at trends of the aggregated volume of search queries related to particular categories, one can also observe regular patterns in some categories, like Food & Drink or Automotive. Such trends sequences appear quite predictable, and one would naturally expect the patterns of previous years to repeat going forward.
On the other hand, for many other search queries and categories, the trends are quite irregular and hard to predict. Examples include the search trends for obama, twitter, android, or global warming, and the trend of aggregate searches in the News & Current Events category.
Having predictable trends for a search query or for a group of queries could have interesting ramifications. One could forecast the trends into the future and use the forecast as a "best guess" for various business decisions, such as budget planning, marketing campaigns, and resource allocation. One could also identify deviations from such forecasts and thereby spot new factors influencing search volume, as demonstrated in Flu Trends.
We were therefore interested in the following questions:
How many search queries have trends that are predictable?
Are some categories more predictable than others? How are predictable trends distributed across the various categories?
How predictable are the trends of aggregated search queries for different categories? Which categories are more predictable and which are less so?
To learn about the predictability of search trends, and to overcome our basic limitation of not knowing what the future will entail, we characterize the predictability of a trends series based on its historical performance. In other words, we estimate the a posteriori predictability of a sequence, determined by the error of the forecasted trends vs. the actual performance.
Specifically, we used a simple forecasting model that learns basic seasonality and the general trend. For each trends sequence of interest, we take a point in time, t, about a year back, compute a one-year forecast from t based on the historical data available at time t, and compare it to the actual trends sequence observed since time t. The error between the forecasted trends and the actual trends characterizes the predictability level of the sequence; when the error is smaller than a pre-defined threshold, we denote the trends query as predictable.
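The classification procedure can be sketched in a few lines. This is an illustrative toy rather than the paper's actual model: it uses a naive "repeat last year" seasonal forecast and an arbitrary error threshold, where the real system fits seasonality and trend components to the historical data.

```python
import statistics

def mean_abs_pct_error(forecast, actual):
    """Mean absolute percentage error between forecast and actual series."""
    return statistics.mean(
        abs(f - a) / a for f, a in zip(forecast, actual) if a != 0
    )

def is_predictable(history, actual, threshold=0.25):
    """Classify a trends series as predictable if a naive seasonal
    forecast (repeat last year's values) lands within `threshold`
    mean absolute error of what actually happened.

    `history` holds past monthly values; the forecast for month i
    is simply the value observed one year earlier.
    """
    forecast = history[-len(actual):]  # naive: last year repeats
    return mean_abs_pct_error(forecast, actual) < threshold

# A seasonal series that roughly repeats is classified as predictable;
# one dominated by singular spikes of interest is not.
print(is_predictable([10, 20, 30, 40], [11, 19, 31, 42]))   # True
print(is_predictable([10, 20, 30, 40], [10, 100, 5, 40]))   # False
```

The threshold of 25% is an invented placeholder; the paper reports mean absolute prediction errors of about 12% for predictable queries, so any real cutoff would be tuned against that scale.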
Our work to date is summarized in a paper called On the Predictability of Search Trends, which includes the following observations:
Over half of the most popular Google search queries are predictable in a 12-month-ahead forecast, with a mean absolute prediction error of about 12%.
Nearly half of the most popular queries are not predictable (with respect to the model we have used).
Some categories have a particularly high fraction of predictable queries; for instance, Health (74%), Food & Drink (67%), and Travel (65%).
Some categories have a particularly low fraction of predictable queries; for instance, Entertainment (35%) and Social Networks & Online Communities (27%).
The trends of aggregated queries per category are much more predictable: 88% of the aggregated category search trends of the over 600 categories in Insights for Search are predictable, with a mean absolute prediction error of less than 6%.
There is a clear association between the existence of seasonality patterns and higher predictability, as well as an association between high levels of outliers and lower predictability. The Entertainment category, which typically shows less seasonal search behavior and relatively more singular spikes of interest, had a predictability of 35%, whereas the Travel category, with very seasonal behavior and a lower tendency toward short spikes of interest, had a predictability of 65%.
One should expect the actual search trends to deviate from the forecast even for many predictable queries, due to unforeseen events and dynamic circumstances.
We show the forecast vs. actual trends for a few categories, including some that were used recently for predicting the present of various economic indicators. This demonstrates how forecasting can serve as a good baseline for identifying interesting deviations in actual search traffic.
As we see that many of the search trends are predictable, we are introducing today a new forecasting feature in Insights for Search, along with a new version of the product. The forecasting feature is applied to queries that are identified as predictable (see, for instance, basketball or the trends in the Automotive category) and is shown as an extrapolation of the historical trends and search patterns.
There are many more questions to explore regarding search trends in general, and their predictability in particular, including designing and testing more advanced forecasting models, gaining further insight into the distributions of sequences, and demonstrating interesting deviations of actual vs. forecast for predictable trends series. We'd love to hear from you: share your findings, published results, or insights with us by email at insightsforsearch@google.com.
Under the Hood of App Inventor for Android
Tuesday, August 11, 2009
Posted by Bill Magnuson, Hal Abelson, and Mark Friedman
We recently announced our App Inventor for Android project on the Google Research Blog. That blog entry was long on vision but short on the technological details, which we think would be of interest to our readers.
Of particular interest is our use of Scheme. Part of our development environment is a visual programming language similar to Scratch. The visual language provides a drag-and-drop interface for assembling procedures and event handlers that manipulate high-level components of Android-based phones. The components are similar to the ones in the recently announced Simple; in fact, the code bases share an ancestor.
We parse the visual programming language into an S-expression intermediate language, which is a domain-specific language expressed as a set of Scheme macros, along with a Scheme runtime library. We did this for a few reasons:
S-expressions are easy to generate and read for both humans and machines.
Scheme macros are a convenient (albeit sometimes arcane) way to express S-expression based syntax.
Scheme is a small, powerful and elegant language well suited to describe and evaluate a large set of programming semantics. Additionally, it provides the flexibility that we require as our language and its semantics grow and develop.
Scheme expertise was readily available among our team.
A pre-existing tool (Kawa, by Per Bothner) to create Android-compatible output from Scheme code was already available.
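To give a flavor of why S-expressions are easy for both humans and machines to generate and read, here is a minimal parser that turns an S-expression string into nested lists. The block program it parses is hypothetical, invented for illustration; App Inventor's actual intermediate language is not shown here.

```python
def parse_sexp(src):
    """Parse a string holding one S-expression into nested Python
    lists. Atoms stay as strings; parentheses become list nesting."""
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        token = tokens[pos]
        if token == "(":
            node, pos = [], pos + 1
            while tokens[pos] != ")":
                child, pos = read(pos)
                node.append(child)
            return node, pos + 1  # skip the closing ")"
        return token, pos + 1

    tree, _ = read(0)
    return tree

# A hypothetical block program -- "when Button1 is clicked, set
# Label1's text" -- serialized as an S-expression:
ir = parse_sexp("(define-event Button1 Click (set-property Label1 Text hello))")
print(ir)
```

The uniform nested-list shape is the point: a code generator or macro expander can walk this tree with a few lines of recursion, which is exactly the convenience the bullet list above describes.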
For now the project is just an experiment we're performing with a dozen colleges and universities, but we hope to eventually open up the development environment to wider use and to open-source parts of the code.
Two Views from the 2009 Google Faculty Summit
Monday, August 03, 2009
Posted by Alfred Spector, Vice President of Research and Special Initiatives
[cross-posted with the Official Google Blog]
We held our fifth Computer Science Faculty Summit at our Mountain View campus last week. About 100 faculty members from schools in the Western Hemisphere attended the summit, which focused on a collection of technologies that serve to connect and empower people. The agenda included presentations on technologies for automated translation of human language, voice recognition, responding to crises, power monitoring, and collaborative data management. We also talked about technologies to make personal systems more secure, and about how to teach programming, even using Android phones. You can see a more complete list of topics in the Faculty Summit Agenda or check out my introductory presentation for more information.
I asked a few of the faculty to provide their perspective on the summit, thinking their views may be more valuable than our own: Professor Deborah Estrin, a Professor of Computer Science at UCLA and an expert in large-scale sensing of environmental and other information, and Professor John Ousterhout, an expert in distributed operating systems and scripting languages.
Professor Estrin's perspective:
We all know that Google has produced a spectacular array of technologies and services that have changed the way we create, access, manage, share and curate information. A very broad range of people samples and experiences Google's enhancements and new services on a daily basis. I, of course, am one of those minions, but last week I had the special opportunity to get a glimpse inside the hive while attending the 2009 Google Faculty Summit. I still haven't processed all of the impressions, facts, figures and URLs that I jotted down over the packed day-and-a-half-long gathering, but here are a few of the things that impressed me most:
The way Google launches production services while simultaneously making great advances in really hard technical areas such as machine translation and voice search, and how these two threads are fully intertwined and feed off of one another.
Their embrace of open source activities, particularly in the Android operating system and programming environment for mobiles. They also seed and sponsor all sorts of creative works, from K-12 computer science learning opportunities to the Open Data Kit, which supports data-gathering projects worldwide.
The company's commitment to thinking big and supporting their employees in acting on their concerns and cares in the larger geopolitical sphere. From the creation of Flu Trends to the support of a new "Crisis Response Hackathon" (an event that Google, Microsoft, and Yahoo are planning to jointly sponsor to help programmers find opportunities to use their technical skills to solve societal problems), Googlers are not just encouraged to donate dollars to important causes; they are encouraged to use their technical skills to create new solutions and tools to address the world's all-too-many challenges.
This was my second Google Faculty Summit; I previously attended in 2007. I was impressed by the 2007 summit, but not as deeply as I was this year. Among other things, this year I felt that Googlers talked to us like colleagues instead of just visitors. The conversations flowed: not once did I run up against the "Sorry, can't talk about that... you know our policy on early announcements." I left quite excited about Google's expanded role in the CS research ecosystem. Thanks for changing that API!
Professor Ousterhout's perspective:
I spent Thursday and Friday this week at Google for their annual Faculty Summit. After listening to descriptions of several Google projects and talking with Googlers and the other faculty attendees, I left with two overall takeaways. First, it's becoming clear that information at scale is changing science and engineering. Access to enormous datasets opens up whole new avenues for scientific discovery and for solving problems. For example, Google's machine translation tools take advantage of "parallel texts": documents that have been translated by humans from one language to another, with both forms available. By comparing the sentences of enormous numbers of parallel texts, a system can learn to translate effectively using simple probabilistic approaches. The results are better than any previous attempts at computerized translation, but only if there are billions of words available in parallel texts. Another example of using large-scale information is Flu Trends, which tracks the spread of flu by counting the frequency of certain search terms in Google's search engine; the data is surprisingly accurate and available more quickly than data from traditional approaches.
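The "simple probabilistic approaches" can be illustrated with a toy co-occurrence counter over aligned sentence pairs. Real statistical translation systems are far more sophisticated, and the corpus and helper names here are invented for illustration, but the core idea is the same: across enough parallel text, the true translation of a word co-occurs with it more often than any spurious pairing does.

```python
from collections import Counter
from itertools import product

def cooccurrence_table(parallel_sentences):
    """Count how often each (source word, target word) pair appears
    together in aligned sentence pairs. With enough parallel text,
    the true translation tends to dominate each source word's counts."""
    counts = Counter()
    for src, tgt in parallel_sentences:
        for pair in product(src.split(), tgt.split()):
            counts[pair] += 1
    return counts

def best_translation(counts, word):
    """Pick the target word that most often co-occurs with `word`."""
    candidates = {t: c for (s, t), c in counts.items() if s == word}
    return max(candidates, key=candidates.get)

# A tiny invented English-French "parallel corpus":
corpus = [
    ("the house", "la maison"),
    ("the white house", "la maison blanche"),
    ("a house", "une maison"),
]
table = cooccurrence_table(corpus)
print(best_translation(table, "house"))  # maison
```

Here "maison" wins because it co-occurs with "house" in all three sentence pairs, outvoting the spurious pairings with "la" and "une"; the billions-of-words requirement in the paragraph above is what makes this kind of counting reliable at scale.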
My second takeaway is that it's crucial to keep as much information as possible publicly available. It used to be that much of science and engineering was driven by technology: whoever had the biggest particle accelerator or the fastest computer had an advantage. From now on, information will be just as important as technology: whoever has access to the most information will make the most discoveries and create the most exciting new products. If we want to maintain the leadership position of the U.S., we must find ways to make as much information as possible freely available. There will always be vested commercial interests that want to restrict access to information, but we must fight these interests. The overall benefit to society of publishing information outweighs the benefit to individual companies from restricting it.