Google Research Blog
The latest news from Research at Google
Announcing Google-hosted workshop videos from NIPS 2011
Thursday, February 23, 2012
Posted by John Blitzer and Douglas Eck, Google Research
At the 25th Neural Information Processing Systems (NIPS) conference in Granada, Spain last December, we engaged in dialogue with a diverse population of neuroscientists, cognitive scientists, statistical learning theorists, and machine learning researchers. More than twenty Googlers participated in an intensive single-track program of talks, nightly poster sessions, and a workshop weekend in the Spanish Sierra Nevada mountains. Check out the NIPS 2011 blog post for full information on Google at NIPS.
In conjunction with our technical involvement and gold sponsorship of NIPS, we recorded the five workshops that Googlers helped to organize on various topics from big learning to music. We’re now pleased to provide access to these rich workshop experiences to the wider technical community.
Watch videos of Googler-led workshops on the YouTube Tech Talks Channel:
Big Learning: Algorithms, Systems, and Tools for Learning at Scale
by Joseph Gonzalez, Sameer Singh, Graham Taylor, James Bergstra, Alice Zheng, Misha Bilenko, Yucheng Low, Yoshua Bengio, Michael Franklin, Carlos Guestrin, Andrew McCallum, Alexander Smola, Michael Jordan, Sugato Basu (Googler)
Domain Adaptation Workshop: Theory and Application
by John Blitzer, Corinna Cortes, Afshin Rostamizadeh (all Googlers)
Learning Semantics
by Antoine Bordes, Jason Weston (Googler), Ronan Collobert, Leon Bottou
Sparse Representation and Low-rank Approximation
by Ameet Talwalkar, Lester Mackey, Mehryar Mohri (Googler), Michael Mahoney, Francis Bach, Mike Davies, Remi Gribonval, Guillaume Obozinski
International Workshop on Music and Machine Learning: Learning from Musical Structure
by Rafael Ramirez, Darrell Conklin, Douglas Eck (Googler), Ryan Rifkin (Googler)
To highlight a few workshops:
The Domain Adaptation workshop organized by Google, which fused theoretical and practical domain adaptation, featured invited talks from Shai Ben-David and Googler Mehryar Mohri on the theory side and Dan Roth on the applications side. This was just next door to Googlers Doug Eck and Ryan Rifkin's workshop on Machine Learning and Music, with musical demonstrations loud enough for the next-door neighbors to ask them to “turn it down a bit, please.” In addition to the Googler-run workshops, the Integrating Language and Vision workshop showcased invited talks by Google postdoctoral fellow Percy Liang on the pragmatics of visual scene description and Josh Tenenbaum on physical models as a cognitively plausible mechanism for bridging language and vision. Finally, Google consultant Andrew Ng was one of the organizers of the Deep Learning and Unsupervised Feature Learning workshop, which offered an extended tutorial, several inspiring talks, and two panel discussions (one with Googler Samy Bengio as a panelist) exploring the question of “How deep is deep?”
As the workshop weekend drew to a close, an airline strike in Spain left NIPS attendees scrambling to get home for the holidays. We hope the skies look clear for 2012 when NIPS lands in Google’s neck of the woods, Lake Tahoe!
2011 EMEA Android Educational Outreach Program Awards Mobile Phones to Universities
Wednesday, February 22, 2012
Posted by David Harper, Head of University Relations, EMEA
As part of EMEA’s 2011 Android Educational Outreach program, we recently granted over 300 Android-powered mobile phones to 40 universities across Europe, the Middle East, and Africa. These phones will be used to support mobile-related project work in university teaching and research. Our steering committee reviewed applications from 77 universities in 24 countries across the region and selected finalists based on each proposal’s potential to generate interest in mobile engineering, reach many students, and be applicable both within and outside the university.
This is the second year we have awarded mobile phones to universities, largely thanks to the enthusiastic feedback from last year’s recipients, who were interested in continued support for Android project work. The phones donated last year were used in a range of interesting projects, including:
George Candea, EPFL (Switzerland): The Pocket Campus, an application that helps students, graduates, staff, and visitors find their way around the EPFL campus, was created as a course project. After the course, some of the students decided to continue development of the application. It has become so successful that it is now EPFL’s campus-wide smartphone app.
Andrew Rice, University of Cambridge (United Kingdom): Students in the summer programme developed Learn!, a flashcard-based learning application that is available in Android Market. The project investigated how one might incorporate features of modern phones, such as multimedia capture and playback, data communications, and significant computational power, into a learning application.
Alan Smeaton and colleagues, Dublin City University (Ireland): Undergraduate, master’s, and PhD students embarked on a wide variety of projects, which included lifelogging (recording everyday activities using the phone); measuring the strength of wireless networks as an aid to mapping wireless propagation; and interface design for an augmented reality application.
Nicolae Tapus, University Politehnica of Bucharest (Romania): Numerous applications were developed by students, including TaxiFinder, an application that finds the closest taxi number with the lowest price, and Viewlity, an augmented reality engine for showing nearby points of interest (e.g., gas stations, restaurants, ATMs, places of worship) on an Android phone.
Gerhard Tröster, ETH Zurich (Switzerland): Martin Wirz and his team are using mobile phones to conduct research in the fields of wearable computing and machine learning. The devices are used to collect all kinds of sensor information (e.g., accelerometer, magnetometer, microphone, GPS) to infer personal activities, psychological behaviors, and social phenomena.
We are looking forward to sharing the great projects resulting from this year’s Android Educational Outreach program early next summer.
Quantifying comedy on YouTube: why the number of o’s in your LOL matters
Thursday, February 09, 2012
Posted by Sanketh Shetty, YouTube Slam Team, Google Research
In a previous post, we talked about quantifying musical talent using machine learning on acoustic features for YouTube Music Slam. We wondered if we could do the same for funny videos, i.e., answer questions such as: is a video funny, how funny do viewers think it is, and why is it funny? We noticed a few audiovisual patterns across comedy videos on YouTube, such as shaky camera motion or audible laughter, which we can automatically detect. While content-based features worked well for music, identifying humor from such features alone is AI-Complete. Humor preference is subjective, perhaps even more so than musical taste.
Fortunately, at YouTube, we have more to work with. We focused on videos uploaded in the comedy category. We captured the uploader’s belief in the funniness of their video via features based on title, description, and tags. Viewers’ reactions, in the form of comments, further validate a video’s comedic value. To this end, we computed more text features based on words associated with amusement in comments. These included (a) sounds associated with laughter such as hahaha, with culture-dependent variants such as hehehe, jajaja, kekeke, (b) web acronyms such as lol, lmao, rofl, (c) funny and synonyms of funny, and (d) emoticons such as :), ;-), xP. We then trained classifiers to identify funny videos and to tell us why they are funny by categorizing them into genres such as “funny pets”, “spoofs or parodies”, “standup”, “pranks”, and “funny commercials”.
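As an illustration, the amusement cues listed above can be counted with a few regular expressions. The patterns and category names below are a minimal sketch for the purpose of this post, not the features actually used in the system:

```python
import re

# Illustrative patterns for the four cue families described above.
PATTERNS = {
    # (a) laughter sounds, including culture-dependent variants
    "laughter": re.compile(
        r"\b(?:ha){2,}h?\b|\b(?:he){2,}h?\b|\b(?:ja){2,}\b|\b(?:ke){2,}\b", re.I),
    # (b) web acronyms, allowing elongation like "loool" and "lmaoo"
    "acronym": re.compile(r"\blo+l\b|\blmao+\b|\brofl\b", re.I),
    # (c) "funny" and a couple of synonyms
    "funny": re.compile(r"\b(?:funny|hilarious|amusing)\b", re.I),
    # (d) emoticons such as :), ;-), xP
    "emoticon": re.compile(r"[:;]-?\)|\bx[pP]\b"),
}

def amusement_features(comment: str) -> dict:
    """Count occurrences of each amusement cue in a single comment."""
    return {name: len(pat.findall(comment)) for name, pat in PATTERNS.items()}
```

Per-comment counts like these could then be aggregated over all of a video’s comments to form classifier features.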
Next, we needed an algorithm to rank these funny videos by comedic potential, e.g., is “Charlie bit my finger” funnier than “David after dentist”? Raw view count on its own is insufficient as a ranking metric, since it is biased by video age and exposure. We noticed that viewers emphasize their reaction to funny videos in several ways: capitalization (LOL), elongation (loooooool), repetition (lolololol), exclamation (lolllll!!!!!), and combinations thereof. If a user writes “loooooool” rather than “loool”, does it mean they were more amused? We designed features to quantify the degree of emphasis on words associated with amusement in viewer comments. We then trained a passive-aggressive ranking algorithm using human-annotated pairwise ground truth and a combination of text and audiovisual features. As with Music Slam, we used this ranker to populate candidates for human voting in our Comedy Slam.
So far, more than 75,000 people have cast more than 700,000 votes, making comedy our most popular slam category. Give it a try!
Further reading:
“Opinion Mining and Sentiment Analysis,” by Bo Pang and Lillian Lee.
“A Great Catchy Name: Semi-Supervised Recognition of Sarcastic Sentences in Online Product Reviews,” by Oren Tsur, Dmitry Davidov, and Ari Rappoport.
“That’s What She Said: Double Entendre Identification,” by Chloe Kiddon and Yuriy Brun.