Engineering and Developers Blog
What's happening with engineering and developers at YouTube
Machine learning for video transcoding
Friday, May 13, 2016
At YouTube we care about the quality of the pixels we deliver to our users. With many millions of devices uploading to our servers every day, the content variability is so huge that delivering acceptable audio and video quality in every playback is a considerable challenge. Nevertheless, our goal has been to continuously improve quality by reducing the compression artifacts that our users see on each playback. While we could do this by increasing the bitrate for every file we create, that would quite easily exceed the capacity of many of the network connections available to you. Another approach is to optimize the parameters of our video processing algorithms to meet bitrate budgets and minimum quality standards. While Google’s compute and storage resources are huge, they are finite, so we must also temper our algorithms to fit within compute requirements. The hard problem, then, is to adapt our pipeline to create the best quality output for each clip you upload to us, within constraints of quality, bitrate and compute cycles.
This is a well-known triad in the world of video compression and transcoding. The problem is usually solved by finding a sweet spot of transcoding parameters that seems to work well on average for a large number of clips. That sweet spot is sometimes found by trying every possible set of parameters until one is found that satisfies all the constraints. Recently, others have been using this “exhaustive search” idea to tune parameters on a per-clip basis.
What we’d like to show you in this blog post is a new technology we have developed that adapts our parameter set for each clip automatically using Machine Learning. We’ve been using this over the last year for improving the quality of movies you see on YouTube and Google Play.
The good and bad about parallel processing
We ingest more than 400 hours of video per minute. Each file must be transcoded from the uploaded video format into a number of other video formats with different codecs so we can support playback on any device you might have. The only way we can keep up with that rate of ingest and quickly show you your transcoded video in YouTube is to break each file into pieces called “chunks” and process these in parallel. Every chunk is processed independently and simultaneously by CPUs in our Google cloud infrastructure. The complexity involved in chunking and recombining the transcoded segments is significant. Quite aside from the mechanics of assembling the processed chunks, maintaining the quality of the video in each chunk is a challenge. This is because, to have as speedy a pipeline as possible, our chunks don’t overlap and are also very small: just a few seconds. So the good thing about parallel processing is increased speed and reduced latency. But the bad thing is that without information about the video in the neighboring chunks, it’s now difficult to control chunk quality so that there is no visible difference between the chunks when we tape them back together. Small chunks don’t give the encoder much time to settle into a stable state, so each encoder treats each chunk slightly differently.
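The chunked pipeline described above can be sketched in a few lines. This is an illustrative toy, not YouTube's production system: the chunk length and the per-chunk worker are hypothetical stand-ins, and the point is only that non-overlapping spans are encoded with no knowledge of their neighbors.

```python
from concurrent.futures import ProcessPoolExecutor

CHUNK_SECONDS = 4  # hypothetical chunk length; real chunks are "just a few seconds"

def split_into_chunks(duration_s, chunk_s=CHUNK_SECONDS):
    """Return (start, end) times for non-overlapping chunks covering the clip."""
    starts = range(0, int(duration_s), chunk_s)
    return [(s, min(s + chunk_s, duration_s)) for s in starts]

def transcode_chunk(span):
    """Placeholder for the per-chunk encode; each chunk is handled independently."""
    start, end = span
    return {"span": span, "seconds": end - start}

def transcode(duration_s):
    spans = split_into_chunks(duration_s)
    # every chunk is encoded in parallel, with no inter-chunk communication
    with ProcessPoolExecutor() as pool:
        return list(pool.map(transcode_chunk, spans))
```

Because `transcode_chunk` never sees neighboring spans, nothing in this structure prevents the quality mismatch at chunk boundaries that the rest of the post addresses.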
Smart parallel processing
You could say that we are shooting ourselves in the foot before starting the race. Clearly, if we communicate information about chunk complexity between the chunks, each encoder can adapt to what’s happening in the chunks after or before it. But inter-process communication increases overall system complexity and requires some extra iterations in processing each chunk.
Actually, OK, truth is we’re stubborn here in Engineering and we wondered how far we could push this idea of “don’t let the chunks talk to each other.”
The plot below shows an example of the PSNR in dB per frame over two chunks from a 720p video clip, using H.264 as the codec. A higher value of PSNR means better picture quality and a lower value means poorer quality. You can see that one problem is the quality at the start of a chunk is very different from that at the end of the chunk. Aside from the average quality level being worse than we would like, this variability in quality causes an annoying pulsing artifact.
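For reference, the PSNR measure used in that plot compares a decoded frame against the original, frame by frame. A minimal numpy sketch of the standard definition for 8-bit video:

```python
import numpy as np

def psnr(reference, decoded, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two frames (8-bit arrays)."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10((max_value ** 2) / mse)
```

Higher PSNR means the decoded frame is closer to the original, which is why a dip at the start of a chunk shows up as visibly poorer quality there.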
Because of small chunk sizes, we would expect that each chunk behaves like the previous and next one, at least statistically. So we might expect the encoding process to converge to roughly the same result across consecutive chunks. While this is true much of the time, it is not true in this case. One immediate solution is to change the chunk boundaries so that they align with high activity video behavior like fast motion or a scene cut. Then we would expect each chunk to be relatively homogeneous, so the encoding result should be more uniform. It turns out that this does improve the situation, but not as much as we’d like, and the instability is still often there.
The key is to allow the encoder to process each chunk multiple times, learning on each iteration how to adjust its parameters in anticipation of what happens across the entire chunk instead of just a small part of it. This results in the start and end of each chunk having similar quality, and because the chunks are short, it is now more likely that the differences across chunk boundaries are also reduced. But even then, we noticed that it can take quite a number of iterations for this to happen. We observed that the number of iterations is affected a great deal by the quantization related parameter (CRF) of the encoder on that first iteration. Even better, there is often a “best” CRF that allows us to hit our target bitrate at a desired quality with just one iteration. But this “best” setting is actually different for every clip. That’s the tricky bit. If only we could work out what that setting was for each clip, then we’d have a simple way of generating good looking clips without chunking artifacts.
The plot on the right shows the result of many experiments with our encoder at varying CRF (constant quality) settings, over the same 1080p clip. After each experiment we measured the bitrate of the output file and each point shows the CRF, bitrate pair for that experiment. There is a clear relationship between these two values. In fact it is very well modeled as an exponential fit with three parameters, and the plot shows just how good that modeled line is in fitting the observed data points. If we knew the parameters of the line for our clip, then we’d see that to create a 5 Mbps version of this clip (for example) we’d need a CRF of about 20.
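Once the three parameters of that fitted curve are known, reading off the CRF for a target bitrate is just inverting the model. The sketch below assumes one plausible three-parameter exponential form (the post does not give the exact equation) and uses made-up parameter values, not YouTube's:

```python
import math

def bitrate_model(crf, a, b, c):
    """Hypothetical three-parameter exponential: bitrate falls as CRF rises."""
    return a * math.exp(-b * crf) + c

def crf_for_bitrate(target, a, b, c):
    """Invert the model to read off the CRF that hits a target bitrate."""
    return -math.log((target - c) / a) / b

# with fitted parameters in hand, the lookup is a one-liner:
a, b, c = 40.0, 0.1, 0.5              # illustrative values only
crf = crf_for_bitrate(5.0, a, b, c)   # CRF needed for a 5 Mbps target
```

With these illustrative values the answer comes out near CRF 22, the same ballpark as the "CRF of about 20" example in the text. In practice the parameters would be fitted to measured (CRF, bitrate) pairs with a nonlinear least-squares routine.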
Pinky and the Brain
What we needed was a way to predict our three curve fitting parameters from low complexity measurements about the video clip. This is a classic problem in machine learning, statistics and signal processing. The gory mathematical details of our solution are in technical papers that we published recently [1]. You can see there how our thoughts evolved. Anyway, the idea is rather simple: predict the three parameters given things we know about the input video clip, and read off the CRF we need. This prediction is where the “Google Brain” comes in.
The “things we know about the input video clip” are called video “features.” In our case this is a vector of features containing measurements like input bitrate, motion vector bits in the input file, resolution of the video and frame rate. These measurements can also be made from a very fast low quality transcode of the input clip to make them more informative. However, the exact relationship between the features and the curve parameters for each clip is rather more complicated than an equation we could write down. So instead of trying to discover that explicitly ourselves, we turned to Machine Learning with Google Brain. We first took about 10,000 video clips and exhaustively tested every quality setting on each, measuring the resulting bitrate from each setting. This gave us 10,000 curves, which in turn gave us 4 x 10,000 parameters measured from those curves.
The next step was to extract features from our video clips. Having generated the training data and the feature set, our Machine Learning system learned a “Brain” configuration that could predict the parameters from the features. Actually we used both a simple “regression” technique as well as the Brain. Both outperformed our existing strategy. Although the process of training the Brain is relatively computationally heavy, the resulting system was actually quite simple and required only a few operations on our features. That meant that the compute load in production was small.
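The "simple regression" baseline mentioned above can be sketched with ordinary least squares: learn a linear map from the feature vector to the curve parameters. Everything here is illustrative, with synthetic stand-ins for the real features and the parameters measured from the exhaustive encodes:

```python
import numpy as np

# hypothetical training set: one row of features per clip
# (e.g. input bitrate, motion-vector bits, pixel count, frame rate), and the
# curve parameters measured from that clip's exhaustive trial encodes
rng = np.random.default_rng(0)
features = rng.random((1000, 4))
true_weights = rng.random((4, 3))
params = features @ true_weights          # stand-in for measured parameters

# linear least squares from features to parameters, with a bias column
X = np.hstack([features, np.ones((len(features), 1))])
weights, *_ = np.linalg.lstsq(X, params, rcond=None)

def predict_params(feature_vec):
    """Predict the curve parameters for a new clip from its features."""
    return np.append(feature_vec, 1.0) @ weights
```

Once trained, prediction is a single matrix multiply per clip, which matches the observation that the production compute load was small even though training was heavy.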
Does it work?
The plot on the right shows the performance of the various systems on 10,000 video clips. Each point (x,y) represents the percentage of clips (y-axis) in which the resulting bitrate after compression is within x% of the target bitrate. The blue line shows the best case scenario, where we use exhaustive search to get the perfect CRF for each clip. Any system that gets close to that is a good one. As you can see, at the 20% point our old system (green line) would hit the target bitrate 15% of the time. Now with our fancy Brain system we can hit it 65% of the time using features from your upload only (red line), and better than 80% of the time (dashed line) using some features from a very fast low quality transcode.
But does this actually look good? You may have noticed that we concentrated on our ability to hit a particular bitrate rather than specifically addressing picture quality. Our analysis of the problem showed that this was the root cause. Pictures are the proof of the pudding and you can see some frames from a 720p video clip below (shot from a racing car). The top row shows two frames at the start and end of a typical chunk and you can see that the quality in the first frame is way worse than the last. The bottom row shows the frames in the same chunk using our new automated clip adaptive system. In both cases the measured bitrate is the same at 2.8 Mbps. As you can see, the first frame is much improved and as a bonus the last frame looks better as well. So the temporal fluctuation in quality is gone and we also managed to improve the clip quality overall.
This concept has been used in production in our video infrastructure division for about a year. We are delighted to report it has helped us deliver very good quality streams for movies like "Titanic" and most recently "Spectre." We don’t expect anyone to notice, because they don’t know what it would look like otherwise.
But there is always more we can do to improve on video quality. We’re working on it. Stay tuned.
Anil Kokaram, Engineering Manager, AV Algorithms Team, recently watched "Tony Cozier speaking about the West Indies Cricket Heritage Centre," Yao Chung Lin, Software Engineer, Transcoder Team, recently watched "UNDER ARMOUR | RULE YOURSELF | MICHAEL PHELPS," Michelle Covell, Research Scientist, recently watched "Last Week Tonight with John Oliver: Scientific Studies (HBO)" and Sam John, Software Engineer, Transcoder Team, recently watched "Atlantis Found: The Clue in the Clay | History."
[1] Optimizing transcoder quality targets using a neural network with an embedded bitrate model, Michele Covell, Martin Arjovsky, Yao-Chung Lin and Anil Kokaram, Proceedings of the Conference on Visual Information Processing and Communications 2016, San Francisco.
Multipass encoding for reducing pulsing artefacts in cloud based video transcoding, Yao-Chung Lin, Anil Kokaram and Hugh Denman, IEEE International Conference on Image Processing, pp. 907-911, Quebec 2015.
A look into YouTube’s video file anatomy
Wednesday, April 20, 2016
Over 1 billion people use YouTube, watching hundreds of millions of hours of content all over the world every day. We have been receiving content at a rate exceeding 100 hours/min for the last three years (currently at 400 hours/min). With those kinds of usage statistics, what we see on ingest actually says something about the state of video technology today.
Video files are the currency of video sharing and distribution over the web. Each file contains the video and audio data wrapped up in some container format and associated with metadata that describes the nature of the content in some way. To make sure each user can “Broadcast yourself” we have spent years building systems that can faithfully extract the video and audio data hidden inside almost any kind of file you can imagine. That is why when our users upload to YouTube they have confidence that their video and audio will always appear.
The video and audio data is typically compressed using a codec and of course the data itself comes in a variety of resolutions, frame rates, sample rates and channels (in the case of audio). As technology evolves, codecs get better, and the nature of the data itself changes, typically toward higher fidelity. But how much variety is there in this landscape and how has that variety changed with time? We’ve been analyzing the anatomy of files you’ve been uploading over the years and think it reflects how video technology has changed.
Audio/video file anatomy
Audio/video files contain audio and video media which can be played or viewed on multimedia devices like a TV, desktop or smartphone. Each pixel of video data is associated with values for brightness and color which tell the display how that pixel should appear. A quick calculation on the data rate for the raw video data shows that for 720p video at 30 frames per second the data rate is in excess of 420 Mbits/sec. Raw audio data rates are smaller but still significant at about 1.5 Mbits/sec for 44.1 KHz sampling with 16 bits per sample. These rates are well in excess of the tens of Mbits/sec (at most) that many consumers have today. By using compression technology that same 400+ Mbits/sec of data can be expressed in less than 5 Mbits/sec. This means that audio and video compression is a vital part of any practical media distribution system. Without compression we would not be able to stream media over the internet in the way everyone enjoys now.
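The back-of-envelope arithmetic above is easy to reproduce. The 16 bits per pixel figure below is an assumption (it corresponds to 4:2:2 chroma subsampling) chosen because it lands in the range the text quotes:

```python
def raw_video_mbps(width, height, fps, bits_per_pixel=16):
    """Raw video data rate in Mbit/s (16 bpp assumes 4:2:2 chroma sampling)."""
    return width * height * fps * bits_per_pixel / 1e6

def raw_audio_mbps(sample_rate, bits_per_sample, channels):
    """Raw PCM audio data rate in Mbit/s."""
    return sample_rate * bits_per_sample * channels / 1e6
```

For 1280x720 at 30 fps this gives just over 440 Mbit/s, and 44.1 KHz stereo at 16 bits gives about 1.4 Mbit/s, matching the figures in the text.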
There are three main components of media files today: the container, the compressed bitstream itself and finally metadata. The bitstream (called the video and audio “essence”) contains the actual audio and video media in a compressed form. It will also contain information about the size of the pictures and start and end of frames so that the codec knows how to decode the picture data in the right way. This information embedded in the bitstream is still not enough though. The “container” refers to the additional information that helps the decoder work out when a video frame is to be played, and when the audio data should be played relative to the frame. The container often also holds an index to the start of certain frames in the bitstream. This makes it easier for a player system to allow users to “seek” or “fast forward” through the contents. The container will also hold information about the file content itself like the author, and other kinds of “metadata” that could be useful for a rights holder or “menu” on a player for instance. So the bitstream contains the actual picture and audio, but the container lets the player know how that content should be played.
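To make the container idea concrete, here is a minimal sketch of how a parser might walk the top-level boxes of an MP4 (ISO BMFF) file, the most common container in our ingest. The box layout (a 4-byte big-endian size followed by a 4-byte type) is part of the ISO BMFF specification; the 64-bit and to-end-of-file size variants are deliberately left out of this sketch:

```python
import struct

def top_level_boxes(data):
    """Walk the top-level boxes of an ISO BMFF (MP4) byte string.

    Each box starts with a 4-byte big-endian size (which includes the
    8-byte header) and a 4-byte type. The 'moov' box carries the index
    and metadata, while 'mdat' carries the compressed essence.
    """
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:
            break  # size 0 (to end of file) and 1 (64-bit size) not handled here
        boxes.append((box_type.decode("ascii"), size))
        offset += size
    return boxes
```

A real demuxer then descends into each box, which is how a player finds the timing and seek index without decoding any essence.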
Standardization of containers and codecs was vital for the digital video industry to take off as it did in the late 1990s. The Moving Picture Experts Group (MPEG) was the consortium responsible, and it is still active today. The interaction between containers and codecs has been so tight in the past that quite often the container and the codec might have the same name, because they arise from the same standards document. Needless to say, there are many different standards for the various components in a media file. Today we have MPEG and the Alliance for Open Media (AOM) emerging as the two major bodies engaged in creating new media compression and distribution technology. This is what makes the job of YouTube so challenging. We must correctly decode your content despite the endless variety, and despite the frequent errors and missing components in uploaded files. We deal with thousands of combinations of containers and codecs every week.
Containers
The plot below shows the percentage of files uploaded having the same container month on month over the last five years. Each container is associated with the same color over time. The legend is ordered from the bottom up. The container type used in the largest fraction of uploads is at the bottom.
In 2011, MP4 (.mp4), Audio Video Interleave (.avi), Flash Video (.flv), Advanced Systems Format (.asf) and MPEG Transport Stream (.ts) were more equally distributed than they are now. But over the years MP4 has overtaken them all to become the most common ingest container format. Open source formats like WebM and Matroska seem to have been slowly gaining in popularity since about 2012, which is when we started rolling out the open source VP9 codec. Windows Media files (using the .asf container) and Flash Video have declined significantly. On the other end of the scale, files using Creative Labs video containers (for instance), which were popular before 2011, are hardly ever seen in our ingest today.
Codecs
The history of ingested codec types reflects the speed with which new codecs are adopted by hardware manufacturers and the makers of software editing and conforming systems. The chart below looks at the top ten video codecs back in 2011 and reveals how they have fared since then in our ingest profile. The VP range of codecs (VP6 - VP8) still figures in our ingest today; in fact, compared to 2011, VP8 ranks seventh in our 2015 top ten. Clearly H.264 is the dominant codec we see in use for upload to YouTube now, but MPEG4 and Windows Media bitstreams are still significant. This is very different from the situation in 2011, when almost every codec had a significant share of our ingest profile. This reflects how heterogeneous the video compression landscape was five years ago, with no dominant compression technology. The chart shows how rapidly the ecosystem moves to adopt a compression technology as soon as it proves itself: just five years. Uploads from mobile devices have also driven this trend, as efficient codec technology enables more uploads from low power devices with low bandwidth availability. In that time we have seen the almost complete erosion of Flash Video (FLV) and MPEG1/2 video for upload to YouTube, both of which appear to have reached some kind of low volume steady state in our ingest.
The situation with audio codecs follows similar trends. The chart below shows the top 15 codecs we see on ingest, measured over 2015. Five years ago we saw a very heterogeneous landscape with Raw audio data (PCM), Windows Media (WMA), MPEG and Advanced Audio (AAC) all contributing significant proportions. Over the intervening time the AAC codec has grown to dominate the profile of audio codecs, but PCM, WMA and MP3 are still significant. It's interesting that we see a pretty steady rate of media with no audio at all (shown as “No Audio”), although the total proportion is of course small. The use of the VORBIS open source audio codec got a boost in 2012 when the new version was released. Although it is hard to see from the chart, OPUS follows a similar pattern with uploads starting to kick off in late 2012 once the reference software was available and then a boost in uploads in 2013 coinciding with the next API release.
Properties
But what about the nature of the video and audio media itself? Is there evidence to show that capture is increasing in resolution and color fidelity? This section reinforces the law that “on the internet, everything gets bigger with time.”
Picture size
The chart below stacks the proportions of each resolution in our ingest against month. The legend shows the top ten resolutions by proportion of ingest as measured over the last year, with the topmost label being the largest proportion. There is always some disparity between “standard” picture sizes and the actual uploaded sizes. Those which do not fall into the labels used here are allocated to “OTHER.” Although the vast majority of our ingest shows standard picture sizes, that “OTHER” category has been persistently steady, showing that there will always be about 10 percent of our uploaders who upload non-standard sizes. The trend is clearly toward bigger pictures, with 480p dominating five years ago and HD (720p and 1080p) dominating now. It is interesting that we do not see step changes in behavior but rather a gradual acceleration to higher pixel densities. The 480p resolution does appear to be in a permanent decline however. 720p seems set to replace “vanilla” 480p in about a year.
With the 4K and 8K formats we see rapid take-up reflected in our ingest. The chart below breaks out just these two resolutions. Although understandably small as a proportion of the whole YouTube ingest profile, these formats are still significant, and we notice that take-up spiked once announcements were made in 2013 (4K) and 2015 (8K). What is even more interesting is that the upload of 4K content started well before the “official” announcement of support. Our creators are always pushing the limits of our ingest, and this is good evidence of that.
Audio channels
We observe that an increasing percentage of our media that contain audio have stereo audio tracks, as shown below in red. We also show here the relative amount of files having no audio (about 5 percent in 2015); the trend here is similar to that in the audio codec chart shown previously. A growing proportion of tracks contain 5.1 material, but that is swamped by the amount of mono and stereo sound files. A linear prediction of the curves below would seem to imply that mono audio will decline to less than 5 percent of ingest in just over a year’s time.
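That kind of linear prediction is a one-line fit. The yearly proportions below are made-up illustrative numbers, not our actual ingest figures; the point is only the extrapolation mechanic:

```python
import numpy as np

# hypothetical yearly proportions of mono uploads (fractions, not real data)
years = np.array([2011, 2012, 2013, 2014, 2015], dtype=float)
mono = np.array([0.22, 0.18, 0.14, 0.10, 0.07])

# least-squares line through the observed points, then extrapolate forward
slope, intercept = np.polyfit(years, mono, 1)

def predicted_mono(year):
    """Linear extrapolation of the mono proportion to a future year."""
    return slope * year + intercept
```

With these illustrative numbers the fitted line crosses the 5 percent mark during 2016, i.e. "just over a year" after the 2015 data point, which is the shape of argument made in the text.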
Interlacing
Interlacing is still with us. This is the legacy TV broadcast practice of constructing a video frame from two half height images that record the odd and even lines of the final frame, but at slightly different times. The fraction of our content that is interlaced on upload appears to be roughly 2-3 percent averaged over the last five years and there is no sign of that actually dwindling. This is perhaps because of the small but significant made-for-TV content that is uploaded. The reasons for the observed rapid changes in some months are intriguing. One suggestion is correlation with unusually large volume TV coverage e.g. 2012 Olympics and the U.S. election.
Color spaces
We are continually working on our ability to reproduce color faithfully between ingest and display. This is a notoriously challenging task across the consumer display industry for TVs, monitors and mobile devices. The first step to color nirvana is the correct specification of the color space in the associated video file. Although color space specifications have been in place for some time, it has taken a long while for file-based content to properly represent this data across a wide range of consumer devices. The chart below reflects our observations of the top five spaces we see. We started collecting information in 2012 and, compared to the stability in codecs and containers, the specification of color spaces in video data is clearly still evolving. It is only in the last three years that we have started to observe more consistent color labeling of video data, and as the chart below shows, BT709 (the default color space for HD resolution) has emerged as the dominant color space definition.

At the end of 2015 there was still an alarmingly large proportion of video files without any color information: more than 70 percent. (Note that the vertical axis on the chart below starts from 70 percent.) The trend in that proportion is downwards, and if we extend the curve of the decline in unspecified color spaces, it appears that it will be about a year before we can expect the majority of files to have some color specification, and two years before almost all files contain that metadata. At the end of 2015 we also started to observe the first ingested files expressing the recent BT2020 color space. These of course account for a tiny proportion of ingest (< 0.005 percent), but they herald the start of the HDR technology rollout (BT2020 is a color space associated with that format) and reflect the various announcements about HDR-capable devices made at CES 2016.
Frame rates
The chart below shows how the use of a range of frame rates has actually not changed that much over time. As expected, the U.S. and EU standards of 30 and 25Hz respectively dominate the distribution. Less expected is that low frame rates of 15fps and lower also significantly impact our ingest. This is because of the relatively large proportion of educational material, including slide shows and music slide decks, that is uploaded to YouTube. That sort of material tends to be captured at low frame rates. High frame rate (HFR) material (48Hz and upwards) has been a steady flow, especially since the announcement of HFR support in the YouTube player in 2014. Before 2014, the ceiling of our output target video was 30fps, but since then we have raised the ceiling to 60fps. However, the trend is not growing as quickly as, say, 1080p ingest itself. This possibly reflects bandwidth constraints on upload, as well as the fact that most capture today, especially on mobile devices, still defaults to 25 or 30fps.
We continuously analyze both a wide angle and a close up view of video file activity worldwide. That has given us a unique perspective on the evolution of video technology. In a sense the data is a reflection of the consensus of device manufacturers and creators in the area of media capture and creation. So we can see the growing agreement around video codecs, frame rates and stereo audio. Color space specification is still very poor, however, and some expected consensus has not emerged. For example, in the area of HFR content creation, 60+ fps is not yet on a growth curve the way HD resolution has been over the last year.
The data presented here show that even over the last five years the variability in data types and formats has been decreasing. However, like many broadcasters and streaming sites, we see enough variability in our ingested file profiles that we remain keen on standardization activities. We look forward to the continuing engagement of the YouTube and Google engineering community in SMPTE, MPEG and AOM activities.
Even with the dominance of certain technologies like H.264/AAC codecs and the MOV type containers, there will always be a small but significant portion of audio video data that falls outside the “consensus.” These small proportions are important to us however, because we want you to be confident that we’re going to do our darndest to help you broadcast yourself no matter what device you use to make your clip.
Anil Kokaram, Tech Lead/Engineering Manager, AV Algorithms Team, recently watched "Carlos Brathwaite's 4 sixes," Thierry Foucu, Tech Lead, Transcoder Team, recently watched "Sale of the Century," and Yang Hu, Software Engineer, recently watched "MINECRAFT: How to build wooden mansion."
YouTube now defaults to HTML5 <video>
Tuesday, January 27, 2015
Four years ago, we wrote about YouTube’s early support for the HTML5 <video> tag and how it performed compared to Flash. At the time, there were limitations that held it back from becoming our preferred platform for video delivery. Most critically, HTML5 lacked support for Adaptive Bitrate (ABR) that lets us show you more videos with less buffering.
Over the last four years, we’ve worked with browser vendors and the broader community to close those gaps, and now, YouTube uses HTML5 <video> by default in Chrome, IE 11, Safari 8 and in beta versions of Firefox.
The benefits of HTML5 extend beyond web browsers, and it's now also used in smart TVs and other streaming devices. Here are a few key technologies that have enabled this critical step forward:
MediaSource Extensions
Adaptive Bitrate (ABR) streaming is critical for providing a quality video experience for viewers - allowing us to quickly and seamlessly adjust resolution and bitrate in the face of changing network conditions. ABR has reduced buffering by more than 50 percent globally and as much as 80 percent on heavily-congested networks. MediaSource Extensions also enable live streaming in game consoles like Xbox and PS4, on devices like Chromecast and in web browsers.
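The core ABR decision that MediaSource Extensions make possible is simple to sketch: given measured throughput, pick the highest rendition that still fits, with some headroom. The bitrate ladder and headroom factor below are illustrative assumptions, not YouTube's actual player logic:

```python
# available renditions as (label, bitrate in Mbit/s); an illustrative ladder
LADDER = [("240p", 0.4), ("360p", 0.75), ("480p", 1.2),
          ("720p", 2.5), ("1080p", 4.5)]

def choose_rendition(throughput_mbps, ladder=LADDER, headroom=0.8):
    """Pick the highest rendition whose bitrate fits the measured throughput,
    keeping headroom so playback can absorb short dips without rebuffering."""
    budget = throughput_mbps * headroom
    best = ladder[0]  # never go below the lowest rung
    for label, bitrate in ladder:
        if bitrate <= budget:
            best = (label, bitrate)
    return best
```

A real player re-runs a decision like this continuously as network conditions change, appending the chosen segments through the MediaSource API; that continuous adjustment is what cuts buffering.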
VP9 video codec
HTML5 lets you take advantage of the open VP9 codec, which gives you higher quality video resolution with an average bandwidth reduction of 35 percent. These smaller files allow more people to access 4K and HD at 60FPS -- and videos start 15-80 percent faster. We've already served hundreds of billions of VP9 videos, and you can look for more about VP9 in a future post.
Encrypted Media Extensions and Common Encryption
In the past, the choice of delivery platform (Flash, Silverlight, etc) and content protection technology (Access, PlayReady) were tightly linked, as content protection was deeply integrated into the delivery platform and even the file format. Encrypted Media Extensions separate the work of content protection from delivery, enabling content providers like YouTube to use a single HTML5 video player across a wide range of platforms. Combined with Common Encryption, we can support multiple content protection technologies on different platforms with a single set of assets, making YouTube play faster and smoother.
WebRTC
YouTube enables everyone to share their videos with the world, whether uploading pre-recorded videos or broadcasting live. WebRTC allows us to build on the same technology that enables plugin-free Google Hangouts to provide broadcasting tools from within the browser.
Fullscreen
Using the new fullscreen APIs in HTML5, YouTube is able to provide an immersive fullscreen viewing experience (perfect for those 4K videos), all with standard HTML UI.
Moving to <iframe> embeds
Given the progress we've made with HTML5 <video>, we’re now defaulting to the HTML5 player on the web. We're also deprecating the "old style" of Flash <object> embeds and our Flash API. We encourage all embedders to use the <iframe> API, which can intelligently use whichever technology the client supports.
These advancements have benefitted not just YouTube’s community, but the entire industry. Other content providers like Netflix and Vimeo, as well as companies like Microsoft and Apple, have embraced HTML5 and been key contributors to its success. By providing an open standard platform, HTML5 has also enabled new classes of devices like Chromebooks and Chromecast. You can support HTML5 by using the <iframe> API everywhere you embed YouTube videos on the web.
Richard Leider, Engineering Manager, recently watched “Ex Hex - Waterfall.”
Cool YouTube apps from Google I/O 2012
Friday, September 28, 2012
We're constantly amazed at the innovative ways that developers incorporate YouTube into their applications. At Google I/O this year, 12 partners (over 30% from outside the U.S.) demonstrated their apps in the YouTube section of the Developer Sandbox, a demo area highlighting applications based on technologies and products featured at I/O.
Google's own Daniel Sieberg, an Emmy-nominated journalist, interviewed some of our partners about their use of the YouTube APIs.
With Daniel’s hectic schedule, he only had time to interview a handful of our great partners. With that in mind, we've highlighted all the awesome apps showcased by our partners at the YouTube API Developer Sandbox.
Business.me
(YouTube Data API and YouTube Player API)
Overview
Business.me, headquartered in Singapore, is the place to share and discover videos about business. They have created a video-sharing site to help producers of business videos reach their audience. The site also helps business professionals discover relevant business information in video format.
Fun Fact
Oscar Moreno, CEO, not only holds Business and Law degrees, he helped launch several startups (Business.me, Netjuice, Keldoo, and Tuenti).
Code Hero
(YouTube Data API)
Overview
Code Hero teaches you to code through a fun, 3D game. Become a code hero and shape the future!
Fun Fact
The Code Hero team implemented the in-game recording mechanism that exports to YouTube during a three-day hackathon!
Bonus: The game has
sharks with lasers
attached to their heads!
Flipboard
(YouTube Data API and YouTube Player API)
Overview
See everything on Flipboard, all your news and life’s great moments in one place. Using the YouTube Data API, Flipboard lets users discover, rate, share, and comment on top videos from YouTube. In addition, users can access their own videos and subscriptions, and subscribe to other YouTube users.
Fun Fact
Flipboard launched an Android app one week before I/O with a YouTube and Google+ integration!
LOOT Entertainment
by Sony DADC (YouTube Data API)
Overview
Gather your friends and set up your own production crew inside PlayStation®Home! What will you be? Director? Actor? Cinematographer? Extra? Try them all! Check out the
amazing Machinima tools
to help you record, light and build your film or television sets! What will you make?
Fun Fact
LOOT gives you tons of sets to make your own movies (
machinima
) on the
PS3
, including a
Ghostbusters Firehouse Stage Set
!
Moviecom.tv
(YouTube Data API and YouTube Player API)
Overview
A simple and easy online video platform for businesses. Record, centralize and share instantly. Moviecom.tv also allows you to link directly to your YouTube account through the YouTube APIs.
Fun Fact
The founders flew all the way from Glasgow to attend Google I/O!
Parrot
(YouTube Data API and YouTube Player API)
Overview
The Parrot AR.Drone is a quadricopter that can be controlled by a smartphone or tablet. Get more out of your AR.Drone with the AR.Drone Academy. Keep track of all your flights on the Academy map. Watch your best videos with added statistical feedback and directly share online with pilots from all over the world!
Fun Fact
Parrot makes
remote controlled flying devices
that can record and track their flights!
PicoTube - Vettl, Inc.
(YouTube Data API and YouTube Player API)
Overview
Picotube uses content from YouTube and allows users to create avatars, watch clips together, create playlists, and rate videos selected by other video jockeys.
Fun Fact
Picotube was the Grand Prix
winner of TechCrunch Tokyo 2011
!
Skimble
(YouTube Data API and YouTube Player API, and new Android Player API)
Overview
Here to power the mobile fitness movement, Skimble offers fun, dynamic and social applications for everyone. Available now are Skimble's Workout Trainer and GPS Sports Tracker apps that help motivate people to get and stay active. Skimble uses the YouTube Player API to display fitness videos.
Fun Fact
Co-founder
Maria Ly
got the
crowd moving
at one of YouTube’s Google I/O Sessions!
Squrl
(YouTube Data API and YouTube Player API)
Overview
Squrl is a great place to watch and discover video. Know what videos are trending, receive recommendations on what to watch and see what your friends are watching.
Fun Fact
Co-founders Mark Gray and Michael Hoydich also founded the successful software development company
IndustryNext
together in 2004!
Telestream
(YouTube Data API and YouTube Player API)
Overview
Telestream demonstrated Wirecast for YouTube, a live video production and streaming product developed specifically for YouTube partners. Telestream specializes in products that get video content to any audience regardless of how the content is created, distributed, or viewed.
Fun Fact
Telestream’s NASCAR Project won the
IBC2012 Innovation Award
!
Vidcaster
(YouTube Data API and YouTube Player API)
Overview
VidCaster is a video site creation platform that allows you to create a video portal instantly from your existing video library on YouTube or other video hosts. Choose from a beautiful set of designer themes and customize to your heart's content using VidCaster's powerful template language.
Fun Fact
Kieran Farr
, CEO and co-founder, used to drive a taxi full-time in San Francisco before becoming a successful entrepreneur!
WeVideo
(YouTube Data API)
Overview
WeVideo is a cloud-based video editing suite that allows easy, full-featured, collaborative HD video editing across Google Drive, Chromebooks, and Android devices.
Fun Fact
WeVideo partnered with Marvel and YouTube to allow fans to create their own
trailers
!
YouTube Channels: Get with the Program!
Monday, September 24, 2012
It's never been easier to create compelling videos and build a social presence on YouTube. At this year's Google I/O, YouTube product managers and channel gurus Dror and A.J. presented tips and tricks for making great content centered around raising brand awareness, raising money, and obtaining feedback about your products and services.
Don't worry if you missed their talk, we recorded it! So, sit back, grab some popcorn, and get ready to learn how to showcase your brand in front of YouTube's 800 million unique visitors per month!
Click
here
to view the slides from the video above.
Not sold yet? Well, have a sneak peek at some of the great material they cover below, and remember Dror and A.J.’s number one recommendation:
make content, not commercials
!
Sneak Peek
Tips and Tricks
Hook the user in the first 15 seconds (or they'll leave)
Brand your channel!
Make the most of your budget
Review
YouTube’s Trends
for ideas
Camera shy?
Consider animation.
(It might actually be cheaper than video.)
Several successful channels focus on curating videos from their community
Enhance your videos without fancy software/hardware using the
YouTube Editor
or
other integrated web editors
Many, many more...
What's your goal?
Raising awareness
Master your PR via video (include all your features and make bloggers’ lives easier)
Provide product/service demo videos to promote your company
Tell backstories about clients using your products/services
Raising money
Add video to your crowdfunding pitch to increase funds raised by 114% (
source: Indiegogo
)
Researching and supporting users
Record tutorials to promote and educate (see which features are the most popular using
YouTube’s Analytics
... you might be surprised)
Use
Google Hangouts
for scalable office hours and virtual focus groups
Figure out what features customers like/dislike via the world’s largest focus group
Resources to learn more
Creator Hub
Creator Playbook
(what you wish you knew about YouTube)
Trends Dashboard
YouTube for Developers
(that’s us)
Wow, you made it this far without
watching the video
? Did we tell you they fill the presentation with awesome videos that showcase their points (including
Chuck Testa
)?
Nope!?
Well, now you know, and you will definitely want to
watch the whole thing!
-Jeremy Walker, YouTube API Team