
YouTube Engineering and Developers Blog



What's happening with engineering and developers at YouTube



Updated: 2018-04-24T16:15:08.963-07:00

 



Making high quality video efficient

2018-04-24T16:15:08.961-07:00

YouTube works hard to provide the best looking video at the lowest bandwidth. One way we're doing that is by optimizing videos with bandwidth in mind. We recently made videos stream better -- giving you higher-quality video by making our encodes more likely to fit into your available bandwidth.

When you watch a video, the YouTube player measures the bandwidth on the client and adaptively chooses chunks of video that can be downloaded fast enough, up to the limits of the device's viewport, decoding, and processing capability. YouTube makes multiple versions of each video at different resolutions, with bigger resolutions having higher encoding bitrates.

Figure 1: HTTP-based adaptive video streaming.

YouTube chooses how many bits are used to encode a particular resolution (within the limits that the codecs provide). A higher bitrate generally leads to better video quality for a given resolution, but only up to a point. After that, a higher bitrate just makes the chunk bigger even though it doesn't look better. When we choose the encoding bitrate for a resolution, we select the sweet spot on the corresponding bitrate-quality curve (see Figure 2): the point where adding more data rate stops making the picture look meaningfully better.

Figure 2: Rate-quality curves of a video chunk for a given video codec at different encoding resolutions.

We found these sweet spots, but observing how people watch videos made us realize we could deliver great looking video even more efficiently. These sweet spots assume that viewers are not bandwidth limited, but if we set our encoding bitrates based only on those sweet spots for the best looking video, we see that in practice video quality is often constrained by viewers' bandwidth limitations. However, if we choose an operating point (other than the sweet spot) given our users' bandwidth distribution (what we call streaming bandwidth), we end up providing better looking video (what we call delivered video quality).

A way to think about this is to imagine the bandwidth available to a user as a pipe, shown in Figure 3. Given that the pipe's capacity fits a 360p chunk but not a 480p chunk, we could tweak the 480p chunk size to be more likely to fit within that pipe by estimating the streaming bandwidth, thereby increasing the resolution users see. We solved the resulting constrained optimization problem to make sure there was no perceivable impact to video quality. In short, by analyzing aggregated playback statistics, and correspondingly altering the bitrates for various resolutions, we worked out how to stream higher quality video to more users.

Figure 3: Efficient streaming scenario before and after our proposal.

To understand how streaming bandwidth is different from an individual viewer's bandwidth, consider the example in Figure 4 below. Given the measured distribution of viewers' available bandwidth, the playback distribution can be estimated using the areas between the encoding bitrates of neighboring resolutions. Using playback statistics, we are able to model the behavior of the player as it switches between resolutions. This allows us, in effect, to predict when an increased bitrate would be more likely to cause a player to switch to a lower resolution and thereby cancel the effect of a bitrate increase in any one resolution. With this model, we are able to find better operating points for each video in the real world.

Figure 4: For a given resolution, 720p for example, the playback distribution across resolutions can be estimated from the probability density function of bandwidth. Partitioning the bandwidth using the encoding bitrates of the different representations, the probability of watching a representation can then be estimated with the corresponding area under the bandwidth curve.

Another complication here is that the operating points provide an estimate of delivered quality, which is different from encoded quality. If the available bandwidth of a viewer decreases, then the viewer is more likely to switch down to a lower resolution, and therefore land on a different oper[...]
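To make the idea in Figure 4 concrete, here is a minimal sketch of how a bandwidth distribution turns into a playback distribution and an expected delivered quality, and how nudging one rung's bitrate changes it. The bitrates, quality scores, simplified player model, and simulated bandwidth samples are all illustrative assumptions, not YouTube's actual system.

```python
# A minimal sketch (not YouTube's actual system) of estimating expected
# delivered quality from a bandwidth distribution. All numbers are made up.
import numpy as np

# Candidate encoding bitrates (kbps) per resolution, low to high.
bitrates = {"240p": 300, "360p": 500, "480p": 1000, "720p": 2500}
# Hypothetical delivered-quality score when a viewer actually plays each rung.
quality  = {"240p": 55,  "360p": 65,  "480p": 75,   "720p": 85}

def expected_quality(bitrates, quality, bandwidth_samples_kbps):
    """Estimate expected delivered quality for a population of viewers.

    Each viewer is assumed to play the highest rung whose encoding bitrate
    fits within their measured bandwidth (the 'area between neighboring
    bitrates' partitioning described for Figure 4).
    """
    rungs = sorted(bitrates.items(), key=lambda kv: kv[1])  # low -> high
    total = 0.0
    for bw in bandwidth_samples_kbps:
        playable = [name for name, rate in rungs if rate <= bw]
        rung = playable[-1] if playable else rungs[0][0]    # floor at lowest rung
        total += quality[rung]
    return total / len(bandwidth_samples_kbps)

# Made-up log-normal bandwidth distribution (kbps).
rng = np.random.default_rng(0)
bandwidth = rng.lognormal(mean=7.0, sigma=0.6, size=10_000)

base = expected_quality(bitrates, quality, bandwidth)

# Lowering the 480p bitrate slightly lets more viewers fit that rung, which
# can raise *delivered* quality even though each 480p chunk is a bit smaller.
tweaked = dict(bitrates, **{"480p": 800})
print(f"expected quality, original ladder: {base:.1f}")
print(f"expected quality, tweaked 480p:    {expected_quality(tweaked, quality, bandwidth):.1f}")
```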



Resonance Audio: Multi-platform spatial audio at scale

2017-11-06T09:37:50.258-08:00

Cross-posted from the VR Blog. Posted by Eric Mauskopf, Product Manager.

As humans, we rely on sound to guide us through our environment, help us communicate with others and connect us with what's happening around us. Whether walking along a busy city street or attending a packed music concert, we're able to hear hundreds of sounds coming from different directions. So when it comes to AR, VR, games and even 360 video, you need rich sound to create an engaging immersive experience that makes you feel like you're really there. Today, we're releasing a new spatial audio software development kit (SDK) called Resonance Audio. It's based on technology from Google's VR Audio SDK, and it works at scale across mobile and desktop platforms.

Video: https://www.youtube.com/embed/IYdx9cnHN8I
Experience spatial audio in our Audio Factory VR app for Daydream and SteamVR

Performance that scales on mobile and desktop

Bringing rich, dynamic audio environments into your VR, AR, gaming, or video experiences without affecting performance can be challenging. There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments. The SDK uses highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality, even on mobile. We're also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback.

Using geometry-based reverb by assigning acoustic materials to a cathedral in Unity

Multi-platform support for developers and sound designers

We know how important it is that audio solutions integrate seamlessly with your preferred audio middleware and sound design tools. With Resonance Audio, we've released cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAWs) to streamline workflows, so you can focus on creating more immersive audio. The SDKs run on Android, iOS, Windows, MacOS and Linux platforms and provide integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs. We also provide native APIs for C/C++, Java, Objective-C and the web. This multi-platform support enables developers to implement sound designs once, and easily deploy their project with consistent sounding results across the top mobile and desktop platforms. Sound designers can save time by using our new DAW plugin for accurately monitoring spatial audio that's destined for YouTube videos or apps developed with Resonance Audio SDKs. Web developers get the open source Resonance Audio Web SDK that works in the top web browsers by using the Web Audio API.

DAW plugin for sound designers to monitor audio destined for YouTube 360 videos or apps developed with the SDK

Model complex sound environments: cutting edge features

By providing powerful tools for accurately modeling complex sound environments, Resonance Audio goes beyond basic 3D spatialization. The SDK enables developers to control the direction acoustic waves propagate from sound sources. For example, when standing behind a guitar player, it can sound quieter than when standing in front. And when facing the direction of the guitar, it can sound louder than when your back is turned.

Controlling sound wave directivity for an acoustic guitar using the SDK

Another SDK feature is automatically rendering near-field effects when sound sources get close to a listener's head, providing an accurate perception of distance, even when sources are close to the ear. The SDK also enables sound source spread, by specifying the width of the source, allowing sound to be simulated from a tiny point in space up to a wall of sound. We've also released an Ambisonic recording tool to spatially capture your sound design directly within Unity, save it to a file, and use it any[...]
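As an illustration of the directivity feature described above, here is a minimal sketch of a parametric directivity pattern: a blend between an omnidirectional and a dipole response, optionally sharpened. The formula and parameter names are assumptions for illustration, not necessarily Resonance Audio's exact implementation.

```python
# A minimal sketch of a parametric source-directivity pattern of the kind
# spatial audio SDKs expose: a blend of omnidirectional and dipole patterns.
# Illustration of the concept only, not Resonance Audio's exact math.
import math

def directivity_gain(angle_rad, alpha=0.5, sharpness=1.0):
    """Gain applied to a source heard from `angle_rad` off its forward axis.

    alpha = 0   -> omnidirectional (same loudness all around)
    alpha = 0.5 -> cardioid-like (quieter behind the source)
    alpha = 1   -> dipole (silent at 90 degrees)
    """
    pattern = (1.0 - alpha) + alpha * math.cos(angle_rad)
    return abs(pattern) ** sharpness

# Listener in front of the guitar vs. directly behind it.
print("in front:", directivity_gain(0.0, alpha=0.5))            # 1.0
print("behind:  ", directivity_gain(math.pi, alpha=0.5))        # 0.0 for a cardioid
print("behind, gentler pattern:", directivity_gain(math.pi, alpha=0.25))
```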



Variable speed playback on mobile

2017-09-07T14:06:45.785-07:00

Variable speed playback was launched on the web several years ago and is one of our most highly requested features on mobile. Now, it's here! You can speed up or slow down videos in the YouTube app on iOS and on Android devices running Android 5.0+. Playback speed can be adjusted from 0.25x (quarter speed) to 2x (double speed) in the overflow menu of the player controls.

The most commonly used speed setting on the web is 1.25x, closely followed by 1.5x. Speed watching is the new speed listening, which was the new speed reading, especially when consuming long lectures or interviews. But variable speed isn't just useful for skimming through content to save time; it can also be an important tool for investigating finer details. For example, you might want to slow down a tutorial to learn some new choreography or figure out a guitar strumming pattern.

To speed up or slow down audio while retaining its comprehensibility, our main challenge was to efficiently change the duration of the audio signal without affecting the pitch or introducing distortion. This process is called time stretching. Without time stretching, an audio signal that was originally at 100 Hz becomes 200 Hz at double speed, causing that chipmunk effect. Similarly, slowing down the speed will lower the pitch. Time stretching can be achieved using a phase vocoder, which transforms the signal into its frequency domain representation to make phase adjustments before producing a lengthened or shortened version. Time stretching can also be done in the time domain by carefully selecting windows from the original signal to be assembled into the new one. On Android, we used the Sonic library for our audio manipulation in ExoPlayer. Sonic uses PICOLA, a time domain based algorithm. On iOS, AVPlayer has a built-in playback rate feature with configurable time stretching. Here, we have chosen to use the spectral (frequency domain) algorithm.

To speed up or slow down video, we render the video frames in alignment with the modified audio timestamps. Video frames are not necessarily encoded chronologically, so for the video to stay in sync with the audio playback, the video decoder needs to work faster than the rate at which the video frames need to be rendered. This is especially pertinent at higher playback speeds. On mobile, there are also often more network and hardware constraints than on desktop that limit our ability to decode video as fast as necessary. For example, less reliable wireless links will affect how quickly and accurately we can download video data, and then battery, CPU speed, and memory size will limit the processing power we can spend on decoding it. To address these issues, we adapt the video quality to be only as high as we can download dependably. The video decoder can also skip forward to the next key frame if it has fallen behind the renderer, or the renderer can drop already decoded frames to catch up to the audio track.

If you want to check out the feature, try this: turn up your volume and play the classic dramatic chipmunk at 0.5x to see an EVEN MORE dramatic chipmunk. Enjoy!

Video: https://www.youtube.com/embed/y8Kyi0WNg40

Posted by Pallavi Powale, Software Engineer, recently watched "Dramatic Chipmunk" at 0.5x speed.[...]
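A tiny numerical illustration of the chipmunk effect mentioned above: naively playing audio at 2x by dropping every other sample doubles the pitch of a 100 Hz tone to 200 Hz, which is exactly what time stretching is there to avoid. The sample rate and tone are arbitrary choices for the demo.

```python
# Why naive speed-up causes the 'chipmunk effect': playing a signal at 2x by
# dropping every other sample doubles its pitch. Real variable-speed playback
# instead uses time stretching (a phase vocoder, or a time-domain method such
# as PICOLA) to keep the pitch constant.
import numpy as np

sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate          # 1 second of audio
tone_100hz = np.sin(2 * np.pi * 100 * t)          # original 100 Hz tone

# Naive 2x speed: keep every other sample, play at the same sample rate.
naive_2x = tone_100hz[::2]

def dominant_frequency(signal, sample_rate):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

print("original pitch:", dominant_frequency(tone_100hz, sample_rate), "Hz")  # ~100 Hz
print("naive 2x pitch:", dominant_frequency(naive_2x, sample_rate), "Hz")    # ~200 Hz
```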



Blur select faces with the updated Blur Faces tool

2017-08-21T10:00:21.092-07:00

In 2012 we launched face blurring as a visual anonymity feature, allowing creators to obscure all faces in their video. Last February we followed up with custom blurring to let creators blur any objects in their video, even as they move. Since then we've been hard at work improving our face blurring tool.

Today we're launching a new and improved version of Blur Faces, allowing creators to easily and accurately blur specific faces in their videos. The tool now displays images of the faces in the video, and creators simply click an image to blur that individual throughout their video.

To introduce this feature, we had to improve the accuracy of our face detection tools, allowing for recognition of the same person across an entire video. The tool is designed for a wide array of situations that we see in YouTube videos, including users wearing glasses, occlusion (the face being blocked, for example, by a hand), and people leaving the video and coming back later.

Instead of having to use video editing software to manually create feathered masks and motion tracks, our Blur Faces tool automatically handles motion and presents creators with a thumbnail that encapsulates all instances of that individual recognized by our technology. Creators can apply these blurring edits to already uploaded videos without losing views, likes, and comments by choosing to "Save" the edits in place. Applying the effect using "Save As New" and deleting the original video will remove the original unblurred video from YouTube for an extra level of privacy. The blur applied to the published video cannot be practically reversed, but keep in mind that blurring does not guarantee absolute anonymity.

To get to Blur Faces, go to the Enhance tool for a video you own. This can be done from the Video Manager or watch page. The Blur Faces tool can be found under the "Blurring Effects" tab of Enhancements. The following image shows how to get there.

When you open the Blur Faces tool on your video for the first time, we start processing your video for faces. During processing, we break your video up into chunks of frames, and start detecting faces on each frame individually. We use a high-quality face detection model to increase our accuracy, and at the same time, we look for scene changes and compute motion vectors throughout the video which we will use later.

Once we've detected the faces in each frame of your video, we start matching face detections within a single scene of the video, relying on both the visual characteristics of the face as well as the face's motion. To compute motion, we use the same technology that powers our Custom Blurring feature. Face detections aren't perfect, so we use a few techniques to help us home in on edge cases such as tracking motion through occlusions (see the water bottle in the above GIF) and near the edge of the video frame. Finally, we compute visual similarity across what we found in each scene, pick the best face to show as a thumbnail, and present it to you.

Before publishing your changes, we encourage you to preview the video. As we cannot guarantee 100 percent accuracy in every video, you can use our Custom Blurring tool to further enhance the automated face blurring edits in the same interface.

Ryan Stevens, Software Engineer, recently watched 158,962,555,217,826,360,000 (Enigma Machine), and Ian Pudney, Software Engineer, recently watched Wood burning With Lightning. Lichtenberg Figures![...]
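For a sense of what the matching step involves, here is a minimal, hypothetical sketch of linking per-frame face detections into per-person tracks using spatial overlap between consecutive frames. The data structures and thresholds are made up for illustration; the production system described above also uses appearance similarity, motion vectors, and scene-change detection.

```python
# Hypothetical sketch: group per-frame face detections into tracks by
# greedily attaching each detection to the track whose most recent box
# overlaps it the most (IoU), one frame apart.
from dataclasses import dataclass, field

@dataclass
class Detection:
    frame: int
    box: tuple          # (x, y, width, height)

@dataclass
class Track:
    detections: list = field(default_factory=list)

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def link_detections(detections, iou_threshold=0.3):
    """Greedily build per-person tracks from frame-ordered detections."""
    tracks = []
    for det in sorted(detections, key=lambda d: d.frame):
        best, best_iou = None, iou_threshold
        for track in tracks:
            last = track.detections[-1]
            overlap = iou(last.box, det.box)
            if det.frame - last.frame == 1 and overlap > best_iou:
                best, best_iou = track, overlap
        if best:
            best.detections.append(det)
        else:
            tracks.append(Track([det]))
    return tracks
```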



Visualizing Sound Effects

2017-03-23T10:00:29.113-07:00

At YouTube, we understand the power of video to tell stories, move people, and leave a lasting impression. One part of storytelling that many people take for granted is sound, yet sound adds color to the world around us. Just imagine not being able to hear music, the joy of a baby laughing, or the roar of a crowd. But this is often a reality for the 360 million people around the world who are deaf and hard of hearing. Over the last decade, we have been working to change that.

The first step came over ten years ago with the launch of captions. And in an effort to scale this technology, automated captions came a few years later. The success of that effort has been astounding, and a few weeks ago we announced that the number of videos with automatic captions now exceeds 1 billion. Moreover, people watch videos with automatic captions more than 15 million times per day. And we have made meaningful improvements to quality, resulting in a 50 percent leap in accuracy for automatic captions in English, which is getting us closer and closer to human transcription error rates.

But there is more to sound and the enjoyment of a video than words. In a joint effort between the YouTube, Sound Understanding, and Accessibility teams, we embarked on the task of developing the first ever automatic sound effect captioning system for YouTube. This means finding a way to identify and label all those other sounds in the video without manual input.

We started this project by taking on a wide variety of challenges, such as how to best design the sound effect recognition system and what sounds to prioritize. At the heart of the work was utilizing thousands of hours of videos to train a deep neural network model to achieve high quality recognition results. There are more details in a companion post here.

As a result, we can now automatically detect the existence of these sound effects in a video and transcribe them into appropriate classes or sound labels. With so many sounds to choose from, we started with [APPLAUSE], [MUSIC] and [LAUGHTER], since these were among the most frequent manually captioned sounds, and they can add meaningful context for viewers who are deaf and hard of hearing.

So what does this actually look like when you are watching a YouTube video? The sound effect is merged with the automatic speech recognition track and shown as part of standard automatic captions.

Video: https://www.youtube.com/embed/oOtqbAxRkyM?start=128&end=178
Click the CC button to see the sound effect captioning system in action

We are still in the early stages of this work, and we are aware that these captions are fairly simplistic. However, the infrastructural backend to this system will allow us to expand and easily apply this framework to other sound classes. Future challenges might include adding other common sound classes like ringing, barking and knocking, which present particular problems -- for example, with ringing we need to be able to decipher if this is an alarm clock, a door or a phone, as described here.

Since the addition of sound effect captions presented a number of unique challenges on both the machine learning end as well as the user experience, we continue to work to better understand the effect of the captioning system on the viewing experience, how viewers use sound effect information, and how useful it is to them. From our initial user studies, two-thirds of participants said these sound effect captions really enhance the overall experience, especially when they added crucial "invisible" sound information that people cannot tell from the visual cues. Overall, users reported that their experience wouldn't be impacted by the system making occasional mistakes as long as it was able to provide good information more often than not.

We are excited to support automatic sound effect captioning on YouTube, and we hope this system helps us make information useful and accessible for eve[...]
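As a rough illustration of what "merged with the automatic speech recognition track" amounts to, here is a minimal sketch that interleaves hypothetical sound-effect labels with speech captions by timestamp. The cue format and example times are made up.

```python
# Hypothetical sketch: merge sound-effect labels such as [APPLAUSE] into a
# caption track by start time, alongside the speech-recognition cues.
from dataclasses import dataclass

@dataclass
class Cue:
    start: float   # seconds
    end: float
    text: str

def merge_tracks(speech_cues, sound_effect_cues):
    """Interleave speech captions and sound-effect labels by start time."""
    return sorted(speech_cues + sound_effect_cues, key=lambda c: c.start)

speech = [Cue(12.0, 14.5, "welcome back to the show"),
          Cue(16.0, 18.0, "let's bring out our guest")]
effects = [Cue(14.6, 15.8, "[APPLAUSE]"), Cue(18.2, 19.0, "[LAUGHTER]")]

for cue in merge_tracks(speech, effects):
    print(f"{cue.start:6.1f}-{cue.end:5.1f}  {cue.text}")
```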



Improving VR videos

2017-03-16T10:55:26.718-07:00

At YouTube, we are focused on enabling the kind of immersive and interactive experiences that only VR can provide, making digital video as immersive as it can be. In March 2015, we launched support for 360-degree videos, shortly followed by VR (3D 360) videos. In 2016 we brought 360 live streaming, spatial audio, and a dedicated YouTube VR app to our users. Now, in a joint effort between YouTube and Daydream, we're adding new ways to make 360 and VR videos look even more realistic.

360 videos need a large number of pixels per video frame to achieve a compelling immersive experience. In the ideal scenario, we would match human visual acuity, which is 60 pixels per degree of immersive content. We are however limited by user internet connection speed and device capabilities. One way to bridge the gap between these limitations and human visual acuity is to use better projection methods.

Better projections

A projection is the mapping used to fit a 360-degree world view onto a rectangular video surface. The world map is a good example of a spherical earth projected on a rectangular piece of paper. A commonly used projection is called equirectangular projection. Initially, we chose this projection when we launched 360 videos because it is easy to produce by camera software and easy to edit. However, equirectangular projection has some drawbacks:

  • It has high quality at the poles (top and bottom of image) where people don't look as much - typically, sky overhead and ground below are not that interesting to look at.
  • It has lower quality at the equator or horizon where there is typically more interesting content.
  • It has fewer vertical pixels for 3D content.
  • A straight line motion in the real world does not result in a straight line motion in equirectangular projection, making videos hard to compress.

Drawbacks of equirectangular (EQ) projection

These drawbacks made us look for better projection types for 360-degree videos. To compare different projection types we used saturation maps. A saturation map shows the ratio of video pixel density to display pixel density. The color coding goes from red (low) to orange, yellow, green and finally blue (high). Green indicates optimal pixel density of near 1:1. Yellow and orange indicate insufficient density (too few video pixels for the available display pixels) and blue indicates wasted resources (too many video pixels for the available display pixels). The ideal projection would lead to a saturation map that is uniform in color. At sufficient video resolution it would be uniformly green.

We investigated cubemaps as a potential candidate. Cubemaps have been used by computer games for a long time to display the skybox and other special effects.

Equirectangular projection saturation map
Cubemap projection saturation map

In the equirectangular saturation map the poles are blue, indicating wasted pixels. The equator (horizon) is orange, indicating an insufficient number of pixels. In contrast, the cubemap has green (good) regions nearer to the equator, and the wasteful blue regions at the poles are gone entirely. However, the cubemap results in large orange regions (not good) at the equator because a cubemap samples more pixels at the corners than at the center of the faces.

We achieved a substantial improvement using an approach we call Equi-angular Cubemap, or EAC. The EAC projection's saturation is significantly more uniform than the previous two, while further improving quality at the equator:

Equi-angular Cubemap (EAC)

As opposed to a traditional cubemap, which distributes equal pixels for equal distances on the cube surface, an equi-angular cubemap distributes equal pixels for equal angular change. The saturation maps seemed promising, but we wanted to see if people could tell the difference. So we asked people to rate the quality of each without telling them which projection they were viewing. People generally rated EAC as higher quality compared to other projections[...]
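For the curious, here is a minimal sketch of the core EAC idea in code: on each cube face, a standard cubemap spaces samples evenly in the tangent of the viewing angle, while EAC spaces them evenly in the angle itself. This shows only the per-face coordinate mapping, as commonly described; the full format (face layout, padding, signaling) involves more than this.

```python
# Core equi-angular cubemap (EAC) mapping sketch: each cube face spans +/- 45
# degrees, so a standard face coordinate u in [-1, 1] corresponds to viewing
# angle atan(u), and EAC stores that angle linearly instead of its tangent.
import math

def standard_to_eac(u):
    """Map a standard cubemap face coordinate u in [-1, 1] to its EAC coordinate."""
    return math.atan(u) / (math.pi / 4)

def eac_to_standard(u_eac):
    """Inverse mapping: EAC face coordinate back to the standard cubemap coordinate."""
    return math.tan(u_eac * math.pi / 4)

# Equal steps in EAC correspond to equal angular steps, so pixels near the
# center of a face are no longer oversampled relative to the corners.
for u in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"standard {u:4.2f} -> EAC {standard_to_eac(u):5.3f} "
          f"-> back {eac_to_standard(standard_to_eac(u)):4.2f}")
```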



Supercharge your YouTube live tools with the new Super Chat API

2017-01-12T10:00:18.816-08:00

In December 2015, we launched an array of API services that let developers access a wealth of data about live streams, chat, and fan funding. Since then, we’ve seen thousands of creators use the tools listed on our Tools for Gaming Streamers page to enhance their streams by adding chatbots, overlays, polls and more.

Today, we announced a new live feature for fans and creators, Super Chat, which lets anybody watching a live stream stand out from the crowd and get a creator’s attention by purchasing highlighted chat messages. We’re also announcing a new API service for this feature: the Super Chat API, designed to allow developers to access real-time information about Super Chat purchases.

The launch of this new API service will be followed by the shutdown of our Fan Funding API. Because of this, developers using the Fan Funding API need to move to the new Super Chat API as soon as possible.

On January 31, 2017, we’ll begin offering replacements for the two ways developers currently get information about Fan Funding:

  • LiveChatMessages.list will gain a new message type, superChatMessage, which will contain details about Super Chats purchased during an active live stream
  • A new endpoint, SuperChats.list, will be made available to list a channel’s Super Chat purchases

On February 28, 2017, we’ll be turning down the two existing Fan Funding methods:

  • LiveChatMessages.list will no longer return messages of type fanFundingEvent
  • FanFundingEvents.list will no longer return data

During the transition period between Super Chats and Fan Funding, SuperChats.list will provide information about both Super Chat events and Fan Funding events, so we encourage all developers to switch to the new API as soon as it becomes available. Keep your eye on the YouTube Data API v3 Revision History to get the documentation for this service as soon as we post it.
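As a rough idea of how a tool might consume these messages once the new type is live, here is a minimal sketch that polls liveChatMessages.list and picks out Super Chat entries. It assumes the google-api-python-client library and an already-authorized API client; the message type string follows this announcement, so check the published documentation for the final field names.

```python
# Minimal sketch: poll a live chat and print Super Chat messages.
# Assumes google-api-python-client and an OAuth2-authorized client.
import time
from googleapiclient.discovery import build

def poll_super_chats(youtube, live_chat_id):
    page_token = None
    while True:
        response = youtube.liveChatMessages().list(
            liveChatId=live_chat_id,
            part="snippet,authorDetails",
            pageToken=page_token,
        ).execute()

        for message in response.get("items", []):
            snippet = message["snippet"]
            if snippet["type"] == "superChatMessage":   # type name per this announcement
                print(snippet["publishedAt"],
                      message["authorDetails"]["displayName"],
                      snippet.get("displayMessage", ""))

        page_token = response.get("nextPageToken")
        # The API tells clients how long to wait before polling again.
        time.sleep(response.get("pollingIntervalMillis", 5000) / 1000.0)

# youtube = build("youtube", "v3", credentials=creds)  # creds: your OAuth2 credentials
# poll_super_chats(youtube, live_chat_id="LIVE_CHAT_ID")
```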

If you’ve got questions on this, please feel free to ask the community on our Stack Overflow tag or send us a tweet at @YouTubeDev and we’ll do our best to answer.

Marc Chambers, Developer Relations, recently watched "Show of the Week: New Games for 2017."



Download your ad revenue reports through the YouTube Reporting API service

2016-11-08T09:00:05.682-08:00

With the launch of the YouTube Reporting API last year, we introduced a mechanism to download raw YouTube Analytics data. It generates a set of predefined reports in the form of CSV files that contain YouTube Analytics data for content owners. Once activated, reports are generated regularly, and each one contains data for a unique, 24-hour period. We heard that you also wanted more data to be accessible via the YouTube Reporting API service.

So today, we are making a set of system-managed ad revenue reports available to content owners. Previously, this data was only available via manually downloadable reports in Creator Studio. The system-managed reports released via the YouTube Reporting API maintain the same breakdowns as the downloadable reports, but the schema is optimized to align with the other reports available via this API.

These new reports are generated automatically for eligible YouTube Partners. Thus, if you are an eligible YouTube partner, you don't even need to create reporting jobs. Just follow the instructions below to find out whether the reports are available to you and to download the reports themselves.

We also want to let you know that more reports will be available via the YouTube Reporting API service in the coming weeks and months. Please keep an eye on the revision history to find out when additional reports become available.

How to start using the new reports

Check what new report types are available to you:

  • Get an OAuth token (authentication credentials).
  • Call the reportTypes.list method with the includeSystemManaged parameter set to true.
  • The response lists all report types available to you. As you can't use the new report types to create reporting jobs yourself, their systemManaged property is set to true.

Check if system-managed jobs have been created for you:

  • Get an OAuth token (authentication credentials).
  • Call the jobs.list method with the includeSystemManaged parameter set to true. This will return a list of the available reporting jobs. All jobs with the systemManaged property set to true are jobs for the new report types.
  • Store the IDs of the jobs you want to download reports for.

Download reports:

  • Get an OAuth token (authentication credentials).
  • Call the reports.list method with the jobId parameter set to an ID found in the previous section to retrieve a list of downloadable reports created by that job.
  • Choose a report from the list and download it using its downloadUrl.

Client libraries and sample code

Client libraries exist for many different programming languages to help you use the YouTube Reporting API. Our Java, PHP, and Python code samples will help you get started. The API Explorer lets you try out sample calls before writing any code.

Posted by Markus Lanthaler, Tech Lead YouTube Analytics APIs, recently watched "Crushing gummy bears with hydraulic press" and Mihir Kulkarni, Software Engineer, recently watched "The $21,000 first class airplane seat."[...]
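Here is a minimal sketch of the three steps above using the Python client library. It assumes google-api-python-client plus the requests library and OAuth2 credentials with the appropriate YouTube Analytics scope; content owners may also need to pass onBehalfOfContentOwner on each call.

```python
# Minimal sketch: list system-managed report types, find the matching jobs,
# then fetch each job's generated report files by downloadUrl.
import requests
from googleapiclient.discovery import build

def download_system_managed_reports(credentials, out_dir="."):
    reporting = build("youtubereporting", "v1", credentials=credentials)

    # Step 1: report types available to you (system-managed ones included).
    report_types = reporting.reportTypes().list(includeSystemManaged=True).execute()
    print("available report types:",
          [rt["id"] for rt in report_types.get("reportTypes", [])])

    # Step 2: system-managed jobs created automatically for eligible partners.
    jobs = reporting.jobs().list(includeSystemManaged=True).execute()
    for job in jobs.get("jobs", []):
        if not job.get("systemManaged"):
            continue

        # Step 3: list this job's reports and download them.
        reports = reporting.jobs().reports().list(jobId=job["id"]).execute()
        for report in reports.get("reports", []):
            response = requests.get(
                report["downloadUrl"],
                headers={"Authorization": f"Bearer {credentials.token}"})
            response.raise_for_status()
            filename = f'{out_dir}/{job["reportTypeId"]}_{report["id"]}.csv'
            with open(filename, "wb") as f:
                f.write(response.content)
            print("saved", filename)

# download_system_managed_reports(credentials)  # credentials: your OAuth2 credentials
```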



Saying goodbye to the YouTube v2 Uploads API service

2016-10-10T10:00:44.386-07:00

If you're already using or have migrated to the YouTube Data API v3, you can stop reading.

If you develop a tool, script, plugin, or any other code that uploads video to YouTube, we have an important update for you! On October 31, 2016, we’ll be shutting down the ability to upload videos through the old YouTube Data API (v2) service. This shutdown is in accordance with our prior deprecation announcements for the YouTube Data API (v2) service in 2014 and ClientLogin authentication in 2013.

If you’re using this service, unless changes are made to your API Client(s), your users will no longer be able to upload videos using your integration starting October 31, 2016.

We announced this deprecation over two years ago to give our developer community time to adjust. If you haven’t already updated, please update your integration as soon as possible. The supported method for programmatically uploading videos to YouTube is the YouTube Data API v3 service, with OAuth2 for authentication.

You can find a complete guide to uploading videos using this method, as well as sample Python code, on the Google Developers site.
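For reference, here is a minimal sketch of a resumable upload with the Data API v3 and OAuth2, along the lines of the Python sample mentioned above. It assumes google-api-python-client and credentials already authorized for the upload scope.

```python
# Minimal sketch: resumable video upload with the YouTube Data API v3.
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

def upload_video(credentials, path, title, description=""):
    youtube = build("youtube", "v3", credentials=credentials)
    request = youtube.videos().insert(
        part="snippet,status",
        body={
            "snippet": {"title": title, "description": description},
            "status": {"privacyStatus": "private"},
        },
        media_body=MediaFileUpload(path, chunksize=-1, resumable=True),
    )
    response = None
    while response is None:
        status, response = request.next_chunk()   # resumable upload loop
        if status:
            print(f"uploaded {int(status.progress() * 100)}%")
    print("video id:", response["id"])
    return response
```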

Did you already update your integration to use the YouTube Data API v3 service and OAuth2? It’s possible there are users who may still be on old versions of your software. You may want to reach out to your users and let them know about this. We may also reach out to YouTube creators who are using these old versions and let them know about this as well.

If you have questions about this shutdown or about the YouTube Data API v3 service, please post them to our Stack Overflow tag. You can also send us a tweet at @YouTubeDev, and follow us for the latest updates.

Posted by Marc Chambers, YouTube Developer Relations



An updated Terms of Service and New Developer Policies for the YouTube API Services

2017-02-10T09:30:00.492-08:00

The updated YouTube API Services Terms and Policies are effective starting today (February 10, 2017).

Today we are announcing changes to the YouTube API Services Terms of Service and introducing new Developer Policies to guide their implementation. These updated Terms of Service and new Developer Policies will take effect in six months so that you have time to understand and implement them.

The YouTube API Services Terms of Service are developers' rules of the road, and like any rules of the road, they need to be updated over time as usage evolves. As we've grown, so has an entire ecosystem of companies that support users, creators and advertisers, many of them built on top of YouTube's API Services. We haven't had major updates to our API Services Terms of Service in over four years, so during the past several months we've been speaking to developers and studying how our API Services are being used to make sure that our terms make sense for the YouTube of today. We updated the YouTube API Services Terms of Service to keep up with usage growth, strengthen user controls and protections even further, and address misuse. You can find the updated terms here.

In order to provide more guidance to developers, which has been a key ask, we are introducing new Developer Policies. They aim to provide operational guidelines for accessing and using our API Services, covering user privacy and data protection, data storage, interface changes, uploads, comments, and more. You can read the full Developer Policies here.

In addition to the new terms, we're also announcing the upcoming YouTube Measurement Program. This new certification program will help participants provide accurate, consistent, and relevant YouTube measurement data to their clients and users, thereby helping them make informed decisions about YouTube. We'll launch the program with a few initial partners before scaling it more broadly. Please visit the YouTube Measurement Program website to learn more.

We developed these updates with a few core principles in mind:

  • Improving the YouTube experience for users and creators. Every month, we update our app and site with dozens of new features for users and creators. We want to make sure that every application or website takes advantage of the latest and greatest YouTube functionalities. That's why we're introducing a Requirement of Minimum Functionality, which is designed to ensure users have a set of basic functionality around core parts of their YouTube experience, like video playback, comment management, video upload, and other services.
  • Strengthening user data and privacy. We want to help foster innovative products while giving users even more control around data privacy and security. These updated terms serve to strengthen our existing user controls and protections even further. For example, we now require developers to have a privacy policy that clearly explains to users what user info is accessed, collected and stored.
  • Fostering a healthy YouTube ecosystem. While we want to continue to encourage growth of our ecosystem, we also need to make sure our terms limit misuse. As the YouTube developer ecosystem evolved, we saw some fantastic uses of our API Services. Sadly, alongside the amazing uses, there have also been a handful of applications that have misused our API Services. These updated terms serve to further protect against misuse and protect users, creators, and advertisers.

It's been great to see all the ways developer websites and applications have integrated with YouTube. We are committed to the YouTube API Services, and we continue to invest in new features that will improve the product, such as expanding the Reporting API service with Payment reports and Custom reports, launching later this year.

While we understand these updated terms and new policies may require some [...]



YouTube's road to HTTPS

2016-08-01T11:00:35.230-07:00

Today we added YouTube to Google's HTTPS transparency report. We're proud to announce that in the last two years, we steadily rolled out encryption using HTTPS to 97 percent of YouTube's traffic.

HTTPS provides critical security and data integrity for the web and for all web users. So what took us so long? As we gradually moved YouTube to HTTPS, we faced several unique challenges:

  • Lots of traffic! Our CDN, the Google Global Cache, serves a massive amount of video, and migrating it all to HTTPS is no small feat. Luckily, hardware acceleration for AES is widespread, so we were able to encrypt virtually all video serving without adding machines. (Yes, HTTPS is fast now.)
  • Lots of devices! You watch YouTube videos on everything from flip phones to smart TVs. We A/B tested HTTPS on every device to ensure that users would not be negatively impacted. We found that HTTPS improved quality of experience on most clients: by ensuring content integrity, we virtually eliminated many types of streaming errors.
  • Lots of requests! Mixed content -- any insecure request made in a secure context -- poses a challenge for any large website or app. We get an alert when an insecure request is made from any of our clients, and we block all mixed content using Content Security Policy on the web, App Transport Security on iOS, and usesCleartextTraffic on Android. Ads on YouTube have used HTTPS since 2014.

We're also proud to be using HTTP Strict Transport Security (HSTS) on youtube.com to cut down on HTTP to HTTPS redirects. This improves both security and latency for end users. Our HSTS lifetime is one year, and we hope to preload this soon in web browsers.

97 percent is pretty good, but why isn't YouTube at 100 percent? In short, some devices do not fully support modern HTTPS. Over time, to keep YouTube users as safe as possible, we will gradually phase out insecure connections.

In the real world, we know that any non-secure HTTP traffic could be vulnerable to attackers. All websites and apps should be protected with HTTPS -- if you're a developer that hasn't yet migrated, get started today.

Sean Watson, Software Engineer, recently watched "GoPro: Fire Vortex Cannon with the Backyard Scientist."
Jon Levine, Product Manager, recently watched "Sega Saturn CD - Cracked after 20 years."[...]
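For developers following that advice, here is a minimal sketch (not YouTube's infrastructure) of the two response headers discussed above: HSTS with a one-year lifetime and a Content Security Policy directive that blocks mixed content, shown as a tiny WSGI middleware.

```python
# Minimal sketch: add HSTS and mixed-content-blocking CSP headers to any WSGI app.
def security_headers(app):
    def wrapped(environ, start_response):
        def start_with_headers(status, headers, exc_info=None):
            headers = list(headers) + [
                # Tell browsers to use HTTPS for this host for one year.
                ("Strict-Transport-Security", "max-age=31536000"),
                # Refuse to load any insecure (http://) subresources.
                ("Content-Security-Policy", "block-all-mixed-content"),
            ]
            return start_response(status, headers, exc_info)
        return app(environ, start_with_headers)
    return wrapped

# Example: wrap a hypothetical WSGI app.
# application = security_headers(my_app)
```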



Machine learning for video transcoding

2016-05-16T09:03:23.119-07:00

At YouTube we care about the quality of the pixels we deliver to our users. With many millions of devices uploading to our servers every day, the content variability is so huge that delivering an acceptable audio and video quality in all playbacks is a considerable challenge. Nevertheless, our goal has been to continuously improve quality by reducing the amount of compression artifacts that our users see on each playback. While we could do this by increasing the bitrate for every file we create, that would quite easily exceed the capacity of many of the network connections available to you. Another approach is to optimize the parameters of our video processing algorithms to meet bitrate budgets and minimum quality standards. While Google's compute and storage resources are huge, they are finite, and so we must temper our algorithms to also fit within compute requirements.

The hard problem then is to adapt our pipeline to create the best quality output for each clip you upload to us, within constraints of quality, bitrate and compute cycles. This is a well known triad in the world of video compression and transcoding. The problem is usually solved by finding a sweet spot of transcoding parameters that seem to work well on average for a large number of clips. That sweet spot is sometimes found by trying every possible set of parameters until one is found that satisfies all the constraints. Recently, others have been using this "exhaustive search" idea to tune parameters on a per clip basis. What we'd like to show you in this blog post is a new technology we have developed that adapts our parameter set for each clip automatically using machine learning. We've been using this over the last year for improving the quality of movies you see on YouTube and Google Play.

The good and bad about parallel processing

We ingest more than 400 hours of video per minute. Each file must be transcoded from the uploaded video format into a number of other video formats with different codecs so we can support playback on any device you might have. The only way we can keep up with that rate of ingest and quickly show you your transcoded video in YouTube is to break each file into pieces called "chunks," and process these in parallel. Every chunk is processed independently and simultaneously by CPUs in our Google cloud infrastructure. The complexity involved in chunking and recombining the transcoded segments is significant. Quite aside from the mechanics of assembling the processed chunks, maintaining the quality of the video in each chunk is a challenge. This is because, to have as speedy a pipeline as possible, our chunks don't overlap and are also very small; just a few seconds. So the good thing about parallel processing is increased speed and reduced latency. But the bad thing is that without the information about the video in the neighboring chunks, it's now difficult to control chunk quality so that there is no visible difference between the chunks when we tape them back together. Small chunks don't give the encoder much time to settle into a stable state, hence each encoder treats each chunk slightly differently.

Smart parallel processing

You could say that we are shooting ourselves in the foot before starting the race. Clearly, if we communicate information about chunk complexity between the chunks, each encoder can adapt to what's happening in the chunks after or before it. But inter-process communication increases overall system complexity and requires some extra iterations in processing each chunk. Actually, OK, truth is we're stubborn here in Engineering, and we wondered how far we could push this idea of "don't let the chunks talk to each other." The plot below shows an example of the PSNR in dB per frame over two c[...]
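To picture the chunked pipeline, here is a minimal, hypothetical sketch of splitting a clip into short non-overlapping chunks and transcoding them in parallel. The transcode_chunk function is a stand-in for a real encoder invocation, and the parameters are placeholders.

```python
# Hypothetical sketch: split a video into short, non-overlapping chunks and
# transcode them simultaneously; recombination happens downstream.
from multiprocessing import Pool

CHUNK_SECONDS = 4  # "just a few seconds" per chunk, as described above

def transcode_chunk(chunk):
    start, end, params = chunk
    # In a real pipeline this would invoke an encoder (e.g. via subprocess)
    # with per-chunk parameters chosen by the quality model.
    return f"encoded[{start}-{end}s @ crf={params['crf']}]"

def transcode_parallel(duration_seconds, params):
    chunks = [(t, min(t + CHUNK_SECONDS, duration_seconds), params)
              for t in range(0, duration_seconds, CHUNK_SECONDS)]
    with Pool() as pool:
        encoded = pool.map(transcode_chunk, chunks)   # each chunk independently
    return encoded  # "taping back together" is a separate step

if __name__ == "__main__":
    print(transcode_parallel(10, {"crf": 23}))
```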



Because retro is in -- announcing historical data in the YouTube Reporting API

2016-05-10T09:00:08.473-07:00

YouTube creators rely on data -- data about how their channel is performing, data about their video’s ratings, their earnings. Lots of data. That’s why we launched the YouTube Reporting API back in October, which helps you bulk up your data requests while keeping them on a low-quota diet.

Reports made with the API started from the day you scheduled them, going forward. Now that it's been in the wild, we've heard another request loud and clear: you don't just want current data, you want older data, too. We're happy to announce that the Reporting API now delivers historical data covering 180 days prior to when the reporting job is first scheduled (or July 1, 2015, whichever is later).

Developers with a keen eye may have already noticed this, as it launched a few weeks ago! Just in case you didn’t, you can find more information on how historical data works by checking out the Historical Data section of the Reporting API docs.

(Hint: if you’ve already got some jobs scheduled, you don’t need to do anything! We’ll generate the data automatically.)

New to the Reporting API? Tantalized by the possibility of all that historical data? Our documentation explains everything you need to know about scheduling jobs and the types of reports available. Try it out with our API Explorer, then dive into the sample code or write your own with one of our client libraries.
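If you are starting from scratch, a minimal sketch of scheduling a job with the Python client library is shown below; once the job exists, the backfilled historical reports appear in jobs.reports.list alongside the regular daily ones. The report type ID here is just an example; list the types available to you with reportTypes.list.

```python
# Minimal sketch: schedule a Reporting API job (assumes google-api-python-client
# and OAuth2 credentials). The report type ID is illustrative.
from googleapiclient.discovery import build

def schedule_job(credentials, report_type_id="channel_basic_a2", name="my-historical-job"):
    reporting = build("youtubereporting", "v1", credentials=credentials)
    job = reporting.jobs().create(
        body={"reportTypeId": report_type_id, "name": name}).execute()
    print("created job", job["id"], "for report type", job["reportTypeId"])
    return job
```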

Happy reporting,

YouTube Developer Relations on behalf of Alvin Cham, Markus Lanthaler, Matteo Agosti, and Andy Diamondstein



Announcing the Mobile Data Plan API

2016-04-27T10:22:33.284-07:00

More than half of YouTube watch time happens on mobile devices, with a large and rapidly increasing fraction of this time spent on cellular networks. At the same time, it is common for users to have mobile data plans with usage limits. Users who exhaust their quota can incur overage charges, have their data connections turned off, or have their speeds reduced. When this happens, application performance suffers and user satisfaction decreases.

At the root of this problem lies the fact that users do not have an easy way to share data plan information with an application, and, in turn, applications cannot optimize the user's experience. In an effort to address this limitation we have worked with a few partners in the mobile ecosystem to specify an API that improves data transparency.

At a high level, the API comprises two parts. First, a mechanism for applications to establish an anonymous identifier of the user's data plan. This new Carrier Plan Identifier (CPID) protects the user's identity and privacy. Second, a mechanism that allows applications, after establishing a CPID, to request information about the user's data plan from the mobile network operator (MNO). Applications communicate with MNOs using HTTPS, and the API encodes data plan information in an extensible JSON-based format.

We believe the API will improve transparency and Quality of Experience (QoE) for mobile applications such as YouTube. For example, the cost of data can depend on the time of day, where users get discounts for using the network during off-peak hours. For another example, consider that while users with unlimited data plans may prefer high resolution videos, users who are about to exceed their data caps or are in a busy network may be better served by reduced data rate streams that extend the life of the data plan while still providing good quality.

Cellular network constraints are even more acute in countries where the cost of data is high, users have small data budgets, and networks are overutilized. With more than 80% of views coming from outside the United States, YouTube is the first Google application conducting field trials of the Mobile Data Plan API in countries, such as Malaysia, Thailand, the Philippines and Guatemala, where these characteristics are more prominent. These trials aim to bring data plan information in as an additional real-time input to YouTube's decision engine, tuned to improve QoE.

We believe the same data plan information will lay the foundation for other applications and mobile operators to innovate together. This collaboration can make data usage more transparent to users, incentivize efficient use of mobile networks, and optimize user experience.

We designed the API in cooperation with a number of key partners in the mobile ecosystem, including Telenor Group, Globe Telecom and Tigo, all of which have already adopted and implemented this API. Google also worked with Ericsson to support the Mobile Data Plan API in their OTT Cloud Connect platform. We invite other operators and equipment vendors to implement this solution and offer applicable products and services to their customers.

The Mobile Data Plan API specification is available from this link. We are looking forward to your comments, and we are available at data-plan-api@google.com.

Posted by Andreas Terzis, technical lead at Google Access, & Jessica Xu, product manager at YouTube.[...]
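To make the two-step flow concrete, here is a minimal sketch with a hypothetical operator endpoint and response shape; the real interface is defined by the specification linked above, so treat every URL and field name below as an illustrative assumption.

```python
# Hypothetical sketch of the flow described above: establish an anonymous
# Carrier Plan Identifier (CPID), then query plan status over HTTPS/JSON.
# The endpoint and response fields are made up for illustration.
import requests

MNO_BASE_URL = "https://dataplan.example-operator.com"   # hypothetical endpoint

def get_cpid():
    """Ask the operator for an anonymous identifier of this user's data plan."""
    response = requests.post(f"{MNO_BASE_URL}/cpid", timeout=10)
    response.raise_for_status()
    return response.json()["cpid"]

def get_plan_status(cpid):
    """Fetch data-plan information (quota, expiry, ...) for an established CPID."""
    response = requests.get(f"{MNO_BASE_URL}/plan_status",
                            params={"cpid": cpid}, timeout=10)
    response.raise_for_status()
    return response.json()

# plan = get_plan_status(get_cpid())
# An application could then, for example, prefer lower-bitrate streams when
# the plan's remaining quota is close to zero.
```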



A look into YouTube’s video file anatomy

2016-04-20T10:00:15.054-07:00

Over 1 billion people use YouTube, watching hundreds of millions of hours of content all over the world every day. We have been receiving content at a rate exceeding 100 hours/min for the last three years (currently at 400 hours/min). With those kinds of usage statistics, what we see on ingest actually says something about the state of video technology today.

Video files are the currency of video sharing and distribution over the web. Each file contains the video and audio data wrapped up in some container format and associated with metadata that describes the nature of the content in some way. To make sure each user can "Broadcast yourself," we have spent years building systems that can faithfully extract the video and audio data hidden inside almost any kind of file you can imagine. That is why when our users upload to YouTube they have confidence that their video and audio will always appear.

The video and audio data is typically compressed using a codec, and of course the data itself comes in a variety of resolutions, frame rates, sample rates and channels (in the case of audio). As technology evolves, codecs get better, and the nature of the data itself changes, typically toward higher fidelity. But how much variety is there in this landscape and how has that variety changed with time? We've been analyzing the anatomy of the files you've been uploading over the years and think it reflects how video technology has changed.

Audio/video file anatomy

Audio/video files contain audio and video media which can be played or viewed on multimedia devices like a TV, desktop or smartphone. Each pixel of video data is associated with values for brightness and color which tell the display how that pixel should appear. A quick calculation on the data rate for the raw video data shows that for 720p video at 30 frames per second the data rate is in excess of 420 Mbits/sec. Raw audio data rates are smaller but still significant at about 1.5 Mbits/sec for 44.1 kHz sampling with 16 bits per sample. These rates are well in excess of the tens of Mbits/sec (at most) that many consumers have today. By using compression technology, that same more than 400 Mbits/sec of data can be expressed in less than 5 Mbits/sec. This means that audio and video compression is a vital part of any practical media distribution system. Without compression we would not be able to stream media over the internet in the way everyone enjoys now.

There are three main components of media files today: the container, the compressed bitstream itself and finally metadata. The bitstream (called the video and audio "essence") contains the actual audio and video media in a compressed form. It will also contain information about the size of the pictures and the start and end of frames so that the codec knows how to decode the picture data in the right way. This information embedded in the bitstream is still not enough though. The "container" refers to the additional information that helps the decoder work out when a video frame is to be played, and when the audio data should be played relative to the frame. The container often also holds an index to the start of certain frames in the bitstream. This makes it easier for a player system to allow users to "seek" or "fast forward" through the contents. The container will also hold information about the file content itself, like the author, and other kinds of "metadata" that could be useful for a rights holder or a "menu" on a player, for instance. So the bitstream contains the actual picture and audio, but the container lets the player know how that content should be played.

Standardization of containers and codecs was vital for the digi[...]
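Here is the raw data-rate arithmetic worked out. The video figure assumes 16 bits per pixel (for example, 8-bit 4:2:2 chroma subsampling), which is one assumption that reproduces the "in excess of 420 Mbits/sec" number; other pixel formats give different but similarly large rates.

```python
# Worked arithmetic for the raw data rates quoted above.
WIDTH, HEIGHT, FPS = 1280, 720, 30          # 720p at 30 frames per second
BITS_PER_PIXEL = 16                         # assumption: 8-bit 4:2:2

raw_video_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
print(f"raw 720p30 video: {raw_video_bps / 1e6:.0f} Mbit/s")   # ~442 Mbit/s

SAMPLE_RATE, BITS_PER_SAMPLE, CHANNELS = 44_100, 16, 2
raw_audio_bps = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS
print(f"raw stereo audio: {raw_audio_bps / 1e6:.2f} Mbit/s")   # ~1.41 Mbit/s

# Compression brings the combined ~440 Mbit/s down to a few Mbit/s, which is
# what makes streaming over consumer connections possible.
```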



New YouTube live features: live 360, 1440p, embedded captions, and VP9 ingestion

2016-04-19T10:00:14.676-07:00

Yesterday at NAB 2016 we announced exciting new live and virtual reality features for YouTube. We're working to get you one step closer to actually being in the moments that matter while they are happening. Let's dive into the new features and capabilities that we are introducing to make this possible.

Live 360: About a year ago we announced the launch of 360-degree videos at YouTube, giving creators a new way to connect to their audience and share their experiences. This week, we took the next step by introducing support for 360-degree live streams on YouTube for all creators and viewers around the globe. To make sure creators can tell awesome stories with virtual reality, we've been working with several camera and software vendors to support this new feature, such as ALLie and VideoStitch. Manufacturers interested in 360 through our Live API can use our YouTube Live Streaming API to send 360-degree live streams to YouTube. Other 360-degree cameras can also be used to live stream to YouTube as long as they produce compatible output, for example, cameras that can act as a webcam over USB (see this guide for details on how to live stream to YouTube). Like 360-degree uploads, 360-degree live streams need to be streamed in the equirectangular projection format. Creators can use our Schedule Events interface to set up 360 live streams using this new option. Check out this help center page for some details.

1440p live streaming: Content such as live 360 as well as video games is best enjoyed at high resolutions and high frame rates, so we are also announcing support for 1440p 60fps resolution for live streams on YouTube. Live streams at 1440p have 70 percent more pixels than the standard HD resolution of 1080p. To ensure that your stream can be viewed on the broadest possible range of devices and networks, including those that don't support such high resolutions or frame rates, we perform full transcoding on all streams and resolutions. A 1440p60 stream gets transcoded to 1440p60, 1080p60 and 720p60 as well as all resolutions from 1440p30 down to 144p30. Support for 1440p will be available from our creation dashboard as well as our Live API. Creators interested in using this high resolution should make sure that their encoder is able to encode at such resolutions and that they have sufficient upload bandwidth on their network to sustain successful ingestion. A good rule of thumb is to provision at least twice the video bitrate.

VP9 ingestion / DASH ingestion: We are also announcing support for VP9 ingestion. VP9 is a modern video codec that lets creators upload higher resolution videos with lower bandwidth, which is particularly important for high resolution 1440p content. To facilitate the ingestion of this new video codec we are also announcing support for DASH ingestion, which is a simple, codec-agnostic HTTP-based protocol. DASH ingestion will support H.264 as well as VP9 and VP8. HTTP-based ingestion is more resilient to corporate firewalls and also allows ingestion over HTTPS. It is also a simpler protocol to implement for game developers that want to offer in-game streaming support with royalty-free video codecs. MediaExcel and Wowza Media Systems will both be demoing DASH VP9 encoding with YouTube live at their NAB booths. We will soon publish a detailed guide to DASH ingestion on our support web site. For developers interested in DASH ingestion, please join this Google group to receive updates.

Embedded captions: To provide more support to broadcasters, we now accept embedded EIA-608/CEA-708 captions over RTMP (H.264/AAC). That makes it easier to send captioned video content to YouTube and no longer requires posting capti[...]
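As a quick restatement of the 1440p60 example above, here is a small sketch that lists the output renditions described in the post and applies the "provision at least twice the video bitrate" rule of thumb. The exact production ladder and bitrates are YouTube's; the numbers here are illustrative.

```python
# Illustrative restatement of the transcode ladder described for a 1440p60
# ingest, plus the upload-bandwidth rule of thumb from the post.
RESOLUTIONS = [1440, 1080, 720, 480, 360, 240, 144]

def output_renditions(input_height=1440, input_fps=60):
    renditions = []
    if input_fps == 60:
        # 60fps renditions down to 720p (per the 1440p60 example above).
        renditions += [(h, 60) for h in RESOLUTIONS if 720 <= h <= input_height]
    # 30fps renditions from the input resolution down to 144p.
    renditions += [(h, 30) for h in RESOLUTIONS if h <= input_height]
    return renditions

def recommended_upload_kbps(video_bitrate_kbps):
    return 2 * video_bitrate_kbps   # "at least twice the video bitrate"

print(output_renditions())            # [(1440, 60), (1080, 60), (720, 60), (1440, 30), ...]
print(recommended_upload_kbps(9000))  # e.g. a hypothetical 9 Mbps encode -> 18000 kbps
```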



Chat it up, streamers! New Live Chat, Fan Funding & Sponsorships APIs

2016-03-16T05:43:53.369-07:00

From the moment YouTube Gaming launched in August, we've consistently seen a pair of requests from our community: "Where are the chat bots? Where are the stream overlays?" A number of developers were happy to oblige, and some great new tools have launched for YouTube streamers.

With those new tools has come some feedback on our APIs. Particularly, that there aren't enough of them. So much is happening on YouTube live streams -- chatting, fan funding, sponsoring -- but there's no good way to get the data out and into the types of apps that streamers want, like on-screen overlays, chat moderation bots and more.

Well well, what have we here? A whole bunch of new additions to the Live Streaming API, getting you access to all those great chat messages, fan funding alerts and new sponsor messages:

  • Fan Funding events, which occur when a user makes a one-time voluntary payment to support a creator.
  • Live Chat events, which allow you to read the content of a YouTube live chat in real time, as well as add new chat messages on behalf of the authenticated channel.
  • Live Chat bans, which enable the automated application of chat "time-outs" and "bans."
  • Sponsors, which allows access to a list of YouTube users that are sponsoring the channel. A sponsor provides recurring monetary support to a creator and receives special benefits.

In addition, we've closed a small gap in the LiveBroadcasts API by adding the ability to retrieve and modify the LiveBroadcast object for a channel's "Stream now" stream.

As part of the development process we gave early access to a few folks, and we're excited to show off some great integrations that launch today:

  • Using our new Sponsorships feature? Discord will let you offer your sponsors access to private voice and text servers.
  • Add live chat, new sponsors and new fan funding announcements to an overlay with the latest beta of Gameshow.
  • Looking for some help with moderating and managing your live chat? Try out Nightbot, a chat bot that can perform a variety of moderating tasks specifically designed to create a more efficient and friendly environment for your community.
  • Show off your live chat with an overlay in XSplit Broadcaster using their new YouTube Live Chat plugin.

We've also spotted some libraries and sample code on GitHub that might help get you started, including this chat library in Go and this one in Python.

We hope these new APIs can bring whole new categories of tools to the creator community. We're excited to see what you build!

Marc Chambers, Developer Relations, recently watched "ArmA 3 | Episode 1 | Pilot the CH53 E SS."[...]
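As a taste of the new Live Chat capabilities, here is a minimal sketch of posting a chat message on behalf of the authenticated channel with the Python client library. It assumes google-api-python-client and an OAuth2-authorized client.

```python
# Minimal sketch: post a message to a live chat as the authenticated channel.
from googleapiclient.discovery import build

def send_chat_message(youtube, live_chat_id, text):
    return youtube.liveChatMessages().insert(
        part="snippet",
        body={
            "snippet": {
                "liveChatId": live_chat_id,
                "type": "textMessageEvent",
                "textMessageDetails": {"messageText": text},
            }
        },
    ).execute()

# youtube = build("youtube", "v3", credentials=creds)  # creds: your OAuth2 credentials
# send_chat_message(youtube, "LIVE_CHAT_ID", "Hello from my chat bot!")
```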



Smoother

2016-03-16T05:51:12.889-07:00

Video quality matters, and when an HD or HFR playback isn’t smooth, we notice. Chrome noticed. YouTube noticed. So we got together to make YouTube video playback smoother in Chrome, and we call it Project Butter. For some context, our brains fill in the motion in between frames if each frame is onscreen the same amount of time - this is called motion interpolation. In other words, a 30 frames per second video won’t appear smooth unless each frame is spaced evenly each 1/30th of a second. Smoothness is more complicated than just this - you can read more about it in this article by Michael Abrash at Valve. Frame rates, display refresh rates and cadenceYour device’s screen redraws itself at a certain frame rate. Videos present frames at a certain rate. These rates are often not the same. At YouTube we commonly see videos authored at 24, 25, 29.97, 30, 48, 50, 59.94, and 60 frames per second (fps) and these videos are viewed on displays with different refresh rates - the most common being 50Hz (Europe) and 60Hz (USA).   For a video to be smooth we need to figure out the best, most regular way to display the frames - the best cadence. The ideal cadence is calculated as the ratio of the display rate to frame rate. For example, if we have a 60Hz display (a 1/60 second display interval) and a 30 fps clip, 60 / 30 == 2 which means each video frame should be displayed for two display intervals of total duration 2 * 1/60 second. We played videos a bunch of different ways and scored them on smoothness.   Smoothness scoreUsing off the shelf HDMI capture hardware and some special video clips we computed a percentage score based on the number of times each video frame was displayed relative to a calculated optimal display count. The higher the score, the more frames aligned with the optimal display frequency. Below is a figure showing how Chrome 43 performed when playing a 30fps clip back on a 60Hz display:Smoothness: 68.49%, ~Dropped: 5 / 900 (0.555556%) The y-axis is the number of times each frame was displayed, while the x-axis is the frame number. As mentioned previously the calculated ideal display count for a 30fps clip on a 60Hz display is 2. So, in an ideal situation, the graph should be a flat horizontal line at 2, yet Chrome dropped many frames and displayed certain frames for as many as 4 display cycles! The smoothness score reflects this -  only 68.49 percent of frames were displayed correctly. How could we track down what was going on? Using some of the performance tracing tools built into Chrome, we identified timing issues inherent to the existing design for video rendering as the culprit. These issues resulted in both missed and irregular video frames on a regular basis. There were two main problems in the interactions between Chrome’s compositor (responsible for drawing frames) and its media pipeline (responsible for generating frames) --  The compositor didn’t have a timely way of knowing when a video frame needed display. Video frames were selected on the media pipeline thread while the compositor would occasionally come along looking for them on the compositor thread, but if the compositor thread was busy it wouldn’t get the notification on time.Chrome’s media pipeline didn’t know when the compositor would be ready to draw its next new frame. This led to the media pipeline sometimes picking a frame that was too old by the time the compositor displayed it. 
In Chrome 44, we re-architected the media and compositor pipelines to communicate carefully about the intent to generate and display frames. We also improved which [...]
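To make the cadence and scoring discussion concrete, here is a small illustrative calculation. The post does not spell out the exact formula behind the published smoothness percentage, so the scoring rule below (the fraction of frames shown for exactly the ideal number of display intervals) is an assumption that merely mirrors the description above.

    # Illustrative only: ideal cadence and a smoothness percentage computed from
    # per-frame display counts (e.g. measured with HDMI capture hardware).
    # The published score's exact formula may differ.

    def ideal_cadence(display_hz, video_fps):
        # 60 Hz display, 30 fps clip -> each frame should be shown for 2 intervals.
        return display_hz / video_fps

    def smoothness_score(display_counts, display_hz, video_fps):
        ideal = round(ideal_cadence(display_hz, video_fps))
        aligned = sum(1 for c in display_counts if c == ideal)
        return 100.0 * aligned / len(display_counts)

    # A 30 fps clip on a 60 Hz display: two dropped frames (shown 0 times) and two
    # frames held too long (shown 4 times) drag the score below 100%.
    counts = [2] * 896 + [0, 0, 4, 4]
    print(ideal_cadence(60, 30))                        # 2.0
    print(round(smoothness_score(counts, 60, 30), 2))   # 99.56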



Improving YouTube video thumbnails with deep neural nets

2015-10-08T10:53:05.434-07:00

Video thumbnails are often the first things viewers see when they look for something interesting to watch. A strong, vibrant, and relevant thumbnail draws attention, gives viewers a quick preview of the content of the video, and helps them find content more easily. Better thumbnails lead to more clicks and views for video creators. Inspired by the recent remarkable advances of deep neural networks (DNNs) in computer vision, such as image and video classification, our team has recently launched an improved automatic YouTube "thumbnailer" to help creators showcase their video content. Here is how it works.

The Thumbnailer Pipeline

While a video is being uploaded to YouTube, we first sample frames from the video at one frame per second. Each sampled frame is evaluated by a quality model and assigned a single quality score. The frames with the highest scores are selected, enhanced and rendered as thumbnails with different sizes and aspect ratios. Among all the components, the quality model is the most critical and turned out to be the most challenging to develop. In the latest version of the thumbnailer algorithm, we used a DNN for the quality model. So, what is the quality model measuring, and how is the score calculated?

The main processing pipeline of the thumbnailer. (Training)

The Quality Model

Unlike the task of identifying whether a video contains your favorite animal, judging the visual quality of a video frame can be very subjective -- people often have very different opinions and preferences when selecting frames as video thumbnails. One of the main challenges we faced was how to collect a large set of well-annotated training examples to feed into our neural network. Fortunately, on YouTube, in addition to having algorithmically generated thumbnails, many videos also come with carefully designed custom thumbnails uploaded by creators. Those thumbnails are typically well framed, in focus, and centered on a specific subject (e.g. the main character in the video). We consider these custom thumbnails from popular videos as positive (high-quality) examples, and randomly selected video frames as negative (low-quality) examples. Some examples of the training images are shown below.

Example training images.

The visual quality model essentially solves a problem we call "binary classification": given a frame, is it of high quality or not? We trained a DNN on this set using an architecture similar to the Inception network in GoogLeNet, which achieved the top performance in the ImageNet 2014 competition.

Results

Compared to the previous automatically generated thumbnails, the DNN-powered model is able to select frames with much better quality. In a human evaluation, the thumbnails produced by our new model are preferred to those from the previous thumbnailer in more than 65% of side-by-side ratings. Here are some examples of how the new quality model performs on YouTube videos:

Example frames with low and high quality scores from the DNN quality model, from the video “Grand Canyon Rock Squirrel”.

Thumbnails generated by the old vs. new thumbnailer algorithm.

We recently launched this new thumbnailer across YouTube, which means creators can start to choose from higher quality thumbnails generated by our new thumbnailer. Next time you see an awesome YouTube thumbnail, don’t hesitate to give it a thumbs up.
;) Weilong Yang, software engineer, recently watched “Contact Juggling - His Skills are Totally Hypnotizing” Min-hsuan Tsai, software engineer, recently watched ”People Are Awesome 2015” Thanks to the Video Content Ana[...]
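The pipeline described above (sample at one frame per second, score each frame with a quality model, keep the best) maps to a simple loop. Below is a rough sketch, not YouTube's implementation: it assumes OpenCV for decoding, and `score_frame` is a hypothetical stand-in for the trained DNN quality model (here a trivial sharpness proxy just so the sketch runs end to end).

    # Rough sketch of the described thumbnailer pipeline. Requires: pip install opencv-python
    import cv2

    def score_frame(frame):
        # Placeholder for the DNN quality model described in the post.
        # Variance of the Laplacian is used only as a crude sharpness proxy.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def pick_thumbnail_candidates(video_path, top_k=3):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30
        scored = []
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % int(round(fps)) == 0:      # sample ~1 frame per second
                scored.append((score_frame(frame), index, frame))
            index += 1
        cap.release()
        scored.sort(key=lambda t: t[0], reverse=True)
        return [(idx, frame) for _, idx, frame in scored[:top_k]]

The interesting part in production is everything this sketch hides: the quality model itself, plus the enhancement and multi-aspect-ratio rendering steps mentioned in the post.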



Access to YouTube Analytics data in bulk

2015-10-02T06:34:44.131-07:00

Want to get all of your YouTube data in bulk? Are you hitting the quota limits while accessing analytics data one request at a time? Do you want to be able to break down reports by more dimensions? What about accessing asset and revenue data?

With the new YouTube Bulk Reports API, your authorized application can retrieve bulk data reports in the form of CSV files that contain YouTube Analytics data for a channel or content owner. Once activated, reports are generated daily and contain data for a unique 24-hour period. While the existing YouTube Analytics API supports real-time, targeted queries of much of the same data, the Bulk Reports API is designed for applications that retrieve and import large data sets, then use their own tools to filter, sort, and mine that data. As of now the API supports video, playlist, ad performance, estimated earnings and asset reports.

How to start developing

Choose your reports:

- Video reports provide statistics for all user activity related to a channel's videos or a content owner's videos. For example, these metrics include the number of views or ratings that videos received. Some video reports for content owners also include earnings and ad performance metrics.
- Playlist reports provide statistics that are specifically related to video views that occur in the context of a playlist.
- Ad performance reports provide impression-based metrics for ads that ran during video playbacks. These metrics account for each ad impression, and each video playback can yield multiple impressions.
- Estimated earnings reports provide the total earnings for videos from Google-sold advertising sources as well as from non-advertising sources. These reports also contain some ad performance metrics.
- Asset reports provide user activity metrics related to videos that are linked to a content owner's assets. For its data to be included in the report, a video must have been uploaded by the content owner and then claimed as a match of an asset in the YouTube Content ID system.

Schedule reports:

1. Get an OAuth token (authentication credentials).
2. Call the reportTypes.list method to retrieve a list of the available report types.
3. Create a new reporting job by calling jobs.create and passing the desired report type (and/or query in the future).

Retrieve reports:

1. Get an OAuth token (authentication credentials).
2. Call the jobs.list method to retrieve a list of the available reporting jobs and remember the ID of the job you want.
3. Call the reports.list method with the jobId filter parameter set to that ID to retrieve a list of downloadable reports that the job created. Creators can check the report's last-modified date to determine whether the report has been updated since the last time it was retrieved.
4. Fetch the report from the URL obtained in step 3.

(The sketch after this post shows these steps with the Python client library.)

Sample code and tools

- Client libraries for many different programming languages can help you implement the YouTube Reporting API as well as many other Google APIs.
- Don't write code from scratch! Our Java, PHP, and Python code samples will help you get started.
- The APIs Explorer lets you try out sample calls before writing any code.

Cheers,
—Markus Lanthaler, Paul Harvey, Ronnie Falcon, Ibrahim Ulukaya, and the YouTube API Team[...]
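Here is a minimal sketch of the schedule-then-retrieve flow using the Google API Python client. It assumes `credentials` is an already-authorized OAuth object with the YouTube Analytics reporting scopes; the method and field names follow the public Reporting API (youtubereporting, v1) documentation, but check the current reference before relying on the exact shapes, and note that the token attribute on your credentials object may differ by auth library.

    # Sketch: create a reporting job, then later download its newest report as CSV.
    import requests
    from googleapiclient.discovery import build

    reporting = build("youtubereporting", "v1", credentials=credentials)

    # 1. Discover available report types and create a daily job for one of them.
    report_types = reporting.reportTypes().list().execute().get("reportTypes", [])
    job = reporting.jobs().create(
        body={"reportTypeId": report_types[0]["id"], "name": "my-channel-report"}
    ).execute()

    # 2. Later: list the reports the job has produced and fetch the newest one.
    reports = reporting.jobs().reports().list(jobId=job["id"]).execute().get("reports", [])
    if reports:
        latest = max(reports, key=lambda r: r["createTime"])
        csv_bytes = requests.get(
            latest["downloadUrl"],
            headers={"Authorization": "Bearer " + credentials.token},  # attribute name depends on auth library
        ).content
        with open("report.csv", "wb") as f:
            f.write(csv_bytes)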



Ten years of YouTube video tech in ten videos

2015-05-12T16:09:27.507-07:00

2005: YouTube is born
"Me at the Zoo" is the first video uploaded to YouTube. (https://www.youtube.com/embed/jNQXAC9IVRw)

2006: Google buys YouTube
One year after YouTube launches, videos play in the FLV container with the H.263 codec at a maximum resolution of 240p. We scale videos up to 640x360, but you can still click a button to play at original size. (https://www.youtube.com/embed/dMH0bHeiRNg)

2007: YouTube goes mobile
YouTube is one of the original applications on the iPhone. Because it doesn't support Flash, we re-encode every single YouTube video into H.264 with the MP4 container. YouTube videos get a resolution notch to 360p. (https://www.youtube.com/embed/_OBlgSz8sSM)

2008: YouTube kicks it up to HD
With upload sizes and download speeds growing, videos jump in size up to 720p HD. Lower resolution files get higher quality by squeezing Main Profile H.264 into FLVs. (https://www.youtube.com/embed/edLB6YWZ-R4)

2009: YouTube enters the third dimension
YouTube supports 3D videos, 1080p and live streaming. (https://www.youtube.com/embed/5ANcspdYh_U)

2010: YouTube's on TV
The biggest screen in your house now gets YouTube courtesy of Flash Lite and ActionScript 2. 2010 also sees the first playbacks with HTML5. (https://www.youtube.com/embed/owGykVbfgUE)



Bye-bye, YouTube Data API v2

2015-08-03T11:32:02.131-07:00

UPDATE 08/03/15: Starting today, the API v2 comments, captions and video flagging services are turned down.

UPDATE 06/03/15: Starting today, most YouTube Data API v2 calls will receive 410 Gone HTTP responses.

UPDATE 05/06/15: Starting today, YouTube Data API v2 video feeds will only return the support video.

UPDATE: With the launch of video abuse reporting and video search for developers, the Data API v3 supports every feature scheduled to be migrated from the soon-to-be-turned-down Data API v2.

With the recent additions of comments, captions, and RSS push notifications, the Data API v3 supports almost every feature scheduled to be migrated from the soon-to-be-turned-down Data API v2. The only remaining feature to be migrated is video flagging, which will launch in the coming days. The new API brings in many features from the latest version of YouTube, making sure your users are getting the best YouTube experience on any screen.

For a quick trip down memory lane: in March 2014, we announced that the Data API v2 would be retired on April 20, 2015, and would be shut down soon thereafter. To help with your migration, we launched the migration guide in September 2014, and have also been giving you regular notices on v3 feature updates.

Retirement plan

If you’re still using the Data API v2, today we’ll start showing a video at the top of your users’ video feeds that will notify them of how they might be affected. Apart from that, your apps will work as usual. (video: https://www.youtube.com/embed/UKY3scPIMd8)

In early May, Data API v2 video calls will start returning only the warning video introduced on April 20. Users will not be able to view other videos on apps that use the v2 API video calls. See youtube.com/devicesupport for affected devices.

By late May, v2 API calls except for comments and captions will receive 410 Gone HTTP responses. You can test your application’s reaction to this response by pointing the application at eol.gdata.youtube.com instead of gdata.youtube.com. While you should migrate your app as soon as possible, these features will work in the Data API v2 until the end of July 2015 to avoid any outages.

How you can migrate

Check out the frequently asked questions and migration guide for the most up-to-date instructions on how to update specific features to use the Data API v3. The guide now lists all of the Data API v2 functionality that is being deprecated and won't be offered in the Data API v3. It also includes updated instructions for a few newly migrated features, like comments, captions, and video flagging.

- Ibrahim Ulukaya, and the YouTube for Developers team[...]
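If you want to see how your code reacts to those shutdown responses before they hit real traffic, a quick check against the eol host mentioned above is enough. The sketch below is illustrative only: the feed path is just a typical v2-style URL (not taken from this post), and the host may no longer resolve today.

    # Quick, illustrative check of how client code handles the v2 shutdown response.
    import requests

    resp = requests.get("https://eol.gdata.youtube.com/feeds/api/videos", timeout=10)
    if resp.status_code == 410:
        print("Got 410 Gone - migrate this code path to the Data API v3.")
    else:
        print("Unexpected status:", resp.status_code)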



Manage comments with the YouTube Data API v3

2015-04-10T09:53:44.157-07:00

Cindy, 3 hours ago: I wish my app could manage YouTube comments.

Ibrahim, 2 hours ago: Then it's your day today. With the new YouTube Data API (v3) you can now have comments in your app. Just register your application to use the v3 API and then check out the documentation for the Comments and CommentThreads resources and their methods.

Andy, 2 hours ago: +Cindy R u still on v2? U know the v2 API is being deprecated on April 20, and you’ve updated to v3, right?

Andy, 1 hour ago: +Ibrahim I can haz client libraries, too?

Ibrahim, 30 minutes ago: Yes, there are client libraries for many different programming languages, and there are already Java, PHP, and Python code samples.

Matt, 20 minutes ago: My brother had a python and he used to feed it mice. Pretty gross!

Cindy, 10 minutes ago: Thanks, +Ibrahim. This is very cool. The APIs Explorer lets you try out sample calls before writing any code, too.

Ibrahim, 5 minutes ago: Check out this interactive demo that uses the new comments retrieval feature and the Google Prediction API. The demo displays audience sentiment for any video by retrieving the video's comments and feeding them to the Cloud Prediction API for sentiment analysis.

- Ibrahim Ulukaya, Andy Diamondstein and the YouTube for Developers team[...]
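For reference, here is a minimal Python sketch of the kind of call the thread above is joking about: listing a video's top-level comments through the CommentThreads resource. It assumes an API key with the Data API enabled; the placeholder key and video ID are yours to fill in, and write operations (replying, moderating) would need OAuth instead.

    # Minimal sketch: list top-level comments on a video with the Data API v3.
    from googleapiclient.discovery import build

    youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

    threads = youtube.commentThreads().list(
        part="snippet",
        videoId="VIDEO_ID",
        maxResults=25,
        textFormat="plainText",
    ).execute()

    for item in threads.get("items", []):
        top = item["snippet"]["topLevelComment"]["snippet"]
        print(f'{top["authorDisplayName"]}: {top["textDisplay"]}')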



VP9: Faster, better, buffer-free YouTube videos

2015-04-06T09:17:18.852-07:00

As more people watch more high-quality videos across more screens, we need video formats that provide better resolution without increasing bandwidth usage. That’s why we started encoding YouTube videos in VP9, the open-source codec that brings HD and even 4K (2160p) quality at half the bandwidth used by other known codecs.

VP9 is the most efficient video compression codec in widespread use today. In the last year alone, YouTube users have already watched more than 25 billion hours of VP9 video, billions of which would not have been played in HD without VP9's bandwidth benefits. And with more of our device partners adopting VP9, we wanted to give you a primer on the technology.

How VP9 works

Videos hold a lot of information. If video were stored in the same format that a camera sensor uses when shooting a scene, the resulting files would be enormous -- raw 4K is up to 18,000 Mbps! Instead, modern video compression looks at a video more like a person might: by encoding a description of the features in a scene, and tracking how those features move and change. This compression is hundreds of times more efficient than a camera sensor's recording, and is what makes video streaming possible.

While VP9 uses the same basic blueprint as previous codecs, the WebM team has packed improvements into VP9 to get more quality out of each byte of video. For instance, the encoder prioritizes the sharpest image features, and the codec now uses asymmetric transforms to help keep even the most challenging scenes looking crisp and[...]
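As a back-of-the-envelope check on the "raw 4K" figure above: the post doesn't state the frame size, bit depth, or frame rate behind the quoted number, so the 3840x2160, 12-bits-per-channel, 60 fps assumptions below are ours. They land in the same ballpark as the quoted ~18,000 Mbps.

    # Raw bitrate of uncompressed 4K video under assumed parameters (ours, not the post's).
    width, height = 3840, 2160
    bits_per_pixel = 3 * 12      # RGB, 12 bits per channel
    fps = 60

    mbps = width * height * bits_per_pixel * fps / 1e6
    print(round(mbps))           # ~17916 Mbps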



Scaling MySQL in the cloud with Vitess and Kubernetes

2015-04-01T14:46:04.160-07:00

[Cross-posted from the Google Cloud Platform Blog]

Your new website is growing exponentially. After a few rounds of high fives, you start scaling to meet this unexpected demand. While you can always add more front-end servers, eventually your database becomes a bottleneck, which leads you to:

- Add more replicas for better read throughput and data durability
- Introduce sharding to scale your write throughput and let your data set grow beyond a single machine
- Create separate replica pools for batch jobs and backups, to isolate them from live traffic
- Clone the whole deployment into multiple datacenters worldwide for disaster recovery and lower latency

At YouTube, we went on that journey as we scaled our MySQL deployment, which today handles the metadata for billions of daily video views and 300 hours of new video uploads per minute. To do this, we developed the Vitess platform, which addresses scaling challenges while hiding the associated complexity from the application layer.

Vitess is available as an open-source project and runs best in a containerized environment. With Kubernetes and Google Container Engine as your container cluster manager, it's now a lot easier to get started. We’ve created a single deployment configuration for Vitess that works on any platform that Kubernetes supports.

In addition to being easy to deploy in a container cluster, Vitess also takes full advantage of the benefits offered by a container cluster manager, in particular:

- Horizontal scaling -- add capacity by launching additional nodes rather than making one huge node
- Dynamic placement -- let the cluster manager schedule Vitess containers wherever it wants
- Declarative specification -- describe your desired end state, and let the cluster manager create it
- Self-healing components -- recover automatically from machine failures

In this environment, Vitess provides a MySQL storage layer with improved durability, scalability, and manageability.

We're just getting started with this integration, but you can already run Vitess on Kubernetes yourself. For more on Vitess, check out vitess.io, ask questions on our forum, or join us on GitHub. In particular, take a look at our overview to understand the trade-offs of Vitess versus NoSQL solutions and fully managed MySQL solutions like Google Cloud SQL.

-Posted by Anthony Yeh, Software Engineer, YouTube[...]