How does YouTube help ensure that unintended harmful bias is not present in its systems?
We rely on everyday people from around the globe to help train our search and discovery systems, and the guidelines they follow are publicly available. Our search and recommendation systems are not designed to filter or demote videos or channels based on specific political perspectives.
Additionally, we audit our machine learning systems to help ensure that unintended algorithmic bias, such as gender bias, isn’t present. We correct mistakes when we find them and retrain the systems to be more accurate moving forward.
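To illustrate the general idea behind this kind of audit, the sketch below compares a model’s accuracy across demographic slices of an evaluation set and flags a gap worth investigating. It is a simplified, hypothetical example, not YouTube’s actual audit tooling; the data, group labels, and tolerance are all illustrative assumptions.

```python
# Minimal sketch of one common bias-audit technique: comparing a model's
# error rates across demographic slices of an evaluation set.
# All data, group names, and thresholds below are hypothetical.

from collections import defaultdict

# Each record: (group_label, true_label, predicted_label)
eval_set = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

# Tally correct vs. total predictions per group.
totals = defaultdict(int)
correct = defaultdict(int)
for group, truth, pred in eval_set:
    totals[group] += 1
    correct[group] += int(truth == pred)

accuracy = {g: correct[g] / totals[g] for g in totals}
print("Per-group accuracy:", accuracy)

# Flag the audit if accuracy diverges between groups by more than an
# (arbitrary, illustrative) tolerance -- a signal to investigate and retrain.
MAX_GAP = 0.05
gap = max(accuracy.values()) - min(accuracy.values())
if gap > MAX_GAP:
    print(f"Accuracy gap of {gap:.2f} exceeds tolerance; review and retrain.")
```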
Do YouTube’s policies unfairly target certain groups or political viewpoints?
When developing and refreshing our policies, we make sure to hear from a range of voices, including Creators, subject-area experts, free speech proponents, and policy organizations from across the political spectrum.
Once a policy has been developed, we invest significant time making sure it is enforced consistently by our global team of reviewers, based on objective guidelines. Before any policy launches, reviewers in a staging environment (where policy decisions aren’t actually applied) must reach the same decision at a very high rate. If they fail to do so, we revise the training and internal guidelines for clarity and repeat the process. The goal is to reduce subjectivity and personal bias so we can achieve high accuracy and consistency when operating at scale. Only once we reach an acceptable level of accuracy is the policy launched to the public.
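As a rough illustration of what a consistency check like this could look like, the sketch below measures how often a pool of reviewers reaches the same decision on staged test cases and compares that rate to a launch bar. The decisions, reviewer pool, and 95% threshold are hypothetical assumptions, not YouTube’s actual criteria.

```python
# Minimal sketch of a reviewer-consistency check before a policy launch.
# The decisions and the 95% launch threshold are illustrative only.

# Each inner list holds the decisions every reviewer made on one staged case.
staged_decisions = [
    ["remove", "remove", "remove"],
    ["keep", "keep", "keep"],
    ["remove", "keep", "remove"],   # reviewers disagreed on this case
    ["keep", "keep", "keep"],
]

# A case counts as consistent only if every reviewer made the same call.
consistent = sum(1 for case in staged_decisions if len(set(case)) == 1)
agreement_rate = consistent / len(staged_decisions)
print(f"Agreement rate: {agreement_rate:.0%}")

LAUNCH_THRESHOLD = 0.95  # illustrative bar for "a very high rate"
if agreement_rate >= LAUNCH_THRESHOLD:
    print("Consistency bar met; policy can proceed toward launch.")
else:
    print("Below threshold; revise training and guidelines, then re-test.")
```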