Hello! I’m not the u/worstnerd but I’m not far from it, maybe third or fourth worst of the nerds? All that to say, I’m here to bring you our Q1 Safety & Security report. In addition to the quarterly numbers, we’re highlighting some results from the ban evasion filter we launched in Q1 to help mods keep their communities safe, as well as updates to our Automod notification architecture.
Q1 By The Numbers
Category | Volume (Oct - Dec 2022) | Volume (Jan - Mar 2023) |
---|---|---|
Reports for content manipulation | 7,924,798 | 8,002,950 |
Admin removals for content manipulation | 79,380,270 | 77,403,196 |
Admin-imposed account sanctions for content manipulation | 14,772,625 | 16,194,114 |
Admin-imposed subreddit sanctions for content manipulation | 59,498 | 88,772 |
Protective account security actions | 1,271,742 | 1,401,954 |
Reports for ban evasion | 16,929 | 20,532 |
Admin-imposed account sanctions for ban evasion | 198,575 | 219,376 |
Reports for abuse | 2,506,719 | 2,699,043 |
Admin-imposed account sanctions for abuse | 398,938 | 447,285 |
Admin-imposed subreddit sanctions for abuse | 1,202 | 897 |
Ban Evasion Filter
Ban evasion has been a persistent problem for mods (and admins). Over the past year, we’ve been working on a ban evasion filter: an optional subreddit setting that leverages our ability to identify posts and comments authored by potential ban evaders. Our goal in offering this feature is to reduce the time mods spend detecting ban evaders and to limit the negative impact those accounts can have on a community.
First piloted in August 2022, the ban evasion filter was released to all communities this May after we incorporated feedback from mods. Since then, we’ve seen communities adopt the filter and keep it on – with positive qualitative feedback too. We have a few improvements on the radar, including faster detection of ban evaders, and we look forward to continuing to iterate with y’all.
- Adoption: 7,500 communities have turned on the ban evasion filter
- Volume: 5,500 pieces of content are ban evasion-filtered per week across communities that have adopted the tool
- Reversal Rate: Mods keep 92% of ban evasion-filtered content out of their communities, indicating the filter is catching the right stuff
- Retention: 98.7% of communities that have turned on the ban evasion filter have kept it on
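To make the flow concrete, here is a minimal sketch of how an opt-in filter like this could route content. Everything here is hypothetical – the names (`evasion_score`, `THRESHOLD`, `mod_queue`) and the scoring logic are illustrative stand-ins, not Reddit's actual internals: content from an account the platform links to a banned one goes to a mod review queue instead of publishing, which is what the "reversal rate" above measures mods acting on.

```python
from dataclasses import dataclass, field

THRESHOLD = 0.8  # assumed confidence cutoff for "potential ban evader"

@dataclass
class Subreddit:
    name: str
    ban_evasion_filter_enabled: bool = False        # the opt-in setting
    mod_queue: list = field(default_factory=list)   # filtered items awaiting mod review
    feed: list = field(default_factory=list)        # visible content

def evasion_score(author: str) -> float:
    """Stand-in for the platform signal linking an account to a banned one."""
    flagged = {"throwaway123": 0.95}  # toy data for illustration
    return flagged.get(author, 0.0)

def submit(sub: Subreddit, author: str, body: str) -> str:
    """Route new content: hold it for mod review if the subreddit opted in
    and the author looks like a potential ban evader; otherwise publish."""
    if sub.ban_evasion_filter_enabled and evasion_score(author) >= THRESHOLD:
        sub.mod_queue.append((author, body))
        return "filtered"
    sub.feed.append((author, body))
    return "published"
```

Because the setting is per-subreddit, communities that leave it off see no change: `submit` publishes everything for them regardless of score.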
Automod Notification Checks
Last week, we started rolling out changes to the way our notification systems are architected. Automod will now run before post and comment reply notifications are sent out. This includes both push notifications and email notifications. The change will be fully rolled out in the next few weeks.
This change is designed to improve the user experience on our platform. By running the content checks before notifications are sent out, we can ensure that users don't see content that has been taken down by Automod.
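The reordering described above can be sketched in a few lines. This is an assumption-laden illustration, not Reddit's actual notification service: `automod_allows` and `deliver_reply` are hypothetical names, and the point is simply that the content check now happens before any push or email notification fires.

```python
def automod_allows(comment: str) -> bool:
    """Stand-in for a subreddit's Automod rules (toy phrase list)."""
    banned_phrases = {"spam-link"}
    return not any(phrase in comment for phrase in banned_phrases)

def deliver_reply(comment: str, notify) -> bool:
    """New order of operations: run the Automod check first, and only
    dispatch the notification if the content survives it."""
    if not automod_allows(comment):
        return False          # removed by Automod: no push or email goes out
    notify(comment)           # previously, this could fire before the check
    return True
```

Under the old ordering, `notify` would have been called before `automod_allows`, so a user could receive a notification for a reply that was already gone by the time they tapped it.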
Up Next
More Community Safety Filters
We’re working on a new set of community moderation filters for mature content – a feature mods have told us they want – to further prevent this content from showing up in places where it shouldn’t appear or where users might not expect it. We already employ automated tagging at the site level for sexually explicit content, so this will add to those protections by providing a subreddit-level filter for a wider range of mature content. We’re working to get the first version of these filters to mods in the next couple of months.
Hello Reddit Community,
Today, we're rolling out updates to Rule 3 and Rule 4 of our Content Policy to clarify the scope of these rules and give everyone a better sense of the types of content and behaviors that are not allowed on Reddit. This is part of our ongoing work to be transparent with you about how we’re evolving our sitewide rules to keep Reddit safe and healthy.
First, we're updating the language of our Rule 3 policy prohibiting non-consensual intimate media to more specifically address AI-generated sexual media. While faked depictions of non-consensual intimate media are already prohibited, this update makes it clear that sexually explicit AI-generated content violates our rules if it depicts a real, identifiable person.
This update also clarifies that AI-generated sexual media depicting fictional people, as well as artistic depictions such as cartoons or anime (whether AI-generated or not), do not fall under this rule. Keep in mind, however, that this type of media may still violate subreddit-specific rules or other policies (such as our policy against copyright infringement), which our Safety teams already enforce across the platform.
Sidenote: Reddit also leverages StopNCII.org, a free online tool that helps platforms detect and remove non-consensual intimate media while protecting the victim’s privacy. You can read more about how StopNCII.org works here. If you've been affected by this issue, you can access the tool here.
Now to Rule 4. While the vast majority of Reddit users are adults, it is critical that our community continues to prioritize the safety, security, and privacy of minors, regardless of their engagement with our platform. Given the importance of minor safety, we are expanding the scope of this rule to also prohibit non-sexual forms of abuse of minors (e.g., neglect, or physical or emotional abuse, such as videos of physical school fights). This represents a new violative content category.
Additionally, we already interpret Rule 4 to prohibit inappropriate and predatory behaviors involving minors (e.g., grooming) and actively enforce against this content. In line with this, we’re adding language to Rule 4 to make this even clearer.
You'll also note that we're parting ways with some outdated terminology (e.g., "child pornography") and adding specific examples of violative content and behavior to shed light on our interpretation of the rule.
As always, to help keep everyone safe, we encourage you to flag potentially violative content by using the in-line report feature on the app or website, or by visiting this page.
That's all for now, and I'll be around for a bit to answer any questions on this announcement!
Hi all, I’m u/outersunset, and I’m here to share that Reddit has released our full-year Transparency Report for 2022. Alongside this, we’ve also just launched a new online Transparency Center, which serves as a central source for Reddit safety, security, and policy information. Our goal is that the Transparency Center will make it easier for users - as well as other interested parties, like policymakers and the media - to find information about how we moderate content, deal with complex things like legal requests, and keep our platform safe for all kinds of people and interests.
And now, our 2022 Transparency Report: as many of you know, we publish these reports on a regular basis to share insights and metrics about content removed from Reddit – including content proactively removed as a result of automated tooling – as well as accounts suspended, and legal requests from governments, law enforcement agencies, and third parties to remove content or lawfully obtain private user data.
Reddit’s Biggest Content Creation Year Yet
- Content Creation: This year, our report shows that there was a lot of content on Reddit. 2022 was the biggest year of content creation on Reddit to date, with users creating an eye-popping 8.3 billion posts, comments, chats, and private messages on our platform (you can relive some of the beautiful mess that was 2022 via our Reddit Recap).
- Content Policy Compliance: Importantly, the overwhelming majority – over 96% – of Reddit content in 2022 complied with our Content Policy and individual community rules. This is a slight increase from last year’s 95%. The remaining 4% of content in 2022 was removed by moderators or admins, with the overwhelming majority of admin removals (nearly 80%) being due to spam, such as karma farming.
Other key highlights from this year include:
- Content & Subreddit Removals: Consistent with previous years, there were increased content and subreddit removals across most policy categories. Based on the data as a whole, we believe this is largely due to our evolving policies and continuous enforcement improvements. We’re always looking for ways to make our platform a healthy place for all types of people and interests, and this year’s data demonstrates that we’re continuing to improve over time.
  - We’d also like to give a special shoutout to the moderators of Reddit, who accounted for 58% of all content removed in 2022. This was an increase of 4.7% compared to 2021, and roughly 69% of these removals were the result of proactive Automod rules. Building out simpler, better, and faster mod tooling is a priority for us, so watch for more updates from us there.
- Global Legal Requests: We saw increased volumes across nearly all types of global legal requests, in line with industry trends.
  - This includes year-over-year increases of 43% in copyright notices, 51% in legal removal requests submitted by government and law enforcement agencies, 61% in legal requests for account information from government and law enforcement agencies, and 95% in trademark notices.
You can read more insights in the full-year 2022 Transparency Report here.
Starting later this year, we’ll be shifting to publishing this full report – with both legal requests and content moderation data – on a twice-yearly cadence (our first mid-year Transparency Report focused only on legal requests). So expect to see us back with the next report later in 2023!
Overall, it’s important to us that we remain open and transparent with you about what we do and why. Not only is “Default Open” one of our company values, we also think it’s the right thing to do and central to our mission to bring community, empowerment, and belonging to everyone in the world. Please let us know in the comments what other kinds of data and insights you’d be interested in seeing. I’ll stick around for a bit to hear your feedback and answer some questions.