Reddit Transparency Report: Jul-Dec 2023

Hello, redditors!

Today we published our Transparency Report for the second half of 2023, which shares data and insights about our content moderation and legal requests from July through December 2023.

Reddit’s biannual Transparency Reports provide insights and metrics about content that was removed from Reddit – including content proactively removed as a result of automated tooling, accounts that were suspended, and legal requests we received from governments, law enforcement agencies, and third parties from around the world to remove content or disclose user data.

Some key highlights include:

  • Content Creation & Removals:

    • Between July and December 2023, redditors shared over 4.4 billion pieces of content, bringing the total content on Reddit (posts, comments, private messages, and chats) in 2023 to over 8.8 billion (+6% YoY). The vast majority of content (~96%) was not found to violate our Content Policy or individual community rules.

      • Of the ~4% of removed content, about half was removed by admins and half by moderators. (Note that moderator removals include removals due to their individual community rules, and so are not necessarily indicative of content being unsafe, whereas admin removals only include violations of our Content Policy).

      • Over 72% of moderator actions were taken with Automod, a customizable tool provided by Reddit that mods can use to take automated moderation actions. We have enhanced the safety tools available to mods and expanded Automod in the past year. You can see more about that here. (A rough sketch of estimating Automod's share of mod actions in a single community appears after this list.)

      • The majority of admin removals were for spam (67.7%), which is consistent with past reports.

    • As Reddit's tools and enforcement capabilities continue to evolve, admins are gradually taking on more content moderation work from moderators, leaving mods more room to focus on their individual community rules.

      • We saw a ~44% increase in the proportion of non-spam, rule-violating content removed by admins, as opposed to mods (admins remove the majority of spam on the platform using scaled backend tooling, so excluding it is a good way of understanding other Content Policy violations).

  • New “Communities” Section

    • We’ve added a new “Communities” section to the report to highlight subreddit-level actions as well as admin enforcement of Reddit’s Moderator Code of Conduct.

  • Global Legal Requests

    • We continue to process large volumes of legal requests from governments, law enforcement agencies, and third parties around the world. Interestingly, we saw overall decreases in government and law enforcement requests to remove content or disclose account information compared to the first half of 2023.

      • We routinely push back on overbroad or otherwise objectionable requests for account information, and fight to ensure users are notified of requests.

      • In one notable U.S. request for user information, we were served with a sealed search warrant from the LAPD seeking records for an account allegedly involved in the leak of an LA City Council meeting recording that resulted in the resignation of prominent local political leaders. We fought to notify the account holder about the warrant, and while we didn't prevail initially, we persisted and were eventually able to get the warrant and proceedings unsealed and provide notice to the redditor.
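To make the Automod figure above a bit more concrete, here is a minimal, hypothetical sketch of how a moderator might estimate Automod's share of recent mod actions in their own community using PRAW (the Python Reddit API Wrapper). This is not part of the report or an official tool: the credential values and subreddit name are placeholders, and the script needs an account with moderator access to read the mod log.

```python
# Hypothetical sketch (not from the report): estimate what share of recent
# moderator actions in one community came from AutoModerator, using PRAW.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_MOD_ACCOUNT",
    password="YOUR_PASSWORD",
    user_agent="automod-share-estimate by u/YOUR_MOD_ACCOUNT",
)

subreddit = reddit.subreddit("YOUR_SUBREDDIT")  # placeholder community name

total = 0
automod = 0
for entry in subreddit.mod.log(limit=500):      # most recent 500 mod-log entries
    total += 1
    if str(entry.mod) == "AutoModerator":       # actions attributed to Automod
        automod += 1

if total:
    print(f"AutoModerator performed {automod}/{total} "
          f"({automod / total:.0%}) of the sampled mod actions")
```

Note that this only samples one community over a recent window, so it is a rough local estimate rather than something comparable to the platform-wide 72% figure in the report.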

You can read more insights in the full document: Transparency Report: July to December 2023. You can also see all of our past reports and more information on our policies and procedures in our Transparency Center.

Please let us know in the comments section if you have any questions or are interested in learning more about other data or insights.


Is this really from Reddit? How to tell

Hey all! Today we wanted to take a moment to remind everyone how you can verify whether a message, comment, or post is truly from a Reddit employee or Reddit Inc. As you can see by clicking on my profile, all official Reddit accounts will have an orangered snoo or an [A] denoting an admin account.

You'll also see those on official messages, comments, or posts from us (like on this post).

If there is an email address attached to your username, you may also receive notices at that address from @reddit.com or @redditmail.com addresses.
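Purely as an illustration of that domain check (this is not an official verification method, and a From address alone can be spoofed, so treat it as a first-pass filter rather than proof), here is what it looks like in a few lines of Python:

```python
# Hypothetical helper (not an official Reddit tool): check whether the domain
# of a sender address matches the official domains mentioned above.
OFFICIAL_DOMAINS = {"reddit.com", "redditmail.com"}

def looks_like_official_sender(address: str) -> bool:
    """Return True if the address's domain is reddit.com or redditmail.com."""
    domain = address.rsplit("@", 1)[-1].strip().lower()
    return domain in OFFICIAL_DOMAINS

print(looks_like_official_sender("noreply@redditmail.com"))           # True
print(looks_like_official_sender("support@reddit.com.example.net"))   # False
```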

Account security-related notifications and messages are sent officially from our u/reddit account only. We'll also never send you a chat message notifying you of a security-related issue.

Finally, in the words of every gaming company anywhere, Reddit will never ask you for your password or 2FA codes. Please report anything suspicious by clicking the "report" option below the message, post, or comment, or by filling out a report at reddit.com/report directly.

Note: we're aware that this isn't currently visible if you're using the iOS app; we're working on a fix. In the meantime, if you're ever unsure, please view the profile from the desktop version of the site.


Q4 2023 Safety & Security Report

Hi redditors,

While 2024 is already flying by, we’re taking our quarterly lookback at some Reddit data and trends from the last quarter. As promised, we’re providing some insights into how our Safety teams have worked to keep the platform safe and empower moderators throughout the Israel-Hamas conflict. We also have an overview of some safety tooling we’ve been working on. But first: the numbers.

Q4 By The Numbers

| Category | Volume (July - September 2023) | Volume (October - December 2023) |
|---|---|---|
| Reports for content manipulation | 827,792 | 543,997 |
| Admin content removals for content manipulation | 31,478,415 | 23,283,164 |
| Admin imposed account sanctions for content manipulation | 2,331,624 | 2,534,109 |
| Admin imposed subreddit sanctions for content manipulation | 221,419 | 232,114 |
| Reports for abuse | 2,566,322 | 2,813,686 |
| Admin content removals for abuse | 518,737 | 452,952 |
| Admin imposed account sanctions for abuse | 277,246 | 311,560 |
| Admin imposed subreddit sanctions for abuse | 1,130 | 3,017 |
| Reports for ban evasion | 15,286 | 13,402 |
| Admin imposed account sanctions for ban evasion | 352,125 | 301,139 |
| Protective account security actions | 2,107,690 | 864,974 |
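For anyone who wants the quarter-over-quarter movement rather than the raw totals, the percentage changes can be computed directly from the table above; here is a small sketch using those same figures:

```python
# Quarter-over-quarter changes computed from the figures in the table above
# (Q3 = July-September 2023, Q4 = October-December 2023).
q3_q4 = {
    "Reports for content manipulation": (827_792, 543_997),
    "Admin content removals for content manipulation": (31_478_415, 23_283_164),
    "Admin imposed account sanctions for content manipulation": (2_331_624, 2_534_109),
    "Admin imposed subreddit sanctions for content manipulation": (221_419, 232_114),
    "Reports for abuse": (2_566_322, 2_813_686),
    "Admin content removals for abuse": (518_737, 452_952),
    "Admin imposed account sanctions for abuse": (277_246, 311_560),
    "Admin imposed subreddit sanctions for abuse": (1_130, 3_017),
    "Reports for ban evasion": (15_286, 13_402),
    "Admin imposed account sanctions for ban evasion": (352_125, 301_139),
    "Protective account security actions": (2_107_690, 864_974),
}

for category, (q3, q4) in q3_q4.items():
    change = (q4 - q3) / q3
    print(f"{category}: {change:+.1%}")

# For example, reports for content manipulation fall roughly 34%, while
# admin account sanctions for abuse rise roughly 12%, consistent with the
# 12.4% increase discussed in the conflict section below.
```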

Israel-Hamas Conflict

During times of division and conflict, our Safety teams are on high alert for potentially violating content on our platform.

Most recently, we have been focused on ensuring the safety of our platform throughout the Israel-Hamas conflict. As we shared in our October blog post, we responded quickly by engaging specialized internal teams with linguistic and subject-matter expertise to address violating content, and by leveraging our automated content moderation tools, including image and video hashing. We also monitor other platforms for emerging foreign terrorist organization (FTO) content so we can identify and hash it before it reaches our users (a minimal illustration of this kind of hash matching appears at the end of this section). Below is a summary of what we observed in Q4 related to the conflict:

  • As expected, we saw an increase in required removals of content related to legally identified FTOs, driven by the proliferation of Hamas-related content online

    • Reddit removed and blocked the additional posting of over 400 pieces of Hamas content between October 7 and October 19 — these two weeks accounted for half of the FTO content removed for Q4

  • Hateful content, including antisemitism and Islamophobia, is against Rule 1 of our Content Policy, as is harassment, and we continue to take aggressive action against it. This includes October 7th denialism

    • At the start of the conflict, user reports for abuse (including hate) rose 9.6%. They subsided by the following week. We had a corresponding rise in admin-level account sanctions (i.e., user bans and other enforcement actions from Reddit employees).

    • Reddit Enforcement had a 12.4% overall increase in account sanctions for abuse throughout Q4, which reflects the rapid response of our teams in recognizing and effectively actioning content related to the conflict

  • Moderators also leveraged Reddit safety tools in Q4 to help keep their communities safe as conversation about the conflict picked up

    • Utilization of the Crowd Control filter increased by 7%, meaning mods were able to leverage community filters to minimize community interference

    • In the week of October 8th, there was a 9.4% increase in messages filtered by the modmail harassment filter, indicating the tool was working to keep mods safe

As the conflict continues, our work here is ongoing. We’ll continue to identify and action any violating content, including FTO and hateful content, and work to ensure our moderators and communities are supported during this time.
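The hash matching mentioned at the top of this section can be pictured with a very small sketch. To be clear, this is not Reddit's actual pipeline; production systems generally rely on perceptual hashes (so that near-duplicates also match) and shared industry hash lists, whereas the toy example below uses a plain SHA-256 over the file bytes just to show the flow of comparing uploads against known-violating hashes.

```python
# Illustrative sketch only: match uploads against a set of known-bad hashes.
import hashlib

known_violating_hashes = {
    # hex digests of previously identified violating media (placeholder value)
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def media_hash(data: bytes) -> str:
    """Hash the raw bytes of an uploaded image or video."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block the upload if its hash matches a known-violating item."""
    return media_hash(upload) in known_violating_hashes

print(should_block(b"test"))           # True: matches the placeholder digest
print(should_block(b"harmless file"))  # False: no match
```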

Other Safety Tools

As Reddit grows, we're continuing to build tools that help users and communities stay safe. In the next few months, we'll be officially launching the Harassment Filter for all communities to automatically flag content that might be abuse or harassment. This filter has been in beta for a while, so a huge thank you to the mods who have participated, provided valuable feedback, and gotten us to this point. We're also working on a new profile reporting flow so it's easier to let us know when a user is in violation of our content policies.

That’s all for this report (and it’s quite a lot), so I’ll be answering questions on this post for a bit.