
Reddit’s Defense of Section 230 to the Supreme Court by traceroo in reddit

[–]reddit[A] 32 points (0 children)

full comment from u/AkaashMaharaj

My colleague u/desileslointaines and I moderate the Equestrian subreddit at Reddit. We do so as unpaid volunteers: we receive no consideration from the corporation, and we are prohibited from accepting any remuneration, gifts, or incentives from third parties for our activities as Moderators.

We are also entitled to neither recourse nor remedy from Reddit, if we suffer any loss or endure any abuse, as a result of fulfilling our responsibilities as Moderators. On the contrary, we are required to indemnify the corporation and to hold it harmless if any third party should bring an action against it – howsoever frivolous or unfounded that action might be – in connection with our volunteer moderation activities.

We serve as Moderators purely as a form of public service, in the hopes that our sound stewardship of the subreddit will contribute to the wellbeing of the global equestrian community.

Our subreddit often discusses difficult issues, such as animal welfare and athlete abuse. These subjects invariably excite high passions, and often foment onslaughts of posts and comments that can include personal attacks, character assassination, thoughtless misinformation, wilful disinformation, and behaviour meant to artificially manipulate the course of discussions.

Our ability to moderate these posts and comments – to separate the wheat from the chaff – is vital to enabling the subreddit to function as a community, without becoming a scorched plain of irrelevant and predatory material.
Moreover, the existence of a well-managed virtual equestrian space is critical to enabling the global equestrian community to discuss difficult issues, to engage with alternative viewpoints, to consider international factors, to discover unfamiliar facts, and to make better‐informed decisions as citizens in the real world.

The Reddit Equestrian community itself has, over time, developed its own standards and rules for what constitutes germane and constructive posts and comments. People choose to become active members of our subreddit because they share the values and ideals embedded in those standards.

Our responsibilities as volunteer Moderators call on us to exercise our best judgement on what posts and comments fall within the parameters laid down by the community.

Especially when an emotive issue provokes a sudden influx of content, we rely on automated tools to support our manual efforts. In our experience, this combination is the best of possible worlds: the efficiency of automated systems supporting, and not replacing, human judgement.
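
As a rough illustration of that human-in-the-loop flow, the Python sketch below (with purely hypothetical terms, names, and scoring, not drawn from our actual tooling) shows automation triaging a sudden influx of posts so that the riskiest items reach a human moderator first; nothing is removed automatically.

```python
# Illustrative sketch only: automation triages a sudden influx of posts so a
# human moderator reviews the highest-risk items first; nothing is removed
# automatically. All terms, names, and scoring here are hypothetical.
import heapq
from dataclasses import dataclass, field

WATCH_TERMS = {"scam", "fraud", "liar"}  # placeholder watch-list

@dataclass(order=True)
class QueuedPost:
    priority: int                       # lower value = reviewed sooner
    post_id: str = field(compare=False)
    body: str = field(compare=False)

def risk_score(body: str) -> int:
    """Crude keyword count; a real system would be far more nuanced."""
    words = body.lower().split()
    return sum(words.count(term) for term in WATCH_TERMS)

def triage(posts: list[tuple[str, str]]) -> list[QueuedPost]:
    """Order posts so the riskiest reach human review first."""
    heap: list[QueuedPost] = []
    for post_id, body in posts:
        heapq.heappush(heap, QueuedPost(-risk_score(body), post_id, body))
    return [heapq.heappop(heap) for _ in range(len(heap))]

influx = [("t3_1", "Lovely hack through the woods today"),
          ("t3_2", "This trainer is a fraud and a liar")]
for item in triage(influx):
    print(item.post_id, -item.priority)
```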

The fact that Reddit has delegated moderation to volunteer human beings, supported by automated tools, is the platform’s single greatest strength. It is a model that should be fostered and encouraged at other social media platforms; too many platforms have instead turned to statistical routines, heuristic algorithms, and self-styled “artificial intelligence” to carry out cheap and rapid moderation, with predictable results.

Online societies will not reflect the standards of public accountability and transparency we expect in the real world, if those societies are commanded and controlled by impersonal systems shielded inside black boxes. Online communities will only advance the human condition, if they are led first and last by humans.

To make it possible for platforms such as Reddit to sustain content moderation models where technology serves people, instead of mastering us or replacing us, Section 230 must not be attenuated by the Court in a way that exposes the people in that model to unsustainable personal risk, especially if those people are volunteers seeking to advance the public interest or others with no protection against vexatious but determined litigants.

Reddit’s Defense of Section 230 to the Supreme Court by traceroo in reddit

[–]reddit[A] 19 points (0 children)

cont.

He did, however, find an attorney willing to file a defamation SLAPP (strategic lawsuit against public participation) against Reddit, and erroneously referred to me as an “employee” of Reddit in order to facilitate my inclusion in the suit and to target me out of personal contempt. I am not and never have been an employee of Reddit, as I think is pretty clear in this statement. Reddit, considering that I had in no way defamed this person, generously provided me with legal counsel.

In the course of this, the plaintiff not only harassed me personally, but also filed a frivolous motion to attempt to unmask approximately forty users in the community, seeking to subject them to further harassment for having seen or commented on the original post. Reddit supported our community with active diligence, filing legal briefs to defend those users against unmasking, and to push back against many of the plaintiff’s empty threats and against his lawyer’s failure to supply even the most basic legal support for his claims.

The suit, unsurprisingly, was ultimately dropped -- but that doesn’t reflect any kind of guarantee. The state of California, where Reddit is based, has very strong anti-SLAPP legislation in place, and because this person framed his place of business as being located there, it’s unlikely he would have made much progress. He still harasses me personally, putting my email address on websites and impersonating me as soliciting sexual services, funeral services, and other little contextual hints of his malice, but he is not in a very strong position to weaponize further litigation against me.

Now, in my opinion, these acts are only restrained from escalation by his lack of opportunity. In spite of a paucity of organization and a tendency to self-sabotage, his level of hate is so vitriolic that he demonstrates a personality that does not so much resemble plaintiff Gonzalez…but ISIS.

So in addition to compartmentalizing the chain of responsibility in order to protect human volunteers such as myself, we have to ask how far the distance really is between a hateful individual with enough money to hire an attorney (all while intimating wishes to do harm to the defendant with no care to their own legal case’s integrity) to bring a SLAPP -- and an individual who will visit actual physical harm on another in order to silence them in contempt of their freedoms.

It isn’t a one-to-one comparison and I am not suggesting someone who harasses me online is equivalent to ISIS, but there is another consideration: if Section 230 is weakened because of a failure by Google to address its own weaknesses (something I think we can agree it has the resources and expertise to do), what ultimately happens to the human moderator who is considered responsible for the content that appears on their platform, who is expected to counteract it, and who is expected to protect their community from it?

We are already, by tacit agreement, placed in that chain. We’re not algorithms; we are the people who program those algorithms to aid our service to our communities. Reddit isn’t perfect; it has struggled with balancing free speech and hate speech in the past. No company or individual can monitor all corners of the internet at all times, but the same goes for a schoolyard, or a mall, or any other place where human communities assemble.

Further, Reddit has tightened its regulations precisely because it does not want to inadvertently host those potential threats. Without moderators and administrators free to act without fear of being litigated against, or even charged with abetting these threats, organizations like ISIS, the Proud Boys, or various international bad actors would in fact find comfort in the weakening of Section 230.

Such interests often attempt to use human-run forums to propagate their message and recruitment. Twitter recently saw the departure of its entire paid moderation team, and hate speech, racism, abuse, misinformation, and other threats to our freedoms have skyrocketed there. A weakening of Section 230 would codify such an invitation to chaos, exposing to prosecution the individuals whose role it is to ensure speech while using their best judgement to mitigate threats.

Suggesting that my actions as a single individual performing this role in my spare time are the same as Google’s automated systems implies that any individual who litigates for any reason against platforms like Reddit should enjoy the same protections as a victim of terrorism.

This is not consistent with what I consider a standard of freedom or free speech.

Conclusion

It’s realistic to say that large, heavily resourced, well-financed corporations like Google should be required to implement better protections where their automated regulation of content is concerned. It’s fair, I think, to say that Section 230 may need to be reconsidered in light of this, and that its text should be updated to make these distinctions, as well as expanding protection to paid or volunteer moderator teams whose primary purpose is to ensure the protection of their communities.

That includes terror threats -- and the importance of human intervention. Whether YouTube’s content regulation instruments can recognize the difference between an ISIS recruitment video and a television clip is a question of technological limitations. If, however, Section 230 is weakened in order to punish those technological limitations, as written it will ultimately punish individuals like myself, whose far more sophisticated perception is vital to determining the difference between speech and potential harm.

I am not capable of predicting what any bad actor might choose to propagate within my community before it comes to my inbox. Reddit, by extension (relying as it does on thousands of human volunteers), cannot predict this either. It’s possible Google has a greater share of responsibility to do so, but if Section 230 suffers as a result of this lawsuit, it would preemptively chill human participation in moderating harmful content, and as a result, that harmful content would very likely enjoy more, not less, distribution.

If the object of this case is to prevent recruitment and indoctrination by terrorists, weakening my immunity as a volunteer moderator means not only that the person who attempted to sue me for defamation would likely have far greater success in falsely assigning responsibility to me for his indignity, but also that I would not choose to make myself available to police any controversial content in service to my community, whether that be cottage-industry grift, terrorist recruitment, or simple bickering.

I am not an algorithm. I am not a Reddit employee, or a Reddit department. In the course of being sued, I have taken the personal, voluntary initiative to prevent the names and addresses of community members from becoming public and making those members vulnerable. I was a liaison between Reddit and those community members. I don’t receive compensation for this, and I was happy to do it -- but I don’t think I would feel that way if I were blamed for anything posted in my community. That simply does not make sense. And if there is further examination of Section 230, it should consider that my level of responsibility does not match Google’s.

Finally, the victims and the targets of terror need moderators who can act without fear of being accused of participating in terror simply for being in a chain of administrators. Section 230 must remain in place to ensure that threat management is protected and improved; otherwise, responsibility will fall on every paid or unpaid participant who regulates potentially harmful content.

Reddit’s Defense of Section 230 to the Supreme Court by traceroo in reddit

[–]reddit[A] 20 points (0 children)

full comment from u/wemustburncarthage

I first want to acknowledge that what happened on November 13th, 2015 was a heinous crime and tragedy that never should have occurred. I think that what is decided in this court, and its impact on Section 230, is manifestly a result of terrorism’s ultimate goal of disrupting society and lessening freedom -- freedom of speech being one of terror’s paramount targets. While I do believe that Google and other internet companies must evolve to deal more actively with these threats, the potential impact on the wider shared society now platformed by these companies could ultimately reflect a success of such acts of terror in dividing us, and in reducing our capacity to regulate both automated and manually administered technologies.

On consideration of volunteer forums like Reddit

Unlike Google and Facebook, Reddit is and always has been a platform founded on a principle of self-governance by the users who choose to host their communities there. It has algorithmic functions, but unlike the defendant’s, those algorithmic functions are actively programmed by volunteers like myself and other volunteer members, in order to tailor our regulation structures to the needs of our communities.

Reddit provides an administrative framework to oversee moderators like myself, but I want to be careful in making the distinction that it is not a democratic platform; it is a platform that functions on the principles of initiative, engagement and regulation. All of these principles are a matter of self-motivated accountability.

In other words, volunteer moderator teams, to a greater or lesser degree depending on individual choices, use freely accessible and available programming languages to code automated responses that help us manage our communities. My subreddit has somewhere in the realm of 1.5 million subscribers, and my active moderator team is fewer than ten individuals. Having AutoModerator allows us to do things like pick out commonly asked questions, or immediately spot hateful or threatening speech that goes against our community mandate.
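
The real AutoModerator is configured through YAML-formatted rules on a subreddit wiki page rather than in Python, but as a hedged, illustrative approximation of the two rule types just mentioned, the sketch below answers a frequently asked question automatically and holds likely hate speech for a human to review. The patterns and reply text are placeholders, not our community’s actual rules.

```python
# Rough illustration only: AutoModerator itself is configured with YAML-style
# rules on a subreddit wiki page, not Python. This sketch just mimics the two
# rule types described above: answering a frequently asked question and
# holding likely hate speech for human review. Patterns are placeholders.
import re

FAQ_PATTERNS = {
    re.compile(r"\bhow do i format\b", re.I): "See the formatting guide in our wiki.",
}
HATE_PATTERN = re.compile(r"\b(slur1|slur2)\b", re.I)  # placeholder terms

def apply_rules(comment_body: str) -> tuple[str, str | None]:
    """Return an action ('hold', 'reply', or 'approve') and an optional reply."""
    if HATE_PATTERN.search(comment_body):
        return "hold", None  # a human moderator makes the final call
    for pattern, canned_reply in FAQ_PATTERNS.items():
        if pattern.search(comment_body):
            return "reply", canned_reply
    return "approve", None

print(apply_rules("How do I format my script for submission?"))  # ('reply', ...)
```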

We are the first line of defence in safeguarding both free speech, and the right not to be subjected to hate speech or discrimination. We users of Reddit are not a homogenized monolith, but rather an incredibly diverse array of communities administered by a large international pool of moderators. My subreddit itself has moderators located in the US, Canada, and the UK. Many other subreddits have more diverse teams of different origins, all of which help us to understand the varying needs of our communities, and provide support availability across different time zones.

Section 230, the potential for bad-faith litigation, and how it affects human operators

We are a volunteer team, and we both design our governance framework and uphold a mandate developed in consultation with the community. I’m speaking for my individual situation, which is neither unique, nor is it universal. My subreddit is a creative writing community that is targeted, wherever it gathers or is exposed to advertising driven by non-human algorithms, by predatory interests that prey on ambition and on the desire for work to be seen by our industry.

This includes, but is not limited to, private consulting, paid access to professional representation, content feedback services and, increasingly, low-return, high-volume contest platforms. On occasion, these services come blended together. Very often, they are vastly more profitable than what our users might expect in return for their product, and are structured in such a way that any individual may pay to platform their contest, hire a pool of readers, and determine prizes and entry fees. Some of these companies are multi-billion-dollar conglomerates that enjoy near-immunity from backlash, and some of them are just smaller interests that use such companies as a cover for their valueless offerings. My community, the largest online community of its kind, has a mandate that no such business will ever be allowed to advertise to our users.

A few years ago, one of the users in my community sounded a warning about just such an outfit, asserting that a string of fourteen-plus contests did not have any kind of genuine industry backing or material benefit for those paying the fees to enter. This contest string included plenty of official-looking names that variously claimed to be contests or festivals from different parts of the world -- Seattle, WA; Sydney, Australia; Toronto, Canada -- in an attempt to disguise their single origin, and their illegitimacy.

Considering this poster’s remarks to be in good faith and a benefit to the community, we allowed them to remain anonymous and ensured their remarks were not falsely reported and taken down. We had some back and forth with the contest owner, who promptly demanded things of the moderator team such as unmasking the individual, personal phone calls with us, and various other unacceptable, abusive behaviours.

After a considerable stretch of harassment, I advised this individual that if they wanted to continue threatening us with litigation, they were entitled to file a lawsuit against Reddit to attempt to force it to make us take down the critical remarks, and/or to unmask our identities so that person could further litigate against us. These were my words, outlining the legal procedure by which this person could achieve satisfaction if he felt the legal grounds were strong enough. I did not anticipate he would actually attempt to do so, as Reddit’s commitment to free speech (and especially speech of this nature, which is cherished by the American Constitution) is considerably stronger than any claim this person had on our community.

edited for attribution

Reddit’s Defense of Section 230 to the Supreme Court by traceroo in reddit

[–]reddit[A] 24 points (0 children)

Full comment from u/halaku:

My name is [redacted]. I have been using Reddit for over eleven years. I have created subreddit communities to moderate, and taken over moderation duties when previous volunteers have wished to stop. I currently moderate multiple communities that are focused on everything from specific fields in computer science, to specific musical bands, to specific television shows.

Part of my volunteer duties involves the creation and enforcement of rules relevant to the individual subreddit community in question. If posts are made that violate those rules, or if comments are made to posts which violate those rules, either I or the other volunteers I have selected to help me will remove them, for the good of the community. Repeated violations can result in posting or commenting capability being removed on a temporary or permanent basis, as required. This does not prevent the violator from seeing posts or comments made to the community by others, simply from joining in on that discussion, or starting a new one. One of the strengths of Reddit is that if a violator feels that they have been unfairly treated, they can move to another subreddit community that covers similar material, or start a brand new subreddit community to cover similar material if they wish to use that option, in much the same way that someone who has been repeatedly escorted out of a drinking establishment for improper behavior can in turn create their own establishment, and build a customer base of like-minded peers.

Part of those tasks are accomplished by automation, such as the "Automoderator" feature, which streamlines moderator response via advanced scripting. If I create a rule saying "No illegal postings of episodes of this show" in a subreddit dedicated to that show, I can manually remove any post that includes illegal postings or links to pirated copies, or I can employ the Automoderator function to automatically remove any posts that link to specific websites which are devoted to piracy. This stops my community from getting into trouble by gaining a reputation as a place where illegal content can be obtained.
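
As a rough sketch of that kind of rule (not Automoderator's actual syntax, and with placeholder domains rather than real piracy sites), the logic amounts to a domain blocklist check on submitted links:

```python
# Hedged sketch, not Automoderator's actual syntax: remove link posts whose
# domain is on a blocklist of piracy sites. The domains below are placeholders.
from urllib.parse import urlparse

PIRACY_DOMAINS = {"example-piracy-site.test", "another-pirate-host.test"}

def should_remove(url: str) -> bool:
    """True if the submitted link points at a blocklisted domain or subdomain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in PIRACY_DOMAINS)

print(should_remove("https://another-pirate-host.test/show/episode-101"))  # True
print(should_remove("https://example.com/legitimate-review"))              # False
```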

Likewise, if someone has posted content the community has found repugnant and rejected, I can manually add them to a "Manually screen all future activity from this individual before it goes live on the community" filter, or have the Automoderator do it for me.
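
A minimal sketch of that screening filter (illustrative names only, not Reddit's internal implementation) might look like the following: activity from flagged accounts is held in a queue until a human moderator approves it.

```python
# Minimal sketch of a "screen everything from this user first" filter; names
# are illustrative and this is not Reddit's internal implementation. Activity
# from flagged accounts is held until a human moderator approves it.
SCREENED_USERS: set[str] = set()
review_queue: list[tuple[str, str]] = []

def flag_user(username: str) -> None:
    """Add a repeat offender to the manual-screening list."""
    SCREENED_USERS.add(username)

def handle_new_comment(author: str, body: str) -> str:
    """Hold comments from screened users; publish everything else."""
    if author in SCREENED_USERS:
        review_queue.append((author, body))  # awaits manual approval
        return "held for review"
    return "published"

flag_user("repeat_offender")
print(handle_new_comment("repeat_offender", "yet another inflammatory comment"))
print(handle_new_comment("regular_fan", "loved last night's episode"))
```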

Subreddit communities can have up to tens of millions of active subscribers, as well as anyone on the Internet who creates an account and visits the community without subscribing. Moderation teams simply can't handle tens of millions of independent actions without assistance. Losing this automation would be exactly the same as losing the ability to spam-filter email, leaving users to hunt and peck for actual communications amidst all the falsified posts from malicious actors engaging in hate mail, advertising spam, or phishing attempts to gain financial credentials.

In the same vein, moderation teams often have to resolve situations caused by individuals acting with malice aforethought to cause problems and provoke hostile reactions, commonly known as 'trolling'. There have been more instances than I can count in which I or one of my team members have had to deal with individuals who show up and comment that only people who (insert extremely negative commentary based on racial, gender, sexual orientation, political orientation, religious views, age, physical / mental / emotional / spiritual health, etc) could be fans of the (musical band, television show, amateur cover of professional recording, etc) in question, and otherwise attempt to disrupt the community, typically with popular political slogans attached.

Again, Automoderator is a valuable, if not vital, tool in preventing these disruptions from occurring, by flagging said content for manual review before it can be seen by the community as a whole.
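
For instance, a hedged sketch using PRAW, the Python wrapper for Reddit's public API (the credentials, subreddit name, and pattern below are placeholders), could watch incoming comments and report likely trolling so it surfaces in the moderation queue for human review, rather than being removed automatically:

```python
# Hedged sketch using PRAW, the Python wrapper for Reddit's public API. The
# credentials, subreddit name, and pattern are placeholders. Matching comments
# are reported so they surface in the moderation queue for human review; the
# script removes nothing on its own.
import re
import praw

TROLL_PATTERN = re.compile(r"only (idiots|losers) (listen to|watch)", re.I)  # placeholder

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_MOD_BOT",
    password="YOUR_PASSWORD",
    user_agent="trolling-triage sketch by u/your_username",
)

for comment in reddit.subreddit("examplesub").stream.comments(skip_existing=True):
    if TROLL_PATTERN.search(comment.body):
        # report() flags the comment for the human moderation team to evaluate
        comment.report("Possible trolling: flagged for manual review")
```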

Ladies and gentlemen of the court, if these malicious actors are allowed to say that no one is permitted to take any sort of action regarding their engagement, because their discrimination, slurs, and rabid hostility are their "freely chosen venue of political expression" or "preferred method of free speech," and I, as the volunteer who created the community, am prevented from doing anything about those individuals or their behaviors?

If volunteer moderators, or the owners of the website that hosts these communities, are prevented from using automation to stop the community from drowning in a flood of this activity, while the malicious actors claim that they have a constitutional right to overwhelm the community with said behavior, and automation cannot be used to stop them?

If communities degenerate into a baseline of "malicious actors can completely disrupt all communication as they choose, with the community unable to respond adequately to the flood, and moderators barred from using automation to help stem the tide"?

Then Internet communication forums will suffer, and perhaps die, as any attempt at discourse can be destroyed by this behavior. My communities would be unable to discuss the topics at hand due to the interference of malicious actors, essentially squatting in the community yelling profanities, and claiming that if the community can't out-yell them by sinking to their level, the community deserves to die.

There are millions of Americans who use the Internet to talk to one another every day. There are tens of thousands of them who use Reddit to do so in the subreddit communities I manage, freely and of my own will, in an attempt to give them a space to do so. There are tens of thousands more who want nothing more than to disrupt those talks, because they don't care for the subject matter in question, because they are fans of competing bands or shows and feel that they can elevate their own interests by tearing down the interests of others, or because they simply enjoy ruining someone else's good time. And there's only me to try and keep the off-topic spam, discrimination, and hate out of the community, so people can go back to talking about the band, or television show, or computer science field in question.

Without the ability to rely on technology such as automation in order to keep off-topic spam, discrimination, and hate out of the community, the community will grind to a stop, and the malicious actors win.

Reddit’s Defense of Section 230 to the Supreme Court by traceroo in reddit

[–]reddit[A] [score hidden] stickied comment (0 children)

Please see thread for the full comments submitted by the moderators who signed onto the Brief with us.

Sat 2021-12-11 by reddit in nameaserver

[–]reddit[S] 9 points (0 children)

A verdict has been reached!

one of our servers - go forth as McServerface and let the world know of ye.

Fri 2021-12-10 by reddit in nameaserver

[–]reddit[S] 6 points (0 children)

It is decided!

one of our servers - you shall go by that name no longer. You are hereby named ButterMyBuns.