Ginger Gorman

On 9 February 2018 I gave evidence to the Australian Senate committee hearing into the adequacy of existing cyberbullying laws. At the time it was a big deal. I, along with the two other women giving evidence, came with statements carefully thought out and written down.

I’ve often found in life that so many of the most crucial and revealing moments occur not as part of the big ceremony or scheduled proceedings. Rather, they happen in the spaces in between. In the cracks. This day was no different.

The hearings were held deep in the bowels of Parliament House in Canberra. Through security doors, across noisy wooden floors and past formal paintings of previous prime ministers. Up flights of stairs. Alongside journalist and academic Jenna Price and reputation manager and CEO Liza-Jayne Loch, I sat at a table facing the senators. The three of us were representing the non-profit volunteer organisation Women in Media. The microphones hung from the ceiling, recording us; our feet were on the soft carpet. The glasses of chilled water. We read from our prepared statements. We answered questions.

Directly after our evidence, representatives from Facebook and non-profit Digital Industry Group Inc (DIGI) were due to have their say. Noticing Mia Garlick, Facebook’s Director of Policy for Australia and New Zealand, I walked up to introduce myself. She was surrounded by a wall of mostly female staff.

“I just gave evidence,” I said, smiling.

“I heard your evidence,” she said, staring straight at me. She was not smiling.

I’m writing a book about cyberhate, I said, and would like to interview her. Could I have her business card?

She said she didn’t have one on her.

“What’s the best way to get in touch, then?” I asked. “Can I get your email address?”

My paper and pen were poised. But she didn’t start spelling out her email address. She paused and mumbled something about how I could get it from Jenna Price. “She’s got it,” Garlick snapped, and it was clear our conversation was over.

This brief interaction turned out to be a marker of what was to come.

Beyond public relations spin, it’s hard to get any real, in-depth and on-the-record answers from the social media companies about how they are tackling cyberhate. Despite their insistence on being platforms for and champions of free speech — “Twitter stands for freedom of expression for everyone!” — they are hell-bent on controlling the message.

Predictably, social media companies aren’t crazy about the notion of further liability. Facebook’s submission to the Australian Senate hearings states:

Given the strong commitment of industry to promote the safety of people when they use our services, we believe that no changes to existing criminal law are required. If anything, we would encourage the Committee to consider carve outs from liability for responsible intermediaries.

In plain English, Facebook isn’t just seeking the status quo — it’s suggesting exemptions from prosecution. Also notable in this statement is the use of the word “intermediaries” to refer to themselves (as opposed to simply accepting their role as a publisher).

Neither Facebook nor Twitter agrees to nominate a representative for me to interview on the record. To give Facebook its due, the staff do attempt to answer my direct questions and have ongoing correspondence and phone calls with me over many weeks. Twitter directly addresses only two issues I raise — the first regarding the purchase of advertisements to perpetrate cyberhate, and the second relating to outsourcing moderation overseas. My other 13 questions, based on the comments and experiences of case studies and experts in my book, are not directly answered by the platform.

At various times I write to both Twitter and Facebook, expressing my frustration at their insistence on tightly managing the message. In an email to a Twitter spokeswoman based in Singapore — who won’t be named and doesn’t directly answer my specific questions — I write:

The book makes very serious claims — investigated and backed up with evidence — about the real-life impacts of this kind of speech on social media platforms.

The bottom line is that Twitter hasn’t provided someone for me to interview. So I can’t put all the numerous concerns raised by my interviewees directly to anyone … it’s your own call as to how this will reflect on the public’s perception of how seriously Twitter takes predator trolling and cyberhate.

Needless to say, she does not reply.

A nameless Twitter spokesperson eventually writes to me that since January 2017 the company has instituted “around 100 experiments and product changes, dozens of new policy changes, expanded our enforcement and operations, and strengthened our team structure to build a safer Twitter. We’ve made good progress but we know there’s still a lot of work to be done”. The email goes on to claim Twitter is committed to striking “the right balance of protecting freedom of expression and keeping our users safe”.

Somehow, the company’s continual unwillingness to answer questions — from journalists and senators alike — makes these claims harder to believe. According to Amnesty’s Toxic Twitter report, Twitter has made “several positive changes to their policies and practices in response to violence and abuse on the platform over the past 16 months”. Despite this, “the steps it has taken are not sufficient to tackle the scale and nature of the problem”.

Later in the same report, Amnesty calls for transparency, declaring that social media companies can’t just say they are protecting human rights — they must show us how their reporting and appeal mechanisms work.

Perhaps in response to public pressure after the Cambridge Analytica scandal, Facebook released its first Community Standards Enforcement Report in May 2018. Among other things, the report reveals the company had “disabled about 583 million fake accounts” and removed “837 million pieces of spam … nearly 100% of which we found and flagged before anyone reported it”. Facebook has also announced the company is altering its appeals process.

While these actions seem to be steps in the right direction — and better than nothing — reams of key questions remain unanswered. We don’t have a clear understanding of how moderation decisions are made on social media. Are there clear and timely avenues for appeal? For its part, Facebook has tried to be more transparent about the enforcement of its policies, but ambiguity remains. Why does a photo of a breastfeeding mother get removed from Facebook while tech abuse targeting domestic violence victims stays up?

We still don’t know how many cyberhate reports the platforms get, the precise nature of those reports, how they are triaged or how the problem is resourced in comparison to the scale of the issue. 

After months of highly controlled communication with me, a Facebook staff member — who also insists on not being named — asks me why the media doesn’t accurately report on what Facebook is doing in regard to user safety. I nearly laugh.

After three months of corresponding with Facebook, the company offers me a meeting with Mia Garlick. This is not the interview I’ve requested but it’s better than nothing. Perhaps Garlick won’t remember our last encounter, but on the day I’m nervous. I put on more makeup than usual — my sister fondly calls this my war paint — shapewear and a floral shirt.

The companies occupying this nondescript high-rise in Sydney’s CBD are listed in the foyer. Facebook is not among them. Unless you know the address, it’s not easy to find. Beyond the big wooden-framed doors on the 18th floor, it’s like a separate universe. Deliberate funkiness. There’s a big wooden “f” on the wall, in Facebook’s signature font, surrounded by fake grass. Orange orchids stand in a clear glass vase on the beach-coloured reception desk. Behind the desk is a high wall covered in a huge, modern mural suggesting flowers and vegetation. The phone doesn’t stop ringing.

The receptionist asks me to electronically sign a five-screen-long non-disclosure agreement. This form effectively stops me sharing “confidential information” gleaned in the upcoming meeting.

I’m collected from reception by a public relations representative and shown to a boardroom behind glass doors. Floor-to-ceiling windows look over the city. Garlick greets me. There’s a note of tension in the room and I crack a bad joke about having slept badly because of drunk kids in the city. Both Garlick and the PR person laugh politely and visibly relax. The pair of them reiterate the message they’ve given me via email: Nothing is quotable.

And that’s a crying shame because, one by one, Garlick graciously answers every single question that I have. In detail. Unlike the prepared statements Facebook sends me both before and after this meeting, this face-to-face conversation shows Garlick to be authentic. She’s passionate about her work and believes in it. She’s thoughtful and well informed. Her answers — which I’m, of course, unable to share — go a long way to making Facebook’s case in relation to what the company is actually doing about cyberhate.

Once a journalist, always a journalist — even in a situation where you can’t quote. During the meeting I take down more than 1000 words of notes. Towards the end of our allocated time, I’m told that if I wish to have these exact same questions answered officially, I need to send them again via email (for the third time).

Facebook brands itself as a place “where people from all over the world can share and connect”. Clearly, journalists — and, by proxy, the public — are not the people they have in mind.  

This is an edited extract from Troll Hunting by Ginger Gorman (Hardie Grant Books), which is available now in stores nationally, RRP $29.99.