10 Sep 2018

Censorship Machines or citizens? EU Parliament decides on Wednesday

By Diego Naranjo

On Wednesday 12 September 2018 at noon, the European Parliament will be voting again on the copyright Directive.

As EDRi and 57 other NGOs have been saying since the proposal was launched, and as academia, the UN Special Rapporteur on Freedom of Expression, internet luminaries and many others have also pointed out, Article 13 of the Directive is a fundamentally flawed proposal.

The vote in July prevented the European Parliament’s Legal Affairs Committee from entering directly into secret negotiations (called trilogues) with the EU Council, which left a little more time to keep debating different aspects of the Directive and to propose new alternative texts (amendments). These new amendments were discussed over the last two weeks in closed-door meetings in the Parliament.

These amendments range from making the text even more unclear and damaging (those proposed by Rapporteur Axel Voss MEP and Cavada MEP) to deleting Article 13 outright, with almost everything in between.

The best option for dealing with a bad proposal is to delete it, so this is what MEPs should be asked to vote for. However, the EU works on the basis of compromise, and some MEPs may not wish to vote for outright rejection. In that case, we would encourage those MEPs who will not ask for deletion to support the amendment from the Internal Market and Consumer Protection Committee (IMCO), a compromise that has received significant cross-party support.

If you have not contacted your MEP yet, you still have time! Go to www.saveyourinternet.eu and call, tweet or email your MEP and let them know your opposition to upload filters.

Read more:

Copyright: Compulsory filtering instead of obligatory filtering – a compromise? (04.08.2018)
https://edri.org/copyright-compulsory-filtering-instead-of-obligatory-filtering-a-compromise/

EP Plenary on the Copyright Directive – Who voted what? (23.07.2018)
https://edri.org/who-voted-what-in-the-ep-plenary-on-the-copyright-directive/

Press Release: EU Parliamentarians support an open, democratic debate on Copyright Directive (05.07.2018)
https://edri.org/press-release-eu-parliamentarians-support-open-democratic-debate-around-copyright-directive/

Action plan against the first obligatory EU internet filter (28.06.2018)
https://edri.org/strategy-against-the-first-obligatory-eu-internet-filter/

Moving Parliament’s copyright discussions into the public domain (27.06.2018)
https://edri.org/moving-parliaments-copyright-discussions-into-the-public-domain-2-0/

04 Sep 2018

Copyright: Compulsory filtering instead of obligatory filtering – a compromise?

By Diego Naranjo

Tomorrow, 5 September 2018 at 12h CEST, is the deadline to table amendments to the proposed Copyright Directive. The new deadline for amendments was opened as a result of the vote of 5 July, at which the Parliament decided not to give the JURI Committee a mandate to negotiate on the basis of the text it had previously adopted.

Last Friday, Rapporteur Axel Voss MEP sent his colleagues a proposal for a “compromise” that he characterised as “balanced”. Mr Voss claims the new text does not contain obligatory filtering and therefore is a real compromise.

It is true that the text no longer contains the wording “prevent the availability” or “content recognition technologies”. Instead, the “compromise” states simply that any platform that helps users to share content (“content sharing service providers”) will have full liability for every piece of content hosted on its servers.

If adopted, platforms that host content would have no option other than to implement upload filters, as they would be liable for every single upload from every single user, a risk that no commercial company could afford. The result would be an unaccountable filtering regime that offers users no real redress mechanisms. This is not a compromise, but a more insidious effort to achieve the same result: mass filtering.

Read more:

EP Plenary on the Copyright Directive – Who voted what? (23.07.2018)
https://edri.org/who-voted-what-in-the-ep-plenary-on-the-copyright-directive/

Press Release: EU Parliamentarians support an open, democratic debate on Copyright Directive (05.07.2018)
https://edri.org/press-release-eu-parliamentarians-support-open-democratic-debate-around-copyright-directive/

Action plan against the first obligatory EU internet filter (28.06.2018)
https://edri.org/strategy-against-the-first-obligatory-eu-internet-filter/

Moving Parliament’s copyright discussions into the public domain (27.06.2018)
https://edri.org/moving-parliaments-copyright-discussions-into-the-public-domain-2-0/

29 Aug 2018

What’s your trustworthiness according to Facebook? Find out!

By Bits of Freedom

On 21 August 2018 it was revealed that Facebook rates the trustworthiness of its users in its attempt to tackle misinformation. But how does Facebook judge you, what are the consequences and… how do you score? Ask Facebook by exercising your access right!

Your reputation is between 0 and 1

In an interview with the Washington Post, the product manager in charge of fighting misinformation at Facebook said that one of the factors the company uses to determine whether you are spreading “fake news” is a so-called “trustworthiness score”, assigned on a scale from 0 to 1. In addition to this score, Facebook apparently also uses many other indicators to judge its users. For example, it takes into account whether you abuse the option to flag messages.

Lots of questions

The likelihood of you spreading misinformation (whatever that means) appears to be decided by an algorithm. But how does Facebook determine a user’s score? For which purposes will this score be used and what if the score is incorrect?

Facebook has objected to the description of this system as reputation rating. To the BBC a spokesperson responded: “The idea that we have a centralised ‘reputation’ score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading.”

It’s unclear exactly how the headline is misleading, because if you turn it into the question “Is Facebook rating the trustworthiness of its users?”, the answer would be yes. In any event, the above questions remain unanswered. That is unacceptable, because Facebook is not just any old actor. Together with a handful of other tech giants, the company plays an important role in how we communicate and in which information we send and receive. The decisions Facebook makes about you have an impact. Assigning you a trustworthiness score therefore comes with great responsibility.

Facebook has to share your score with you

At the very least, such a system should be fair and transparent. If mistakes are made, there should be an easy way for users to have those mistakes rectified. According to Facebook, however, this basic level of courtesy is not possible, because it could lead to people gaming the system.

However, with the new European privacy rules (GDPR) in force, Facebook cannot use this reason as an excuse for dodging these important questions and keeping its trustworthiness assessment opaque. As a Facebook user living in the EU, you have the right to access the personal data Facebook has about you. If these data are incorrect you have the right to rectify them.

Assuming that your trustworthiness score is the result of an algorithm crunching the data Facebook collects about you, and taking into account that this score can have a significant impact, you also have the right to receive meaningful information about the underlying logic of your score and you should be able to contest your score.

Send an access request

Do you live in the European Union and do you want to exercise your right to obtain your trustworthiness score? Send an access request to Facebook! You can send your request by post, email or by using Facebook’s online form. To help you with exercising your access right, Bits of Freedom created a request letter for you. You can find it here.

Read more:

Example of request letter to send by regular mail (.odt file download link)
https://www.bof.nl/wp-content/uploads/2018/08/facebook-access-request-trustworthiness-assessment-physical-mail.odt

Example text to use for email / online form (.odt file download link)
https://www.bof.nl/wp-content/uploads/2018/08/facebook-access-request-trustworthiness-assessment-form-or-email.odt

Don’t make your community Facebook-dependent! (21.02.2018)
https://edri.org/dont-make-your-community-facebook-dependent/

Press Release: “Fake news” strategy needs to be based on real evidence, not assumption (26.04.2018)
https://edri.org/press-release-fake-news-strategy-needs-based-real-evidence-not-assumption/

(Contribution by David Korteweg, EDRi member Bits of Freedom, the Netherlands)

29 Aug 2018

US companies to implement better privacy for website browsing

By Article 19

Important changes are underway for web users, as browser manufacturers are set to put domain name system (DNS) look-ups in the hands of more predictable, trusted and transparent sources.

DNS-over-HTTPS (DoH) will introduce much-needed security and privacy features to web browsing by resolving the DNS requests a browser makes through a DNS provider trusted by that browser. The DNS is the internet architecture that ties a website address to the server where the content of that website is stored. This new feature will make transparent which DNS look-up service is being used.
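At the protocol level, DoH (RFC 8484) simply carries an ordinary binary DNS message, as defined in RFC 1035, over HTTPS. As a minimal sketch, the following Python builds such a query message using only the standard library; the Cloudflare endpoint mentioned in the comment is its public DoH URL, and how any particular browser configures this in practice is an assumption rather than something stated in this article.

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal RFC 1035 DNS query message (qtype 1 = A record).

    A DoH client would POST these bytes, with Content-Type
    "application/dns-message", to a resolver endpoint such as
    Cloudflare's https://cloudflare-dns.com/dns-query.
    """
    # Header: ID=0 (RFC 8484 suggests 0 to aid HTTP caching),
    # flags=0x0100 (recursion desired), one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: the name as length-prefixed labels ending in a
    # zero byte, followed by QTYPE and QCLASS=1 (IN, "internet").
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

query = build_dns_query("example.com")  # 29 bytes of DNS wire format
```

Note that the privacy gain discussed here comes from the HTTPS transport, not from the message itself: the query bytes are identical to what a traditional resolver would see, but on the wire they are encrypted and routed to a single known provider.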

By default, all new DNS look-ups are handled by a chain of requests until an IP address for the website is found: first to one’s internet service provider (ISP), next to the closest DNS root server, then potentially to a cloud hosting provider and other intermediary servers until the address is located and sent back to the user. The DNS information is stored back along the chain, from the ISP to one’s home router and finally in the web browser itself.

Private persons and consumers need a high level of technical skill even to find out who is providing their DNS look-ups, and it gets even more esoteric if they want to know that provider’s privacy terms. Having a trusted DNS provider and knowing how to configure default DNS queries is even further beyond the reach of most users.

Every DNS request sends data about the website a user is visiting, and this data, individually or in aggregate, can be used to infer the behaviour of individual users or groups of users. Internet usage statistics are often compiled from DNS request data. It is a small and technically specific form of personal data collection that has not received much attention from EU regulatory authorities to date.
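To illustrate why resolver-side query data is sensitive, here is a toy aggregation over hypothetical log records; the sample entries and their layout are invented for illustration, not taken from any real resolver. Even without page contents, the queried domains alone sketch a behavioural profile per client and usage statistics overall.

```python
from collections import Counter, defaultdict

# Hypothetical resolver log: (client address, queried domain) pairs.
# All values are invented sample data.
log = [
    ("203.0.113.5", "news.example"),
    ("203.0.113.5", "clinic.example"),
    ("203.0.113.5", "clinic.example"),
    ("198.51.100.7", "games.example"),
    ("203.0.113.5", "jobs.example"),
]

# Per-client domain frequencies: already a crude behavioural profile.
profiles = defaultdict(Counter)
for client, domain in log:
    profiles[client][domain] += 1

# The most-queried domain per client hints at individual behaviour...
top_domain = {c: p.most_common(1)[0][0] for c, p in profiles.items()}

# ...while totals across all clients yield the kind of "internet usage
# statistics" mentioned above.
totals = Counter(domain for _, domain in log)
```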

DoH will solve some of these issues, but at a considerable price that must be recognised. In practice, these privacy-enhancing changes will reduce the number of DNS look-up services that users are in contact with, yet since the trusted DNS services will be chosen by the browser, the new reduced number of DNS look-up services will be predominantly US-based.

As in so many internet issues, this is a trade-off between the parties who retain control over individual persons or consumers in a commercial and technical sense. In the case of Mozilla’s Firefox, the trusted DNS provider will be Cloudflare. If Chrome adopts DoH, the DNS provider is likely to be Google itself.

If an EU provider were to make a better privacy-by-design effort than Cloudflare and Google have already done, then, according to Article 25 of the GDPR, it would have to be the preferred choice for browsers in the EU. Data Protection Agencies would have to assess whether the browser makers have really opted for the most privacy-enhancing DNS providers. However, as of today, there is nothing to suggest that any EU DNS company could credibly claim to top Cloudflare on DNS privacy. Like it or not, the current plethora of DNS providers is not conducive to data privacy at all.

DNS discussions are currently ongoing at the Internet Engineering Task Force (IETF), the global standardisation community for low-layer internet protocols. EDRi member ARTICLE 19 is following the discussions on best practices for state-of-the-art privacy-by-design and data management.

DoH is, at least partially, a concrete and positive effect of EU leadership on data protection issues. Hopefully, it will serve to enhance the protection of personal privacy while making internet back-end services less obscure. IETF standard-setting will provide a benchmark for robust privacy protections in DNS. But these developments are also an example of how EU internet infrastructure organisations and their governors have some way to go before they can be at the top of the privacy game. The success of European global privacy leadership will be measurable by how it reacts to these necessary privacy enhancements.

Read more:

Improving DNS Privacy in Firefox
https://blog.nightly.mozilla.org/2018/06/01/improving-dns-privacy-in-firefox

“Avskrivningsbeslut Säkerhetsbrister i kundplacerad utrustning” (“Dismissal decision: Security flaws in customer-premises equipment”; only in Swedish)
https://pts.se/globalassets/startpage/dokument/legala-dokument/beslut/2015/internet/teliasonera_avskrivning-i-arende-14-11117_20151215.pdf

IETF DNS PRIVate Exchange (dprive) Working Group
https://datatracker.ietf.org/wg/dprive/about/

(Contribution by Amelia Andersdotter and Mallory Knodel, EDRi member Article 19, United Kingdom)

29 Aug 2018

Can you do independent research without being independent?

By Bits of Freedom

Can you do independent research without being independent? The European Commission is evaluating how the rules on net neutrality have been implemented across Europe. These rules are designed to protect the rights of internet users. To our surprise, the evaluation is carried out by a law firm that frequently represents the big telecom providers that oppose net neutrality. Does that make sense?

The net neutrality rules are the rules that ensure that users, and not the provider, are free to decide what services to use online. These are the rules designed to make sure that “access to the internet” continues to mean “access to the entire internet”. The evaluation is vitally important, because many providers throughout Europe are starting to offer subscriptions where the traffic of certain services receives preferential treatment.

The study on which the evaluation will rely has been awarded to the law firm Bird & Bird, in consortium with the research and consultancy company Ecorys. In EU Member States such as the Netherlands, Bird & Bird represents most major telecom operators on matters related to the telecommunications regulatory framework, including net neutrality. For example, Bird & Bird represents T-Mobile in the pending court case that EDRi member Bits of Freedom has initiated against the decision of the Dutch regulatory authority ACM not to take action against T-Mobile’s zero-rating offer. This court case revolves around the practice of zero-rating and the interpretation of the very net neutrality rules that are the subject of the study.

Although there is no reason to doubt the legal expertise and experience of Bird & Bird, one could and should have concerns about awarding this particular study to this law firm. Given that the firm represents telecom operators in conflicts surrounding this legislation, there are reasonable questions to be raised about its independence and impartiality in conducting the study. This casts doubt on the validity of the results, and could damage the credibility of, and the confidence in, the European Commission’s evaluation of the net neutrality provisions and any measures resulting from it.

A number of European organisations defending users and consumers have asked the European Commission to provide a written confirmation of the impartiality of this study. The confirmation should include a list of all measures taken by the European Commission and/or Bird & Bird to ensure the independence and impartiality of the evaluators conducting the study and the quality of the report. Particularly in light of these problems, it is vital that the Commission presents a balanced report based on the findings of Bird & Bird.

This article was originally published by EDRi member Bits of Freedom. It is available here. A version in Dutch is available here.

Read more:

Net Neutrality
https://www.bof.nl/dossiers/net-neutrality/

Bits of Freedom’s court case about zero rating (06.08.2018)
https://www.bof.nl/2018/08/06/bits-of-freedoms-court-case-about-zero-rating/

15 organisations ask the European Parliament not to weaken net neutrality enforcement (27.04.2018)
https://edri.org/15-organisations-parliament-not-weaken-netneutrality-enforcement/

Dutch ban on zero-rating struck down – major blow to net neutrality (17.05.2017)
https://edri.org/dutch-ban-on-zero-rating-struck-down-major-blow-to-net-neutrality/

(Contribution by Rejo Zenger, EDRi member Bits of Freedom, the Netherlands)

29 Aug 2018

Women on Waves: how internet companies police our speech

By Bits of Freedom

Increasingly, internet companies decide which content we’re allowed to publish and receive. Users have become passive participants in a Russian Roulette-like game of content moderation.

Three suspensions, three apologies

In January 2018, pro-choice organisation Women on Waves receives a message stating it has violated YouTube’s “community guidelines” and therefore its account has been taken down. The account of Women on Web, Women on Waves’ sister organisation, is suspended too. No specifics are offered, but they are no longer able to access their account or the content on it. They appeal through YouTube’s appeal mechanism, but nothing happens. They subsequently issue a press release which proves more effective: their accounts are reinstated.

Fast forward to April. Since the reinstatement of its account in January, Women on Waves hasn’t uploaded any new material. Yet their account is suspended again and for the same reason. Just like that, Women on Waves’ videos, available in a dozen different languages, are no longer accessible to people searching for reliable medical information on safe abortion. In Europe this couldn’t have come at a more inconvenient time, namely thirty days before Ireland’s abortion referendum.

Women on Waves’ suspension doesn’t go unnoticed. Where in January it was a press release that led to the reinstatement of their accounts, in April it seems to be a number of tweets directed at YouTube and YouTube CEO Susan Wojcicki that cause YouTube to act. The result is the same: YouTube re-reviews the account and concludes Women on Waves isn’t in violation of its community guidelines. The account is put back online, and along with it Women on Web’s account.

Sadly, a month later the same thing happens again: on June 15, one of Women on Web’s videos is taken down and soon the entire account follows. On June 16, Women on Web appeals the take-down – this is denied on June 18. Two days later, after we reach out to YouTube Netherlands, Women on Web receives a message informing them their account is being reinstated after all.

#sorrynotsorry

Does all this sound a bit tedious? We agree. Together with Women on Waves and Women on Web, we got in touch with YouTube Netherlands last May. We asked if they would share why and how the accounts were flagged, and how the decision was made to put them back online. We were told this is internal information that can’t be made public. Further probing was met with more deflection: YouTube takes down so much content each day, mistakes are bound to be made. So be it.

Users can’t rely on YouTube

Of course, YouTube isn’t the open internet. As a company, it decides what content is allowed and what isn’t. We’ve seen numerous examples of this, from its efforts to push certain types of content or accounts, to the “concealing” of LGBTQ+-related videos. And from the forced monetisation of some accounts, to the demonetising of other, “unfavourable”, ones. But as we’ve seen, the company doesn’t even always know itself what it finds unfavourable, so how are users supposed to anticipate its rulings? YouTube’s mission is “to give everyone a voice”; it doesn’t hesitate to rob you of it, either.

This whimsical decision making might be considered cute, if YouTube didn’t hold so much power. Because as it stands, YouTube is the one website people visit to search for video content. This position of power is reinforced by the fact that in many countries YouTube and telecom providers have struck deals to offer you traffic to youtube.com “free of charge”. And don’t forget: in many countries visiting youtube.com is a lot safer than visiting womenonwaves.org. In other words, if you want your video content to be accessible, you need YouTube.

YouTube turns users into passive participants

Women on Waves’ accounts have been suspended and subsequently reinstated three times this year. Not because of YouTube’s complaint procedure, but because Women on Waves has a network of journalists and high profile followers that could draw attention to the ban and force YouTube to act. This might have worked now, and it might have worked for Women on Waves, but it can’t, and shouldn’t, be relied on to work in every case and for everyone. As long as YouTube doesn’t give more meaningful insight into their content moderation process, their users will remain on the sidelines.

If you don’t already have friends in high places (or at newspapers), start making them now – you’ll be needing them.

(Contribution by Evelyn Austin, EDRi member Bits of Freedom, the Netherlands)

This article was originally published on Bits of Freedom website. You can find the original article here.

29 Aug 2018

ENDitorial: The European Commission is talking “tough on terror”. Again.

By Joe McNamee

The European Commission plans to issue a Regulation on 12 September 2018 that will get tough on internet companies in the fight against terrorism. After all, somebody should do something, right? At the time of writing the title is “Regulation on preventing the dissemination of terrorist content online”.

The launch date is significant:

– it is 96 hours after the European Commission’s most recent terrorism Directive is due to come into force (including measures on addressing terrorist content online).

– it is not quite a full year since the European Commission launched a press release on its Communication on Tackling Illegal Content Online, about getting internet companies to “tackle” illegal content

– it is slightly over six months since the European Commission launched a press release about its Recommendation on “effectively” tackling illegal content online

– it is somewhat more than two years since the Commission launched its press release on tackling (“illegal”) “hate speech” that in the meantime has produced no statistics about how much of the content being deleted is actually illegal or the actual impact of the measures.

Issues which will not be included in the European Commission’s upcoming press release on its new “getting tough on internet companies” Regulation include:

– the fact that it has no idea if or how many instances of serious crime and terrorism reported by Europol to internet companies ever get investigated or prosecuted (Europol also does not know if the reported content is actually illegal)

– the fact that neither the European Commission nor Europol know how much potential evidence is deleted by internet companies as a result of reports issued by Europol to internet companies

– the fact that the European Commission has failed miserably to collect meaningful data on the availability and removal of illegal child abuse material, a failure severely criticised in a European Parliament Resolution passed by 597 votes in favour and six against.

The Regulation comes three months after the French (Collomb) and German (Seehofer) interior ministers sent a letter to the European Commission demanding that internet companies be made liable (as they already can be under the 2000 E-Commerce Directive) for failing to act quickly when they are made aware of illegal activity. In a separate development a few days after sending that letter, Interior Minister Collomb failed to take action upon receiving a video containing apparent evidence of a serious assault. He reportedly told a parliamentary committee of enquiry that it wasn’t his job (as Interior Minister) to do this. It is the job of internet companies to fight illegal activity, not the job of a national minister with responsibility for security, it appears.

It is, of course, a complete coincidence that the European Commission did exactly what the French and German ministers told them to do. Anything else would be a blatant breach of the oath of office of the College of Commissioners. Each Commissioner has solemnly undertaken “neither to seek nor to take instructions from any Government or from any other institution, body, office or entity”.

Read more:

Guide to the Code of Conduct on Hate Speech (03.06.2016)
https://edri.org/guide-code-conduct-hate-speech/

The time has come to complain about the Terrorism Directive (15.02.2017)
https://edri.org/the-time-has-come-to-complain-terrorism-directive/

Europol: Delete criminals’ data, but keep watch on the innocent (27.03.2018)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

25 Jul 2018

Member in the spotlight: Hermes Center

By Hermes Center

In this edition of the “Member in the spotlight” series, EDRi is proud to present new member Hermes Center.

Who are you and what is your organisation’s goal and mission?

Hermes Center for Transparency and Digital Human Rights is an Italian civil rights organisation that promotes awareness of, and attention to, transparency, accountability and freedom of speech online and, more generally, the protection of rights and personal freedoms in a connected world.

How did it all begin, and how did your organisation develop its work?

The origins of Hermes can be traced back to the start of the GlobaLeaks project and its fundamental need for an organisation that advocates for privacy and digital human rights in Italy in an organised manner.

The Center was formally registered in 2012, bringing together people who had for years been among the most active members of the Italian privacy community, all part of the “Project Winston Smith” (PWS), the former “Anonymous Organisation”.

From its creation, Hermes brought together a unique mix of activists, lawyers and hackers operating in the national and international context, with core activities ranging from the development of whistleblowing and anonymity software to digital rights advocacy: supporting journalists, organising awareness-raising events, drafting op-eds on these issues for national media, and contributing to policy making.

Its main goals have been supported over the years by volunteers and by private donations for their work (conference presentations, free workshops, and the organisation of conferences and panels).

Funding has mostly been provided by the Open Technology Fund in Washington and the Hivos foundation in The Hague, in the form of research grants for its software development projects, GlobaLeaks and Tor2web.

The biggest opportunity created by advancements in information and communication technology is…

… the free flow of information, the possibility to connect instantaneously with people from all over the world to discuss and organise on common topics. Furthermore, these technologies can foster more transparency and accountability of the government and help citizens to defend their rights.

The biggest threat created by advancements in information and communication technology is…

… the pervasive state of surveillance that presents two different and somehow overlapping faces: surveillance capitalism by companies and police surveillance.

Which are the biggest victories/successes/achievements of your organisation?

We advocated for the adoption by the Italian Anti-Corruption Authority (ANAC) of an online whistleblowing platform using onion services, giving whistleblowers a secure way to report illegal activity and at the same time protect their identities.

With the same goals and similar activities, through a network of EU partners and mostly in collaboration with Xnet, the Center supported Barcelona City Hall, the Anti-Fraud Authority of Catalonia, the Anti-Fraud Authority of Valencia and Madrid City Hall in the adoption of free and open source technologies, first among them the GlobaLeaks whistleblowing framework created by the Center, which is now used worldwide by more than 300 organisations.

Additionally, together with other Italian organisations, we managed to push the Italian Ministry of Economic Development (MISE) to revoke the export licence of Area SpA, a well-known surveillance company.

If your organisation could now change one thing in your country, what would that be?

We would like to change the approach by politicians to the discussion around digital rights, and achieve greater civil society involvement.

What is the biggest challenge your organisation is currently facing in your country?

Last year, the Italian government introduced a new data retention law, extending the collection and retention of phone and telecommunication metadata to a period of 72 months. This measure is clearly against the CJEU case law on data retention. At the same time, we are concerned with government surveillance, which lacks a clear legal framework, and we are firmly opposing the adoption of electronic voting systems being discussed by the government.

How can one get in touch with you if they want to help as a volunteer, or donate to support your work?

You can reach us on Twitter, Facebook and visiting our website. There you can find our official contacts and those of our members. There is also the possibility of making a donation or volunteering.

Read more:

Edri member in the spotlight series
https://edri.org/member-in-the-spotlight/

25 Jul 2018

ENDitorial: The fake fight against fake news

By Guest author

The new danger is no longer yellow, but red once more: fake news. It helped get Trump elected. It paved the highway to Brexit. Even local European elections are not safe. The greatest danger to our democracy in modern times must be fought by all possible means.

----------------------------------------------------------------- Support our work - make a recurrent donation! https://edri.org/supporters/ -----------------------------------------------------------------

Fortunately, we have the European Commission. Laws are gradually replaced by terms of use, and courts of law by advertising agencies. The latter are tipped off by Europol and media companies when users break their rules. Trials are no longer necessary. Progress is only measured by how much and how quickly online content gets deleted. The Commission keeps up the pace with a rapid-fire of communications, covenants, press releases and directive proposals, and sees that all is well.

Unfortunately, the previous paragraph is not an attempt at satire. The only incorrect part is that the fake news hype is not the cause of this evolution. It does, however, fit in seamlessly.

Fake news avant la lettre

Fake news is old news. In his book “The Attention Merchants”, Tim Wu traces its emergence to the rise of the tabloids in the 1830s. While most newspapers cost six cents, the New York Sun cost only one cent. It quickly grew in popularity by publishing an abundance of gruesome detail on court cases, and was financed mainly by ads for “patent medicine”: commercial medicine based on quackery.

The rise of radio tells a similar tale. RCA, the maker of the first radios, launched NBC so its customers would have something to listen to on their new devices. CBS, which started broadcasting later, nevertheless quickly grew much bigger thanks to easy-listening programming coupled with an expansive franchise model that let local stations share in the ad revenue. Television reran the same story, with Fox News managing to reach a broad audience with little previous exposure to TV ads.

Tall stories, half truths and sensational headlines are tried and tested methods used by media companies to sell more ads. On the internet, every click on “Five Tips You Won’t Believe” banners also earns money for the ad agencies. However, so do visits to “Hillary’s Secret Concentration Camps”. In this sense, the distribution of fake news through Facebook and Google has always been a natural part of their business model.

The doctor is expensive

Disinformation about a person is handled by defamation law. For specific historical events, like the Holocaust, most European countries have laws that make denial a criminal offence. Spreading certain other kinds of false information, such as claiming that the Earth is flat, is however not illegal.

Laws in this field are always contentious, given the tension with the right to free speech and the freedom of the press. Deciding what takes precedence is rarely obvious, so normally a judge has the final word on whether censorship is appropriate.

However, the courts are overloaded, money to expand them is lacking, and the amount of rubbish on the internet is gargantuan. Therefore legislators are eagerly looking at alternatives.

Let’s try self-medication

In recent years, the approach at the European level to relieve the courts has been one of administrative measures and self-regulation.

In 2011, article 25 of the Directive on Combating Sexual Exploitation of Children introduced voluntary website blocking lists at the European level. The goal was to make websites related to child sexual abuse unreachable in cases where closing them down or arresting the criminals behind them proved unfeasible.

The 2010 revision of the Directive on Audiovisual Media Services (AVMS), originally intended for TV broadcasters and media companies, was broadened to also partially cover services like YouTube. It requires sites that enable video sharing, and only those sites, to take measures against, among other things, hate speech. A procedure to broaden this required policing is ongoing.

This fight was intensified by means of a Code of Conduct on Online Hate Speech, which the European Commission agreed in 2016 with Facebook, Microsoft, Twitter and YouTube. These companies have accepted to take the lead in combating this kind of unwanted behaviour.

The Europol regulation, also from May 2016, complements this code of conduct. It formalised Europol’s “Internet Referral Unit” (IRU) in article 4(m). Europol itself cannot take enforcement actions. As such, the IRU is limited to reporting unwanted content to the online platforms themselves “for their voluntary consideration of the compatibility of the referred internet content with their own terms and conditions.” The reported content need not be illegal.

The European Commission’s communication on Tackling Illegal Online Content from 2017 subsequently focussed on how online platforms can remove reported content as quickly as possible. Its core proposal consists of creating lists of “trusted flaggers”. Their reports on certain topics should be assumed valid, and hence platforms can check them less thoroughly before removing flagged content.

The new Copyright Directive, voted down for now, would make video sharing sites themselves liable for copyright infringements by their users, both directly and indirectly. This would force them to install upload filters. Negotiations between the institutions on this topic will resume in September.

Concerning fake news, the European Commission’s working document Tackling Online Disinformation: a European Approach from 2017 contains an extensive section on self-regulation. In January 2018, the Commission created a “High Level Working Group on Fake News and Online Disinformation”, composed of various external parties. Their final report proposes that a coalition of journalists, ad agencies, fact checkers, and so on, be formed to write a voluntary code of conduct. Finally, the Report on the Public Consultation from April 2018 also mentions a clear preference by the respondents for self-regulation.

Follow-up of the symptoms

At the end of 2016 (a year late), the European Commission published its first evaluation of the directive against the sexual exploitation of children. It includes a separate report on the blocking lists, but it does not contain any data on their effectiveness nor side effects. This prompted a damning resolution by the European Parliament in which it “deplores” the complete lack of statistical information regarding blocking, removing websites, or problems experienced by law enforcement due to erased or removed information. It asked the European Commission to do its homework better in the future.

However, for the Commission this appears to be business as usual. In January 2018, four months after its Communication on Tackling Illegal Online Content, it sent out a press release calling for “more efforts and quicker progress”, without evaluating what had already been done. Moreover, the original document contained no concrete goals or evaluation metrics, which raises the question: more and quicker than what, exactly? This was followed in March 2018 by a recommendation from that same European Commission in which everyone, except the Commission and the Member States themselves, was called upon to further increase their efforts. The Commission now wants to launch a Directive on this topic in September 2018, undoubtedly with requirements for everyone but itself to do even more, even more quickly.

Referral to the pathogen

Online platforms have the right, within the boundaries of the law, to implement and enforce terms of use. What is happening now, however, goes quite a bit further.

More and more decisions on what is illegal are systematically outsourced to online platforms. Next, covenants between government bodies and these platforms include the removal of non-illegal content. The public resources and the authority of Europol are used to detect such content and report it. Finally, the platforms are encouraged to perform fewer fact checks on reports from certain groups, and there are attempts to make the platforms themselves liable for their users’ behaviour. This would only make them more inclined to pre-emptively erase controversial content.

When a governmental body introduces measures, these are always tested against the European Charter. Their proportionality, effectiveness and subsidiarity must be respected in the light of fundamental rights such as the right to free speech, the prohibition of arbitrary application of the law, and the right to a fair trial. Not prosecuting certain categories of unwanted behaviour, or not even making them illegal, and instead “recommending” that online platforms take action against them, undercuts these fundamentals of our rule of law in a rather blunt way.

Moreover, these online platforms are not just random companies. As the founders of Google wrote in their original academic article on the search engine:

For example, in our prototype search engine one of the top results for cellular phone is ‘The Effect of Cellular Phone Use Upon Driver Attention’… It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media, we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

Meanwhile, Google has become one of the largest advertising agencies in the world. Facebook and Twitter also obtain a majority of their revenue from selling ads. If the Commission were to ask whether we should outsource the enforcement of our fundamental rights to ad agencies, the response would presumably be quite different from the response to the umpteenth hollow announcement about how internet platforms should address illegal online content more quickly and more thoroughly.

A difficult diagnosis

If we look specifically at fake news, there are two additional problems: there is no definition, and there are few studies about it in its current form. Even the public consultation on fake news by the European Commission caused confusion by giving hate speech, terrorism and child abuse as examples of “already illegal fake news”. However, none of these terms refer to concepts that are necessarily fake or news. This makes it hard to draw conclusions from the answers, because it is unknown what the respondents understood by the term. The Eurobarometer survey on this topic has similar problems.

This does not mean that no information exists. The German NGO Stiftung Neue Verantwortung performed an experiment by spreading fake news through bought Twitter followers. They drew some interesting conclusions:

– Fake news is like memes: its essence is not its existence, but how well it is picked up. This means that blocking the source of popular fake news will not stop it from spreading;
– Fake news is spread by strongly linked users, while debunking it is done by a much more varied group of people. Hence, people with similar opinions seem to share the same fake news, but it seems to influence the general public less strongly.

An analysis by EDRi member Panoptykon on artificially increasing the popularity of Twitter messages on Polish politics led to compatible conclusions. There are bubbles of people that interact very little with each other. Each bubble contains influencers that talk about the same topics, but they seldom talk directly to each other. Prominent figures and established organisations, rather than robots (fake accounts), steer the discussions. Robots can be used to amplify a message, but by themselves do not create or change trends.

It is virtually impossible to distinguish robots from professional accounts by looking only at their network and activity. It is therefore very hard to automatically identify such accounts for the purpose of studying or blocking them. These are only small-scale studies, and one must be careful about drawing general conclusions from them. They certainly do not claim that fake news has no influence, or that we should just ignore it. That said, they do contain more concrete information than all the pages on this topic published to date by the European Commission.

So what should we do? Miracle cures are in short supply, but there are a few interesting examples from the past. The year 1905 saw a revolt against patent medicine after investigative journalists exposed its dangers. Later, in the 1950s, TV quizzes were found to favour telegenic candidates because of their beneficial effect on ratings. Revenue steering content has been around forever. Independent media should therefore be an important part of the solution.

The cure and the disease

The spectacular failure of the political establishment in both the US and the UK could not possibly have been its own doing, so a different explanation was called for. Forget the ruckus back in the day about Obama’s birth certificate or his alleged secret Muslim agenda, David Cameron’s desperate ploy to cling to power, and the UK tabloids’ tradition of made-up stories. This is something completely different. Flavour everything with the dangers of online content and present yourself as the digital knight on the white horse who will set things straight. Or rather, who orders the ones making money from sensation and clickbait (such as fake news) to set things straight as they see fit.

The above is oversimplified, but it is incredible how this European Commission is casually promoting the Facebooks and Googles of this world to become the keepers of European fundamental rights. Protecting democracy and the rule of law is not a business model. It is a calling. One that few will attribute to Mark Zuckerberg.

This article originally appeared in Dutch on Apache.be.
Original Article: De nepstrijd tegen het nepnieuws


Read more:

Press Release: “Fake news” strategy needs to be based on real evidence, not assumption (26.04.2018)
https://edri.org/press-release-fake-news-strategy-needs-based-real-evidence-not-assumption/

ENDitorial: Fake news about fake news being news (08.02.2017)
https://edri.org/enditorial-fake-news-about-fake-news-being-news/

(Contribution by Jonas Maebe, EDRi observer)

25 Jul 2018

New Protocol on cybercrime: a recipe for human rights abuse?

By EDRi

From 11 to 13 July 2018, the Electronic Frontier Foundation (EFF) and European Digital Rights (EDRi) took part in the Octopus Conference 2018 at the Council of Europe together with Access Now to present the views of a global coalition of civil society groups on the negotiations of more than 60 countries on access to electronic data by law enforcement in the context of criminal investigations.


There is a global consensus that mutual legal assistance among countries needs to be improved. However, recognising its inefficiencies should not translate into bypassing Mutual Legal Assistance Treaties (MLATs) by going to service providers directly, thereby losing the procedural and human rights safeguards embedded in them. Some of the issues with MLATs can be solved by, for example, technical training for law enforcement authorities, simplification and standardisation of forms, single points of contact, or increased resources. For instance, thanks to a recent US “MLAT reform programme” that increased the resources for handling MLATs, the US Department of Justice reduced the number of pending cases by a third.

There is a worrisome legislative trend emerging through the US CLOUD Act and the European Commission’s “e-evidence” proposals to access data directly from service providers. This trend risks creating a race to the bottom in terms of due process, court checks, fair trials, privacy and other human rights safeguards.

If the current Council of Europe negotiations on cybercrime focused on improving mutual legal assistance, they could offer an opportunity to create a human rights-respecting alternative to dangerous shortcuts such as those proposed in the US CLOUD Act or the EU proposals. However, civil rights groups have serious concerns from both a procedural and a substantive perspective.

This process is being conducted without the regular and inclusive participation of civil society or data protection authorities. Nearly 100 NGOs wrote to the Council of Europe’s Secretary General in April 2018 because they were not duly included in the process. While the Council of Europe issued a response, civil society groups reiterated that civil society participation and inclusion goes beyond a public consultation, attendance at a conference, and comments on texts preliminarily agreed by States. Human rights NGOs should be present in drafting meetings, to learn from the law enforcement expertise of the 60+ countries and to provide human rights expert input in a timely manner.

From a substantive point of view, the process is being built on the faulty premise that anticipated signatories to the Convention on cybercrime (“the Budapest Convention”) share a common understanding on basic protections of human rights and legal safeguards. As a result of this presumption, it is unclear how the proposed Protocol can provide for strong data protection and critical human rights vetting mechanisms that are embedded in the current MLAT system.

One of the biggest challenges in the Council of Europe process to draft an additional protocol to the Cybercrime convention – a challenge that was evident in the initial Cybercrime convention itself and in its article 15 in particular – is the assumption that signatory Parties share (and will continue to share) a common baseline of understanding with respect to the scope and nature of human rights protections, including privacy.

Unfortunately, there is neither a harmonised legal framework among the countries participating in the negotiations nor a shared human rights understanding. Experience shows that countries need to bridge the gap between national legal frameworks and practices on the one hand, and the human rights standards established by the case law of the highest courts on the other. For example, the Court of Justice of the European Union (CJEU) has held on several occasions that blanket data retention is illegal under EU law. Yet the majority of EU Member States still have blanket data retention laws in place. Other states involved in the protocol negotiations, such as Australia, Mexico and Colombia, have implemented precisely the type of sweeping, unchecked and indiscriminate data retention regime that the CJEU ruled out.

As a result of this lack of harmonised human rights and legal safeguards, the forthcoming protocol proposals risk:

– Bypassing critical human rights vetting mechanisms inherent in the current MLAT system that are currently used to, among other things, navigate conflicts in fundamental human rights and legal safeguards that inevitably arise between countries;

– Seeking to encode practices that fall below minimum standards being established in various jurisdictions by ignoring human rights safeguards established primarily by the case law of the European Court of Human Rights, the Court of Justice of the European Union, among others;

– Including few substantive limits, and instead relying on the legal systems of signatories to provide enough safeguards to ensure that human rights are not violated in cross-border access situations, together with a general, non-specific requirement that signatories ensure adequate safeguards (see Article 15 of the Cybercrime Convention), without any enforcement.

Parties to the negotiations should make human rights safeguards operational, as human rights are the cornerstone of our society. As a starting point, NGOs urge countries to sign, ratify and diligently implement Convention 108+ on data protection. In this sense, EDRi and EFF welcome the comments of the Council of Europe’s Convention 108 Committee.

Finally, civil society groups urge that the forthcoming protocol not establish a mandatory or voluntary direct access mechanism for obtaining data from companies without appropriate safeguards. While the proposals seem to be limited to subscriber data, there is a serious risk that the interpretation of what constitutes subscriber data will be expanded so as to lower safeguards, including access to metadata directly from providers through non-judicial requests or demands.

This can conflict with clear rulings of the European Court of Human Rights, such as the Benedik v. Slovenia case, and even with national case law, such as that of Canada’s Supreme Court. The global NGO coalition therefore reiterates that the focus should be on making mutual legal assistance among countries more efficient.

Civil society is ready to engage in the negotiations. For now, however, the future of the second additional protocol to the Cybercrime Convention remains unclear, raising many concerns and questions.

Read more:

Joint civil society response to discussion guide on a 2nd Additional Protocol to the Budapest Convention on Cybercrime (28.06.2018)
https://edri.org/files/consultations/globalcoalition-civilsocietyresponse_coe-t-cy_20180628.pdf

How law enforcement can access data across borders — without crushing human rights (04.07.2018)
https://ifex.org/digital_rights/2018/07/04/coe_convention_185_2ndamend_supletter/

Nearly 100 public interest organisations urge Council of Europe to ensure high transparency standards for cybercrime negotiations (03.04.2018)
https://edri.org/global-letter-cybercrime-negotiations-transparency/

A Tale of Two Poorly Designed Cross-Border Data Access Regimes (25.04.2018)
https://www.eff.org/deeplinks/2018/04/tale-two-poorly-designed-cross-border-data-access-regimes

Cross-border access to data has to respect human rights principles (20.09.2017)
https://edri.org/crossborder-access-to-data-has-to-respect-human-rights-principles/

(Contribution by Maryant Fernández Pérez, EDRi, and Katitza Rodríguez, EFF)
