29 Oct 2019

Hate speech online: Lessons for protecting free expression

By Ella Jakubowska

On 21 October, David Kaye – UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression – released the preliminary findings of his sixth report on information and communication technology. They include tangible suggestions to internet companies and states whose current efforts to control hate speech online are failing to comply with the fundamental principles of human rights. The EU Commission should consider Kaye’s recommendations when creating new rules for the internet and – most importantly – when drafting the Digital Services Act (DSA).

The “Report of the Special Rapporteur to the General Assembly on online hate speech” (docx) draws on international legal instruments on civil, political and non-discrimination rights to show how human rights law already provides a robust framework for tackling hate speech online. The report offers an incisive critique of platform business models which, supported by States, profit from the spread of “hateful content” whilst violating free expression by wantonly deleting legal content. Instead, Kaye offers a blueprint for tackling hate speech in a way which empowers citizens, protects online freedom, and puts the burden of proof on States, not users. Whilst the report outlines a general approach, the European Commission should incorporate Kaye’s advice when developing the proposed Digital Services Act (DSA) and other related legislation and non-legal initiatives, to ensure that the regulation of hate speech does not inadvertently violate citizens’ digital rights.

Harmful content removal: under international law, there is a better way

Sexism, racism and other forms of hate speech (which Kaye defines as “incitement to discrimination, hostility or violence”) in the online environment are quite rightly areas of attention for global digital policy- and law-makers. But the report offers a much-needed reminder that restricting freedom of expression online by deleting content is not just an ineffective solution: it threatens a multitude of rights and freedoms that are vital to the functioning of democratic societies. Freedom of expression is, as Kaye states, “fundamental to the enjoyment of all human rights”. Curtailing it opens the door for repressive States to systematically suppress their citizens. Kaye gives the example of blasphemy laws: profanity, whilst offensive, must be protected – otherwise such laws can be used to punish and silence citizens who do not conform to a particular religion. Others, such as journalist Glenn Greenwald, have already pointed out how “hate speech” legislation is used in the EU to suppress left-wing viewpoints.

Fundamental rules for restricting freedom of expression online

The report is clear that restrictions of online speech “must be exceptional, subject to narrow conditions and strict oversight”, with the burden of proof “on the authority restricting speech to justify the restriction”. Any restriction is thus subject to three criteria under human rights law:

Firstly, under the legality criterion, Kaye uses human rights law to show that any hate speech restricted online (as offline) must be genuinely unlawful, not just offensive or harmful, and must be regulated in a way that does not give “excessive discretion” to governments or private actors and that gives impacted individuals independent routes of appeal. Conversely, the current situation hands de facto regulatory power to internet companies by allowing (and even pressuring) them to act as the arbiters of what does and does not constitute free speech. Coupled with error-prone automated filters and short takedown periods that incentivise over-removal of content, this is a free speech crisis in motion.

Secondly, on the question of legitimacy, the report requires laws and policies on online hate speech to be treated in the same way as those governing any other speech. This means ensuring that freedom of expression is restricted only for legitimate interests, and not curtailed for “illegitimate purposes” such as suppressing criticism of States. Such illegitimate suppression is enabled by overly broad definitions of hate speech, which can act as a catch-all for content that States find offensive despite it being legal. A lack of strict definitions in the counter-terrorism policy field has already had a strong impact on freedom of expression in Spain, for example: “national security” was shown to have been abusively invoked to justify measures interfering with human rights, and used as a pretext to adopt vague and arbitrary limitations.

Lastly, necessity and proportionality are violated by current moderation practices, including “nearly immediate takedown” requirements and automatic filters that clumsily censor legal content as collateral damage in a war against hate speech. This violates the rights to due process and redress, and unnecessarily puts the burden of justifying content on users. Worryingly, Kaye notes that “such filters disproportionately harm historically under-represented communities.”

A rational approach to tackling hate speech online

The report offers a wide range of solutions for tackling hate speech whilst avoiding content deletion or internet shutdowns. Guided by human rights documents including the so-called “Ruggie Principles” (the 2011 UN Guiding Principles on Business and Human Rights), the report emphasises that internet companies need to exercise a greater degree of human rights due diligence. This includes transparent review processes, human rights impact assessments, clear routes of appeal and human, rather than algorithmic, decision-making. Crucially, Kaye calls on internet platforms to “de-monetiz[e] harmful content” in order to counteract the business models that profit from viral, provocative, harmful content. He stresses that the biggest internet companies must bear the cost of developing solutions, and share them with smaller companies to ensure that fair competition is protected.

The report is also clear that States must take more responsibility, working in collaboration with the public to put in place clear laws and standards for internet companies, educational measures, and remedies (both judicial and non-judicial) in line with international human rights law. In particular, they must take care when developing intermediary liability laws to ensure that internet companies are not forced to delete legal content.

The report gives powerful lessons for the future DSA and other related policy initiatives. In the protection of fundamental human rights, we must limit content deletion (especially automated) and avoid measures that make internet companies de facto regulators: they are not – and nor would we want them to be – human rights decision-makers. We must take the burden of proof away from citizens, and create transparent routes for redress. Finally, we must remember that the human rights rules of the offline world apply just as strongly online.

Report of the Special Rapporteur on the promotion and protection of the freedom of opinion and expression, A/74/486 (Advanced unedited report)
https://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/Annual.aspx

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

In Europe, Hate Speech Laws are Often Used to Suppress and Punish Left-Wing Viewpoints (29.08.2017)
https://theintercept.com/2017/08/29/in-europe-hate-speech-laws-are-often-used-to-suppress-and-punish-left-wing-viewpoints/

EU copyright dialogues: The next battleground to prevent upload filters (18.10.2019)
https://edri.org/eu-copyright-dialogues-the-next-battleground-to-prevent-upload-filters/

Spain: Tweet… if you dare: How counter-terrorism laws restrict freedom of expression in Spain (13.03.2018)
https://www.amnesty.org/en/documents/eur41/7924/2018/en/

CCBE Recommendations on the protection of fundamental rights in the context of ‘national security’ 2019
https://www.ccbe.eu/fileadmin/speciality_distribution/public/documents/SURVEILLANCE/SVL_Guides_recommendations/EN_SVL_20190329_CCBE-Recommendations-on-the-protection-of-fundamental-rights-in-the-context-of-national-security.pdf

23 Oct 2019

#PrivacyCamp20: Technology and Activism

By Dean Willis

The 8th annual Privacy Camp will take place in Brussels on 21 January 2020.

With the focus on “Technology and Activism”, Privacy Camp 2020 will explore the significant role digital technology plays in activism, enabling people to bypass traditional power structures and fostering new forms of civil disobedience, but also enhancing the surveillance power of repressive regimes. Together with activists and scholars working at the intersection of technology and activism, this event will cover a broad range of topics from surveillance and censorship to civic participation in policy-making and more.

The call for panels invites classical panel submissions, but also interactive formats such as workshops. We are particularly interested in providing space for discussions on and around social media and political dissent, hacktivism and civil disobedience, the critical public sphere, data justice and data activism, as well as commons, peer production, platform cooperativism and citizen science. The deadline for proposing a panel or a workshop is 10 November 2019.

In addition to traditional panel and workshop sessions, this year’s Privacy Camp invites critical makers to join the debate on technology and activism. We are hosting a Critical Makers Faire for counterculture and DIY artists and makers involved in activism. The Faire will provide a space to feature projects such as biohacking, wearables, bots, glitch art, and much more. The deadline for submissions to the Makers Faire is 30 November.

Privacy Camp is an annual event that brings together digital rights advocates, NGOs, activists, academics and policy-makers from Europe and beyond to discuss the most pressing issues facing human rights online. It is jointly organised by European Digital Rights (EDRi), Research Group on Law, Science, Technology & Society at Vrije Universiteit Brussel (LSTS-VUB), the Institute for European Studies at Université Saint-Louis – Bruxelles (USL-B), and Privacy Salon.

Privacy Camp 2020 takes place on 21 January 2020 in Brussels, Belgium. Participation is free and registrations open in December.

Privacy Camp 2020: Call for submissions
https://privacycamp.eu/?p=1601

Privacy Camp
https://privacycamp.eu/

(Contribution by Dean Willis, EDRi intern)

23 Oct 2019

Net neutrality overhaul: 5G, zero-rating, parental control, DPI

By Epicenter.works

The Body of European Regulators for Electronic Communications (BEREC) is currently overhauling its guidelines on the implementation of Regulation (EU) 2015/2120, which forms the legal basis of the EU’s net neutrality rules. At its most recent plenary, BEREC produced new draft guidelines and opened a public consultation on the draft. The proposed changes to the guidelines seem like a mixed bag.

5G network slicing

The new mobile network standard 5G allows network operators to provide multiple virtual networks (“slices”) with different quality characteristics over the same network infrastructure, a capability called “network slicing”. Because end-user equipment can be connected to multiple slices at the same time, providers could use the introduction of 5G to create new products in which different applications make use of different slices and their associated quality levels. In its draft guidelines, BEREC clarifies that it is the user who must be able to choose which application makes use of which slice. This is a welcome addition.
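To make that user choice concrete, here is a minimal sketch of an app-to-slice mapping controlled by the end-user. The slice names, quality parameters and applications are all hypothetical; real 5G slice selection is negotiated between device and network, and nothing here reflects an actual operator API.

```python
# Illustrative sketch only: the user, not the operator, decides which
# application's traffic uses which slice. Slice names, QoS figures and
# application names are all hypothetical.
SLICES = {
    "low-latency": {"max_latency_ms": 10},
    "high-bandwidth": {"min_mbps": 100},
    "best-effort": {},
}

# User-editable preferences; per BEREC's draft, the choice must rest here.
user_app_slices = {
    "cloud-gaming": "low-latency",
    "video-streaming": "high-bandwidth",
}

def slice_for(app: str) -> str:
    """Return the slice an application's traffic should use."""
    return user_app_slices.get(app, "best-effort")

def set_slice(app: str, slice_name: str) -> None:
    """Let the user reassign an application to a different slice."""
    if slice_name not in SLICES:
        raise ValueError(f"unknown slice: {slice_name}")
    user_app_slices[app] = slice_name

set_slice("video-calls", "low-latency")
print(slice_for("video-calls"))  # low-latency (the user's choice)
print(slice_for("email"))        # best-effort (default)
```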

Zero-rating

Zero-rating is the practice of billing the traffic of different applications differently, in particular not deducting the traffic generated by certain applications from a user’s available data volume. This practice has been criticised because it reduces consumers’ choice of which applications to use, and disadvantages new, small application providers against the big, already established players. These offers broadly come in two types: “open” zero-rating offers, where application providers can apply to join the programme and have their application zero-rated, and “closed” offers, where that is not possible. The draft outlines specific criteria against which open offers can be assessed.
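As a rough illustration of why zero-rating favours established players, the sketch below models the billing asymmetry; the application names, allowance and zero-rated set are hypothetical.

```python
# Illustrative model: traffic from zero-rated applications is not
# deducted from the user's data volume, so using them is "free", while
# a competing newcomer's traffic eats into the allowance.
ZERO_RATED_APPS = {"partner-music-app", "operator-video-app"}

def bill_traffic(app: str, megabytes: float, remaining_mb: float) -> float:
    """Return the remaining data volume after this traffic is billed."""
    if app in ZERO_RATED_APPS:
        return remaining_mb              # zero-rated: volume untouched
    return max(0.0, remaining_mb - megabytes)

volume = 1000.0                                            # 1 GB allowance
volume = bill_traffic("partner-music-app", 500.0, volume)  # still 1000.0
volume = bill_traffic("small-startup-app", 500.0, volume)  # now 500.0
print(volume)
```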

Parental control filters

While content- and application-specific pricing is an additional challenge for small content and application providers, content-specific blocking can create even greater problems. Nevertheless, the draft contains new language that carves products such as parental control filters operated by the access provider out of the provisions of the Regulation that prohibit such blocking, subjecting them instead to a case-by-case assessment by the regulators (as is the case for zero-rating). The language does not clearly exclude filters that are sold in conjunction with the access product and are on by default, and the rules can even be read as requiring users who do not want to be subjected to the filtering to manually reconfigure each of their devices.

Deep Packet Inspection

Additionally, BEREC is also running a consultation on two paragraphs in the guidelines to which it has not yet proposed any changes. These paragraphs establish important privacy protections for end-users: they prohibit access providers from using Deep Packet Inspection (DPI) when applying traffic management measures in their network, and thus protect users from having the content of their communications inspected. However, according to statements made during the debriefing session of the latest BEREC plenary, some actors want to allow providers to look at domain names, which can themselves reveal very sensitive information about the user and which require DPI to extract from the data stream.
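To see why domain names require DPI, consider the minimal sketch below, which assumes a simplified packet model: ordinary traffic management reads only header fields, while extracting a domain name means inspecting the payload itself.

```python
from typing import Optional, Tuple

# Simplified packet model: (src_ip, dst_ip, dst_port, payload).
Packet = Tuple[str, str, int, bytes]

def classify_by_header(packet: Packet) -> str:
    """Non-DPI traffic management: only header fields are consulted."""
    _, _, dst_port, _ = packet
    return "web" if dst_port in (80, 443) else "other"

def extract_domain(packet: Packet) -> Optional[str]:
    """Reading a domain name requires inspecting the payload (DPI).
    Here we pull the Host header out of a plaintext HTTP request; for
    TLS traffic the equivalent would be parsing the SNI field of the
    ClientHello."""
    payload = packet[3]
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            return line.split(b":", 1)[1].strip().decode()
    return None

packet = ("10.0.0.2", "93.184.216.34", 80,
          b"GET /test HTTP/1.1\r\nHost: depression-test.example\r\n\r\n")
print(classify_by_header(packet))  # "web" - no payload inspected
print(extract_domain(packet))      # "depression-test.example" - sensitive
```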

EDRi member epicenter.works will respond to BEREC’s consultation and encourages other stakeholders to participate. The proposed changes are significant; clearer language is required, and users’ privacy needs to remain protected. The consultation period ends on 28 November 2019.

epicenter.works
https://epicenter.works/

Public consultation on the document on BEREC Guidelines on the Implementation of the Open Internet Regulation (10.10.2019)
https://berec.europa.eu/eng/news_consultations/ongoing_public_consultations/5947-public-consultation-on-the-document-on-berec-guidelines-on-the-implementation-of-the-open-internet-regulation

Zero rating: Why it is dangerous for our rights and freedoms (22.06.2016)
https://edri.org/zero-rating-why-dangerous-for-our-rights-freedoms/

NGOs and academics warn against Deep Packet Inspection (15.05.2019)
https://edri.org/ngos-and-academics-warn-against-deep-packet-inspection/

Net Neutrality vs. 5G: What to expect from the upcoming EU review? (05.12.2018)
https://edri.org/net-neutrality-vs-5g-what-to-expect-from-the-upcoming-eu-review/

(Contribution by Benedikt Gollatz, EDRi member epicenter.works, Austria)

23 Oct 2019

Austrian Passenger Name Records complaint – the key points

By Epicenter.works

Austrian EDRi member epicenter.works filed a complaint about Passenger Name Records (PNR) with the Austrian data protection authority (DPA) in August 2019, with the aim of overturning the EU PNR Directive. On 6 September, the DPA rejected the complaint – good news, as this rejection was the only route to lodging a complaint with the Federal Administrative Court.

The complaint: Objections

Epicenter.works’ complaint about the PNR system to the Federal Administrative Court contains a number of objections. The largest and most central one concerns the entire PNR Directive itself. The Court of Justice of the European Union (CJEU) has already repeatedly declared similar mass surveillance measures to be contrary to fundamental rights, for example in the case of data retention or in the expert opinion on the PNR agreement with Canada.

A complaint cannot be lodged directly with the CJEU, but the Administrative Court must submit questions on the interpretation of the law to the CJEU, as epicenter.works suggested in the complaint. The first suggested question can be summarised as follows: “Does the PNR Directive contradict the fundamental rights of the EU?”

Moreover, Austria has not correctly implemented the PNR Directive: it has partially extended the Directive’s application and has failed to implement important restrictions from it. For example, the Directive obliges all automatic hits – for instance, when someone is identified as a potential terrorist – to be checked by a person. This has not been implemented in the Austrian PNR Act. The question to the CJEU proposed in the complaint is therefore: “If the PNR Directive is valid in principle, is the processing of PNR data permitted even though the automatic hits do not have to be checked by a person?”

Where the Austrian PNR Act goes beyond the Directive, epicenter.works suggests that the Court should request the Constitutional Court to repeal certain provisions.

The Austrian PNR Act goes further than the Directive

According to the PNR Directive, PNR data may only be processed for the purpose of prosecuting terrorist offences and certain serious criminal offences. These serious crimes are listed in an annex to the Austrian PNR Act, translated directly from the PNR Directive. However, some of these crimes have no equivalent in Austrian law, leaving the entire provision unclear. Because of this flaw, the complaint asks the Constitutional Court to repeal this provision of the PNR Act. The list of terrorist offences in the PNR Act also goes much further than the Directive.

The PNR Directive only requires EU Member States to record flights to or from third countries, leaving the recording of intra-EU flights optional for Member States. Many countries have also extended this to domestic flights. In Austria, the Minister of the Interior can do this by decree without giving any specific reason. The complaint suggests that the Constitutional Court should delete this provision, because it has a strong impact on the fundamental rights of millions of people — without any justification of its necessity or proportionality.

Finally, the PNR Act also gives customs authorities and even the military access to PNR data. This is neither provided for in the PNR Directive nor necessary for the prosecution of alleged terrorists and those suspected of serious crimes, and is therefore an excessive measure. Here, too, the complaint suggests that the Constitutional Court should delete the provisions that give these authorities access to PNR data.

epicenter.works
https://en.epicenter.works/

Our PNR complaint to the Federal Administrative Court
https://en.epicenter.works/content/our-pnr-complaint-to-the-federal-administrative-court

PNR: EU Court rules that draft EU/Canada air passenger data deal is unacceptable (26.07.2017)
https://edri.org/pnr-eu-court-rules-draft-eu-canada-air-passenger-data-deal-is-unacceptable/

Why EU passenger surveillance fails its purpose (25.09.2019)
https://edri.org/why-eu-passenger-surveillance-fails-its-purpose/

Passenger surveillance brought before courts in Germany and Austria (22.05.2019)
https://edri.org/passenger-surveillance-brought-before-courts-in-germany-and-austria/

(Contribution by EDRi member epicenter.works, Austria)

23 Oct 2019

The sixth attempt to introduce mandatory SIM registration in Romania

By ApTI

A tragic failure by the police to save a teenage girl – who was abducted but managed to call the 112 emergency number three times before she was murdered – led to the adoption of a new Emergency Ordinance in Romania. The Ordinance introduces several measures to improve the 112 system, one of which is mandatory SIM card registration for all prepaid users. Currently, approximately ten million prepaid SIM cards are in use in Romania.

This is the sixth legislative attempt in the last eight years to mandate the registration of SIM card users, despite a 2014 Constitutional Court decision deeming such a measure illegal. The measure was adopted through a fast-track legislative procedure and is supposed to enter into force on 1 January 2020.

The main reason for introducing mandatory SIM card registration appears to be that the authorities want to localise calls to the emergency number and punish false emergency calls. However, the measure is unlikely to be effective for this purpose, as anyone who buys a SIM card can simply pass it on to someone else. Another reason is to identify callers in real emergency situations, in order to locate them more easily and send help.

Romania is one of the few countries in the European Union where calling the emergency number without a SIM card is not possible. This has been a deliberate decision taken by Romanian authorities to limit the number of “non-urgent” calls.

What happened?

After the Emergency Ordinance was proposed, EDRi member ApTI, together with two other Romanian NGOs, launched a petition to the Ombudsman and the government calling for this law not to be adopted. After civil society’s calls for a public debate, the Ministry of Communications organised an oral hearing in which the participants were given no more than five minutes to express their views, without the possibility to have an actual dialogue. The Emergency Ordinance was adopted shortly after the hearing, despite the fact that the Romanian Constitution explicitly states that laws which affect fundamental rights cannot be adopted by emergency ordinances (Article 115 of the Romanian Constitution).

What did the court say in 2014?

In 2014, the Constitutional Court held that the “retention and storage of data is an obvious limitation of the right to personal data protection and to the fundamental rights protected by the Constitution on personal and family privacy, secrecy of correspondence and freedom of speech” (para. 43 of Decision nr. 461/2014, unofficial translation). The Court explained that restricting fundamental rights is possible only if the measure is necessary in a democratic society. The measure must also be proportionate, and must be applicable without discrimination and without affecting the essence of the right or liberty.

Collecting and storing the personal data of all citizens who buy prepaid SIM cards, for the mere reason of punishing those who might abusively call the emergency number, seems like a bluntly disproportionate measure that unjustifiably limits the right to private life. At the same time, such a measure inverts the presumption of innocence and automatically assumes that all prepaid SIM card users are potentially guilty.

What’s the current status?

The Ombudsman listened to civil society’s concerns, and challenged the Ordinance at the Constitutional Court. Together with human rights NGO APADOR-CH, ApTI is preparing an amicus curiae to support the unconstitutionality claims.

In the meantime, the Ordinance moved on to parliamentary approval and the provisions related to mandatory SIM card registration were rejected in the Senate, the first chamber to debate the law. The Chamber of Deputies can still introduce modifications.

Asociatia pentru Tehnologie si Internet (ApTI)
https://www.apti.ro/

Petition against Emergency Ordinance on mandatory sim card registration (only in Romanian, 12.08.2019)
https://www.apti.ro/petitie-cartele-prepay-initiativa2019/

ApTI’s response to the public consultation on Emergency Ordinance on mandatory SIM card registration (only in Romanian, 21.08.2019)
https://www.apti.ro/raspuns-apti-inregistrare-prepay-112

Constitutional Court decision nr. 461/2014 (only in Romanian)
https://privacy.apti.ro/decizie-curtea-constitutionala-prepay-461-2014/

Timeline of legislative initiatives to introduce mandatory SIM card registration (only in Romanian)
https://apti.ro/Ini%C5%A3iativ%C4%83-legislativ%C4%83-privind-%C3%AEnregistrarea-utilizatorilor-serviciilor-de-comunica%C5%A3ii-electronice-tip-Prepay

(Contribution by Valentina Pavel, EDRi member ApTI, Romania)

23 Oct 2019

EU Commissioners candidates spoke: State of play for digital rights

By Ella Jakubowska

On 1 November 2019, the new College of European Commissioners – comprising 27 representatives (one from each EU Member State) rather than the usual 28, thanks to Brexit – is scheduled to take office for the next five years, led by incoming President-elect Ursula von der Leyen.

A leading role in Europe’s digital future

EU Commissioners are a powerful bunch: as the executive branch of the European Union – complementing the European Parliament and the Council of the European Union as legislators, and the Court of Justice of the European Union (CJEU) as judiciary – the College’s wide-ranging responsibilities cover EU policy, law, budget, and “political and strategic direction”. With digitalisation an issue that transcends borders, the choice of Commissioners could have an impact on digital rights across the world.

Between 30 September and 8 October 2019, the Commissioners-designate underwent marathon confirmation hearings in the European Parliament. These hearings give the EU’s elected representatives (Members of the European Parliament, MEPs) an opportunity, before voting on the Commissioners-designate, to ask them questions about their capacities and potential priorities if elected. Among the three that did not make the cut was France’s nominee for Internal Market, Sylvie Goulard, whose late-stage rejection may delay the start of the new Commission.

A shared task to update Europe for the digital age

Five of the incoming Commissioners’ portfolios are expected to have a significant influence on digital policy. Carved up by President-elect von der Leyen, their overlapping responsibilities to make Europe fit for the digital age could make or break citizens’ rights to privacy, data protection and online freedoms in general:

  • Sweden’s Ylva Johansson, Commissioner-designate for Home affairs, will inherit a portfolio including cybercrime, the terrorist content Regulation, and issues relating to privacy and surveillance. Whilst her hearing was relatively light on digital questions, it was certainly heavy on evasive answers. Her insistence on fundamental rights was a good start, but her call for compromise between security and privacy fell into the age-old myth that the two rights are mutually exclusive.
  • Belgium’s Didier Reynders, Commissioner-designate for Justice and Consumers, championed rights by committing to enforce the General Data Protection Regulation (GDPR) to its fullest extent. On Artificial Intelligence (AI) and data protection, he promised swift law, safety, trust, transparency, and for those making or judging the law to better understand the impacts of algorithmic decisions. He cited plans for a collective redress position in November.
  • No-longer-Commissioner-designate Sylvie Goulard, of France, lost her chance to oversee the Internal Market. Although Goulard pitched increased digital education and maintaining the EU’s data policy leadership, MEPs were far more concerned with her past. Accusations of impropriety in her former role as a French defence minister, and of high earnings as a private consultant while in office, led MEPs to conclude that she lacked the integrity to be a Commissioner. Update: According to several news sources, Thierry Breton has been proposed as the new candidate by Emmanuel Macron. (24 October 2019)
  • The Czech Republic’s Věra Jourová (current Commissioner for Justice) made her case as Commissioner-designate for Values and transparency. Democracy, freedom of expression and cracking down on disinformation were key topics. Despite an understated performance, she called Europeans “the safest people on the planet.” She is right that GDPR sets a strong global standard, but it has faced a rocky implementation, and as of today requires further efforts to ensure the harmonisation that the Regulation prescribed.
  • Last was Denmark’s Margrethe Vestager, nominated as Executive Vice-President for a Europe fit for the digital age while continuing as Competition Commissioner. Her anti-Big-Tech, “privacy-friendly”, pro-equality redistribution agenda was well received. She faced questions about breaking up Big Tech, leaving it on the table as a “tool” of last resort but emphasising her desire to exhaust other avenues first. She stumbled, however, over accusations that her aspirations to rein in Big Tech are incompatible with her remit as leader of the EU’s digital affairs.

The implications for digital policy

Throughout the hearings, the Commissioners-designate made many commitments, emphasised their policy priorities, and shared their plans for the future. Although we do not know exactly how this will translate into concrete policy, the hearings give valuable insight into how the new College intends to tackle rights challenges in the online environment. This is not an exact science, but we invite you to join us – and our “rightsometer” – in speculating about the impact the nominees’ ideas will have on citizens’ digital rights over the next five years, based on what the nominees did (and did not) say.

Privacy
Key legislation: ePrivacy

The currently stalled ePrivacy Regulation was unsurprisingly raised by MEPs – and, reassuringly, Vestager shared that “passing ePrivacy” needs to be “a high priority”.

Result: with Vestager’s support, it is a cautiously optimistic 3/5 on the rightsometer – but the troubled history of the Regulation also warns us not to be too hopeful.

Platform power
Key legislation: E-Commerce Directive (ECD), slated to be replaced by the upcoming Digital Services Act (DSA)

Vestager was the champion of regulating Big Tech throughout her hearing, proposing to redress the balance of power in favour of citizens and to give consumers more choice between platforms. But she later confessed to uncertainty about the shape the DSA will take, saying that she needs to “take stock” before committing to a position on E-Commerce. Jourová committed to redress in the event of wrongful takedown of content, and emphasised her strong support for the DSA. However, she signalled her intention to explore platform “responsibility” for illegal content, a move which would threaten myriad human rights.

Result: the rightsometer gives an inconclusive 2.5/5: the commitments to strengthening Big Tech regulation are promising, but the risk of unintended consequences from some of these ideas remains a big concern.

Disinformation
Key document: Code of Practice on Disinformation

Jourová committed to tackling the problem of online disinformation, promising to bring in codes of conduct for platforms, to make it clear where political advertisements come from and who funds them, and to enforce “rules” for political campaigning.

Result: it’s a positive 4/5, and we encourage Jourová to analyse the risks posed by targeted political advertising and the online tracking industry, both products of dysfunctional business models. A cautious approach is nevertheless needed (see the Access Now, EDRi and Liberties guide on disinformation).

Law enforcement and cross-border access to data
Key legislation: “e-Evidence” proposal

Under direct questioning from MEP Moritz Körner about plans to advance e-Evidence, Commissioner-designate Johansson declined to reply. She also insinuated that the fundamental right to encryption might be incompatible with fighting terrorism.

Result: e-Evidence makes for a pessimistic 0/5 on the rightsometer, with nothing to give confidence that this controversial proposal is being reassessed.

Artificial Intelligence (AI)
Key legislation: none proposed yet – but both von der Leyen and Reynders promised “horizontal” legislation within 100 days

Jourová emphasised that fundamental rights in AI innovation will “ensure that our solutions put people first, and will be more sustainable as a result”. Vestager added that ethics will be at the heart of AI policy, and Reynders that Europe’s “added value” is in bringing protection for privacy and data to future AI legislation.

Result: a promising 4/5 on the rightsometer; we welcome the Commissioners-designate’s focus on fundamental rights when implementing AI-based technologies.

Where does that leave our digital rights?

Disinformation, Artificial Intelligence, privacy and mitigating platform power all received substantive commitments from the Commissioners-designate. Protecting fundamental rights online was, thankfully, a persistent concern for all the nominees. Certain topics, such as “digital literacy”, were mentioned but not given any flesh, and the nominees also declined to answer a number of “too specific” questions. Although there is much to be optimistic about, the balancing of rights against law enforcement and innovation means that we should stay cautious.

Access Now: Meet the European Commissioners: Who will shape the next five years of digital policy in the EU? (27.09.2019)
https://www.accessnow.org/meet-eu-commissioners/

EDRi: Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)
https://edri.org/open-letter-to-eu-member-states-deliver-eprivacy-now/

Access Now, Civil Liberties Union for Europe and European Digital Rights: Joint Report on Informing the “Disinformation” Debate (18.10.2018)
https://edri.org/files/online_disinformation.pdf

(Contribution by Ella Jakubowska, EDRi intern)

18 Oct 2019

EU copyright dialogues: The next battleground to prevent upload filters

By Ella Jakubowska

On 15 October, the European Commission held the first of the stakeholder dialogues, mandated by Article 17 of the EU copyright Directive, inviting 65 organisations to help map current practices, and opening the door for deeper collaboration in the future.

Organisations from all sides of the debate were able to present their positions. While the first meeting focused on music, software and gaming, the next one will focus on audiovisual, visual, sports and text. These live-streamed dialogues are probably the last window of opportunity at the EU level for those who campaigned against upload filters in the copyright Directive to achieve the alleged goals of the Directive – harmonisation and modernisation of the copyright framework – without the collateral damage to citizens’ liberties. If the dialogues fail to achieve this, the battle will move to EU Member States.

The Copyright Directive was adopted in June 2019 as part of plans to unite Europe’s Digital Single Market – just over a year after the General Data Protection Regulation (GDPR) became applicable, and in the midst of an ongoing struggle over the proposed ePrivacy Regulation. The contentious Directive was welcomed by rightsholders keen to see online platforms take responsibility for copyright infringement, but it drew criticism from across civil society and academia, from UN Special Rapporteur on Freedom of Expression David Kaye, and even from Edward Snowden, for enabling the removal of citizens’ legal content by automatic filters.

“Techno-solutionism” as a knee-jerk reaction

Techno-solutionism describes attempts to solve any and all problems with technology. The technologically-focused approach taken in the Directive, and advocated for by some rightsholders, is the wrong solution for the alleged problem (a lack of negotiating power between rightsholders and streaming services). The upload filters deriving from Article 17 are severely error-prone (from a cat’s purring being mistaken for copyrighted music to evidence of war crimes being lost) and cannot grasp the full range of nuanced human expression, such as caricature, parody or pastiche. This situation empowers tech giants, harms small and medium enterprises, and fails to adequately protect authors. Furthermore, Article 17(7) of the Directive offers only limited mandatory exceptions, covering the use of content for quotation, parody or pastiche. Member States still have the opportunity to go beyond these and make all exceptions and limitations mandatory. The proposed automated filters, however, will not be able to analyse most of them; a more nuanced approach to copyrighted content will be needed, including human supervision.

Violations and harms in the current situation

More than just theoretically flawed, the application of the copyright Directive could lead to violations of freedoms. So-called “copyright trolling” is a practice used to extort or censor individual users. When implementing the Directive, Member States should put in place systems that penalise such abuses. Furthermore, the use of automated filters may collide with Article 22 of the GDPR, which gives data subjects the right not to be subject to a decision based solely on automated processing where that decision significantly affects them. How this will be dealt with in practice remains to be seen.

Fundamental incompatibility with the human right to redress

The right to redress is fundamental if this Directive is to avoid collateral damage. The current redress mechanism has already been shown to be inadequate, as platforms are likely to invoke their Terms of Service as an excuse to delete content rather than go through the hassle of deciding whether a given exception or limitation in the Directive protects the user’s right to use the copyrighted content. We hope that the non-judicial redress mechanisms mentioned in Article 17(9) will be easily and freely available to anyone who needs them.

Reframing the debate to prevent violations of free expression

If the goal is indeed to target services that unfairly benefit from authors’ work, then the definition of Online Content Sharing Service Providers (OCSSPs) must be made more specific: it has to better reflect the few services that profit from large-scale copyright infringement to the extent that they become alternatives to paid streaming services, and that do not adequately remunerate rightsholders. Another possible solution is to reverse the burden of proof so that disputed content is not immediately removed. In essence, silence cannot play to the disadvantage of citizens: if a platform asks a rightsholder for a licence and the rightsholder does not react, this should mean that the platform has met the “best efforts” threshold for obtaining a licence. If a rightsholder asks to block a user’s content and the user claims that they were within their rights, the rightsholder’s silence should mean that the disputed content stays up or is reinstated as soon as possible. If the dispute remains contested, human intervention would be appropriate.
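A minimal sketch of this proposed dispute flow, with illustrative state names, encoding the principle that silence never counts against the user:

```python
def resolve_dispute(user_claims_exception: bool,
                    rightsholder_responds: bool) -> str:
    """Hypothetical decision rule for disputed, rightsholder-flagged content."""
    if not user_claims_exception:
        return "content stays down"
    if not rightsholder_responds:
        return "content reinstated"   # rightsholder silence favours the user
    return "human review"             # genuine disagreement needs a person

print(resolve_dispute(user_claims_exception=True, rightsholder_responds=False))
print(resolve_dispute(user_claims_exception=True, rightsholder_responds=True))
```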

The next stakeholder meeting will be held on 5 November.

First meeting of the Stakeholder Dialogue on Art 17 of the Directive on Copyright in the Digital Single Market (15.10.2019)
https://ec.europa.eu/digital-single-market/en/news/first-meeting-stakeholder-dialogue-art-17-directive-copyright-digital-single-market

Organisation of a stakeholder dialogue on the application of Article 17 of Directive on Copyright in the Digital Single Market (28.08.2019)
https://ec.europa.eu/digital-single-market/en/news/organisation-stakeholder-dialogue-application-article-17-directive-copyright-digital-single

All you need to know about copyright and EDRi (15.03.2019)
https://edri.org/all-you-need-to-know-about-copyright-and-edri/

Copyfails: time to #fixcopyright! (23.05.2016)
https://edri.org/copyfails/

Article 17 Stakeholder Dialogue: We’ll Continue to Advocate for Safeguarding User Rights (08.10.2019)
https://www.communia-association.org/2019/10/08/article-17-stakeholder-dialogue-well-continue-advocate-safeguarding-user-rights/

(Contribution by Ella Jakubowska, EDRi intern)

17 Oct 2019

Trilogues on terrorist content: Upload or re-upload filters? Eachy peachy.

By Chloé Berthélémy

On 17 October 2019, the European Parliament, the Council of the European Union (EU) and the European Commission started closed-door negotiations, trilogues, with a view to reaching an early agreement on the Regulation on preventing the dissemination of terrorist content online.

The European Parliament improved the text proposed by the European Commission by addressing its dangerous pitfalls and by reinforcing rights-based and rights-protective measures. The position of the Council of the European Union, however, supported the “proactive measures” the Commission suggested, meaning potential “general monitoring obligations” and, in practice, automated detection tools and upload filters to identify and delete “terrorist content”.

Finding middle ground

In trilogue negotiations, the parties – the European Parliament, Commission and Council – attempt to reach a consensus starting from what can be very divergent texts. In the Commission’s and Council’s versions of the proposed Regulation, national competent authorities have the option to force the use of technical measures upon service providers. The Parliament, on the contrary, deleted all references to forced pro-activity and thus brought the Regulation into line with Article 15 of the E-Commerce Directive, which prohibits obliging platforms to generally monitor the user-generated content they host.

Ahead of the negotiations, the European Commission was exploring the possibility of suggesting “re-upload filters” instead of upload filters as a step towards a compromise. Also known as “stay-down filters”, these differ from regular upload filters in that they only search for, identify and take down content that has already been taken down once. The aim is to ensure that content first deemed illegal stays down and does not spread further online.

Upload or re-upload filters: What’s the difference?

“Re-upload filters” entail the use of automated means and the creation of databases containing digital hash “fingerprints” of every piece of content that hosting providers have identified as illegal and removed. They also mean that all user-generated content published on the intermediaries’ services is monitored, compared with the material contained in those databases, and filtered out in case of a match. As the pieces of content included in those databases are in most cases not subject to a court’s judgment, this practice could amount to an obligation of general monitoring, which is prohibited under Article 15 of the E-Commerce Directive.
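To make the mechanism concrete, here is a minimal sketch of a hash-based stay-down check. It uses exact SHA-256 fingerprints for clarity; real deployments typically rely on more elaborate (for example perceptual) matching, and all names here are illustrative.

```python
import hashlib

# Hypothetical database of "fingerprints" of content that hosting
# providers have previously identified as illegal and removed.
removed_hashes = set()

def fingerprint(content: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence."""
    return hashlib.sha256(content).hexdigest()

def register_removal(content: bytes) -> None:
    """Record a removed item so future re-uploads can be matched."""
    removed_hashes.add(fingerprint(content))

def allow_upload(content: bytes) -> bool:
    """Every new upload is compared against the database (monitoring)."""
    return fingerprint(content) not in removed_hashes

# A single changed byte defeats exact matching, which is one reason
# platforms end up storing hundreds of variants of the same video.
original = b"...video bytes..."
register_removal(original)
assert not allow_upload(original)         # exact re-upload is caught
assert allow_upload(original + b"\x00")   # trivially edited copy slips through
```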

Filters are not equipped to make complex judgments on the legality of content posted online. They do not understand the context in which content is published and shared, and as a result, they often make mistakes. Such algorithmic tools do not take proper account of the legal use of the content, for example for educational, artistic, journalistic or research purposes, for expressing polemic, controversial and dissident views in the context of public debates or in the framework of awareness raising activities. They risk accidentally suppressing legal speech, with exacerbated impacts on already marginalised individual internet users.

Human rights defenders as collateral damage

The way the hash databases are compiled will likely reflect discriminatory societal biases. Certain types of content and speech are reported more often than others, and platforms’ decisions to characterise them as illegal and add them to the databases often mirror societal norms. As a result, content related to Islamist terrorist propaganda is more likely to be targeted than white supremacist content – even where the former actually documents human rights violations or serves to raise awareness against terrorist recruitment. Hash databases of allegedly illegal content are not accountable, transparent or democratically audited and controlled, and will likely disadvantage certain users based on their ethnic background, gender, religion, language or location.

In addition, re-upload filters are easy to circumvent on mainstream platforms: Facebook declared that it holds over 800 distinct edits of the Christchurch shooting video in its hash database, because users constantly modified the original material in order to trick automatic identification. Lastly, hash databases and the related algorithms are developed by dominant platforms, which have the resources to invest in such sophisticated tools. Obliging all other actors on the market to adopt these databases risks reinforcing the dominant platforms’ position.

A more human-rights-compatible approach would follow the Parliament’s proposal, under which platforms are required to implement measures – excluding monitoring and automated tools – only after receiving a substantial number of removal orders, and only measures that do not hamper their users’ freedom of expression and right to receive and impart information. The European Parliament’s negotiating team should defend the improvements achieved after arduous negotiations between the Parliament’s different political groups and committees. Serious problems, such as terrorism, require serious legislation, not technological solutionism.

Terrorist content online Regulation: Document pool
https://edri.org/terrorist-content-regulation-document-pool/

Open letter on the Terrorism Database (05.02.2019)
https://edri.org/open-letter-on-the-terrorism-database/

Terrorist Content Regulation: Successful “damage control” by LIBE Committee (08.04.2019)
https://edri.org/terrorist-content-libe-vote/

Vice, Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too (25.04.2019)
https://www.vice.com/en_us/article/a3xgq5/why-wont-twitter-treat-white-supremacy-like-isis-because-it-would-mean-banning-some-republican-politicians-too

(Contribution by Chloé Berthélémy, EDRi)

10 Oct 2019

Open letter to EU Member States: Deliver ePrivacy now!

By EDRi

On 11 October 2019, EDRi, together with four other civil society organisations, sent an open letter to EU Member States urging them to conclude the negotiations on the ePrivacy Regulation. The letter highlights the urgent need for a strong ePrivacy Regulation to tackle the problems created by commercial surveillance business models, and expresses deep concern that the Member States, represented in the Council of the European Union, have still not made decisive progress more than two and a half years after the Commission presented the proposal.

You can read the letter here (pdf) and below:

Open letter to EU Member States
11.10.2019

Dear Minister,

We, the undersigned organisations, urge you to swiftly reach an agreement in the Council of the European Union on the draft ePrivacy Regulation.

We are deeply concerned by the fact that, more than two and a half years since the Commission presented the proposal, the Council still has not made decisive progress. Meanwhile, one after another, privacy scandals are hitting the front pages, from issues around the exploitation of data in the political context, such as “Cambridge Analytica”, to the sharing of sensitive health data. In 2019, for example, an EDRi/CookieBot report demonstrated how EU governments unknowingly allow the ad tech industry to monitor citizens across public sector websites.1 An investigation by Privacy International revealed how popular websites about depression in France, Germany and the UK share user data with advertisers, data brokers and large tech companies, while some depression test websites leak answers and test results to third parties.2

A strong ePrivacy Regulation is necessary to tackle the problems created by the commercial surveillance business models. Those business models, which are built on tracking and cashing in on people’s most intimate moments, have taken over the internet and create incentives to promote disinformation, manipulation and illegal content.

What Europe gains with a strong ePrivacy Regulation

The reform of the current ePrivacy Directive is essential to strengthen – not weaken – individuals’ fundamental rights to privacy and confidentiality of communications.3 It is necessary to make current rules fit for the digital age.4 In addition, a strong and clear ePrivacy Regulation would push Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values. All this is key for the EU to regain its digital sovereignty, one of the goals set out by Commission President-elect Ursula von der Leyen in her political guidelines.5

Far from being an obstacle to the development of new technologies and services, the ePrivacy Regulation is necessary to ensure a level playing field and legal certainty for market operators.6 It is an opportunity for businesses7 to innovate and invest in new, privacy-friendly, business models.

What Europe loses without a strong ePrivacy Regulation

Without the ePrivacy Regulation, Europe will continue living with an outdated Directive which is not being properly enforced8 and the completion of our legal framework initiated with the General Data Protection Regulation (GDPR) will not be achieved. Without a strong Regulation, surveillance-driven business models will be able to cement their dominant positions9 and continue posing serious risks to our democratic processes.10 11 The EU also risks losing the position as global standard-setter and digital champion that it earned though the adoption of the GDPR.

As a result, people’s trust in internet services will continue to fall. According to the Special Eurobarometer Survey of June 2019, the majority of users believe that they have only partial control over the information they provide online, with 62% of them concerned about it.

The ePrivacy Regulation is urgently needed

We expect the EU to protect people’s fundamental rights and interests against practices that undermine the security and confidentiality of their online communications and intrude in their private lives.

As you meet today to discuss the next steps of the reform, we urge you to finally reach an agreement to conclude the negotiations and deliver an upgraded and improved ePrivacy Regulation for individuals and businesses. We stand ready to support your work.

Yours sincerely,

AccessNow
The European Consumer Organisation (BEUC)
European Digital Rights (EDRi)
Privacy International
Open Society European Policy Institute (OSEPI)

1 https://www.cookiebot.com/media/1121/cookiebot-report-2019-medium-size.pdf
2 https://privacyinternational.org/long-read/3194/privacy-international-study-shows-your-mental-health-sale
3 https://edpb.europa.eu/our-work-tools/our-documents/outros/statement-32019-eprivacy-regulation_en
4 https://www.beuc.eu/publications/beuc-x-2017-090_eprivacy-factsheet.pdf
5 https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf
6 https://edpb.europa.eu/our-work-tools/our-documents/outros/statement-32019-eprivacy-regulation_en
7 https://www.beuc.eu/publications/beuc-x-2018-108-eprivacy-reform-joint-letter-consumer-organisations-ngos-internet_companies.pdf
8 https://edri.org/cjeu-cookies-consent-or-be-tracked-not-an-option/
9 http://fortune.com/2017/04/26/google-facebook-digital-ads/
10 https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook
11 https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy

Read more:

Open letter to EU Member States on ePrivacy (11.10.2019)
https://edri.org/files/eprivacy/ePrivacy_NGO_letter_20191011.pdf

Right a wrong: ePrivacy now! (09.10.2019)
https://edri.org/right-a-wrong-eprivacy-now/

Civil society calls Council to adopt ePrivacy now (05.12.2018)
https://edri.org/civil-society-calls-council-to-adopt-eprivacy-now/

ePrivacy reform: Open letter to EU member states (27.03.2018)
https://edri.org/eprivacy-reform-open-letter-to-eu-member-states/

09 Oct 2019

Right a wrong: ePrivacy now!

By Ella Jakubowska

When the European Commission proposed to replace the outdated and improperly enforced 2002 ePrivacy Directive with a new ePrivacy Regulation in January 2017, it marked a cautiously hopeful moment for digital rights advocates across Europe. With the backdrop of the General Data Protection Regulation (GDPR), which became applicable in May 2018, Europe took a giant leap ahead for the protection of personal data. Yet by failing to adopt the only piece of legislation protecting the right to privacy and to the confidentiality of communications, the Council of the European Union seems to have prioritised private interests over the fundamental rights, security and freedoms of citizens that a strong ePrivacy Regulation would protect.

This is not an abstract problem; commercial surveillance models – where businesses exploit user data as a core part of their business activity – pose a serious threat to our freedom to express ourselves without fear. This model relies on profiling: essentially putting people into the boxes in which the platforms believe they belong, which is a very slippery slope towards discrimination. And as children make up an increasingly large proportion of internet users, the risks become even starker: their online actions could affect their access to opportunities in the future. Furthermore, these models are set up to profit from the mass sharing of content, and so platforms are perversely incentivised to promote sensationalist posts that can harm democracy (for example, political disinformation).

The rise of highly personalised adverts (“microtargeting”) means that online platforms increasingly control and limit the parameters of the world that you see online, based on their biased and potentially discriminatory assumptions about who you are. And as for that online quiz about depression that you took? Well, it might not be as private as you thought.

It is high time that the Council of the European Union took note of the risks to citizens caused by the current black hole where ePrivacy legislation should be. Amongst the doom and gloom, there are reasons to be optimistic. If delivered in its strongest form, an improved ePrivacy Regulation will complement the GDPR; ensure compliance with essential principles such as privacy by design and by default; tackle the pervasive model of online tracking and the disinformation it creates; and give citizens back power over their private lives. We urge the Council to swiftly update and adopt a strong, citizen-centred ePrivacy Regulation.

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

ePrivacy: Private data retention through the back door (22.05.2019)
https://edri.org/eprivacy-private-data-retention-through-the-back-door/

Captured states – e-Privacy Regulation victim of a “lobby onslaught” (23.05.2019)
https://edri.org/coe-eprivacy-regulation-victim-of-lobby-onslaught/

NGOs urge Austrian Council Presidency to finalise e-Privacy reform (07.11.2018)
https://edri.org/ngos-open-letter-austrian-council-presidency-eprivacy/

e-Privacy: What happened and what happens next (29.11.2017)
https://edri.org/e-privacy-what-happened-and-what-happens-next/

(Contribution by Ella Jakubowska, EDRi intern)
