17 Oct 2019

Trilogues on terrorist content: Upload or re-upload filters? Eachy peachy.

By Chloé Berthélémy

On 17 October 2019, the European Parliament, the Council of the European Union (EU) and the European Commission started closed-door negotiations, trilogues, with a view to reaching an early agreement on the Regulation on preventing the dissemination of terrorist content online.

The European Parliament improved the text proposed by the European Commission by addressing its dangerous pitfalls and by reinforcing rights-based and rights-protective measures. The position of the Council of the European Union, however, supported the “proactive measures” the Commission suggested, meaning potential “general monitoring obligations” and, in practice, automated detection tools and upload filters to identify and delete “terrorist content”.

Finding middle ground

In trilogue negotiations, the parties – the European Parliament, the Commission and the Council – attempt to reach a consensus starting from what can be very divergent texts. In the Commission’s and the Council’s versions of the proposed Regulation, national competent authorities have the option to force the use of technical measures upon service providers. The Parliament, on the contrary, deleted all references to forced pro-activity and thus brought the Regulation in line with Article 15 of the E-Commerce Directive, which prohibits obliging platforms to generally monitor the user-generated content they host.

Ahead of the negotiations, the European Commission was exploring the possibility of suggesting “re-upload filters” instead of upload filters as a way towards building a compromise. Also known as “stay-down filters”, these distinguish themselves from regular upload filters by only searching for, identifying and taking down content that has already been taken down once. The aim is to ensure that content first deemed illegal stays down and does not spread further online.

Upload or re-upload filters: What’s the difference?

“Re-upload filters” entail the use of automated means and the creation of hash databases that contain digital hash “fingerprints” of every piece of content that hosting providers have identified as illegal and removed. They also mean that all user-generated content published on the intermediaries’ services is monitored and compared with the material contained in those databases, and is filtered out in case of a match. As the pieces of content included in those databases are in most cases not subject to a court’s judgment, this practice could amount to an obligation of general monitoring, which is prohibited under Article 15 of the E-Commerce Directive.
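As a purely illustrative sketch of this mechanism – the function names, the exact-match hashing and the example data below are our own simplifications, not a description of any deployed system, which typically uses perceptual “fingerprints” rather than cryptographic hashes – a stay-down filter can be reduced to a few lines of Python:

```python
import hashlib

# Simplified, hypothetical "stay-down" filter: every new upload is hashed and
# compared against a database of fingerprints of previously removed content.
removed_content_hashes = set()

def fingerprint(data: bytes) -> str:
    """Return a digital 'fingerprint' of a piece of content (an exact hash here)."""
    return hashlib.sha256(data).hexdigest()

def register_removal(data: bytes) -> None:
    """Add removed content to the hash database so that it 'stays down'."""
    removed_content_hashes.add(fingerprint(data))

def is_blocked(upload: bytes) -> bool:
    """Compare an upload against the database. This check runs on ALL uploads,
    which is why the practice resembles general monitoring."""
    return fingerprint(upload) in removed_content_hashes

# A video removed once is blocked when re-uploaded unchanged ...
register_removal(b"bytes of a removed propaganda video")
print(is_blocked(b"bytes of a removed propaganda video"))               # True
# ... but any trivial edit defeats an exact-hash match (see the circumvention
# problem discussed further below).
print(is_blocked(b"bytes of a removed propaganda video, re-encoded"))   # False
```

Real deployments try to close this gap with fuzzier, perceptual matching, but the underlying trade-off remains: the broader the match, the more legal content risks being caught.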

Filters are not equipped to make complex judgments on the legality of content posted online. They do not understand the context in which content is published and shared, and as a result, they often make mistakes. Such algorithmic tools do not take proper account of the legal use of the content, for example for educational, artistic, journalistic or research purposes, for expressing polemic, controversial and dissident views in the context of public debates or in the framework of awareness raising activities. They risk accidentally suppressing legal speech, with exacerbated impacts on already marginalised individual internet users.

Human rights defenders as collateral damage

The way the hash databases will be built is likely to reflect discriminatory societal biases. Indeed, certain types of content and speech are reported more often than others, and platforms’ decisions to characterise them as illegal and add them to the databases often mirror societal norms. As a result, content related to Islamic terrorist propaganda will more likely be targeted than white supremacist content – even in cases in which the former actually documents human rights violations or serves an awareness-raising purpose against terrorist recruitment. Hash databases of allegedly illegal content are not accountable, transparent or subject to democratic audit and control, and they will likely disadvantage certain users based on their ethnic background, gender, religion, language, or location.

In addition, re-upload filters are easy to circumvent on mainstream platforms: Facebook declared that it has over 800 distinct edits of the Christchurch shooting video in its hash database because users constantly modified the original material in order to trick automatic identification. Lastly, hash databases and related algorithms are being developed by dominant platforms, which have the resources to invest in such sophisticated tools. Obliging all other actors on the market to adopt such databases risks reinforcing their dominant position.

A more human rights compatible approach would follow the Parliament’s proposal, in which platforms are required to implement specific measures – excluding monitoring and automated tools – only after receiving a substantial number of removal orders, and only insofar as those measures do not hamper their users’ freedom of expression and right to receive and impart information. The negotiating team of the European Parliament should defend the improvements achieved after arduous negotiations between the Parliament’s different political groups and committees. Serious problems, such as terrorism, require serious legislation, not technological solutionism.

Terrorist content online Regulation: Document pool
https://edri.org/terrorist-content-regulation-document-pool/

Open letter on the Terrorism Database (05.02.2019)
https://edri.org/open-letter-on-the-terrorism-database/

Terrorist Content Regulation: Successful “damage control” by LIBE Committee (08.04.2019)
https://edri.org/terrorist-content-libe-vote/

Vice, Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too (25.04.2019)
https://www.vice.com/en_us/article/a3xgq5/why-wont-twitter-treat-white-supremacy-like-isis-because-it-would-mean-banning-some-republican-politicians-too

(Contribution by Chloé Berthélémy, EDRi)

10 Oct 2019

Open letter to EU Member States: Deliver ePrivacy now!

By EDRi

On 11 October 2019, EDRi, together with four other civil society organisations, sent an open letter to EU Member States, urging them to conclude the negotiations on the ePrivacy Regulation. The letter highlights the urgent need for a strong ePrivacy Regulation to tackle the problems created by commercial surveillance business models, and expresses deep concern that the Member States, represented in the Council of the European Union, still have not made decisive progress, more than two and a half years after the Commission presented the proposal.

You can read the letter here (pdf) and below:

Open letter to EU Member States
11.10.2019

Dear Minister,

We, the undersigned organisations, urge you to swiftly reach an agreement in the Council of the European Union on the draft ePrivacy Regulation.

We are deeply concerned by the fact that, more than two and a half years since the Commission presented the proposal, the Council still has not made decisive progress. Meanwhile, one after another, privacy scandals are hitting the front pages, from issues around the exploitation of data in the political context, such as “Cambridge Analytica”, to the sharing of sensitive health data. In 2019, for example, an EDRi/CookieBot report demonstrated how EU governments unknowingly allow the ad tech industry to monitor citizens across public sector websites.1 An investigation by Privacy International revealed how popular websites about depression in France, Germany and the UK share user data with advertisers, data brokers and large tech companies, while some depression test websites leak answers and test results to third parties.2

A strong ePrivacy Regulation is necessary to tackle the problems created by the commercial surveillance business models. Those business models, which are built on tracking and cashing in on people’s most intimate moments, have taken over the internet and create incentives to promote disinformation, manipulation and illegal content.

What Europe gains with a strong ePrivacy Regulation

The reform of the current ePrivacy Directive is essential to strengthen – not weaken – individuals’ fundamental rights to privacy and confidentiality of communications.3 It is necessary to make current rules fit for the digital age.4 In addition, a strong and clear ePrivacy Regulation would advance Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values. All this is key for the EU to regain its digital sovereignty, one of the goals set out by Commission President-elect Ursula von der Leyen in her political guidelines.5

Far from being an obstacle to the development of new technologies and services, the ePrivacy Regulation is necessary to ensure a level playing field and legal certainty for market operators.6 It is an opportunity for businesses7 to innovate and invest in new, privacy-friendly, business models.

What Europe loses without a strong ePrivacy Regulation

Without the ePrivacy Regulation, Europe will continue living with an outdated Directive which is not being properly enforced,8 and the completion of our legal framework initiated with the General Data Protection Regulation (GDPR) will not be achieved. Without a strong Regulation, surveillance-driven business models will be able to cement their dominant positions9 and continue posing serious risks to our democratic processes.10 11 The EU also risks losing its position as global standard-setter and digital champion, earned through the adoption of the GDPR.

As a result, people’s trust in internet services will continue to fall. According to the Special Eurobarometer Survey of June 2019, the majority of users believe that they only have partial control over the information they provide online, and 62% of them are concerned about it.

The ePrivacy Regulation is urgently needed

We expect the EU to protect people’s fundamental rights and interests against practices that undermine the security and confidentiality of their online communications and intrude in their private lives.

As you meet today to discuss the next steps of the reform, we urge you to finally reach an agreement to conclude the negotiations and deliver an upgraded and improved ePrivacy Regulation for individuals and businesses. We stand ready to support your work.

Yours sincerely,

AccessNow
The European Consumer Organisation (BEUC)
European Digital Rights (EDRi)
Privacy International
Open Society European Policy Institute (OSEPI)

1 https://www.cookiebot.com/media/1121/cookiebot-report-2019-medium-size.pdf
2 https://privacyinternational.org/long-read/3194/privacy-international-study-shows-your-mental-health-sale
3 https://edpb.europa.eu/our-work-tools/our-documents/outros/statement-32019-eprivacy-regulation_en
4 https://www.beuc.eu/publications/beuc-x-2017-090_eprivacy-factsheet.pdf
5 https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf
6 https://edpb.europa.eu/our-work-tools/our-documents/outros/statement-32019-eprivacy-regulation_en
7 https://www.beuc.eu/publications/beuc-x-2018-108-eprivacy-reform-joint-letter-consumer-organisations-ngos-internet_companies.pdf
8 https://edri.org/cjeu-cookies-consent-or-be-tracked-not-an-option/
9 http://fortune.com/2017/04/26/google-facebook-digital-ads/
10 https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook
11 https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy

Read more:

Open letter to EU Member States on ePrivacy (11.10.2019)
https://edri.org/files/eprivacy/ePrivacy_NGO_letter_20191011.pdf

Right a wrong: ePrivacy now! (09.10.2019)
https://edri.org/right-a-wrong-eprivacy-now/

Civil society calls Council to adopt ePrivacy now (05.12.2018)
https://edri.org/civil-society-calls-council-to-adopt-eprivacy-now/

ePrivacy reform: Open letter to EU member states (27.03.2018)
https://edri.org/eprivacy-reform-open-letter-to-eu-member-states/

09 Oct 2019

Right a wrong: ePrivacy now!

By Ella Jakubowska

When the European Commission proposed to replace the outdated and improperly enforced 2002 ePrivacy Directive with a new ePrivacy Regulation in January 2017, it marked a cautiously hopeful moment for digital rights advocates across Europe. With the backdrop of the General Data Protection Regulation (GDPR), which entered into application in May 2018, Europe took a giant leap ahead for the protection of personal data. Yet by failing to adopt the only piece of legislation protecting the right to privacy and to the confidentiality of communications, the Council of the European Union seems to have prioritised private interests over the fundamental rights, security and freedoms of citizens that a strong ePrivacy Regulation would protect.

This is not an abstract problem; commercial surveillance models – where businesses exploit user data as a key part of their business activity – pose a serious threat to our freedom to express ourselves without fear. This model relies on profiling, essentially putting people into the boxes in which the platforms believe they belong – a very slippery slope towards discrimination. And as children make up an increasingly large proportion of internet users, the risks become even starker: their online actions could affect their access to opportunities in the future. Furthermore, these models are set up to profit from the mass sharing of content, so platforms are perversely incentivised to promote sensationalist posts that could harm democracy (for example, political disinformation).

The rise of highly personalised adverts (“microtargeting”) means that online platforms increasingly control and limit the parameters of the world that you see online, based on their biased and potentially discriminatory assumptions about who you are. And as for that online quiz about depression that you took? Well, that might not be as private as you thought.

It is high time that the Council of the European Union takes note of the risks to citizens caused by the current black hole where ePrivacy legislation should be. Amongst the doom and gloom, there are reasons to be optimistic. If delivered in its strongest form, an improved ePrivacy Regulation will complement the GDPR, ensure compliance with essential principles such as privacy by design and by default, tackle the pervasive model of online tracking and the disinformation it creates, and give citizens back power over their private lives. We urge the Council to swiftly update and adopt a strong, citizen-centred ePrivacy Regulation.

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

ePrivacy: Private data retention through the back door (22.05.2019)
https://edri.org/eprivacy-private-data-retention-through-the-back-door/

Captured states – e-Privacy Regulation victim of a “lobby onslaught” (23.05.2019)
https://edri.org/coe-eprivacy-regulation-victim-of-lobby-onslaught/

NGOs urge Austrian Council Presidency to finalise e-Privacy reform (07.11.2018)
https://edri.org/ngos-open-letter-austrian-council-presidency-eprivacy/

e-Privacy: What happened and what happens next (29.11.2017)
https://edri.org/e-privacy-what-happened-and-what-happens-next/

(Contribution by Ella Jakubowska, EDRi intern)

09 Oct 2019

Why weak encryption is everybody’s problem

By Ella Jakubowska

Representatives of the UK Home Department, US Attorney General, US Homeland Security and Australian Home Affairs have joined forces to issue an open letter to Mark Zuckerberg. In their letter of 4 October, they urge Facebook to halt plans for end-to-end (aka strong) encryption across Facebook’s messaging platforms, unless such plans include “a means for lawful access to the content of communications”. In other words, the signatories are requesting what security experts call a “backdoor” for law enforcement to circumvent legitimate encryption methods in order to access private communications.

The myth of weak encryption as safe

Whilst the US, UK and Australia are adamant that their position enhances the safety of citizens, there are many reasons to be skeptical of this. The open letter uses emotive language to emphasise the risk of “child sexual exploitation, terrorism and extortion” that the signatories claim is associated with strong encryption, but fails to give a balanced assessment which includes the risks to privacy, democracy and most business transactions of weak encryption. By positioning weak encryption as a “safety” measure, the US, UK and Australia imply (or even explicitly state) that supporters of strong encryption are supporting crime.

Government-led attacks on everybody’s digital safety aren’t new. Since the 1990s, the US has tried to prevent the export of strong encryption and—when that failed—worked on forcing software companies to build backdoors for the government. Those attempts were called the first “Cryptowars”.

In reality, however, arguing that encryption mostly helps criminals is like saying that vehicles should be banned and all knives blunt because both have been used by criminals and terrorists. Such reasoning ignores that in the huge majority of cases strong encryption greatly enhances people’s safety. From enabling secure online banking, to keeping citizens’ messages private, internet users and companies rely on strong encryption every single day. It is the foundation of trusted, secure digital infrastructure. Weak encryption, on the other hand, is like locking the front door of your home, only to leave the back one open. Police may be able to enter more easily – but so too can criminals.

Strong encryption is vital for protecting civil rights

The position outlined by the US, UK and Australia is fundamentally misleading. Undermining encryption harms innocent citizens. Encryption already protects some of the most vulnerable people worldwide – journalists, environmental activists, human rights defenders, and many more. State interception of private communications is frequently not benign: government hacking can and does lead to egregious violations of fundamental rights.

For many digital rights groups, this debate is the ultimate groundhog day, and valuable effort is expended year after year on challenging the false dichotomy of “privacy versus security”. Even the European Commission has struggled to sort fact from fear-mongering.

However, it is worth remembering that Facebook’s announcement to encrypt some user content is so far just that: an announcement. The advertisement company’s approach to privacy is a supreme example of surveillance capitalism: protecting some users when it is favourable for their PR, and exploiting user data when there is a financial incentive to do so. To best protect citizens’ rights, we need a concerted effort between policy-makers and civil society to enact laws and build better technology so that neither our governments nor social media platforms can exploit us and our personal data.

The bottom line

Facebook must refuse to build anything that could constitute a backdoor into their messaging platforms. Otherwise, Facebook is handing the US, UK and Australian governments a surveillance-shaped skeleton key that puts Facebook users at risk worldwide. And once that door is unlocked, there will be no way to control who will enter.

EDRi Position paper on encryption: High-grade encryption is essential for our economy and our democratic freedoms (25.01.2016)
https://www.edri.org/files/20160125-edri-crypto-position-paper.pdf

Encryption – debunking the myths (03.05.2017)
https://www.edri.org/files/20160125-edri-crypto-position-paper.pdf

Encryption Workarounds: a digital rights perspective (12.09.2017)
https://edri.org/files/encryption/workarounds_edriposition_20170912.pdf

(Contribution by Ella Jakubowska, EDRi intern)

09 Oct 2019

Content regulation – what’s the (online) harm?

By Access Now and EDRi

In recent years, the national legislators in EU Member States have been pushing for new laws to combat negative societal phenomena such as hateful or terrorist content online. These regulatory efforts have one common denominator: they shift the focus from conditional intermediary liability to holding intermediaries directly responsible for the dissemination of illegal content on their platforms.

Two prominent legislative and policy proposals of this kind that will significantly shape the European debate around the future of intermediary liability are the UK White Paper on Online Harms and the newly adopted Avia law in France.

UK experiment to fight online harm: overblocking on the horizon

In April 2019, the United Kingdom (UK) government proposed a new regulatory model including a so-called statutory duty of care, saying it wants to make platform companies more responsible for the safety of online users. The White Paper foresees a future regulation that holds companies accountable for a set of vaguely defined “online harms”, which includes illegal content, but also user behaviour that is deemed harmful yet not necessarily illegal.

EDRi and Access Now have long emphasised the risk that privatised law enforcement and heavy reliance on automated content filters pose to human rights online. In this vein, multiple civil society organisations, including EDRi members (for example Article 19 and Index on Censorship), have warned against the alarming measures the British approach contains. The envisaged duty of care, combined with heavy fines, creates incentives for platform companies to block online content in order to avoid liability, even if its illegality is doubtful. The regulatory approach proposed by the UK Online Harms White Paper will in practice coerce companies into adopting content filtering measures that will ultimately result in the general monitoring of all information shared on online platforms. Driven by over-compliance with states’ demands, such conduct often amounts to illegitimate restrictions on freedom of expression – in other words, online censorship. Moreover, a general monitoring obligation is currently prohibited by European law.

The White Paper also covers activities and content that are not illegal but considered undesirable, such as advocacy of self-harm or disinformation. This is highly problematic with regard to the human rights law criteria that guide restrictions on freedom of expression. The ill-defined and vague concept of “online harms” cannot serve as a proper legal basis to justify an interference with fundamental rights. Ultimately, the proposal falls short of providing substantial evidence to sustain its approach. It also bluntly fails to address key issues of online regulation, such as the content distribution practices that lie at the core of companies’ business models, the opacity of algorithms, violations of online privacy, and data breaches.

French Avia law: Another “quick fix” to online hate speech?

Inspired by the German Network Enforcement Act (NetzDG), France has now adopted its own piece of legislation, the so-called Avia law – named after the Rapporteur of the file, Member of Parliament Laetitia Avia. Similarly to the NetzDG, the law requires companies to remove manifestly illegal content within 24 hours of receiving a notification about it.

Following its German predecessor, the Avia law encourages companies to be overly cautious and pre-emptively remove or block content to avoid substantial fines for non-compliance. The time frame in which they are expected to take action is too short to allow for a proper assessment of each case at stake. Importantly, the French Parliament does not rule out the possibility for companies to resort to automated decision-making tools in order to process the notices. Such a measure can, in itself, be grounded in the legitimate objective of fighting hateful, racist, LGBTQI+-phobic and other discriminatory content. However, tackling hate speech and other context-dependent content requires careful and balanced analysis. In practice, leaving it to private actors, without adequate oversight and redress mechanisms, to decide whether a piece of content meets the threshold of “manifest illegality” will be damaging for freedom of expression and the rule of law.

However, there are also positive aspects of the Avia law. It provides procedural fairness safeguards by requiring individuals who notify potentially illegal content to state the reasons why they believe it should be removed. Moreover, the law sets out obligations for companies to establish internal complaint and appeal mechanisms for both the notifier and the content provider. Transparency obligations on content moderation policies are also introduced. Lastly, when monitoring compliance with the law, the regulator established by the Avia law does not focus its evaluation solely on the amount of content removed, but also scrutinises over-removal.

Do not fall into the same trap!

We are currently witnessing regulatory efforts at the national and European level that seek to provide easy solutions to online phenomena such as terrorist content or hate speech, while ignoring the underlying societal issues. Most of the suggested solutions rely on filters and content recognition technologies with limited ability to assess the context in which a given piece of content has been posted. Proper safeguards and requirements for meaningful transparency that should accompany these measures are often sidetracked by legislators. Similar trends can be observed beyond the EU and its Member States. For instance, the Australian government recently adopted a new bill imposing criminal liability on executives of social media platforms, and Section 230 of the American Communications Decency Act (CDA) may be placed under review through a presidential executive order that would significantly limit the liability protections the existing law grants to platform companies.

Legislators around the globe have one thing in common: the urge to “eradicate” vaguely defined “online harms”. The rhetoric of danger surrounding online harms has become a driving force behind regulatory responses in liberal democracies. This is exactly the kind of logic frequently used by authoritarian regimes to restrict legitimate debate. With the upcoming Digital Services Act (DSA) potentially replacing the E-Commerce Directive in Europe, the EU has an extraordinary opportunity to become a trend-setter, establishing high standards for the protection of users’ human rights, while addressing legitimate concerns stemming from the spread of illegal online content.

For this to happen, the European Commission should propose a law that imposes workable, transparent and accountable content moderation procedures and a functioning notice-and-action system on platforms. Such positive examples of tackling platform regulation should be combined with forceful action against the centralisation of power over data and information in the hands of a few big tech companies. EDRi and Access Now have developed specific recommendations containing human rights safeguards that should be included both in content moderation exercised by companies and in State regulation tackling illegal online content. The European Commission’s responsibility is to ensure fundamental rights are protected when drafting any future legislation governing intermediary liability and redefining content governance online.

Access Now
https://www.accessnow.org/

Access Now’s human rights guide on protecting freedom of expression in the era of online content moderation (13.05.2019)
https://www.accessnow.org/cms/assets/uploads/2019/05/AccessNow-Preliminary-Recommendations-On-Content-Moderation-and-Facebooks-Planned-Oversight-Board.pdf

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

French law aimed at combating hate content on the internet (09.07.2019)
http://www.assemblee-nationale.fr/15/pdf/ta/ta0310.pdf

UK: Online Harms Strategy must “design in” fundamental rights (10.04.2019)
https://edri.org/uk-online-harms-strategy-must-design-in-fundamental-rights/

UK’s Online Harms White Paper (04.2019)
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf

(Contribution by Eliška Pírková, EDRi member Access Now, and Chloé Berthélémy, EDRi)

03 Oct 2019

CJEU ruling on fighting defamation online could open the door for upload filters

By EDRi

Today, on 3 October 2019, the Court of Justice of the European Union (CJEU) gave its ruling in the case C‑18/18 Glawischnig-Piesczek v Facebook. The case is related to injunctions obliging a service provider to stop the dissemination of a defamatory comment. Some aspects of the decision could pose a threat for freedom of expression, in particular that of political dissidents who may be accused of defamatory practices.

This ruling could open the door for exploitative upload filters for all online content.

said Diego Naranjo, Head of Policy at EDRi.

Despite the positive intention to protect an individual from defamatory content, this decision could lead to severe restrictions on freedom of expression for all internet users, with particular risks for political critics and human rights defenders, by paving the road for automated content recognition technologies.

The ruling confirms that a hosting provider such as Facebook can be ordered, in the context of an injunction, to seek and identify, among all the content shared by its users, content that is identical to the content characterised as illegal by a court. Since the obligation to block future content applies to all users of a large platform like Facebook, the Court has in effect considered it in line with the E-Commerce Directive for courts to demand automated upload filters, and has blurred the distinction between general and specific monitoring drawn in its previous case law. EDRi is concerned that automated upload filters for identical content will not be able to distinguish between legal and illegal content, in particular when applied to individual words that can have very different meanings depending on the context and the intent of the user.
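A hypothetical toy example (the sentence and the code below are ours, not drawn from the ruling) illustrates why matching “identical” content ignores context: the same wording appears both in the original defamatory post and in a lawful news report about the court case, yet a filter for identical content blocks both.

```python
# Hypothetical illustration of an "identical content" filter: it blocks any
# post containing a sentence a court has characterised as illegal, without
# regard to the context or intent in which the sentence is quoted.
ILLEGAL_SENTENCE = "politician x is corrupt"  # placeholder, not a real case

def is_filtered(post: str) -> bool:
    return ILLEGAL_SENTENCE in post.lower()

print(is_filtered("Politician X is corrupt."))  # True: the defamatory post itself
print(is_filtered('The court ruled that the statement '
                  '"Politician X is corrupt" was defamatory.'))  # True: a news report is blocked too
```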

EDRi welcomes the Court’s attempt to balance the rights at stake (namely freedom of expression and the freedom to conduct a business) and to limit the impact on freedom of expression by differentiating between searches for identical and for equivalent content. However, the ruling seems to depart from previous case law regarding the ban on general monitoring obligations (for example Scarlet v. Sabam). Imposing filtering of all communications in order to look for one specific piece of content, using non-transparent algorithms, is likely to unduly restrict legal speech – independently of whether the filters look for content that is identical or equivalent to illegal content.

The upcoming review of the E-Commerce Directive should clarify, among other things, how to deal with online content moderation. In the context of this review, it is crucial to address the problem of disinformation without unduly interfering with platform users’ fundamental right to freedom of expression. Specifically, the business model based on amplifying certain types of content to the detriment of others in order to attract users’ attention requires urgent scrutiny.

Read more:

No summer break for free expression in Europe: Facebook cases that matter for human rights (23.09.2019)
https://www.accessnow.org/no-summer-break-for-free-expression-in-europe-facebook-cases-that-matter-for-human-rights/

CJEU case C-18/18 – Glawischnig-Piesczek Press Release (03.10.2019)
https://curia.europa.eu/jcms/upload/docs/application/pdf/2019-10/cp190128en.pdf

CJEU case C-18/18 – Glawischnig-Piesczek ruling (03.10.2019)
http://curia.europa.eu/juris/document/document.jsf?text=&docid=218621&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=192400

Fighting defamation online – AG Opinion forgets that context matters (19.06.2019)
https://edri.org/fighting-defamation-online-ag-opinion-forgets-that-context-matters/

Dolphins in the Net, a New Stanford CIS White Paper
https://cyberlaw.stanford.edu/files/Dolphins-in-the-Net-AG-Analysis.pdf

SABAM vs Netlog – another important ruling for fundamental rights (16.02.2012)
https://edri.org/sabam_netlog_win/

01 Oct 2019

CJEU on cookies: ‘Consent or be tracked’ is not an option

By EDRi

Today, on 1 October 2019, the Court of Justice of the European Union (CJEU) gave its ruling on “cookie consent” requirements. European Digital Rights (EDRi) welcomes the CJEU’s confirmation that under the current data protection framework, cookies can only be set if users have given consent that is valid under the General Data Protection Regulation (GDPR). This means consent needs to be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of a user’s agreement.

‘Consent or be tracked’ is not an option. The CJEU ruling spells it out for the industry and calls for clear rules on confidentiality of our communications

said Diego Naranjo, Head of Policy at EDRi.

EU Member States need to finally move forward with legislating this practice, and take the much needed ePrivacy Regulation out of the EU Council’s closet.

This ruling is a positive step towards protecting people from hidden commercial surveillance techniques deployed by the advertisement industry. It is, however, crucial to also urgently finalise the new ePrivacy Regulation that complements the GDPR in strengthening the privacy and security of electronic communications.

Read more:

CJEU press release: Storing cookies requires internet users’ active consent – A pre-ticked checkbox is therefore insufficient (01.10.2019)
https://curia.europa.eu/jcms/upload/docs/application/pdf/2019-10/cp190125en.pdf

CJEU ruling C-673/17 (01.10.2019)
http://curia.europa.eu/juris/documents.jsf?num=C-673/17

Video: Cookies (05.09.2016)
https://edri.org/privacyvideos-cookies/

EU Council considers undermining ePrivacy (30.06.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Civil society calls Council to adopt ePrivacy now (05.12.2018)
https://edri.org/civil-society-calls-council-to-adopt-eprivacy-now/

Freedom to be different: How to defend yourself against tracking (27.09.2016)
https://edri.org/freedom-to-be-different/

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

26 Sep 2019

Mozilla Fellow Petra Molnar joins us to work on AI & discrimination

By Guest author

Starting on 1 October, Petra Molnar will join our team as a Mozilla Fellow. She is a lawyer specialising in migration, human rights, and technology, and holds a Master’s in Social Anthropology from York University, a Juris Doctor from the University of Toronto, and an LL.M in International Law from the University of Cambridge. Mozilla Fellowships are organised and supported by the Mozilla Foundation; each Fellow works on a specific project in collaboration with a host organisation, such as EDRi. With our upcoming work on artificial intelligence (AI) and our experience working on surveillance and data protection, we look forward to working with Petra to add our voice to the ongoing discussions on the impact of algorithms on vulnerable populations, such as migrants and refugees.

Artificial intelligence and migration management from a human rights perspective

The systematic detention of migrants at the US-Mexico border. The wrongful deportation of 7 000 foreign students accused of cheating on a language test. Racist or sexist discrimination based on social media profiles. What do these examples have in common? In every case, an algorithm made a decision with serious consequences for people’s lives.

Nearly 70 million people are currently on the move due to conflict, instability, environmental factors, and economic reasons. Many states and international organisations involved in migration management are exploring machine learning to increase efficiency and support border security. These experiments range from big data predictions about population movements in the Mediterranean, to Canada’s use of automated decision-making in immigration applications, to AI lie detectors deployed at European airports. However, most of these experiments fail to account for the far-reaching impacts on human lives and human rights. These unregulated technologies are developed with little oversight, transparency, and accountability.

Expanding on my work on the human rights impacts of automated decision-making in immigration, this ethnographic project and its accompanying advocacy campaign aim to create a governance mechanism for AI in migration with human rights at the centre. While embedded at EDRi, I will interview affected populations, experts, technologists, and policy makers to produce a well-researched report on the human rights impacts of migration management technologies, collaborating with academics, tech developers, the UN, governments, and civil society. This project will build on the work already done in the EU and provide feedback to EDRi’s ongoing work on AI. I will engage with NGOs to help build EDRi’s network and broaden the scope of action to non-digital groups beyond the EU, translating these efforts into a global strategy for the governance of migration management technologies.

I am delighted to be working with EDRi on this important project as the 2019-2020 Mozilla Fellow!

Mozilla Fellowships
https://foundation.mozilla.org/en/fellowships/

Big Data and International Migration (16.06.2014)
https://www.unglobalpulse.org/big-data-migration

Bots at the Gate – A Human Rights Analysis of Automated Decision Making in Canada’s Immigration and Refugee System (26.09.2018)
https://citizenlab.ca/2018/09/bots-at-the-gate-human-rights-analysis-automated-decision-making-in-canadas-immigration-refugee-system/

Emerging Voices: Immigration, Iris-Scanning and iBorderCTRL–The Human Rights Impacts of Technological Experiments in Migration (19.08.2019)
https://opiniojuris.org/2019/08/19/emerging-voices-immigration-iris-scanning-and-iborderctrl-the-human-rights-impacts-of-technological-experiments-in-migration/

(Contribution by Petra Molnar, Mozilla Fellow, EDRi)

25 Sep 2019

PNR complaint advances to the Austrian Federal Administrative Court

By Epicenter.works

On 19 August 2019, Austrian EDRi member epicenter.works lodged a complaint with the Austrian data protection authority (DPA) against the Passenger Name Records (PNR) Directive. After only three weeks, on 6 September, they received the response from the DPA: The complaint was rejected. That sounds negative at first, but is actually good news. The complaint can and must now be lodged with the Federal Administrative Court.

Why was the complaint rejected?

The DPA has no authority to decide whether or not laws are constitutional. Moreover, it cannot refer the matter to the Court of Justice of the European Union (CJEU), which in this case is necessary, because the complaint concerns an EU Directive. It was to be expected that the DPA would decide in this way, but the speed of the decision was somewhat surprising – in a positive way. It was clear from the outset that the data protection authority would reject the complaint, but it was a necessary step that could not be skipped, as there is no other legal route to the Federal Administrative Court than via the DPA. All seven proceedings of the complainants lodged with the aid of epicenter.works were merged, and the organisation was given the power of representation. This means that epicenter.works is allowed to represent the complainants.

What are the next steps?

Meanwhile, epicenter.works is still waiting for a response to a freedom of information (FOI) request sent to the Passenger Information Unit (PIU) that processes the PNR data in Austria. While an answer to one request was received within a few days, another has been overdue since 23 August. The unanswered request concerns the data protection framework conditions for the PNR implementation.

epicenter.works will file the complaint with the Federal Administrative Court within four weeks. It is to be expected that the court will submit legal questions to the Court of Justice of the European Union (CJEU).

Epicenter.works
https://en.epicenter.works/

Passenger Name Records
https://en.epicenter.works/thema/pnr-0

Passenger surveillance brought before courts in Germany and Austria (22.05.2019)
https://edri.org/passenger-surveillance-brought-before-courts-in-germany-and-austria/

PNR: EU Court rules that draft EU/Canada air passenger data deal is unacceptable (26.07.2017)
https://edri.org/pnr-eu-court-rules-draft-eu-canada-air-passenger-data-deal-is-unacceptable/

(Contribution by Iwona Laub, EDRi member Epicenter.works, Austria)

25 Sep 2019

Why EU passenger surveillance fails its purpose

By Epicenter.works

The EU Directive imposing the collection of air passengers’ information (Passenger Name Records, PNR) was adopted in April 2016, the same day as the General Data Protection Regulation (GDPR). The collection of PNR data on all flights entering and leaving the EU has a strong impact on individuals’ right to privacy, and it can only be justified if it is necessary and proportionate and meets objectives of general interest. All of this is lacking in the current EU PNR Directive, which is currently being implemented across the EU.

The Austrian implementation of the PNR Directive

In Austria, the Passenger Information Unit (PIU) has processed PNR data since March 2019. On 9 July 2019, the Passenger Data central office (Fluggastdatenzentralstelle) issued a response to inquiries into the PNR implementation in Austria. According to the document, between February 2019 and 14 May 2019, 7 633 867 records were transmitted to the PIU. On average, about 490 hits per day are reported, with about 3 430 hits per week requiring further verification. Out of the 7 633 867 records, there were 51 confirmed matches, and in 30 cases staff intervened at the airport concerned.
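Taking the figures in that response at face value, a rough calculation shows how small the share of confirmed matches is relative to both the automated hits and the records processed. The 14-week extrapolation below is our own assumption for illustration only; the official response does not state a total number of hits.

```python
# Figures reported by the Austrian authorities (July 2019 response).
records_transmitted = 7_633_867  # PNR records, February to 14 May 2019
hits_per_week = 3_430            # automated hits requiring further verification
confirmed_matches = 51           # matches confirmed after verification

# Assumption for illustration only: roughly 14 weeks of operation in that period.
estimated_flagged = hits_per_week * 14   # ~48,000 passengers flagged

print(f"confirmed matches per flagged passenger: {confirmed_matches / estimated_flagged:.2%}")    # ~0.11%
print(f"confirmed matches per transmitted record: {confirmed_matches / records_transmitted:.5%}")  # ~0.00067%
```

Even on the authorities’ own figures, then, the overwhelming majority of people flagged for verification are not confirmed matches – a cost borne by innocent passengers, as discussed below.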

Impact on innocents

What this small show of success does not capture, however, is the damage inflicted on the thousands of innocent passengers who are wrongly flagged by the system and who can be subjected to damaging police investigations or denied entry into destination countries without proper cause. Mass surveillance that searches for a small, select group of people is invasive, inefficient, and counter to fundamental rights. It subjects the majority of people to extreme security measures that are not only ineffective at catching terrorists and criminals, but that undermine privacy rights and can cause immense personal damage.

Why is this happening? The base rate fallacy

Imagine that a city with a population of 1 000 000 people implements surveillance measures to catch terrorists. This particular surveillance system has a failure rate of 1%, meaning that (1) when a terrorist is screened, the system will register a hit 99% of the time and fail to do so 1% of the time, and (2) when a non-terrorist is screened, the system will correctly not flag them 99% of the time, but will wrongly register them as a hit 1% of the time. What is the probability that a person flagged by this system is actually a terrorist?

At first, it might look like there is a 99% chance of that person being a terrorist. Given the system’s failure rate of 1%, this prediction seems to make sense. However, this is an example of incorrect intuitive reasoning, because it ignores how rare actual terrorists are in the population being screened.

This is the base rate fallacy: the tendency to ignore base rates – actual probabilities – in the presence of specific, individuating information. Rather than integrating general information and statistics with information about an individual case, the mind tends to ignore the former and focus on the latter. One type of base rate fallacy is the false positive paradox suggested above, in which false positive tests are more probable than true positive tests. This occurs when the overall population has a low incidence of a given condition and the true incidence rate of the condition is lower than the false positive rate. Deconstructing the false positive paradox shows that the true chance of a flagged person being a terrorist is closer to 1% than to 99%.

In our example, out of one million inhabitants, there would be 999 900 law-abiding citizens and 100 terrorists. The number of true positives registered by the city’s surveillance numbers 99, with the number of false positives at 9 999 – a number that would overwhelm even the best system. In all, 10 098 people total – 9 999 non-terrorists and 99 actual terrorists – will trigger the system. This means that, due to the high number of false positives, the probability that the system registers a terrorist is not 99% but rather is below 1%. Searching in large data sets for few suspects means that only a small number of hits will ever be genuine. This is a persistent mathematical problem that cannot be avoided, even with improved accuracy.
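Written out as a short calculation – a sketch of the standard Bayes’-theorem reasoning, using only the numbers given in the example above:

```python
population = 1_000_000
terrorists = 100
innocents = population - terrorists        # 999,900

detection_rate = 0.99       # a terrorist triggers a hit 99% of the time
false_positive_rate = 0.01  # an innocent person triggers a hit 1% of the time

true_positives = terrorists * detection_rate        # 99
false_positives = innocents * false_positive_rate   # 9,999
total_flagged = true_positives + false_positives    # 10,098

# Probability that a flagged person really is a terrorist (Bayes' theorem):
print(f"{true_positives / total_flagged:.2%}")  # ~0.98%, not 99%
```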

Security and privacy are not incompatible – rather there is a necessary balance that must be determined by a society. The PNR system, by relying on faulty mathematical assumptions, ensures that neither security nor privacy are protected.

Epicenter.works
https://en.epicenter.works/

PNR – Passenger Name Record
https://en.epicenter.works/thema/pnr-0

Passenger surveillance brought before courts in Germany and Austria (22.05.2019)
https://edri.org/passenger-surveillance-brought-before-courts-in-germany-and-austria/

We’re going to overturn the PNR directive (14.05.2019)
https://en.epicenter.works/content/were-going-to-overturn-the-pnr-directive-0

NoPNR – We are taking legal action against the mass processing of passenger data!
https://nopnr.eu/en/home/

An Explainer on the Base Rate Fallacy and PNR (22.07.2019)
https://en.epicenter.works/content/an-explainer-on-the-base-rate-fallacy-and-pnr

(Contribution by Kaitlin McDermott, EDRi-member Epicenter.works, Austria)
