01 Apr 2020

Press Release: EDRi calls for fundamental rights-based responses to COVID-19

By EDRi

In a statement released on 20 March 2020, European Digital Rights (EDRi) calls on the Member States and institutions of the European Union (EU) to ensure that, while developing public health measures to tackle COVID-19, they:

  • Strictly uphold fundamental rights;
  • Protect data for now and the future;
  • Limit the purpose of data to the COVID-19 crisis only;
  • Implement exceptional measures for the duration of the crisis only;
  • Condemn racism and discrimination;
  • Defend freedom of expression and information.

EDRi’s Head of Policy, Diego Naranjo, explains that:

EDRi supports necessary, proportionate measures, fully in line with national and international human rights and data protection and privacy legislation, taken in order to tackle the COVID-19 global pandemic. These measures must not, however, set a precedent for rolling back the fundamental rights obligations enshrined in European law.

EDRi recognises that the Coronavirus (COVID-19) disease poses a global public health challenge of unprecedented proportions. The use of good-quality data can support the development of evidence-based responses. However, we are witnessing a surge of emergency-related policy initiatives, some of which risk the abuse of sensitive personal data in an attempt to safeguard public health. When acting to address such a crisis, measures must comply with international human rights law and cannot lead to disproportionate and unnecessary actions. It is also vital that measures are not extended once we are no longer in a state of emergency.

EDRi’s Executive Director, Claire Fernandez, emphasises that:

In times of crisis, our authorities and communities must show responsibility, resilience, solidarity, and offer support to healthcare systems in order to protect our lives. States’ emergency responses to the COVID-19 pandemic must be proportionate, however, and be re-evaluated at specified intervals. By doing this, states will prevent the normalisation of rights-limiting measures, scope creep, data retention or enhanced surveillance that will otherwise be harmful long after the impacts of the pandemic have been managed.

In these times of pandemic and emergency measures, EDRi expresses solidarity towards collective protection and support for our health systems. We will continue monitoring and denouncing abuses of human rights in times when people are particularly vulnerable.

Read the full statement: EDRi calls for fundamental rights-based responses to COVID-19: https://edri.org/covid19-edri-coronavirus-fundamentalrights/

EDRi Members and Observers’ Responses to COVID-19:

Joint civil society statement – “States use of digital surveillance technologies to fight pandemic must respect human rights.” https://edri.org/wp-content/uploads/2020/04/Joint-statement-COVID-19-and-surveillance-FINAL1.pdf

noyb – Active overview of projects using personal data to combat SARS-CoV-2. https://gdprhub.eu/index.php?title=Data_Protection_under_SARS-CoV-2

Access Now – “Protect digital rights, promote public health: toward a better coronavirus response.” https://www.accessnow.org/protect-digital-rights-promote-public-health-towards-a-better-coronavirus-response/

Article 19 – “Coronavirus: New ARTICLE 19 briefing on tackling misinformation.” https://www.article19.org/resources/coronavirus-new-article-19-briefing-on-tackling-misinformation/

Bits of Freedom – “Privacy is geen absoluut recht, maar wel een noodzaak” (“Privacy is not an absolute right, but it is a necessity”). https://www.bitsoffreedom.nl/2020/03/20/privacy-is-geen-absoluut-recht-maar-wel-een-noodzaak/

Defesa dos Direitos Digitais (D3) – “A pandemia COVID19 e os direitos digitais” (“The COVID-19 pandemic and digital rights”). https://direitosdigitais.pt/comunicacao/noticias/88-a-pandemia-covid19-e-os-direitos-digitais

Digitalcourage – “Coronavirus: Tipps fürs Onlineleben und Grundrechtsfragen” (“Coronavirus: tips for online life and fundamental rights questions”). https://digitalcourage.de/corona

Digitale Gesellschaft – “Menschenrechte gelten nicht nur in „guten“ Zeiten” (“Human rights do not apply only in ‘good’ times”). https://digitalegesellschaft.de/2020/03/menschenrechte-gelten-nicht-nur-in-guten-zeiten/

EFF – “EFF and COVID-19: Protecting Openness, Security, and Civil Liberties.” https://www.eff.org/deeplinks/2020/03/eff-and-covid-19-protecting-openness-security-and-civil-liberties

epicenter.works – “Digital rights implications of the COVID-19 crisis.” https://en.epicenter.works/content/digital-rights-implications-of-the-covid-19-crisis

GFF – “Corona und Grundrechte: Fragen und Antworten” (“Corona and fundamental rights: questions and answers”). https://freiheitsrechte.org/corona-und-grundrechte/

Hermes Center – “Il Centro Hermes chiede al governo una risposta all’emergenza COVID-19 nel pieno rispetto dei diritti umani” (“The Hermes Center asks the government for a COVID-19 emergency response that fully respects human rights”). https://www.hermescenter.org/hermes-governo-emergenza-covid19-rispetto-privacy-diritti-umani/

Homo Digitalis – “Homo Digitalis για την πανδημία του Κορωνοϊού” (“Homo Digitalis on the coronavirus pandemic”). https://www.homodigitalis.gr/posts/5340

noyb – “Data protection in times of corona: not a question of if, but of how.” https://noyb.eu/en/data-protection-times-corona

Open Rights Group – “In the Coronavirus crisis, privacy will be compromised—but our right to know must not be.” https://www.openrightsgroup.org/blog/2020/in-the-coronavirus-crisis-privacy-will-be-compromised-but-our-right-to-know-must-not-be

Panoptykon – “Wolność i prywatność w dobie koronawirusa” (“Freedom and privacy in the time of coronavirus”). https://panoptykon.org/wiadomosc/wolnosc-i-prywatnosc-w-dobie-koronawirusa

Privacy International – “Extraordinary powers need extraordinary protections.” https://privacyinternational.org/news-analysis/3461/extraordinary-powers-need-extraordinary-protections

SHARE Foundation – “Digitalna prava, pandemija i Balkan” (“Digital rights, the pandemic and the Balkans”). https://www.sharefoundation.info/sr/digitalna-prava-pandemija-i-balkan/

01 Apr 2020

Surveillance by default: PATRIOT Act extended?

By Rafael Hernández

On 15 March, Section 215 of the USA PATRIOT Act and several other similar legal provisions were due to expire, opening a process of reform and review meant to incorporate new legal protections for privacy. However, as a result of a coordinated effort by both chambers of the US Congress, the provisions may instead be extended for at least 77 days.

Section 215 was originally introduced in 2001 as part of the USA PATRIOT Act, a landmark piece of legislation passed soon after the September 11th attacks as an amendment to the Foreign Intelligence Surveillance Act of 1978 (FISA). The PATRIOT Act was designed to strengthen national security and law enforcement capabilities. It gave federal agencies such as the Federal Bureau of Investigation (FBI) new and expanded powers, including permission to search a home or business without the owner’s consent and the indefinite detention of immigrants.

Section 215 is a provision of the PATRIOT Act known as the “business records” provision. It allows the government and law enforcement agencies to order third parties to produce “tangible things” such as books, records, papers, documents and other items when the FBI is conducting either a foreign intelligence investigation or an investigation to protect against “international terrorism” or “clandestine intelligence activities” (even if the investigation targets US citizens). It has been at the centre of many controversies over government overreach and privacy violations. As EDRi member the Electronic Frontier Foundation (EFF) explained:

In the hearings last year, witnesses confirmed that the 215 ‘business records’ provision may allow the government to collect sensitive information, like medical records, location data, or even possibly footage from a Ring camera.

Section 215 was the centrepiece of Edward Snowden’s 2013 leaks to The Guardian, which revealed that the Bush and Obama administrations had been abusing the provision to obtain the phone data of US citizens in bulk. It was the most egregious violation of privacy by the US government in recent history, and it happened in secret. The Snowden leaks provoked a legislative reaction by Congress with the passage of the USA FREEDOM Act, which took several measures to curtail the authority of law enforcement agencies but extended Section 215 almost in its entirety to the end of 2019, and later to March 2020.

The threat has not gone away

Section 215, along with at least two other provisions (the roving wiretap and lone wolf surveillance authorities), was meant to be included in FISA reform legislation designed to introduce amendments that would increase protections of individual privacy against governmental intrusion. This was the hope of a host of activist groups and non-profit organisations that saw the expiration of these provisions as a chance to overhaul the information access system in the US. The reforms were timed to take advantage of the provisions’ expiration date of 15 March 2020.

However, last week the House of Representatives passed a bill that essentially extends Section 215 for three more years, through 2023. The House bill did include several minor changes that took some of the criticism into account, such as increasing prison penalties for engaging in secret surveillance. When the bill went to the Senate for final approval, however, instead of voting on the bill and debating its proposed changes, Majority Leader Mitch McConnell (Republican) and the Senate punted any decision on the proposal and unanimously passed a 77-day extension of Section 215 of the USA PATRIOT Act, still subject to opposition from recessed House members and to presidential approval. What would this extension mean? It would essentially delay any discussion of whether Section 215 will be allowed to expire and what kind of replacement parameters will be introduced.

What happens now?

It remains unclear what will happen to Section 215 now that the COVID-19 crisis has thrown the political landscape into disarray. But, as the bipartisan effort behind the USA FREEDOM Act demonstrated, the push to maintain this overbearing and invasive legislation endures. EDRi member EFF, which has long advocated for privacy and legislative reform, is actively pushing for change:

It is past time for reform. Congress has already extended these authorities without reform once, without debate and without consideration of any meaningful privacy and civil liberties safeguards. If Congress attempts to extend these authorities again without significant reform, we urge members to vote no and to allow the authorities to sunset entirely.

What matters now is that this landmark legislative provision is allowed to sunset, so that the process of reforming law enforcement agencies’ authority to access private data can begin anew. Whether this hope comes to fruition, however, remains to be seen.

Read more:

Reform or Expire (26.02.2020)
https://www.eff.org/deeplinks/2020/02/reform-or-expire

Enough is enough: Let it expire (18.03.2020)
https://www.eff.org/Enough-is-enough-let-215-expire

Congress extends Section 215 surveillance program (29.11.2019)
https://epic.org/2019/11/congress-extends-section-215-s.html

EPIC to Congress: End Section 215 Surveillance Program (10.12.2019)
https://epic.org/2019/12/epic-to-congress-end-section-2-1.html

Three FISA authorities sunset in December: Here’s what you need to know (16.01.2019)
https://www.lawfareblog.com/three-fisa-authorities-sunset-december-heres-what-you-need-know

What happened to FISA reform? (17.03.2020)
https://www.lawfareblog.com/what-happened-fisa-reform

(Contribution by Rafael Hernández, communications intern, EDRi)

01 Apr 2020

Competition law: what to do against Big Tech’s abuse?

By Laureline Lemoine

This is the second article in a series dealing with competition law and Big Tech. The aim of the series is to look at what competition law has achieved when it comes to protecting our digital rights, where it has failed to deliver on its promises, and how to remedy this.

Read the first article on the impact of competition law on your digital rights here.

Almost everybody uses products or online services from Big Tech companies. These companies make up a considerable part of our online life.

This concentration of power in some sectors of the digital market (think search, social media, operating systems) by a small number of companies is having devastating effects on our rights. These companies are able to grow exponentially by constantly watching us and harvesting our personal data, which they then sell to data brokers, governments and dodgy third parties. With billions of users, these companies acquire an unprecedented level of knowledge about people’s most intimate lives.

They were able to achieve this by nudging people into giving up their personal data and by artificially creating powerful network effects, linked to their dominant position, that keep users on a platform despite its intrusiveness. Access to large quantities of data and locked-in user communities gives dominant platforms a strong competitive advantage while creating barriers to entry for competitors.

While being in a dominant position is not illegal, abusing that position is. Most Big Tech companies have been fined for abuses or are currently under investigation; Google alone was fined over 8 billion euros in just three years.

And yet, in an interview given in December 2019, Competition Commissioner Margrethe Vestager admitted that the fines have been unable to restore competition between Big Tech and smaller rivals because the companies had “already won the market”.

So if fines do not work, what does? Have current antitrust laws reached their limits?

Traditional antitrust law assesses the abuse of a dominant position ex-post, when the harm has been done and through lengthy investigations. Several ideas for bringing antitrust law up to speed with the digital economy are being discussed and are worth considering.

Giving back the freedom to choose

Speed alone, however, is unlikely to solve the problem. Policy recommendations at EU and national levels highlight the need for new ex-ante measures “to ensure that markets characterised by large platforms with significant network effects acting as gate-keepers, remain fair and contestable for innovators, businesses, and new market entrants”.

The new Digital Services Act (DSA) announced by the European Commission provides an opportunity for the EU to put in place the most urgent ex-ante measures without having to go through a full reform of its long-standing antitrust rules. One key measure that EDRi and many others have been pointing to is making dominant social media and messaging platforms interoperable. Interoperability would require platforms to open their ‘walled gardens’ to other comparable services so that users of different platforms can connect and communicate with each other.

This would enable competitors to challenge the huge user bases that allow incumbent social media platforms’ dominance to persist, and allow a fairer redistribution of power among competitors as well as with users. Combined with the right to data portability under the General Data Protection Regulation (GDPR), consumers could regain control over their personal data, as they would no longer feel obliged to use a second-best service just because all their friends and family use it. Interoperability has already been used as a competition remedy in the past: in the Microsoft case, the European Commission required Microsoft to open up its operating system to enable third parties to offer Windows-compatible software programmes.

Moreover, mandatory interoperability would directly strengthen healthy competition among platforms and could even create whole new markets of online services built downstream or upstream, such as third-party client apps or content moderation plug-ins.
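
To make the idea concrete, here is a minimal, purely illustrative sketch of how a mandated common messaging interface could let users of different platforms reach each other. All names are hypothetical; real interoperability would build on an open protocol such as ActivityPub or XMPP rather than this toy model.

```python
# Toy sketch of cross-platform messaging. Hypothetical names throughout;
# not a real protocol or any platform's actual API.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str     # e.g. "alice@bigplatform.example"
    recipient: str  # e.g. "bob@smallrival.example"
    body: str


class Platform:
    """A platform that accepts messages addressed to its own users."""

    def __init__(self, domain: str):
        self.domain = domain
        self.inboxes: dict = {}

    def deliver(self, message: Message) -> None:
        # The shared entry point every interoperable service would implement.
        self.inboxes.setdefault(message.recipient, []).append(message)


class Federation:
    """Routes messages between otherwise separate 'walled gardens'."""

    def __init__(self):
        self.platforms: dict = {}

    def register(self, platform: Platform) -> None:
        self.platforms[platform.domain] = platform

    def send(self, message: Message) -> None:
        # Route on the domain part of the address, just as email does.
        domain = message.recipient.split("@", 1)[1]
        self.platforms[domain].deliver(message)


federation = Federation()
big = Platform("bigplatform.example")
small = Platform("smallrival.example")
federation.register(big)
federation.register(small)

# A user of the dominant platform messages a friend on a small competitor.
federation.send(Message("alice@bigplatform.example",
                        "bob@smallrival.example", "Hi Bob!"))
print(small.inboxes["bob@smallrival.example"][0].body)  # -> "Hi Bob!"
```

Email already works this way, which is why users of different email providers can write to each other without sharing a platform.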

The DSA presents a huge opportunity for the EU to decide what central aspects of the internet will look like in the coming decade. By including requirements for Big Tech such as interoperability, the DSA would inject new competition and drive into a broken market, limit the downsides of user lock-in and reduce negative network effects.

A special status for Big Tech?

Interoperability measures could also be implemented as part of a broader mechanism or scheme for dominant players.

In its contribution to the debate on competition policy and digital challenges, the French competition authority draws on suggestions from several reports and the current reform bill being discussed in Germany to propose a new mechanism for “structuring players”.

It suggests defining these players in three cumulative stages: 1. companies providing online intermediation services; 2. which hold structural market power; and 3. which play a role in access to and in the functioning of certain markets with regard to competitors, users or third parties.

This new status could also allow for new ex-post measures. Whenever one of these players implements a practice that raises competitive concerns, the competition authority would be able to intervene, penalise the company, or prohibit the practice in the future. Such triggering practices could include hindering access to markets, preferencing their own services, using data to hamper access to a market, or making interoperability or data portability more difficult.

Beyond competition law, because of the effect they have on our rights, these companies should be required to limit some of their harmful practices, such as data extraction or message amplification. To this end, other sets of obligations could be imposed on them, such as obligations of transparency, access, non-discrimination or device neutrality. Some of these obligations already exist in the P2B Regulation addressing relations between online platforms and businesses, and could be extended to allow public scrutiny. Others should be explicitly written into the planned Digital Services Act. Together with strong ex-ante measures, they would help the EU limit the most damaging behaviour of dominant platforms today.

Read more:

The European Commission – Shaping Europe’s digital future.
https://ec.europa.eu/info/sites/info/files/communication-shaping-europes-digital-future-feb2020_en_4.pdf

The Autorité de la concurrence’s contribution to the debate on competition policy and digital challenges.
https://www.autoritedelaconcurrence.fr/sites/default/files/2020-03/2020.03.02_contribution_adlc_enjeux_numeriques_vf_en_0.pdf

EU competition chief struggles to tame ‘dark side’ of big tech despite record fines.
https://news.sky.com/story/eu-competition-chief-struggles-to-tame-dark-side-of-big-tech-despite-record-fines-11893440

(Contribution by Laureline Lemoine, EDRi)

01 Apr 2020

Facial recognition: Homo Digitalis calls on Greek DPA to speak up

By Homo Digitalis

In the spring of 2019, the Hellenic Police signed a €4 million contract with Intracom Telecom, a global telecommunication systems and solutions vendor, for a smart policing project. Seventy-five per cent of the project is funded by the European Commission’s Internal Security Fund (ISF) 2014-2020. The Hellenic Police announced the signing of the contract in a press release in December 2019, while the vendor had publicly announced it earlier, in July 2019.

Based on the technical specifications of the contract, the vendor will develop and deliver to the Hellenic Police smart devices with integrated software enabling facial recognition and automated fingerprint identification, among other functionalities. The devices will be the size of a smartphone, and police officers will be able to use them during police stops and patrols to check and identify, on the spot, individuals who do not carry identification documents. Police officers will also be able to take a close-up photograph of an individual’s face and collect their fingerprints. The fingerprints and photographs collected will immediately be compared with data already stored in central databases, after which the officers will receive the identification results on their devices.
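
For illustration only, the sketch below shows the general shape of such an identification step: a probe (a face photo or fingerprint reduced to a feature vector) is compared against templates in a central database, and the closest match above a threshold is returned. This is a generic, assumed pipeline, not a description of the actual Intracom Telecom system, whose technical details are not public.

```python
# Generic biometric identification sketch: nearest match above a threshold.
# All values and names are hypothetical illustrations.
import math
from typing import Dict, List, Optional


def similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def identify(probe: List[float], database: Dict[str, List[float]],
             threshold: float = 0.9) -> Optional[str]:
    """Return the best-scoring identity, or None if no match clears the bar."""
    best_id: Optional[str] = None
    best_score = threshold
    for identity, template in database.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id


db = {"person-001": [0.1, 0.9, 0.3]}
print(identify([0.1, 0.9, 0.3], db))  # -> "person-001"
```

The threshold choice matters: set it low and innocent people are misidentified; set it high and the system fails silently, which is why accuracy debates alone do not settle the fundamental rights questions raised below.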

The Hellenic Police claims that this will be a more “efficient” way to identify individuals than the current procedure, i.e. bringing individuals who do not carry identification documents to the nearest police station. Based on the project’s implementation timetable, the devices and related systems should be fully functional and ready for use within 20 months of the signing of the contract. It is thus anticipated that the Hellenic Police will be able to use these devices by the beginning of 2021.

When the Hellenic Police published its press release in December 2019, EDRi observer Homo Digitalis addressed an Open Letter to the competent Greek minister requesting clarifications about the project. More precisely, based on the provisions of Directive 2016/680 (the Law Enforcement Directive, LED) and Greek Law 4624/2019 implementing it, Homo Digitalis asked the Minister of Citizen’s Protection whether the Hellenic Police had consulted the Hellenic Data Protection Authority (DPA) on this matter and/or conducted a related Data Protection Impact Assessment (DPIA), what the applicable safeguards are, and which legal provisions allow for such data processing activities by the Hellenic Police.

In February 2020, the Hellenic Police replied, but neither confirmed nor denied that a prior consultation with the Hellenic DPA had taken place or that a DPIA had been conducted. Moreover, Homo Digitalis claims that the Hellenic Police did not adequately address the applicable safeguards or the legal regime that would justify such data processing activities.

As a result of this inaction by the public authorities, on 19 March 2020 Homo Digitalis filed a request for an opinion with the Hellenic DPA regarding this smart policing contract. The request is based on the national provisions implementing Article 47 of the LED, which provides for the investigatory, corrective and advisory powers of the DPAs.

With this request, Homo Digitalis claims that the processing of biometric data, such as that described in the contract, is allowed only when three criteria are met: 1. it is authorised by Union or Member State law; 2. it is strictly necessary; and 3. it is subject to appropriate safeguards for the rights and freedoms of the individuals concerned. None of these criteria is met in this case. Specifically, there are no special legal provisions in place allowing the Hellenic Police to collect such biometric data during police stops. Moreover, the use of these devices cannot be justified as strictly necessary, since the identification of an individual is adequately achieved by the current procedure. Furthermore, such processing activities use new technologies and are very likely to result in a high risk to the rights and freedoms of the data subjects. The Hellenic Police is therefore obliged to carry out a data protection impact assessment prior to the processing and to consult the Hellenic DPA.

Read more:

Homo Digitalis’ request for opinion to the Hellenic DPA (only in Greek, 19.03.2020)
https://www.homodigitalis.gr/wp-content/uploads/2020/03/HomoDigitalis.pdf

Press Release of Hellenic Police (only in Greek, 14.12.2019)
http://www.astynomia.gr/images/stories/2019/prokirikseis19/14122019anakoinosismartpolicing.pdf

Press Release of Intracom Telecom (02.07.2019)
http://www.intracom-telecom.com/en/news/press/press2019/2019_07_02.htm

The technical specifications of the smart policing contract (Only in Greek, 12.04.2018)
http://www.astynomia.gr/images/stories/2018/prokirikseis18/12042018-texn_prod.pdf

Homo Digitalis’ Open Letter to the Minister of Citizen’s Protection (only in Greek, 16.12.2019)
https://www.homodigitalis.gr/posts/4662

Reply to Homo Digitalis’ Open Letter by the Hellenic Police (only in Greek, 14.02.2020)
https://www.homodigitalis.gr/wp-content/uploads/2020/02.pdf

(Contribution by Eleftherios Chelioudakis, EDRi observer Homo Digitalis, Greece)

01 Apr 2020

#PrivacyCamp20: Event Summary

By EDRi

The 8th edition of Privacy Camp, held in 2020, revolved around the topic of Technology and Activism, with a schedule composed of ten sessions in different formats. What were they about? Below is a summary of each discussion, with links to the full session recordings.

Storytelling session: Stories of Activism
Session description / Session recording

This session took the format of a series of three stories told by four activists. Jeff Deutch from The Syrian Archive opened by pointing to the role of emerging tech in documenting conflicts, from the war in Vietnam to the rise of citizen journalism in the Tunisian revolution of 2011 and the Syrian conflict. Because platforms’ content removal policies threaten such documentation, he pointed to three areas of work he is currently invested in as part of The Syrian Archive: archival, verification, and searching of content. Sergey Boyko from the Internet Protection Society (Russia) continued the session by recounting his experience of using the internet while hiding from law enforcement, who sought to arrest him in order to stop a street protest he was organising against the Russian government’s pension reform. He described the tactics he used to secure his communications, accommodation and use of social media while in hiding, and concluded that it is possible to use the internet outside governments’ view if you understand how the internet works and what the limits of government surveillance capabilities are. Finally, Finn Sanders and Jan-Niklas Niebisch from Fridays for Future (FFF) Germany focused on FFF’s use of social media to attract people to protests, with Instagram instrumental in reaching young people. They outlined the tools used for national coordination, the cooperation with police forces, and the moderation arrangements ensuring that the content shared via these tools is legal and not harmful.

Defending digital civic space: How to counter digital threats against civil society
Session description / Session recording

With a background in free software and free culture, journalist Rula Asad from the Syrian Female Journalists Network kicked off the session by explaining that building digital resilience is key to defending activists. Her organisation does this in the face of internet shutdowns and spyware, and by showing activists how to use certain security tools. As an advocate for human rights defenders (HRDs), with a focus on women, she helped build a security helpline in her university’s IT security club. She noted that power relations shape these threats, and that speaking out is more difficult for women, who are more often silenced online than offline. Among the risks she brought up were stress, burnout and self-censorship resulting from a lack of solidarity. Hassen Selmi from Access Now, on the other hand, mentioned phishing as a very common threat for HRDs, alongside physical attacks, arrests, searches of devices, and ransomware. Finally, Alexandra Hache from the Digital Defenders Partnership at Hivos pointed to the rise of mass surveillance and the growing use of internet shutdowns and slow-downs. She also noted the rise of privacy-friendly technologies, but also the increasing difficulty for users to control their data, and that activists often rely on tools that were not designed for activists, such as social media. One of the key issues raised was the role of tailored training sessions for activists, with adequate follow-ups to ensure that good practices become part of the culture of the organisation.

Investigative journalism in South East Europe
Session description / Session recording

In this lively and insightful debate, moderator Sofija Todorovic from the Balkan Investigative Reporting Network (BIRN) led the panel through an exploration of how the context of state power, in particular the presence or absence of democratic controls, can change what it means to protect investigative journalists. Andrej, Director of Tech at the SHARE Foundation, launched the discussion with an explanation of how attacks on journalists are becoming less technical and more focused on social engineering or smear campaigns. Drawing on the Serbian context, he noted that replacing the control of public actors with private actors shifts, but does not solve, the problem. Peter Erdelyi, Senior Editor at 444.hu, continued that civic spaces in Hungary are shrinking, with systematic government pressure on independent media to stop investigations into corruption. Domagoj Zovak, Editor and Anchor at Prime Time, finished by talking about the monopoly of media control in Croatia and how it has led to a culture of fear. The panel’s conclusion offered a powerful reminder that increasing internet regulation is not a panacea: in some parts of the EU, it is the state, not private platforms, that poses the biggest threat to free expression.

How To Parltrack Workshop
Session description / Not recorded

The Parltrack workshop gave participants the opportunity to understand Parltrack and how to use it and its data more efficiently. The workshop started with a presentation of the European institutions and their legislative system and processes. Parltrack was presented as a European initiative to improve the transparency of legislative processes. Although it is not a perfect tool (data can be hard to obtain, and amendments are difficult to render), participants learned how Parltrack combines information on dossiers, representatives, vote results and committee agendas into a unique database, and how it allows dossiers to be tracked via email and RSS.
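
As a starting point for readers who want to explore the data themselves, here is a minimal sketch of querying a Parltrack bulk dump. Parltrack publishes its database as downloadable JSON dumps; the file name and field names used below are assumptions for illustration and should be checked against the current documentation on parltrack.org.

```python
# Minimal sketch: list dossiers handled by a given committee from a
# Parltrack JSON dump. File name and field names are assumptions.
import json

with open("ep_dossiers.json") as f:
    dossiers = json.load(f)

# Print dossiers with LIBE listed among the responsible committees.
for dossier in dossiers:
    committees = dossier.get("committees", [])
    if any(c.get("committee") == "LIBE" for c in committees):
        print(dossier.get("reference"), "-", dossier.get("title"))
```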

The impact of surveillance on today’s kids – tomorrow’s human rights activists?
Session description / Session recording

Jen Persson, Director of DefendDigitalMe, opened the discussion with the remark that children are perceived as an “Other” to be protected; it is under this protection regime, she argued, that children often lose their rights. She talked about schools monitoring pupils under their responsibility to identify extremist affiliations, as well as the commercial re-appropriation of school census data. Daniel Carey, from the civil rights and judicial review law firm Deighton Pierce Glynn, then discussed his case concerning a pupil who took back control of their data after being referred under the UK’s pre-crime Prevent programme, underlining how easily data generated by children can be used against them. The third speaker was Gracie Bradley, Liberty’s Policy and Campaigns Manager. She talked about the Against Borders for Children (ABC) coalition and the Boycott School Census action, situating the topic within the UK’s “hostile environment policy”, under which the UK Government introduced entitlement checks into essential public services and data sharing schemes between those public services and the Home Office. Finally, Gloria González Fuster, Research Professor at the Law, Science, Technology and Society (LSTS) Research Group at Vrije Universiteit Brussel (VUB), argued that anyone who cares about data protection and privacy in general cannot place children at a lower level of protection because of their age. She pointed to the GDPR recital stating that children deserve specific protection, as well as the strategies often used to circumvent the legal protections for children and their data under current data protection frameworks.

EDPS – Civil society summit
Session description / Session recording

The Privacy Camp hosted the European Data Protection Supervisor (EDPS) civil society summit, which gave participants the opportunity to debate the rising threat of facial recognition with the EDPS himself, Wojciech Wiewiórowski, and members of his team. From across the EDRi network and beyond, attendees gathered at the roundtable to talk about violations of the principles of proportionality and necessity, and other impacts on rights. This included examples of the deployment of facial surveillance systems in France, Serbia and Wales. The summit allowed participants to debate the merits of a ban compared to the benefits of a moratorium, and to consider whether civil society should instead focus on improving enforcement of existing legislation. It also gave everyone the chance to consider the nuances between different uses of facial recognition, for example whether it takes place in public spaces or not. The EDPS closed the roundtable with a nod to the old CCTV privacy/security debates, and a recognition that the current approach to facial recognition is very fractured across Member States. He warned civil society not to focus on the accuracy question, and instead to look at tools that address the fundamental rights risks, such as impact assessments.

Access requests as a tool for activism
Session description / Session recording

Following an explanation of the right of access under the General Data Protection Regulation (GDPR), the moderator Joris van Hoboken from the Law, Science, Technology and Society (LSTS) Research Group at Vrije Universiteit Brussel (VUB) introduced the first speaker, Gaëtan Goldberg from noyb, who presented the organisation’s activities: representing individuals before their Data Protection Authorities (DPAs). Noyb focuses on comparing the response people get when submitting a data access request with what the company says in its privacy policy and marketing material. Taking as an example the results of a series of subject access requests directed at streaming service providers such as Netflix and Spotify, Gaëtan concluded that data subject access requests are a good tool, but that GDPR enforcement is much needed. Karolina Iwanska of the Panoptykon Foundation explained that Panoptykon approaches the topic through the power and influence that governments and companies have over our decisions, using data subject access requests to focus on how collected data is interpreted in the areas of advertising and bank credit scoring. Finally, René Mahieu from LSTS presented his PhD research on the question “Is the right of access effective in practice?”. Adding to the uses of data requests by digital rights organisations, René mentioned the spread of data subject requests as a tool for labour rights and consumer rights organisations. He also pointed out that access is not easily given, but that as soon as there is a public spotlight, companies are quick to reply to such requests.

“Actually, In Google We Trust”? A ‘Deconstructing’ Conversation on Russian Internet
Session description / Session recording

The moderator Francesca Musiani from the Centre for Internet and Society (CNRS) started the debate by briefly describing the new set of juridical measures that impact internet infrastructures, and therefore the civil liberties of the Russian population. She also listed some new ways of circumventing those limits that might surprise Western activists. Ksenia Ermoshina from CNRS talked about her research on the use and development of encryption protocols in the region, and how cryptographic researchers were surprised that endangered journalists were using Facebook, WhatsApp and similar tools. She stressed that the perception of security versus privacy there is very different from that in Western Europe. Anna Zaytseva from LLA CREATIS at the University of Toulouse-Jean Jaurès gave examples of why some activists use Google services: the main reason is that, according to the Google Transparency Report, Google has never replied to information requests from Belarusian or Russian authorities. She stressed that, because of geopolitics, if you want to be an activist in the United States you should use the Russian social network VK, whereas if you are an activist in Russia you should use Facebook. Sergey Boyko, co-founder of the Internet Protection Society, highlighted that aspect: hundreds of VK users have been jailed for their opinions or posts, whereas there have been only two cases of people jailed for their posts on Facebook. In that sense, Facebook is relatively safe for Russian activists. Services like Mail.ru and VK give information to the SPB (Saint Petersburg Police) directly and in real time. Boyko also mentioned that the Russian authorities cannot follow the Chinese route: they could not simply ban Facebook and Google. Instead, they use other methods: they intimidate those companies with high fines and threats of blocking, and they work very closely with them to get sensitive content removed. Activists fear that Google and Facebook will eventually start to collaborate much more closely with the Russian government. That is why it is important for activists to work with those big platforms, to make them understand the dangers of collaborating with the Russian government.

Activism and digital infrastructures
Session description / Session recording

The discussion was started by Amber Macintyre from Tactical Tech, who pointed out that the rise of data-driven tools in the NGO sector informs, but does not determine, long-term decision making. Michael Hulet from Extinction Rebellion Belgium mentioned that, even in activist circles aware of the dangers of exploitative data flows, privacy-friendly tools can slow down a grassroots movement; tools used must, according to him, be accessible and global. Glyn Thomas, a digital strategy consultant working with NGOs, shared his thoughts on the privacy-related behaviour of organisations of different sizes, focusing on the dangers each type of NGO faces in this respect. Moderator Jan Tobias Muehlberg facilitated the Q&A, addressing issues such as trust, platform censorship of activists, usage habits and ways of transitioning to alternatives. The discussion concluded with the idea that activists need concrete visibility of the threats arising from a lack of privacy in order to be motivated to change their tools and practices.

Internet for All – Silenced and Harassed No More!
Session description / Session recording

This powerful panel drew attention to the need for digital rights work to better incorporate diverse, intersectional experiences in order to protect all internet users. EDRi’s Chloé Berthélémy, as moderator, noted that this is important for upcoming work on the Digital Services Act (DSA). Oumayma Hammadi from Rainbow House Brussels launched the panel by raising the issue of the disproportionate censoring of LGBTQ+ online spaces and bodies. Alejandro Moledo from the European Disability Forum (EDF) continued that platforms are an important part of self-determination for people with disabilities, yet these users receive enormous online abuse. Štefan Balog from Romea revealed how the internet has exacerbated hatred of Roma people and has even incited physical violence. Lastly, Pamela Morinière from the International Federation of Journalists talked about how our gendered society affects women journalists, leading to hate and violence both online and offline. She explained that online anonymity protects abusers from accountability.

What was your favourite session this year? Let us know by tweeting your thoughts with the hashtag #PrivacyCamp20.

Privacy Camp Updates Newsletter
https://mailman.edri.org/mailman/listinfo/privacycamp

Privacy Camp Event website
https://privacycamp.eu

#PrivacyCamp2020 – Programme
https://privacycamp.eu/?p=1126

27 Mar 2020

Open letter: Civil society urges Member States to respect the principles of the law in Terrorist Content Online Regulation

By EDRi

On 27 March 2020, European Digital Rights (EDRi) and 12 of its member organisations sent an open letter to representatives of Member States in the Council of the EU. In the letter, we voice our deep concern over the proposed Regulation on terrorist content online and what we view as serious potential threats to fundamental rights such as privacy and freedom of expression.

You can read the letter here (pdf) and below.

Brussels, 27 March 2020

Dear representatives of Member States in the Council of the EU,

We hope that you are keeping well in this difficult time.

We are writing to you to voice our serious concerns with the proposed Regulation on preventing the dissemination of terrorist content online (COM/2018/640 final). We have raised these concerns before, and many similar critiques have been expressed in letters opposing the Regulation from human rights officials, civil society groups, and human rights advocates.[i]

We firmly believe that any common position on this crucial file must respect fundamental rights and freedoms, the constitutional traditions of the Member States and existing Union law in this area. In order for this to happen, we urge you to ensure that the rule of law in cross-border cases is respected, that the competent authorities tasked with ordering the removal of illegal terrorist content are independent, to refrain from adopting mandatory (re)upload filters, and to guarantee that the exceptions for certain protected forms of expression, such as educational, journalistic and research materials, are maintained in the proposal. We explain why in more detail below.

First, we ask you to respect the principle of territoriality and ensure access to justice in cases of cross-border takedowns by ensuring that only the Member State in which the hosting service provider has its legal establishment can issue removal orders. The Regulation should also allow removal orders to be contested in the Member State of establishment to ensure meaningful access to an effective remedy. As recent CJEU case law has established, “efficiency” or “national security” reasons cannot justify short-cuts around rule of law mechanisms and safeguards.[ii]

Secondly, the principle of due process demands that the legality of content be determined by a court or independent administrative authority. This important principle should be reflected in the definition of ‘competent authorities’. For instance, we note that in the Digital Rights Ireland case, the Court of Justice of the European Union considered that the Data Retention Directive was invalid, inter alia, because access to personal data by law enforcement authorities was not made dependent on a prior review carried out by a court or independent administrative authority.[iii] In our view, the removal of alleged terrorist content entails a very significant interference with freedom of expression and, as such, calls for the application of the same safeguards.

Thirdly, the Regulation should not impose the use of upload or re-upload filters (automated content recognition technologies) on the services within its scope. As the coronavirus crisis makes abundantly clear, filters are far from accurate. In recent days alone, Twitter, Facebook and YouTube have moved to fully automated removal of content, leading to scores of legitimate articles about the coronavirus being removed.[iv] The same will happen if filters are applied to alleged terrorist content. There is also mounting evidence that algorithms are biased and have a discriminatory impact, which is a particular concern for communities affected by terrorism, whose counter-speech has proven vital against radicalisation and terrorist propaganda. Furthermore, a provision imposing specific measures on platforms should favour a model that gives service providers room for manoeuvre in deciding which actions to take to prevent the dissemination of illegal terrorist content, taking into account their capacities and resources, size and nature (whether not-for-profit, for-profit or community-led).

Finally, it is crucial that certain protected forms of expression, such as educational, artistic, journalistic and research materials, are exempted from the proposal, and that it includes feasible measures to ensure this can be successfully implemented. The determination of whether content amounts to incitement to terrorism, or even glorification of terrorism, is highly context-specific. Research materials should be defined to include content that serves as evidence of human rights abuses. The jurisprudence of the European Court of Human Rights (ECtHR)[v] specifically requires particular caution towards such protected forms of speech and expression. It is vital that these principles are reflected in the Terrorist Content Regulation, including through the adoption of specific provisions protecting freedom of expression as outlined above.

We remain at your disposal for any support you may need from us in the future.

Sincerely,
Access Now – https://www.accessnow.org/
Bits of Freedom – https://www.bitsoffreedom.nl/
Centrum Cyfrowe – https://centrumcyfrowe.pl
CDT – https://cdt.org
Committee to Protect Journalists (CPJ) – https://cpj.org/
Daphne Keller – Director Program on Platform Regulation Stanford University
Digitale Gesellschaft – https://digitalegesellschaft.de/
Digitalcourage – https://digitalcourage.de/
D3 – Defesa dos Direitos Digitais – https://www.direitosdigitais.pt/
Državljan D – https://www.drzavljand.si/
EDRi – https://edri.org/
Electronic Frontier Foundation (EFF) – https://www.eff.org/
Epicenter.Works – https://epicenter.works
Free Knowledge Advocacy Group EU – https://wikimediafoundation.org/
Hermes Center – https://www.hermescenter.org/
Homo Digitalis – https://www.homodigitalis.gr/en/
IT-Political Association of Denmark – https://itpol.dk/
Panoptykon Foundation – https://en.panoptykon.org
Vrijschrift – https://www.vrijschrift.org
Wikimedia Spain – https://wikimedia.es

Footnotes

i.

ii.

iii.

  • See Digital Rights Ireland v. Minister for Communications, Marine and Natural Resources, Joined Cases C‑293/12 and C‑594/12, 08 April 2014 at para. 62.

iv.

v.

  • In cases involving the dissemination of “incitement to violence” or terrorism by the press, the ECtHR’s starting point is that it is “incumbent [upon the press] to impart information and ideas on political issues just as on those in other areas of public interest. Not only does the press have the task of imparting such information and ideas: the public also has a right to receive them.” See Lingens v. Austria, App. No. 9815/82, 8 July 1986, para. 41.
  • The ECtHR has also repeatedly held that the public enjoys the right to be informed of different perspectives, e.g. on the situation in South East Turkey, however unpalatable they might be to the authorities. See also Özgür Gündem v. Turkey, no. 23144/93, 16 March 2000, paras 60 and 63, and the Council of Europe handbook on protecting the right to freedom of expression under the European Convention on Human Rights, summarising the Court’s case law on the positive obligations of States with regard to the protection of journalists (pp. 90-93), available at: https://rm.coe.int/handbook-freedom-of-expression-eng/1680732814
25 Mar 2020

Facial Recognition & Biometric Surveillance: Document Pool

By EDRi

At least 15 European countries have experimented with highly intrusive facial and biometric recognition systems for mass surveillance. The use of these systems can infringe on people’s right to conduct their daily lives in privacy and with respect for their fundamental freedoms. It can prevent them from participating fully in democratic activities, violate their right to equality and much more.

The gathering and use of biometric data for remote identification purposes, for instance through deployment of facial recognition in public places, carries specific risks for fundamental rights.

European Commission, White Paper on Artificial Intelligence

This has happened in the absence of proper public debate on what facial recognition means for our societies, how it amplifies existing inequalities and violations, and whether it fits with our conceptions of democracy, freedom, equality and social justice.

Considering the high risk of abuse, discrimination and violation of fundamental rights to privacy and data protection, the EU and its Member States must develop a strong, privacy-protective approach to all forms of biometric surveillance. In this document pool we will be listing relevant articles and documents related to the issue of facial and biometric recognition. This will allow you to follow the developments of surveillance measures and regulatory actions in Europe.

EDRi’s analysis and recommendations
EDRi members’ actions and reporting
EDRi’s blogposts and press releases
Guidance from data protection authorities
Key dates and official documents
Other useful resources


EDRi’s analysis and recommendations

Available in April 2020


EDRi members’ actions and reporting


EDRi’s blogposts and press releases


Guidance from data protection authorities

Pan-European authorities:

National authorities:


Key dates* and official documents


Other useful resources


* subject to change

20 Mar 2020

EDRi calls for fundamental rights-based responses to COVID-19

By EDRi

The Coronavirus (COVID-19) disease poses a global public health challenge of unprecedented proportions. In order to tackle it, countries around the world need to engage in co-ordinated, evidence-based responses. Our responses should be grounded in solidarity, support and respect for human rights, as the Council of Europe Commissioner for Human Rights has highlighted. The use of high-quality data can support the vital work of scientists, researchers, and public health authorities in tracking and understanding the current pandemic.

However, some of the actions taken by governments and businesses under today’s exceptional circumstances can have significant repercussions on freedom of expression, privacy and other human rights, both today and tomorrow. We are already seeing the launch of legal initiatives to tackle misinformation, sometimes with disproportionate reactions from governments. Similarly, we are witnessing a surge in emergency-related policy initiatives, some of them risking the abuse of sensitive personal data in an attempt to safeguard public health. When acting to address such a crisis, measures cannot lead to disproportionate and unnecessary actions, and it is also vital that they are not extended once we are no longer in a state of emergency.

In these circumstances, European Digital Rights (EDRi) calls on the Member States and institutions of the European Union (EU) to ensure that, while taking public health measures to tackle COVID-19, they:

  • Strictly uphold fundamental rights: Under the European Convention on Human Rights, any emergency measures which may infringe on rights must be “temporary, limited and supervised” in line with the Convention’s Article 15, and cannot be contradictory to international human rights obligations. Similar wording can be found in Article 52.1 of the EU Charter of Fundamental Rights. Actions to tackle coronavirus using personal health data, geolocation data or other metadata must still be necessary, proportionate and legitimate, must have proper safeguards, and cannot excessively undermine the fundamental right to a private life.
  • Protect data for now and the future: Under the General Data Protection Regulation (GDPR) and the ePrivacy Directive, location data is personal data, and is therefore subject to high levels of protection even when processed by public authorities or private companies. Location data revealing the movement patterns of individuals is notoriously difficult to anonymise, although many companies claim that they can do this. Data must be anonymised to the fullest extent, for example through aggregation and statistical counting (see the sketch after this list). COVID-19 cannot be an opportunity for private entities to profit, but rather can be an opportunity for the EU’s Member States to adhere to the highest standards of data quality, processing and protection, with the guidance of national data protection authorities, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS).
  • Limit the purpose of data to the COVID-19 crisis only: Under law, the data collected, stored and analysed in support of public health measures must not be retained or used outside the purpose of controlling the coronavirus situation.
  • Implement exceptional measures only for the duration of the crisis: The necessity and proportionality of exceptional measures taken during the COVID-19 crisis must be reassessed once the crisis is ameliorated. Measures should be time-limited and subject to automatic review for renewal at short intervals.
  • Keep tools open: To preserve public trust, all technical measures to manage coronavirus must be transparent and must remain under public control. In practice, this means using free/open source software when designing public interest applications.
  • Condemn racism and discrimination: Measures taken should not lead to discrimination of any form, and governments must remain vigilant to the disproportionate harms that marginalised groups can face.
  • Defend freedom of expression and information: In order to take sensible, well-informed decisions, we need access to good-quality, trustworthy information. This means protecting the voices of human rights defenders, independent media, and health professionals more than ever. In addition to this, the increased use of automated tools to moderate content as a result of fewer human moderators being available needs to be carefully monitored. Moreover, a complete suspension of attention-driven advertising and recommendation algorithms should be considered to mitigate the spread of disinformation that is already ongoing.
  • Take a stand against internet shutdowns: During this crisis and beyond, an accessible, secure, and open internet will play a significant role in keeping us safe. Access for individuals, researchers, organisations and governments to accurate, reliable and correct information will save lives. Attempts by governments to cut or restrict access to the internet, block social media platforms or other communications services, or slow down internet speed will deny people vital access to accurate information, just when it is of paramount importance that we stop the spread of the virus. The EU and its Member States should call on governments to immediately end any and all deliberate interference with the right to access and share information, a human right and vital to any public health and humanitarian response to COVID-19.
  • Companies should not exploit this crisis for their own benefit: Tech companies, and the private sector more broadly, need to respect existing legislation in their efforts to contribute to the management of this crisis. While innovation will hopefully have a role in mitigating the pandemic, companies should not abuse the extraordinary circumstances to monetise information at their disposal.
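
As an illustration of the “aggregation and statistical counting” mentioned in the second point above, the minimal sketch below reduces individual location points to coarse grid-cell counts and suppresses cells containing few people, so that no small group can be singled out. It is a toy model of the principle only; genuinely anonymising mobility data is notoriously difficult and requires expert review.

```python
# Toy sketch of aggregation with small-count suppression (k-anonymity style).
# Parameter values are illustrative assumptions, not recommendations.
from collections import Counter

K = 10          # minimum number of people per published cell
CELL = 0.05     # grid size in degrees (roughly a few kilometres)


def cell(lat: float, lon: float):
    """Snap a coordinate to its coarse grid cell."""
    return (round(lat / CELL) * CELL, round(lon / CELL) * CELL)


def aggregate(points):
    """Publish only grid cells where at least K individuals were observed."""
    counts = Counter(cell(lat, lon) for lat, lon in points)
    return {c: n for c, n in counts.items() if n >= K}


points = [(50.8503, 4.3517)] * 12 + [(48.8566, 2.3522)] * 3
print(aggregate(points))  # only the first cell (12 >= K) survives
```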

Read more:

16 Mar 2020

Terrorist Content Online Regulation: Time to get things right

By Diego Naranjo

I am convinced that the only effective way to tackle terrorism is firmly rooted in the respect of fundamental and human rights.

EU Security Union Commissioner Sir Julian King, 14 November 2016.

Closed-door negotiations (“trilogues”) on the Regulation to prevent the dissemination of terrorist content continue in Brussels. After our open letter in December, things moved fairly slowly at first, but new texts are now being discussed quickly in an attempt to reach an agreement soon. Nonetheless, according to MEP Patrick Breyer, many key issues remain open for discussion.

The Regulation, heavily criticised in its original proposal by the EU Fundamental Rights Agency, the European Data Protection Supervisor (EDPS) and UN Special Rapporteurs because of its potential impact on privacy and freedom of expression, is one of the key pieces of legislation to be negotiated during 2020. If not done correctly, the Regulation could lead to the imposition of “terror filters” that take down legitimate content because filters cannot understand context, could limit investigative journalism (more information here), and could become an instrument for governmental authorities to suppress legitimate dissent under the pretext of the fight against terrorism.

The European Parliament successfully included in the Report from the Civil Liberties, Justice and Home Affairs (LIBE) Committee some of the main safeguards we demanded. This Report also represents the position of the Parliament as a whole in the present negotiations.

The negotiators from EU Member States and the European Parliament need to ensure that the final text keeps enough safeguards as proposed in the Parliament’s Report, paying special attention to the following:

  • The definitions in the Regulation need to be clearly aligned with those in the Terrorism Directive and include “intent” as a core criterion for defining what constitutes “terrorist content”.
  • Competent authorities in Member States need to be independent from the executive, that is to say, unable to seek or take instructions from any other government body when issuing take-down orders. Otherwise, governments willing to crack down on dissenting voices may be tempted to use “terrorism” as an excuse to silence them.
  • Member State authorities should be able to have content removed directly only when the service provider is established in their jurisdiction. When the allegedly illegal terrorist content is hosted by a company in another Member State, the requesting Member State needs to ask that State to remove the content. Extra-territorial enforcement of removal orders would otherwise circumvent rule-of-law mechanisms.
  • Referrals (suggestions by law enforcement authorities that companies check potentially “terrorist” content against their terms and conditions) need to be kept out of any future text, to ensure that legal procedures are not subverted in the name of “efficiency”.
  • Terror filters (upload filters, re-upload filters or “proactive measures”) should not be imposed on companies, as this would breach the eCommerce Directive’s prohibition of general monitoring obligations and lead to the removal of legitimate content.
  • According to both the Parliament and the Council versions, all companies must remove content within one hour. This rule does not take into account the limited capacity of smaller companies or of services run by non-profit organisations, which cannot handle such requests at the scale of the internet giants. Although it is unlikely that the two institutions will remove a rule they both agreed during previous negotiations, it is worth bearing in mind that the one-hour deadline is likely to strengthen the big tech companies, the only ones capable of dealing with such requests in so short a time. If the rule is not removed, smaller services could be seriously harmed by the combined requirements of implementations of the Copyright Directive and this Regulation.

If this Regulation is to be adopted, policy makers need to ensure that it does not create the kind of legal uncertainty that other vertical legislation regulating online content is already creating. If the text does not take on board the voices of journalists, human rights groups, the EU Fundamental Rights Agency and three UN Special Rapporteurs, we risk setting a bad precedent for future evidence-based and human rights-centred legislation. Fortunately, there is still time to get things right. Contact your local digital rights organisation and find out how you can support their work.

Read more:

Terrorist Online Content Regulation: Document Pool (21.11.2018)
https://edri.org/terrorist-content-regulation-document-pool/

Committee to Protect Journalists (11.03.2020)
https://cpj.org/2020/03/eu-online-terrorist-content-legislation-press-freedom.php

Human rights defenders are not terrorists, and their content is not propaganda (21.01.2020)
https://blog.witness.org/2020/01/human-rights-defenders-not-terrorists-content-not-propaganda/

Lifting the veil on the secretive EU terror filter negotiations: Here’s where we stand (09.03.2020)
https://www.patrick-breyer.de/?p=590541&lang=en

FRA and EDPS: Terrorist Content Regulation requires improvement for fundamental rights (20.02.2019)
https://edri.org/fra-edps-terrorist-content-regulation-fundamental-rights-terreg/

Terrorist Content Regulation – prior authorisation of all uploads? (21.11.2018)
https://edri.org/terrorist-content-regulation-prior-authorisation-for-all-uploads/

Contribution by Diego Naranjo, EDRi

11 Mar 2020

Stuck under a cloud of suspicion: Profiling in the EU

By Chloé Berthélémy

As facial recognition technologies are gradually rolled out in police departments across Europe, anti-racism groups are blowing the whistle on the discriminatory over-policing of racialised communities linked to the increasing use of new technologies by law enforcement agents. A report by the European Network Against Racism (ENAR) and the Open Society Justice Initiative analyses daily police practices supported by specific technologies – such as crime analytics, mobile fingerprinting scanners, social media monitoring and mobile phone extraction – to uncover their disproportionate impact on racialised communities.

Besides these local and national policing practices, the European Union (EU) has also played an important role in developing police cooperation tools based on data-driven profiling. Exploiting the narrative that criminals abuse the Schengen free movement area, the EU justifies mass monitoring of the population and profiling techniques as part of its Security Agenda. Unfortunately, no proper democratic debate takes place before these technologies are deployed.

What is profiling in law enforcement?

Profiling is a technique whereby a large amount of data is extracted (“data mining”) and analysed (“processing”) to identify patterns or types of behaviour that help classify individuals. In the context of security policies, some of these categories are then labelled as “presenting a risk” and needing further examination – either by a human or by another machine. Profiling thus works as a filter applied to the results of a general monitoring of everyone. It lies at the root of predictive policing.
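
To make the mechanics concrete, here is a minimal, purely hypothetical Python sketch of such rule-based profiling. None of the field names, thresholds or rules correspond to any real system; they are invented to show how a filter over mass-collected data turns combinations of neutral attributes into flags for “further examination”.

```python
# Purely illustrative sketch of rule-based profiling: records produced by
# general monitoring are matched against predefined "risk" patterns, and
# anyone who matches is flagged for further examination. All field names,
# thresholds and rules here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Record:
    age: int
    country_of_residence: str
    monthly_cash_transfers: int

# Each "risk indicator" is a predicate over a record. In deployed systems
# such rules are often opaque; note how the second rule turns neutral
# attributes (age, residence) into proxies for suspicion.
RISK_INDICATORS = [
    lambda r: r.monthly_cash_transfers > 10,   # "abnormal financial activity"
    lambda r: r.age < 30 and r.country_of_residence not in {"DE", "FR"},
]

def flag_for_review(records):
    """Return the subset of records matching at least one risk indicator."""
    return [r for r in records if any(rule(r) for rule in RISK_INDICATORS)]

population = [
    Record(age=25, country_of_residence="TN", monthly_cash_transfers=3),
    Record(age=45, country_of_residence="DE", monthly_cash_transfers=2),
]
print(flag_for_review(population))  # only the first record is flagged
```

Even in this toy example, the second rule flags a person purely on the basis of age and country of residence, regardless of any conduct, which is exactly the kind of discriminatory generalisation the ENAR report documents.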

In Europe, data-driven profiling, used mostly for security purposes, spiked in the immediate wake of terrorist attacks such as the 2004 Madrid and 2005 London bombings. As a result, EU counter-terrorism and internal security policies – and their underlying policing practices and tools – are informed by racialised assumptions, including specifically anti-Muslim and anti-migrant sentiments, leading to racial profiling. Contrary to what security and law enforcement agencies claim, the technology is neither immune to these discriminatory biases nor objective in its endeavour to prevent crime.

European initiatives

The EU has been actively supporting profiling practices. First, the Anti-Money Laundering and Counter-Terrorism Directives oblige private actors such as banks, auditors and notaries to report suspicious transactions that might be linked to money laundering or terrorist financing, and to establish risk assessment procedures. “Potentially risky” profiles are built on risk factors that are not always chosen objectively, but rather reflect racialised prejudice about what constitutes “abnormal financial activity”. As a consequence, migrants, cross-border workers and asylum seekers are usually over-represented among the individuals matching such profiles.

Another example is the Passenger Name Record (PNR) Directive of 2016. The Directive obliges airline companies to collect the personal data of people travelling from EU territory to third countries and to share it among all EU Member States. The aim is to identify certain categories of passengers as “high-risk passengers” requiring further investigation. There are ongoing discussions about extending this system to rail and other forms of public transport.

More recently, the multiplication of EU databases in the field of migration control, and their interconnection, has facilitated the incorporation of profiling techniques to analyse and cherry-pick “good” candidates. For example, the Visa Information System, whose reform is currently being fast-tracked, is a database holding up to 74 million short- and long-stay visa applications, which are run against a set of “risk indicators”. Such “risk indicators” consist of a combination of data including age range, sex, nationality, country and city of residence, EU Member State of first entry, purpose of travel, and current occupation. The same logic applies in the European Travel Information and Authorisation System (ETIAS), a tool slated for 2022 that will gather data about third-country nationals who do not require a visa to travel to the Schengen area. The risk indicators used in that system also aim at “pointing to security, illegal immigration or high epidemic risks”.
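
As an illustration only, the sketch below shows how an application might be screened against such indicators. The field names follow those listed above, but the indicator values and the matching logic are invented, since the actual VIS and ETIAS rules are not public.

```python
# Hypothetical illustration of screening a visa application against
# "risk indicators" of the kind described for VIS and ETIAS. The indicator
# values and matching logic are invented; the real rules are not public.

visa_application = {
    "age_range": "18-25",
    "sex": "M",
    "nationality": "XX",            # placeholder code
    "residence_country": "XX",
    "first_entry_state": "FR",
    "purpose_of_travel": "tourism",
    "occupation": "student",
}

# A risk indicator is a partial profile: an application "hits" when every
# field listed in the indicator matches the application.
risk_indicators = [
    {"age_range": "18-25", "sex": "M", "purpose_of_travel": "tourism"},
    {"nationality": "XX", "occupation": "unemployed"},
]

def screen(application, indicators):
    """Return the indicators that the application matches."""
    return [
        indicator for indicator in indicators
        if all(application.get(field) == value
               for field, value in indicator.items())
    ]

hits = screen(visa_application, risk_indicators)
if hits:
    print(f"Flagged for further examination ({len(hits)} indicator hit).")
```

Note that every applicant in the matched demographic is flagged regardless of individual conduct; this is the group generalisation that, as discussed below, must be checked against the right to non-discrimination.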

Why are fundamental rights in danger?

Profiling practices rely on the massive collection and processing of personal data, which represents a great risk for the rights to privacy and data protection. Since most policing instruments pursue a public security interest, they are presumed legitimate. However, few actually meet transparency and accountability requirements, and they are therefore difficult to audit. The essential legality tests of necessity and proportionality prescribed by the EU Charter of Fundamental Rights cannot be carried out: only a concrete danger – not the potentiality of one – can justify interferences with the rights to respect for private life and data protection.

In particular, the criteria used to determine which profiles need further examination are opaque and difficult to evaluate. Which categories and which data are selected and evaluated, and by whom? Regarding the ETIAS system, the EU Fundamental Rights Agency stressed that it was unclear whether risk indicators could be used without discriminating against certain categories of people in transit, and therefore recommended postponing the use of profiling techniques. Generalising about entire groups of persons on specific grounds must be checked against the right to non-discrimination. Furthermore, it is troubling that the evaluation and monitoring of profiling practices is entrusted to “advisory and guiding boards” hosted by law enforcement agencies such as Frontex. Excluding data protection supervisory authorities and democratic oversight bodies from this process is deeply problematic.

Turning several neutral features or behaviours into signs of an undesirable or even mistrusted profile can have dramatic consequences for individuals’ lives. Having your features match a “suspicious profile” can lead to restrictions of your rights. In the area of counter-terrorism, for example, your right to effective remedies and a fair trial can be hampered: since you are usually not aware that you have been placed under surveillance as a result of a match in the system, you are unable to contest the measure.

As law enforcement agencies across Europe increasingly conduct profiling, it is crucial that substantive safeguards are put in place to mitigate the many dangers it entails for individuals’ rights and freedoms.

Read more:

Data-driven policing: the hardwiring of discriminatory policing practices across Europe (19.11.2019)
https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf

New legal framework for predictive policing in Denmark (22.02.2017)
https://edri.org/new-legal-framework-for-predictive-policing-in-denmark/

Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status (14.11.2019)
https://www.statewatch.org/analyses/Data-Protection-Immigration-Enforcement-and-Fundamental-Rights-Full-Report-EN.pdf

Preventing unlawful profiling today and in the future: a guide (14.12.2018)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-preventing-unlawful-profiling-guide_en.pdf

(Contribution by Chloé Berthélémy, EDRi)
