27 Mar 2020

Open letter: Civil society urges Member States to respect the principles of the law in Terrorist Content Online Regulation

By EDRi

On 27 March 2020, European Digital Rights (EDRi) and 12 of its member organisations sent an open letter to representatives of Member States in the Council of the EU. In the letter, we voice our deep concern over the proposed legislation on the regulation of terrorist content online and what we view as serious potential threats to fundamental rights, including privacy and freedom of expression.

You can read the letter here (pdf) and below.

Brussels, 27 March 2020

Dear representatives of Member States in the Council of the EU,

We hope that you are keeping well in this difficult time.

We are writing to you to voice our serious concerns with the proposed Regulation on preventing the dissemination of terrorist content online (COM/2018/640 final). We have raised these concerns before and many similar critiques have been expressed in letters opposing the Regulation from human rights officials, civil society groups, and human rights advocates.i

We firmly believe that any common position on this crucial file must respect fundamental rights and freedoms, the constitutional traditions of the Member States and existing Union law in this area. For this to happen, we urge you to ensure that the rule of law is respected in cross-border cases, that the competent authorities tasked with ordering the removal of illegal terrorist content are independent, that mandatory (re)upload filters are not adopted, and that the exceptions for certain protected forms of expression, such as educational, journalistic and research materials, are maintained in the proposal. We explain why in more detail below.

First, we ask you to respect the principle of territoriality and ensure access to justice in cases of cross-border takedowns by ensuring that only the Member State in which the hosting service provider has its legal establishment can issue removal orders. The Regulation should also allow removal orders to be contested in the Member State of establishment to ensure meaningful access to an effective remedy. As recent CJEU case law has established, “efficiency” or “national security” reasons cannot justify short-cuts around rule of law mechanisms and safeguards.ii

Secondly, the principle of due process demands that the legality of content be determined by a court or independent administrative authority. This important principle should be reflected in the definition of ‘competent authorities’. For instance, we note that in the Digital Rights Ireland case, the Court of Justice of the European Union considered that the Data Retention Directive was invalid, inter alia, because access to personal data by law enforcement authorities was not made dependent on a prior review carried out by a court or independent administrative authority.iii In our view, the removal of alleged terrorist content entails a very significant interference with freedom of expression and as such, calls for the application of the same safeguards.

Thirdly, the Regulation should not impose the use of upload or re-upload filters (automated content recognition technologies) on the services within its scope. As the coronavirus crisis makes abundantly clear, filters are far from accurate. In recent days alone, Twitter, Facebook and YouTube have moved to fully automated content removal, leading to scores of legitimate articles about the coronavirus being taken down.iv The same will happen if filters are applied to alleged terrorist content. There is also mounting evidence that algorithms are biased and have a discriminatory impact, which is a particular concern for communities affected by terrorism and whose counter-speech has proven to be vital against radicalisation and terrorist propaganda. Furthermore, a provision imposing specific measures on platforms should favour a model that gives service providers room for manoeuvre in deciding which actions to take to prevent the dissemination of illegal terrorist content, taking into account their capacities and resources, size and nature (whether not-for-profit, for-profit or community-led).

Finally, it is crucial that certain protected forms of expression, such as educational, artistic, journalistic and research materials, are exempted from the proposal, and that the proposal includes feasible measures to ensure this exemption can be successfully implemented. The determination of whether content amounts to incitement to terrorism or even glorification of terrorism is highly context-specific. Research materials should be defined to include content that serves as evidence of human rights abuses. The jurisprudence of the European Court of Human Rights (ECtHR)v specifically requires particular caution to be applied to such protected forms of speech and expression. It is vital that these principles are reflected in the Terrorist Content Regulation, including through the adoption of specific provisions protecting freedom of expression as outlined above.

We remain at your disposal for any support you may need from us in the future.

Sincerely,
Access Now – https://www.accessnow.org/
Bits of Freedom – https://www.bitsoffreedom.nl/
Centrum Cyfrowe – https://centrumcyfrowe.pl
Committee to Protect Journalists (CPJ) – https://cpj.org/
Daphne Keller – Director, Program on Platform Regulation, Stanford University
Digitale Gesellschaft – https://digitalegesellschaft.de/
Digitalcourage – https://digitalcourage.de/
D3 – Defesa dos Direitos Digitais – https://www.direitosdigitais.pt/
Državljan D – https://www.drzavljand.si/
EDRi – https://edri.org/
Electronic Frontier Foundation (EFF) – https://www.eff.org/
Epicenter.Works – https://epicenter.works
Free Knowledge Advocacy Group EU – https://wikimediafoundation.org/
Hermes Center – https://www.hermescenter.org/
Homo Digitalis – https://www.homodigitalis.gr/en/
IT-Political Association of Denmark – https://itpol.dk/
Panoptykon Foundation – https://en.panoptykon.org
Vrijschrift – https://www.vrijschrift.org
Wikimedia Spain – https://wikimedia.es

Footnotes

i.

ii.

iii.

  • See Digital Rights Ireland v. Minister for Communications, Marine and Natural Resources, Joined Cases C‑293/12 and C‑594/12, 8 April 2014, para. 62.

iv.

v.

  • In cases involving the dissemination of “incitement to violence” or terrorism by the press, the ECtHR’s starting point is that it is “incumbent [upon the press] to impart information and ideas on political issues just as on those in other areas of public interest. Not only does the press have the task of imparting such information and ideas: the public also has a right to receive them.” See Lingens v. Austria, App. No. 9815/82, 8 July 1986, para. 41.
  • The ECtHR has also repeatedly held that the public enjoys the right to be informed of different perspectives, e.g. on the situation in South East Turkey, however unpalatable they might be to the authorities. See also Özgür Gündem v. Turkey, App. No. 23144/93, 16 March 2000, paras 60 and 63, and the Council of Europe handbook on protecting the right to freedom of expression under the European Convention on Human Rights, summarising the Court’s case law on the positive obligations of States with regard to the protection of journalists (pp. 90-93), available at: https://rm.coe.int/handbook-freedom-of-expression-eng/1680732814
25 Mar 2020

Facial Recognition & Biometric Surveillance: Document Pool

By EDRi

At least 15 European countries have experimented with highly intrusive facial and biometric recognition systems for mass surveillance. The use of these systems can infringe on people’s right to conduct their daily lives in privacy and with respect for their fundamental freedoms. It can prevent them from participating fully in democratic activities, violate their right to equality and much more.

The gathering and use of biometric data for remote identification purposes, for instance through deployment of facial recognition in public places, carries specific risks for fundamental rights.

European Commission, White Paper on Artificial Intelligence

This has happened in the absence of proper public debate on what facial recognition means for our societies, how it amplifies existing inequalities and violations, and whether it fits with our conceptions of democracy, freedom, equality and social justice.

Considering the high risk of abuse, discrimination and violation of fundamental rights to privacy and data protection, the EU and its Member States must develop a strong, privacy-protective approach to all forms of biometric surveillance. In this document pool we will be listing relevant articles and documents related to the issue of facial and biometric recognition. This will allow you to follow the developments of surveillance measures and regulatory actions in Europe.

EDRi’s analysis and recommendations
EDRi members’ actions and reporting
EDRi’s blogposts and press releases
Guidance from data protection authorities
Key dates and official documents
Other useful resources


EDRi’s analysis and recommendations

Available in April 2020


EDRi members’ actions and reporting


EDRi’s blogposts and press releases


Guidance from data protection authorities

Pan-European authorities:

National authorities:


Key dates* and official documents


Other useful resources


* subject to change

20 Mar 2020

EDRi calls for fundamental rights-based responses to COVID-19

By EDRi

The Coronavirus (COVID-19) disease poses a global public health challenge of unprecedented proportions. In order to tackle it, countries around the world need to engage in co-ordinated, evidence-based responses. Our responses should be grounded in solidarity, support and respect for human rights, as the Council of Europe Commissioner for Human Rights has highlighted. The use of high-quality data can support the vital work of scientists, researchers, and public health authorities in tracking and understanding the current pandemic.

However, some of the actions taken by governments and businesses under today’s exceptional circumstances can have significant repercussions on freedom of expression, privacy and other human rights, both now and in the future. We are already seeing legal initiatives to tackle misinformation, some of which involve disproportionate responses from governments. Similarly, we are witnessing a surge in emergency-related policy initiatives, some of which risk the abuse of sensitive personal data in an attempt to safeguard public health. Measures taken to address such a crisis must not be disproportionate or unnecessary, and it is vital that they are not extended once we are no longer in a state of emergency.

In these circumstances, European Digital Rights (EDRi) calls on the Member States and institutions of the European Union (EU) to ensure that, while taking public health measures to tackle COVID-19, they:

  • Strictly uphold fundamental rights: Under the European Convention on Human Rights, any emergency measures which may infringe on rights must be “temporary, limited and supervised” in line with the Convention’s Article 15, and cannot contradict international human rights obligations. Similar wording can be found in Article 52.1 of the EU Charter of Fundamental Rights. Actions to tackle the coronavirus using personal health data, geolocation data or other metadata must still be necessary, proportionate and legitimate, must have proper safeguards, and cannot excessively undermine the fundamental right to a private life.
  • Protect data for now and the future: Under the General Data Protection Regulation (GDPR) and the ePrivacy Directive, location data is personal data and is therefore subject to high levels of protection even when processed by public authorities or private companies. Location data revealing the movement patterns of individuals is notoriously difficult to anonymise, although many companies claim that they can do this. Data must be anonymised to the fullest extent, for example through aggregation and statistical counting. COVID-19 must not become an opportunity for private entities to profit; rather, it should be an opportunity for the EU’s Member States to adhere to the highest standards of data quality, processing and protection, with the guidance of national data protection authorities, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS).
  • Limit the purpose of data for COVID-19 crisis only: Under law, the data collected, stored and analysed in support of public health measures must not be retained or used outside the purpose of controlling the coronavirus situation.
  • Implement exceptional measures only for the duration of the crisis: The necessity and proportionality of exceptional measures taken during the COVID-19 crisis must be reassessed once the crisis has abated. Measures should be time-limited and subject to automatic review for renewal at short intervals.
  • Keep tools open: To preserve public trust, all technical measures to manage coronavirus must be transparent and must remain under public control. In practice, this means using free/open source software when designing public interest applications.
  • Condemn racism and discrimination: Measures taken should not lead to discrimination of any form, and governments must remain vigilant to the disproportionate harms that marginalised groups can face.
  • Defend freedom of expression and information: In order to take sensible, well-informed decisions, we need access to good-quality, trustworthy information. This means protecting the voices of human rights defenders, independent media, and health professionals more than ever. In addition to this, the increased use of automated tools to moderate content as a result of fewer human moderators being available needs to be carefully monitored. Moreover, a complete suspension of attention-driven advertising and recommendation algorithms should be considered to mitigate the spread of disinformation that is already ongoing.
  • Take a stand against internet shutdowns: During this crisis and beyond, an accessible, secure, and open internet will play a significant role in keeping us safe. Access for individuals, researchers, organisations and governments to accurate, reliable and correct information will save lives. Attempts by governments to cut or restrict access to the internet, block social media platforms or other communications services, or slow down internet speed will deny people vital access to accurate information, just when it is of paramount importance that we stop the spread of the virus. The EU and its Member States should call on governments to immediately end any and all deliberate interference with the right to access and share information, a human right and vital to any public health and humanitarian response to COVID-19.
  • Ensure companies do not exploit this crisis for their own benefit: Tech companies, and the private sector more broadly, need to respect existing legislation in their efforts to contribute to the management of this crisis. While innovation will hopefully have a role in mitigating the pandemic, companies should not abuse the extraordinary circumstances to monetise information at their disposal.

Read more:

16 Mar 2020

Terrorist Content Online Regulation: Time to get things right

By Diego Naranjo

I am convinced that the only effective way to tackle terrorism is firmly rooted in the respect of fundamental and human rights.

EU Security Union Commissioner Sir Julian King, 14 November 2016.

Closed-door negotiations (“trilogues”) on the Regulation to prevent the dissemination of terrorist content continue in Brussels. After our open letter in December, things moved on fairly slowly at first, but new texts are now being discussed quickly in an attempt to reach an agreement soon. Nonetheless, according to MEP Patrick Breyer, many key issues remain open for discussion.

The Regulation, heavily criticised in its original proposal by the EU Fundamental Rights Agency, the European Data Protection Supervisor (EDPS) and UN Special Rapporteurs because of its potential impact on privacy and freedom of expression, is one of the key pieces of legislation to be negotiated during 2020. If not done correctly, the Regulation could lead to the imposition of “terror filters” that take down legitimate content because filters cannot understand context, limit investigative journalism (more information here) and become an instrument for governmental authorities to suppress legitimate dissent under the pretext of the fight against terrorism.

The European Parliament successfully included in the Report from the Civil Liberties, Justice and Home Affairs (LIBE) Committee some of the main safeguards we demanded. This Report also represents the position of the Parliament as a whole in the present negotiations.

The negotiators from EU Member States and the European Parliament need to ensure that the final text keeps enough safeguards as proposed in the Parliament’s Report, paying special attention to the following:

  • The definitions in the Regulation need to be clearly aligned with those in the Terrorism Directive and include “intent” as a core criterion for defining what is “terrorist content”.
  • Competent authorities in Member States need to be independent from the executive, that is to say, unable to seek or take instructions from any other government body when issuing take-down orders. Otherwise, governments willing to crack down on dissenting voices may be tempted to use “terrorism” as an excuse to silence them.
  • Member State authorities should be able to have content removed directly only when the service provider is established in their jurisdiction. When the allegedly illegal terrorist content is hosted by a company in another Member State, the requesting Member State needs to ask that other State to remove the content. Extra-territorial enforcement of removal orders would otherwise circumvent rule of law mechanisms.
  • Referrals (suggestions by law enforcement authorities to check potential “terrorist” content against companies’ terms and conditions) need to be kept out of any future text to ensure the legal procedures are not subverted in the name of “efficiency”.
  • Terror filters (upload filters, re-upload filters or “proactive measures”) should not be imposed on companies, as this would breach the eCommerce Directive’s prohibition of general monitoring obligations and lead to the removal of legitimate content.
  • According to both the Parliament and the Council versions, all companies need to remove content within one hour. This rule does not take into consideration the limited capacities of smaller companies or of services provided by non-profit organisations, which cannot deal with such requests with the same capacity as internet giants. Even though it is unlikely that either institution will reverse a rule they both agreed on during previous negotiations, it is worth bearing in mind that the rule is likely to strengthen the big tech companies that are the only ones capable of dealing with those requests in such a short amount of time. Smaller services could be seriously harmed by the combined requirements of potential implementations of the Copyright Directive and this Regulation if the one-hour rule is not removed.

If this Regulation is to be adopted, policy makers need to ensure that it does not create the kind of uncertainty that other vertical legislation regulating online content is already creating. If the text does not take on board the voices of journalists, human rights groups, the EU Fundamental Rights Agency and three UN Special Rapporteurs, we risk setting a bad precedent for future evidence-based and human rights-centred legislation. Fortunately, there is still time to get things right. Contact your local digital rights organisation and see how you can support their work in the current state of affairs.

Read more:

Terrorist Online Content Regulation: Document Pool (21.11.2018)
https://edri.org/terrorist-content-regulation-document-pool/

Committee to Protect Journalists (11.03.2020)
https://cpj.org/2020/03/eu-online-terrorist-content-legislation-press-freedom.php

Human rights defenders are not terrorists, and their content is not propaganda (21.01.2020)
https://blog.witness.org/2020/01/human-rights-defenders-not-terrorists-content-not-propaganda/

Lifting the veil on the secretive EU terror filter negotiations: Here’s where we stand (09.03.2020)
https://www.patrick-breyer.de/?p=590541&lang=en

FRA and EDPS: Terrorist Content Regulation requires improvement for fundamental rights (20.02.2019)
https://edri.org/fra-edps-terrorist-content-regulation-fundamental-rights-terreg/

Terrorist Content Regulation – prior authorisation of all uploads? (21.11.2018)
https://edri.org/terrorist-content-regulation-prior-authorisation-for-all-uploads/

(Contribution by Diego Naranjo, EDRi)

11 Mar 2020

Stuck under a cloud of suspicion: Profiling in the EU

By Chloé Berthélémy

As facial recognition technologies are gradually rolled out in police departments across Europe, anti-racism groups are blowing the whistle on the discriminatory over-policing of racialised communities linked to the increasing use of new technologies by law enforcement agents. A report by the European Network Against Racism (ENAR) and the Open Society Justice Initiative analyses daily police practices supported by specific technologies – such as crime analytics, the use of mobile fingerprinting scanners, social media monitoring and mobile phone extraction – to uncover their disproportionate impact on racialised communities.

Besides these local and national policing practices, the European Union (EU) has also played an important role in developing police cooperation tools that are based on data-driven profiling. Exploiting the narrative according to which criminals abuse the Schengen and free movement area, the EU justifies the mass monitoring of the population and profiling techniques as part of its Security Agenda. Unfortunately, no proper democratic debate takes place before the technologies are deployed.

What is profiling in law enforcement?

Profiling is a technique whereby a large amount of data is extracted (“data mining”) and analysed (“processing”) to draw up patterns or types of behaviour that help classify individuals. In the context of security policies, some of these categories are then labelled as “presenting a risk” and needing further examination – either by a human or another machine. It thus works as a filter applied to the results of general monitoring of everyone, and lies at the root of predictive policing.
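
To make the mechanics concrete, below is a minimal, purely illustrative Python sketch of such a pipeline: historical records are mined for recurring attribute combinations, those combinations become “risk” patterns, and the patterns are then applied as a filter to everyone passing through the system. All field names, values and thresholds are invented for illustration and do not describe any real deployment.

```python
# Purely illustrative sketch of a profiling pipeline: mine patterns from
# historical data, turn them into "risk" rules, then filter everyone.
# All field names, values and thresholds are hypothetical.
from collections import Counter

historical_cases = [
    {"age_range": "18-25", "purpose": "tourism", "flagged": True},
    {"age_range": "35-45", "purpose": "business", "flagged": False},
    {"age_range": "18-25", "purpose": "tourism", "flagged": True},
    # ... in reality, millions of records ("data mining")
]

# "Processing": derive the attribute combinations most common among flagged cases.
pattern_counts = Counter(
    (case["age_range"], case["purpose"]) for case in historical_cases if case["flagged"]
)
risk_patterns = {pattern for pattern, count in pattern_counts.items() if count >= 2}

def presents_a_risk(person: dict) -> bool:
    """Apply the derived patterns as a filter to any individual."""
    return (person["age_range"], person["purpose"]) in risk_patterns

# The filter is then run against everyone, not only concrete suspects:
traveller = {"age_range": "18-25", "purpose": "tourism"}
print(presents_a_risk(traveller))  # True: flagged purely by group resemblance
```

The point of the sketch is the structural one made above: the individual is flagged not because of anything they did, but because their attributes resemble a pattern derived from past data.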

In Europe, data-driven profiling, used mostly for security purposes, spiked in the immediate wake of terrorist attacks such as the 2004 Madrid and 2005 London attacks. As a result, EU counter-terrorism and internal security policies – and their underlying policing practices and tools – are informed by racialised assumptions, including specifically anti-Muslim and anti-migrant sentiments, leading to racial profiling. Contrary to what security and law enforcement agencies claim, the technology is not immune to those discriminatory biases and is not objective in its endeavour to prevent crime.

European initiatives

The EU has been actively supporting profiling practices. First, the Anti-Money Laundering and Counter-Terrorism Directives oblige private actors such as banks, auditors and notaries to report suspicious transactions that might be linked to money laundering or terrorist financing, as well as to establish risk assessment procedures. “Potentially risky” profiles are built on risk factors which are not always chosen objectively, but rather based on racialised prejudice about what constitutes an “abnormal financial activity”. As a consequence, migrants, cross-border workers and asylum seekers are usually over-represented among the individuals matching these profiles.

Another example is the Passenger Name Record (PNR) Directive of 2016. The Directive requires airline companies to collect the personal data of people travelling from EU territory to third countries and to share it among all EU Member States. The aim is to identify certain categories of passengers as “high-risk passengers” who need further investigation. There are ongoing discussions on the possibility of extending this system to rail and other public transport.

More recently, the multiplication of EU databases in the field of migration control and their interconnection have facilitated the incorporation of profiling techniques to analyse and cherry-pick “good” candidates. For example, the Visa Information System, whose reform is currently on a fast track, consists of a database that currently holds up to 74 million short- and long-stay visa applications, which are run against a set of “risk indicators”. Such “risk indicators” consist of a combination of data including the age range, sex, nationality, the country and city of residence, the EU Member State of first entry, the purpose of travel, and the current occupation. The same logic is applied in the European Travel Information and Authorisation System (ETIAS), a tool slated for 2022 aimed at gathering data about third-country nationals who do not require a visa to travel to the Schengen area. The risk indicators used in that system also aim at “pointing to security, illegal immigration or high epidemic risks”.
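
As a purely hypothetical illustration of how screening against such “risk indicators” works in principle, the sketch below checks an application against a small set of made-up indicators built from the kinds of fields listed above (age range, nationality, purpose of travel, occupation). The indicators and data are invented; no real system’s rules are reproduced here.

```python
# Illustrative only: screening an application against hypothetical "risk
# indicators" of the kind described above. All indicators and data are invented.
from dataclasses import dataclass

@dataclass
class Application:
    age_range: str
    sex: str
    nationality: str
    residence_country: str
    first_entry_state: str
    purpose_of_travel: str
    occupation: str

# A "risk indicator" is simply a combination of attribute values.
risk_indicators = [
    {"age_range": "18-25", "purpose_of_travel": "tourism"},
    {"nationality": "X", "occupation": "student"},
]

def matches(application: Application, indicator: dict) -> bool:
    return all(getattr(application, field) == value for field, value in indicator.items())

def screen(application: Application) -> str:
    if any(matches(application, indicator) for indicator in risk_indicators):
        return "flagged for further examination"
    return "cleared"

app = Application("18-25", "F", "X", "Y", "Z", "tourism", "student")
print(screen(app))  # "flagged for further examination"
```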

Why are fundamental rights in danger?

Profiling practices rely on the massive collection and processing of personal data, which represents a great risk to the rights to privacy and data protection. Since most policing instruments pursue a public security interest, they are considered legitimate. However, few actually meet transparency and accountability requirements, and they are thus difficult to audit. The essential legality tests of necessity and proportionality prescribed by the EU Charter of Fundamental Rights cannot be carried out: only a concrete danger – not the potentiality of one – can justify interferences with the rights to respect for private life and data protection.

In particular, the criteria used to determine which profiles need further examination are opaque and difficult to evaluate. What categories and what data are being selected and evaluated, and by whom? Regarding the ETIAS system, the EU Fundamental Rights Agency stressed that it was unclear whether risk indicators could be used without discriminating against certain categories of people in transit, and therefore recommended postponing the use of profiling techniques. Generalising about entire groups of persons on specific grounds is definitely something to check against the right to non-discrimination. Further, it is troubling that the evaluation and monitoring of profiling practices are entrusted to “advisory and guiding boards” hosted by law enforcement agencies such as Frontex. Excluding data protection supervisory authorities and democratic oversight bodies from this process is very problematic.

Turning neutral features or conduct into signs of an undesirable or even mistrusted profile can have dramatic consequences for people’s lives. Having your features match a “suspicious profile” can lead to restrictions of your rights. For example, in the area of counter-terrorism, your right to effective remedies and a fair trial can be hampered, as you are usually not aware that you have been placed under surveillance as a result of a match in the system, and you find yourself unable to contest such a measure.

As law enforcement agencies across Europe increasingly engage in profiling practices, it is crucial that substantive safeguards are put in place to mitigate the many dangers they entail for individuals’ rights and freedoms.

Data-driven policing: the hardwiring of discriminatory policing practices across Europe (19.11.2019)
https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf

New legal framework for predictive policing in Denmark (22.02.2017)
https://edri.org/new-legal-framework-for-predictive-policing-in-denmark/

Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status (14.11.2019)
https://www.statewatch.org/analyses/Data-Protection-Immigration-Enforcement-and-Fundamental-Rights-Full-Report-EN.pdf

Preventing unlawful profiling today and in the future: a guide (14.12.2018)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-preventing-unlawful-profiling-guide_en.pdf

(Contribution by Chloé Berthélémy, EDRi)

11 Mar 2020

Facebook starts to increase transparency in political ads in the Balkans

By SHARE Foundation

Facebook has announced that it will expand its transparency system and confirmation of the authenticity of ads about elections and politics starting from mid-March. Facebook will cover 32 additional countries, including Serbia and North Macedonia, where elections are to take place very soon.

This turn of events follows the efforts of EDRi member SHARE Foundation and its international partners to point out to Facebook representatives that Western Balkans countries were excluded from those where Facebook actively monitors political advertising. This issue is very important in light of the election campaigns in Serbia and North Macedonia, having in mind possible manipulations, the lack of transparency of the funding of ads, and the use of non-political pages to advertise for political purposes.

Facebook will in this manner expand the transparency of political advertising on its main social networking platform and on Instagram in the abovementioned countries. Until now, such policies were implemented mainly because of suspicions of foreign interference in election processes during the 2016 US presidential elections and Brexit referendum. The Cambridge Analytica scandal, in which the data of tens of millions of citizens was leaked and pressure from states followed, also pushed Facebook to improve the transparency of its platform.

The Facebook Ad Library will provide access to information on total advertising expenditure and the number of ads, as well as data about specific ads – the demographic target group of the ad, the geographic scope of that ad, and so on. In order to enable analysis of political advertising, Facebook will give researchers, journalists and the public access to the Ad Library API. In addition, by the end of April, it will be possible to download a report for the 32 new countries with aggregated data on ads about elections and politics.
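
For researchers who want to use it, querying the Ad Library API is essentially an authenticated HTTP request to Facebook’s Graph API. The sketch below shows roughly what such a query might look like; the API version, parameter and field names are assumptions based on Facebook’s public documentation at the time and should be checked against the current documentation before use.

```python
# Rough sketch of how a researcher might query the Ad Library API over HTTP.
# Endpoint version, parameter and field names are assumptions; verify against
# Facebook's current Graph API documentation before relying on them.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # issued after identity verification

params = {
    "search_terms": "election",
    "ad_reached_countries": "RS",            # e.g. Serbia
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "fields": "page_name,ad_delivery_start_time,spend,demographic_distribution",
    "access_token": ACCESS_TOKEN,
}

response = requests.get("https://graph.facebook.com/v6.0/ads_archive", params=params)
response.raise_for_status()

for ad in response.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"))
```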

All actors, including political parties, candidates and other organisations wishing to post ads about elections or politics on Facebook and Instagram, will be required to register as advertisers, so that it can be seen who paid for the advertisements. Advertisers must also confirm their identity with official documents issued by the state where they wish to publish ads, and provide additional information such as a local address, telephone number, email and website if they wish to use the name of a Facebook page or organisation in the disclaimer. If they do not register, Facebook may restrict their posting of ads about politics and elections during the verification process.

Facebook is starting to follow electoral and political advertising in the Balkans (09.03.2020)
https://www.sharefoundation.info/en/facebook-is-starting-to-follow-electoral-and-political-advertising-in-the-balkans/

(Contribution by Bojan Perkov, EDRi member SHARE Foundation)

11 Mar 2020

Germany: Invading refugees’ phones – security or population control?

By Gesellschaft für Freiheitsrechte

In its new study, EDRi member Society for Civil Rights (GFF) examines how German authorities search refugees’ phones. The stated aim of “data carrier evaluation” is to determine a person’s identity and their country of origin. However, in reality, it violates refugees’ rights and does not produce any meaningful results.

If an asylum seeker in Germany cannot present either a passport or documents replacing it, the Federal Office for Migration and Refugees (BAMF) is authorised to carry out a “data carrier evaluation” – to extract and analyse data from the asylum seeker’s phone and other devices to check their owner’s stated origin and identity. The data analysed includes the country codes of their contacts, incoming and outgoing calls and messages, browser history, geodata from photos, as well as email addresses and usernames used in applications such as Facebook, booking.com or dating apps. Notably, BAMF carries out this data analysis regardless of any concrete suspicion that the asylum seeker has made untruthful statements regarding their identity or country of origin.
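
To illustrate the kind of inference involved, the toy sketch below counts international dialling prefixes in a contact list to guess a “likely country of origin” – the sort of weak, context-blind signal the study criticises. The prefix table and contacts are invented; this is not BAMF’s software and says nothing about how its system is actually implemented.

```python
# Toy illustration of the kind of analysis described above: inferring a
# "likely country of origin" from the country codes of phone contacts.
# The prefix table and contact list are invented; such an inference is,
# as the study argues, very weak evidence of anything.
from collections import Counter

DIAL_PREFIXES = {"+49": "Germany", "+93": "Afghanistan", "+963": "Syria"}  # tiny sample

contacts = ["+49301234567", "+963111222333", "+963444555666", "+93700111222"]

def guess_origin(numbers):
    counts = Counter()
    for number in numbers:
        for prefix, country in DIAL_PREFIXES.items():
            if number.startswith(prefix):
                counts[country] += 1
    return counts.most_common(1)[0][0] if counts else "unknown"

print(guess_origin(contacts))  # "Syria" - a crude inference from contact metadata
```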

The study “Invading Refugees’ Phones: Digital Forms of Migration Control” examines and assesses how BAMF evaluates refugees’ data and what kinds of results data carrier evaluation has produced so far. For the study, journalist Anna Biselli and GFF lawyer Lea Beckmann comprehensively researched and evaluated numerous sources. These include data carrier evaluation reports, asylum files, internal BAMF regulations, such as a user manual for reading mobile data carriers, and training documents for BAMF employees, as well as information that was made public by parliamentary inquiries.

High costs, useless results

The study found that evaluating data carriers is not an effective way of establishing a person’s identity and country of origin. Since data carrier evaluations started in 2017, BAMF has examined about 20 000 mobile phones of asylum seekers. When invading refugees’ privacy via data carrier evaluation produces results, it usually only confirms what the persons themselves stated during their interviews with BAMF employees.

In 2018, only 2% of the successful data carrier evaluations revealed contradictions to the asylum seekers’ statements. Graphic: GFF/Julia Zé

There were already doubts about the effectiveness of data carrier evaluation before the law on Better Enforcement of the Obligation to leave the Country was passed. The law aims to speed up deportations. By introducing data carrier evaluations, legislators hoped to verify a person’s identity, country of origin and grounds for protection more quickly than before. In practice, the procedure has fallen short of these expectations. It has also turned out to be very expensive. 

In relation to the limited benefit of data carrier evaluations, the costs of the procedure are clearly disproportionate. In February 2017, the Federal Ministry of the Interior stated that installation costs of 3.2 million euros were to be expected. By the end of 2018, however, 7.6 million euros had already been spent on the system, more than twice as much as originally estimated.

Total costs of the BAMF for reading and evaluating data media: from just under 7 million euros in 2017 to an expected 17 million euros in 2022. Graph: GFF/Julia Zé

A blatant violation of fundamental rights

Examining refugees’ phones can be seen as a human rights violation. Despite that, Germany has spent millions of euros on introducing and developing this practice. Data carrier evaluations circumvent the basic right to informational self-determination, which has been laid down by the German Federal Constitutional Court. Refugees are subject to second-class data protection. At the same time, they are especially vulnerable and lack meaningful access to legal remedies.

Germany is not the only country to experiment with digital forms of migration control. BAMF’s approach is part of a broader, international trend towards testing new surveillance and monitoring technologies on marginalised populations, including refugees. Individual people, as well as their individual histories, are increasingly being reduced to data records. GFF will combat this trend with legal means: We are currently preparing legal action against the BAMF’s data carrier evaluation. 

We thank the Digital Freedom Fund for their support.

Gesellschaft für Freiheitsrechte (GFF, Society for Civil Rights)
https://freiheitsrechte.org/english/

Invading Refugees’ Phones: Digital Forms of Migration Control
https://freiheitsrechte.org/home/wp-content/uploads/2020/02/Study_Invading-Refugees-Phones_Digital-Forms-of-Migration-Control.pdf

The human rights impacts of migration control technologies (12.02.2020)
https://edri.org/the-human-rights-impacts-of-migration-control-technologies/

Immigration, iris-scanning and iBorderCTRL (26.02.2020)
https://edri.org/immigration-iris-scanning-and-iborderctrl/

(Contribution by EDRi member Gesellschaft für Freiheitsrechte – GFF, Germany)

11 Mar 2020

Accountable Migration Tech: Transparency, governance and oversight

By Petra Molnar

Migration continues to dominate headlines around the world. In the currently deteriorating situation at the border between Greece and Turkey, for example, with reports of increasingly repressive measures to turn people away, new technologies already play a part in border surveillance and decision-making.

Our previous two blogposts explored how far-reaching migration control technologies actually are. From refugee camps to border spaces to immigration hearing rooms, we are seeing the rise of automated decision-making tools replacing human officers making decisions about your migration journey. The use of these technologies also opens the door for violations of migrants’ rights.

How are these technologies of migration control impacting fundamental rights and what can we do about it?

Life and liberty

We should not underestimate the far-reaching impacts of new technologies on the lives and security of people on the move. The right to life and liberty is one of the most fundamental internationally protected rights, and highly relevant to migration and refugee contexts. Multiple technological experiments already impinge on the right to life and liberty. The starkest example is the denial of liberty when people are placed in detention. Immigration detention is highly discretionary. The justification of increased incarceration on the basis of algorithms that have been tampered with, such as at the US-Mexico border, shows just how far we are willing to go in justifying incursions on basic human rights under the guise of national security and border enforcement. Errors, mis-calibrations, and deficiencies in training data can result in profound infringements of the safety, security, and liberty of migrants when they are placed in unlawful detention. For example, aspects of training data which are mere coincidences in reality may be treated as relevant patterns by a machine-learning system, leading to outcomes which are considered arbitrary. This is one reason why the EU General Data Protection Regulation (GDPR) requires the ability to demonstrate that the correlations applied in algorithmic decision-making are “legitimate justifications for the automated decisions”.

Equality rights and freedom from discrimination

Equality and freedom from discrimination are integral to human dignity, particularly in situations where negative inferences against marginalised groups are frequently made. Algorithms are vulnerable to the same decision-making concerns that plague human decision-makers: transparency, accountability, discrimination, bias, and error. The opaque nature of immigration and refugee decision-making creates an environment ripe for algorithmic discrimination. Decisions in this system – from whether a refugee’s life story is “truthful” to whether a prospective immigrant’s marriage is “genuine” – are highly discretionary, and often hinge on assessment of a person’s credibility. In the experimental use of AI lie detectors at EU airports, what will constitute truthfulness and how will differences in cross-cultural communication be dealt with in order to ensure that problematic inferences are not encoded and reinforced into the system? The complexity of migration – and the human experience – is not easily reducible to an algorithm.

Privacy rights

Privacy is not only a consumer or property interest: it is a human right, rooted in foundational democratic principles of dignity and autonomy. We must consider the differential impacts of privacy infringements when looking at the experiences of people on the move. If collected information is shared with repressive governments from whom refugees are fleeing, the ramifications can be life-threatening. Or, if automated decision-making systems designed to predict a person’s sexual orientation are infiltrated by states targeting the LGBTQI+ community, discrimination and threats to life and liberty will likely occur. A facial recognition algorithm developed at Stanford University already tried to discern a person’s sexual orientation from photos. This use of technology has particular ramifications in the refugee and immigration context, where asylum applications based on sexual orientation grounds often rely on having to prove one’s persecution based on outdated tropes around non-heteronormative behaviour. This is why protecting people’s privacy is paramount for their safety, security, and well-being.

Procedural justice

When we talk about human rights of people on the move, we must also consider procedural justice principles that affect how a person’s application is reviewed, assessed, and what due process looks like in an increasingly automated context.

For example, in immigration and refugee decision-making, procedural justice dictates that the person affected by administrative processes has a right to be heard, the right to a fair, impartial and independent decision-maker, the right to reasons – also known as the right to an explanation – and the right to appeal an unfavourable decision. However, it is unclear how administrative law will handle the augmentation or even replacement of human decision-makers by algorithms.

While these technologies are often presented as tools to be used by human decision-makers, the line between machine-made and human-made decision-making is often unclear. Given the persistence of automation bias, or the predisposition towards considering automated decisions as more accurate and fair, what rubrics will human decision-makers use to determine how much weight to place on the algorithmic predictions, as opposed to any other information available to them, including their own judgment and intuition? When things go wrong and you wish to challenge an algorithmic decision, how will we decide what counts as a reasonable decision? It’s not clear how tribunals and courts will deal with automated decision-making, what standards of review will be used, and what redress or appeal will look like for people wishing to challenge incorrect or discriminatory decisions.

What we need: Context-specific governance and oversight

Technology replicates power in society, and its benefits are not experienced equally. Yet no global regulatory framework currently exists to oversee the use of new technologies in the management of migration. Much of technological development occurs in so-called “black boxes”, where intellectual property laws and proprietary considerations shield the public from fully understanding how the technology operates.

While conversations around the ethics of Artificial Intelligence (AI) are taking place, ethics do not go far enough. We need a sharper focus on oversight mechanisms grounded in fundamental human rights that recognise the high risk nature of developing and deploying technologies of migration control. Affected communities must also be involved in these conversations. Rather than developing more technology “for” or “about” refugees and migrants and collecting vast amounts of data, people who have themselves experienced displacement should be at the centre of discussions on when and how emerging technologies should be integrated into refugee camps, border security, or refugee hearings – if at all.
As a starting point, states and international organisations developing and deploying migration control technologies should, at a minimum:

  • commit to transparency and report publicly what technology is being developed and used and why;
  • adopt binding directives and laws that comply with internationally protected fundamental human rights obligations that recognise the high risk nature of migration control technologies;
  • establish an independent body to oversee and review all use of automated technologies in migration management;
  • foster conversations between policymakers, academics, technologists, civil society, and affected communities on the risks and promises of using new technologies.

Stay tuned for updates on our AI and migration project over the next couple of months as we document the lived experiences of people on the move who are affected by technologies of migration control. If you are interested in finding out more about this project or have feedback and ideas, please contact petra.molnar [at] utoronto [dot] ca.

Mozilla Fellow Petra Molnar joins us to work on AI & discrimination (26.09.2019)
https://edri.org/mozilla-fellow-petra-molnar-joins-us-to-work-on-ai-and-discrimination/

The human rights impacts of migration control technologies (12.02.2020)
https://edri.org/the-human-rights-impacts-of-migration-control-technologies/

Immigration, iris-scanning and iBorderCTRL (26.02.2020)
https://edri.org/immigration-iris-scanning-and-iborderctrl/

Introducing De-Carceral Futures: Bridging Prison and Migrant Justice – Editors’ Introduction: Detention, Prison, and Knowledge Translation in Canada and Beyond
http://carfms.org/introducing-de-carceral-futures/

The Privatization of Migration Control (24.02.2020)
https://www.cigionline.org/articles/privatization-migration-control

Law and Autonomous Systems Series: Automated Decisions Based on Profiling – Information, Explanation or Justification? That is the Question! (27.04.2018)
https://www.law.ox.ac.uk/business-law-blog/blog/2018/04/law-and-autonomous-systems-series-automated-decisions-based-profiling

Briefing: A manufactured refugee crisis at the Greek-Turkish border (04.03.2020)
https://www.thenewhumanitarian.org/analysis/2020/03/04/refugees-greece-turkey-border

Clearview’s Facial Recognition App Has Been Used By The Justice Department, ICE, Macy’s, Walmart, And The NBA (27.02.2020)
https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement

Why faces don’t always tell the truth about feelings (26.02.2020)
https://www.nature.com/articles/d41586-020-00507-5

(Contribution by Petra Molnar, Mozilla Fellow, EDRi)

11 Mar 2020

Security Information Service wins the Czech Big Brother Awards

By Iuridicum Remedium

The Czech Big Brother Awards (BBA) 2019 winners are the Czech Security Information Service (BIS), the antivirus company Avast, and the energy company PRE. The positive Edward Snowden Award went to the city of Prague.

The awards were given by EDRi member Iuridicum Remedium (IuRe) during a press conference on 4 March 2020. It was the 15th annual edition of the Awards, and the winners were chosen from public nominations by the nine members of the jury.

The BIS was selected for a law amendment that it prepared and that was approved. Under the amendment, intelligence services can create databases of digital photographs downloaded from various state registers and use them for face recognition.

The Czech company Avast received the BBA for the sale of its clients’ data. The jury highlighted the problem of incorrectly anonymised personal data, which allowed specific people to be re-identified by connecting the sold data with personal data from other sources.

Energy company PRE received the award for its long-standing recording of client conversations in its branches, and for violations of the EU General Data Protection Regulation (GDPR) and the Labour Code through the monitoring of employees and false information about it.

The “Big Brother Statement” prize was given to Christian Democrat (KDU-CSL) deputy Vit Kankovsky for a statement made while presenting legislation in the Chamber of Deputies. Under this legislation, the Czech Office for Personal Data Protection would be unable to punish civil service bodies and local authorities over GDPR violations.

The only positive award, the “Edward Snowden Award”, was given to the City of Prague for its refusal to introduce face recognition technology in the city’s CCTV system.

The Big Brother Awards is an event which seeks to highlight violations of our privacy, especially with regard to new methods of surveillance associated with the development of technology. Since 1998, the Big Brother Awards have been organised in a number of countries around Europe – in some countries, the Awards are a new initiative, while in many others a solid tradition has been established and the BBA has become an annual event. Thanks to the BBA events, information about the most striking violations in the field of privacy is shared with the broader public.

Iuridicum Remedium – IuRe
http://www.iure.org/

Czech Big Brother Awards
https://bigbrotherawards.cz/

Czech firms Avast, PRE, and BIS receive satirical Big Brother 2019 Awards (04.03.2020)
https://news.expats.cz/weekly-czech-news/czech-firms-avast-pre-and-bis-receive-satirical-big-brother-2019-awards/

Big Brother Awards International
http://www.bigbrotherawards.org/

(Contribution by EDRi member Iuridicum Remedium – IuRE, Czech Republic)

11 Mar 2020

Who should decide what we see online?

By Access Now

Online platforms rank and moderate content without letting us know how and why they do it. There is a pressing need for transparency of the practices and policies of these online platforms.

Our lives are closely intertwined with technology. One obvious example is how we browse, read, and communicate online. In this article we discuss two methods companies use to deliver content to us: ranking and moderation.

Ranking content

Platforms use automated measures for ranking and moderating the content we upload. When you search for those cat videos during lulls at work, your search results won’t include every cat video online. The results depend on your location, your language settings, your recent searches, and all the data the search engine possesses about you.
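
As a purely hypothetical illustration of what “the result depends on your data” can mean, the sketch below scores the same set of videos differently depending on a user profile. The signals and weights are invented; real ranking systems are vastly more complex and opaque.

```python
# Hypothetical illustration of personalised ranking: the same query returns
# differently ordered results depending on the data held about the user.
# Signals and weights are invented; real systems are far more complex.
videos = [
    {"title": "Cat video A", "language": "en", "topic": "cats", "popularity": 0.9},
    {"title": "Cat video B", "language": "de", "topic": "cats", "popularity": 0.7},
    {"title": "Cat video C", "language": "en", "topic": "dogs", "popularity": 0.8},
]

user_profile = {"language": "de", "recent_topics": {"cats"}, "location": "Berlin"}

def score(video, profile):
    s = video["popularity"]                    # engagement-driven baseline
    if video["language"] == profile["language"]:
        s += 0.3                               # language settings
    if video["topic"] in profile["recent_topics"]:
        s += 0.5                               # recent searches
    return s

ranked = sorted(videos, key=lambda v: score(v, user_profile), reverse=True)
print([v["title"] for v in ranked])  # the order changes with the user profile
```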

Services curate and rank content while predicting our personal preferences and online behaviors. This way, they influence not only our access to information, but also how we form our opinions and participate in public discourse. By predicting our preferences, they also shape them and slowly change our online behavior.

They have a crucial role in determining what we read and watch. It’s like being in a foreign country on a tour where only the guide speaks the language, and the guide gets to choose what you see and who you talk to. Similarly, these online services decide what you see. By amplifying and quantifying the popularity of certain types of sensational content that boost engagement, accompanied by the often unpredictable side effects of algorithmic personalisation, content ranking has become a commodity from which the platforms benefit. Moreover, this may lead to manipulation of your freedom to form an opinion. The freedom to form an opinion is an absolute right, which means that no interference with it is permitted by law, nor can any be accepted in a democratic society.

The automated curation of our content determines what type of information we receive and strongly impacts how much time we spend browsing the platform. Most of us don’t have enough information about how recommendation algorithms shape the hierarchisation of content on the internet, and many don’t even know that ranking exists. Meaningful transparency in curation mechanisms is a precondition for enabling user agency over the tools that help to shape our informational landscape. We need to know when we are subjected to automated decision-making, and we have the right not only to an explanation but also to object to it. In order to regain our agency over content curation, we need online platforms to implement meaningful transparency requirements. Robust transparency and explainability of automated measures are preconditions for exercising our right to freedom of speech, so that we can effectively appeal against undue content restrictions.

Content moderation

Online platforms curate and moderate to help deliver information. They also do so because EU and national lawmakers impose more and more responsibility on them to police content uploaded by users, often under threat of heavy fines. According to the European legal framework, platforms are obliged to swiftly remove illegal content, such as child abuse material or terrorist content, once they are aware of its existence. We all agree that access to illegal content should be prevented. However, in some cases the illegality of a piece of content is very difficult to assess and requires a proper legal evaluation. For instance, a video may either violate copyright or be freely re-uploaded if it is used as a parody.

Drawing the line between illegal and legal can be challenging. The tricky part is that, due to the scale of managing content, online platforms rely on automated decision-making tools as an ultimate solution to this very complex task. To avoid responsibility, platforms use automation to filter out any possibly illegal content. But we can’t rely exclusively on these tools – we need safeguards and human intervention to control automation.
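
A minimal sketch of the kind of safeguard argued for here, built on assumed placeholder components: automation handles only high-confidence, manifestly illegal material, while anything uncertain is routed to a human reviewer instead of being silently filtered out.

```python
# Minimal sketch of human-in-the-loop moderation: automated removal only for
# high-confidence, manifestly illegal content; everything uncertain goes to a
# human review queue instead of being filtered out. The classifier and
# thresholds are placeholders, not a real system.
from typing import Tuple

def classify(content: str) -> Tuple[str, float]:
    """Placeholder for an automated classifier returning (label, confidence)."""
    if "manifestly illegal marker" in content:
        return "illegal", 0.99
    return "possibly_illegal", 0.55

def moderate(content: str) -> str:
    label, confidence = classify(content)
    if label == "illegal" and confidence > 0.95:
        return "removed automatically"
    if label == "possibly_illegal":
        return "queued for human review"    # safeguard: a person decides
    return "kept online"

print(moderate("a parody video reusing copyrighted footage"))
# -> "queued for human review": context needs human/legal assessment
```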

What safeguards do we need?

Without a doubt, content moderation is an extremely difficult task. Every day, online platforms have to make tough choices and decide what pieces of content stay online and how we find them. The automated decision-making process is not likely to ever solve the social problems of hate speech, disinformation, or terrorism. While automation can work well for online content that is manifestly illegal irrespective of its context, such as child abuse material, it continues to fail in any area that is not strictly black and white. No tool should have the final say about protection of free speech or your private life.

As things stand now, online platforms rank and moderate content without letting us know how and why they do it. There is a pressing need for transparency in the practices and policies of these online platforms. They have to disclose information on how they respect our freedom of speech and what due-diligence mechanisms they have implemented. They have to be transparent about their everyday operations, their decision-making processes and implementation, as well as about their impact assessments and other policies that have an impact on our fundamental human rights.

Besides transparency, we also need properly elaborated complaint mechanisms and human intervention whenever there is an automated decision-making process. Without humans in the loop, without accessible and transparent appeal mechanisms, and without people being accountable for policies, there cannot be an effective remedy. If there is a chance that content has been removed incorrectly, then this needs to be checked by a real person who can decide whether the content was legal or not. We should also always have the right to bring the matter before a judge, who is legally qualified to make the final decision on any matter that may compromise our right to free speech.

Access Now
https://www.accessnow.org/

Who should decide what we see online? (20.02.2020)
https://www.accessnow.org/who-should-decide-what-we-see-online/

Can we rely on machines making decisions for us on illegal content? (26.02.2020)
https://edri.org/can-we-rely-on-machines-making-decisions-for-us-on-illegal-content/

A human-centric internet for Europe (19.02.2020)
https://edri.org/a-human-centric-internet-for-europe/

(Contribution by Eliška Pírková, EDRi member Access Now, and Eva Simon, Civil Liberties Union for Europe)
