20 Mar 2020

EDRi calls for fundamental rights-based responses to COVID-19

By EDRi

The Coronavirus (COVID-19) disease poses a global public health challenge of unprecedented proportions. In order to tackle it, countries around the world need to engage in co-ordinated, evidence-based responses. Our responses should be grounded in solidarity, support and respect for human rights, as the Council of Europe Commissioner for Human Rights has highlighted. The use of high-quality data can support the vital work of scientists, researchers, and public health authorities in tracking and understanding the current pandemic.

However, some of the actions taken by governments and businesses under exceptional circumstances today can have significant repercussions on freedom of expression, privacy and other human rights, both now and in the future. We are already seeing legal initiatives to tackle misinformation, some of which involve disproportionate reactions from governments. Similarly, we are witnessing a surge in emergency-related policy initiatives, some of them risking the abuse of sensitive personal data in an attempt to safeguard public health. Measures taken to address such a crisis must not be disproportionate or unnecessary, and it is vital that they are not extended once we are no longer in a state of emergency.

In these circumstances, EDRi calls on the Member States and institutions of the European Union (EU) to ensure that, while taking public health measures to tackle COVID-19, they:

  • Strictly uphold fundamental rights: Under the European Convention on Human Rights, any emergency measures which may infringe on rights must be “temporary, limited and supervised” in line with the Convention’s Article 15, and must not contradict international human rights obligations. Similar wording can be found in Article 52.1 of the EU Charter of Fundamental Rights. Actions to tackle coronavirus using personal health data, geolocation data or other metadata must still be necessary, proportionate and legitimate, must have proper safeguards, and must not excessively undermine the fundamental right to a private life.
  • Protect data for now and the future: Under the General Data Protection Regulation (GDPR) and the ePrivacy Directive, location data is personal data, and is therefore subject to high levels of protection even when processed by public authorities or private companies. Location data revealing movement patterns of individuals is notoriously difficult to anonymise, although many companies claim that they can do this. Data must be anonymised to the fullest extent, for example through aggregation and statistical counting (see the sketch after this list). COVID-19 must not become an opportunity for private entities to profit; rather, it should be an opportunity for EU Member States to adhere to the highest standards of data quality, processing and protection, with the guidance of national data protection authorities, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS).
  • Limit the purpose of data to the COVID-19 crisis only: Under law, the data collected, stored and analysed in support of public health measures must not be retained or used outside the purpose of controlling the coronavirus situation.
  • Implement exceptional measures only for the duration of the crisis: The necessity and proportionality of exceptional measures taken during the COVID-19 crisis must be reassessed once the crisis has abated. Measures should be time-limited and subject to automatic review for renewal at short intervals.
  • Keep tools open: To preserve public trust, all technical measures to manage coronavirus must be transparent and must remain under public control. In practice, this means using free/open source software when designing public interest applications.
  • Condemn racism and discrimination: Measures taken should not lead to discrimination of any form, and governments must remain vigilant to the disproportionate harms that marginalised groups can face.
  • Defend freedom of expression and information: In order to take sensible, well-informed decisions, we need access to good-quality, trustworthy information. This means protecting the voices of human rights defenders, independent media, and health professionals more than ever. In addition, the increased use of automated tools to moderate content, as a result of fewer human moderators being available, needs to be carefully monitored. Moreover, a complete suspension of attention-driven advertising and recommendation algorithms should be considered, to mitigate the already ongoing spread of disinformation.
  • Take a stand against internet shutdowns: During this crisis and beyond, an accessible, secure, and open internet will play a significant role in keeping us safe. Access for individuals, researchers, organisations and governments to accurate, reliable and correct information will save lives. Attempts by governments to cut or restrict access to the internet, block social media platforms or other communications services, or slow down internet speed will deny people vital access to accurate information, just when it is of paramount importance that we stop the spread of the virus. The EU and its Member States should call on governments to immediately end any and all deliberate interference with the right to access and share information, which is a human right and is vital to any public health and humanitarian response to COVID-19.
  • Companies should not exploit this crisis for their own benefit: Tech companies, and the private sector more broadly, need to respect existing legislation in their efforts to contribute to the management of this crisis. While innovation will hopefully have a role in mitigating the pandemic, companies should not abuse the extraordinary circumstances to monetise information at their disposal.
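
As an illustration of what “aggregation and statistical counting” can mean in practice, here is a minimal sketch (in Python, with invented example data) that counts how many distinct people were seen per region and hour, and suppresses any cell below a threshold so that no individual record leaves the aggregation step. It is not an anonymisation guarantee – as noted above, movement data is notoriously hard to anonymise – merely an illustration of the aggregation principle.

```python
# Minimal sketch of aggregation + statistical counting; all data is invented.
raw_pings = [
    ("u1", "Brussels", 9), ("u2", "Brussels", 9), ("u3", "Brussels", 9),
    ("u4", "Ghent", 9), ("u1", "Brussels", 10), ("u5", "Ghent", 10),
]

MIN_COUNT = 3  # suppression threshold: cells with fewer distinct people are dropped

def aggregate_presence(pings, min_count=MIN_COUNT):
    """Count distinct people per (region, hour) and drop small cells.

    Only aggregate counts leave this function; individual identifiers do not.
    """
    people_per_cell = {}
    for user_id, region, hour in pings:
        people_per_cell.setdefault((region, hour), set()).add(user_id)
    return {
        cell: len(users)
        for cell, users in people_per_cell.items()
        if len(users) >= min_count  # suppress cells that could single someone out
    }

print(aggregate_presence(raw_pings))  # {('Brussels', 9): 3} – small cells are suppressed
```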

16 Mar 2020

Terrorist Content Online Regulation: Time to get things right

By Diego Naranjo

I am convinced that the only effective way to tackle terrorism is firmly rooted in the respect of fundamental and human rights.

EU Security Union Commissioner Sir Julian King, 14 November 2016.

Closed-door negotiations (“trilogues”) on the Regulation to prevent the dissemination of terrorist content continue in Brussels. After our open letter in December, things moved on fairly slowly at first but, recently, new texts have been discussed quickly in an attempt to reach an agreement soon. Nonetheless, according to MEP Patrick Breyer, many key issues remain open for discussion.

The Regulation, heavily criticised in its original proposal by the EU Fundamental Rights Agency, the European Data Protection Supervisor (EDPS) and UN Special Rapporteurs because of its potential impact on privacy and freedom of expression, is one of the key pieces of legislation to be negotiated during 2020. If not done correctly, the Regulation could lead to the imposition of “terror filters” that could remove legitimate content because filters cannot understand context, limit investigative journalism, and become an instrument for governmental authorities to suppress legitimate dissent under the pretext of the fight against terrorism.

The European Parliament successfully included in the Report from the Civil Liberties, Justice and Home Affairs (LIBE) Committee some of the main safeguards we demanded. This Report also represents the position of the Parliament as a whole in the present negotiations.

The negotiators from EU Member States and the European Parliament need to ensure that the final text retains the safeguards proposed in the Parliament’s Report, paying special attention to the following:

  • The definitions in the Regulation need to be clearly aligned with those in the Terrorism Directive and include “intent” as a core criterion for defining what constitutes “terrorist content”.
  • Competent authorities in Member States need to be independent from the executive, that is, they must not be able to seek or take instructions from any other government body when issuing take-down orders. Otherwise, governments willing to crack down on dissenting voices may be tempted to use “terrorism” as an excuse to silence them.
  • Member State authorities should be able to have content removed directly only when the service providers are established in their jurisdiction. When the allegedly illegal terrorist content is hosted by a company in another Member State, the requesting Member State needs to ask that other State to remove the content. Otherwise, extra-territorial enforcement of removal orders would circumvent rule-of-law mechanisms.
  • Referrals (suggestions by law enforcement authorities to check potential “terrorist” content against companies’ terms and conditions) need to be kept out of any future text to ensure the legal procedures are not subverted in the name of “efficiency”.
  • Terror filters (upload filters, re-upload filters or “proactive measures”) should not be imposed on companies, as they would breach the prohibition of general monitoring obligations in the eCommerce Directive and lead to undesirable consequences for legitimate content.
  • According to both the Parliament and the Council versions, all companies need to remove content within one hour. This rule does not take into account the limited capacity of smaller companies or of services provided by non-profit organisations, which cannot handle such requests with the same resources as internet giants. Even though it is unlikely that either institution will reverse a rule they both agreed to in previous negotiations, it is worth bearing in mind that the rule is likely to strengthen the big tech companies that are the only ones capable of dealing with such requests in that very short amount of time. Smaller services could be seriously harmed by the combined requirements of potential implementations of the Copyright Directive and this Regulation if the one-hour rule is not removed.

If this Regulation is to be adopted, policy makers need to ensure that it does not lead to the kind of uncertainty that other vertical legislation regulating online content is creating. If the text does not take on board the voices of journalists, human rights groups, the EU Fundamental Rights Agency and three UN Special Rapporteurs, we risk setting a bad precedent for future evidence-based and human rights-centred legislation. Fortunately, there is still time to get things right. Contact your local digital rights organisation and see how you can support their work in the current state of affairs.

Read more:

Terrorist Online Content Regulation: Document Pool (21.11.2018)
https://edri.org/terrorist-content-regulation-document-pool/

Committee to Protect Journalists (11.03.2020)
https://cpj.org/2020/03/eu-online-terrorist-content-legislation-press-freedom.php

Human rights defenders are not terrorists, and their content is not propaganda (21.01.2020)
https://blog.witness.org/2020/01/human-rights-defenders-not-terrorists-content-not-propaganda/

Lifting the veil on the secretive EU terror filter negotiations: Here’s where we stand (09.03.2020)
https://www.patrick-breyer.de/?p=590541&lang=en

FRA and EDPS: Terrorist Content Regulation requires improvement for fundamental rights (20.02.2019)
https://edri.org/fra-edps-terrorist-content-regulation-fundamental-rights-terreg/

Terrorist Content Regulation – prior authorisation of all uploads? (21.11.2018)
https://edri.org/terrorist-content-regulation-prior-authorisation-for-all-uploads/

Contribution by Diego Naranjo, EDRi

11 Mar 2020

Stuck under a cloud of suspicion: Profiling in the EU

By Chloé Berthélémy

As facial recognition technologies are gradually rolled out in police departments across Europe, anti-racism groups blow the whistle on the discriminatory over-policing of racialised communities linked to the increasing use of new technologies by law enforcement agents. In a report by the European Network Against Racism (ENAR) and the Open Society Justice Initiative, daily police practices supported by specific technologies – such as crime analytics, the use of mobile fingerprinting scanners, social media monitoring and mobile phone extraction – are analysed, to uncover their disproportionate impact on racialised communities.

Beside these local and national policing practices, the European Union (EU) has also played an important role in developing police cooperation tools that are based on data-driven profiling. Exploiting the narrative according to which criminals abuse the Schengen and free movement area, the EU justifies the mass monitoring of the population and profiling techniques as part of its Security Agenda. Unfortunately, no proper democratic debate is taking place before the technologies are deployed.

What is profiling in law enforcement?

Profiling is a technique whereby a large amount of data is extracted (“data mining”) and analysed (“processing”) to identify patterns or types of behaviour that help classify individuals. In the context of security policies, some of these categories are then labelled as “presenting a risk” and needing further examination – either by a human or by another machine. It thus works as a filter applied to the results of a general monitoring of everyone, and it lies at the root of predictive policing.
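
To make the mechanism concrete, here is a minimal, purely illustrative sketch of rule-based risk scoring. Every attribute and every “risk indicator” below is invented; real systems combine many more data points and their indicators are largely undisclosed.

```python
from dataclasses import dataclass

@dataclass
class Traveller:
    # Illustrative attributes only; real systems combine many more data points.
    age: int
    nationality: str
    purpose_of_travel: str
    first_entry_state: str

# Invented "risk indicators": each rule returns True if the profile matches it.
RISK_INDICATORS = [
    lambda t: t.age < 30,
    lambda t: t.purpose_of_travel == "other",
    lambda t: t.nationality in {"X", "Y"},  # placeholder country codes
]

def risk_score(traveller: Traveller) -> int:
    """Count how many indicators the person matches; above a threshold,
    the file is flagged for 'further examination'."""
    return sum(rule(traveller) for rule in RISK_INDICATORS)

applicant = Traveller(age=24, nationality="X", purpose_of_travel="other", first_entry_state="EL")
if risk_score(applicant) >= 2:
    print("flagged for further examination")
```

Each attribute is neutral on its own; it is the chosen combination of indicators – and who chooses them – that turns a person into a “risk”.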

In Europe, data-driven profiling, used mostly for security purposes, spiked in the immediate wake of terrorist attacks such as the 2004 Madrid and 2005 London attacks. As a result, EU counter-terrorism and internal security policies – and their underlying policing practices and tools – are informed by racialised assumptions, including specifically anti-Muslim and anti-migrant sentiments, leading to racial profiling. Contrary to what security and law enforcement agencies claim, the technology is not immune to those discriminatory biases and is not objective in its endeavour to prevent crime.

European initiatives

The EU has been actively supporting profiling practices. First, the Anti-Money Laundering and Counter-Terrorism Directives oblige private actors such as banks, auditors and notaries to report suspicious transactions that might be linked to money laundering or terrorist financing, as well as to establish risk assessment procedures. “Potentially risky” profiles are created from risk factors which are not always chosen objectively, but rather based on racialised prejudice about what constitutes an “abnormal financial activity”. As a consequence, among individuals matching this profile, there is usually an over-representation of migrants, cross-border workers and asylum seekers.

Another example is the Passenger Name Record (PNR) Directive of 2016. The Directive requires airline companies to collect all personal data of people travelling from EU territory to third countries and to share it among all EU Member States. The aim is to identify certain categories of passengers as “high-risk passengers” who need further investigation. There are ongoing discussions on the possibility of extending this system to rail and other public transport.

More recently, the multiplication of EU databases in the field of migration control and their interconnection have facilitated the incorporation of profiling techniques to analyse and cherry-pick “good” candidates. For example, the Visa Information System – whose revision is currently on a fast track – is a database that holds up to 74 million short- and long-stay visa applications, which are run against a set of “risk indicators”. Such “risk indicators” consist of a combination of data including age range, sex, nationality, country and city of residence, the EU Member State of first entry, the purpose of travel, and the current occupation. The same logic is applied in the European Travel Information and Authorisation System (ETIAS), a tool slated for 2022 and aimed at gathering data about third-country nationals who do not require a visa to travel to the Schengen area. The risk indicators used in that system also aim at “pointing to security, illegal immigration or high epidemic risks”.

Why are fundamental rights in danger?

Profiling practices rely on the massive collection and processing of personal data, which represents a great risk to the rights to privacy and data protection. Since most policing instruments pursue a public security interest, they are considered legitimate. However, few actually meet transparency and accountability requirements, and they are thus difficult to audit. The essential legality tests of necessity and proportionality prescribed by the EU Charter of Fundamental Rights cannot be carried out: only a concrete danger – not the potentiality of one – can justify interferences with the rights to respect for private life and data protection.

In particular, the criteria used to determine which profiles need further examination are opaque and difficult to evaluate. What categories and what data are being selected and evaluated, and by whom? Regarding the ETIAS system, the EU Fundamental Rights Agency stressed that it was unclear whether risk indicators could be used without discriminating against certain categories of people in transit, and therefore recommended postponing the use of profiling techniques. Generalising about entire groups of persons on the basis of specific grounds is definitely something to check against the right to non-discrimination. Further, it is troubling that the evaluation and monitoring of profiling practices is entrusted to “advisory and guiding boards” hosted by law enforcement agencies such as Frontex. Excluding data protection supervisory authorities and democratic oversight bodies from this process is very problematic.

Turning several neutral features or behaviours into signs of an undesirable or even mistrusted profile can have dramatic consequences for individuals’ lives. Having your features match a “suspicious profile” can lead to restrictions of your rights. For example, in the area of counter-terrorism, your right to effective remedies and a fair trial can be hampered: you are usually not aware that you have been placed under surveillance as a result of a match in the system, and you therefore find yourself unable to contest such a measure.

As law enforcement agencies across Europe increasingly engage in profiling practices, it is crucial that substantive safeguards are put in place to mitigate the many dangers they entail for individuals’ rights and freedoms.

Data-driven policing: the hardwiring of discriminatory policing practices across Europe (19.11.2019)
https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf

New legal framework for predictive policing in Denmark (22.02.2017)
https://edri.org/new-legal-framework-for-predictive-policing-in-denmark/

Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status (14.11.2019)
https://www.statewatch.org/analyses/Data-Protection-Immigration-Enforcement-and-Fundamental-Rights-Full-Report-EN.pdf

Preventing unlawful profiling today and in the future: a guide (14.12.2018)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-preventing-unlawful-profiling-guide_en.pdf

(Contribution by Chloé Berthélémy, EDRi)

11 Mar 2020

Facebook starts to increase transparency in political ads in the Balkans

By SHARE Foundation

Facebook has announced that it will expand its transparency system and confirmation of authenticity of ads about elections and politics starting from mid-March. Namely, Facebook will cover 32 additional countries, including Serbia and North Macedonia, where elections are due to take place very soon.

This turn of events follows the efforts of EDRi member SHARE Foundation and its international partners to point out to representatives of Facebook the problem of Western Balkans countries being excluded from those where Facebook actively monitors political advertising. This issue is very important in light of the election campaigns in Serbia and North Macedonia, given possible manipulations, the lack of transparency in the funding of ads, and the use of non-political pages to advertise for political purposes.

Facebook will in this manner expand the transparency of political advertising on its main social networking platform and on Instagram in the abovementioned countries. Until now, such policies were implemented mainly because of suspicions of foreign interference in election processes during the US presidential elections and the Brexit referendum in 2016. The Cambridge Analytica scandal, in which the data of tens of millions of citizens was leaked and pressure from states followed, also pushed Facebook to improve the transparency of its platform.

The Facebook Ad Library will provide access to information on total advertising expenses and the number of ads, as well as data about specific ads – the demographic target group of an ad, its geographic scope, and so on. In order to enable analysis of political advertising, Facebook will give researchers, journalists and the public access to the Ad Library API. In addition, by the end of April, it will be possible to download a report for the new 32 countries with aggregated data on ads about elections and politics.
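
For researchers and journalists, access to this data goes through the Ad Library API. The sketch below shows roughly what such a query could look like in Python. The endpoint, parameter and field names follow Facebook’s public documentation at the time of writing but should be checked against the current API reference; the access token is a placeholder and a verified developer account is required.

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; obtained via a verified Facebook developer account

# Parameter and field names as documented for the Ad Library API at the time of
# writing; verify against the current API reference before relying on them.
params = {
    "search_terms": "election",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "RS",  # Serbia
    "fields": "page_name,ad_creative_body,spend,impressions,demographic_distribution",
    "access_token": ACCESS_TOKEN,
}

response = requests.get("https://graph.facebook.com/v6.0/ads_archive", params=params)
response.raise_for_status()
for ad in response.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"))
```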

All actors, including political parties, candidates and other organisations wishing to post ads about elections or politics on Facebook and Instagram, will be required to register as advertisers, so that it can be seen who paid for the advertisements. Advertisers will also need to confirm their identity with official documents issued by the state where they wish to publish ads, and to provide additional information such as a local address, telephone number, email and website if they wish to use the name of a Facebook page or organisation in the disclaimer. If they do not register, Facebook may restrict the posting of ads about politics and elections during the verification process.

Facebook is starting to follow electoral and political advertising in the Balkans (09.03.2020)
https://www.sharefoundation.info/en/facebook-is-starting-to-follow-electoral-and-political-advertising-in-the-balkans/

(Contribution by Bojan Perkov, EDRi member SHARE Foundation)

11 Mar 2020

Germany: Invading refugees’ phones – security or population control?

By Gesellschaft für Freiheitsrechte

In its new study, EDRi member Society for Civil Rights (GFF) examines how German authorities search refugees’ phones. The stated aim of “data carrier evaluation” is to determine a person’s identity and their country of origin. In reality, however, it violates refugees’ rights and does not produce any meaningful results.

If an asylum seeker in Germany cannot present either a passport or documents replacing it, the Federal Office for Migration and Refugees (BAMF) is authorised to carry out a “data carrier evaluation” – to extract and analyse data from the asylum seeker’s phone and other devices to check their owner’s stated origin and identity. Data that is analysed includes the country codes of their contacts, incoming and outgoing calls and messages, browser history, geodata from photos, as well as email addresses and usernames used in applications such as Facebook, booking.com or dating apps. Notably, BAMF carries out this data analysis regardless of any concrete suspicion that the asylum seeker made untruthful statements regarding their identity or country of origin.
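
To illustrate how shallow the resulting signal can be, here is a deliberately naive sketch of one of the analyses described above – counting the country prefixes of stored contacts to guess a country of origin. The contact list and prefix table are invented; BAMF’s actual tooling is not public.

```python
from collections import Counter

# Invented example data: phone numbers of stored contacts, with country prefixes.
contacts = ["+93 70 123 4567", "+93 79 555 0001", "+49 151 234 5678", "+90 532 000 1111"]

COUNTRY_PREFIXES = {"+93": "Afghanistan", "+49": "Germany", "+90": "Turkey"}  # tiny subset

def guess_origin(numbers):
    """Naive inference: the most frequent country prefix among contacts.

    At best this reflects where a person's acquaintances live – which is why,
    as the study found, it usually only confirms what the person already stated.
    """
    countries = Counter()
    for number in numbers:
        for prefix, country in COUNTRY_PREFIXES.items():
            if number.startswith(prefix):
                countries[country] += 1
    return countries.most_common(1)[0][0] if countries else "no result"

print(guess_origin(contacts))  # Afghanistan
```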

The study “Invading Refugees’ Phones: Digital Forms of Migration Control” examines and assesses how BAMF evaluates refugees’ data and what kinds of results data carrier evaluation has produced so far. For the study, journalist Anna Biselli and GFF lawyer Lea Beckmann comprehensively researched and evaluated numerous sources. These include data carrier evaluation reports, asylum files, internal BAMF regulations, such as a user manual for reading mobile data carriers, and training documents for BAMF employees, as well as information that was made public by parliamentary inquiries.

High costs, useless results

The study found that evaluating data carriers is not an effective way of establishing a person’s identity and country of origin. Since data carrier evaluations started in 2017, BAMF has examined about 20 000 mobile phones of asylum seekers. When invading refugees’ privacy via data carrier evaluation produces results at all, it usually only confirms what the persons themselves stated during their interviews with BAMF employees.

In 2018, only 2% of the successful data carrier evaluations revealed contradictions to the asylum seekers’ statements. Graphic: GFF/Julia Zé

There were already doubts about the effectiveness of data carrier evaluation before the law on Better Enforcement of the Obligation to leave the Country was passed. The law aims to speed up deportations. By introducing data carrier evaluations, legislators hoped to verify a person’s identity, country of origin and grounds for protection more quickly than before. In practice, the procedure has fallen short of these expectations. It has also turned out to be very expensive. 

In relation to the limited benefit of data carrier evaluations, the costs of the procedure are clearly disproportionate. In February 2017, the Federal Ministry of the Interior stated that installation costs of 3.2 million euros were to be expected. By the end of 2018, however, 7.6 million euros had already been spent on the system, more than twice as much as originally estimated.

Total costs of the BAMF for reading and evaluating data media: from just under 7 million euros in 2017 to an expected 17 million euros in 2022. Graph: GFF/Julia Zé

A blatant violation of fundamental rights

Examining refugees’ phones can be seen as a human rights violation. Despite that, Germany has spent millions of euros on introducing and developing this practice. Data carrier evaluations circumvent the basic right to informational self-determination, which has been laid down by the German Federal Constitutional Court. Refugees are subject to second-class data protection. At the same time, they are especially vulnerable and lack meaningful access to legal remedies.

Germany is not the only country to experiment with digital forms of migration control. BAMF’s approach is part of a broader, international trend towards testing new surveillance and monitoring technologies on marginalised populations, including refugees. Individual people, as well as their individual histories, are increasingly being reduced to data records. GFF will combat this trend with legal means: We are currently preparing legal action against the BAMF’s data carrier evaluation. 

We thank the Digital Freedom Fund for their support.

Gesellschaft für Freiheitsrechte (GFF, Society for Civil Rights)
https://freiheitsrechte.org/english/

Invading Refugees’ Phones: Digital Forms of Migration Control
https://freiheitsrechte.org/home/wp-content/uploads/2020/02/Study_Invading-Refugees-Phones_Digital-Forms-of-Migration-Control.pdf

The human rights impacts of migration control technologies (12.02.2020)
https://edri.org/the-human-rights-impacts-of-migration-control-technologies/

Immigration, iris-scanning and iBorderCTRL (26.02.2020)
https://edri.org/immigration-iris-scanning-and-iborderctrl/

(Contribution by EDRi member Gesellschaft für Freiheitsrechte – GFF, Germany)

11 Mar 2020

Accountable Migration Tech: Transparency, governance and oversight

By Petra Molnar

Migration continues to dominate headlines around the world. In the currently deteriorating situation at the border between Greece and Turkey, for example, with reports of increasingly repressive measures to turn people away, new technologies already play a part in border surveillance and decision-making.

Our previous two blogposts explored how far-reaching migration control technologies actually are. From refugee camps to border spaces to immigration hearing rooms, we are seeing the rise of automated decision-making tools replacing the human officers who make decisions about people’s migration journeys. The use of these technologies also opens the door to violations of migrants’ rights.

How are these technologies of migration control impacting fundamental rights and what can we do about it?

Life and liberty

We should not underestimate the far-reaching impacts of new technologies on the lives and security of people on the move. The right to life and liberty is one of the most fundamental internationally protected rights, and highly relevant to migration and refugee contexts. Multiple technological experiments already impinge on the right to life and liberty. The starkest example is the denial of liberty when people are placed in detention. Immigration detention is highly discretionary. The justification of increased incarceration on the basis of algorithms that have been tampered with, such as at the US-Mexico border, shows just how far we are willing to go in justifying incursions on basic human rights under the guise of national security and border enforcement. Errors, mis-calibrations, and deficiencies in training data can result in profound infringements of migrants’ safety, security, and liberty when they are placed in unlawful detention. For example, aspects of training data which are mere coincidences in reality may be treated as relevant patterns by a machine-learning system, leading to outcomes which are considered arbitrary. This is one reason why the EU General Data Protection Regulation (GDPR) requires the ability to demonstrate that the correlations applied in algorithmic decision-making are “legitimate justifications for the automated decisions”.
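
To make the point about coincidental patterns concrete, here is a small, fully synthetic sketch: a feature that has nothing to do with the person – say, an artefact of how the training sample was assembled – happens to correlate with the recorded outcome in that sample, and a simple model learns to rely on it. Everything below is invented for illustration; it is not a description of any deployed system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "case files": feature 0 is genuinely relevant; feature 1 correlates
# with the recorded outcome only because of how this sample was collected.
relevant = rng.normal(size=n)
coincidental = rng.normal(size=n)
outcome = (relevant + 0.9 * coincidental + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([relevant, coincidental]), outcome)
print("learned weights:", model.coef_)
# Both features receive substantial weight: the model cannot tell a causal factor
# from a sampling artefact, yet its score may be used to justify a decision such
# as detention.
```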

Equality rights and freedom from discrimination

Equality and freedom from discrimination are integral to human dignity, particularly in situations where negative inferences against marginalised groups are frequently made. Algorithms are vulnerable to the same decision-making concerns that plague human decision-makers: transparency, accountability, discrimination, bias, and error. The opaque nature of immigration and refugee decision-making creates an environment ripe for algorithmic discrimination. Decisions in this system – from whether a refugee’s life story is “truthful” to whether a prospective immigrant’s marriage is “genuine” – are highly discretionary, and often hinge on assessment of a person’s credibility. In the experimental use of AI lie detectors at EU airports, what will constitute truthfulness and how will differences in cross-cultural communication be dealt with in order to ensure that problematic inferences are not encoded and reinforced into the system? The complexity of migration – and the human experience – is not easily reducible to an algorithm.

Privacy rights

Privacy is not only a consumer or property interest: it is a human right, rooted in foundational democratic principles of dignity and autonomy. We must consider the differential impacts of privacy infringements when looking at the experiences of people on the move. If collected information is shared with repressive governments from whom refugees are fleeing, the ramifications can be life-threatening. Or, if automated decision-making systems designed to predict a person’s sexual orientation are infiltrated by states targeting the LGBTQI+ community, discrimination and threats to life and liberty will likely occur. A facial recognition algorithm developed at Stanford University already tried to discern a person’s sexual orientation from photos. This use of technology has particular ramifications in the refugee and immigration context, where asylum applications based on sexual orientation grounds often rely on having to prove one’s persecution based on outdated tropes around non-heteronormative behaviour. This is why protecting people’s privacy is paramount for their safety, security, and well-being.

Procedural justice

When we talk about human rights of people on the move, we must also consider procedural justice principles that affect how a person’s application is reviewed, assessed, and what due process looks like in an increasingly automated context.

For example, in immigration and refugee decision-making, procedural justice dictates that the person affected by administrative processes has a right to be heard, the right to a fair, impartial and independent decision-maker, the right to reasons – also known as the right to an explanation – and the right to appeal an unfavourable decision. However, it is unclear how administrative law will handle the augmentation or even replacement of human decision-makers by algorithms.

While these technologies are often presented as tools to be used by human decision-makers, the line between machine-made and human-made decision-making is often unclear. Given the persistence of automation bias, or the predisposition towards considering automated decisions as more accurate and fair, what rubrics will human decision-makers use to determine how much weight to place on the algorithmic predictions, as opposed to any other information available to them, including their own judgment and intuition? When things go wrong and you wish to challenge an algorithmic decision, how will we decide what counts as a reasonable decision? It’s not clear how tribunals and courts will deal with automated decision-making, what standards of review will be used, and what redress or appeal will look like for people wishing to challenge incorrect or discriminatory decisions.

What we need: Context-specific governance and oversight

Technology replicates power in society, and its benefits are not experienced equally. Yet no global regulatory framework currently exists to oversee the use of new technologies in the management of migration. Much of technological development occurs in so-called “black boxes”, where intellectual property laws and proprietary considerations shield the public from fully understanding how the technology operates.

While conversations around the ethics of Artificial Intelligence (AI) are taking place, ethics do not go far enough. We need a sharper focus on oversight mechanisms grounded in fundamental human rights that recognise the high risk nature of developing and deploying technologies of migration control. Affected communities must also be involved in these conversations. Rather than developing more technology “for” or “about” refugees and migrants and collecting vast amounts of data, people who have themselves experienced displacement should be at the centre of discussions on when and how emerging technologies should be integrated into refugee camps, border security, or refugee hearings – if at all.
As a starting point, states and international organisations developing and deploying migration control technologies should, at the minimum:

  • commit to transparency and report publicly what technology is being developed and used and why;
  • adopt binding directives and laws that comply with internationally protected fundamental human rights obligations that recognise the high risk nature of migration control technologies;
  • establish an independent body to oversee and review all use of automated technologies in migration management;
  • foster conversations between policymakers, academics, technologists, civil society, and affected communities on the risks and promises of using new technologies.

Stay tuned for updates on our AI and migration project over the next couple of months as we document the lived experiences of people on the move who are affected by technologies of migration control. If you are interested in finding out more about this project or have feedback and ideas, please contact petra.molnar [at] utoronto [dot] ca.

Mozilla Fellow Petra Molnar joins us to work on AI & discrimination (26.09.2020)
https://edri.org/mozilla-fellow-petra-molnar-joins-us-to-work-on-ai-and-discrimination/

The human rights impacts of migration control technologies (12.02.2020)
https://edri.org/the-human-rights-impacts-of-migration-control-technologies/

Immigration, iris-scanning and iBorderCTRL (26.02.2020)
https://edri.org/immigration-iris-scanning-and-iborderctrl/

Introducing De-Carceral Futures: Bridging Prison and Migrant Justice – Editors’ Introduction: Detention, Prison, and Knowledge Translation in Canada and Beyond
http://carfms.org/introducing-de-carceral-futures/

The Privatization of Migration Control (24.02.2020)
https://www.cigionline.org/articles/privatization-migration-control

Law and Autonomous Systems Series: Automated Decisions Based on Profiling – Information, Explanation or Justification? That is the Question! (27.04.2018)
https://www.law.ox.ac.uk/business-law-blog/blog/2018/04/law-and-autonomous-systems-series-automated-decisions-based-profiling

Briefing: A manufactured refugee crisis at the Greek-Turkish border (04.03.2020)
https://www.thenewhumanitarian.org/analysis/2020/03/04/refugees-greece-turkey-border

Clearview’s Facial Recognition App Has Been Used By The Justice Department, ICE, Macy’s, Walmart, And The NBA (27.02.2020)
https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement

Why faces don’t always tell the truth about feelings (26.02.2020)
https://www.nature.com/articles/d41586-020-00507-5

(Contribution by Petra Molnar, Mozilla Fellow, EDRi)

11 Mar 2020

Security Information Service wins the Czech Big Brother Awards

By Iuridicum Remedium

The Czech Big Brother Award (BBA) 2019 winners are the Czech Security Information Service (BIS), the antivirus company Avast, and the energy company PRE. The positive Edward Snowden prize went to the City of Prague.

The awards were given by EDRi member Iuridicum Remedium (IuRe) during a press conference on 4 March 2020. It was the 15th annual edition of the Awards, and the winners were chosen from public nominations by the nine members of the jury.

The BIS was selected for a law amendment that it prepared and that was approved. According to the amendment, intelligence services can create databases of digital photographs downloaded from various state registers and use them for face recognition.

Czech company Avast received the BBA for the sale of its clients’ data. The jury highlighted the problem of incorrectly anonymising personal data, which allowed specific people to be re-identified through the connection of sold data with personal data from other sources.

Energy company PRE received the award for its long-standing practice of recording conversations with clients at its branches, and for violations of the EU General Data Protection Regulation (GDPR) and the Labour Code through the monitoring of employees and false information about it.

The “Big Brother Statement” prize was given to Christian Democrat (KDU-CSL) deputy Vit Kankovsky for a statement made while presenting legislation in the Chamber of Deputies. Under this legislation, the Czech Office for Personal Data Protection would be unable to punish civil service and local authorities for GDPR violations.

The only positive prize, the “Edward Snowden Award”, was given to the City of Prague for its refusal to introduce face recognition technology in the city’s CCTV system.

The Big Brother Awards is an event which seeks to highlight violations of our privacy, especially with regard to new methods of surveillance associated with the development of technology. Since 1998, the Big Brother Awards have been organised in a number of countries around Europe – in some countries the Awards are a new initiative, while in many others a solid tradition has been established and the BBA has become an annual event. Thanks to the BBA events, information about the most striking violations in the field of privacy is shared with the broader public.

Iuridicum Remedium – IuRe
http://www.iure.org/

Czech Big Brother Awards
https://bigbrotherawards.cz/

Czech firms Avast, PRE, and BIS receive satirical Big Brother 2019 Awards (04.03.2020)
https://news.expats.cz/weekly-czech-news/czech-firms-avast-pre-and-bis-receive-satirical-big-brother-2019-awards/

Big Brother Awards International
http://www.bigbrotherawards.org/

(Contribution by EDRi member Iuridicum Remedium – IuRE, Czech Republic)

11 Mar 2020

Who should decide what we see online?

By Access Now

Online platforms rank and moderate content without letting us know how and why they do it. There is a pressing need for transparency of the practices and policies of these online platforms.

Our lives are closely intertwined with technology. One obvious example is how we browse, read, and communicate online. In this article we discuss two methods companies use to deliver you content: ranking and moderating.

Ranking content

Platforms use automated measures for ranking and moderating content we upload. When you search for those cat videos during lulls at work, your search result won’t offer every cat video online. The result depends on your location, your language settings, your recent searches, and all the data the search engine possesses about you.

Services curate and rank content while predicting our personal preferences and online behaviour. In this way, they influence not only our access to information, but also how we form our opinions and participate in public discourse. By predicting our preferences, they also shape them and slowly change our online behaviour.
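
A toy sketch of what such personalised ranking can look like is below. The signals and weights are entirely invented, but they mirror the kinds of inputs described above: language, past behaviour and predicted engagement.

```python
# Toy personalised ranking; every signal and weight is invented for illustration.
videos = [
    {"title": "cat video A", "language": "en", "predicted_watch_time": 40, "sensational": 0.1},
    {"title": "cat video B", "language": "fr", "predicted_watch_time": 55, "sensational": 0.7},
    {"title": "cat video C", "language": "en", "predicted_watch_time": 30, "sensational": 0.9},
]

user = {"language": "en", "recently_watched_sensational": True}

def score(video, user):
    s = video["predicted_watch_time"]            # optimise for time spent on the platform
    if video["language"] == user["language"]:
        s += 20                                  # personalisation: language match
    if user["recently_watched_sensational"]:
        s += 50 * video["sensational"]           # engagement: amplify what kept you hooked
    return s

for v in sorted(videos, key=lambda v: score(v, user), reverse=True):
    print(v["title"], score(v, user))
# The most sensational video wins – the ranking optimises engagement, not quality.
```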

They have a crucial role in determining what we read and watch. It’s like being in a foreign country on a tour where only the guide speaks the language, and the guide gets to choose what you see and who you talk to. Similarly, these online services decide what you see. By amplifying and quantifying the popularity of certain types of sensational content that boost engagement, accompanied by the often unpredictable side effects of algorithmic personalisation, content ranking has become a commodity from which the platforms benefit. Moreover, this may lead to manipulation of your freedom to form an opinion. The freedom to form an opinion, however, is an absolute right: no interference with it is permissible under law or acceptable in any democratic society.

The automated curation of our content determines what type of information we receive and strongly impacts how much time we spend browsing the platform. Most of us do not have enough information about how recommendation algorithms establish the hierarchy of content on the internet, and many do not even know that ranking exists. Meaningful transparency in curation mechanisms is a precondition for enabling user agency over the tools that help shape our informational landscape. We need to know when we are subjected to automated decision-making, and we have the right not only to an explanation but also to object to it. In order to regain our agency over content curation, we need online platforms to implement meaningful transparency requirements. Robust transparency and explainability of automated measures are preconditions for exercising our right to freedom of expression, so that we can effectively appeal against undue content restrictions.

Content moderation

Online platforms curate and moderate content to help deliver information. They also do so because EU and national lawmakers impose more and more responsibility on them to police content uploaded by users, often under the threat of heavy fines. Under the European legal framework, platforms are obliged to swiftly remove illegal content, such as child abuse material or terrorist content, once they are aware of its existence. We all agree that access to illegal content should be prevented. However, in some cases the illegality of a piece of content is very difficult to assess and requires a proper legal evaluation. For instance, a video may violate copyright, or it may be freely re-uploaded if it is used as parody.

Drawing the line between illegal and legal can be challenging. The tricky part is that, due to the scale at which content must be managed, online platforms rely on automated decision-making tools as the ultimate solution to this very complex task. To avoid responsibility, platforms use automation to filter out any possibly illegal content. But we cannot rely exclusively on these tools – we need safeguards and human intervention to control automation.
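
A deliberately naive sketch of such an automated filter is shown below; it illustrates why context-blind matching both over-removes and is easy to evade. Plain hashing of the file bytes stands in here for the proprietary matching techniques platforms actually use.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # Stand-in for proprietary content matching: hash the raw bytes of the upload.
    return hashlib.sha256(content).hexdigest()

# Fingerprints of material previously flagged as illegal or infringing.
blocklist = {fingerprint(b"<bytes of a copyrighted clip>")}

def automated_filter(upload: bytes) -> str:
    if fingerprint(upload) in blocklist:
        # The filter only sees that the bytes match; it cannot tell whether the
        # re-upload is a parody, news reporting or commentary, which may be lawful.
        return "blocked"
    return "allowed"

print(automated_filter(b"<bytes of a copyrighted clip>"))      # blocked, with no human review
print(automated_filter(b"<same clip inside a parody video>"))  # allowed – trivially evaded
```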

What safeguards do we need?

Without a doubt, content moderation is an extremely difficult task. Every day, online platforms have to make tough choices and decide what pieces of content stay online and how we find them. The automated decision-making process is not likely to ever solve the social problems of hate speech, disinformation, or terrorism. While automation can work well for online content that is manifestly illegal irrespective of its context, such as child abuse material, it continues to fail in any area that is not strictly black and white. No tool should have the final say about protection of free speech or your private life.

As we stand now, online platforms rank and moderate content without letting us know how and why they do it. There is a pressing need for transparency of the practices and policies of these online platforms. They have to disclose information on how they respect our freedom of speech and what due-diligence mechanisms they have implemented. They have to be transparent about their everyday operation, their decision-making process and implementation, as well as about their impact assessments and other policies that have an impact on our fundamental human rights.

Besides transparency, we also need properly elaborated complaint mechanisms and human intervention wherever there is an automated decision-making process. Without people involved, without accessible and transparent appeal mechanisms, and without people being accountable for policies, there cannot be an effective remedy. If there is a chance that content has been removed incorrectly, then this needs to be checked by a real person who can decide whether the content was legal or not. We should also always have the right to bring the matter before a judge, who is legally qualified to make the final decision on any matter that may compromise our right to free speech.

Access Now
https://www.accessnow.org/

Who should decide what we see online? (20.02.2020)
https://www.accessnow.org/who-should-decide-what-we-see-online/

Can we rely on machines making decisions for us on illegal content? (26.02.2020)
https://edri.org/can-we-rely-on-machines-making-decisions-for-us-on-illegal-content/

A human-centric internet for Europe (19.02.2020)
https://edri.org/a-human-centric-internet-for-europe/

(Contribution by Eliška Pírková, EDRi member Access Now, and Eva Simon, Civil Liberties Union for Europe)

11 Mar 2020

Welcoming our new Senior Policy Advisor Sarah Chander!

By EDRi

European Digital Rights is proud to announce that Sarah Chander has joined the team at the Brussels office as the new Senior Policy Advisor. Sarah will focus on AI, discrimination, adtech and hate speech, amongst other issues. In addition, she will explore intersections between EDRi’s work and other social issues.

Sarah has experience in social movements and civil society efforts and is excited to enter the digital rights field. Previously, she worked in advocacy at the European Network Against Racism (ENAR) on a wide range of topics including anti-discrimination law and policy, hate crime and speech, racial profiling, and diversity and inclusion. Before that, she worked on youth employment policy for the UK civil service. She has been actively involved in movements against immigration detention. She holds a master’s in Migration, Mobility and Development from SOAS, University of London, and a law degree from the University of Warwick.

At ENAR, Sarah began to explore how the anti-racism and digital rights fields intersect, in particular how tech is increasingly used by police and immigration control across Europe, and how AI is used in the field of recruitment. You can read more about this in her blog post on data racism, available in English and in German translation.

04 Mar 2020

E-evidence and human rights: The Parliament is not quite there yet

By Chloé Berthélémy

The European Parliament Committee on Civil Liberties (LIBE) is currently busy working out a compromise between its different political groups in order to establish a common position on the “e-evidence” Regulation. It is an important step in the legislative process, since the Parliament’s position will be the only bulwark standing between the proper protection of human rights in cross-border law enforcement and the Council’s and the Commission’s highly problematic e-evidence ideas.

To ensure that fundamental rights are protected when law enforcement authorities in an EU Member State act outside their own country, the Parliament compromise should do the following:

✅ Do: The authority in the executing Member State – and, where applicable, the affected State – should be obliged to confirm or reject an order before the online service provider can execute it. Some compromises on the table suggest that after a certain period of time (for now, 10 days) without a reaction from the executing authority, the service provider should simply assume a green light. This is a risky shortcut because it creates an incentive for an underfunded and understaffed executing authority to ignore requests and let the 10-day deadline pass without action. Given that many service providers are currently based in Ireland, whose law enforcement authorities are disproportionately flooded by these requests, this is a very real scenario which would practically annul many of the otherwise very important human rights safeguards built in by Member of the European Parliament (MEP) Birgit Sippel, Rapporteur for this file in the LIBE Committee. In practice, it would mean that the service providers become the ultimate safety net against abusive requests and fundamental rights breaches. The explicit approval of the executing authority must therefore be mandatory before an order can be executed.

❌ Don’t: Some compromise proposals imply that orders to access subscriber data would have no suspensive effect on the handing over of data. Those proposals assume that – if an order is found invalid at a later stage – the data could simply be declared inadmissible in court and be deleted by the issuing authority. In practice, however, this notion is misleading at best. Once an investigating police officer learns a suspect’s identity, they will not be able to unknow that identity just because the subscriber data they learned it from has been declared inadmissible and deleted. What is worse, if the knowledge of the identity itself were declared inadmissible, the whole investigation would collapse. In order to ensure legal certainty for investigating officers, again, an order should not be executed without the authorisation of the executing authority. This procedural requirement should apply to all types of data and orders.

✅ Do: Just as the service provider is given the possibility to refuse an order because it is manifestly abusive, the executing authority should be obliged to check that the order is proportionate as part of its refusal grounds checklist.

❌ Don’t: Considering the state of the rule of law in certain EU Member States, executing an order from a State where the independence of the judiciary is not guaranteed would be incredibly risky for the protection of fundamental rights. In line with the jurisprudence of the Court of Justice of the EU on the independence of judicial authorities, any executing authority should refuse to execute orders from States subject to Article 7 proceedings.

If adopted, these changes would strengthen the Parliament’s Report and ensure it is able to defend citizens’ rights during the e-evidence negotiations with the Commission and the Council.

EU Council’s general approach on “e-evidence”: From bad to worse (19.12.2019)
https://edri.org/eu-councils-general-approach-on-e-evidence-from-bad-to-worse/

Cross-border access to data for law enforcement: Document pool
https://edri.org/cross-border-access-to-data-for-law-enforcement-document-pool/
