02 Sep 2020

Down with (discriminating) systems

By EDRi

Amidst a particularly hectic time for digital rights policy in Europe, there remains a large elephant in the room. Europe is still pulsating with the repercussions of sustained global uprisings against racism following the murder of George Floyd. As the EU formulates its response in its upcoming ‘Action Plan on Racism’, EDRi outlines why it must address structural racism in technology as part of upcoming legislation.

Much of the discussion surrounding the ‘social’ side of technology focuses on the promise of a range of benefits that stem from digital innovation. The EU’s ongoing consultation on its AI legislation consistently reiterates the ‘wide array of economic and social benefits’ artificial intelligence can offer.

As our European societies increasingly wake up to the existence of structural racial inequality, we have to ask ourselves: who will really feel the benefits of innovation? And what costs to human dignity and life will automated decision-making systems bring? What impact do they already have on existing systems of racism and discrimination?

AI presents huge potential for exacerbating racism in society, at a scale and to a degree of opacity unlike discrimination perpetuated by humans. Automated decision-making systems have often been portrayed as neutral and ‘objective’, when in fact they neatly embed and amplify the underlying structural biases of our societies.

For example, increasing evidence demonstrates how new technologies developed and deployed in the field of law enforcement differentiate, target and experiment on communities at the margins. Another example is the increased use of both place-based and person-based “predictive policing” technologies to forecast where, and by whom, a narrow set of crimes is likely to be committed. These systems repeatedly score racialised communities with a higher likelihood of presumed future criminality; they include the Dutch Crime Anticipation System and the UK’s National Data Analytics Solution (‘NDAS’).

The various matrices (the Gangs Matrix, ProKid-12 SI, the NDAS) dedicated to monitoring and collecting data on future crime and ‘gangs’ target Black, Brown and Roma men and boys, revealing discriminatory patterns on the basis of race and class. Not only do such systems infringe on the presumption of innocence and the fundamental right to privacy, they codify the notion that if you are of a certain race, you are suspicious and need to be watched. Predictive policing systems redirect policing toward certain areas, increasing the likelihood of often fatal encounters with the police.

Racialised groups and undocumented people are already over-surveilled. In Europe, undocumented migrants are generally unable to avail themselves of data protection rights. This vulnerability is heightened by the development of mass-scale, interoperable repositories of biometric data to facilitate immigration enforcement.

How can we address structural racism perpetuated through technology and the digital space? With the Action Plan on Racism and upcoming legislation on the Digital Services Act and artificial intelligence (AI), EDRi argues that the link between racism and technology cannot be ignored. In order to make sure that the ‘benefits’ of technology are felt by all of our societies equally, the EU must apply a racial justice lens to its future digital legislation and policy.

In our briefing ‘Structural racism, digital rights and technology’ EDRi calls on the EU to prevent abuses towards racialised communities by legally restricting impermissible uses of artificial intelligence, such as predictive policing, biometric surveillance, and uses of AI at the border.

There are numerous examples of this to draw from. In 2019, the city of San Francisco banned the use of facial recognition technology by police after racial justice activists highlighted the harmful impacts of the technology. This year, the UN Special Rapporteur on contemporary forms of racism recommended that Member States prohibit the use of technologies with a racially discriminatory impact. EDRi member Foxglove, working with the Joint Council on the Welfare of Immigrants, took legal action and forced the Home Office to end its use of a racially discriminatory visa algorithm. There is a growing international consensus that racism perpetuated through technology must be halted with radical measures. The EU must follow suit.

Read more:

Structural Racism briefing https://edri.org/wp-content/uploads/2020/08/Structural-Racism-Digital-Rights-and-Technology_Final.pdf

AI recommendations paper https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf

Ban Biometrics Paper: https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

Foxglove: https://www.foxglove.org.uk/news/home-office-says-it-will-abandon-its-racist-visa-algorithm-nbsp-after-we-sued-them

PICUM and Statewatch (2019) “Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status“ https://www.statewatch.org/analyses/2019/data-protection-immigration-enforcement-and-fundamental-rights-what-the-eu-s-regulations-on-interoperability-mean-for-people-with-irregular-status/

(Contribution by Sarah Chander, Senior Policy Advisor, EDRi)

31 Aug 2020

Digital Services Act: what we learned about tackling the power of digital platforms

By EDRi

A year into EDRi’s policy and advocacy efforts to improve the DSA, we take stock, mapping the challenges and successes in enabling positive change.

Over a year ago, EDRi started engaging on the reform of the e-Commerce Directive on platform liability. Now called the Digital Services Act (DSA), it is an ambitious yet undefined piece of EU legislation on platform regulation.

Why is this future piece of legislation important for people and democracy in Europe and beyond? And how successful have we been in enabling a stronger human rights led legislation so far?

From moderating online content to preserving democracy

From February 2019, EDRi met with academics, civil society organisations and representatives of the European Commission and the Parliament to better understand what type of reform was envisaged for the e-Commerce Directive. Based on these conversations, EDRi put together a working group to promote discussions among members. This led to the publication of a series of blogposts explaining our recommendations for the reform. Initially, we had envisaged that the reform would regulate illegal content online, following the European Commission’s previous approach to ‘terrorist’ content and copyright infringements.

In June 2019, we wondered whether the “review of the E-Commerce Directive [would] open Pandora’s box and become one of this decade’s biggest threats to citizens’ rights and freedoms online – or [whether it would be] a chance to clarify and improve the current situation”.


The European Commission announced that as part of its European Digital Strategy, it would put forward a Digital Services Act, mainly as a way to “strengthen the Single Market for digital services and foster innovation and competitiveness of the European online environment”. Since then, the EDRi network has moved to a shared understanding of the DSA, not just as a way to develop the digital economy but as a unique opportunity to limit the powers of dominant digital platforms whose business practices threaten our democracies today. The rest of the world is watching the EU as a regulatory power to see if it will rise to the occasion of curbing surveillance capitalism.

Digital rights advocacy in a complex arena

Our challenges towards remaining a critical and influential voice in this debate are multi-fold.

First, we need to build consensus in a network of 44 member organisations of various sizes, geographical and topical scopes, and diverging societal visions. This diversity is valuable in evaluating the Digital Services Act as a way to improve individual users’ experience on the internet, redress ‘collective harms’ such as digital discrimination, and put forward an overhaul of the rules to rein in power imbalances between corporations and people.

As the DSA has been presented as a way to address online hate speech and illegal content, EDRi has engaged with organisations working to promote racial justice and to combat all forms of discrimination and hate speech. EDRi continued these discussions at a Privacy Camp panel on online violence with representatives of marginalised groups (female journalists, Roma people, people with disabilities, the LGBTQI+ community). In July 2020, we held a second workshop with ‘non-digital’ rights groups to provide updates and gather ideas for the DSA consultation and EDRi’s answering guide to the consultation. EDRi has made it a point to engage as early as possible with other civil society organisations to ensure that our advocacy efforts complement each other. Working in a larger coalition, from human rights groups to joint efforts to protect democracy, has undoubtedly come with challenges.

Additionally, Big Tech is heavily lobbying against a Digital Services Act that puts people’s rights above profit, both in Brussels and through EU Presidencies. The Big Tech corporate lobby has profited immensely from the pandemic, promoting itself as a ‘saviour’ pushing technological solutions for fundamentally social and health care problems. Putting Big Tech corporations at the centre of the fight against the COVID pandemic, however, can help shield them against meaningful regulation and move the focus away from fundamental rights and freedoms. As a result, EDRi and the broader human rights community face unprecedented corporate lobbying pressure in favour of keeping the Digital Services Act small and mainly focused on selected aspects of the Single Market and tech ‘innovation’.

First evaluation and next steps

EDRi has regular exchanges with representatives of the European Commission and government officials, as well as with Members of the European Parliament. This has included meetings with European Commission Vice-President Věra Jourová, with Commissioner Thierry Breton, and with the cabinet of Executive Vice-President Margrethe Vestager. EDRi published its position paper ‘Platform Regulation Done Right’ in April 2020 as an attempt to support the European Commission in asking the right questions in its consultation. Some proposals were already picked up in the European Parliament committees’ three DSA reports, such as protecting the prohibition of any general monitoring obligation, the distinction between online marketplaces and content hosting providers (in the IMCO report), meaningful ad tech and algorithmic optimisation regulation, a functioning notice-and-action regime, and out-of-court dispute settlement for users.

The EDRi network will continue to invite others to join civil society efforts and ensure that the European Union lives up to its fundamental rights and democratic principles. You can read our answer to the DSA consultation and submit your own with the help of our answering guide here.

(Contribution by Claire Fernandez, EDRi Executive Director)

19 Aug 2020

EDRi demands an open, safe and accountable internet – will you join us?

By EDRi

Today, 19th August 2020, European Digital Rights (EDRi) submitted its response to the European Commission’s public consultation on the Digital Services Act package. In addition, EDRi releases its official DSA Consultation Answering Guide designed to help other civil society organisations, collectives and citizens with an interest in upholding human rights to submit their own response to the European Commission.

In the past decade or two, the open internet has been colonised by centralised platform monopolies. These monopolies rely on broken business models that profile people and sell their data to the highest bidder. The Digital Services Act is an opportunity for the European Union to decide how the internet will look in the next twenty years.

Rather than trying to replicate the toxic business model of today’s Googles and Facebooks, Europe should strive for an open, safe and accountable internet. Users deserve real choice and the power to decide for themselves where their online lives take place and what happens to their personal data. Read EDRi’s paper “Platform Regulation Done Right” to learn how we can get there.

Therefore, EDRi calls on the European Commission to propose measures that break open the centralised platform economy, introduce strong transparency requirements, and guarantee effective regulatory oversight. The Digital Services Act provides the chance to leave the technological dead-end of monopolistic, centralised mega platforms behind and help strengthen a digital economy that respects human rights.

Will you make your voice heard in this crucial moment for the future of the public online sphere? Submit your own response to the DSA consultation online and check out our Answering Guide on recommendations that could inspire your own responses.

Read more:

EDRi Consultation response: European Commission consultation on the Digital Services Act package (19.08.2020)
https://edri.org/wp-content/uploads/2020/08/DSA-Consultation-Response.pdf

EDRi Answering Guide to the European Commission consultation on the Digital Services Act package (19.08.2020)
https://edri.org/EDRiDSAAnsweringGuide.html

EDRi position paper “Platform Regulation Done Right” on how to draft a human-centric Digital Services Act (09.04.2020)
https://edri.org/wp-content/uploads/2020/04/DSA_EDRiPositionPaper.pdf

Digital Services Act: Document pool (constantly updated)
https://edri.org/digital-service-act-document-pool

30 Jul 2020

Wanted: Communication Intern (Paid)

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 44 digital human rights organisations. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

Join EDRi now and become a superhero for the defense of our rights and freedoms online!


The EDRi Brussels office is currently looking for an intern to support our communications. The internship will focus on website, social media, publications, visual design, press work, and the production of written materials.

This is your chance to work in a fast paced environment with a passionate team of digital rights activists supporting grass-roots efforts across Europe to build a better digital future.

The internship will begin in September 2020 for a period of 4-6 months. The chosen communication champion will receive a monthly remuneration of minimum 750 EUR (according to “convention d’immersion professionnelle”).

Key tasks:

  • Social media: drafting and scheduling posts, managing followers’ inquiries, monitoring, reporting
  • Publications: designing and editing publication layouts and visuals
  • Press: drafting press releases and briefings
  • Website: assisting in making website changes, updates and migrating content from the previous website
  • Newsletter: formatting WordPress posts and designing visuals for the bi-weekly EDRi-gram
  • Dissemination strategy: mapping key digital rights and tech for good organisations
  • Ad-hoc: Assisting in other communications tasks, such as maintenance of mailing lists, monitoring media visibility, updating and analysing communications statistics

Needed:

  • Experience in social media community management and social media performance reporting
  • Strong layout, photo and visual editing skills
  • Excellent command of spoken and written English
  • Knowledge of website management (WordPress)
  • Ability to multi-task and strong time management skills
  • Strong communication and relationship building skills
  • Proactive problem-solver

Desired:

  • Experience with journalism, media or public relations
  • Interest in online activism and campaigning for digital human rights
  • Excellent story telling skills
  • Knowledge of open source software such as Matomo, Thunderbird, Libre Office

How to apply:

To apply please send a maximum one-page cover letter and a maximum two-page CV (only PDFs are accepted) by email to gail >dot< rego >at< edri >dot< org. Closing date for applications is midnight on Sunday, 23 August 2020. Due to limited capacity, we are only able to contact successful candidates.

Unfortunately, due to the lengthy process of obtaining work permits for non-EU residents, applicants should preferably already have the right to work in Belgium.

The first phase of the recruitment process involves a written exercise and is expected to take place in the last week of August. Interviews with selected candidates will take place in the second recruitment phase during the first week of September. The internship is scheduled to start in the first weeks of September, with the possibility of remote working.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.

27 Jul 2020

SHARE Foundation presents #hiljadekamera : A documentary on biometric mass surveillance

By SHARE Foundation

SHARE Foundation has recently released a short documentary on the controversial use of a mass surveillance system in Belgrade, Serbia. Various digital experts and activists took part in the documentary, including the national Data Protection Authority in Serbia and EDRi’s own Policy and Campaigns Officer, Ella Jakubowska.

The 10-minute video explains the key questions around the use of smart surveillance technology in public spaces, as well as the arguments of the Thousands of Cameras initiative, which is demanding respect for the Constitution and the laws.

The Government of Serbia in cooperation with Huawei has been actively working on the implementation of the “Safe City” project in Belgrade. This project involves the installation of thousands of smart surveillance cameras with object and face recognition features. The procurement also involves an artificial intelligence system used for the analytics of the feed captured with these cameras.

A civic initiative, #hiljadekamera [Thousands of Cameras], is tracking the development of the mass surveillance system in Belgrade and has so far collected and verified data on 689 facial recognition cameras across the city. Composed of concerned citizens, experts and digital rights organisations, the initiative has been vocal about the deterioration of privacy as a result of this project for over a year. The website with the map showing locations of smart cameras, hiljade.kamera.rs, was launched in mid-May, together with social media accounts.

In the first two months of this crowdsourcing action, the citizen map revealed twice as many smart cameras as there are on the official police list. Major discrepancies were noted in Novi Beograd, Zvezdara and Stari Grad, but also in other municipalities of Belgrade.

Still from the documentary

The Thousands of Cameras initiative gathers citizens, activists and human rights organisations asking for transparency and opening a wide public debate on the system of non-selective invasion of constitutional rights. At the same time, the hiljade.kamera.rs website serves as a portal where citizens can inform themselves about their rights or legal and technical aspects of the use of facial recognition technologies, or join the activities of the initiative.

The initiative, led by SHARE Foundation, a Belgrade-based digital rights organisation, has carried out numerous activities in crowdmapping the infrastructure, community building, research, advocacy and content production.

17 Jul 2020

IBM’s facial recognition: the solution cannot be left to companies

By Ella Jakubowska

On 8 June 2020, IBM’s CEO announced to the US Congress that – on the grounds of “justice and racial equity” – the company would “sunset” its “general purpose” facial recognition technologies. EDRi addressed the company in a letter, but IBM’s response suggests the company is motivated by public relations rather than fundamental rights.

Around the world, newspapers have reported that facial recognition has already been used to target protesters, from pro-democracy activists in Hong Kong to people protesting against the death of Freddie Gray in police custody in the US. It is therefore clear that facial recognition systems are a threat to our rights to free expression, assembly and association.

The EDRi network has reported on the serious risks posed by facial recognition to the full spectrum of fundamental rights and the rule of law. Besides facial recognition, biometric surveillance includes the recognition of the way we walk (gait) and of our voices. Governments continue to use this surveillance to track us, turning public spaces into perpetual police line-ups.

Instead of enabling free, vibrant and democratic societies, facial recognition as a form of surveillance creates societies of suspicion, control and discrimination.

These risks are so severe that EDRi has called on the European Commission to ban biometric mass surveillance in both law and practice across the EU.

On 25 June, EDRi sent IBM a letter asking the company to provide more information about their commitment to stopping facial surveillance. It included questions like “Which contracts will be stopped as a result? Which contracts won’t? How does IBM define general purpose? Has IBM engaged fundamental rights experts? Do these steps apply only to the US, or to IBM’s global activities?”

On 8 July, IBM’s Chief Privacy Officer sent a short response to EDRi’s letter. Their one-page reply reiterated the general commitment of their earlier statement, and elaborated on IBM’s participation in various initiatives on artificial intelligence and ethics. Their response did not answer a single one of EDRi’s fifteen questions.

This response suggests that IBM’s motivation is driven by public relations and not fundamental rights. They have failed to provide any information that could enable us to substantively assess their commitments. Our questions still stand.

How can we know that Europe’s people and communities are protected from the threats of biometric surveillance across our public spaces, when there is a toxic trio of:

  1. a lack of public transparency by IBM;
  2. a failure at European level to provide national data protection authorities with adequate resources to hold IBM (not to mention the more prolific facial recognition players such as ClearviewAI, PimEyes, NEC, 3M and many more) to account; and
  3. a failure to ban the development, procurement and deployment of these harmful tools?

The short answer is: we can’t.

IBM’s statement in reply to EDRi’s letter shows that relying on the self-regulation or ethical principles of the companies developing these technologies can never be sufficient.

It is clear that corporate PR is not and can never be a policy solution, as exemplified by Amazon pausing the sale of its facial recognition technologies to law enforcement, despite having aggressively pushed its sinister Rekognition technology to police and communities across the US in recent years.

It is high time that the European Commission and EU member states take the necessary steps to protect the EU’s democracy and commitment to fundamental rights. We strongly urge decision makers to permanently and comprehensively ban biometric mass surveillance in upcoming rules on AI.

Read more:

Ban Biometric Mass Surveillance (13.05.2020)
https://edri.org/blog-ban-biometric-mass-surveillance/
Defund Facial Recognition (05.07.2020)
https://www.theatlantic.com/technology/archive/2020/07/defund-facial-recognition/613771/
How Amazon’s Moratorium on Facial Recognition Tech Is Different From IBM’s and Microsoft’s (11.06.2020)
https://slate.com/technology/2020/06/ibm-microsoft-amazon-facial-recognition-technology.html
Facial recognition and fundamental rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/

16 Jul 2020

A victory for us all: European Court of Justice makes landmark ruling to invalidate the Privacy Shield

By EDRi

Today, 16 July 2020, the Court of Justice of the European Union (CJEU) invalidated the EU-US Privacy Shield. The ruling is a major victory for EU residents regarding how their personal data is processed and used by platforms like Facebook. The decision underlines the need for strong privacy legislation in the US and, more generally, for close scrutiny of the data protection systems in place, to avoid the misuse and unnecessary handling of EU residents’ private data.

The huge power of US intelligence services, as disclosed by Edward Snowden in 2013, proved that the data protection and privacy rights of EU residents are not sufficiently protected. We cannot allow any foreign agency to track and surveil our communities with such a disregard for fundamental rights.

“Today’s European Court of Justice ruling is a victory for privacy against mass surveillance”, says Diego Naranjo, Head of Policy at EDRi. “This is a win both for Europeans, whose personal data will be better protected, and a call for US authorities to reform the way intelligence services operate,” he adds.

At its core, this case is about a conflict of law between US surveillance laws, which demand surveillance, and EU data protection laws, which require privacy. The CJEU decided today to bin the Privacy Shield and instead reinforced that Standard Contractual Clauses (SCCs), one of the ways in which companies can make data transfers, need very close scrutiny and should be suspended if protections in the third country cannot be ensured. As noyb notes in its first reaction, Facebook and similar companies may also not use SCCs to transfer data, as the Irish Data Protection Commissioner (DPC) must stop transfers under this instrument. The ruling is great news for all of those defending human rights online.

The background

In 2013, Edward Snowden publicly disclosed that US Intelligence Agencies use surveillance programs such as PRISM to access the personal data of Europeans. The documents disclosed listed several US companies such as Apple, Microsoft, Facebook, Google and Yahoo sharing data with the US government for surveillance programs.

Based on this whistleblowing case, Max Schrems (currently of EDRi member noyb) filed a complaint against Facebook Ireland Ltd before the Irish Data Protection Commissioner (DPC). The complaint argued that under the EU-US Safe Harbor Decision 2000/520/EC, Mr Schrems’ (and therefore any European platform user’s) personal data should not be sent from Facebook Ireland Ltd (serving Facebook users outside of the US and Canada) to Facebook Inc. (the US parent company), given that Facebook has to grant the US National Security Agency access to such data.

Next steps

Today’s CJEU ruling is just the beginning. It is now up to the EU to start negotiating a new framework with the US and ensure deep reforms in order for the new framework to be valid and respectful of fundamental rights.

Read more:

CJEU invalidates “Privacy Shield” in US Surveillance case.  SCCs cannot be used by Facebook and similar companies (16.07.20)
https://noyb.eu/en/cjeu

CJEU Media Page (Background, FAQ & other resources)
https://noyb.eu/en/CJEU-Media-Page

EU-US-Datenabkommen gekippt (16.07.20) https://digitalcourage.de/blog/2020/eu-us-datenabkommen-gekippt

In a victory for privacy, the EU Court of Justice bins EU-US Privacy Shield (16.07.20)
https://www.accessnow.org/in-a-victory-for-privacy-the-eu-court-of-justice-bins-eu-us-privacy-shield/

08 Jul 2020

Digital rights for all

By Sarah Chander

In this article we set out the background to EDRi’s work on anti-discrimination in the digital age. Here we take the first step: exploring anti-discrimination as a digital rights issue, and asking what EDRi can do about it. The project is motivated by the need to recognise how oppression, discrimination and inequality impact the enjoyment of digital rights, and to live up to our commitment to uphold the digital rights of all.

The first half of 2020 has brought with it challenges and shifts on a global scale. From COVID-19 to #BlackLivesMatter, these events necessarily impact EDRi’s work as issues of digital and human rights: our privacy, our safety, and our freedoms, online and off. Not only have these events brought issues of privacy and surveillance to the forefront of global politics, they also teach us about vulnerability.

Vulnerability is not a new concept in digital rights. It is core to the fight to defend rights and freedoms online: we are vulnerable to targeted advertising, to exploitation of our personal data, to censorship, and to increased surveillance. Particularly in times of crisis, this vulnerability is exposed and exacerbated at the same time, with increased surveillance justified in the name of the public good.

How exactly can we understand vulnerability in terms of digital rights? In many senses, this vulnerability is universal. Ever-encroaching threats to our privacy, state surveillance, the mining of data on our personal lives for profit, are all universal threats facing individuals in a digital age.

Yet, just as we have seen the myth of universal vulnerability in the face of the Coronavirus debunked, we are also learning that we are not equally vulnerable to threats to privacy, censorship and surveillance. State and private actors abuse their power in ways that exacerbate injustice and threaten democracy and the rule of law. The way technologies are deployed often amplifies inequalities, especially when location and/or biometric data are used. Taking a leaf out of the book of anti-racism movements: instead of being ‘vulnerable’ to discrimination, exploitation and other harms, we know they are imposed on us. Rather than vulnerable, some groups are marginalised, through active processes with people, institutions and structures of power as the cause.

Going forward, an awareness of how marginalised groups enjoy their digital rights is crucial to a better defence and protection for all. From the Black, Brown and Roma communities who are likely to be impacted by data-driven profiling, predictive policing and biometric surveillance; the mother who only sees online job advertisements that fit her low-income profile; the child whose online learning experience should not be tainted by harmful content; the undocumented person who does not access health services for fear of deportation and data-sharing; the queer and trans people who rely on anonymity to ensure a safe experience online; the Black woman who has had her account suspended for using anti-racist terminology; to the protester worried about protecting their identity, infringements of ‘digital rights’ manifest differently. Often, the harm cannot be corrected with a GDPR fine alone. It cannot be resolved with better terms and conditions. This is not just a matter of data protection, but of broader violations of human rights in a digital context.

These wider questions of harms and infringements in the digital age will challenge our existing frameworks. Is there a universal ‘subject’ for digital rights? Who are we referring to most often under the term ‘user’? Does this fully recognise the varying degrees of harm we are exposed to? Will the concept of rights holders as ‘users’ help or hinder this nuanced approach? Beyond ‘rights’, how do ideas of equality and justice inform our work?

EDRi members such as Privacy International have denounced data exploitation and shown how marginalised groups are disproportionately affected by digital rights violations. Panoptykon have explored how algorithmic profiling systems impact the unemployed in Poland, and integrate the risks of discrimination into their analysis of why the online advertising system is broken. At Privacy Camp, EDRi members are reflecting on how children's rights and issues of hate speech online affect our work as a digital rights network. Building on this work, EDRi is mapping the organisations, projects and initiatives in the European digital rights field that include a discrimination angle, or that explore how people in different life situations experience digital rights. Once we have a picture of the ongoing work in the field and the main gaps, we will explore how EDRi can move forward, potentially through further research, campaigns, or efforts to connect digital and non-digital organisations.

We hope that this project will help us to meet our commitment to uphold digital rights for all, and to challenge power imbalances. We are learning that a truly universal approach recognises marginalisation in order to contest it. In order to protect digital rights for all we must understand these differences, highlight them, and then fight for collective solutions.

Read more:

Who They Target – Privacy International
https://privacyinternational.org/learn/who-they-target

Profiling the unemployed in Poland: social and political implications of algorithmic decision-making (2015)
https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf

The digital rights of LGBTQ+ people: When technology reinforces societal oppressions (17.07.19)
https://edri.org/the-digital-rights-lgbtq-technology-reinforces-societal-oppressions/

10 Reasons Why Online Advertising is Broken (09.01.2020)
https://en.panoptykon.org/online-advertising-is-broken

More than the sum of our parts: a strategy for the EDRi Network (27.05.20)
https://edri.org/more-than-the-sum-of-our-parts-a-strategy-for-the-edri-network/

COVID-Tech: Surveillance is a pre-existing condition (27.05.2020)
https://edri.org/surveillance-is-a-pre-existing-condition/

08 Jul 2020

Europol: Non-accountable cooperation with IT companies could go further

By Chloé Berthélémy

There is an ongoing mantra among law enforcement authorities in Europe according to which private companies are indispensable partners in the fight against "cyber-enabled" crimes, as they often possess personal data relevant to law enforcement operations. For that reason, police authorities increasingly attempt to get hold of data held by companies, sometimes in disregard of the safeguards imposed by long-standing judicial cooperation mechanisms. Several initiatives at European Union (EU) level, like the proposed regulation on European Production and Preservation Orders for electronic evidence in criminal matters (the so-called "e-evidence" Regulation), seek to "facilitate" that access to personal data by national law enforcement authorities. Now it's Europol's turn.

The Europol Regulation entered into force in 2017, authorising the European Police Cooperation Agency (Europol) to "receive" (but not directly request) personal data from private parties like Facebook and Twitter. The goal was to enable Europol to gather personal data, feed it into its databases and support Member States in their criminal investigations. The Commission was supposed to specifically evaluate this practice of reception and transfer of personal data with private companies after two years of implementation (in May 2019). However, there is no public information on whether the Commission actually conducted such an evaluation, what its modalities were, or what its results showed.

Regardless of the absence of this assessment's results, and without a fully-fledged evaluation of Europol's mandate, the Commission and the Council consider the current legal framework too limiting and have therefore decided to revise it. The legislative proposal for a new Europol Regulation is planned for release at the end of this year.

One of the main policy options foreseen is to lift the ban on Europol's ability to proactively request data from private companies or query databases managed by private parties (e.g. WHOIS). However, disclosures by private actors would remain "voluntary". Just as the EU Internet Referral Unit operates without any procedural safeguards or strong judicial oversight, this extension of Europol's executive powers would barely comply with the EU Charter of Fundamental Rights, which requires that restrictions of fundamental rights (here, the right to privacy) be necessary, proportionate and "provided for by law" (rather than based on ad hoc "cooperation" arrangements).

This is why, in light of the Commission’s consultation call, EDRi shared the following remarks:

  • EDRi recommends first carrying out a full evaluation of the 2016 Europol Regulation before expanding the agency's powers, in order to base the revision of its mandate on proper evidence;
  • EDRi opposes the Commission's proposal to expand Europol's powers in the field of data exchange with private parties, as it goes beyond Europol's legal basis (Article 88(2));
  • The extension of Europol's mandate to request personal data from private parties promotes the voluntary disclosure of personal data by online service providers, which goes against the EU Charter of Fundamental Rights and national and European procedural safeguards;
  • The procedure by which Europol accesses EU databases should be reviewed and include the involvement of an independent judicial authority;
  • The Europol Regulation should grant the Joint Parliamentary Scrutiny Group real oversight powers.

Read our full contribution to the consultation here.

Read more:

Europol: Non-transparent cooperation with IT companies (18.05.16)
https://edri.org/europol-non-transparent-cooperation-with-it-companies/

Europol: Delete criminals’ data, but keep watch on the innocent (27.03.18)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

Oversight of the new Europol regulation likely to remain superficial (12.07.16)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

(Contribution by Chloé Berthélémy, EDRi policy advisor)

08 Jul 2020

Web browser privacy: ARTICLE 19 welcomes initiatives to protect users

By Article 19

There are widespread web tracking practices that undermine users’ human rights. However, safeguards against web tracking can and are being deployed by various service providers. EDRi member ARTICLE 19, and more generally EDRi as a whole, support these initiatives to protect user privacy and anonymity as part of a wider shift toward a more rights-respecting sector.

Web browsers are our guide across the internet. We use them to connect with others around the globe, orient ourselves, and find what we need or want online. The trail of data we generate about our preferences and actions along the way has been exploited by the increasingly interdependent business models of the online advertising industry and web browsers. As advertising publishers, agencies, and service providers aim to maximise profit from advertisers by delivering increasingly personalised content to users, web browsers have strong incentives to collect as much data as possible about what each user searches, visits, and clicks on to feed into these targeted advertising models.

These practices not only threaten users' right to privacy, but can also undermine other fundamental rights, such as freedom of expression, access to information, and non-discrimination.

How we are tracked online

A number of mechanisms used by web browsers for ad targeting and tracking can also be used to cross-reference and track users, block access to websites, or discriminate among users based on profiles generated about them from their online activities and physical location. These mechanisms include:

  • Web usage mining, where the underlying data, such as pages visited and time spent on each page, is collected as clickstreams;
  • Fingerprinting, where information such as a user’s OS version, browser version, language, time zone, and screen settings are collected to identify the device;
  • Beacons, which are graphic images placed on a website or email to monitor the behaviour of the user and their remote device; and
  • Cookies, which are small files holding client and website data that can remain in browsers for long periods of time and are often used by third parties.
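To make fingerprinting concrete, the sketch below (in Python rather than in-browser JavaScript, purely for illustration) shows how a handful of seemingly innocuous attributes can be combined into a stable device identifier. The attribute names and values are hypothetical; real fingerprinting scripts read them from browser APIs such as `navigator` and `screen`.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine browser attributes into a single stable identifier.

    Hypothetical sketch: a real tracker would read these values from
    the browser (user agent, screen size, time zone, fonts, ...).
    """
    # Canonical ordering makes the hash stable across visits.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device_a = {
    "os": "Windows 10", "browser": "Firefox 78", "language": "en-GB",
    "timezone": "UTC+1", "screen": "1920x1080",
}
device_b = dict(device_a, language="fr-FR")  # a single attribute differs

print(fingerprint(device_a))
print(fingerprint(device_a) == fingerprint(device_b))  # False
```

The point is that no single attribute identifies anyone, but their combination is distinctive enough to re-identify a device across sites without any cookie being set.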

Being subject to these practices should not be the non-negotiable price of using the internet. An increasing number of service providers are developing and implementing privacy-oriented approaches to serve as alternatives, or even the new default, in web browsing. These changes range from stronger, more ubiquitous encryption of data to the configuration and use of trusted servers for different tasks. The safeguards may take the form of entirely new architectures and protocols for browsers and applications, and are being deployed at different layers of the internet architecture.

Encrypting the Domain Name System (DNS)

One advancement has been the development and deployment of internet protocols that support greater and stronger encryption of the data generated by users when they visit websites, redressing historical vulnerabilities in the Domain Name System (DNS). Encrypted Server Name Indication (eSNI) encrypts each domain’s identifiers when multiple domains are hosted by a single IP address, so that it is more difficult for Internet Service Providers (ISPs) and eavesdroppers to pinpoint which sites a user visits. DNS-over-HTTPS (DoH) sends encrypted DNS traffic over the Hypertext Transfer Protocol Secure (HTTPS) port and looks up encrypted queries made in the browser using the servers of a trusted DNS provider. These protocols make it difficult to detect, track, and block users’ DNS queries and therefore introduce needed privacy and security features to web browsing.
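To illustrate what DoH actually protects, the sketch below builds a minimal DNS query in the standard wire format (RFC 1035) and encodes it the way a DoH client would place it in an HTTPS GET request (RFC 8484 specifies base64url encoding without padding, with message ID set to zero to aid caching). With classic DNS, these same bytes travel in cleartext where any on-path observer can read the queried hostname; over DoH they are wrapped in TLS. The resolver URL in the comment is a placeholder, not a real service.

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query message (RFC 1035 wire format).

    qtype=1 is an A record. ID is 0, as RFC 8484 recommends for DoH GETs.
    """
    # Header: ID=0, flags=0x0100 (recursion desired), QDCOUNT=1.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode()
# A DoH GET request would then look like (placeholder resolver URL):
#   https://dns.example.net/dns-query?dns=<encoded>
print(encoded)
```

The hostname being looked up is plainly visible in the unencrypted message (`\x07example\x03com`), which is exactly the metadata that DoH hides from ISPs and other eavesdroppers.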

Privacy-oriented web browsers

Another shift is in the architectures and advertising models of web browsers themselves. Increasingly popular privacy browsers such as Tor and Brave help protect user data and identity. Tor encrypts and anonymises users' traffic by routing it through the Tor network, while Brave anonymises user authentication with the Privacy Pass protocol, which allows users to prove that they are trusted without revealing identifying information to the browser. Brave's efforts to develop a privacy-centric model for web advertising (including a protocol that confirms when a user observes an ad without revealing who they are, and an anonymised, blockchain-based system to compensate publishers) have been closely followed by Apple and Google, which aim to standardise their own web architectures, including Apple WebKit's ad click attribution technology and Google Chrome's Conversion Measurement API.

Although there are some differences, Brave’s, Apple’s, and Google’s advertising models all include mechanisms to limit the amount of data passed between parties and the amount of time this data is kept in their systems, disallow data such as cookies for reporting purposes, delay reports randomly to prevent identifiability through timestamp cross-referencing, and prevent arbitrary third parties from registering user data. As such, they not only protect users’ privacy and anonymity, but also prevent cross-site tracking and user profiling.

Despite protocols such as eSNI and DoH and recent privacy advances in web browser advertising models and architectures, tracking of online activities continues to be the norm. For this reason, service providers that are working toward industry change are advocating for the widespread adoption of secure protocols and the standardisation of web browsing privacy models to redress existing vulnerabilities that have been exploited to monetise users’ data without their knowledge, monitor and profile them, and restrict the availability of content.

If privacy-oriented protocols and privacy-respecting web browsing models are standardised and widely adopted by the sector, respect for privacy will become an essential parameter for competition among not only web browsers, but also ISPs and DNS servers. This change can stimulate innovation and provide users with the choice between more and better services that guarantee their fundamental rights.

Challenges for privacy-enhancing initiatives

While these protocols and models have been welcomed by a number of stakeholders, they have also been challenged. Critics claim that these measures make it more difficult, if not impossible, to perform internet blocking and filtering. They claim that, as a result, privacy models undermine features such as parental controls and thwart the ability of ISPs and governments to identify malware traffic and malicious actors. These challenges rest on the assumption that there is a natural trade-off between the power of parties who retain control of the internet and the privacy of individual users.

In reality, however, technology advances as a whole: updated models lead to updated tools and mechanisms. Take DoH and its impact on parental controls as an example. DoH encrypts DNS queries, rendering most current DNS-filtering mechanisms used for parental controls obsolete, since these mechanisms rely on DNS packet inspection that cannot be performed on encrypted data without first intercepting and decrypting the stream. In response, both browsers and DNS servers are developing new technologies and services. Mozilla introduced its "canary domain" mechanism (use-application-dns.net): networks that rely on DNS filtering can answer queries for this domain with NXDOMAIN, signalling Firefox to disable DoH by default. DoH-compatible DNS server providers like cleanbrowsing.org implement their own filtering policies at the resolver level. While these responses do not remove the need to ensure users' privacy and access to information through strong legal and regulatory protections, accountability and transparency of service providers towards users, and meaningful user choice, they demonstrate that the real benefits of browser privacy and security measures should not be thwarted on the basis of perceived threats to the status quo.
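The canary-domain heuristic can be sketched in a few lines. This is a simplified model, not Firefox's actual implementation: the resolver lookup is injected as a callable, and `LookupError` stands in for an NXDOMAIN response from the network's resolver.

```python
# Mozilla's canary domain: networks that rely on DNS filtering return
# NXDOMAIN for it, signalling that DoH should stay disabled by default.
CANARY_DOMAIN = "use-application-dns.net"

def should_enable_doh(resolve) -> bool:
    """Decide whether to enable DoH, honouring the canary signal.

    `resolve(name)` returns a list of addresses, or raises LookupError
    (standing in for NXDOMAIN) when the network blocks the canary.
    """
    try:
        answers = resolve(CANARY_DOMAIN)
    except LookupError:
        return False  # network asks for DoH to stay off
    return bool(answers)

def filtering_resolver(name):
    raise LookupError("NXDOMAIN")  # a network that relies on DNS filtering

def plain_resolver(name):
    return ["203.0.113.10"]  # placeholder (documentation-range) address

print(should_enable_doh(plain_resolver))      # True
print(should_enable_doh(filtering_resolver))  # False
```

Note the design trade-off the text describes: the signal comes from the network operator, so user choice must still take precedence; in Firefox an explicit user opt-in to DoH overrides the canary.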

Leadership opportunity for the EU

In the European Union, the adoption of the General Data Protection Regulation (GDPR) has obliged all stakeholders in the debate to recognise and comply with data protection and privacy-by-design principles. Moreover, the Body of European Electronic Communication Regulators, whose main task is to contribute to the development and better functioning of the EU internal market for electronic communications networks and services, has identified users’ empowerment among its priorities. These dynamics create an opportunity for EU actors to advance global leadership in efforts toward a privacy-oriented internet infrastructure.

Recommendations 

ARTICLE 19 strongly supports initiatives to advance browser privacy, including the implementation of protocols such as eSNI and DoH that facilitate stronger, more ubiquitous encryption of the Domain Name System and privacy-centric web advertising models for browsers. We believe these initiatives will lead to greater respect for privacy and human rights across the sector. In particular, we recommend that:

  • ISPs must help decentralise the encrypted DNS model by deploying their own DoH-compatible servers and encrypted services, taking advantage of the relatively low number of devices currently using DoH and the easy adoption curve it implies;
  • Browsers and DNS service providers should not override users’ configurations regarding when to enable or disable encryption services and which DNS service provider to use. Meaningful user choice should be facilitated by clear terms of service and accessible and clearly defined default, opt-in, and opt-out settings and options;
  • Browsers must additionally ensure that, even as they build privacy-friendly revenue generation schemes and move away from targeted ad models, all of these practices are transparent and clearly defined for users, both in the terms of service and codebase;
  • Internet standards bodies should encourage the inclusion of strong privacy and accountability considerations in the design of protocol specifications themselves, acknowledging the effects of these protocols in real-life testing and deployment; and
  • Civil society must promote the widespread adoption of secure tools, designs, and protocols through information dissemination to help educate the community and empower users’ choices;

Finally, ARTICLE 19 urges internet users to support the development and adoption of privacy-based tools that do not monetise their data, by demanding products from their service providers that better protect their privacy.

Read more:

Ethical Web Development booklet:
https://edri.org/files/ethical_web_dev_web.pdf

US companies to implement better privacy for website browsing (29.08.2018)
https://edri.org/us-companies-to-implement-better-privacy-for-website-browsing/

Internet protocol community has a new tool to respect human rights (15.11.2017)
https://edri.org/internet-protocol-community-has-a-new-tool-to-respect-human-rights

(Contribution from Maria Luisa Stasi & Joey Salazar, from EDRi member ARTICLE 19)
