07 Sep 2020

Keep private communications private

By EDRi

On 27 July, the European Commission published a Communication on an EU strategy for a more effective fight against child sexual abuse material (CSAM). The Communication indicates that messaging services (WhatsApp, Facebook Messenger…) may see their privacy protections undermined under new legislation that will be proposed this week. This would eventually oblige platforms to snoop on all private communication. The Commission also risks hindering all forms of encryption. What could go wrong?

First, the Communication proposes to allow the continuation of “voluntary practices” to detect child sexual abuse after December 2020. Because the adoption of the European Electronic Communications Code (EECC) extends the scope of the ePrivacy Directive, these “voluntary practices” would otherwise no longer be permitted from that date. The extended scope brings so-called Over-The-Top services (OTTs) like Facebook Messenger, WhatsApp and Instagram messages under the Directive’s strong protections. This means that unless Member States adopt national legislation, Article 5 of the ePrivacy Directive (confidentiality of communications) will prevent OTTs from scanning (snooping on) the content of private communications, for commercial or other purposes, unless the end-user has given their consent (e.g. scanning emails for computer viruses). The Commission fears that what is generally an improvement in privacy protections will hinder the prevention of criminal acts, as routine “voluntary” snooping on private communications will no longer be allowed. The Communication therefore calls for “immediate action” with a “narrowly-defined targeted solution”. This immediate action is expected to be published this week.

Second, in 2021 the Commission will propose the “necessary legislation to tackle child sexual abuse online effectively including by requiring relevant online services providers to detect known child sexual abuse material and require them to report that material to public authorities”.

Third, the Communication clearly labels encryption as a threat: “The use of encryption technology for criminal purposes therefore needs to be immediately addressed through possible solutions which could allow companies to detect and report child sexual abuse in end-to-end encrypted electronic communications.” Whether this will be done by undermining encryption or by allowing service providers to snoop on communications in apps before they are encrypted remains to be seen, but it is as vague as it is dangerous. The Communication on the EU Security Union Strategy, which has a much broader scope than CSAM, also mentions encryption, saying that “the Commission will explore and support balanced technical, operational and legal solutions to the challenges and promote an approach which both maintains the effectiveness of encryption in protecting privacy and security of communications, while providing an effective response to crime and terrorism.”

What does all of this mean for privacy and confidentiality?

Despite the positive aim of protecting children by preventing CSAM from being shared widely, the measures proposed in the Communication risk undermining the privacy and confidentiality of communications for all. The risks of the alleged “solutions” (the “targeted” measure, platforms-snoop-all, or breaking encryption) will vary depending on the actual measures adopted. However, once measures that undermine privacy become part of the toolbox for law enforcement and security agencies, it becomes a slippery slope from CSAM to other criminal offences. Ultimately this could result in measures against content that is legal but regarded as undesired or “harmful” by the State, e.g. “misinformation”. Communication scanning goes hand in hand with undermining privacy protections such as end-to-end encryption. It will thus become impossible for individuals to know the extent to which their private communications are being surveilled by companies acting on behalf of the State. This would have a massive impact on our freedom of expression and other fundamental rights.

Solutions at hand: We have already shared how law enforcement can do its job without breaking encryption. Our work, based on recommendations from others such as Schneier, brings concrete ideas and solutions to the problems that law enforcement agencies and intelligence services may face. Our solutions are targeted, whereas the initiatives proposed by the Commission are general and indiscriminate, imposing needless surveillance on the entire population. Today, we are well aware that once you are targeted by security services or law enforcement agencies, little can be done to counter that. It is therefore important to explore ways of dealing with criminality online that allow for encryption or other means of obfuscating identity, before attacking the general population’s ability to use end-to-end encryption and to communicate privately.

We will closely monitor the initiatives being proposed and continue to address any potential threats to encryption or privacy of communications. Privacy and confidentiality of communications are not absolute rights, but this shouldn’t mean that they can be dismantled in our online communications routinely and without safeguards. In the end, our goal is not to be put at risk of being snooped on by national and foreign security services, intelligence services or private companies in return for the perceived protection of children’s rights.

Read more:

Communication on an EU strategy for a more effective fight against child sexual abuse: https://ec.europa.eu/home-affairs/sites/homeaffairs/files/what-we-do/policies/european-agenda-security/20200724_com-2020-607-commission-communication_en.pdf

EDRi encryption position paper: https://www.edri.org/files/20160125-edri-crypto-position-paper.pdf

(Contribution by Diego Naranjo, Head of Policy from EDRi and Jesper Lund, from EDRi member IT-Pol Denmark)

02 Sep 2020

Down with (discriminating) systems

By EDRi

Amidst a particularly hectic time for digital rights policy in Europe, there remains a large elephant in the room. Europe is still pulsating with the repercussions of sustained global uprisings against racism following the murder of George Floyd. As the EU formulates its response in its upcoming ‘Action Plan on Racism’, EDRi outlines why it must address structural racism in technology as part of upcoming legislation.

Much of the discussion surrounding the ‘social’ side of technology focuses on the promise of a range of benefits that stem from digital innovation. The EU’s ongoing consultation on its AI legislation consistently reiterates the ‘wide array of economic and social benefits’ artificial intelligence can offer.

As our European societies increasingly wake up to the existence of structural, racial inequality, we have to ask ourselves: who will really feel the benefits of innovation? What costs to human dignity and life will automated decision-making systems bring? And what impact do they already have on existing systems of racism and discrimination?

AI presents huge potential for exacerbating racism in society, at a scale and with a degree of opacity unlike discrimination perpetuated by humans. Automated decision-making systems have often been portrayed as neutral and ‘objective’, when in fact they neatly embed and amplify the underlying structural biases of our societies.

For example, increasing evidence demonstrates how new technologies developed and deployed in the field of law enforcement differentiate, target and experiment on communities at the margins. One example is the increased use of both place-based and person-based “predictive policing” technologies to forecast where, and by whom, a narrow set of crimes is likely to be committed; these systems repeatedly score racialised communities with a higher likelihood of presumed future criminality. They include the Dutch Crime Anticipation System and the UK’s National Data Analytics Solution (‘NDAS’).

The various matrixes (the Gangs Matrix, ProKid-12 SI, the NDAS) dedicated to monitoring and data collection on future crime and ‘gangs’ target Black, Brown and Roma men and boys, highlighting discriminatory patterns on the basis of race and class. Not only do such systems infringe on the presumption of innocence and the fundamental right to privacy, they codify the notion that if you are of a certain race, you are suspicious and need to be watched. Predictive policing systems redirect policing toward certain areas, increasing the likelihood of often fatal encounters with the police.

Over-surveillance already falls disproportionately on racialised groups and undocumented people. In Europe, undocumented migrants are generally unable to avail themselves of data protection rights. This vulnerability is heightened by the development of mass-scale, interoperable repositories of biometric data to facilitate immigration enforcement.

How can we address structural racism perpetuated through technology and the digital space? With the Action Plan on Racism and upcoming legislation on the Digital Services Act and artificial intelligence (AI), EDRi argues that the link between racism and technology cannot be ignored. In order to make sure that the ‘benefits’ of technology are felt by all of our societies equally, the EU must apply a racial justice lens to its future digital legislation and policy.

In our briefing ‘Structural racism, digital rights and technology’ EDRi calls on the EU to prevent abuses towards racialised communities by legally restricting impermissible uses of artificial intelligence, such as predictive policing, biometric surveillance, and uses of AI at the border.

There are numerous examples of this to draw from. In 2019, the city of San Francisco banned the use of facial recognition technology by police after racial justice activists highlighted the harmful impacts of the technology. This year, the UN Special Rapporteur on contemporary forms of racism recommended that Member States prohibit the use of technologies with a racially discriminatory impact. EDRi member Foxglove, working with the Joint Council for the Welfare of Immigrants, took legal action and forced the Home Office to end its use of a racially discriminatory visa algorithm. There is a growing international consensus that racism perpetuated through technology must be halted with radical measures. The EU must follow suit.

Read more:

Structural Racism briefing: https://edri.org/wp-content/uploads/2020/08/Structural-Racism-Digital-Rights-and-Technology_Final.pdf

AI recommendations paper: https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf

Ban Biometrics Paper: https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

Foxglove: https://www.foxglove.org.uk/news/home-office-says-it-will-abandon-its-racist-visa-algorithm-nbsp-after-we-sued-them

PICUM and Statewatch (2019) “Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status“ https://www.statewatch.org/analyses/2019/data-protection-immigration-enforcement-and-fundamental-rights-what-the-eu-s-regulations-on-interoperability-mean-for-people-with-irregular-status/

(Contribution by Sarah Chander, Senior Policy Advisor, EDRi)

31 Aug 2020

Digital Services Act: what we learned about tackling the power of digital platforms

By EDRi

A year into EDRi’s policy and advocacy efforts to improve the DSA, we take stock of our efforts in mapping challenges and successes in enabling positive change.

Over a year ago, EDRi started engaging on the reform of the e-Commerce Directive on platform liability. Now called the Digital Services Act (DSA), it is an ambitious yet undefined piece of EU legislation on platform regulation.

Why is this future piece of legislation important for people and democracy in Europe and beyond? And how successful have we been in enabling a stronger human rights led legislation so far?

From moderating online content to preserving democracy

From February 2019, EDRi met with academics, civil society organisations and representatives of the European Commission and the Parliament to better understand what type of reform was envisaged for the e-Commerce Directive. Based on these conversations, EDRi put together a working group promoting discussions with members. This led to the publication of a series of blogposts explaining our recommendations for the reform. Initially, we had envisaged that the reform would regulate illegal content online following the European Commission’s previous approach to ‘terrorist’ content and copyright infringements.

In June 2019, we wondered whether the “review of the E-Commerce Directive [would] open Pandora’s box and become one of this decade’s biggest threats to citizens’ rights and freedoms online – or [whether it would be] a chance to clarify and improve the current situation”.

The European Commission announced that as part of its European Digital Strategy, it would put forward a Digital Services Act, mainly as a way to “strengthen the Single Market for digital services and foster innovation and competitiveness of the European online environment”. Since then, the EDRi network has moved to a shared understanding of the DSA, not just as a way to develop the digital economy but as a unique opportunity to limit the powers of dominant digital platforms whose business practices threaten our democracies today. The rest of the world is watching the EU as a regulatory power to see if it will rise to the occasion of curbing surveillance capitalism.

Digital rights advocacy in a complex arena

The challenges to remaining a critical and influential voice in this debate are manifold.

First, we need to build consensus in a network of 44 member organisations of various sizes, geographical and topical scopes, and diverging societal visions. This diversity is valuable in evaluating the Digital Services Act as a way to improve individual users’ experience on the internet, redress ‘collective harms’ such as digital discrimination, and put forward an overhaul of the rules to rein in power imbalances between corporations and people.

As the DSA has been presented as a way to address online hate speech and illegal content, EDRi has engaged with organisations working to promote racial justice and to combat all forms of discrimination and hate speech. EDRi continued these discussions at a Privacy Camp panel on online violence with representatives of marginalised groups (female journalists, Roma people, people with disabilities, the LGBTQI+ community). In July 2020, we held a second workshop with ‘non-digital’ rights groups to provide updates and gather ideas for the DSA consultation and EDRi’s answering guide to the consultation. EDRi has made it a point to engage as early as possible with other civil society organisations to ensure that our advocacy efforts complement each other. Working with a larger coalition, from human rights groups to joint efforts to protect democracy, has undoubtedly come with challenges.

Additionally, Big Tech is lobbying heavily against a Digital Services Act that puts people’s rights above profit, both in Brussels and through EU Presidencies. The Big Tech corporate lobby has profited immensely from the pandemic, promoting themselves as ‘saviours’ pushing technological solutions for fundamentally social and health care problems. Putting Big Tech corporations at the centre of the fight against the COVID-19 pandemic, however, can help shield them from meaningful regulation and move the focus away from fundamental rights and freedoms. As a result, EDRi and the broader human rights community face unprecedented corporate lobbying pressure in favour of keeping the Digital Services Act small and mainly focused on selected aspects of the Single Market and tech ‘innovation’.

First evaluation and next steps

EDRi has regular exchanges with representatives of the European Commission and government officials as well as with Members of the European Parliament. This has included meetings with European Commission Vice-President Věra Jourová, with the Commissioner for Digital Policy Thierry Breton, as well as with the cabinet of Executive Vice-President Margrethe Vestager. EDRi published its position paper ‘Platform Regulation Done Right’ in April 2020 as an attempt to support the European Commission in asking the right questions in its consultation. Some proposals were already picked up in the European Parliament committees’ three DSA reports, such as protecting the prohibition of any general monitoring obligation, the distinction between online marketplaces and content hosting providers (in the IMCO report), meaningful ad tech and algorithmic optimisation regulation, a functioning notice-and-action regime, and out-of-court dispute settlement for users.

The EDRi network will continue to invite others to join civil society efforts and ensure that the European Union lives up to its fundamental rights and democratic principles. You can read our answer to the DSA consultation and submit your own with the help of our answering guide here.

(Contribution by Claire Fernandez, EDRi Executive Director)

19 Aug 2020

EDRi demands an open, safe and accountable internet – will you join us?

By EDRi

Today, 19th August 2020, European Digital Rights (EDRi) submitted its response to the European Commission’s public consultation on the Digital Services Act package. In addition, EDRi releases its official DSA Consultation Answering Guide designed to help other civil society organisations, collectives and citizens with an interest in upholding human rights to submit their own response to the European Commission.

In the past decade or two, the open internet has been colonised by centralised platform monopolies. Additionally, these monopolies use broken business models that profile people and sell their data to the highest bidder. The Digital Services Act is an opportunity for the European Union to decide how the internet will look in the next twenty years.

Rather than trying to replicate the toxic business model of today’s Googles and Facebooks, Europe should strive for an open, safe and accountable internet. Users deserve real choice and the power to decide for themselves where their online lives take place and what happens to their personal data. Read EDRi’s paper “Platform Regulation Done Right” to learn how we can get there.

Therefore, EDRi calls on the European Commission to propose measures that break open the centralised platform economy, introduce strong transparency requirements, and guarantee effective regulatory oversight. The Digital Services Act provides the chance to leave the technological dead-end of monopolistic, centralised mega platforms behind and help strengthen a digital economy that respects human rights.

Will you make your voice heard in this crucial moment for the future of the public online sphere? Submit your own response to the DSA consultation online and check out our Answering Guide on recommendations that could inspire your own responses.

Read more:

EDRi Consultation response: European Commission consultation on the Digital Services Act package (19.08.2020)
https://edri.org/wp-content/uploads/2020/09/DSA-Consultation-Response.pdf

EDRi Answering Guide to the European Commission consultation on the Digital Services Act package (19.08.2020)
https://edri.org/EDRiDSAAnsweringGuide.html

EDRi position paper “Platform Regulation Done Right” on how to draft a human-centric Digital Services Act (09.04.2020)
https://edri.org/wp-content/uploads/2020/04/DSA_EDRiPositionPaper.pdf

Digital Services Act: Document pool (constantly updated)
https://edri.org/digital-service-act-document-pool

30 Jul 2020

Wanted: Communication Intern (Paid)

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 44 digital human rights organisations. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

Join EDRi now and become a superhero for the defense of our rights and freedoms online!


The EDRi Brussels office is currently looking for an intern to support our communications. The internship will focus on website, social media, publications, visual design, press work, and the production of written materials.

This is your chance to work in a fast paced environment with a passionate team of digital rights activists supporting grass-roots efforts across Europe to build a better digital future.

The internship will begin in September 2020 for a period of 4-6 months. The chosen communication champion will receive a monthly remuneration of minimum 750 EUR (according to “convention d’immersion professionnelle”).

Key tasks:

  • Social media: drafting and scheduling posts, managing followers’ inquiries, monitoring, reporting
  • Publications: designing and editing publication layouts and visuals
  • Press: drafting press releases and briefings
  • Website: assisting in making website changes, updates and migrating content from the previous website
  • Newsletter: formatting WordPress posts and designing visuals for the bi-weekly EDRi-gram
  • Dissemination strategy: mapping key digital rights and tech for good organisations
  • Ad-hoc: Assisting in other communications tasks, such as maintenance of mailing lists, monitoring media visibility, updating and analysing communications statistics

Needed:

  • Experience in social media community management and social media performance reporting
  • Strong layout, photo and visual editing skills
  • Excellent command of spoken and written English
  • Knowledge of website management (WordPress)
  • Ability to multi-task and strong time management skills
  • Strong communication and relationship building skills
  • Proactive problem-solver

Desired:

  • Experience with journalism, media or public relations
  • Interest in online activism and campaigning for digital human rights
  • Excellent story telling skills
  • Knowledge of open source software such as Matomo, Thunderbird, Libre Office

How to apply:

To apply please send a maximum one-page cover letter and a maximum two-page CV (only PDFs are accepted) by email to gail >dot< rego >at< edri >dot< org. Closing date for applications is midnight on Sunday, 23 August 2020. Due to limited capacity, we are only able to contact successful candidates.

Unfortunately, due to the lengthy process of obtaining work permits for non-EU residents, the existing right to work in Belgium is desirable for applicants.

The first phase of the recruitment process involves a written exercise and is expected to take place in the last week of August. Interviews with selected candidates will take place in the second recruitment phase during the first week of September. The internship is scheduled to start in the first weeks of September, with the possibility of remote working.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.

27 Jul 2020

SHARE Foundation presents #hiljadekamera: A documentary on biometric mass surveillance

By SHARE Foundation

SHARE Foundation has recently released a short documentary on the controversial use of the mass surveillance system in Belgrade, Serbia. Various digital experts and activists took part in the documentary, including the national Data Protection Authority in Serbia and EDRi’s own Policy and Campaigns Officer, Ella Jakubowska.

The 10-minute video answers the key questions about the use of smart surveillance technology in public spaces, and presents the arguments of the Thousands of Cameras initiative, which demands respect for the Constitution and the law.

The Government of Serbia in cooperation with Huawei has been actively working on the implementation of the “Safe City” project in Belgrade. This project involves the installation of thousands of smart surveillance cameras with object and face recognition features. The procurement also involves an artificial intelligence system used for the analytics of the feed captured with these cameras.

A civic initiative, #hiljadekamera [Thousands of Cameras], is tracking the development of the mass surveillance system in Belgrade and has so far collected and verified data on 689 facial recognition cameras across the city. Composed of concerned citizens, experts and digital rights organisations, the initiative has been vocal about the resulting deterioration of privacy for over a year. The website with a map showing the locations of smart cameras, hiljade.kamera.rs, was launched in mid-May, together with social media accounts.

In the first two months of this crowdsourcing action, the citizen map revealed twice as many smart cameras as there are on the official police list. Major discrepancies were noted in Novi Beograd, Zvezdara and Stari Grad, but also in other municipalities of Belgrade.

Still from the documentary

The Thousands of Cameras initiative gathers citizens, activists and human rights organisations asking for transparency and opening a wide public debate on the system of non-selective invasion of constitutional rights. At the same time, the hiljade.kamera.rs website serves as a portal where citizens can inform themselves about their rights or legal and technical aspects of the use of facial recognition technologies, or join the activities of the initiative.

The initiative, led by SHARE Foundation, a Belgrade-based digital rights organisation, has carried out numerous activities in the fields of crowdmapping the infrastructure, community building, research, advocacy and content production.

17 Jul 2020

IBM’s facial recognition: the solution cannot be left to companies

By Ella Jakubowska

On 8 June 2020, IBM’s CEO announced to the US Congress that – on the grounds of “justice and racial equity” – the company would “sunset” its “general purpose” facial recognition technologies. EDRi wrote to the company to seek clarification, but IBM’s response suggests it is motivated by public relations rather than fundamental rights.

Around the world, newspapers have reported that facial recognition has already been used to target protesters, from pro-democracy activists in Hong Kong, to people protesting against the death of Freddie Gray in police custody in the US. It is therefore clear that facial recognition systems are a threat for our rights to free expression, assembly and association.

The EDRi network has reported on the serious risks posed by facial recognition to the full spectrum of fundamental rights and the rule of law. Beyond facial recognition, biometric surveillance includes gait recognition (the way we walk) and voice recognition. Governments continue to use this surveillance to track us, turning public spaces into perpetual police line-ups.

Instead of enabling free, vibrant and democratic societies, facial recognition as a form of surveillance creates societies of suspicion, control and discrimination.

These risks are so severe that EDRi has called on the European Commission to ban biometric mass surveillance in both law and practice across the EU.

On 25 June, EDRi sent IBM a letter asking the company to provide more information about their commitment to stopping facial surveillance. It included questions like “Which contracts will be stopped as a result? Which contracts won’t? How does IBM define general purpose? Has IBM engaged fundamental rights experts? Do these steps apply only to the US, or to IBM’s global activities?”

On 8 July, IBM’s Chief Privacy Officer sent a short response to EDRi’s letter. Their one-page reply reiterated the general commitment of their earlier statement, and elaborated on IBM’s participation in various initiatives on artificial intelligence and ethics. Their response did not answer a single one of EDRi’s fifteen questions.

This response suggests that IBM’s motivation is driven by public relations and not fundamental rights. They have failed to provide any information that could enable us to substantively assess their commitments. Our questions still stand.

How can we know that Europe’s people and communities are protected from the threats of biometric surveillance across our public spaces, when there is a toxic trio of:

  1. a lack of public transparency by IBM;
  2. a failure at European level to provide national data protection authorities with adequate resources to hold IBM (not to mention the more prolific facial recognition players such as ClearviewAI, PimEyes, NEC, 3M and many more) to account; and
  3. a failure to ban the development, procurement and deployment of these harmful tools?

The short answer is: we can’t.

IBM’s statement in reply to EDRi’s letter shows that relying on the self-regulation or ethical principles of the companies developing these technologies can never be sufficient.

It is clear that corporate PR is not and can never be a policy solution, as exemplified by Amazon pausing the sale of its facial recognition technologies to law enforcement, despite having aggressively pushed its sinister Rekognition technology to police and communities across the US in recent years.

It is high time that the European Commission and EU member states take the necessary steps to protect the EU’s democracy and commitment to fundamental rights. We strongly urge decision makers to permanently and comprehensively ban biometric mass surveillance in upcoming rules on AI.

Read more:

Ban Biometric Mass Surveillance (13.05.2020)
https://edri.org/blog-ban-biometric-mass-surveillance/
Defund Facial Recognition (05.07.2020)
https://www.theatlantic.com/technology/archive/2020/07/defund-facial-recognition/613771/
How Amazon’s Moratorium on Facial Recognition Tech Is Different From IBM’s and Microsoft’s (11.06.2020)
https://slate.com/technology/2020/06/ibm-microsoft-amazon-facial-recognition-technology.html
Facial recognition and fundamental rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/

16 Jul 2020

A victory for us all: European Court of Justice makes landmark ruling to invalidate the Privacy Shield

By EDRi

Today, 16 July 2020, the Court of Justice of the European Union (CJEU) invalidated the EU-US Privacy Shield. The ruling is a major victory for EU residents regarding how their personal data is processed and used by platforms like Facebook. The decision underlines the need for strong privacy legislation in the US and, more generally, for close scrutiny of the data protection systems in place, to avoid the misuse and unnecessary handling of the private data of EU residents.

The huge power of US intelligence services, as disclosed by Edward Snowden in 2013, proved that the data protection and privacy rights of EU residents are not sufficiently protected. We cannot allow any foreign agency to track and surveil our communities with such a disregard for fundamental rights.

“Today’s European Court of Justice ruling is a victory for privacy against mass surveillance”, says Diego Naranjo, Head of Policy at EDRi. “This is a win both for Europeans, whose personal data will be better protected, and a call for US authorities to reform the way intelligence services operate,” he adds.

At its core, this case is about a conflict between US laws that demand surveillance and EU data protection laws that require privacy. The CJEU decided today to bin the Privacy Shield and instead reinforced that Standard Contractual Clauses (SCCs), one of the instruments through which companies can transfer data, require very close scrutiny and must be suspended if protections in the third country cannot be ensured. As noyb notes in their first reaction, Facebook and similar companies may also not use SCCs to transfer data, as the Irish Data Protection Commissioner (DPC) must stop transfers under this instrument. The ruling is great news for all of those defending human rights online.

The background

In 2013, Edward Snowden publicly disclosed that US Intelligence Agencies use surveillance programs such as PRISM to access the personal data of Europeans. The documents disclosed listed several US companies such as Apple, Microsoft, Facebook, Google and Yahoo sharing data with the US government for surveillance programs.

Based on these disclosures, Mr Max Schrems (currently of EDRi member noyb) filed a complaint against Facebook Ireland Ltd before the Irish Data Protection Commissioner (DPC). The complaint argued that under the EU-US Safe Harbor Decision 2000/520/EC, Mr Schrems’ (and therefore any European platform user’s) personal data should not be sent from Facebook Ireland Ltd (serving Facebook users outside of the US and Canada) to Facebook Inc. (the US parent company), given that Facebook has to grant the US National Security Agency access to such data.

Next steps

Today’s CJEU ruling is just the beginning. It is now up to the EU to start negotiating a new framework with the US and ensure deep reforms in order for the new framework to be valid and respectful of fundamental rights.

Read more:

CJEU invalidates “Privacy Shield” in US Surveillance case. SCCs cannot be used by Facebook and similar companies (16.07.20)
https://noyb.eu/en/cjeu

CJEU Media Page (Background, FAQ & other resources)
https://noyb.eu/en/CJEU-Media-Page

EU-US-Datenabkommen gekippt (EU-US data agreement overturned) (16.07.20)
https://digitalcourage.de/blog/2020/eu-us-datenabkommen-gekippt

In a victory for privacy, the EU Court of Justice bins EU-US Privacy Shield (16.07.20)
https://www.accessnow.org/in-a-victory-for-privacy-the-eu-court-of-justice-bins-eu-us-privacy-shield/

08 Jul 2020

Digital rights for all

By Sarah Chander

In this article we set out the background to EDRi’s work on anti-discrimination in the digital age. Here we take a first step: exploring anti-discrimination as a digital rights issue, and asking what EDRi can do about it. The project is motivated by the need to recognise how oppression, discrimination and inequality affect the enjoyment of digital rights, and to live up to our commitment to uphold the digital rights of all.

The first half of 2020 has brought with it challenges and shifts on a global scale. From COVID-19 to #BlackLivesMatter, these events necessarily impact EDRi’s work as issues of digital and human rights – our privacy, our safety, and our freedoms, online and off. Not only have these events brought issues of privacy and surveillance to the forefront of global politics, they also teach us about vulnerability.

Vulnerability is not a new concept to digital rights. It is core to the fight to defend rights and freedoms online – we are vulnerable to targeted advertising, to exploitation of our personal data, to censorship, and to increased surveillance. Particularly in times of crisis, this vulnerability is exposed even as it is exacerbated, with increased surveillance justified in the name of the public good.

How exactly can we understand vulnerability in terms of digital rights? In many senses, this vulnerability is universal. Ever-encroaching threats to our privacy, state surveillance, the mining of data on our personal lives for profit, are all universal threats facing individuals in a digital age.

Yet, just as we have seen the myth of universal vulnerability in the face of Coronavirus debunked, we are also learning that we are not equally vulnerable to threats to privacy, censorship and surveillance. State and private actors abuse their power in ways that exacerbate injustice and threaten democracy and the rule of law. The way technologies are deployed often amplifies inequalities, especially when location and/or biometric data are used. Taking a leaf out of the book of anti-racism movements: rather than being ‘vulnerable’ to discrimination, exploitation and other harms, we know these harms are imposed on us. Rather than vulnerable, some groups are marginalised – an active process in which people, institutions and structures of power are the cause.

Going forward, an awareness of how marginalised groups enjoy their digital rights is crucial to a better defence and protection for all. From the Black, brown and Roma communities who are likely to be impacted by data-driven profiling, predictive policing, and biometric surveillance; the mother who only sees online job advertisements that fit her low-income profile; the child whose online learning experience should not be tainted by harmful content; the undocumented person who does not access health services for fear of data-sharing and deportation; the queer and trans people who rely on anonymity to ensure a safe experience online; the Black woman who has had her account suspended for using anti-racist terminology; to the protester worried about protecting their identity – infringements of ‘digital rights’ manifest differently. Often, the harm cannot be corrected with a GDPR fine alone. It cannot be resolved with better terms and conditions. This is not just a matter of data protection, but of broader violations of human rights in a digital context.

These wider questions of harms and infringements in the digital age will challenge our existing frameworks. Is there a universal ‘subject’ for digital rights? Who are we referring to most often under the term ‘user’? Does this fully recognise the varying degrees of harm we are exposed to? Will the concept of rights holders as ‘users’ help or hinder this nuanced approach? Beyond ‘rights’, how do ideas of equality and justice inform our work?

EDRi members such as Privacy International have denounced data exploitation and shown how marginalised groups are disproportionately affected by digital rights violations. Panoptykon has explored how algorithmic profiling systems impact the unemployed in Poland, and integrates the risks of discrimination into its analysis of why the online advertising system is broken. At Privacy Camp, EDRi members have reflected on how children’s rights and the issue of online hate speech impact our work as a digital rights network. Building on this work, EDRi is mapping the organisations, projects and initiatives in the European digital rights field that include a discrimination angle, or that explore how people in different life situations experience digital rights. Once we have a picture of the ongoing work in the field and the main gaps, we will explore how EDRi can move forward, potentially through further research, campaigns, or efforts to connect digital and non-digital organisations.

We hope that this project will help us to meet our commitment to uphold digital rights for all, and to challenge power imbalances. We are learning that a truly universal approach recognises marginalisation in order to contest it. To protect digital rights for all, we must understand these differences, highlight them, and then fight for collective solutions.

Read more:

Who They Target – Privacy International
https://privacyinternational.org/learn/who-they-target

Profiling the unemployed in Poland: social and political implications of algorithmic decision-making (2015)
https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf

The digital rights of LGBTQ+ people: When technology reinforces societal oppressions (17.07.19)
https://edri.org/the-digital-rights-lgbtq-technology-reinforces-societal-oppressions/

10 Reasons Why Online Advertising is Broken (09.01.2020)
https://en.panoptykon.org/online-advertising-is-broken

More than the sum of our parts: a strategy for the EDRi Network (27.05.20)
https://edri.org/more-than-the-sum-of-our-parts-a-strategy-for-the-edri-network/

COVID-Tech: Surveillance is a pre-existing condition (27.05.2020)
https://edri.org/surveillance-is-a-pre-existing-condition/

08 Jul 2020

Europol: Non-accountable cooperation with IT companies could go further

By Chloé Berthélémy

There is an ongoing mantra among law enforcement authorities in Europe according to which private companies are indispensable partners in the fight against “cyber-enabled” crimes, as they are often in possession of personal data relevant to law enforcement operations. For that reason, police authorities increasingly attempt to lay their hands on data held by companies – sometimes in disregard of the safeguards imposed by long-standing judicial cooperation mechanisms. Several initiatives at European Union (EU) level, like the proposed Regulation on European Production and Preservation Orders for electronic evidence in criminal matters (the so-called “e-evidence” Regulation), seek to “facilitate” that access to personal data by national law enforcement authorities. Now it is Europol’s turn.

The Europol Regulation entered into force in 2017, authorising the European Union Agency for Law Enforcement Cooperation (Europol) to “receive” (but not directly request) personal data directly from private parties like Facebook and Twitter. The goal was to enable Europol to gather personal data, feed it into its databases and support Member States in their criminal investigations. The Commission was supposed to specifically evaluate this practice of receiving and transferring personal data with private companies after two years of implementation (in May 2019). However, there is no public information on whether the Commission actually conducted such an evaluation, what its modalities were, or what its results showed.

Despite the absence of this assessment’s results and of a fully-fledged evaluation of Europol’s mandate, the Commission and the Council consider the current legal framework too limiting and have therefore decided to revise it. The legislative proposal for a new Europol Regulation is planned for release at the end of this year.

One of the main policy options foreseen is to lift the ban on Europol’s ability to proactively request data from private companies or query databases managed by private parties (e.g. WHOIS). However, disclosures by private actors would remain “voluntary”. Just as the EU Internet Referral Unit operates without any procedural safeguards or strong judicial oversight, this extension of Europol’s executive powers would barely comply with the EU Charter of Fundamental Rights, which requires that restrictions of fundamental rights (in this case, the right to privacy) be necessary, proportionate and “provided for by law” (rather than based on ad hoc “cooperation” arrangements).

This is why, in light of the Commission’s consultation call, EDRi shared the following remarks:

  • EDRi recommends first carrying out a full evaluation of the 2016 Europol Regulation before expanding the agency’s powers, in order to base the revision of its mandate on proper evidence;
  • EDRi opposes the Commission’s proposal to expand Europol’s powers in the field of data exchange with private parties, as it goes beyond Europol’s legal basis (Article 88(2));
  • The extension of Europol’s mandate to request personal data from private parties promotes the voluntary disclosure of personal data by online service providers, which goes against the EU Charter of Fundamental Rights and national and European procedural safeguards;
  • The procedure by which Europol accesses EU databases should be reviewed and should involve an independent judicial authority;
  • The Europol Regulation should grant the Joint Parliamentary Scrutiny Group real oversight powers.

Read our full contribution to the consultation here.

Read more:

Europol: Non-transparent cooperation with IT companies (18.05.16)
https://edri.org/europol-non-transparent-cooperation-with-it-companies/

Europol: Delete criminals’ data, but keep watch on the innocent (27.03.18)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

Oversight of the new Europol regulation likely to remain superficial (12.07.16)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

(Contribution by Chloé Berthélémy, EDRi policy advisor)
