26 Jul 2019

Job alert: EDRi is looking for a Senior Policy Advisor

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations from across Europe and beyond. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

EDRi is looking for a talented and dedicated Senior Policy Advisor to join EDRi’s team in Brussels. This is a unique opportunity to be part of a growing and well-respected NGO that is making a real difference in the defence and promotion of online rights and freedoms in Europe and beyond. The deadline to apply is 15 September 2019. This full-time, permanent position is to be filled as soon as possible; the envisaged start date is 15 October 2019.

Key responsibilities:

As a Senior Policy Advisor, your main tasks will be to:

  • Monitor, analyse and report about human rights implications of EU digital policy developments;
  • Advocate for the protection of digital rights, particularly but not exclusively in the areas of artificial intelligence, data protection, privacy, net neutrality and copyright;
  • Provide policy-makers with expert, timely and accurate input;
  • Draft policy documents, such as briefings, position papers, amendments, advocacy one-pagers, letters, blogposts and EDRi-gram articles;
  • Provide EDRi members with information about relevant EU legislative processes, coordinate working groups, help develop campaign messages, and inform the public about relevant EU legislative processes and EDRi’s activities;
  • Represent EDRi at European and global events;
  • Organise and participate in expert meetings;
  • Maintain good relationships with policy-makers, stakeholders and the press;
  • Support and work closely with other staff members including policy, communications and campaigns colleagues and report to the Head of Policy and to the Executive Director;
  • Contribute to the policy strategy of the organisation.

Desired qualifications and experience:

  • Minimum of 3 years’ relevant experience in a similar role or in an EU institution;
  • A university degree in law, EU affairs, policy, human rights or related field or equivalent experience;
  • Demonstrable knowledge of, and interest in, data protection, privacy and copyright, as well as other internet policy issues;
  • Knowledge and understanding of the EU, its institutions and its role in digital rights policies;
  • Experience in leading advocacy efforts and creating networks of influence;
  • Exceptional written and oral communications skills;
  • IT skills; experience with free software and free/open operating systems, WordPress and Nextcloud is an asset;
  • Strong multitasking abilities and ability to manage multiple deadlines;
  • Experience of working with and in small teams;
  • Experience of organising events and/or workshops;
  • Ability to work in English; other European languages, especially French, are an advantage.

What EDRi offers:

  • A permanent, full-time contract;
  • A dynamic, multicultural and enthusiastic team of experts based in Brussels;
  • A competitive NGO salary with benefits;
  • The opportunity to foster the protection of fundamental rights in important legislative proposals;
  • A high degree of autonomy and flexibility;
  • An international and diverse network;
  • Internal career growth;
  • Networking opportunities.

How to apply:

To apply, please send a maximum one-page cover letter and a maximum two-page CV in English and in .pdf format to applications (at) edri.org with “Senior Policy Advisor” in the subject line by 15 September 2019 (11:59 p.m.). Candidates are expected to be available for interviews throughout September.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive for a diverse and inclusive working environment and aim for gender balance in the policy team. We therefore particularly encourage applications from individuals who identify as women. We also encourage members of groups at risk of racism or other forms of discrimination to apply for this post.

Please note that only shortlisted candidates will be contacted.

26 Jul 2019

Diego Naranjo becomes EDRi’s new Head of Policy

By EDRi

European Digital Rights is happy to announce that – following an open recruitment process – Diego Naranjo will step up from his role as Senior Policy Advisor, and start his work as EDRi’s Head of Policy in September 2019.

In his new position, Diego will occupy a central role in our advocacy strategies. He will coordinate, design, and execute our action plan to reach policy goals, and ensure that the policy team’s workplan is in line with EDRi’s overall objectives. Diego will also provide support to the EDRi office team and EDRi members.

Diego joined EDRi in October 2014 and has covered data protection, privacy and copyright for EDRi. Previously, he gained international experience at the International Criminal Tribunal for the former Yugoslavia, the EU Fundamental Rights Agency (FRA) and the Free Software Foundation Europe (FSFE). At the national level, he worked as a lawyer in Spain, co-founded the Andalusian human rights organisation Grupo 17 de Marzo, and was appointed one of the seven members of the expert group on digital rights of the Spanish Ministry of Energy, Tourism and Digital Agenda between 2017 and 2018.

In his free time, Diego plays drums in a jazz trio and practises rock climbing.


23 Jul 2019

Your family is none of their business

By Andreea Belu
  • Today’s children have the most complex digital footprint in human history, with their data being collected by private companies and governments alike.
  • The consequences for a child’s future revolve around their freedom to learn from mistakes, the reputational damage caused by past mistakes, and the traumatic effects of discriminatory algorithms.

Summer is that time of the year when parents get to spend more time with their children. Often enough, this also means children get to spend more time with electronic devices, their own or their parents’. Taking a selfie with the little one, or keeping them busy with a Facebook game or a YouTube animations playlist – these are the kinds of practices that give today’s children the largest digital footprint in human history.

Who wants your child’s data?

Mobile phones, tablets and other electronic devices can open the door to the exploitation of data about the person using the device – how old they are, what race they are, where they are located, which websites they visit, and so on. Often enough, that person is a child. But who would want a child’s data?

Companies that develop “smart” toys are the first example. In the past year, they have been in the spotlight for excessively collecting, storing and mishandling minors’ data. Perhaps you still remember the notorious case of “My Friend Cayla”, the “smart” doll that was shown to record children’s conversations and share them with advertisers. In fact, the doll was banned in Germany as an illegal “hidden espionage device”. The list of “smart” technologies collecting children’s data is long, however. Another example of a private company mistreating children’s data is Google, which offered its school products to young American students and tracked them across their different (home) devices to train other Google products. Privacy concerns also led a German Data Protection Authority (DPA) to ban Microsoft Office 365 from schools.

Besides private companies, state authorities have an interest in recording, storing and using children’s online activity. For example, a 2018 Big Brother Watch report points out that in the United Kingdom the “Department for Education (DfE) demands a huge volume of data about individual children from state funded schools and nurseries, three times every year in the School Census, and other annual surveys.” Data collected by schools (a child’s name, birth date, ethnicity, school performance, special educational needs and so on) is combined with social media profiles or other data (e.g. household data) bought from data brokers. Why link all these records? Local authorities wish to train algorithms that predict children’s behaviour in order to identify “certain” children as prone to gang affiliation or political radicalisation.

Consequences for a child’s future

Today’s children have the biggest digital footprint in human history. Sometimes the collection of a child’s data starts even before they are born, and this data will increasingly determine their future. What does this mean for children’s development and their life choices?

The extensive collection of children’s data aims at neutralising behavioural “errors” and optimising children’s performance. But mistakes are valuable for a child’s self-development – committing errors and learning lessons is an important complement to receiving knowledge from adults. In fact, a recent psychology study shows that failing to provide an answer to a test benefits the learning process. Constantly using algorithms to optimise performance based on a child’s digital footprint will damage the child’s right to make and learn from mistakes.

A child’s mistakes are not only a source of important lessons. With a rising number of attacks targeting schools’ IT systems, children’s data can end up in the wrong hands. Silly mistakes could then be used to damage the reputation of the future adult a child grows into. Some mistakes must be forgotten. Logging every step of a child’s development, however, increases the risk that past mistakes are later used against them.

Moreover, children’s data can contribute to them being discriminated against. As mentioned above, data is used to predict children’s behaviour, with authorities aiming to intervene where they consider it necessary. But algorithms reproduce human biases, for example against people of colour. What happens when a child of colour is predicted to be at risk of gang affiliation? Reports show that authorities treat children at risk of being recruited by a gang as if they were already part of the gang. Racial profiling by algorithms can therefore turn into a traumatic experience for a child.

EDRi is actively trying to protect you and your loved ones

European Digital Rights is a network of 42 organisations that promote respect for privacy and other human rights online.

Our free “Digital Defenders” booklet for children (available in many languages) teaches, in a fun and practical way, why and how to protect our privacy online. EDRi is also working on the ongoing reform of the EU’s online privacy (ePrivacy) rules – a reform with great potential to diminish online data exploitation practices.

Read more:

Privacy for Kids: Your guide to Digital Defenders vs. Data Intruders (free download)
https://edri.org/papers

DefendDigitalMe: a call to action to protect children’s rights to privacy and family life.
https://defenddigitalme.com/

Blogpost series: Your privacy, security and freedom online are in danger (14.09.2016)
https://edri.org/privacy-security-freedom/

e-Privacy revision: Document pool (10.01.2017)
https://edri.org/eprivacy-directive-document-pool/

23 Jul 2019

Civil society calls for a proper assessment of data retention

By Diego Naranjo

In preparation for a possible proposal for new legislation, the European Commission is conducting informal dialogues with different stakeholders to explore the possibilities for data retention legislation that complies with the rulings of the Court of Justice of the European Union (CJEU) and the European Court of Human Rights (ECtHR). As part of these dialogues, EDRi met with the Commission’s Directorate-General for Migration and Home Affairs (DG HOME) on 6 June 2019.

On 22 July 2019, 30 civil society organisations sent an open letter to European Commission President-elect Ursula von der Leyen and Commissioners Avramopoulos, Jourová and King, urging the European Commission to conduct an independent assessment of the necessity and proportionality of existing and potential legislative measures around data retention. Furthermore, the signatories asked the Commission to ensure that the debate around data retention does not prevent the ePrivacy Regulation from being adopted swiftly.

You can read the letter here, and below:

22 July 2019

By email:
President-elect von der Leyen
First Vice-President Timmermans

CC:
Commissioner Avramopoulos
Commissioner Jourová
Commissioner King

Dear First Vice-President Timmermans,
Dear President-elect von der Leyen,

The undersigned organisations represent non-governmental organisations working to protect and promote human rights in digital and connected spaces. We are writing to put forward suggestions to ensure compliance with the EU Charter of Fundamental Rights and the CJEU case law on data retention.

EU Member States (and EEA countries) have implemented to different degrees the CJEU ruling of 8 April 2014 invalidating the Data Retention Directive. EDRi’s 2015 study reported that six Member States [1] had kept data retention laws containing features similar or identical to those that were ruled contrary to the EU Charter. Other evidence pointed in the same direction. [2] While the personal data of millions of Europeans was being stored illegally, the European Commission did not launch any infringement procedures. On 21 December 2016, the CJEU delivered its judgment in the Tele2/Watson case regarding data retention in Member States’ national law. In the aftermath of this judgment, the Council Legal Service unambiguously concluded that “a general and indiscriminate retention obligation for crime prevention and other security reasons would no more be possible at national level than it is at EU level, since it would violate just as much the fundamental requirements as demonstrated by the Court’s insistence in two judgments delivered in Grand Chamber.” [3]

On 6 June 2019 the Council adopted “conclusions on the way forward with regard to the retention of electronic communication data for the purpose of fighting crime” which claim that “data retention is an essential tool for investigating serious crime efficiently”. The Council tasked the Commission to “gather further information and organise targeted consultations as part of a comprehensive study on possible solutions for retaining data, including the consideration of a future legislative initiative.”

While the concept of blanket data retention appeals to law enforcement agencies, it has never been shown that the indiscriminate retention of traffic and location data of over 500 million Europeans was necessary, proportionate or even effective.

Blanket data retention is an invasive surveillance measure covering the entire population. It can entail the collection of sensitive information about the social contacts (including business contacts), movements and private lives (e.g. contacts with physicians, lawyers, workers’ councils, psychologists, helplines) of hundreds of millions of Europeans, in the absence of any suspicion. Telecommunications data retention undermines professional confidentiality and deters citizens from making confidential communications via electronic communication networks. The retained data is also of high interest to criminal organisations and to unauthorised state actors from all over the world. Several successful data breaches have been documented. [4] Blanket data retention also undermines the protection of journalistic sources and thus compromises the freedom of the press. Overall, it damages the preconditions of open and democratic societies.

The undersigned organisations have therefore been in constructive dialogue with the European Commission services to ensure that the way forward includes the following suggestions:

  • The European Commission commissions an independent, scientific study on the necessity and proportionality of existing and potential legislative measures around data retention, including a human rights impact assessment and a comparison of crime clearance rates;
  • The European Commission and the Council ensure that the debate around data retention does not prevent the ePrivacy Regulation from being adopted swiftly;
  • The European Commission tasks the EU Fundamental Rights Agency (FRA) to prepare a comprehensive study on all existing data retention legislation and their compliance with the Charter and the CJEU/European Court of Human Rights case law on this matter;
  • The European Commission considers launching infringement procedures against Member States that enforce illegal data retention laws.

We look forward to your response and remain at your disposal to support the necessary initiatives to uphold EU law in this policy area.

Signatories:

European Digital Rights (EDRi)
Access Now
Chaos Computer Club (CCC)
Bits of Freedom
Asociatia pentru Tehnologie si Internet (ApTI)
Epicenter.works
Electronic Frontier Norway (EFN)
Dataskydd.net
Digital Rights Ireland
Digitalcourage
Privacy International
Vrijschrift
FITUG e.V.
Hermes Center for Transparency and Digital Human Rights
Access Info
Aktion Freiheit statt Angst
Homo Digitalis
Electronic Privacy Information Center (EPIC)
Iuridicum Remedium (IuRe)
La Quadrature du Net
Associação D3 – Defesa dos Direitos Digitais
IT-Political Association of Denmark (IT-Pol)
Panoptykon Foundation
Open Rights Group (ORG)
Electronic Frontier Finland (Effi ry)
Državljan D
Deutsche Vereinigung für Datenschutz e. V. (DVD)
//datenschutzraum
Föreningen för Digitala Fri- och Rättigheter (:DFRI)
AK Vorrat


[1] https://edri.org/edri-asks-european-commission-investigate-illegal-data-retention-laws/
[2] See, for example, Privacy International, 2017, National Data Retention Laws since the Tele-2/Watson Judgment: https://www.privacyinternational.org/sites/default/files/2017-12/Data%20Retention_2017.pdf
[3] Council document 5884/17, paragraph 13
[4] A recent example can be found here: https://techcrunch.com/2019/06/24/hackers-cell-networks-call-records-theft/

22 Jul 2019

Von der Leyen: An ambitious agenda for digital rights

By Diego Naranjo

On 16 July 2019, the European Parliament elected Ursula von der Leyen President of the European Commission with 383 votes, only nine above the minimum needed. Parts of the Socialists, Liberals and Greens initially had doubts about the candidate. However, her speech in the Plenary before the vote and her agenda [link], which incorporated a number of issues that are key for the Greens and Socialists, probably helped change some MEPs’ minds. The European United Left / Nordic Green Left (GUE-NGL) group continued to oppose von der Leyen because of her lack of ambition on social policies and climate change, on top of her background as a Minister of Defence.

Her “agenda for Europe” includes six key areas, one of which is dedicated to making “Europe fit for the digital age”.

Towards a “Europe fit for the digital age”?

Most pressingly, within her first 100 days in office, von der Leyen wants to propose legislation on the human and ethical implications of artificial intelligence (AI). It is difficult to foresee how, even building on the work already done by the High-Level Expert Group on AI, the Commission could possibly complete, within this timeframe, all the internal preparations, public consultations and inter-service consultations necessary to formulate a meaningful and future-proof piece of legislation on this topic.

We agree with von der Leyen’s aim of achieving technological sovereignty in “some critical technology areas”. Even though Europe has set a strong agenda on data protection and competition, building the necessary hardware and software that, among other features, protect privacy by design and by default could ensure better protection for all.

After the adoption of the Copyright Directive and the Terrorist Content Regulation, which both regulate certain types of online content, it has become popular to also look at updating the old E-Commerce Directive. The Directive dates back to 2000 and remains one of the cornerstones of European internet regulation. Von der Leyen has made updating it one of the goals of her Commission Presidency: a new “Digital Services Act” will be proposed in order to “upgrade our liability and safety rules for digital platforms, services, and products”. The devil of this proposal-to-come is in the details. Not all of what has leaked out of the Commission so far about the Digital Services Act gives reason for concern. But policymakers’ desire to force US-based platforms to assume more responsibility for tackling unwanted online content may well lead to increased censorship through the over-removal of perfectly legal expressions of opinion.

The Council needs to be pushed, too

Von der Leyen’s proposal to “jointly define standards for this new generation of technologies that will become the global norm” is welcome. A good first step would be for her to lead the Council of the European Union into promptly adopting a General Approach on the ePrivacy Regulation, which Member States have now blocked for almost 1000 days.

Of course, von der Leyen’s proposals, including the non-digital ones, are merely a general framework, not a workplan. Much more needs to be done by the Commission to meet the criticism over a lack of ambition regarding certain policies, such as the climate emergency.

The coming months will be key for civil society to make sure that the EU starts implementing the “human-centric” vision emphasised by von der Leyen, and upholds the values of sustainability to fight the climate emergency, social justice to prevent poverty, and more democracy and transparency to prevent authoritarian tendencies. To do all of this, the protection of fundamental rights in a technology-intensive and increasingly interconnected environment will be more necessary than ever.

More responsibility to online platforms – but at what cost? (19.07.2019)
https://edri.org/more-responsibility-to-online-platforms-but-at-what-cost/

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

E-Commerce review: Technology is the solution. What is the problem? (11.07.2019)
https://edri.org/e-commerce-review-technology-is-the-solution-what-is-the-problem/

Civil society calls Council to adopt ePrivacy now (05.12.2018)
https://edri.org/civil-society-calls-council-to-adopt-eprivacy-now/

19 Jul 2019

More responsibility to online platforms – but at what cost?

By EDRi

In an internal note published by Netzpolitik.org on 16 July 2019, the European Commission presents current problems around the regulation of digital services and proposes a revision of the current E-Commerce Directive. Such a revision would have a huge impact on fundamental rights and freedoms, which is why it is crucial for the EU to get it right this time.

From a fundamental rights perspective, the internal note contains a few good proposals, a number of bad ones, and one pretty ugly one.

The Good

In its note, the Commission maintains that no online platform should be forced to actively monitor all user-uploaded content. As the Commission rightly says, this prohibition of a general monitoring obligation is a “foundational cornerstone” of global internet regulation. It has allowed the internet to become a place for everyone to enjoy the freedom of expression and communicate globally without having to go through online gatekeepers.

Unfortunately, the note is somewhat weak with regard to upload filters: the Commission merely says that transparency and accountability should be “considered” when algorithmic filters are used. It’s no secret though that filtering algorithms make too many mistakes – they do not understand context, political activism, or satire. Creating more transparency around the logic and data behind algorithmic decisions of big online platforms is certainly a good start. However, it isn’t enough to prevent fundamental rights violations and discrimination.

The Commission note recognises the need to re-assess whether and how different platform companies should be regulated differently. However, the Commission should bear in mind that not all so-called “hosting intermediaries” covered by its note are platforms similar to Facebook, Google, or Twitter. There are successful hosting intermediaries across Europe – such as the file sharing provider Tresorit or the hosting company Gandi.net – which host their customers’ content in a largely “content-agnostic” way.

Lastly, the Commission acknowledges that since the adoption of the current E-Commerce Directive, the internet has changed considerably: a small number of US-based online platforms have developed into businesses with unprecedented market power. The Commission therefore proposes to examine “options to define a category of services on the basis of a large or significant market status (…) in order to impose supplementary conditions”. When doing so, the Commission must be careful to clearly define which services would fall into which category, in order to avoid collateral damage for other types of services, including those that have not yet been invented.

The Bad

To guide its future policy initiatives, the Commission says it wants to analyse policy options for both illegal and potentially “harmful” but legal content. While the definition of what is illegal is decided as part of the democratic process in our societies, it is unclear which content should be considered “harmful” and who makes that call. Moreover, the term “harmful” lacks a legal definition, is vague and its meaning often varies depending on the context, time, and people involved. The term should therefore not form the basis for lawful restrictions on freedom of expression under European human rights law.

The Commission acknowledges that when platform companies are pushed to take measures against potentially illegal and harmful content, their balancing of interests pushes them to over-block legal speech and to monitor people’s communications in order to avoid legal liability for user content. At the same time, the note proposes that harmful content is best dealt with through voluntary codes of conduct, which shifts the censorship burden to the platform companies. However, companies’ terms of service are often a convenient way of removing legal content, as they are vague and their redress mechanisms often ineffective.

Drawing from the experience of the EU’s Code of Conduct on Hate Speech and the Code of Practice on Disinformation, this approach pushes platform companies to measure their success only by the number of deleted accounts or removed pieces of content, and by how speedily those deletions are carried out. It does not, however, improve legal certainty for users, nor does it provide for proper review and counter-notice mechanisms, or allow for investigations into whether or not the removed material was even illegal.

The Ugly

The leaked Commission note claims that recent sector-specific content regulation laws, such as the disastrous Copyright Directive and the proposed Terrorist Content Regulation, have left “most of” the current E-Commerce Directive unaffected. This is euphemistic at the very least. Under these pieces of legislation, online platforms are required to pro-actively monitor and search for certain types of content to prevent their upload, which makes them “active” under current case law and should flush their liability exemption down the toilet. This is not changed by the Copyright Directive’s claim on paper that it shall not affect the E-Commerce Directive’s liability rules.

But the EU Commission turning a blind eye to this obvious legal inconsistency isn’t the only ugly thing in the note. The question that remains unanswered is: how can the Commission save the current liability exemption for the sake of internet users and their fundamental rights, while making it compatible with the hair-raising provisions of the Copyright Directive? It looks almost as if everybody secretly hopes that by the time the new Digital Services Act comes into force, sectoral laws such as the Copyright Directive will have been declared invalid by the European Court of Justice.

While such a turn of events would certainly be welcome, in the meantime the Commission should approach this issue transparently, and discuss with civil society and other stakeholders how the liability exemption can be salvaged and the negative impact of the sectoral laws contained.

EDRi’s recommendations

How to move ahead with an upcoming review of the E-Commerce Directive? Here are our recommendations (that are also explained in more detail in our blog post series on liability and content moderation):

  1. Before reviewing the E-Commerce Directive, policymakers should answer the following questions: What are the problems that the Digital Services Act should address? Is there a clear understanding of the nature, size, and evolution of those problems? And what does scientific evidence tell us about which solutions could help us solve those problems?
  2. The Commission should analyse and mitigate any unwanted negative side effects of the proposals in the planned Digital Services Act, in order to avoid problems being treated only superficially while immense damage is done to fundamental rights such as the freedom of expression of millions of people.
  3. The Commission should strictly limit the scope of the Digital Services Act to illegal content. It would be wise to not venture into the slippery territory of potentially harmful but legal content. Instead, the Commission should follow its own 2016 Communication on platforms.
  4. Policymakers should seize this unique opportunity to put in place fundamental rights safeguards, due process guarantees, as well as a binding notice-and-action regime. That way, the EU could take the global lead by setting the right standards for moderating online content while protecting fundamental rights.

Leaked document: EU Commission mulls new law to regulate online platforms (16.07.2019)
https://netzpolitik.org/2019/leaked-document-eu-commission-mulls-new-law-to-regulate-online-platforms/

EU Commission’s leaked internal note on revision of the current E-Commerce Directive (16.07.2019)
https://cdn.netzpolitik.org/wp-upload/2019/07/Digital-Services-Act-note-DG-Connect-June-2019.pdf

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

E-Commerce review: Technology is the solution. What is the problem? (11.07.2019)
https://edri.org/e-commerce-review-technology-is-the-solution-what-is-the-problem/

17 Jul 2019

New privacy alliance to be formed in Russia, Central and Eastern Europe

By EDRi

Civil society advocates from Russia and from Central and Eastern Europe have joined forces to form a new inter-regional NGO to promote privacy in countries bordering the EU.

The initiative also involves activists from the post-Soviet countries, the Balkans and the EU accession candidate countries. One of its primary objectives is to build coalitions and campaigns in countries that have weak or non-existent privacy protections. The project emerged from a three-day regional privacy workshop held earlier in 2019 at the Nordic Non-violence Study Group (NORNONS) centre in Sweden. The workshop agreed that public awareness of privacy in the countries represented was at a dangerously poor level, and concluded that better collaboration between advocates is one solution.

There has been a pressing need for such an alliance for many years. A vast arc of countries from Russia through Western Asia and into the Balkans has been largely overlooked by international NGOs and intergovernmental organisations (IGOs) concerned with privacy and surveillance.

The initiative was convened by Simon Davies, founder of EDRi member Privacy International and of the Big Brother Awards. He warned that government surveillance and abuse of personal information have become endemic in many of those countries:

“There is an urgency to our project. The citizens of places like Azerbaijan, Kazakhstan, Kyrgyzstan, Turkmenistan, and Armenia are exposed to wholesale privacy invasion, and we have little knowledge of what’s going on there. Many of these countries have no visibility in international networks. Most have little genuine civil society, and their governments engage in rampant surveillance. Where there is privacy law, it is usually an illusion. This situation applies even in Russia.”

A Working Group has been formed involving advocates from Russia, Serbia, Georgia, Ukraine and Belarus; its membership includes Danilo Krivokapić from EDRi member SHARE Foundation in Serbia. The role of this group is to steer the legal foundation of the initiative and to approve a formal Constitution.

The initiative’s Moderator is the former Ombudsman of Georgia, Ucha Nanuashvili. He too believes that the new NGO will fill a void in privacy activism that urgently needs filling:

“In my view, regions outside the EU need this initiative. Privacy is an issue that is becoming more prominent, and yet there is very little regional collaboration and representation. Particularly in the former Soviet states there’s an urgent need for an initiative that brings together advocates and experts in a strong alliance.”

Seed funding for the project has been provided by the Public Voice Fund of the Electronic Privacy Information Center (EPIC). EPIC’s president, Marc Rotenberg, welcomed the initiative and said he believed it would “contribute substantially” to the global privacy movement:

“We have been aware for some time that there is a dangerous void around privacy protection in those regions. We appreciate the good work of NGOs and academics to undertake this important collaboration.”

The Working Group hopes to formally launch the NGO in October in Albania. The group is presently considering several options for a name. Anyone interested in supporting the work of the initiative or wanting more information can contact Simon Davies at simon <at> privacysurgeon <dot> org.

The Nordic Non-violence Study Group (NORNONS)
https://www.nornons.org/

SHARE Foundation
https://www.sharefoundation.info/en/

EPIC’s Public Voice fund
https://epic.org/epic/publicvoicefund/

Mass surveillance in Russia
https://en.wikipedia.org/wiki/Mass_surveillance_in_Russia

Ucha Nanuashvili, Georgian Human Rights Centre
http://www.hridc.org/

17 Jul 2019

The first GDPR fines in Romania

By ApTI

The Romanian Data Protection Authority (DPA) has recently announced the first three fines applied in Romania as a result of the enforcement of the EU General Data Protection Regulation (GDPR).

On 27 June 2019, a Romanian bank was fined approximately 130 000 euro (613 912 RON) for revealing excessive personal information, such as payers’ national identification numbers and postal addresses, to payment recipients. According to the Romanian DPA, 337 042 individuals were affected between February and December 2018.

The Romanian DPA based its decision on Article 5(1)(c) of the GDPR on data minimisation, and also mentioned Recital 78. Inadequate technical and organisational measures, and the failure to design processes that reduce the personal information collected to the minimum necessary, meant that appropriate safeguards for protecting individuals’ data were never integrated.
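
For illustration, the kind of design failure the DPA describes can be avoided by allow-listing the fields that leave the bank. The following is a minimal sketch in Python, not the bank’s actual system; all field names and values are hypothetical:

    # Toy illustration of data minimisation (GDPR Article 5(1)(c)):
    # share with the payment recipient only the fields they need,
    # never the payer's national ID number or postal address.
    # All field names and values are hypothetical.

    payer_record = {
        "name": "Ana Popescu",
        "national_id": "2840215123456",                 # must not be shared
        "postal_address": "Str. Exemplu 1, Bucharest",  # must not be shared
        "iban": "RO49AAAA1B31007593840000",
        "amount": 250.00,
    }

    # Explicit allow-list: any field not named here is withheld by default.
    FIELDS_FOR_RECIPIENT = {"name", "amount"}

    def notification_for_recipient(record: dict) -> dict:
        """Build the payment notification, keeping only allow-listed fields."""
        return {k: v for k, v in record.items() if k in FIELDS_FOR_RECIPIENT}

    print(notification_for_recipient(payer_record))
    # -> {'name': 'Ana Popescu', 'amount': 250.0}

Choosing an allow-list rather than a block-list means that any field added later stays private by default – data protection by design and by default, in the spirit of Recital 78.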

One could ask why the DPA did not also fine the bank for breaching Article 5(1)(b) on purpose limitation and Article 5(1)(f) on integrity and confidentiality. The national identification numbers and addresses were collected for internal identification purposes, not for disclosure to third parties. By revealing this data to the beneficiaries of the payments, the bank failed to ensure its security and confidentiality, exposing individuals’ personal data to potential unauthorised or unlawful processing.

Another fine, of approximately 15 000 euro (71 028 RON), followed on 2 July 2019. It was imposed on a hotel for breaching the security of its clients’ personal information. A list with information about 46 guests having breakfast at the hotel was photographed by an unauthorised person and published online. The hotel reported the data breach to the DPA, which, after its investigation, fined the hotel on the basis of Article 24 of the GDPR for failing to implement appropriate technical and organisational safeguards to protect personal data. The hotel had not taken measures to secure the data against accidental or unlawful disclosure and against unauthorised processing. The DPA’s decision recalls Recital 75, which mentions the risks and types of damage associated with the processing of personal data.

A third GDPR fine, of 3 000 euro, was announced on 12 July 2019. It was applied to a website that, due to improper security measures after a platform migration, allowed public access, via two links, to a list of files containing the details of several business contacts: name, surname, postal address, email, phone, workplace and transaction details.

The first GDPR fine (04.07.2019)
https://www.dataprotection.ro/index.jsp?page=Comunicat_Amenda_Unicredit&lang=en

The second GDPR fine (only in Romanian, 08.07.2019)
https://www.dataprotection.ro/index.jsp?page=O_noua_amenda_GDPR&lang=ro

The third GDPR fine (only in Romanian, 12.07.2019)
https://www.dataprotection.ro/?page=2019%20A%20treia%20amenda%20in%20aplicarea%20RGPD&lang=ro

(Contribution by Valentina Pavel, EDRi member ApTI, Romania)

17 Jul 2019

The digital rights of LGBTQ+ people: When technology reinforces societal oppressions

By Chloé Berthélémy

Online surveillance and censorship impact everyone’s rights, and particularly those of already marginalised groups such as lesbian, gay, bisexual, transgender, queer and other (LGBTQ+) people. The use of new technologies usually reinforces existing societal biases, making those communities particularly prone to discrimination and security threats. As a follow-up to Pride Month, here is an attempt to map out what is at stake for LGBTQ+ people in digital and connected spaces.

The internet has played a considerable role in the development and organisation of the LGBTQ+ community. It represents an empowering tool for LGBTQ+ people to meet with each other, to build networks and join forces, to access information and acquire knowledge about vital health care issues, as well as to express, spread and strengthen their political claims.

We’ve got a monopoly problem

The centralisation of electronic communications services around a few platforms has created new barriers to LGBTQ+ people exercising their digital rights. Trapped in a network effect – whereby leaving the platform would represent a big loss for the user – most of them have only one place to go to meet and spread their ideas. The content they post is moderated arbitrarily by these privately owned platforms, following their own standards and “community guidelines”.

Powerful platforms’ practices result in many LGBTQ+ accounts, posts and themed ads being taken down, while homophobic, transphobic and sexist content often remains untouched. In practice, these double standards for reporting and banning content mean that when queer and transgender people use typical slurs to reclaim them and take pride in them, social media reviewers often disregard the intent and block them, whereas attackers use identical offensive terms without fearing the same punishment. Moreover, automating the process only worsens the injustice, as algorithms are incapable of telling the difference between the two cases. This leaves the LGBTQ+ community disenfranchised, without reasonable explanations or possibilities to appeal the decisions.
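
To see why context-blind automation cannot tell reclamation from attack, consider a deliberately naive keyword filter – a toy sketch, not any platform’s real system; “slur” stands in for an actual slur and both example posts are invented:

    # A deliberately naive, context-blind moderation filter: it flags a
    # term wherever it appears, with no notion of who speaks or why.

    BLOCKLIST = {"slur"}  # placeholder for an actual slur

    def is_flagged(post: str) -> bool:
        """Flag a post if any blocklisted word appears in it."""
        words = {w.strip(".,!?\"'").lower() for w in post.split()}
        return not BLOCKLIST.isdisjoint(words)

    reclaiming = "We call ourselves slur with pride. The word is ours now."
    attacking = "People like you are just a slur."

    print(is_flagged(reclaiming))  # True
    print(is_flagged(attacking))   # True - same verdict, opposite intent

Both posts get the same verdict because the filter sees only the word, never the speaker or the intent; the double standard described above is baked into this kind of design.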

Community standards apply both to the open part of social media and to the related private chats (such as Facebook Messenger and Wired). Since those networks play an essential role in discussing queer issues, dating and sexting, LGBTQ+ people are highly dependent on the platforms’ tolerance for sexual expression and nudity. Sudden changes in community guidelines are sometimes carried out without any user consultation or control. For example, the LGBTQ+ community was particularly harmed when Tumblr decided to no longer allow Not Safe For Work (NSFW) content and Facebook banned “sexual solicitation” on its services.

Another example of companies’ policies affecting transgender people specifically is the rising trend of strict real-name policies online. Authentication requirements based on official ID documents prevent transgender people from using their new name and identity. For many of them, notably those living in repressive countries, it is difficult to have their name and gender markers changed. As a consequence, they see their accounts deleted on a regular basis, after a few months of use, losing all their content and contacts. With little chance of retrieving their accounts, their freedoms online are severely hindered.

There is no such thing as a safe space online

Even when LGBTQ+ people leave the social media giants, they cannot necessarily turn to a safer platform online. Grindr, the biggest social networking app for gay, bi, trans and queer people, was used by the Egyptian authorities to track down and persecute LGBTQ+ people. Using fake profiles, the police were able to collect evidence and to imprison, torture and prosecute people for allegedly illegal sexual behaviour. This has had a chilling effect on the community, making it reluctant to engage in new encounters.

Other dangerous practices involve the outing of LGBTQ+ people online. For instance, a Twitter account was purposely set up in Paraguay to expose people’s sexual orientation by extracting revealing content, such as nude pictures posted on Grindr, and posting it publicly. Despite many complaints made against the account, it disseminated content for six weeks before the platform finally deleted it. The damage to the victims is long-term and irreparable. This is particularly the case in countries where there is no hate crime legislation, or where such legislation is not fully implemented, resulting in impunity for homophobic and transphobic violence by State and non-State actors.

Technology is not neutral

The poor security with which those services and apps are often built reflects their Western-centric, heteronormative and gender-biased nature. This endangers already vulnerable LGBTQ+ communities when the services spread globally and go viral, especially in the Global South. Technologies, in particular emerging ones, can be misused to discriminate. For instance, a facial recognition system has been trained to identify homosexual people based on their facial features. Not only is the purpose of such a technology dubious, it is also dangerous if scaled up and put in the hands of repressive governments.

The main problem is that the affected communities are not involved in the production stages. It is hard to incentivise profit-driven companies to adapt their services to specific needs while keeping them free and accessible for all, and marginalised groups can usually not afford additional premium security features. Furthermore, the developer community remains majority white, middle-aged and heterosexual, with little understanding of local realities and dangers in other regions of the world. Encouraging LGBTQ+ people with diverse regional backgrounds to join this community would noticeably improve the offer of community-led, free, open and secure services. A lot remains to be done to push companies to engage with affected communities in order to develop tools that are privacy-friendly and inclusive by design.

One good example is the Grindr initiative by EDRi member ARTICLE 19, which includes the ability to change the app icon’s appearance and the addition of a password lock to better protect LGBTQ+ users.

This article is based on an interview with Eduardo Carrillo, digital LGBTQI+ activist in Paraguay and project director at TEDIC. TEDIC applies a gender perspective to its work on digital rights and carries out support activities for the local LGBTQ+ community to mitigate the discrimination it encounters.

In this article, we use the term LGBTQ+ to designate lesbian, gay, bisexual, transgender and queer people, and all the other gender identities and sexual orientations that do not correspond to heterosexual and cisgender norms (a person is cisgender when their gender identity matches the sex assigned at birth).

Women’s rights online: tips for a safer digital life (08.03.2019)
https://edri.org/womens-rights-online-tips-for-a-safer-digital-life/

How to retrieve our account on Facebook: Online censorship of the LGBTQI community (02.05.2018)
https://www.tedic.org/como-recuperar-nuestra-cuenta-en-facebook-censura-en-linea-hacia-colectivo-lgbtqi/

App Security Flaws Could Create Added Risks for LGBTQI Communities (17.12.2018)
https://cyborgfeminista.tedic.org/app-security-flaws-could-create-added-risks-for-lgbtqi-communities/

No, Facebook’s updated sex policy doesn’t prohibit discussing your sexual orientation (06.12.2018)
https://www.wired.com/story/facebooks-hate-speech-policies-censor-marginalized-users/

Designing for the crackdown (25.4.2018)
https://www.theverge.com/2018/4/25/17279270/lgbtq-dating-apps-egypt-illegal-human-rights

(Contribution by Chloé Berthélémy, EDRi)

17 Jul 2019

“SIN vs Facebook”: First victory against privatised censorship

By Panoptykon Foundation

In an interim measures ruling of 11 June 2019, the District Court in Warsaw temporarily prohibited Facebook from removing fan pages, profiles and groups run by the Civil Society Drug Policy Initiative (SIN) on Facebook and Instagram, and from blocking individual posts. SIN, a Polish non-profit organisation promoting evidence-based drug policy, had filed a lawsuit against Facebook in May 2019, with the support of the Polish EDRi member Panoptykon Foundation.

In its lawsuit, SIN argued that by blocking its content, Facebook restricted, in an unjustified way, the organisation’s ability to disseminate information, express opinions and communicate with its audience. Concerned about further censorship, SIN was unable to carry out its educational activities freely. Moreover, the removal of content suggested that the organisation’s activity on the platforms was harmful, undermining SIN’s credibility. By granting the request for interim measures, the court decided that SIN had substantiated its claims. Although this is only the beginning of the trial, it is a first important step in the fight against excessive and opaque content blocking practices on social media.

The interim measures ruling of 11 June implies that – at least until the final judgement in the case – SIN’s activists may carry out their activities on drug policy without fearing that they will suddenly lose the possibility to communicate with their audience. The court has furthermore obliged Facebook to store the profiles, fan pages and groups deleted in 2018 and 2019, though not to restore them. Storing them would allow SIN – if it were to win the case – to have them quickly restored, together with the entire published content, comments by other users, as well as followers and people who liked the fan pages. This is not the only good news: the court has also confirmed that Polish users can enforce their rights against the tech giant in Poland. Unfortunately, at this stage the court did not approve the request to pre-emptively restore the deleted fan pages, profiles and groups for the duration of the trial. The court argued that this would be a far-reaching measure which would, in practice, amount to recognising the fundamental claim expressed in the lawsuit.

In June 2019, educational posts in which SIN’s educators cautioned against the use of some substances during hot weather were again blocked on Instagram. SIN received a warning that “subsequent infringements of the community standards” may result in the removal of the entire profile. Now, after the interim measures ruling, SIN will be able to catch its breath and continue its social media activity without worrying that it may be blocked again at any time. This “private censorship” is one of the modern-day threats to freedom of speech. Platforms such as Facebook and Instagram have become “gatekeepers” of online expression, and, just as in SIN’s case, there is often no viable alternative to them. Getting blocked on these platforms is a significant limitation on disseminating information.

The court’s interim decision means that, for now, Facebook will not be able to arbitrarily decide to block content published by SIN. By issuing this decision, the court also recognised its jurisdiction to hear the case in Poland under Polish law. This is great news for Polish users, and possibly for users from other EU Member States. In cases against global internet companies, the possibility to claim one’s rights before a domestic court is a condition for viable access to justice – if the only option were to sue them in their home countries, the costs, the language barrier and a foreign legal system would make it very difficult, if not impossible, for most citizens to exercise their rights.

However, the court’s decision is not final: once it has been delivered, Facebook Ireland will have the right to appeal it before the Court of Appeal. The decision was made ex parte, solely on the basis of the position presented by SIN, without the participation of the other party; it only implements a temporary measure and does not prejudge the final verdict of the trial – the main proceedings are only about to begin.

Panoptykon Foundation
https://panoptykon.org/

SIN vs Facebook
https://panoptykon.org/sinvsfacebook

SIN v Facebook: Tech giant sued over censorship in landmark case (08.05.2019)
https://edri.org/sin-v-facebook/

(Contribution by Anna Obem and Dorota Glowacka, EDRi member Panoptykon Foundation, Poland)
