03 Dec 2019

Wanted: Communications Intern!

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

Join EDRi now and become a superhero for the defence of our rights and freedoms online!

The EDRi Brussels office is currently looking for an intern to support our communications, campaigning and community coordination team. The internship will focus on social media, publications, campaigning, press work, and the production of written materials. The intern will also assist in tasks related to community coordination.

The internship will begin in February 2020 and last 4-6 months. You will receive a monthly remuneration of at least 750 EUR (under a “convention d’immersion professionnelle”).

Key tasks:

  • Social media: drafting posts, engaging with followers, monitoring
  • Layouts and visuals: layouts and editing of visuals (specifically for social media)
  • Writing and editing: drafting and editing of press releases and briefings, newsletter articles, and supporter mailings
  • Assisting in other communications, campaigning and community coordination tasks, such as maintenance of mailing lists, monitoring media visibility, updating and analysing communications statistics, and event organisation

Needed:

  • experience in social media community management and publications
  • layout, photo and visual editing skills
  • excellent skills in writing and editing
  • fluent command of spoken and written English

Desired:

  • experience in journalism, media or public relations
  • interest in online activism and campaigning for digital human rights

How to apply:

To apply please send a maximum one-page cover letter and a maximum two-page CV (only PDFs are accepted) by email to heini >dot< jarvinen >at< edri >dot< org. Closing date for applications is 5 January 2020. Interviews with selected candidates will take place during the first half of January, and the internship is scheduled to start in February.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.

25 Nov 2019

New Protocol on cybercrime: cutting red tape ≠ cutting human rights safeguards

By Chloé Berthélémy

From 20 to 22 November 2019, European Digital Rights (EDRi) and the Electronic Frontier Foundation (EFF) took part in the Octopus Conference 2019 at the Council of Europe (CoE) to present the comments submitted by EFF, EDRi, IT-Pol Denmark and the Electronic Privacy Information Center (EPIC), calling for the draft provisions of the Second Additional Protocol to the Cybercrime Convention to respect human rights. The Protocol sets the conditions for access to electronic data by law enforcement in the context of criminal investigations.

17 civil society organisations joined the call in a letter to the CoE Cybercrime Committee (T-CY), urging it to ensure that the negotiations between more than 60 countries include substantial human rights safeguards in the draft text. The list of potential signatories goes far beyond the Council of Europe Parties and includes countries like the United States, Turkey, Morocco and Azerbaijan.

The procedures proposed by the Cybercrime Convention Committee (T-CY) exacerbate the challenges of the Cybercrime Convention (CCC), and create the potential for serious interference with human rights.

– the letter reads.

While the United States and the EU are engaging in a race to the bottom against one another in terms of privacy protections, it is essential that the T-CY listens to civil society concerns and avoids creating a mechanism that bypasses critical legal protections inherent in the current Mutual Legal Assistance Treaties (MLATs) – falsely considered as “red tape”.

Read the letter here.

Joint civil society response to discussion guide on a 2nd Additional Protocol to the Budapest Convention on Cybercrime (28.06.2018)
https://edri.org/files/consultations/globalcoalition-civilsocietyresponse_coe-t-cy_20180628.pdf

Nearly 100 public interest organisations urge Council of Europe to ensure high transparency standards for cybercrime negotiations (03.04.2018)
https://edri.org/global-letter-cybercrime-negotiations-transparency/

New Protocol on cybercrime: a recipe for human rights abuse? (25.08.2018)
https://edri.org/global-letter-cybercrime-negotiations-transparency/

22 Nov 2019

ePrivacy: EU Member States push crucial reform on privacy norms close to a dead end

By EDRi

Today, on 22 November 2019, the Permanent Representatives Committee of the Council of the European Union (COREPER) rejected the Council’s position on a draft ePrivacy Regulation.

“In this era of disinformation and privacy scandals, refusing to ensure strong privacy protections in the ePrivacy Regulation is a step backwards for the EU,” said Diego Naranjo, Head of Policy at European Digital Rights (EDRi). “By first watering down the text and now halting the ePrivacy Regulation, the Council takes a stance to protect the interests of online tracking advertisers and to ensure the dominance of big tech. We hope the European Commission will stand on the side of citizens by defending the proposal and asking the Council to ensure a strong revised text soon in 2020.”

“The ePrivacy Regulation aims to strengthen users’ right to privacy and create protective measures against online tracking. Instead, EU states turned it into a surveillance toolkit,” said Estelle Massé, Senior Policy Analyst at EDRi member Access Now. “Today’s rejection should not be a signal that the reform cannot happen. Instead, it should be a signal that states must go back to the negotiating table and deliver what was promised to EU citizens: stronger privacy protections.”

In January 2017, the European Commission launched its proposal for a new ePrivacy Regulation, aiming to complement the General Data Protection Regulation (GDPR) and to protect the right to privacy and the confidentiality of communications. An update to the outdated 2002 ePrivacy Directive is sorely needed – in today’s world, where technology is intertwined with our everyday lives, a strong regulation is crucial to protect us against the negative impacts of “surveillance capitalism”, to safeguard the functioning of our democracies, and to put people at the core of the internet. The European Parliament took a strong stance in favour of robust protections when it adopted its position in October 2017. For over two years, the Council prevented the proposal from advancing, presenting suggestions that lowered the fundamental rights protections proposed by the Commission and strengthened by the Parliament.

Today, the Council voted to reject its own text. This leaves the door open for current practices that endanger citizens’ rights to continue. Now it is either up to the Commission to withdraw the entire proposal and leave citizens unprotected, or up to the Council to prepare a new text that can get enough support to move the proposal forward. To meet the aims set for the ePrivacy Regulation, the new text should ensure privacy by design and by default, protect communications both in transit and when stored, ban tracking walls, prevent backdoors that allow private communications to be scanned without a court order, and prevent secondary processing of communications data without consent.

Read more:

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

EU states vote on ePrivacy reform: We were promised more privacy. Instead, we are getting a surveillance toolkit. (22.11.2019)
https://www.accessnow.org/eu-states-vote-on-eprivacy-reform-we-were-promised-more-privacy-instead-we-are-getting-a-surveillance-toolkit/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Five reasons to be concerned about the Council ePrivacy draft (26.09.2018)
https://edri.org/five-reasons-to-be-concerned-about-the-council-eprivacy-draft/

Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)
https://edri.org/open-letter-to-eu-member-states-deliver-eprivacy-now/

The most recent European Council ePrivacy text (15.11.2019)
https://www.politico.eu/wp-content/uploads/2019/11/file.pdf

20 Nov 2019

Dance. Enjoy. Share. With Care.

By Ella Jakubowska
  • Anyone using cloud services should be aware of what the “cloud” is, what it is not, and how it can affect our privacy and security.
  • Our information stored in “clouds” can be protected if the EU says “Yes!” to a strong ePrivacy Regulation, greater enforcement of the General Data Protection Regulation (GDPR), and drops the “e-evidence” proposals.

Storing our information in “clouds” gives us access to funny photos of our dogs at the touch of a button, lets us back up our mobile phones so that we don’t lose our crush’s number forever if we drop our phone down the toilet (oops!), and gives us the means to binge-watch that addictive TV show that everyone is talking about. It can even amplify computing capacity, giving doctors the power to treat rare diseases more effectively. Many of these things were unimaginable just ten years ago – but today, we carry this incredible power in the palm of our hands.

It is important that cloud users have the knowledge and control to upload data to cloud services safely, securely, and in an enjoyable way. Your personal data should be protected online, including when you upload it to and store it in the cloud. One of the fundamental aims of 2018’s General Data Protection Regulation (GDPR), after all, was to protect the personal data of all citizens in the EU, and to set a globally-leading standard for personal data protection.

The not-so-fluffy cloud

Yet, while the word “cloud” sounds soft and fluffy, the truth is that there is no such thing as “the cloud” or “your cloud”. People outsource the storage of data from their own devices to the internet servers of a private company. In reality, these servers are “the cloud”, and the company they belong to most often profits from gathering more and more data. In some cases, uploaded data will be subject to only very weak data protections. And with the proposed ePrivacy text – a vital complement to the GDPR – still stuck at the European Council after over two and a half years, anyone using the internet in the EU is left vulnerable and inadequately protected.

EU laws can keep it together

This is where stronger EU legislation is needed. Under the European Parliament’s ePrivacy text, a wide range of online rights will be protected. This includes the storage, transit and encryption of online communications, which would help to protect users when their communications data is backed up to the cloud. Personal data, other than communications data, is already protected by the GDPR. This is important because, as recent cases in Germany have shown, unlawful data breaches of minors’ data are already happening in Microsoft’s cloud services.

This is also an issue in the context of the so-called “e-evidence” debate on proposed legislation for law enforcement to access European citizens’ data across borders, straight from service providers. The legislation would allow police forces from other EU countries to directly access the private information that you have stored on the cloud: without a judicial warrant, without you or your own government knowing that this is happening, and even without you being a suspect. Under this proposal, cloud providers have very little opportunity to refuse requests to hand over cloud data, and crucial human rights accountability measures and due process mechanisms are completely missing. E-evidence legislation therefore poses a huge threat to the security and privacy of data that is stored on a cloud.

The cloud can give you flexibility, convenience and peace of mind – but it is important to know where your data is going, and who might have access to it. The cloud is no longer a source of reassurance and convenience if a private company (or a hacker) can misuse funny videos of you and your friends, personal messages with your parents about a health condition, or an intimate browser history that contains information about your sexual activities. In order to protect the information of millions of European citizens, the EU must adopt ePrivacy, enforce GDPR and drop the e-evidence proposals.

Remember, data protection is cool – and knowing your rights pays off!

Read more:

Your family is none of their business (23.07.2019)
https://edri.org/your-family-is-none-of-their-business/

Real-time bidding: The auction for your attention (04.07.2019)
https://edri.org/real-time-bidding-the-auction-for-your-attention/

Video: Dance. Enjoy. Share. With care.
https://www.youtube.com/watch?v=5N_lrtOkW3g

Right a wrong: ePrivacy now! (09.10.2019)
https://edri.org/right-a-wrong-eprivacy-now/

“E-evidence”: Repairing the unrepairable (14.11.2019)
https://edri.org/e-evidence-repairing-the-unrepairable/

(Contribution by Ella Jakubowska, EDRi intern)

20 Nov 2019

A privately managed public space?

By Heini Järvinen
  • Our “public spaces” online where we meet each other, organise, or speak about social issues, are often controlled and dominated by private companies (platforms like Facebook and YouTube).
  • Pushing platforms to decide which opinions we are allowed to express and which not is not going to solve major problems in our society.
  • The EU rules on online content moderation are soon going to be reviewed. To ensure our right to freedom of expression, we need to make sure these updated rules will not encourage online platforms to over-remove content to avoid being taken to court.

Your video on YouTube got removed, without a warning. Or the page you manage on Facebook was blocked because your posts breached the “community standards”. You’ve sent messages to the platform to sort this out, but there’s no reply, and you have no way of getting your content back online. Maybe you’ve experienced this? Or if not, you surely know someone who has.

The internet is a great place – a sort of “public space” where everyone has equal possibilities to share their ideas, creations, and knowledge. However, the websites and platforms where we most frequently hang out, share and communicate, like Facebook, Twitter, Instagram or YouTube, are not actually public spaces. They are spaces controlled by private businesses, with private business interests. That’s why your page got blocked, and your video removed.

Anyone should be free to express their opinions and views, even if not everyone likes those opinions, as long as they aren’t breaking any laws. The problem is that the private businesses dominating our “public spaces” online would rather delete anything that looks even remotely risky for them (a potential copyright infringement, for example). There are also financial interests: these businesses exist to make a profit, and if certain content doesn’t please their ad business clients, they will likely limit its visibility on their platform. And they can easily do it, because they can use their arbitrary “terms of service” or “community standards” as a cover, without having to justify their decisions to anyone. This is why it shouldn’t be left to online companies to decide what is illegal and what is not.

There’s an increasing trend to push online platforms to do more about “harmful” content and to take more responsibility. However, obliging the platforms to remove content is not going to solve the problems of online hate speech, violence, or the polarisation of our societies. Rather than fiddling around trying to treat the symptoms, the focus should be on addressing the underlying societal problems.

Whenever content is taken down, there’s always a risk that our freedom to express our opinions is being limited in an unjustified way. It is, however, better that decisions about what you can and cannot say are made based on law rather than on the interests of a profit-seeking company.

There are rules in place that limit online companies’ legal responsibility for the content users post or upload on their platforms. One of them is the EU E-Commerce Directive. To update the rules on how online services should deal with illegal and “harmful” content, the new European Commission will likely soon review it and replace it with a new set of rules: the Digital Services Act (DSA). To ensure we can keep our right to freedom of expression, we need to make sure these updated rules will not encourage online platforms to over-remove content.

When dealing with videos, texts, memes and other content online, we need a nuanced approach that treats different types of content differently. What do you think the future of freedom of expression online should look like?

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

Facebook and Google’s pervasive surveillance poses an unprecedented danger to human rights (21.11.2019)
https://www.amnesty.org/en/latest/news/2019/11/google-facebook-surveillance-privacy/

LGBTQ YouTubers are suing YouTube over alleged discrimination (14.08.2019)
https://www.theverge.com/2019/8/14/20805283/lgbtq-youtuber-lawsuit-discrimination-alleged-video-recommendations-demonetization

(Contribution by Heini Järvinen, EDRi)

20 Nov 2019

ePrivacy hangs in the balance, but it’s not over yet…

By Ella Jakubowska

Unless you have been living under a rock (read: outside the “Brussels bubble”) you will likely be aware of the long and winding road on which the proposed ePrivacy Regulation has been for the last three years. This is not unusual for a piece of European Union (EU) legislation – the 2018 General Data Protection Regulation (GDPR) is a great example of the painful, imperfect, but ultimately fruitful processes that EU law goes through, in this case in a marathon spanning almost 25 years! Even now, Data Protection Authority (DPA) fines, litigation and regulatory reviews are testing the benefits and boundaries of GDPR, helping to shape it progressively into an even more effective piece of legislation.

Let us rewind to January 2017, when the European Commission delivered its long-awaited proposal for a Regulation on Privacy and Electronic Communications, also known as “ePrivacy”. In October of the same year, the European Parliament Committee on Civil Liberties, Justice and Home Affairs (LIBE) proposed a comprehensive series of improvements to the text in order to better protect fundamental rights. This included enhanced confidentiality of communications and privacy as a central foundation of online product and service design. We welcomed these amendments for their respect for and promotion of digital rights.

Unfortunately, the European Council has since seriously watered down the draft text, introducing worrying limits to the safeguards that ePrivacy offers for personal data and communications. In response to the worsening protections – and the negotiations lingering like a bad smell – EDRi, Access Now, Privacy International, and two other civil society organisations co-authored an open letter to the EU member states on 10 October 2019, urging them to swiftly adopt a strong ePrivacy Regulation. Yet the most recent European Council text has still not improved in any respect. Concerningly, its introductory remarks use the emotive age-old arguments of child protection and terrorism to justify some vague “processing of communications data for preventing other serious crimes”. We believe this represents a slippery slope of surveillance and intrusion, and undermines the fundamental purpose of ePrivacy: protecting our fundamental right to privacy and confidentiality of communications.

The political stage of the file is now coming to a close after almost three painful years of back and forth. The imminent fate of the Council’s proposal will be decided on 22 November 2019 at the COREPER level. If the Member States vote to adopt the Council text, the file will move forward to the trilogue stage, where the Council, the European Parliament and the Commission will negotiate the final shape of the legislation. If the Member States vote to reject the Council text, however, all options – including the complete withdrawal of the ePrivacy proposal by the European Commission – will be on the table.

Despite these challenges, ePrivacy remains an essential piece of legislation for safeguarding fundamental rights in the online environment. Complementing the GDPR, a strong ePrivacy text can still protect the privacy of individuals, ensure mechanisms for meaningful consent, and establish rules on the role of each Member State’s Data Protection Authority (DPA) as their supervisory authority. It will embed privacy by design and default, making the internet a more secure space for everyone.

To quote an infamous political figure, we will not “die in a ditch” over ePrivacy. Whatever the outcome of the COREPER vote, we will continue to work tirelessly to secure the right to online privacy across Europe. So get your popcorn ready, stay tuned for the next episode in this epic saga, and be prepared in the event of some last minute plot-twists!

The History of the General Data Protection Regulation
https://edps.europa.eu/data-protection/data-protection/legislation/history-general-data-protection-regulation_en

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Five reasons to be concerned about the Council ePrivacy draft (26.09.2018)
https://edri.org/five-reasons-to-be-concerned-about-the-council-eprivacy-draft/

Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)
https://edri.org/open-letter-to-eu-member-states-deliver-eprivacy-now/

The most recent European Council ePrivacy text (15.11.2019)
https://www.politico.eu/wp-content/uploads/2019/11/file.pdf

(Contribution by Ella Jakubowska, EDRi intern)

14 Nov 2019

“E-evidence”: Repairing the unrepairable

By EDRi

On 11 November 2019, Member of the European Parliament (MEP) Birgit Sippel (S&D), Rapporteur for the Committee on Civil Liberties, Justice and Home Affairs (LIBE) presented her draft Report, attempting to fix the many flaws of the European Commission’s “e-evidence” proposal. Has Sippel MEP been successful at repairing the unrepairable?

The initial e-evidence proposal by the Commission aims to allow law enforcement agencies across the EU to access electronic information more quickly by requesting it directly from online service providers in other EU countries. Unfortunately, the Commission forgot to build in meaningful human rights safeguards that would protect suspects and other affected persons from unwarranted data access.

The Commission proposal is not only harmful, but simply not needed at this point. To speed up cross-border access to data for law enforcement, there already is the European Investigation Order (EIO). It has only existed since 2018 and has never been systematically evaluated, let alone improved.

From a fundamental rights perspective, the draft Report comes with a number of very important improvements. If adopted, they would help fix some of the worst flaws in the original e-evidence proposal.

Here is what Member of the European Parliament (MEP) Birgit Sippel suggests, and what that means for fundamental rights:

👍 Framing is important. While the Commission’s proposal treats all information accessed under the new law as if it was admissible evidence, Sippel MEP recalls that what law enforcement actually accesses is people’s data. Only a fraction of that data is likely to be relevant for ongoing criminal proceedings. She therefore correctly proposes to replace “electronic evidence” with the more accurate term “electronic information”.

👍 One of the Commission proposal’s biggest flaws is that it would allow any law enforcement agency or court in the EU to force companies like email providers and social networks in other EU countries to directly hand over the personal information of their users. The judicial authorities of that other EU country would no longer be involved and would in fact never know about the data access. To mitigate those risks, Sippel MEP proposes a mandatory notification to the judicial authorities of the country in which the online provider is located. That way, authorities can intervene in cases that threaten fundamental rights and stop unwarranted data access requests.

👍 & 👎 Sippel MEP proposes that authorities requesting data must consult the judicial authorities of the country in which the affected person has their habitual place of residence “where it is clear” that the person whose data is sought is residing in another country. Involving the country of residence makes a lot of sense because only their authorities may know about particular protections a lawyer, doctor, or journalist has. Unfortunately, according to the draft Report, this consultation only needs to happen where it is clear that the affected person lives in another country—a term that is undefined and easy to bend.
🔧 How to repair it: The involvement of the country of residence should be mandatory when it’s known or could have been known that the person whose data is sought lives there.

👎 Although the judicial authorities of the affected person’s country of residence would be consulted in some instances under the proposal by Sippel MEP (see point above), their opinion in any given case would only be “duly taken into account”.
🔧 How to repair it: The authorities of the affected person’s country of residence should be able to block infringing foreign data requests. The affected person’s country of residence is usually best placed to protect their fundamental and procedural rights and to know about potential special protections of journalists, doctors, lawyers, and similar professions.

👍 The draft Report streamlines and fixes the skewed data definitions introduced by the Commission and brings them in line with existing EU legislation. “Traffic data” replaces former overlapping “access” and “transactional” data categories. IP addresses, which can be very revealing of private lives and daily habits, benefit from a higher protection level by being defined as traffic data.

👍 The draft Report introduces an extensive list of possible grounds for non-recognition or non-execution of foreign data access requests, aimed at protecting accused persons from illegitimate requests. The grounds for refusal include the non-respect of the principles of ne bis in idem (one cannot be judged twice for the same offence) and of dual criminality (the investigated conduct needs to be a criminal offence in all jurisdictions concerned).

👍 Sippel MEP proposes to extend the data access request instruments created by the new law to the defence of the suspected or accused person. This approach strengthens the principle of “equality of arms”, according to which the suspected or accused person should have a genuine opportunity to prepare and present their case in the event of a trial.

👍 The LIBE draft Report beefs up the rights of the affected person to obtain effective remedies and to a fair trial. It proposes that the person who is targeted by a data access request should be notified by default by the service provider, except in circumstances where such notification would negatively impact an investigation. In that case, the state requesting the data (the issuing state) has to obtain a court order to receive it.

👎 Lastly, the draft Report fails to question whether direct cooperation with online service providers is at all needed. The Commission argues that direct cooperation for law enforcement is necessary to prevent relevant electronic evidence from being removed by suspects. However, the proposed instrument of a European Preservation Order would be less intrusive and most likely sufficient to achieve that aim (similar to a “quick data freeze” order).
🔧 How to repair it: The European Production Order Certificate (EPOC) should be completely removed from the law. Law enforcement agencies should use the European Preservation Order to quick-freeze data they believe could contain relevant electronic evidence. The acquisition of that data should be done through the safer channels of the European Investigation Order (EIO) and Mutual Legal Assistance Treaties (MLATs).

LIBE draft Report on the “e-evidence” proposal (24.10.2019)
https://www.europarl.europa.eu/doceo/document/LIBE-PR-642987_EN.pdf

EDRi Recommendations on cross-border access to data (25.04.2019)
https://edri.org/files/e-evidence/20190425-EDRi_PositionPaper_e-evidence_final.pdf

Cross-border access to data for law enforcement: Document pool
https://edri.org/cross-border-access-to-data-for-law-enforcement-document-pool/

EDPS opinion on Proposals regarding European Production and Preservation Orders for electronic evidence in criminal matters (06.11.2019)
https://edps.europa.eu/sites/edp/files/publication/opinion_on_e_evidence_proposals_en.pdf

EU rushes into e-evidence negotiations without common position (19.06.2019)
https://edri.org/eu-rushes-into-e-evidence-negotiations-without-common-position/

(Contribution by Jan Penfrat and Chloé Berthélémy, EDRi)

12 Nov 2019

EDRi is looking for a Senior Policy Advisor

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations from across Europe and beyond. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

EDRi is looking for a talented and dedicated Senior Policy Advisor to join EDRi’s team in Brussels. This is a unique opportunity to be part of a growing and well-respected NGO that is making a real difference in the defence and promotion of online rights and freedoms in Europe and beyond. The deadline to apply is 2 December 2019.

Key responsibilities:

As a Senior Policy Advisor, your main tasks will be to:

  • Monitor, analyse and report about human rights implications of EU digital policy developments;
  • Advocate for the protection of digital rights, particularly but not exclusively in the areas of artificial intelligence, data protection, privacy, net neutrality and copyright;
  • Provide policy-makers with expert, timely and accurate input;
  • Draft policy documents, such as briefings, position papers, amendments, advocacy one-pagers, letters, blogposts and EDRi-gram articles;
  • Provide EDRi members with information about relevant EU legislative processes, coordinate working groups, help develop campaign messages, and provide the public with information about relevant EU legislative processes and EDRi’s activities;
  • Represent EDRi at European and global events;
  • Organise and participate in expert meetings;
  • Maintain good relationships with policy-makers, stakeholders and the press;
  • Support and work closely with other staff members including policy, communications and campaigns colleagues and report to the Head of Policy and to the Executive Director;
  • Contribute to the policy strategy of the organisation;

Desired qualifications and experience:

  • Minimum 3 years of relevant experience in a similar role or EU institution;
  • A university degree in law, EU affairs, policy, human rights or related field or equivalent experience;
  • Demonstrable knowledge of, and interest in data protection, privacy and copyright, as well as other internet policy issues;
  • Knowledge and understanding of the EU, its institutions and its role in digital rights policies;
  • Experience in leading advocacy efforts and creating networks of influence;
  • Exceptional written and oral communications skills;
  • IT skills; experience using free software, free/open operating systems, WordPress and Nextcloud is an asset;
  • Strong multitasking abilities and ability to manage multiple deadlines;
  • Experience of working with and in small teams;
  • Experience of organising events and/or workshops;
  • Ability to work in English. Other European languages, especially French, are an advantage.

What EDRi offers:

  • A permanent, full-time contract;
  • Salary: 3 200 euros gross per month;
  • A dynamic, multicultural and enthusiastic team of experts based in Brussels;
  • The opportunity to foster the protection of fundamental rights in important legislative proposals;
  • A high degree of autonomy and flexibility;
  • An international and diverse network;
  • Networking opportunities.

Starting date: as soon as possible

How to apply:

To apply, please send a maximum one-page cover letter and a maximum two-page CV in English and in .pdf format to applications (at) edri (dot) org with “Senior Policy Advisor” in the subject line by 2 December 2019 (11.59 pm). Candidates will be expected to be available for interviews in the week of 11 December.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment and ideally, we would like to strive for a gender balance in the policy team. Therefore, we particularly encourage applications from individuals who identify as women. We also encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.

Please note that only shortlisted candidates will be contacted.

06 Nov 2019

Why tech is not “just a tool”

By Ella Jakubowska

Throughout October 2019, digital rights-watchers welcomed new reports warning about the human rights crises of Artificial Intelligence (AI) and other digital technologies. From Philip Alston’s caution that the UK risks “stumbling zombie-like into a digital welfare dystopia” to David Kaye’s critique of internet companies’ and States’ failure to respect human rights online, civil society is increasingly demanding greater insight into the impact of technology on society. Individuals who do not work on “digital rights” are also becoming progressively more aware of the exponentially increasing power and control of technology giants such as Facebook and Google.

Whilst every citizen is and will continue to be affected (whether positively or negatively) by the rise of technology for everyday services, the risks are becoming more evident for some of the groups that already suffer systematic discrimination. Take this woman who was automatically barred from entering her gym because the system did not recognise that she could be both a doctor and a woman; or this evidence that people of colour get worse medical treatment when decisions are made by algorithms. Not to mention the environmental and human impact of mining precious metals for smartphones (which disproportionately impacts the global south) and the incredibly high emissions released by training a single algorithm. The list, sadly, goes on and on.

The idea that human beings are biased is hardly a surprise. Most of us make “implicit associations”, unconscious assumptions and stereotypes about the things and the people that we see in the world. According to some scientists, there are evolutionary reasons for this: such shortcuts helped our ancestors distinguish between friends and foes. These biases, however, become problematic when they lead to unfair or discriminatory treatment – certain groups being surveilled more closely, censored more frequently, or punished more harshly. In the context of human rights in the online environment, this matters because everyone has a right to equal access to privacy, to free speech, and to justice.

States are the actors that are responsible for respecting and protecting their citizens’ human rights. Typically, representatives of a state (such as social workers, judges, police and parole officers) are responsible for making decisions that can impact citizens’ rights: working out the amount of benefits that a person will receive, deciding on the length of a prison sentence, or making a prediction about the likelihood of them re-offending. Increasingly, however, these decisions are starting to be made by algorithms.

Many well-meaning people have fallen into the trap of thinking that tech, with its structured 1s and 0s, removes humans’ messy bias, and allows us to make better, fairer decisions. Yet technology is made by humans, and we unconsciously build our world views into the technology that we produce. This encodes and amplifies underlying biases, whilst outwardly giving the appearance of being “neutral”. Even the data that is used to train algorithms or to make decisions reflects a particular social history. And if that history is racist, or sexist, or ableist? You guessed it: this past discrimination will continue to impact the decisions that are made today.
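
To make this concrete, here is a minimal, purely hypothetical sketch (not drawn from any real system, with made-up data) of how a model trained on historically biased decisions simply reproduces that bias for equally qualified people:

```python
# Toy illustration with invented data: a "model" that learns approval
# decisions from biased historical records reproduces that bias.

from collections import defaultdict

# Historical decisions: equally qualified applicants, but group B
# was approved far less often than group A.
history = [
    {"group": "A", "qualified": True, "approved": True},
    {"group": "A", "qualified": True, "approved": True},
    {"group": "A", "qualified": True, "approved": True},
    {"group": "B", "qualified": True, "approved": False},
    {"group": "B", "qualified": True, "approved": False},
    {"group": "B", "qualified": True, "approved": True},
]

# "Training": estimate the past approval rate per group.
stats = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for record in history:
    stats[record["group"]][0] += record["approved"]
    stats[record["group"]][1] += 1

def predict(group):
    """Approve only if the historical approval rate for the group exceeds 50%."""
    approved, total = stats[group]
    return approved / total > 0.5

# Two equally qualified applicants, two different outcomes: the model has
# learned nothing except the discrimination present in its training data.
print(predict("A"))  # True
print(predict("B"))  # False
```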

The decisions made by social workers, police and judges are, of course, frequently difficult, imperfect, and susceptible to human bias too. But they are made by state representatives with an awareness of the social context of their decision, and crucially, an ability to be challenged by the impacted citizen – and overturned if an appropriate authority feels they have judged incorrectly. Humans also have a nifty way of being able to learn from mistakes so that they do not repeat them in the future. Machines making these decisions do not “learn” in the same way as humans: they “learn” to get more precise with their bias, and they lack the self-awareness to know that it leads to discrimination. To make things worse, many algorithms that are used for public services are currently protected under intellectual property laws. This means that citizens do not have a route to challenge decisions that an algorithm has made about them. Recent cases such as Loomis v. Wisconsin, which saw a citizen challenge a prison sentence determined by the US’s COMPAS algorithm, have worryingly ruled in favour of upholding the algorithm’s proprietary protections, refusing to reveal how the sentencing decision was made.

Technology is not just a tool, but a social product. It is not intrinsically good or bad, but it is embedded with the views and biases of its makers. It uses flawed data to make assumptions about who you are, which can impact the world that you see. Another example of this is the use of highly personalised adverts in the EU, which may breach our fundamental right to privacy. Technology cannot – at least for now – make fair decisions that require judgement or assessment of human qualities. When it comes to granting or denying access to services and rights, this is even more important. Humans can be aware of their bias, work towards mitigating it, and challenge it when they see it in others. For anyone creating, buying or using algorithms, active consideration of how the tech will impact social justice and human rights must be at the heart of design and use.

Hate speech online: Lessons for protecting free expression (29.10.2019)
https://edri.org/hate-speech-online-lessons-for-protecting-free-expression/

Millions of black people affected by racial bias in health-care algorithms (24.10.2019)
https://www.nature.com/articles/d41586-019-03228-6

Anatomy of an AI System
https://anatomyof.ai/

Profiling the unemployed in Poland: Social and political implications of algorithmic decision making
https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf

Project Implicit
https://implicit.harvard.edu/implicit/takeatest.html

Digital dystopia: how algorithms punish the poor (14.10.2019)
https://www.theguardian.com/technology/2019/oct/14/automating-poverty-algorithms-punish-poor

(Contribution by Ella Jakubowska, EDRi intern)

06 Nov 2019

Danish data retention: Back to normal after major crisis

By IT-Pol

The Danish police and the Ministry of Justice consider access to electronic communications data to be a crucial tool for investigation and prosecution of criminal offences. Legal requirements for blanket data retention, which originally transposed the EU Data Retention Directive, are still in place in Denmark, despite the judgments from the Court of Justice of the European Union (CJEU) in 2014 and 2016 that declared general and indiscriminate data retention illegal under EU law.

In March 2017, in the aftermath of the Tele2 judgment, the Danish Minister of Justice informed the Parliament that it was necessary to amend the Danish data retention law. However, when it comes to illegal data retention, the political willingness to uphold the rule of law seems to be low – every year the revision is postponed by the Danish government with consent from Parliament, citing various formal excuses. Currently, the Danish government is officially hoping that the CJEU will revise the jurisprudence of the Tele2 judgment in the new data retention cases from Belgium, France and the United Kingdom which are expected to be decided in May 2020. This latest postponement, announced on 1 October 2019, barely caught any media attention.

However, data retention has been almost constantly in the news for other reasons since 17 June 2019 when it was revealed to the public that flawed electronic communications data had been used as evidence in up to 10000 police investigations and criminal trials since 2012. Quickly dubbed the “telecommunications data scandal” by the media, the ramifications of the case have revealed severely inadequate data management practices by the Danish police for almost ten years. This is obviously very concerning for the functioning of the criminal justice system and the right to a fair trial, but also rather surprising in light of the consistent official position of the Danish police that access to telecommunications data is a crucial tool for investigation of criminal offences. The mismatch between the public claims of access to telecommunications data being crucial, and the attention devoted to proper data management, could hardly be any bigger.

According to the initial reports in June 2019, the flawed data was caused by an IT system used by the Danish police to convert telecommunications data from different mobile service providers to a common format. Apparently, the IT system sometimes discarded parts of the data received from mobile service providers. During the summer of 2019, a new source of error was identified: in some cases, the data conversion system had modified the geolocation position of mobile towers by up to 200 meters.

Based on this new information about involuntary evidence tampering, the Director of Public Prosecutions decided on 18 August 2019 to impose a temporary two-month ban on the use of telecommunications data as evidence in criminal trials and pre-trial detention cases. Somewhat inconsistently, the police could still use the potentially flawed data for investigative purposes. Since telecommunications data are frequently used in criminal trials in Denmark, for example as evidence that the indicted person was in the vicinity of the crime scene, the two-month moratorium caused a number of criminal trials to be postponed. Furthermore, about 30 persons were released from pre-trial detention, something that generated media attention even outside Denmark.

In late August 2019, the Danish National Police commissioned the consultancy firm Deloitte to conduct an external investigation of its handling of telecommunications data and to provide recommendations for improving the data management practices. The report from Deloitte was published on 3 October 2019, together with statements from the Danish National Police, the Director of Public Prosecutions, and the Ministry of Justice.

The first part of the report identifies the main technical and organisational causes of the flawed data. The IT system used for converting telecommunications data to a common format contained a timer which sometimes submitted the converted data to the police investigator before the conversion job was completed. This explains, at least at a technical level, why parts of the data received from mobile service providers were sometimes discarded. The timer error mainly affected large data sets, such as mobile tower dumps (information about all mobile devices in a certain geographical area and time period) and access to historical location data for individual subscribers.
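
The report itself contains no code, but the failure mode it describes is a classic race between a long-running job and a fixed timer. A minimal, hypothetical sketch of that pattern (not the actual police system) could look like this:

```python
# Hypothetical sketch of the failure mode: delivery is triggered by a timer
# rather than by job completion, so large conversion jobs are cut short and
# the tail of the data set is silently dropped.

import threading
import time

raw_records = list(range(10_000))   # records received from a provider
converted = []                      # output handed to the investigator

def convert_all():
    for record in raw_records:
        converted.append(str(record))  # stand-in for the real format conversion
        time.sleep(0.0001)             # conversion takes real time

def deliver_to_investigator():
    # BUG: fires after a fixed delay, whether or not conversion has finished.
    print(f"Delivered {len(converted)} of {len(raw_records)} records")

worker = threading.Thread(target=convert_all)
worker.start()
threading.Timer(0.1, deliver_to_investigator).start()  # fires far too early
worker.join()
```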

The flaws in the geolocation information for mobile towers that triggered the August moratorium were traced to errors in the conversion of geographical coordinates. Mobile service providers in Denmark use two different systems for geographical coordinates, and the police uses a third system internally. During a short period in 2016, the conversion algorithm was applied twice to some mobile tower data, which moved the geolocation positions by a couple of hundred meters.
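
Purely as an illustration (the real coordinate systems and offsets are not detailed in the report summary), applying a datum conversion a second time to data that has already been converted shifts every position by the size of the offset once more, which is consistent with errors of a couple of hundred metres:

```python
# Hypothetical illustration: two coordinate systems that differ by a small,
# invented fixed offset. Converting once is correct; converting an
# already-converted coordinate a second time moves the point again.

OFFSET_LAT, OFFSET_LON = 0.0015, 0.0025  # made-up offset, in degrees

def provider_to_police(lat, lon):
    """Convert a provider coordinate to the police-internal system."""
    return lat + OFFSET_LAT, lon + OFFSET_LON

tower = (55.6761, 12.5683)             # example position (Copenhagen)
once = provider_to_police(*tower)      # correct conversion
twice = provider_to_police(*once)      # BUG: conversion applied a second time

# At this latitude, 0.0015 degrees corresponds to roughly 150-200 metres,
# so the double conversion displaces the tower by a few hundred metres.
print("correct:         ", once)
print("double-converted:", twice)
```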

On the face of it, these errors in the IT system should be relatively straightforward to correct, but the Deloitte report also identifies more fundamental deficiencies in the police practices of handling telecommunications data. In short, the report describes the IT systems and the associated IT infrastructure as complex, outdated, and difficult to maintain. The IT system used for converting telecommunications data was developed internally by the police and maintained by a single employee. Before December 2018, there were no administrative practices for quality control of the data conversion system, not even simple checks to ensure that the entire data set received from mobile service providers had been properly converted.
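
The kind of check the report says was missing can be very simple. As a hypothetical sketch, comparing record counts and a fingerprint of record identifiers between the received and the converted data sets would have flagged the truncated conversions immediately:

```python
# Hypothetical sketch of a minimal completeness check between the raw data
# set received from a provider and the converted data set handed onwards.

import hashlib

def fingerprint(record_ids):
    """Order-independent fingerprint of a collection of record identifiers."""
    digest = hashlib.sha256()
    for rid in sorted(record_ids):
        digest.update(str(rid).encode())
    return digest.hexdigest()

def check_conversion(raw_records, converted_records):
    if len(converted_records) != len(raw_records):
        raise ValueError(
            f"record count mismatch: {len(converted_records)} converted "
            f"vs {len(raw_records)} received"
        )
    raw_ids = [r["id"] for r in raw_records]
    converted_ids = [c["id"] for c in converted_records]
    if fingerprint(raw_ids) != fingerprint(converted_ids):
        raise ValueError("record identifiers differ between raw and converted data")

# Example: a truncated conversion is caught immediately.
raw = [{"id": i} for i in range(1000)]
converted = raw[:950]  # simulate a conversion job that was cut short
try:
    check_conversion(raw, converted)
except ValueError as err:
    print("quality control failed:", err)
```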

The only viable solution for the Danish police, according to the assessment in the report, is to develop an entirely new infrastructure for handling telecommunications data. Deloitte recommends that the new infrastructure should be based on standard software elements which are accepted globally, rather than internally developed systems which cannot be verified. Concretely, the report suggests using POL-INTEL, a big data policing system supplied by Palantir Technologies, for the new IT infrastructure. In the short term, some investment in the existing infrastructure will be necessary in order to improve the stability of the legacy IT systems and reduce the risk of creating new data flaws. Finally, the report recommends systematic independent quality control and data validation by an external vendor. The Danish National Police has accepted all recommendations in the report.

Deloitte also delivered a short briefing note about the use of telecommunications data in criminal cases. The briefing note, intended for police investigators, prosecutors, defence lawyers and judges, explains the basic use cases of telecommunications data in police investigations, as well as information about how the data is generated in mobile networks. The possible uncertainties and limitations of telecommunications data are also mentioned. For example, it is pointed out that mobile devices do not necessarily connect to the nearest mobile tower, so it cannot simply be assumed that the user of the device is close to the mobile tower with almost “GPS level” accuracy. This addresses a frequent critique against the police and prosecutors for overstating the accuracy of mobile location data – an issue that was covered in depth by the newspaper Information in a series of articles in 2015. Quite interestingly, the briefing note also mentions the possibility of spoofing telephone numbers, so that the incoming telephone call or text message may originate from a different source than the telephone number registered by the mobile service provider under its data retention obligation.

On 16 October 2019, the Director of Public Prosecutions decided not to extend the moratorium on the use of telecommunications data. Along with this decision, the Director issued new and more specific instructions for prosecutors regarding the use of telecommunications data. The Deloitte briefing note should be part of the criminal case (and distributed to the defence lawyer), and police investigators are required to present a quality control report to prosecutors with an assessment of possible sources of error and uncertainty in the interpretation of the telecommunications data used in the case. Documentation of telecommunications data evidence should, to the extent possible, be based on the raw data received from mobile service providers and not the converted data.

For law enforcement, the 16 October decision marks the end of the data retention crisis which erupted in public four months earlier. However, only the most immediate problems at the technical level have really been addressed, and several of the underlying causes of the crisis are still looming under the surface, for example the severely inadequate IT infrastructure used by the Danish police for handling telecommunications data. The Minister of Justice has announced further initiatives, including investment in new IT systems, organisational changes to improve the focus on data management, improved training for police investigators in the proper use and interpretation of telecommunications data, and the creation of a new independent supervisory authority for technical investigation methods used by the police.

Denmark: Our data retention law is illegal, but we keep it for now (08.03.2017)
https://edri.org/denmark-our-data-retention-law-is-illegal-but-we-keep-it-for-now/

Denmark frees 32 inmates over flaws in phone geolocation evidence, The Guardian (12.09.2019)
https://www.theguardian.com/world/2019/sep/12/denmark-frees-32-inmates-over-flawed-geolocation-revelations

Response from the Minister of Justice to the reports on telecommunications data (in Danish only, 03.10.2019)
http://www.justitsministeriet.dk/nyt-og-presse/pressemeddelelser/2019/justitsministerens-reaktion-paa-teledata-redegoerelser

Can cell tower data be trusted as evidence? Blog post by the journalist covering telecommunications data for the newspaper Information (26.09.2015)
https://andreas-rasmussen.dk/2015/09/26/can-cell-tower-data-be-trusted-as-evidence/

(Contribution by Jesper Lund, EDRi member IT-Pol, Denmark)
