15 Jan 2020

Support our work by investing in a piece of e-clothing!

By EDRi

Your privacy is increasingly under threat. European Digital Rights works hard to have you covered. But there’s only so much we can do.

Help us help you. Help us get you covered.


Check out our 2020 collection!*

*The items listed below are e-clothes. That means they are electronic. Not tangible. But still very real – like many other things online.

Your winter stock(ings) – 5€
A pair of hot winter stockings can really help one get through cold and lonely winter days. Help us fight for your digital rights by investing in a pair of these superb privacy-preserving fishnet stockings. This delight also makes a lovely gift for someone special.


A hat you can leave on – 10€
Keep your head undercover with this marvellous piece of surveillance resistance. Adaptable to any temperature and – for the record – to several CCTV models, this item really lives up to its value. The hat is an indispensable accessory when visiting your favourite public space packed with facial recognition technologies.


Winter/Summer Cape – 25€
Are you feeling heroic yet? Our flamboyant Winter/Summer cape is designed to keep you warm and cool. This stylish accessory takes the weight off your shoulders – purchase it and let us take care of fighting for your digital rights!


Just another White T-Shirt – 50€
A white t-shirt can do wonders when you’re trying to blend in with a white wall. This wildly unexciting but versatile classic is one of the uncontested fundamental pillars of your privacy enhancing e-wardrobe.


THE privacy pants ⭐️ – 100€
This ultimate piece of resistance is engineered to keep your bottom warm in the coldest winter, but also well aired during the hottest summer days. Its colour guarantees the ultimate tree (of knowledge) look. The item comes with a smart zipper.


Anti-tracksuit ⭐️ – 250€
Keep your digital life healthy with the anti-tracking tracksuit. The fabric is engineered to bounce out any attempt to get your privacy off track. Plus, you can impress your beloved babushka too.


Little black dress ⭐️ – 500€
Whether at a work cocktail party, a funeral, shopping spree or Christmas party – this dress will turn you into the center of attention, in a (strangely) privacy-respecting manner.


Sew your own ⭐️ – xxx€
Unsure of any of the items above? Let your inner tailor free, customise your very own unique, designer garment, and put a price tag of your choice on it.



⭐️ Items priced at 100€ or more are delivered with an (actual, analog, non-symbolic) EDRi iron-on privacy patch that you can attach to your existing (actual, analog, non-symbolic) piece of clothing or accessory. If you wish to receive this additional style and privacy enhancer, don’t forget to provide us with your postal address (either via the donation form, or in your bank transfer message)!


Question? Remark? Idea? Please contact us at brussels [at] edri [dot] org!

15 Jan 2020

Your face rings a bell: Three common uses of facial recognition

By Ella Jakubowska

Not all applications of facial recognition are created equal. As we explored in the first and second instalments of this series, different uses of facial recognition pose distinct but equally complex challenges. Here we sift through the hype to analyse three increasingly common uses of facial recognition: tagging pictures on Facebook, automated border control gates, and police surveillance.

The chances are that your face has been captured by a facial recognition system, if not today, then at least in the last month. It is worryingly easy to stroll through automated passport gates at an airport, preoccupied with the thought of seeing your loved ones rather than with potential threats to your privacy. And you can quite happily walk through a public space or shop without being aware that you are being watched, let alone that your facial expressions might be used to label you a criminal. Social media platforms increasingly employ facial recognition, and governments around the world have rolled it out in public. What does this mean for our human rights? And is it too late to do something about it?

First: What the f…ace? – Asking the right questions about facial recognition!

As the use of facial recognition skyrockets, it can feel that there are more questions than answers. This does not have to be a bad thing: asking the right questions can empower you to challenge the uses that will infringe on your rights before further damage is done.

A good starting point is to look at the impacts on fundamental rights such as privacy, data protection, non-discrimination and freedoms, and at compliance with international standards of necessity, remedy and proportionality. Do you trust the owners of facial recognition systems (or indeed of other types of biometric recognition and surveillance), whether public or private, to keep your data safe and to use it only for specific, legitimate and justifiable purposes? Do they provide sufficient evidence of effectiveness, beyond the vague notion of “public security”?

Going further, it is important to ask societal questions like: does being constantly watched and analysed make you feel safer, or just creeped out? Will biometric surveillance substantially improve your life and your society, or are there less invasive ways to achieve the same goals?

Looking at biometric surveillance in the wild

As explored in the second instalment of this series, many public face surveillance systems have been shown to violate rights and been deemed illegal by data protection authorities. Even consent-based, optional applications may not be as unproblematic as they first seem. This is our “starter for ten” for thinking through the potentials and risks of some increasingly common uses of facial verification and identification – we’ll be considering classification and other biometrics next time. Think we’ve missed something? Tweet us your ideas @edri using #FacialRecognition.

Automatic tagging of pictures on Facebook

Facebook uses facial recognition to tag users in pictures, among other “broader” uses. Under public pressure, in September 2019 it made the feature opt-in – but this applies only to new users, not existing ones.

Potentials:

  • Saves time compared to manual tagging
  • Alerts you when someone has uploaded a picture of you without your knowledge

Risks:

  • The world’s biggest ad-tech company can find you in photos or videos across the web – forever
  • Facebook will automatically scan, analyse and categorise every photo uploaded
  • You will automatically be tagged in photos you might want to avoid
  • Higher error rates for people with very light or very dark skin

Evidence:

Creepy, verging on dystopian, especially as the feature remains on by default for existing users. We’ll leave it to you to decide if the potentials outweigh the risks.

Automated border control (ePassport gates)

Automated border control (ABC) systems, sometimes known as e-gates or ePassport gates, are self-service systems that authenticate travellers against their identity documents – a type of verification, in which one live image is matched against one document photo (sketched below).
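To make the distinction concrete, here is a minimal sketch, in Python, of the difference between verification (the 1:1 match an e-gate performs) and identification (the 1:N database search used in face surveillance, discussed later in this article). All names, the threshold value and the similarity() function are hypothetical placeholders, not any real biometric library.

    # Hypothetical sketch: verification (1:1) vs identification (1:N).
    # similarity() stands in for a real face-matching model.

    def similarity(face_a: bytes, face_b: bytes) -> float:
        """Placeholder returning a similarity score between 0.0 and 1.0."""
        raise NotImplementedError("stand-in for a real face-matching model")

    def verify(live_capture: bytes, passport_photo: bytes, threshold: float = 0.9) -> bool:
        # An ePassport gate: one live capture against one document photo.
        # No central database is needed for this comparison.
        return similarity(live_capture, passport_photo) >= threshold

    def identify(live_capture: bytes, database: dict, threshold: float = 0.9) -> list:
        # Face surveillance: one capture searched against every stored face.
        # The scale of the database is what makes this far more invasive.
        return [person for person, stored_face in database.items()
                if similarity(live_capture, stored_face) >= threshold]

The privacy stakes differ accordingly: verification can, in principle, be done without retaining your data, while identification presupposes a database of faces to search.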

Potentials:

  • Suggested as a solution for congestion as air travel increases
  • Matches you to your passport, rather than a central database – so in theory your data isn’t stored

Risks:

  • Longer queues for those who cannot or do not want to use it
  • Lack of evidence that it saves time overall
  • Difficult for elderly passengers to use
  • May cause immigration issues or tax problems
  • Normalises face recognition
  • Disproportionately error-prone for people of colour, leading to unjustified interrogations
  • Supports state austerity measures

Evidence:

  • Stats vary wildly, but credible sources suggest the average border guard takes 10 seconds to process a traveller – faster than the best gates, which take 10 to 15 seconds
  • Starting to be used in conjunction with other data to predict behaviour
  • High volume of human intervention needed due to user or system errors
  • Extended delays for the 5% of people falsely rejected
  • Evidence of falsely criminalising innocent people
  • Evidence of falsely accepting people with the wrong passport

Evidence of effectiveness can be contradictory, but the impacts – especially on already marginalised groups – and the ability to combine face data with other data to infer additional information about travellers carry major potential for abuse. We suspect that offline solutions such as funding more border agents and investing in queue management could be equally efficient and less invasive.

Police surveillance

Police forces across Europe – often in conjunction with private companies – are using surveillance cameras to perform live identification in public spaces, a practice sometimes referred to as face surveillance.

Potentials:

  • Facilitates the analysis of video recordings in investigations

Risks:

  • Police hold a database of faces and are able to track and follow every individual ever scanned
  • Replaces investment in police recruitment and training
  • Can discourage use of public spaces – especially by those who have suffered disproportionate targeting
  • Chilling effect on freedom of speech and assembly, an important part of democratic participation
  • May also rely on pseudo-scientific emotion “recognition”
  • Legal ramifications for people wrongly identified
  • No ability to opt out

Evidence:

Increased public security could be achieved instead by measures that tackle underlying issues such as inequality and antisocial behaviour, or by investing in police capability generally, rather than in surveillance technology.

Facing reality: towards a mass surveillance society?

Without intervention, facial recognition is on a path to omniscience. In this post, we have only scratched the surface. However, these examples identify some of the different actors that may want to collect and analyse your face data, what they gain from it, and how they may (ab)use it. They also show that the claimed benefits of facial surveillance frequently amount to cost-cutting rather than genuine benefit to users.

We’ve said it before: tech is not neutral. It reflects and reinforces the biases and world views of its makers. The risks are amplified when systems are deployed rapidly, without considering the big picture or the slippery slope towards authoritarianism. The motivations behind each use must be scrutinised and proper assessments carried out before deployment. As citizens, it is our right to demand this.

Your face has a significance beyond just your appearance – it is a marker of your unique identity and individuality. But with prolific facial recognition, your face becomes a collection of data points which can be leveraged against you and infringe on your ability to live your life in safety and with privacy. With companies profiting from algorithms covertly built using photos of users, faces are literally commodified and traded. This has serious repercussions for our privacy, dignity and bodily integrity.

Facial Recognition and Fundamental Rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/

The many faces of facial recognition in the EU (18.12.2019)
https://edri.org/the-many-faces-of-facial-recognition-in-the-eu/

Data-Driven Policing: The Hardwiring of Discriminatory Policing Practices across Europe (05.11.2019)
https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf

Facial recognition technology: fundamental rights considerations in the context of law enforcement (27.11.2019)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper.pdf

What the “digital welfare state” really means for human rights (08.01.2020)
https://www.openglobalrights.org/digital-welfare-state-and-what-it-means-for-human-rights/

Resist Facial Recognition
https://www.libertyhumanrights.org.uk/resist-facial-recognition

(Contribution by Ella Jakubowska, EDRi intern)

15 Jan 2020

Serbia: Complaints filed against Facebook and Google

By SHARE Foundation

EDRi member SHARE Foundation has filed complaints to the Commissioner for Information of Public Importance and Personal Data Protection of Serbia against Facebook and Google for their non-compliance with the obligation to appoint representatives in Serbia for data protection issues. In May 2019, before the start of application of the new Serbian Law on Personal Data Protection, SHARE Foundation sent letters to 20 international companies and called upon them to appoint representatives in Serbia, in accordance with the new legal obligations.

Appointing representatives of these companies is not a formality – it is essential for exercising the rights of Serbian citizens prescribed by law. In the current circumstances, companies like Google and Facebook view Serbia, like many other developing countries, as a territory for the unregulated exploitation of citizens’ private data, even though Serbia harmonised its rules with the EU Digital Single Market by adopting the new Law on Personal Data Protection. These companies recognise Serbia as a relevant market: they offer their services to citizens of the Republic of Serbia and monitor their activities. In the course of doing business, they process large amounts of Serbian citizens’ data and make huge profits. The new law, on the other hand, guarantees citizens numerous rights in relation to such data processing, but at the moment it seems that exercising these rights would face many difficulties.

Among other things, these companies do not provide clear contact points that citizens can reach – they mostly offer application forms in a foreign language. Experience has shown that such forms are inadequate, not only because they require Serbian citizens to have an advanced knowledge of a foreign language, but also because this type of communication is mostly handled by programs that send generic automated responses.

Although the fines the Commissioner may impose under the domestic Law on Personal Data Protection – in this case 100 000 Serbian dinars (around 940 USD or 850 EUR) – wouldn’t have a major impact on the budgets of these gigantic companies, SHARE believes that imposing them would show that the competent authorities of the Republic of Serbia intend to protect their citizens, and would signal that these companies are not operating in accordance with domestic regulations.

SHARE Foundation
https://www.sharefoundation.info/en/

Facebook and Google asked to appoint representatives in Serbia (05.06.2019)
https://edri.org/facebook-and-google-asked-to-appoint-representatives-in-serbia/

Will Serbia adjust its data protection framework to GDPR? (24.04.2019)
https://edri.org/will-serbia-adjust-its-data-protection-framework-to-gdpr/

(Contribution by EDRi member SHARE Foundation, Serbia)

15 Jan 2020

ECtHR demands explanations on Polish intelligence agency surveillance

By Panoptykon Foundation

The European Court of Human Rights (ECtHR) has demanded that the Polish government explain surveillance carried out by its intelligence agencies. This is the result of complaints filed with the Strasbourg court in late 2017 and early 2018 by activists from EDRi member Panoptykon Foundation and the Helsinki Foundation for Human Rights, as well as by attorney Mikołaj Pietrzak. The attorney points out that uncontrolled surveillance by the Polish government violates not only his privacy but, most importantly, the rights and freedoms of his clients. The activists add that, as active citizens, they are at particular risk of being subjected to government surveillance.

Panoptykon has been criticising the lack of control over government surveillance for years. Without appropriate oversight, there is no way of knowing how intelligence agencies use their broad powers. Nor is there any way of verifying the extent to which these powers are used, because the law does not provide for access to information about whether an individual has been subject to surveillance – even after the surveillance has ended and the individual has not been charged. As citizens, we are therefore defenceless and unable to protect our rights.

The ECtHR decided that the complaints meet the formal requirements and communicated the case to the Polish government, which will have to answer whether its actions violated our privacy (Article 8 of the European Convention on Human Rights) and the right to an effective remedy (Article 13 of the Convention).

What’s at stake is not just the right to privacy. As attorney Mikołaj Pietrzak explains, the basis of the attorney-client relationship is trust, which can only exist on condition of confidentiality. Attorneys are obliged to protect legal privilege, especially when it comes to defence in criminal cases. Current laws make that impossible. This infringes on the rights and freedoms of their clients, in particular their right to defence.

The Polish Constitutional Court pointed out as long ago as July 2014 that the law should be changed. However, the so-called Surveillance Act and Counter-terrorism Act, adopted in 2016, only expanded the intelligence agencies’ powers without introducing any mechanisms of control. Compared to other EU countries, where independent oversight of intelligence agencies’ activities surprises no one, Poland stands out in a negative way. These irregularities have been pointed out, among others, by the Venice Commission in a June 2016 Opinion. The obligation to inform data subjects that intelligence agencies have accessed their telecommunications data follows from multiple judgments of the ECtHR (e.g. Szabo and Vissy v. Hungary, Saravia v. Germany, Zakharov v. Russia) and of the Court of Justice of the European Union (CJEU) (e.g. Tele2 Sverige).

The complainants are represented by attorney Małgorzata Mączka-Pacholak.

Panoptykon Foundation
https://en.panoptykon.org/

No control over surveillance by Polish intelligence agencies. ECHR demands explanations from the government (18.12.2019)
https://en.panoptykon.org/government-surveillance-echr-complaint

(Contribution by EDRi member Panoptykon Foundation, Poland)

15 Jan 2020

Copyright stakeholder dialogues: Filters can’t understand context

By Laureline Lemoine

On 16 December 2019, the European Commission held the fourth meeting of the Copyright Directive Article 17 stakeholder dialogues. During the “first phase”, meetings focused on the practices in different industries such as music, games, software, audiovisual and publishing. This meeting was the last of what the Commission called the “second phase”, where meetings were focused on technical presentations on content management technologies and existing licensing practices.

During this fourth meeting, presentations were given by platforms (Facebook, Seznam, Wattpad), providers of content management tools (Audible Magic, Ardito, Fifthfreedom, Smart protection), rightsholders (European Grouping of Societies of Authors and Composers – GESAC, Universal Music Publishing, Bundesliga) as well as by consumer group BEUC and the Lumen database.

Say it again louder for the people in the back: Filters cannot understand context

After YouTube’s Content ID presentation during the third meeting, Facebook’s presentation of its Rights Management tool reiterated what civil society has been repeating throughout the copyright debates: filtering tools cannot understand context. Content recognition technologies are only capable of matching files; they cannot recognise copyright exceptions such as caricature or parody.

This argument has now been clearly and repeatedly laid out to the European Commission by both civil society organisations and providers of content recognition technology. We would therefore expect that the Commission’s guidelines will take this into account and recommend that filters should not be used to automatically block or remove uploaded content.

A lack of trust

While the meetings usually revive old divisions between stakeholders, they also reveal new ones. Facebook pointed out that one of the biggest issues with its Rights Management tool is misuse by rightsholders who claim rights in works they do not own. As a result, not every rightsholder gets access to the same tools: some features, such as automated actions, are limited or reserved for what the provider calls “trusted rightsholders”.

On the other side, rightsholders such as GESAC criticised the way they are treated by big platforms like YouTube. In particular, they highlighted that the categorisation made by content recognition tools can lead to loss of revenue. Rightsholders sometimes have no choice but to use tools created and controlled by big platforms under those platforms’ own opaque rules; they therefore emphasised the need for transparent and accurate information on how platforms like YouTube handle content whose rights they own.

Transparency is key

To understand how copyright-protected content is actually managed, quantitative information is crucial. Faced with the issue of filters, content recognition providers said they rely on redress mechanisms and human judgement. But when asked for factual information on how these practices function, they could provide no numbers or percentages. It is therefore impossible to assess the necessity, proportionality or efficiency of automated content recognition tools.

According to Article 17(10) of the Copyright Directive, which provides the basis for the ongoing stakeholder dialogue, “users’ organisations shall have access to adequate information from online content-sharing service providers on the functioning of their practices with regard to paragraph 4”.

After four meetings and still lacking such information from companies, civil society organisations participating in the dialogue decided to send a request for information to the European Commission. We hope that the Commission will be able to gather such factual information from platforms so that the ongoing dialogue can lead to an evidence-based outcome.

As part of these transparency needs, EDRi also signed an open letter asking the Commission to share the draft guidelines they will produce at the end of the dialogue. In the letter, we asked that the guidelines should also be opened to a consultation with the participants of the stakeholder dialogues and to the broader public, to seek feedback on whether the document can be further improved to ensure compliance with the Charter of Fundamental Rights of the EU.

What’s next?

The next stakeholder dialogue meeting will be held on 16 January and will open the “third phase” of consultation, which will focus on the practicality of Article 17. The Commission already sent out the agenda, and the topics covered on 16 January will be authorisations, notices and the notion of “best efforts”, while the following session on 10 February will cover safeguards and redress mechanisms.

EU copyright dialogues: The next battleground to prevent upload filters (18.10.2019)
https://edri.org/eu-copyright-dialogues-the-next-battleground-to-prevent-upload-filters/

NGOs call to ensure fundamental rights in copyright implementation (20.05.2019)
https://edri.org/ngos-call-to-ensure-fundamental-rights-in-copyright-implementation/

Copyright: Open letter asking for transparency in implementing guidelines (15.01.2020)
https://edri.org/copyright-open-letter-asking-for-transparency-in-implementing-guidelines

(Contribution by Laureline Lemoine, EDRi)

15 Jan 2020

Our New Year’s wishes for European Commissioners

By Laureline Lemoine

EDRi wishes all readers a happy new year 2020!

In 2019, we had a number of victories in multiple fields. The European Parliament added necessary safeguards to the proposed Terrorist Content Online (TCO) Regulation to protect fundamental rights against overly broad and disproportionate censorship measures. The Court of Justice of the European Union (CJEU) ruled that clear and affirmative consent must be given before cookies can be set on our devices. Member States have been increasingly issuing fines under the General Data Protection Regulation (GDPR). Also, Google was fined for its abusive online ad practices, and new security standards for consumer Internet of Things (IoT) devices were introduced.

However, 2019 was also the year when some governments positioned themselves against encryption and started to normalise facial recognition in public spaces without adequate safeguards, public debate or fundamental rights assessment (France, Sweden, the UK). Mandatory upload filters were approved at EU level, and data breaches and privacy scandals frequently made the news.

For 2020, we need to ensure that the EU pushes forward policies that will lead to a human-centric internet rather than data exploitation models which deepen inequalities and enable surveillance capitalism. We are sending our wishes to the fresh new European Commissioners, so that they can help us defend our rights and freedoms online.

In 2020, we wish for President Ursula von der Leyen to:

  • Start implementing a human-centric vision for the internet to ensure the protection of fundamental rights online (and offline);
  • Define high privacy, security, safety and ethical standards for the new generation of technologies that will become the global norm;
  • Strengthen EU decision making by ensuring transparency in the Council;
  • Ensure that any future measures on Artificial Intelligence (AI) lead to AI systems in Europe that are based on the principles of legality, robustness, ethics and human rights, and that current data protection and privacy laws are not circumvented but strengthened;
  • Ensure that the upcoming Digital Services Act (DSA) proposal (reforming the current e-Commerce Directive) creates legal certainty and introduces safeguards that will enable users to enjoy their rights and freedoms.

In 2020, we wish for Executive Vice President for A Europe Fit for the Digital Age Margrethe Vestager to:

  • Provide clarity on safeguards, red lines, and enforcement mechanisms to ensure that the automated decision making systems – and AI more broadly – developed and deployed in the EU respect fundamental rights;
  • Assess the fundamental rights and societal impacts of facial recognition and other biometric detection systems, and propose criteria to assess or define domains or use cases where AI-assisted technologies should not be developed;
  • Tackle exploitative business models and their violation of personal data protections through the Digital Services Act and any other necessary legislative or non-legislative initiatives;
  • Promote equality and fight discrimination in the development and use of technology;
  • Guarantee and promote the respect of fundamental rights through competition policy by investigating abuses by dominant platforms and exploring cooperation with data protection authorities.

In 2020, we wish for Commissioner for Internal Market Thierry Breton to:

  • Unlock the ePrivacy reform through discussion with the EU Council and the Member States;
  • Develop a sustainable, human-centric and rights-promoting Digital Services Act;
  • Ensure privacy by design and by default in current and future tech-related proposals;
  • Achieve digital sovereignty by ensuring the development of the necessary free and open hardware and software;
  • Ensure that the strategy on data developed as part of the EU’s approach on AI respects fundamental rights.

In 2020, we wish for Vice President and Commissioner for Values and Transparency Věra Jourová to:

  • Ensure transparency in trilogue negotiations;
  • Address the harms caused by hate speech, political disinformation and the abuse of internet controls by authoritarian states;
  • Analyse the risks of targeted political advertising and the online tracking industry;
  • Protect and promote freedom of expression online.

In 2020, we wish for Commissioner for Home Affairs Ylva Johansson to:

  • Ensure that illegal mass surveillance is not deployed, for example in any future attempts to implement data retention in Member States;
  • Review all PNR frameworks in light of the jurisprudence of the CJEU;
  • Reassess the “e-evidence” proposal and its necessity, or include meaningful human rights safeguards;
  • Ensure that the safeguards adopted by the European Parliament and advocated by human rights groups are part of the final TCO Regulation.

In 2020, we wish for Commissioner for Justice Didier Reynders to:

  • Ensure the full enforcement of the GDPR in Member States by ensuring that data protection authorities have the necessary funding, resources, and independence to protect our rights;
  • Promote the European approach to data protection as a global model;
  • Contribute to legislation on AI to ensure that fundamental rights are fully protected, and especially, equality for everyone, by adopting rules that mitigate the harms caused by discrimination.

The new year is a time to reflect on the past year and pledge to do better in the next. Looking for new year’s resolutions? You can do more to stay safe online or donate to EDRi, to help us continue defending your digital human rights and freedoms in 2020 and beyond.

CJEU on cookies: ‘Consent or be tracked’ is not an option (01.10.2019)
https://edri.org/cjeu-cookies-consent-or-be-tracked-not-an-option/

Light at the end of the cyber tunnel: New IoT consumer standard (27.02.2019)
https://edri.org/light-at-the-end-of-the-cyber-tunnel-new-iot-consumer-standard/

The Dangers of High-Tech Profiling, Using Big Data (07.08.2014)
https://www.nytimes.com/roomfordebate/2014/08/06/is-big-data-spreading-inequality/the-dangers-of-high-tech-profiling-using-big-data

EU Commissioners candidates spoke: State of play for digital rights (23.10.2019)
https://edri.org/eu-commissioner-candidates-spoke-state-of-play-for-digital-rights/

A Human-Centric Digital Manifesto for Europe
https://www.opensocietyfoundations.org/publications/a-human-centric-digital-manifesto-for-europe

Cross-border access to data for law enforcement: Document pool (12.04.2019)
https://edri.org/cross-border-access-to-data-for-law-enforcement-document-pool/

(Contribution by Laureline Lemoine, EDRi)

15 Jan 2020

#EthicalWebDev – guide for ethical website development and maintenance

By Guillermo Peris

We’ve finally published our new guide for ethical website development and maintenance, Ethical Web Dev! It’s aimed at web developers and maintainers who have a strong understanding of technical concepts, to assist them in bringing the web back to its roots – a decentralised tool that can enhance fundamental rights, democracy and freedom of expression.

The goal of the project, which started more than a year ago, was to provide guidance to developers on how to move away from third-party-infected, data-leaking, unethical and unsafe practices. We hope this guide will be a beneficial tool for the field, and will help us walk the path towards the web we want.

The guide is available for download as a pdf (see the link at the end of this article).

A website is almost like a living thing. The basic site itself is rarely static; in addition to its own dynamic features, its environment is subject to continuous change, which in turn leads to even more changes. Visitors to a website can also be very diverse: the technologies they use and their expertise vary widely. Many websites also rely on a variety of external services and resources, which likewise continue to evolve.

As website developers have to cope both with the increasing expectations of users and with the limited resources most organisations devote to website development, there is a growing tendency to use more external services and resources. For example, it has become more and more common for web developers to take “free” resources, such as fonts and scripts, and use them on the websites they design. While these are “free” for the developer, they can have undesirable side effects for the users and the organisations that provide the website. Some resources and services, particularly those provided by certain data-hungry internet companies, can undermine user privacy; others can have adverse effects on security. In both cases, the reputation of the website owner may suffer, or it may even face legal challenges. This warrants attention. However, there is a general lack of awareness of this problem, and these practices have already become quite pervasive. The purpose of this new guide is to clarify the problems and, where possible, identify some usable solutions. A quick way to see the issue for yourself is to list the third-party hosts a page pulls resources from, as in the sketch below.
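As an illustration (our own sketch, not an excerpt from the guide), the following small Python script fetches a page and lists the hosts from which it loads scripts, styles and images. The example URL is a placeholder; only the standard library is used.

    # Sketch: list the third-party hosts a page loads resources from.
    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    SITE = "https://example.org/"  # placeholder: the site you maintain

    class ThirdPartyAudit(HTMLParser):
        def __init__(self, first_party_host):
            super().__init__()
            self.first_party_host = first_party_host
            self.third_party_hosts = set()

        def handle_starttag(self, tag, attrs):
            # Scripts and images load via "src"; stylesheets and fonts via "href".
            for name, value in attrs:
                if name in ("src", "href") and value:
                    host = urlparse(value).netloc
                    if host and host != self.first_party_host:
                        self.third_party_hosts.add(host)

    page = urlopen(SITE).read().decode("utf-8", errors="replace")
    audit = ThirdPartyAudit(urlparse(SITE).netloc)
    audit.feed(page)
    for host in sorted(audit.third_party_hosts):
        print("third-party resource host:", host)

Every host printed is a party that can see, at minimum, your visitors’ IP addresses and request headers.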

The guide is a result of an extensive collective work, with inputs from experts of the EDRi network (Anders Jensen-Urstad, Walter van Holst, Maddalena Falzoni, Hanno “Rince” Wagner, Piksel), external contributions (Gordon Lennox, Achim Klabunde, Laura Kalbag, Aral Balkan), and the crucial help of Sid Rao, Public Interest Technologist and ex-Ford-Mozilla Fellow at EDRi. Special thanks to Joe McNamee who had the original idea for this booklet and steered the process to a successful conclusion, and Guillermo Peris for coordinating the project.

The guide is distributed under a Creative Commons 4.0 Licence.

Ethical Web Dev – Guide for ethical website development and maintenance
https://edri.org/files/ethical_web_dev_web.pdf

15 Jan 2020

Amazon’s Rekognition shows its true colors

By Bits of Freedom

EDRi member Bits of Freedom has been investigating the problems associated with the use of facial recognition by the police in the public space. As part of this investigation they wanted to put this technology to the test themselves. How does facial recognition technology really work?

Digital tourism

On Dam Square, in the centre of Amsterdam, you’ll find a camera. It’s no ordinary security camera: it broadcasts images of Dam Square in extremely high quality on YouTube, 24 hours a day, 7 days a week. The camera can zoom in from one side of the square straight to the other. As we’re told on the supplier’s website, this is good for “digital tourism”. The camera is also good for our investigation: it offers the perfect opportunity to test facial recognition technology. How bizarre is it that the thousands of people who cross Dam Square every day can, without their knowledge, also be seen on YouTube? And an even scarier vision of the future: what if they could all be registered using facial recognition technology?

Amazon’s Rekognition

“Rekognition”, Amazon’s facial recognition technology, is used by various police units in the United States, and can be used directly via the internet by anyone with a credit card. Bits of Freedom investigated whether this program would recognise anyone visiting Dam Square.

We uploaded a picture as a test, to allow the software to become familiar with one specific face. The software subsequently located different characteristics in the face, so-called “landmarks”, such as the lower tip of the chin, the nostrils, the pupils and the jawline.

But Rekognition generates even more data about our test face. It estimates her age, whether she’s laughing, whether she’s wearing (sun)glasses, what her gender is, and other facial features, such as whether the individual has a beard or a moustache. In addition, Rekognition also registers “emotions”. According to it, the individual is happy, but also a little bit anxious (that’s right – our test case does not love surveillance cameras). An analysis like this can be reproduced with a few lines of code, as sketched below.
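To illustrate how low the barrier is – this is our own sketch, not Bits of Freedom’s actual test script – such an analysis looks roughly like this with Amazon’s boto3 SDK for Python. The region, file name and configured AWS credentials are assumptions.

    # Sketch: ask Rekognition to describe every face in a picture.
    import boto3

    client = boto3.client("rekognition", region_name="eu-west-1")

    with open("test_face.jpg", "rb") as f:  # placeholder image file
        response = client.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # landmarks, age range, emotions, etc.
        )

    for face in response["FaceDetails"]:
        print("estimated age range:", face["AgeRange"])
        print("smiling:", face["Smile"]["Value"])
        print("gender guess:", face["Gender"]["Value"])
        print("emotions:", [(e["Type"], round(e["Confidence"], 1))
                            for e in face["Emotions"]])
        print("facial landmarks located:", len(face["Landmarks"]))

A single API call returns the landmarks, attribute guesses and “emotions” described above.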

The first encounter is behind us. Now that Rekognition claims to know our test case, it’s time for our first test. We sent our individual to Dam Square, into the camera’s field of vision.

Bingo! Rekognition recognised her. With 100 percent certainty (rounded off), Rekognition recognised the face as a face, and it was 90 percent sure that this face matched the previously uploaded picture. The software also saw that we had brought someone along: Rekognition was convinced that this other person was not the same individual. A comparison of this kind boils down to one API call, sketched below.
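Again as our own sketch (placeholder file names, assumed region and credentials), a 1:1 comparison like this maps onto a single boto3 call:

    # Sketch: compare a reference photo against a camera still.
    import boto3

    client = boto3.client("rekognition", region_name="eu-west-1")

    with open("reference.jpg", "rb") as ref, \
         open("dam_square_still.jpg", "rb") as live:
        response = client.compare_faces(
            SourceImage={"Bytes": ref.read()},
            TargetImage={"Bytes": live.read()},
            SimilarityThreshold=80,  # only report matches above 80 percent
        )

    for match in response["FaceMatches"]:
        print(f"face detected with {match['Face']['Confidence']:.0f}% certainty, "
              f"{match['Similarity']:.0f}% similar to the reference")
    for _ in response["UnmatchedFaces"]:
        print("another face in frame – not the same individual")

The “100 percent a face, 90 percent the same face” result described above corresponds to the Confidence and Similarity fields, respectively.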

Very little is needed to recognise a face

A grainy picture is all Rekognition needs to recognise anyone. Very little indeed! The picture we used was taken from the internet, and nowadays almost everyone has a picture of themselves online. If you are a little handy with computers, you can also “teach” Rekognition multiple faces. This means that, theoretically, the perfect stalker tool can be developed, especially if you link it to multiple cameras that broadcast their images on the internet (and those exist). There is no way to exercise any control over this: you do not know what others are doing with this technology, perhaps even with your own pictures. A sketch of how little such “teaching” involves follows below.
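To underline how low the threshold is (again our own sketch, not part of the original investigation; the collection name and file names are placeholders), “teaching” Rekognition several faces is just a matter of indexing them into a so-called collection and then searching new images against it:

    # Sketch: index known faces, then search a new image against them.
    import boto3

    client = boto3.client("rekognition", region_name="eu-west-1")
    client.create_collection(CollectionId="demo-faces")  # one-time setup

    # "Teach" a face: index a photo under a chosen label.
    with open("known_person.jpg", "rb") as f:
        client.index_faces(
            CollectionId="demo-faces",
            Image={"Bytes": f.read()},
            ExternalImageId="known_person",
        )

    # Later: check whether that face appears in a new camera still.
    with open("new_still.jpg", "rb") as f:
        hits = client.search_faces_by_image(
            CollectionId="demo-faces",
            Image={"Bytes": f.read()},
            FaceMatchThreshold=80,
        )
    for match in hits["FaceMatches"]:
        print(match["Face"]["ExternalImageId"],
              f"matched at {match['Similarity']:.0f}%")

That this takes a dozen lines of code and a credit card is precisely the point of the investigation.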

Facial recognition technology is a mass surveillance tool

After two months of research into facial recognition technology, a feeling of disbelief dominates: why does this technology exist? Facial recognition can easily be used as a mass surveillance tool that makes it possible to continuously spy on and manipulate groups and individuals on a scale and with a speed that was previously impossible. Our insatiable hunger for greater efficiency and convenience means that we are losing sight of the fact that this technology violates the rights and freedoms of citizens. The use of facial recognition technology must be thoroughly debated and researched before it is fully normalised in our society, and people accept the inevitable corrosion of our fundamental rights as necessary for “progress”.

Bits of Freedom
https://www.bitsoffreedom.nl/

Amazon’s Rekognition shows its true colors (12.12.2019)
https://www.bitsoffreedom.nl/2019/12/12/amazons-rekognition-shows-its-true-colors/

(Contribution by Paula Hooyman, EDRi member Bits of Freedom, the Netherlands)

15 Jan 2020

Copyright: Open letter asking for transparency in implementing guidelines

By EDRi

Today, on 15 January 2020, EDRi joined 41 other human rights and users’ rights organisations to demand increased transparency during the implementation of the EU copyright Directive. Specifically, the open letter asks the European Commission to publish any draft guidelines when available and to include concerns raised by the signing organisations during the stakeholder dialogues organised by the European Commission for the implementation of the copyright Directive.

Read the letter here (pdf) or below:

14 January 2020

Dear Commissioner Breton,
Dear Commissioner Gabriel,
Dear Director General Viola,
Dear Deputy Director-General Bury,
Dear Director Abbamonte,
Dear Head of Unit Giorello,
Dear Executive Vice-President Vestager,
Dear Executive Vice-President Jourová,

The undersigned stakeholders represent users’ organizations, fundamental and digital rights organizations, the knowledge community (in particular libraries), free and open-source software developers, and organizations of users as consumers and creators of content from across the European Union.

The European Commission organized a series of Stakeholder Dialogues in line with the Directive on Copyright in the Digital Single Market (DSM Directive) Article 17 (10). We appreciate that user representation was ensured throughout the series of Stakeholder Dialogues so that we could express our concerns and preferred solutions for the transposition of Article 17.

According to the DSM Directive, the Commission shall issue guidance on the application of Article 17 after the last Stakeholder Dialogue, which is scheduled for the spring of 2020.

To ensure an adequate level of transparency, the undersigned organizations believe that the guidelines, drafted by the Commission, should not be the final step of the dialogue but instead part of the discussion. We believe that once the draft guidelines have been finalized, they should be opened to consultation with the participants of the Stakeholder Dialogues and the broader public. The purpose of this consultation should be to seek feedback on whether the document can be further improved to ensure compliance with the Charter of Fundamental Rights.

This request is based on the requirement of transparency, which is a core principle of the rule of law. This means laws are crafted under the principle of legal certainty under a transparent, accountable, and democratic process.

The DSM Directive as adopted does not provide sufficient legal certainty as to the rights and obligations of those affected by the legislation. This is why we need the guidelines: to ensure that Article 17 is transposed correctly and uniformly by member states.

The undersigned organizations would like to ensure that the guidelines are in line with the right to freedom of expression and information and also data protection guaranteed by the Charter of Fundamental Rights. We aim to ensure that the transposition of Article 17 (4) (a) (b) (c) is implemented by governments and private parties without interference of Articles 8 and 11 of the Charter. The guidelines must ensure that the protection of “legitimate uses, such as uses under exceptions or limitations” as required by Article 17(9) of the Directive takes precedence over any measures implemented by online content-sharing service providers (OCSSPs) to comply with their obligations under 17(4) (b) (c). Automated filtering technologies can only be used if OCSSPs can demonstrate that their use does not affect legitimate uses in any negative ways.

The undersigned organizations have, on numerous occasions throughout the legislative debate and the Stakeholder Dialogue, expressed their very explicit concerns about upload filters, the exceptions and limitations and the problem of the liability regime set out in the DSM Directive.

These concerns have also been shared by the broad academic community of intellectual property scholars.

Yours sincerely,
Balázs Dénes
Executive Director
Civil Liberties Union for Europe

ANSOL – National Association for Free Software
Antigone
ApTI
Article 19
Big Brother Watch
Center for Democracy & Technology
Centrum Cyfrowe Foundation
Civil Liberties Union for Europe
Coalizione Italiana per le Libertà e i Diritti civili (CILD)
COMMUNIA
Copyright for Creativity (C4C)
D3 – Defesa dos Direitos Digitais
Digital Society Forum
Digitale Gesellschaft e.V.
EDRi
Electronic Frontier Foundation
Electronic Frontier Norway
epicenter.works
Estonian Human Rights Centre
Förderverein Informationstechnik und Gesellschaft (FITUG e.V.)
Gong
Hermes Center
Homo Digitalis
HRMI
Hungarian Civil Liberties Union
Initiative für Netzfreiheit
Irish Council for Civil Liberties
IT-Pol Denmark
Iuridicum Remedium
League of Human Rights
Open Knowledge Foundation
Open Rights Group
Peace Institute
Platform in Defence of Freedom of Information
Privacy First
Rights International Spain
Save the Internet
South East Europe Media Organisation (SEEMO)
The International Federation of Library Associations and Institutions (IFLA)
Vrijschrift
Xnet

15 Jan 2020

Indiscriminate data retention considered disproportionate, once again

By EDRi

EDRi’s initial reaction to the press release on the AG Opinion on data retention

Today’s Court of Justice of the European Union (CJEU) Advocate General’s Opinions continue the Court’s firmly established case-law, which considers the mass collection of individuals’ communications data incompatible with EU law. The Advocate General reaffirms that blanket retention of telecommunications data is disproportionate to its purported goals of national security and combating crime and terrorism.

Today, on 15 January, CJEU Advocate General Campos Sánchez-Bordona delivered his Opinions on four cases regarding data retention regimes in France, Belgium and the UK. These cases focus on the compatibility of these Member States’ surveillance programmes with the existing case-law on data retention, and on the applicability of the ePrivacy Directive to such programmes.

“Once again, the Advocate General of the CJEU has firmly sided with the right to privacy, and declared that indiscriminate retention of all traffic and location data of all subscribers and registered users is disproportionate,” said Diego Naranjo, Head of Policy at EDRi. “The European Commission needs to take note of yet another strong message against illegal data retention laws. While combating crime and terrorism are legitimate goals, this should not come at the expense of fundamental rights. It’s crucial to ensure that the EU upholds the Charter of Fundamental Rights and prevents any new proposal for data retention legislation of a general and indiscriminate nature.”

The Opinions respond to four references for a preliminary ruling sent by the French Council of State (joined cases C-511/18 and C-512/18, La Quadrature du Net and Others), the Belgian Constitutional Court (Case C-520/18, Ordre des barreaux francophones et germanophone and Others) and the UK Investigatory Powers Tribunal (Case C-623/17, Privacy International). The Advocate General confirms that the ePrivacy Directive and EU law apply to data retention for the purpose of national security. He proposes to uphold the case-law of the Tele2 case, stressing that “a general and indiscriminate retention of all traffic and location data of all subscribers and registered users is disproportionate” and that only limited and discriminate retention, with limited access to that data, is lawful. He states that “the obligation to retain data imposed by the French legislation is general and indiscriminate, and therefore is a particularly serious interference in the fundamental rights enshrined in the Charter”; similar criticism is raised regarding the Belgian and UK laws.

Following the invalidation of the Data Retention Directive in the Digital Rights Ireland case in 2014, Member States have been relying on the ePrivacy Directive to enact national data retention legislation. In 2016, the CJEU clarified this possibility and ruled in the Tele2 case that blanket data retention measures are incompatible with the Charter of Fundamental Rights of the European Union. Since then, as the Commission has been reluctant to intervene, civil society organisations have been challenging unlawful data retention legislation in different Member States.

Blanket retention of telecommunications data is a highly invasive surveillance measure directed at the entire population. It can entail the collection of sensitive information about citizens’ social contacts, movements and private lives, without any suspicion. Telecommunications data retention also undermines professional confidentiality and the protection of journalistic sources, compromises the freedom of the press, and prevents confidential electronic communications. The retained data is also of high interest to criminal organisations and unauthorised state actors from all over the world – several successful data breaches have been documented. Overall, blanket data retention damages the preconditions of open and democratic societies.

EDRi member Privacy International has also issued a preliminary statement, which can be found here: https://privacyinternational.org/press-release/3332/preliminary-statement-advocate-generals-opinion-advises-mass-surveillance-regime

Note: This press release is a quick response based solely on the Court’s press release. A detailed analysis will follow in due time.
