04 Dec 2019

Facial recognition and fundamental rights 101

By Ella Jakubowska

This is the first post in a series about the fundamental rights impacts of facial recognition. Private companies and governments worldwide are already experimenting with facial recognition technology. Individuals, lawmakers, developers – and everyone in between – should be aware of the rise of facial recognition, and the risks it poses to rights to privacy, freedom, democracy and non-discrimination.

In November 2019, an online search on “facial recognition” churned up over 241 million hits – and the suggested results imply that many people are unsure about what facial recognition is and whether it is legal. Although the first uses that come to mind might be e-passport gates or phone apps, facial recognition has a much broader and more complex set of applications, and is becoming increasingly ubiquitous in both public and private spaces – which can impact a wide range of fundamental rights.

What the tech is facial recognition all about?

Biometrics is the process that makes data out of the human body – literally, your unique “bio”-logical qualities become “metrics”. Facial recognition is a type of biometric application which uses statistical analysis and algorithmic predictions to automatically measure and identify people’s faces in order to make an assessment or decision. Facial recognition can broadly be categorised by the increasing complexity of the analytics used: verifying a face (this person matches their passport photo), identifying a face (this person matches someone in our database), and classifying a face (this person is young). Not all uses of facial recognition are the same and, therefore, neither are the associated risks. Facial recognition can be done live (e.g. analysing CCTV feeds to see if someone on the street matches a criminal in a police database) or non-live (e.g. matching two photos), with non-live matching generally achieving a higher rate of accuracy.
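
To make these categories concrete, below is a minimal illustrative sketch in Python. It assumes that some face analysis model has already turned each face image into a numerical vector (an “embedding”) and that matching means comparing vectors against a tuned threshold; the names, values and data are invented for illustration and do not represent any vendor’s actual system.

```python
# Illustrative sketch only: verification (1:1) vs identification (1:N).
# Inputs are assumed to be face "embeddings" (numerical vectors) produced
# by some face analysis model; names and the threshold are invented examples.
import numpy as np

THRESHOLD = 0.8  # arbitrary cut-off; real systems tune this trade-off


def similarity(a, b):
    # Cosine similarity: closer to 1.0 means the embeddings are more alike.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(probe, reference):
    # Verification (1:1): does this person match their passport photo?
    return similarity(probe, reference) >= THRESHOLD


def identify(probe, database):
    # Identification (1:N): does this person match anyone in our database?
    # Note that the result is only a best guess with a score, never a fact.
    scores = {name: similarity(probe, emb) for name, emb in database.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= THRESHOLD else (None, scores[best])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    alice, bob = rng.normal(size=128), rng.normal(size=128)
    probe = alice + rng.normal(scale=0.1, size=128)       # a noisy new image of "Alice"
    print(verify(probe, alice))                           # True: 1:1 passport-style check
    print(identify(probe, {"alice": alice, "bob": bob}))  # ('alice', score): 1:N search
```

Classification goes a step further still, mapping the same kind of embedding to claimed attributes such as age or gender – the most contested use of all.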

There are opportunities for error and inaccuracy in each category of facial recognition, with classification being the most controversial because it claims to judge a person’s gender, race, or other characteristics. This categorisation can lead to assessments or decisions that infringe on the dignity of gender non-conforming people, embed harmful gender or racial stereotypes, and lead to unfair and discriminatory outcomes.

Furthermore, facial recognition is not about facts. According to the European Union Agency for Fundamental Rights (FRA), “an algorithm never returns a definitive result, but only probabilities” – and the problem is exacerbated because the data on which the probabilities are based reflects social biases. When these statistical likelihoods are interpreted as if they were a neutral certainty, this can threaten important rights to fair and due process. This in turn has an impact on individuals’ ability to seek justice when facial recognition infringes on their rights. Digital rights NGOs warn that facial recognition can harm privacy, security and access to services, especially for marginalised communities. A powerful example of this is when facial recognition is used in migration and asylum systems.

A question of social justice and democracy

Whilst discrimination resulting from technical issues or biased datasets is a genuine problem, accuracy is not the crux of why facial recognition is so concerning. A facial recognition system claiming to identify terrorists at an airport, for example, could be considered 99% accurate even if it did not correctly identify a single terrorist. And greater accuracy is not necessarily the answer either, as it can make it easier for police to target or profile people of colour based on damaging racialised stereotypes. The real heart of the problem lies in what facial recognition means for our societies, including how it amplifies existing inequalities and violations, and whether it fits with our conceptions of democracy, freedom, privacy, equality, and social good. Facial recognition by definition raises questions about the balance of personal data protection, mass surveillance, commercial interests and national security, which societies should carefully consider. Technology is frequently incredible, impressive, and efficient – but this should not be confused with its use being necessary, beneficial, or useful for us as a society. Unfortunately, these important questions and key issues are often decided out of public sight, with little accountability and oversight.
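
As a purely illustrative piece of arithmetic (the figures below are invented for the example), consider how a system that never catches anyone can still post an impressive accuracy score when the people being searched for are extremely rare:

```python
# Invented figures, for illustration only: overall "accuracy" can look
# excellent when the sought-after category is vanishingly rare.
travellers = 1_000_000   # passengers screened at an airport
terrorists = 10          # actual positives among them

# A system that flags nobody is "right" about every innocent traveller
# and wrong about every actual terrorist:
accuracy = (travellers - terrorists) / travellers
print(f"Accuracy: {accuracy:.4%}")   # 99.9990% - and zero terrorists identified

# Conversely, even a small false-positive rate swamps the real matches:
false_positive_rate = 0.01
wrongly_flagged = (travellers - terrorists) * false_positive_rate
print(f"Innocent people wrongly flagged: {wrongly_flagged:,.0f}")  # ~10,000
```

The point is not that the maths is wrong, but that a headline accuracy figure says very little about who bears the harm.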

What’s in a face?

Your face has a particular sensitivity in the context of surveillance, says France’s data protection watchdog – and as a very personal form of personal data, both live and non-live images of your face are already protected from unlawful processing under the General Data Protection Regulation (GDPR).

Unlike a password, your face is unique to you. Passwords can be kept out of sight and reset if needed – but your face cannot. If your biometric data is hacked, for example, there is no way to wipe the slate clean. Your face is also distinct from other forms of biometric data such as fingerprints, because it is almost impossible to avoid being subject to facial surveillance when such technology is used in public places. Unlike having your fingerprints taken, your face can be surveilled and analysed without your knowledge. Your face can also reveal characteristics – such as the religion you practise – that are protected under international law. For these reasons, facial recognition is highly intrusive and can infringe on rights to privacy and personal data protection, among many other rights.

Researchers have highlighted the frightening assumptions underpinning much of the current hype about facial recognition, especially when it is used to categorise emotions or qualities based on individuals’ facial movements or dimensions. This harks back to the discredited pseudo-science of physiognomy – a favourite of Nazi eugenicists – and can have massive implications for individuals’ safety and dignity when used to make a judgement about things like their sexuality or whether they are telling the truth about their immigration status. Its use in recruitment also increases discrimination against people with disabilities. Experts warn that there is no scientific basis for these assertions – but that has not stopped tech companies from churning out facial classification systems. When used in authoritarian societies or where being LGBTQI+ is a crime, this sort of mass surveillance threatens the lives of journalists, human rights defenders, and anyone who does not conform – which in turn threatens everyone’s freedom.

Why can’t we open the Black Box?

The statistical analysis underpinning facial recognition and other similar technologies is often referred to as a “black box”. Sometimes this is because the technological complexity of deep learning systems means that even data scientists do not fully understand how the algorithmic models make decisions. Other times, it is because the private companies creating the systems use intellectual property or other commercial protections to hide their models. The result is that individuals, and even states, are prevented from scrutinising the inner workings and decision-making processes of facial recognition technology, even though it affects so many fundamental rights – a situation that violates principles of transparency and informed consent.

Facial recognition and the rule of law

If this article has felt like a roll-call of human rights violations – that’s because it is. Mass surveillance through facial recognition technology threatens not just the right to privacy, but also democracy, freedom, and the opportunity to develop one’s self with dignity, autonomy and equality in society. It can have what is known as a “chilling effect” on lawful dissent, stifling legitimate criticism, protest, journalism and activism by creating a culture of fear and surveillance in public spaces. Different uses of facial recognition will have different rights implications – depending not only on what aspects of people’s faces are analysed and for what purpose, but also on the justification offered for the analysis. This includes whether the system meets legal requirements for necessity and proportionality – which, as the next article in this series will explore, many current applications do not.

The rule of law is of vital importance across the European Union, applying to both national institutions and private companies – and facial recognition is no exception. The EU can contribute to protecting people from the threats of facial recognition by strongly enforcing GDPR and by considering how existing or future legislation may impact upon facial recognition too. The EU should foster debates with citizens and civil society to help illuminate important questions including the differences between state and private uses of facial recognition and the definition of public spaces, and undertake research to better understand the human rights implications of the wide variety of uses of this technology. Finally, prior to deploying facial recognition in public spaces, authorities need to produce human rights impact assessments and ensure that the use passes the necessity and proportionality test.

When it comes to facial recognition, just because we can use it does not necessarily mean that we should. But what if we continue to be seduced by the allure of facial recognition? Well, we must be prepared for the violations that arise, implement safeguards for protecting rights, and create meaningful avenues for redress.

Facial recognition technology: fundamental rights considerations in the context of law enforcement (27.11.2019)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper.pdf

Why ID (2019)
https://www.accessnow.org/whyid-letter/

Ban Face Surveillance (2019)
https://epic.org/banfacesurveillance/

Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System (16.08.2018)
https://ihrp.law.utoronto.ca/sites/default/files/media/IHRP-Automated-Systems-Report-Web.pdf

Declaration: A Moratorium on Facial Recognition Technology for Mass Surveillance Endorsements
https://thepublicvoice.org/ban-facial-recognition/endorsement/

The surveillance industry is assisting state suppression. It must be stopped (26.11.2019)
https://www.theguardian.com/commentisfree/2019/nov/26/surveillance-industry-suppression-spyware

(Contribution by Ella Jakubowska, EDRi intern, with many ideas gratefully received from or inspired by members of the EDRi network)

04 Dec 2019

Interoperability: A way to escape toxic online environments

By Chloé Berthélémy

The political debate on the future Digital Services Act mostly revolves around the question of online hate speech and how best to counter it. Whether based on state intervention or self-regulatory efforts, the solutions to address this legitimate public policy objective will be manifold. In its letter to France criticising the draft legislation on hateful content, the European Commission itself acknowledged that simply pushing companies to remove excessive amounts of content is undesirable and that the use of automatic filters is ineffective in the face of such a complex issue. Out of the range of solutions that could help to tackle online hate speech and foster spaces for free expression, a legislative requirement for certain market-dominant actors to be interoperable with their competitors would be an effective measure.

Trapped in walled gardens

Originally, the internet enabled everyone to interact with one another thanks to a series of standardised protocols. It allowed everyone to share knowledge, to help each other and to be visible online. Because the internet infrastructure is open, anyone can create their own platform or means of communication and connect with others. However, the internet landscape has changed dramatically in the last decade: the rise of Big Tech companies has resulted in a highly centralised online ecosystem built around a few dominant players.

Their power lies in their huge profits stemming from an opaque advertising business and in the enormous user bases that drag even more users into their services. In contrast with the historical openness of the internet, these companies strive for even higher profits by closing their systems and locking their customers in. Hence, the costs of leaving are too high for many to actually take the leap. This gives companies absolute control over all the interactions taking place and the content posted on their services.

Facebook, Twitter, YouTube or LinkedIn decide for you what you should see next, track each action you make to profile you and decide whether your post hurts their commercial interests, and therefore should be removed, or not. In that context, you have no say in the making of the rules.

Unhealthy communications

What is most profitable for these companies is content that generates the most profiling data possible – arising from each user interaction. In that regard, pushing offensive, polarising and shocking content is the best strategy to capture users’ attention and trigger a reaction from them. Combined with a system of rewards – likes, views, thumbs up – for those using the same rhetorical strategies, this provides fertile ground for conflict and the spread of hate. In this toxic environment governed by arbitrary decisions, the business model explains how death threats against women can thrive while LGBTQIA+ people are censored when discussing queer issues. Entire communities of users are dependent on the goodwill of the intermediaries and must endure sudden changes of “community guidelines” without any possibility to contest them.

When reflecting on the dissemination mechanisms of hate speech and violent content online, it is easy to understand that delegating the task of protecting victims to these very same companies is absolutely counter-intuitive. However, national governments and the EU institutions have mainly chosen this regulatory path.

Where is the emergency exit?

Another way would be to support the development of platforms with varied commercial practices, degrees of user protection and content regulation standards. Such a diversified online ecosystem would give users a genuine choice of alternative spaces that fit their needs, and even allow them to create their own with chosen community rules. The key to this system’s success would be to maintain the link with the other social media platforms, where most of their friends and families still remain. Interoperability would guarantee everyone the possibility to leave without losing their social connections and to join another network where spreading hate is not as lucrative. On the one hand, interoperability can help users escape the overrepresentation of hateful content and dictatorial moderation rules; on the other hand, it can trigger the creation of human-scaled, open but safe spaces of expression.

Interoperability allows a user on service A to interact with, read and show content to users on service B. This is technically feasible: Facebook, for example, built its messaging service on an open protocol before 2015, and Twitter still permits users to tweet directly from third-party websites. It is crucial that this discussion takes place in the wider debate around the Digital Services Act: the questions of who decides how we consume daily news, how we connect with friends online and how we choose our functionalities should not be left to a couple of dominant companies.
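
As a rough illustration of why this is feasible, open federation protocols such as ActivityPub express a post as a plain, standardised message built on the W3C ActivityStreams vocabulary, which any compliant service can receive and display. The sketch below shows the general shape of such a message; the server names are invented examples, not real services.

```python
# Sketch of a federated post using the W3C ActivityStreams vocabulary
# (the basis of protocols such as ActivityPub). Server names are invented.
import json

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://service-a.example/users/alice",   # user on service A
    "to": ["https://service-b.example/users/bob"],      # follower on service B
    "object": {
        "type": "Note",
        "content": "Hello from another platform!",
    },
}

# Service A would deliver this JSON document to service B over HTTPS;
# service B can then show the post to its own users in its own interface.
print(json.dumps(activity, indent=2))
```

Because the message format is open and documented, no single company controls who may implement it.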

Content regulation – what’s the (online) harm? (09.10.2019)
https://edri.org/content-regulation-whats-the-online-harm/

France’s law on hate speech gets a thumbs down
https://edri.org/frances-law-on-hate-speech-gets-thumbs-down

Hate speech online: Lessons for protecting free expression (29.10.2019)
https://edri.org/hate-speech-online-lessons-for-protecting-free-expression/

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

(Contribution by Chloé Berthélémy, EDRi)

04 Dec 2019

Shedding light on the Facebook content moderation centre in Athens

By Homo Digitalis

Following months of efforts, in early September 2019, EDRi observer Homo Digitalis managed to shed light on a case that concerns each and every Facebook user: a content moderation centre in Athens, Greece, tasked with moderating Facebook ads. As with many other content moderation policies run by virtually unaccountable private companies, this can pose threats to our freedom of expression.

A person who claimed to be working as a “Facebook content moderator” in Athens contacted Homo Digitalis in February 2019. But how could this be? Facebook has various content moderation centres around the globe, but none of them was known to operate in Athens. It turned out that Tassos (name has been changed) was indeed one of the hundreds of people who work in Athens as content moderators for Facebook. These content moderators determine, on behalf of Facebook, what is inappropriate or misleading and needs to be deleted from the platform. The particularity of the moderation centre in Athens is that it moderates exclusively advertisements, not content that individual Facebook users post to the platform. However, “advertisements” for Facebook does not only mean advertisements posted by transnational corporations or prominent newspapers. It includes all posts by any professional page on Facebook, including the personal professional pages of lawyers, dentists, journalists, photographers, models, and professionals in general.

Facebook has been operating this centre at least since September 2018, through a subcontractor called Teleperformance, a France-based multinational specialising in providing customer service for other companies – or “outsourced omnichannel customer experience management” in business jargon.

But how did Tassos end up there? By responding to a brief, vague job post. No qualifications were required for the post, except speaking one of the 25 working languages. The absence of any requirement for experience in content moderation – or even in technology – raises serious concerns about the quality of the selection process.

Tassos said that during a brief interview, he was informed that the job involved the possibility of being exposed to violent imagery. He went through three weeks of training before starting work. This training mostly consisted of presentations of Facebook’s policies and practical examples of how to solve a case. However, Facebook’s policies tend to change often and quite rapidly, with no extra training provided on the new policies. According to Tassos, Facebook also typically took months to respond to any questions and doubts that arose around the new policies – or did not respond at all to many questions. This led to situations in which the moderators had no adequate means to properly deal with an advertisement that might have been against a new or amended policy.

Although there was no formal daily quota for moderators to meet, informally they were expected to screen a hundred posts per hour. Tassos said that as the shift drew to a close, some of his co-workers would simply approve or reject content without further deliberation. Their aim, he said, was to meet a desirable target. Wrong decisions made under pressure could subsequently lead to biases in the artificial intelligence (AI) system that will allegedly be used to perform the same task for Facebook in the coming years.

Facebook, as part of its business operations, manages the way that ads run on its platform. It seems, however, that it has chosen some questionable ways of doing so. The current operation of the Athens moderation centre is a threat to the rights of users, since it may put their freedom of expression and freedom of information in danger. It is unacceptable that decisions which could profoundly affect the personal or business life of one or more Facebook users are made in seconds, by people who have no expertise in conducting a proper assessment.

In addition to that, the operations of the centre raise concerns for the well-being of moderators, who work without psychological support or supervision. Tassos spoke about the working conditions and the training the moderators received. According to his contract, which he shared with Homo Digitalis, he provided call support and “services primarily in the Facebook customer service department”. Tassos explained that his work did not include telephone calls. On the other hand, it occasionally included depictions of violence. If Tassos was not sure whether he should accept or reject a post, he had to go through a long manual, which included the cruelest examples of what is not accepted under Facebook policies. “We would check this manual many times per day. I was dreaming of two dead dogs hung on a tree, who were shown as an example in this manual, every night for months.” Employees could book a 30-minute session every two weeks with one of the three psychologists who were available on site. However, most moderators were unwilling to attend such sessions, concerned that their conversations might leak and result in their dismissal. Tassos was prohibited from discussing his work with anyone outside the organisation, and he was not allowed to reveal the identity of his employer on social media.

During the last 14 months, approximately 800 people have gone through the recruitment procedure and have been hired by Teleperformance to moderate advertisements for Facebook in Athens. The employees cover a wide range of languages and nationalities (allegedly 25 languages and more than 70 countries are covered). The languages include Greek, English, French, German, Spanish, Italian, Arabic, Turkish, Norwegian, Finnish, Hebrew, and Russian. It is easy to see that the operation of this centre concerns every Facebook user in countries where one of these languages is spoken.

Homo Digitalis communicated the story to Kathimerini, a prominent newspaper in Greece. Kathimerini had been conducting research on the issue for years, but had not managed to obtain the testimony of a moderator. The night before Kathimerini published an article on the issue on its front page, Facebook made a formal statement for the first time, admitting that it indeed operates a content moderation centre in Athens.

Homo Digitalis
https://www.homodigitalis.gr/en/

Homo Digitalis comments for Facebook’s content moderation center in Greece (20.10.2019)
https://www.homodigitalis.gr/en/posts/4493

Full Story at Kathimerini’s website: Inside Facebook’s moderation hub in Athens
http://www.ekathimerini.com/246279/gallery/ekathimerini/special-report/inside-facebooks-moderation-hub-in-athens

(Contribution by Konstantinos Kakavoulis, EDRi observer Homo Digitalis, Greece)

04 Dec 2019

Why privacy is particularly crucial for people with disabilities

By Guest author

With data being described as the “new currency”, many questions arise around privacy and data protection. We all leave ever larger data footprints as we use more, and more advanced, technologies. We let apps access our phonebook contacts, track our habits and behaviour, and know our preferences. At other times, we do not even have an alternative to smart meters being installed in our homes, network operators being required to store our connection records, and websites tracking our IP addresses. All of these are examples of personal data we emit and that gets collected, yet we are often not exactly aware of how this data is being used and how sensitive it can be.

While this concerns everyone, socially vulnerable and marginalised groups are even more affected by this lack of control over their data. For people with disabilities, who often face stigmatisation and segregation, this is a real and imminent threat. Even simple and seemingly harmless interactions on social media can have serious consequences. Your contact list alone, together with the interests you and your contacts share, can reveal a lot about you, including any disability or health conditions. In some cases, this can reveal belonging to socially disadvantaged and persecuted groups. In the wrong hands, this data can lead to discrimination and social exclusion.

Yet as technologies become more connected and interlinked, this type of sensitive data gets exposed more easily, even without data-hoarding social networks. For example, using cloud-based assistive services – such as captioning for people with auditory disabilities, text simplification for people with cognitive and learning disabilities, and image recognition for people with visual disabilities – reveals a likely disability to, at the very least, the app developer, the operating system vendor, and the network operator. Trojan apps that collect information from your other installed apps expand the audience that gets access to highly personal and potentially sensitive information, often without your knowledge.

This trend continues as technology permeates ever more of our daily lives. For example, a smart fridge that helps with your grocery shopping holds sensitive knowledge of your eating habits and dietary needs. That is, even without using specialised assistive technologies, everyday products that are increasingly connected and equipped with some form of Artificial Intelligence (AI) gather and process our personal data, which in many cases can be highly sensitive. Compounded by AI bias – which is even higher for people with disabilities, due to the lack of proper datasets and to inherent prejudice – data gathered by home appliances such as a smart fridge could even contribute to outcomes like unemployment.

Technology provides immense opportunities for many, in particular for people with disabilities who rely on technology for accessibility – not only assistive technologies, but also everyday products and services that can empower and contribute to more equality. Yet there are also serious challenges, including in privacy and data protection. The report “Plug and Pray? – A disability perspective on artificial intelligence, automated decision-making and emerging technologies” by the European Disability Forum (EDF) describes critical challenges of accessible technology which threaten social justice. The key to addressing these challenges is to employ inclusive design processes that involve people with disabilities throughout design and development – “nothing about us without us”.

The World Wide Web Consortium (W3C) Web Accessibility Initiative (WAI)
https://www.w3.org/WAI/

European Disability Forum (EDF)
http://www.edf-feph.org/

Plug and pray? A disability perspective on artificial intelligence, automated decision-making and emerging technologies
http://www.edf-feph.org/sites/default/files/edf-emerging-tech-report-accessible.pdf

Easy to read: How can new technologies make things better for people with disabilities? (22.03.2019)
http://www.edf-feph.org/sites/default/files/2019_03_22_etr_of_edf_report_on_technology.pdf

(Contribution by Shadi Abou-Zahra, World Wide Web Consortium Web Accessibility Initiative – W3C WAI)

04 Dec 2019

France’s law on hate speech gets a thumbs down

By Chloé Berthélémy

France’s draft legislation on hate speech (also called the “Avia law”) received a lot of criticism. The draft law was approved in July 2019 by the French National Assembly and will be examined by the Senate in December. It would oblige platforms to remove flagged hateful content within 24 hours or face fines. The Czech Republic first sent a detailed Opinion on the draft law. This was followed by the European Commission officially asking France to postpone the adoption of the law, mentioning in its letter a risk of violation of articles 3, 14 and 15 of the EU E-Commerce Directive.

EDRi, with the support of its members ARTICLE 19 and Access Now, submitted formal comments to the Commission to warn against the major pitfalls that the draft Avia law entails. The Commission joins EDRi in calling on France to halt the adoption of the legislative proposal. Leaving social media companies just 24 hours to do a reasoned assessment of a flagged piece of content officially appoints them as arbiters of legality and “the truth”, with heightened chilling effects on freedom of expression online. The fines companies face for non-compliance with the 24-hour deadline are disproportionate for platforms with limited resources such as not-for-profit community initiatives like Wikipedia.

Furthermore, the strict time cap will de facto lead to the use of filtering technology, resulting in over-removal of content. Because of the highly context-sensitive nature of hate speech, it is impossible for content filters to understand the nuances between actual illegal hateful speech and legal content. In addition, the draft law includes the obligation to prevent hate speech content from being re-uploaded. This will lead to a general filtering of all content posted on the platforms, which is not compatible with the E-Commerce Directive’s prohibition of general monitoring.

Lastly, the multiplication of national laws dealing with all sorts of “undesirable” content online leads to a confusing legislative patchwork in Europe. For companies and users it means uncertainty and blurs their understanding of which law applies to them. The Commission rightly suggests that the Avia law would overlap with the future Digital Services Act (DSA), which foresees European rules for how platforms moderate illegal content online. For this reason, EDRi provides comments on specific elements of the draft Avia Law to which the European Commission should pay particular attention when developing its own proposal.

EDRi: Contribution to the examination of France’s draft law aimed at combating hate content on the internet (18.11.2019)
https://edri.org/wp-content/uploads/2019/11/20191118_EDRiCommentsEC_FrenchAvialaw.pdf

A privately managed public space? (20.11.2019)
https://edri.org/online-content-moderation-privately-managed-public-space/

Content regulation – what’s the (online) harm? (09.10.2019)
https://edri.org/content-regulation-whats-the-online-harm/

How security policy hijacks the Digital Single Market? (02.09.2019)
https://edri.org/how-security-policy-hijacks-the-digital-single-market/

Hate speech online: Lessons for protecting free expression (29.10.2019)
https://edri.org/hate-speech-online-lessons-for-protecting-free-expression/

(Contribution by Chloé Berthélémy, EDRi)

04 Dec 2019

Serbia: Unlawful facial recognition video surveillance in Belgrade

By SHARE Foundation

On 3 December 2019, EDRi member SHARE Foundation, together with two other organisations, published a policy brief concerning a new “smart video-surveillance system” in Belgrade. The brief highlights that the impact assessment of video surveillance on human rights, conducted by the Serbian Ministry of Interior, did not meet the legal requirements, and that the installation of the system lacks basic transparency. SHARE states that the process should be suspended immediately, and that the authorities should engage in an inclusive public debate on the necessity, implications and conditionality of such a system.

The installation of smart video surveillance in Belgrade, with thousands of cameras and facial recognition software, has raised public concern. Hundreds of people have submitted freedom of information (FOI) requests asking the Ministry of Interior (MoI) about said cameras, while public officials made contradictory statements and withheld crucial information. Consequently, civil society has sought to assess the introduction of new forms of video surveillance in public spaces.

Three civil society organisations – SHARE Foundation, Partners for Democratic Change Serbia (Partners Serbia) and Belgrade Centre for Security Policy (BCSP) – published a detailed analysis of the MoI’s Data Protection Impact Assessment (DPIA) on the use of smart video surveillance. The conclusion was that the document does not meet the formal or material conditions required by the Law on Personal Data Protection in Serbia.

The Commissioner on Personal Data Protection in Serbia published his opinion on the DPIA, confirming these findings. According to the Commissioner, the DPIA was not conducted in line with the requirements of the Personal Data Protection Law; it is not clear to which surveillance system it refers, and what are the legal grounds thereof; it does not include a risk assessment regarding the rights and freedoms of data subjects, nor a comprehensive description of data protection measures.

How did all begin?

At the beginning of 2019, the Minister of Interior and the Director of Police announced the placement of 1000 cameras at 800 locations in Belgrade. The public was informed that these surveillance cameras would have facial and license plate recognition software.

Thereafter, civil society organisations requested that the MoI provide information on:

  1. public procurement of the cameras,
  2. impact assessment on personal data protection that must be developed under the Personal Data Protection Law,
  3. camera locations and
  4. crime risk assessment based on which camera locations were determined.

The MoI answered that all documents for the public procurement of video equipment are confidential, while the information on locations and crime rate analysis is not contained in any document that the Ministry possesses – the existence of such a document being a legal precondition for accessing information of public importance in Serbia. The MoI added that the impact assessment of data processing on the protection of personal data had not been completed because the implementation of the new Personal Data Protection Law had not yet begun.

However, the MoI’s responses contain some information on cooperation with the Chinese company Huawei on improving the information and telecommunication system through the “Safe City” project. The MoI entered into a Strategic Partnership Agreement with Huawei in 2017, aiming to introduce eLTE technologies. The Serbian government provided consent for this agreement in 2016. At the same time, Huawei published significantly more information on its cooperation with the MoI. Huawei stated that it offered the MoI smart video surveillance and intelligent transport systems, an advanced 4G network, unified data centres and related command centres. Furthermore, nine test cameras were originally installed at five locations and performed successfully, according to Huawei. This information was unknown to the Serbian public.

Huawei removed the content on its cooperation with the MoI from its official website shortly after the SHARE Foundation released a report containing information that Huawei had already published online. The archived version is still available. In the meantime, the Minister of Interior said that 2000 cameras would be installed instead of 1000.

Finally, in September 2019, the MoI drafted and delivered the DPIA to the Commissioner for an opinion. For civil society, this was a commendable reaction from the authorities, especially given that the new Personal Data Protection Law had entered into force at the end of August and the DPIA was completed in September 2019.

An insufficient Impact Assessment

The MoI’s DPIA missed the opportunity to address all the issues of public interest, and failed to fulfil both the formal and material requirements of the Personal Data Protection Law.

The DPIA does not meet the minimum legal requirements, especially in relation to smart video surveillance, which is the source of greatest interest and concern among the domestic and foreign public. The methodology and structure of the DPIA do not comply with the requirements of the Personal Data Protection Law because:

  • There is no comprehensive description of the intended actions on processing personal data in the case of smart video surveillance;
  • There is no risk assessment regarding the rights and freedoms of the data subjects;
  • The measures that are to be taken in relation to the existence of risk are not described;
  • The technical, organisational and personnel measures for data protection are only partially described; and
  • The legal basis for the mass use of smart video surveillance systems is disputable.

The positive effects on crime reduction described in the DPIA are overestimated, because relevant research and comparative practices have been used selectively.

It has not been established that the use of smart video surveillance is necessary for the sake of public safety, or that the use of such invasive technology is proportionate, considering the risks to citizens’ rights and freedoms.

The DPIA contains examples from countries that rely heavily on video surveillance and facial recognition technology and neglects the growing trend of banning or restricting such systems in the world, due to the identified risks to citizens’ rights and freedoms. Finally, there are numerous concerns and inconsistencies about the use of smart video surveillance comparing the DPIA and statements made by MoI officials in the media.

Suspend smart video surveillance now!

The MoI should suspend further introduction of smart video surveillance systems. In addition, the MoI and the Commissioner should initiate an inclusive public debate on video surveillance legislation and practice that will be in line with a charter on the democratic application of video surveillance in the European Union.

SHARE Foundation
https://www.sharefoundation.info/en/

Unlawful video surveillance with face recognition in Belgrade (04.12.2019)
https://www.sharefoundation.info/en/unlawful-video-surveillance-with-face-recognition-in-belgrade/

Policy brief: Serbian government is implementing unlawful video surveillance with face recognition in Belgrade (03.12.2019)
https://www.sharefoundation.info/wp-content/uploads/Serbia-Video-Surveillance-Policy-brief-final.pdf

(Contribution by EDRi member SHARE Foundation, Serbia)

03 Dec 2019

Wanted: Communications Intern!

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

Join EDRi now and become a superhero for the defence of our rights and freedoms online!

The EDRi Brussels office is currently looking for an intern to support our communications, campaigning and community coordination team. The internship will focus on social media, publications, campaigning, press work, and the production of written materials. The intern will also assist in tasks related to community coordination.

The internship will begin in February 2020 and go on for 4-6 months. You will receive a monthly remuneration of minimum 750 EUR (according to “convention d’immersion professionnelle”).

Key tasks:

  • Social media: drafting posts, engaging with followers, monitoring
  • Layouts and visuals: layouts and editing of visuals (specifically for social media)
  • Writing and editing: drafting and editing of press releases and briefings, newsletter articles, and supporter mailings
  • Assisting in other communications, campaigning and community coordination tasks, such as maintenance of mailing lists, monitoring media visibility, updating and analysing communications statistics, and event organisation

Needed:

  • experience in social media community management and publications
  • layout, photo and visual editing skills
  • excellent skills in writing and editing
  • fluent command of spoken and written English

Desired:

  • experience in journalism, media or public relations
  • interest in online activism and campaigning for digital human rights

How to apply:

To apply please send a maximum one-page cover letter and a maximum two-page CV (only PDFs are accepted) by email to heini >dot< jarvinen >at< edri >dot< org. Closing date for applications is 5 January 2020. Interviews with selected candidates will take place during the first half of January, and the internship is scheduled to start in February.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.

25 Nov 2019

New Protocol on cybercrime: cutting red tape ≠ cutting human rights safeguards

By Chloé Berthélémy

From 20 to 22 November 2019, European Digital Rights (EDRi) and the Electronic Frontier Foundation (EFF) took part in the Octopus Conference 2019 at the Council of Europe (CoE) to present the comments submitted by EFF, EDRi, IT-Pol Denmark and the Electronic Privacy Information Center (EPIC) on how to ensure that draft provisions of the Second Additional Protocol to the Cybercrime Convention respect human rights. The Protocol sets the conditions for access to electronic data by law enforcement in the context of criminal investigations.

17 civil society organisations issued a joint call in a letter to the CoE Cybercrime Committee (T-CY), urging it to ensure that the negotiations between more than 60 countries include substantial human rights safeguards in the draft text. The list of potential signatories goes far beyond the Council of Europe Parties and includes countries like the United States, Turkey, Morocco and Azerbaijan.

The procedures proposed by the Cybercrime Convention Committee (T-CY) exacerbate the challenges of the Cybercrime Convention (CCC), and create the potential for serious interference with human rights.

– the letter reads.

While the United States and the EU are engaging in a race to the bottom against one another in terms of privacy protections, it is essential that the T-CY listens to civil society concerns and avoids creating a mechanism that bypasses critical legal protections inherent in the current Mutual Legal Assistance Treaties (MLATs) – falsely considered as “red tape”.

Read the letter here.

Joint civil society response to discussion guide on a 2nd Additional Protocol to the Budapest Convention on Cybercrime (28.06.2018)
https://edri.org/files/consultations/globalcoalition-civilsocietyresponse_coe-t-cy_20180628.pdf

Nearly 100 public interest organisations urge Council of Europe to ensure high transparency standards for cybercrime negotiations (03.04.2018)
https://edri.org/global-letter-cybercrime-negotiations-transparency/

New Protocol on cybercrime: a recipe for human rights abuse? (25.08.2018)
https://edri.org/global-letter-cybercrime-negotiations-transparency/

22 Nov 2019

ePrivacy: EU Member States push crucial reform on privacy norms close to a dead end

By EDRi

Today, on 22 November 2019, the Permanent Representatives Committee of the Council of the European Union (COREPER) has rejected the Council’s position on a draft ePrivacy Regulation.

“In this era of disinformation and privacy scandals, refusing to ensure strong privacy protections in the ePrivacy Regulation is a step backwards for the EU,” said Diego Naranjo, Head of Policy at European Digital Rights (EDRi). “By first watering down the text and now halting the ePrivacy Regulation, the Council takes a stance to protect the interests of online tracking advertisers and to ensure the dominance of big tech. We hope the European Commission will stand on the side of citizens by defending the proposal and asking the Council to ensure a strong revised text soon in 2020.”

“The ePrivacy Regulation aims to strengthen users’ right to privacy and create protective measures against online tracking. Instead, EU states turned it into a surveillance toolkit,” said Estelle Massé, Senior Policy Analyst at EDRi member Access Now. “Today’s rejection should not be a signal that the reform cannot happen. Instead, it should be a signal that states must go back to the negotiating table and deliver what was promised to EU citizens: stronger privacy protections.”

In January 2017, the European Commission launched its proposal for a new ePrivacy Regulation, aiming to complement the General Data Protection Regulation (GDPR) and to protect the right to privacy and to the confidentiality of communications. An update to the outdated 2002 ePrivacy Directive is sorely needed – in today’s world, where technology is intertwined with our everyday lives, a strong regulation is crucial to protect us against the negative impacts of “surveillance capitalism”, to safeguard the functioning of our democracies, and to put people at the core of the internet. The European Parliament took a strong stance towards the proposal when it adopted its position in October 2017. For over two years, the Council prevented the proposal from advancing, presenting suggestions that lowered the fundamental rights protections that were proposed by the Commission and strengthened by the Parliament.

Today, the Council has voted to reject its own text. This leaves the door open for current practices that endanger citizens’ rights to continue. Now either the Commission must withdraw the entire proposal, leaving citizens unprotected, or the Council must prepare a new text that can gather enough support to move the proposal forward. To meet the aims set for the ePrivacy Regulation, the new text should ensure privacy by design and by default, protect communications in transit and when stored, ban tracking walls, prevent backdoors that would allow scanning of private communications without a court order, and avoid secondary processing of communications data without consent.

Read more:

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

EU states vote on ePrivacy reform: We were promised more privacy. Instead, we are getting a surveillance toolkit. (22.11.2019)
https://www.accessnow.org/eu-states-vote-on-eprivacy-reform-we-were-promised-more-privacy-instead-we-are-getting-a-surveillance-toolkit/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Five reasons to be concerned about the Council ePrivacy draft (26.09.2018)
https://edri.org/five-reasons-to-be-concerned-about-the-council-eprivacy-draft/

Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)
https://edri.org/open-letter-to-eu-member-states-deliver-eprivacy-now/

The most recent European Council ePrivacy text (15.11.2019)
https://www.politico.eu/wp-content/uploads/2019/11/file.pdf

20 Nov 2019

Dance. Enjoy. Share. With Care.

By Ella Jakubowska
  • Anyone using cloud services should be aware of what the “cloud” is, what it is not, and how it can affect our privacy and security.
  • Our information stored in “clouds” can be protected if the EU says “Yes!” to a strong ePrivacy Regulation, greater enforcement of the General Data Protection Regulation (GDPR), and drops the “e-evidence” proposals.

Storing our information in “clouds” gives us access to funny photos of our dogs at the touch of a button, lets us back up our mobile phones so that we don’t lose our crush’s number forever if we drop our phone down the toilet (oops!), and gives us the means to binge-watch that addictive TV show that everyone is talking about. It can even amplify computing capacity, giving doctors the power to treat rare diseases more effectively. Many of these things were unimaginable just ten years ago – but today, we carry this incredible power in the palm of our hands.

It is important that cloud users have the knowledge and control to upload data to cloud services safely, securely, and in an enjoyable way. Your personal data should be protected online, including when you upload it to and store it in the cloud. One of the fundamental aims of 2018’s General Data Protection Regulation (GDPR), after all, was to protect the personal data of all citizens in the EU, and to set a globally-leading standard for personal data protection.

The not-so-fluffy cloud

Yet, while the word “cloud” sounds soft and fluffy, the truth is that there is no such thing as “the cloud” or “your cloud”. People outsource the storage of data from their own devices to the internet servers of a private company. In reality, these servers are “the cloud”, and the company they belong to most often profits from gathering more and more data. In some cases, uploaded data will be subject to only very weak data protections. And with the proposed ePrivacy text – a vital complement to the GDPR – still stuck at the Council of the European Union after over two and a half years, anyone using the internet in the EU is left vulnerable and inadequately protected.

EU laws can keep it together

This is where stronger EU legislation is needed. Under the European Parliament’s ePrivacy text, a wide range of online rights would be protected. This includes the storage, transit and encryption of online communications, which would help to protect users when their communications data is backed up to the cloud. Personal data other than communications data is already protected by the GDPR. This is important because, as recent cases in Germany have shown, unlawful breaches of minors’ data are already happening in Microsoft’s cloud services.

This is also an issue in the context of the so-called “e-evidence” debate on proposed legislation for law enforcement to access European citizens’ data across borders, straight from service providers. The legislation would allow police forces from other EU countries to directly access the private information that you have stored on the cloud: without a judicial warrant, without you or your own government knowing that this is happening, and even without you being a suspect. Under this proposal, cloud providers have very little opportunity to refuse requests to hand over cloud data, and crucial human rights accountability measures and due process mechanisms are completely missing. E-evidence legislation therefore poses a huge threat to the security and privacy of data that is stored on a cloud.

The cloud can give you flexibility, convenience and peace of mind – but it is important to know where your data is going, and who might have access to it. The cloud is no longer a source of reassurance and convenience if a private company (or a hacker) can misuse funny videos of you and your friends, personal messages with your parents about a health condition, or an intimate browser history that contains information about your sexual activities. In order to protect the information of millions of European citizens, the EU must adopt ePrivacy, enforce GDPR and drop the e-evidence proposals.

Remember, data protection is cool – and knowing your rights pays off!


Read more:

Your family is none of their business (23.07.2019)
https://edri.org/your-family-is-none-of-their-business/

Real-time bidding: The auction for your attention (04.07.2019)
https://edri.org/real-time-bidding-the-auction-for-your-attention/

Video: Dance. Enjoy. Share. With care.
https://www.youtube.com/watch?v=5N_lrtOkW3g

Right a wrong: ePrivacy now! (09.10.2019)
https://edri.org/right-a-wrong-eprivacy-now/

“E-evidence”: Repairing the unrepairable (14.11.2019)
https://edri.org/e-evidence-repairing-the-unrepairable/

(Contribution by Ella Jakubowska, EDRi intern)
