20 Jun 2019

E-Commerce review: Opening Pandora’s box?

By Kirsten Fiedler

The next important battle for our rights and freedoms in the digital sphere is looming on the horizon. While the public debate has recently focused on upload filters for alleged copyright infringements and online “terrorist” content, a planned legislative review will look more broadly at the rules for all types of illegal and “harmful” content.

This review aims to update the rules on how online services, such as social media platforms, should or should not delete or block illegal and “harmful” content. A reform might also bring changes to how online services could be held liable when such content is not taken down. The big question is: will the review of the E-Commerce Directive (ECD) open Pandora’s box and become one of this decade’s biggest threats to citizens’ rights and freedoms online – or will it be a chance to clarify and improve the current situation?

Christchurch, copyright and election manipulation

The recently adopted Copyright Directive and the draft European rules for the removal of terrorist content online initiated the creation of sector-specific rules for content removals.

Events like the Christchurch tragedy, potential disinformation threats during the European elections and hateful comments from increasingly radicalised right-wing extremists after the murder of a German pro-migrant politician contributed further to the debate surrounding illegal and “harmful” online content.

These events led to a multiplication of calls on online services to “do more” and to “take more responsibility” for what is being uploaded to their servers. Several countries have started discussions about the adoption of national rules. For instance, following the German example, France has just introduced a law against online hate, and the UK published its controversial Online Harms White Paper.

The E-Commerce Directive: what it is and why its reform is unavoidable

Adopted nearly 20 years ago, the E-Commerce Directive sets up liability exemptions for hosting companies for content that users share on their networks. Until very recently, these rules applied horizontally to all sorts of illegal content, including copyright infringements, hate speech, and child abuse material. The current rules for take-downs and removals are therefore (indirectly) defined by the ECD.

While the Directive is not perfect and created a few issues, mainly due to lack of clarity, its safe harbour provisions encouraged the protection of the fundamental rights of users, in particular the freedom of expression and that of information.

Since the adoption of the ECD, however, the landscape of services that might or might not fall under liability exemptions has drastically changed. Notably, cloud services and social media platforms became very important players and some have gained significant market power. Currently, a small number of dominant platforms have a high impact on individuals’ rights and freedoms, our societies and on our democracies.

The nature of the internet has also vastly changed in the past 20 years towards an increasingly participatory community. As a result, the amount of user-generated content has increased exponentially. At the same time, we are witnessing growing government pressure on companies to implement voluntary mechanisms against alleged illegal or “harmful” content. These two parallel developments have resulted in an increasing number of wrongful removals and blocking of legitimate speech.

In the past months, the Directorate-General for Communications Networks, Content and Technology (DG Connect) of the European Commission has already started exploring policy options for content moderation, which will be presented to the incoming College of Commissioners. A reform of the ECD that attempts to harmonise liability exemptions and content moderation rules seems to have become unavoidable.

The upcoming reform can therefore be both a chance and a potential trap for policy-makers. On one hand, it offers the opportunity to create legal certainty and introduce safeguards that enable users to enjoy their rights and freedoms. On the other, it becomes a trap if policy-makers embrace blunt one-size-fits-all measures that sidestep real solutions to societal issues and instead lead to massive collateral damage.

19 Jun 2019

Fighting defamation online – AG Opinion forgets that context matters

By EDRi and IT-Pol

On 4 June 2019, Advocate General (AG) of the Court of Justice of the European Union (CJEU), Maciej Szpunar, delivered his Opinion on the Glawischnig-Piesczek v Facebook Ireland case. The case is related to injunctions obliging a service provider to stop the dissemination of a defamatory comment. Looking carefully at this Opinion is important, as the final ruling of the CJEU usually follows the lines of the AG’s Opinion.

The case involves Ms Glawischnig-Piesczek, an Austrian politician, who was the target of a defamatory comment shared publicly on Facebook. As Facebook did not react to her first request to delete the comment, Ms Glawischnig-Piesczek requested the Austrian courts to issue an order obliging Facebook to remove the publication and prevent its dissemination, covering exact copies of the original comment as well as “equivalent content”. After the first court injunction, Facebook disabled access in Austria to the content initially published. Ultimately, the Supreme Court of Austria, before which the case was brought, referred several questions to the CJEU concerning the geographical scope of such an injunction as well as its application to statements with identical wording or equivalent meaning. As Facebook is not necessarily aware of all identical or equivalent content, the upcoming judgment of the CJEU will be essential for the interpretation of the E-Commerce Directive, notably its Articles 14 and 15.

In his Opinion, the AG states that a hosting provider such as Facebook can be ordered to seek and identify, among all the information disseminated by users of that platform, content identical to the content that a court has characterised as illegal. Moreover, the hosting provider may be required to search for equivalent content, but only among the content disseminated by the user who generated the illegal information in the first place.

The Opinion is interesting for two reasons: first, it provides reflection on the way to distinguish between general and specific monitoring of content by hosting providers; second, it tries to draw a line between “identical” and “equivalent” content.

AG Szpunar starts by expressing great concern that an obligation on an intermediary to filter all content would make it aware of illegal content, thus causing the loss of its liability exemption under Article 14 of the E-Commerce Directive. In the present case, the referring court has established that Facebook falls under Article 14, so the active-passive host distinction is not further explored in the Opinion. The upcoming CJEU case about the liability of YouTube for user uploads (C-682/18) will undoubtedly revisit this question. However, the AG does not preclude the possibility of imposing “active” monitoring under Article 15 of the same Directive. He recalls the conclusions of the L’Oréal v eBay case (C-324/09), which limits the preventive obligation (i.e. “filtering”) to “infringements of the same nature by the same recipient of the same rights, in that particular case trade mark rights” (point 45). For a monitoring obligation to be specific and sufficiently targeted, the AG mentions the criterion of duration, but also information relating to the nature of the infringements, their author and their subject. This raises the question of how the monitoring can be limited in time and stopped once a specific case is declared to be over.

Applying these principles to the present case, the AG believes a monitoring obligation for “identical content” among information generated by all users would ensure a fair balance between the fundamental rights involved. His argument is found at points 61 and 63, where he speculates that seeking and identifying identical content can be done with passive “software tools” (i.e. upload filters), which do not represent “an extraordinary burden” for the intermediary.

This is where the distinction with “equivalent” content is drawn: equivalent content would deserve more “active non-automatic filtering” by the intermediary of all the information disseminated via its platform. What is meant by non-automatic filtering is not entirely clear. The distinction in the mind of the AG could be between automatic filtering that never requires manual intervention to ensure a fair balance with other fundamental rights (freedom of expression and the right to information, in particular) and non-automatic filtering that does require such intervention, in order to avoid situations similar to the Netlog case (C-360/10), where the CJEU ruled that a preventive filtering system applying indiscriminately to all users was incompatible with the Charter of Fundamental Rights of the European Union.

Unfortunately, a distinction along these lines seems ill-suited for the case at hand, which is about defamation. Specific words that are defamatory in the present case could be used in other contexts without being defamatory. Obvious examples would be counterspeech, irony among friends, or news reporting. The situation is the same for content defined as identical or equivalent: context matters, and automated algorithms will not be able to make the fine-grained decisions about when the use of certain words (whether copied verbatim, that is identical content, or with changes, meaning equivalent content) is legal or illegal. A filtering obligation for identical content will have the same negative effect on freedom of expression and the right to information as a filtering obligation for equivalent content.
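To see why context-blindness is inherent in this kind of automation, consider a minimal, purely illustrative Python sketch of an “identical content” matcher (the banned phrase, the example posts and the matching logic are all hypothetical, not taken from the case or from any real filter). Matching identical wording is computationally trivial, which supports the AG’s “no extraordinary burden” point – but that same triviality means the filter flags every context in which the words appear:

# Hypothetical "identical content" filter -- illustrative only.
BANNED_PHRASES = ["politician x is corrupt"]  # placeholder for court-identified text

def normalise(text: str) -> str:
    # Lower-case and collapse whitespace so trivial variations still match.
    return " ".join(text.lower().split())

def flags_identical(post: str) -> bool:
    """True if the post contains a banned phrase verbatim."""
    return any(phrase in normalise(post) for phrase in BANNED_PHRASES)

posts = [
    "Politician X is corrupt!",                                     # repeated defamation
    'Claiming "politician X is corrupt" is a baseless smear',       # counterspeech
    "A court found the post 'Politician X is corrupt' defamatory",  # news reporting
]

for post in posts:
    print(flags_identical(post), "-", post)

# All three posts print True: the matcher cannot tell the defamatory use of
# the words from counterspeech or reporting, which is exactly the problem
# described above.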

The present case will be particularly important for defining the distinction between specific monitoring and general monitoring, where there is presently very little case law. Since Article 15(1) of the E-Commerce Directive prohibits general monitoring, specific monitoring is, by implication, any monitoring that is compatible with the E-Commerce Directive, interpreted in the light of the Charter of Fundamental Rights. Only the L’Oréal v eBay case (C-324/09) has dealt with this issue. Compared to that earlier case, the AG proposes an expanded definition of specific monitoring which has the notable disadvantage of being rather unworkable, since it relies on a flawed dichotomy between identical and equivalent content. This dichotomy is disconnected from the legal reality that specific monitoring must comply with the Charter of Fundamental Rights and prevent the risk of censorship resulting from a filtering obligation. Hopefully, the judgment in the case can present a more workable definition of specific monitoring that is reconcilable with both Articles 14 and 15 of the E-Commerce Directive.

Case C-18/18: Eva Glawischnig-Piesczek v Facebook Ireland Limited
https://curia.europa.eu/jcms/upload/docs/application/pdf/2019-06/cp190069en.pdf

Legal victory for trademark litigants over intermediary liability (13.07.2011)
https://edri.org/edrigramnumber9-14ebay-loreal-case-ecj/

SABAM vs Netlog – another important ruling for fundamental rights (16.02.2012)
https://edri.org/sabam_netlog_win/

(Contribution by Chloé Berthélémy, EDRi, and Jesper Lund, EDRi member IT-Pol, Denmark)

19 Jun 2019

EU rushes into e-evidence negotiations without common position

By Chloé Berthélémy

On 6 June 2019, the Justice and Home Affairs Council (JHA) – which gathers the Justice Ministers of all EU Member States – asked the European Commission to start international negotiations on cross-border access to electronic evidence in criminal matters (so-called “e-evidence”) in the upcoming months. The Commission should enter into bilateral negotiations with the United States (US), and at the same time join the ongoing discussions at the Council of Europe about the adoption of a Second Additional Protocol to the Budapest Convention on Cybercrime – which also deals with access to e-evidence.

Both negotiation mandates were issued while the Commission’s own proposal for a European e-evidence regulation is highly contested and still being debated in the European Parliament. According to this proposal, law enforcement authorities in any EU Member State would be allowed to force providers like Facebook or Google to hand over users’ personal data, even if the provider is located in a different country. The authorities of the provider’s country would have almost no say in it, and in most cases would not even know that their citizens’ data had been accessed by foreign authorities.

Many critics, including EDRi, lawyers, academics, the European Data Protection Board (EDPB), and other civil society organisations oppose the very idea behind the e-evidence proposal as it heavily infringes on our fundamental rights without due safeguards.

Even within the EU, some activities are considered criminal in one country, and legal in another. Negotiating similar data access rules with countries like the US or even Russia and Azerbaijan (as part of the Council of Europe) that often have very different concepts of the rule of law, puts people in Europe at risk. This is especially dangerous for political dissidents and activists who have come to the EU as a “safe haven”.

In fact, the previous European Parliament Committee responsible (Committee on Civil Liberties, Justice and Home Affairs, LIBE) expressed serious criticism in a series of Working Documents. Still, the Commission intends to negotiate on its own terms and those of the Council of the European Union, which are very similar.

It is unacceptable that the Commission does not wait for, and is unlikely to take into account, the position of the co-legislator. In line with the European democratic legislative process, negotiations with third parties should not start as long as no official position of the EU as a whole has been reached. Worse yet, the Commission will likely be obliged to amend its own negotiating position in order to follow the outcomes of internal discussions between the EU Council and the Parliament. This will greatly undermine the legitimacy and credibility of the EU as a negotiating partner.

No transparency

As usual when the Commission is representing the EU in negotiations of international agreements or treaties, few transparency mechanisms are put in place to inform citizens about what is being discussed, what compromises are being struck, and what concessions are being given from which side of the table. Often such information is kept secret while the issues at stake have considerable impact on people and our democracies. The Commission announced that it will regularly inform the Member States about the progress made, but no such reports seem to be foreseen to the European Parliament. Yet, it is the Parliament that represents European citizens and that is key for democratic scrutiny and transparency. It goes without saying that scrutiny by and participation of civil society will be even more challenging.

The European Data Protection Supervisor (EDPS) recently published a recommendation demanding to include additional data protection principles and fundamental rights protections into the negotiation mandates given to the Commission. It is unclear how this recommendation and the strong criticism from experts across the board will be taken into consideration.

Recent CJEU ruling puts the Commission’s proposal in jeopardy

In addition, the Court of Justice of the European Union (CJEU) recently delivered a ruling on the issuance of European Arrest Warrants by public prosecutors. It decided that, for the purpose of cross-border judicial cooperation, certain public prosecutors cannot qualify as a competent “issuing judicial authority” under the European treaties. According to the CJEU, public prosecutors’ offices in countries such as Germany cannot be considered independent, as they are likely exposed to direct or indirect instructions from the Minister for Justice and thus to political decisions. Issuing authorities should be capable of exercising their functions objectively, taking into account all incriminatory and exculpatory evidence, and without external directions or instructions. This ruling is of importance in the context of the e-evidence proposal, which foresees that judicial authorities, including prosecutors, can issue European production and preservation orders to obtain data in cross-border cases. In line with this CJEU ruling, public prosecutors would not be allowed to issue these orders for the purpose of judicial cooperation as set out in Article 82(1) of the Treaty on the Functioning of the European Union (TFEU). Thus, the current proposal stands on weak legal ground and will need great improvements to ensure compliance with CJEU case law.

Cross-border access to data for law enforcement: Document pool
https://edri.org/cross-border-access-to-data-for-law-enforcement-document-pool/

CCBE press release: CJEU ruling casts doubts on the legality of the proposed e-evidence regulation (29.05.2019)
https://www.ccbe.eu/news/news-details/article/cjeu-ruling-casts-doubts-on-the-legality-of-the-proposed-e-evidence-regulation/

EDPS Opinion on the negotiating mandate of an EU-US agreement on cross-border access to electronic evidence (02.04.2019)
https://edps.europa.eu/sites/edp/files/publication/19-04-02_edps_opinion_on_eu_us_agreement_on_e-evidence_en.pdf

EDPS Opinion regarding the participation in the negotiations in view of a Second Additional Protocol to the Budapest Cybercrime Convention (02.04.2019)
https://edps.europa.eu/data-protection/our-work/publications/opinions/budapest-cybercrime-convention_en

(Contribution by Chloé Berthélémy, EDRi)

19 Jun 2019

Greece: Complaint filed against breach of EU data protection law

By Homo Digitalis

On 30 May 2019, EDRi observer Homo Digitalis filed a complaint with the European Commission against a breach of EU data protection law by Greece. The European Commission registered the complaint under the reference number CHAP(2019)01564 on 6 June 2019, and its services will assess the complaint and provide a reply within 12 months.

Homo Digitalis claims that Greece has breached Article 63, paragraph 1 of Directive 2016/680, also known as the Law Enforcement Directive (LED). According to this Article, Member States were required to adopt and publish, by 6 May 2018, the laws, regulations and administrative provisions necessary to comply with the LED. However, the Greek State has not published any national law in this regard, and more than one year after the above-mentioned deadline, it has not applied any related provisions.

The provisions of the LED are intended to cover all personal data processing undertaken for law enforcement (police and criminal justice) purposes, regardless of whether the processing takes place within or across national borders. In this way, the Framework Decision 2008/977/JHA’s most basic restriction is finally lifted and law enforcement authorities within the EU have to implement the LED’s provisions into their everyday personal data processing activities. Therefore, a Greek national law implementing the provisions of the LED is crucial for ensuring a consistent and high level of protection of people’s data when those are processed for the prevention, investigation, detection, and prosecution of criminal offenses.

Since Greece has not respected the deadline set by the EU legislator, it fails to meet EU requirements related to the strengthening of the rights of data subjects and of the obligations of those who process personal data. It also fails to provide equivalent powers for monitoring and ensuring compliance with the data protection rules in Greece. Before submitting the complaint, Homo Digitalis had undertaken a number of actions at the national level.

In the complaint, Homo Digitalis also underlines shortcomings related to the enforcement of the General Data Protection Regulation (GDPR). Despite the fact that the provisions of the GDPR have been binding in their entirety and directly applicable in all Member States since 25 May 2018, the Greek State has to date not published a national law giving effect to the GDPR’s provisions where national measures are required. This is very troubling, especially considering that the EU legislators have left many important measures to the discretion of the Member States, such as rules regulating the processing of genetic data, biometric data or data concerning health (Article 9), or the protection of employees’ personal data in the context of employment (Article 88).

Homo Digitalis
https://www.homodigitalis.gr/

Homo Digitalis’ complaint (30.05.2019)
https://www.homodigitalis.gr/source_content/uploads/2019/05/Homo-Digitalis_Complaint_Breach-of-EU-law_30.05.2019.pdf

European Commission’s official receipt (06.06.2019)
https://www.homodigitalis.gr/source_content/uploads/2019/06/CHAP201901564-C-EN-AR-1.pdf

Homo Digitalis files a complaint to the European Commission against a breach of EU data protection law by Greece (only in Greek, 30.05.2019)
https://www.homodigitalis.gr/posts/3858

(Contribution by Eleftherios Chelioudakis, EDRi observer Homo Digitalis, Greece)

19 Jun 2019

German Big Brother Awards – one “winner” reacts and appears

By Digitalcourage

The German Big Brother Awards (BBA) gala was held on 8 June 2019 in Bielefeld, Germany. Organised annually since 2000 by EDRi member Digitalcourage, this year’s gala was the third to be streamed live in English in addition to the original German. For the second time, the venue was Bielefeld theatre, where the stage set for an operetta that had premiered the previous day was adapted for the presentation.

The award in the “Authorities and Administration” category was given to the Interior Minister of the Federal State of Hesse, Peter Beuth. After the tightening of the Hessian police laws had earned Hesse’s conservative–green coalition an award in 2018, this marked the first time that two successive awards had gone to the same governing coalition of the same Federal State. The 2019 award was given for the acquisition of software from Palantir, a controversial US company with close links to the Central Intelligence Agency (CIA), to analyse and interrelate data from various sources ranging from police databases to social media. The use of this software for “preventive” police work, which was given its own legal basis in a late addition to the police law of 2018, was criticised as having disastrous effects on human rights and on the rule of law. By commissioning Palantir to supply, adapt and operate the software, Hesse gave the US company access to sensitive police data on non-US citizens, which it has to share with US intelligence services if warranted, pursuant to the Foreign Intelligence Surveillance Act (FISA). On top of that, the software’s algorithms are a trade secret, so any “police findings” it may come up with are beyond effective scrutiny. In his speech, laudator Rolf Gössner also raised questions about the guarded manner in which Palantir’s services were commissioned.

No 2019 award was given in the traditional “Workplace” category. Jury member Peter Wedde was interviewed on stage and explained that there are still many award-worthy issues in the workplace, but that reporting is a problem: either the media consider an individual complaint too small, or alarm systems inside a business prevent “big” issues from being reported externally. Other problems are that workers’ representation is often not established in companies, and that violations often go unpenalised. He described details from a number of individual cases and called for improved whistleblower protections.

An award in the “Biotechnology” category went to the largest provider of consumer DNA testing worldwide, ancestry.com. Laudator Thilo Weichert challenged claims in the company’s terms that consumers who supply DNA samples and accompanying information on themselves and their families retain ownership of their data. The data protection statement grants Ancestry the right to share data with a broad range of partners, while sample suppliers are denied information on the research these partners conduct with their data. Conversely, consumers are barred by Ancestry from sharing the results of “their” analyses with others. Among the companies buying data that consumers supply to Ancestry are large pharmaceutical companies such as GlaxoSmithKline. Ancestry was criticised for not informing its customers about other risks, such as becoming the focus of police searches for suspects using DNA data, or possible family disruptions and psychological consequences, and for ignoring German legal requirements on warning about such consequences, on information disclosure, and on data protection.

The winner in the “Communication” category was Precire Technologies (formerly called Psyware) of Aachen, Germany. This company offers artificial intelligence (AI) based analysis of speech, including recorded phone conversations, from which it claims to be able to derive psychological profiles. One service offered by Precire is the pre-selection of job applicants, by encouraging them to take part in computer-led, seemingly innocuous phone conversations. In her award speech, laudator Rena Tangens expressed doubt about the validity of such judgements, pointing to contradictions in the company’s arguments (for example, that speech patterns are likened to immutable fingerprints, whereas Precire also offers a speech training app), and criticised Precire for refusing to publish its own studies on its technologies, while publicly available studies are only based on data provided by the company itself. People who are not looking for jobs can still be exposed to the company’s technology when their phone calls are handled by call centres, where the software advises agents on how to treat a case based on an analysis of the caller’s emotions.

The “Technology” award went to the “Technical Committee CYBER” of the European Telecommunications Standards Institute (ETSI) for its efforts to establish an alternative to the newest version of the internet encryption standard “Transport Layer Security” (TLS 1.3) under the name “Enterprise Transport Security” (ETS, formerly eTLS). Laudator Frank Rosengart described this standard as clearly being designed in the interests of government and secret service surveillance, as it includes key escrow, a process where “backdoor” keys are retained and possibly handed over to investigators. Anyone obtaining such keys would be able to decrypt all future communications with an online service. Users, on the other hand, will have little opportunity to detect that this weaker encryption is being used or to prevent it. The standard was created despite warnings by the Internet Engineering Task Force (IETF) and other experts.
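The escrow mechanism is easiest to see at the level of the underlying key exchange. The following Python sketch, using the pyca/cryptography library, is a conceptual illustration only – a real TLS 1.3 or ETS handshake involves much more – but it captures the difference between an ephemeral server key (forward secrecy) and a static, escrow-able one:

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def handshake(server_key):
    """One session: the client sends a fresh public key, and both sides
    derive the same shared secret from it and the server's key."""
    client_key = X25519PrivateKey.generate()
    shared_secret = server_key.exchange(client_key.public_key())
    return client_key.public_key(), shared_secret  # public key crosses the wire

# TLS 1.3 style: a fresh server key per session. Once discarded, a recorded
# session can never be decrypted again (forward secrecy).
ephemeral_key = X25519PrivateKey.generate()
_, session_secret = handshake(ephemeral_key)
del ephemeral_key  # the session secret is now unrecoverable from captures

# ETS style: one long-lived server key that can be retained in escrow.
escrowed_key = X25519PrivateKey.generate()
captured_client_pub, session_secret = handshake(escrowed_key)

# Whoever holds the escrowed key plus a traffic capture can recompute the
# secret of every past and future session made with that key.
recovered_secret = escrowed_key.exchange(captured_client_pub)
assert recovered_secret == session_secret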

A much-debated award in the “Consumer Protection” category was given to a leading German news website, Zeit Online, for the use of tracking on its website, for the previous use of Google online services to store personal data – including details on political opinions – of users of an award-winning project called “Germany Talks”, and for accepting Google sponsorship for the international successor project, “My Country Talks”. Laudator padeluun explained that he has been friends with the paper’s online editor-in-chief, Jochen Wegner, for many years. He praised Zeit’s journalism as well as the project “Germany Talks”, which brought people of conflicting political opinions together for personal conversations. Despite the good intentions, the speech explained, the project had in its implementation phase succumbed to the temptation of using Google’s Cloud Office to handle registrations, and Zeit had also used these services for collaborative work on other journalistic investigations. The award speech called on Zeit Online to realise the consequences of the mass surveillance it had extensively reported on after the Snowden revelations, to consequently expand its own IT base, and to look for alternatives to online tracking as a source of revenue for its online services.

Four days before the gala, Jochen Wegner had made the award public in a response on Zeit’s editorial blog “Glashaus”. Digitalcourage made clear that this course of action did not constitute a breach of its journalistic embargo – Zeit had been notified as a BBA awardee, and like all such “winners” it was free to react. Zeit’s response acknowledged some of the critique in the award speech while refuting other aspects. Jochen Wegner also visited the gala and accepted the award in person, using the customary opportunity to voice his opinion in an on-stage interview. His appearance was acknowledged with long and respectful applause. The Big Brother Awards organisers and Zeit Online are looking to continue the conversation and hopefully reach tangible conclusions.

Full English information on the 2019 awards can be found at https://bigbrotherawards.de/en/2019 – a recording of the English live stream is due to be added soon.

Digitalcourage
https://digitalcourage.de/

BBA Germany 2018: Spying on employees, refugees, citizens… (16.05.2018)
https://edri.org/bba-germany-2018-spying-on-employees-refugees-citizens/

BBA Germany 2017: Espionage, threats, tracking, provoking cyber wars (17.05.2017)
https://edri.org/bba-germany-2017-espionage-threats-tracking-provoking-cyber-wars/

Big Brother Awards Germany 2016 (18.05.2016)
https://edri.org/bba-germany-2016/

Big Brother Awards Germany 2015 (22.04.2015)
https://edri.org/big-brother-awards-germany-2015/

Big Brother Awards Germany 2014 (24.04.2014)
https://edri.org/big-brother-awards-germany-2014/

Big Brother Awards Germany 2013 (24.04.2013)
https://edri.org/edrigramnumber11-8bba-germany-2013/

Big Brother Awards Germany 2012 (25.04.2012)
https://edri.org/edrigramnumber10-8bba-germany-2012/

Big Brother Awards Germany 2008 (05.11.2008)
https://edri.org/edrigramnumber6-21bba-germany-2008/

(Contribution by Sebastian Lisken, EDRi member Digitalcourage)

19 Jun 2019

Danish DPA approves Automated Facial Recognition

By IT-Pol

On 13 June 2019, the Danish football club Brøndby IF announced that starting in July 2019, automated facial recognition (AFR) technology will be deployed at Brøndby Stadium. It will be used to identify persons that have been banned from attending Brøndby IF football matches for violations of the club’s own rules of conduct. The AFR system will use cameras that scan the public area in front of the stadium entrances, so that persons on the ban list can be “picked out” from the crowd before reaching the entrance.

The use of AFR technology at Brøndby Stadium comes with prior approval from the Danish Data Protection Authority (DPA) which is a requirement in the Data Protection Act, as explained below. Brøndby IF is the first company to secure an approval for using AFR in Denmark.

Under the EU General Data Protection Regulation (GDPR), biometric data for the purpose of uniquely identifying a person constitutes sensitive personal data (special categories of personal data in Article 9). This covers AFR. Article 9(1) of the GDPR prohibits the processing of sensitive personal data unless one of the conditions in Article 9(2) applies. The explicit consent of the data subject [Article 9(2)(a)] is one of these conditions, and generally speaking the most relevant one for private controllers. Consent cannot be the legal basis for using AFR at a football stadium though, since consent must be voluntary.

GDPR Article 9(2)(g) allows processing of sensitive personal data if the processing is necessary for reasons of substantial public interest, on the basis of EU or Member State law, which must be proportionate to the aim pursued. The law must provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject.

Based on Article 9(2)(g), the Danish GDPR supplementary provisions (the “Data Protection Act”) contain a general carve-out from the prohibition on processing sensitive personal data. Section 7(4) of the Data Protection Act provides that “the processing of data covered by Article 9(1) of the GDPR may take place if the processing is necessary for reasons of substantial public interest”. Prior authorisation from the DPA is required for controllers that are not public authorities, and this authorisation may lay down more detailed terms for the processing.

Denmark has no specific national law providing a legal basis, along with suitable safeguards for data subjects, for the use of AFR by controllers. However, Section 7(4) can be used to allow almost any processing of sensitive personal data, including AFR, provided that the threshold of substantial public interest is met. The explanatory remarks on Section 7(4) state that the provision must be interpreted narrowly, but the actual scope of the open-ended derogation is left to administrative practice by public controllers and to the DPA’s authorisation decisions for processing by private controllers.

With the authorisation to Brøndby IF, the Danish DPA has decided that the processing with AFR to enforce a private ban list is necessary for reasons of substantial public interest, and that the processing is proportionate to the aim pursued. The logic of that decision is rather difficult to understand in the present case. AFR is one of the most invasive surveillance technologies since a large number of persons in a crowd can be identified from their biometrics (facial images) and automatically catalogued based on matches with pre-defined watch lists. At the same time, AFR is a very unreliable and inaccurate technology with known systematic biases in the form of higher error rates for certain ethnic minorities.

At Brøndby Stadium, AFR will be used to process sensitive personal data of, on average, 14,000 persons per football match. The ban list currently contains only 50 persons, and there is no information available about how many of these 50 persons actually try to circumvent the ban and get access to Brøndby Stadium. There is also no pressing public security need for using this very invasive surveillance technology. The number of arrests by the Danish police in connection with football matches is at a record low, and, rather ironically, the Brøndby IF press release even highlights that there has been a positive development regarding security at Danish football matches over the last ten years. This evidence must, at the very least, call into question the proportionality of using AFR, even before considering whether there are really reasons of substantial public interest involved.

The Danish DPA commented to the newspaper Berlingske that there is no rigid definition of “substantial public interest”. In the application from Brøndby IF, the DPA considered the issue of security at certain sports events with large audiences. The DPA further told Berlingske that AFR would allow for more effective enforcement of the ban list compared to manual checks, and that this could reduce the queues at the stadium entrances, lowering the risk of public unrest from impatient football fans standing in queues.

The claims for the effectiveness of AFR are contradicted by the findings of independent evaluations of the technology. A report by the UK civil liberties organisation Big Brother Watch analyses the use of AFR by the Metropolitan Police and the South Wales Police at festivals and sports events, deployments comparable to the plans of Brøndby IF. Evidence obtained from the UK police through freedom of information (FOI) requests documents that 95% of the AFR matches are false-positive identifications: persons are “identified” by the AFR technology without being on a watch list. The obvious conclusion is that AFR is simply not a reliable and accurate technology for identifying persons in a large crowd. The unreliability of AFR could also affect the legality of using the technology, since one of the GDPR principles, in Article 5(1)(d), is that personal data must be accurate. AFR matches are personal data, but very far from accurate.
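The Brøndby numbers make it easy to see why figures like this arise. A back-of-the-envelope calculation (attendance and ban-list size from the article; the accuracy rates are assumptions for illustration, not the specifications of Brøndby’s actual system) shows that false alarms dominate whenever the watch list is tiny compared to the crowd:

crowd = 14_000   # average attendance per match (from the article)
banned = 50      # persons on the ban list (from the article)
tpr = 0.90       # assumed true-positive rate: banned person correctly flagged
fpr = 0.01       # assumed false-positive rate: 1% of innocent faces flagged

# Worst case for this argument: every banned person actually shows up.
true_alarms = banned * tpr               # 45 correct matches
false_alarms = (crowd - banned) * fpr    # ~140 innocent people flagged

share_false = false_alarms / (true_alarms + false_alarms)
print(f"{share_false:.0%} of all alarms point at innocent people")  # ~76%

# Even with these optimistic assumptions, roughly three out of four alarms
# are wrong; if few banned persons actually attend, the share of false
# alarms climbs towards 100%, consistent with the UK findings.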

It is unclear whether the reliability of AFR, or rather the lack thereof, played any role in the DPA’s decision to grant authorisation for using AFR at Brøndby Stadium. Brøndby IF seems to assume that AFR is an almost perfect technology. The press release claims that the AFR system will not be able to identify or register persons who are not on the ban list, implicitly ruling out any false-positive identification. Needless to say, this claim is demonstrably wrong. The authorisation from the DPA does not mention the accuracy of AFR, and there are no specific requirements for the controller to take measures to limit false-positive identifications, or even to keep track of the magnitude of the problem. The “more detailed terms” set by the DPA in the authorisation to Brøndby IF add little to the ordinary GDPR obligations for controllers.

Danish EDRi member IT-Pol has publicly criticised the plans for deployment of AFR technology at Brøndby Stadium. The threshold set by the Danish DPA in terms of requirements for a substantial public interest and proportionality seems very low, and this could lead to a large number of applications for using AFR by other private controllers in Denmark. Indeed, within just two days of the Brøndby IF press release, another Danish football club (AGF) expressed an interest in using AFR at its stadium and in exchanging biometric information about persons on ban lists with Brøndby IF. Incidentally, AGF has recently installed a new video surveillance system which is capable of AFR, although this functionality is currently deactivated. Since AFR is largely a matter of software analysis of captured video images, there are probably many modern video surveillance systems in Denmark where AFR functionality could potentially be activated, perhaps through a software upgrade.

IT-pol
https://itpol.dk/

English translation of the Danish Data Protection Act (GDPR supplementary provisions)
https://www.datatilsynet.dk/media/6894/danish-data-protection-act.pdf

Face Off: The lawless growth of facial recognition in UK policing, Big Brother Watch (May 2018)
https://bigbrotherwatch.org.uk/wp-content/uploads/2018/05/Face-Off-final-digital-1.pdf

Association warns against new technology: fans should complain, DR Nyheder (only in Danish, 13.06.2019)
https://www.dr.dk/nyheder/indland/forening-advarer-mod-ny-teknologi-paa-stadion-fans-boer-klage

(Contribution by Jesper Lund, EDRi member IT-pol, Denmark)

19 Jun 2019

Poland: Banks obliged to explain their credit decisions

By Panoptykon Foundation

Owing to the initiative of Polish EDRi member Panoptykon, bank clients in Poland will have the right to receive an explanation of the assessment of their creditworthiness. Panoptykon proposed and fought for amendments to the Polish banking law, which resulted in an even higher standard than the one envisioned in the General Data Protection Regulation (GDPR).

There is naturally a strong asymmetry of power between banks and clients. So far, this has manifested itself, for example, in the fact that banks were able to demand that clients present any information connected with their life situation and the purpose of the loan, as well as to obtain information from other sources. Apart from the generally binding principles of personal data protection, there were no other restrictions in that scope. In effect, a client who was denied a loan could only guess what the problem was – income, the form of employment, or perhaps liabilities not paid on time. That will change: clients of Polish banks will be able to check which factors were decisive in the assessment of their creditworthiness.

More than the GDPR

A consumer will have the right to obtain “information on the factors, including personal data, which affected the evaluation of their creditworthiness”. That right applies irrespective of whether or not a credit decision was automated and regardless of its content.

The GDPR guarantees transparency limited to automated decisions. However, in reality, the line between the assessment made by the algorithm and the final credit decision made by an analyst may be blurred. Moreover, irrespective of the degree of human involvement, a credit decision is based on an advanced analysis of personal data and on the profiling of clients. From that perspective, extending the right to explanation to all decisions based on profiling and using big data is an excellent solution.

What should the bank tell the client?

The right to explanation encompasses the factors – including personal data – which affected the creditworthiness assessment. The bank does not need to provide a full list of factors taken into account in the process, but it has to disclose all those which had an impact on the final decision. It will not be enough to specify that the basis for the negative assessment was, for instance, the income: the bank will be obliged to disclose what exact amount of income it took into consideration. This creates room for dialogue and a chance to correct mistakes (such as a missing zero in the amount of income, or an outdated report from a credit information bureau). In the long term, it also serves as valuable guidance for clients who wish to increase their credibility towards banks. The information received may become an impulse to repay liabilities on time or to seek another form of employment.
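As a purely illustrative sketch of the kind of disclosure this enables, consider a toy linear scorecard in Python (the factors, weights and approval threshold are invented for illustration; no real bank’s model is implied). With a linear score, each factor’s impact is simply its weight times its value, so the bank can list exactly which factors affected the decision and by how much:

WEIGHTS = {                      # hypothetical scorecard weights
    "monthly_income_eur": 0.004,
    "months_employed": 0.05,
    "late_payments": -1.5,
}
APPROVAL_THRESHOLD = 10.0        # hypothetical cut-off

def explain_decision(applicant: dict) -> None:
    # Per-factor impact = weight * value, so the disclosure required by the
    # amended law can be read off directly.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    print("approved" if score >= APPROVAL_THRESHOLD else "denied",
          f"(score {score:.1f}, threshold {APPROVAL_THRESHOLD})")
    for factor, impact in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {factor} = {applicant[factor]} contributed {impact:+.1f}")

# A recording error such as a missing zero in the income (2_500 instead of
# 25_000) becomes immediately visible to the client -- the "room for
# dialogue" described above.
explain_decision({"monthly_income_eur": 2_500, "months_employed": 36, "late_payments": 2})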

Translating law into banking practice

The new regulations will undoubtedly strengthen the client’s position towards the bank. In relation to each automated credit decision, the client will have the GDPR rights to request rectifications, to question the decision, and to obtain human intervention. In relation to each decision issued with the participation of a bank employee, the client will also be able to use the new right and ask for specific personal data which affected the final decision. These are two independent procedures, safeguarding a high standard of transparency and data protection.

With this achievement, Panoptykon has significantly reduced the power imbalance between banks and their clients. It could serve as a precedent for human rights and consumer groups elsewhere. As this case shows, organised action all across the EU is needed to make the goals of the GDPR work in practice.

Panoptykon Foundation
https://en.panoptykon.org/

The right to explanation of creditworthiness assessment – first such law in Europe (12.06.2019)
https://en.panoptykon.org/right-to-explanation

The right to explanation FAQ (only in Polish, 05.04.2019)
https://panoptykon.org/prawo-do-wyjasnienia

Infographic: When can I use the right to explanation? (only in Polish, 12.04.2019)
https://panoptykon.org/biblio/infografiki/kiedy-przysluguje-mi-prawo-do-wyjasnienia

Infographic: Mortgage: how the right to explanation works? (only in Polish, 05.04.2019)
https://panoptykon.org/biblio/infografiki/kredyt-hipoteczny-jak-dziala-prawo-do-wyjasnienia

Infographic: Installment purchase: how the right to explanation works? (only in Polish, 05.04.2019)
https://panoptykon.org/biblio/infografiki/zakupy-na-raty-jak-dziala-prawo-do-wyjasnienia

(Contribution by EDRi member Panoptykon Foundation, Poland)

07 Jun 2019

Data Retention: EU Commission inconclusive about potential new legislation

By Diego Naranjo

On 6 June 2019, representatives from eight civil society organisations (including EDRi members) met with officials from the European Commission (EC) Directorate-General for Migration and Home Affairs (DG HOME) to discuss data retention. According to the EC officials, this meeting was just one in a series of meetings that DG HOME is holding with different stakeholders to discuss potential data retention initiatives that could be put forward (or not) by the next Commission. The meeting is not connected to the Council conclusions on data retention, also published on 6 June, which coincidentally task the Commission with doing a study “on possible solutions for retaining data, including the consideration of a future legislative initiative”.

Ahead of the meeting, civil society was sent a set of questions about the impact of existing and potentially new data retention legislation on individuals, how a “legal” targeted data retention regime could be designed, and what specific issues (data retention periods, geographical restrictions, and so on) could be covered in case new data retention legislation were to be proposed.

According to the Commission, there are no clear “next steps” in the process, apart from the aforementioned study to be prepared following the Council conclusions of 6 June. In addition to this study, the Commission will continue dialogues with civil society, data protection authorities, the EU Fundamental Rights Agency and Member States, which will inform potential future action (or inaction) by the EC on data retention.

Four years ago, EDRi met with DG HOME and presented them with a study of a set of data retention laws which were likely to be considered illegal in light of the Digital Rights Ireland case. The EC then replied to our meeting and study saying that it would “monitor” existing data retention laws and their compliance with EU law. Four years later, no infringement proceedings have been launched against any Member State over their (quite probably) illegal data retention laws.

Read more:

EU Member States willing to retain illegal data retention (16.09.2019)
https://edri.org/eu-member-states-willing-to-retain-illegal-data-retention/

Data retention – Conclusions on retention of data for the purpose of fighting crime (27.05.2019)
http://data.consilium.europa.eu/doc/document/ST-9663-2019-INIT/en/pdf

EU Member States plan to ignore EU Court data retention rulings (29.11.2017)
https://edri.org/eu-member-states-plan-to-ignore-eu-court-data-retention-rulings/

(Contribution by Diego Naranjo, EDRi)

05 Jun 2019

Our dependency on Facebook – life-threatening?

By Bits of Freedom

What is your priority when a terrorist attack or a natural disaster takes place close to where your parents live or where your friend is on holiday? Obviously, you would immediately like to know how your loved ones are doing. You will call and text them until you get in touch.

Or, imagine that you happen to be close to an attack yourself. You have little or no information, and you see a person with weapons running down the road. You would urgently call the police, right? You try to call, but it isn’t possible to connect to the mobile network. Your apps are not working either. You can’t inform your loved ones, you can’t find information about what’s going on, and you can’t call the police. Right at the moment when communication and knowledge are vital, you can’t actually do anything. Afterwards, it turns out that the telecom providers switched off their mobile networks directly after the attack, obeying police orders. The measure was deemed necessary for safety, because it was suspected that the perpetrators were using the mobile network.

This scenario isn’t that far-fetched. A few years ago, the telephone network in the San Francisco underground was partially disconnected. The operator of the metro network wanted to disrupt a demonstration against police violence, after a previous protest had disturbed the timetable. The intervention was justified on the grounds of passenger safety: as a consequence of the previous demonstrations, the platforms had become overcrowded with passengers who couldn’t continue their journeys. However, the intervention was harshly criticised, as the deactivation of the phone network itself endangered the passengers – because how do you, for example, alert the emergency services in an emergency when nobody’s phone is working?

Immediately after the terrorist attacks in Sri Lanka in April 2019, the government did something similar: it made services like Facebook unavailable, to prevent the flow of speculation spreading through such platforms from worsening the chaos.

In Sri Lanka, Facebook is practically a synonym for “the internet” – it is the main communication platform in a country where the practice of zero-rating flourishes. As a result of Facebook’s dominance, content published on the platform can very quickly reach an enormous audience. And it is exactly the posts that capitalise on fear, discontent, and anger that have a huge potential to go viral, whether they are true or not. Facebook in itself has no incentive to limit the impact of these posts. On the contrary: the most extreme messages contribute to the addictive nature of the social network. The posts themselves aren’t a threat to people’s physical safety, but in the context of terrorist attacks, they can be lethal.

The distribution of false information is apparently such a huge problem that the Sri Lankan government saw no other option than to disconnect the main communication platform in the country. It is a decision with far-reaching consequences: people are isolated from their main source of information and from the only communication tool they have to reach family and friends. We find ourselves in a situation in which the harmful side-effects of such a platform are perceived to outweigh the enormous importance of open communication channels and the provision of information – rather no communication than Facebook-communication.

This shows how dangerous it is when a society is so dependent on one online platform. This dependency also makes it easier for a government to gain control by denying access to that platform. The real challenge is to ensure a large diversity of news sources and means of communication. In the era of information, dependency on one dominant source of information can be life-threatening.

This article was first published at https://www.bitsoffreedom.nl/2019/05/29/life-threatening-our-dependency-on-facebook/

Life-threatening: Our dependency on Facebook (only in Dutch, 06.05.2019)
https://www.bitsoffreedom.nl/2019/05/06/levensgevaarlijk-onze-afhankelijkheid-van-facebook/

BART Pulls a Mubarak in San Francisco (12.08.2011)
https://www.eff.org/deeplinks/2011/08/bart-pulls-mubarak-san-francisco

Social media temporarily blocked (21.04.2019)
https://news.lk/news/sri-lanka/item/25077-social-media-temporarily-blocked

Sri Lanka blocks social media, fearing more violence (21.04.2019)
https://www.nytimes.com/2019/04/21/world/asia/sri-lanka-social-media.html

(Contribution by Rejo Zenger, EDRi member Bits of Freedom, the Netherlands; translation from Dutch to English by Bits of Freedom volunteers Winnie van Nunen and Amber Balhuizen)

05 Jun 2019

Czech Constitutional Court rejects complaint on data retention

By Iuridicum Remedium

Czech EDRi member Iuridicum Remedium (IuRe) has fought for 14 years against the Czech implementation of the controversial EU Data Retention Directive, which was declared invalid by the Court of Justice of the European Union (CJEU). After years of campaigning and many hard legislative battles, the fight has finally come to an end: on 22 May 2019, the Czech Constitutional Court rejected IuRe’s proposal to declare the Czech data retention law unconstitutional. The court rejected the claim despite it being supported by 58 deputies of the parliament from across the political spectrum.

In the Czech Republic, data retention legislation was first adopted in 2005. In March 2011, the Constitutional Court upheld IuRe’s first complaint against the original data retention legislation and annulled it. In 2012, however, a new legal framework was adopted, both to implement the EU Data Retention Directive – which the CJEU found to contravene European law in the Digital Rights Ireland case in 2014 – and to comply with the Constitutional Court’s decision. This new legislation still contained the problematic general and indiscriminate data retention, along with a number of further problems. Therefore, particularly in the light of the CJEU’s decisions, IuRe decided to prepare a new constitutional complaint.

IuRe originally submitted the complaint to challenge the very principle of bulk data retention: the massive collection and storage of people’s data without any link to individual suspicion of criminal activity, extraordinary events, or terrorist threats. The CJEU already declared this general and indiscriminate data retention principle inadmissible in two of its decisions (Digital Rights Ireland and Tele2). Although the Czech Constitutional Court refers to both judgments several times, its conclusions – especially when it comes to analysing why data retention is not in line with the Czech Constitution – do not deal with them properly.

The Constitutional Court’s main argument for declaring data retention constitutional is that as communications increasingly occur in the digital domain, so does crime. Even though this could be true, it is regrettable that the Constitutional Court did not develop this reasoning further and explain why it is in itself a basis for bulk data retention. The Court also ignored that greater use of electronic communication implies greater interference with privacy from general data retention.

The Court further argued that personal data, even without an obligation to retain it, is kept in any case for other purposes, such as invoicing for services, responding to claims, and behavioural advertising. In the Court’s opinion, the fact that people give operators their “consent” to process their personal data reinforces the argument that data retention is legal and acceptable. Unfortunately, the Constitutional Court did not take into consideration that the volume, retention period and sensitivity of personal data held by operators for other purposes is quite different from the obligatory data retention prescribed by the Czech data retention law. Furthermore, the fact that operators already need to keep some data (for billing purposes, for example) shows that the police would not be completely left in the dark without a legal obligation to store data.

In addition to the proportionality of data retention, which has not been clarified by the Court, another issue is how “effective” data retention is at reducing crime. Statistics from 2010 to 2014 show that there was no significant increase in crime, nor a reduction in crime detection, in the Czech Republic after the Constitutional Court abolished the obligation to retain data in 2011. Police statistics presented to the Court showed that data retention is not helping to combat crime in general, nor facilitating the investigation of serious crimes (such as murders) or other types of crimes (such as fraud or hacking). In the arguments submitted by police representatives and by the Ministry of the Interior, some examples of individual cases where the stored data helped (or where missing data hampered an investigation) were repeatedly mentioned. However, no evidence shown to the Court proved that general and indiscriminate data retention would improve the ability of the police to investigate crimes.

The Court also did not annul the partially problematic parts of the legislation, such as the data retention period (six months), the volume of data to be retained, or the overly broad range of criminal cases in which data may be requested. Furthermore, the Court has not remedied the provisions of the Police Act that allow data to be requested without court authorisation in searches for wanted or missing persons or in the fight against terrorism.

In its decision, the Constitutional Court acknowledges that the stored data is very sensitive and that in some cases the sensitivity of so-called “metadata” may even be greater than that of the content of the communications. The retention of communications data thus represents a significant threat to individuals’ privacy. Despite all of this, the Court discarded IuRe’s claim to declare the data retention law unconstitutional.

IuRe disagrees with the outcome of this procedure, in which the Court concluded that the existing Czech data retention legislation conforms with the constitution. Considering the wide support for the complaint, IuRe will work on getting at least a part of the existing arrangements changed by legislative amendments. In addition, IuRe will consider the possibility of the EC launching infringement proceedings, or of initiating other judicial cases, since we strongly believe that the existing bulk retention of communications data in Czech law still contravenes the aforementioned CJEU decisions on mass data retention.

Czech constitutional decision (only in Czech)
https://www.usoud.cz/fileadmin/user_upload/Tiskova_mluvci/Publikovane_nalezy/2019/Pl._US_45_17_vcetne_disentu.pdf

Proposal to revoke data retention filed with the Czech Court (10.01.2018)
https://edri.org/proposal-to-revoke-data-retention-filed-with-the-czech-court/

(Contribution by Jan Vobořil, EDRi member Iuridicum Remedium, Czech Republic)
