06 Nov 2019

Why tech is not “just a tool”

By Ella Jakubowska

Throughout October 2019, digital rights-watchers welcomed new reports warning about the human rights crises of Artificial Intelligence (AI) and other digital technologies. From Philip Alston’s caution that the UK risks “stumbling zombie-like into a digital welfare dystopia” to David Kaye’s critique of internet companies’ and States’ failure to respect human rights online, civil society is increasingly demanding greater insight into the impact of technology on society. Individuals who do not work on “digital rights” are also becoming progressively more aware of the exponentially increasing power and control of technology giants such as Facebook and Google.

Whilst every citizen is and will continue to be affected (whether positively or negatively) by the rise of technology for everyday services, the risks are becoming more evident for some of the groups that already suffer systematic discrimination. Take this woman who was automatically barred from entering her gym because the system did not recognise that she could be both a doctor and a woman; or this evidence that people of colour get worse medical treatment when decisions are made by algorithms. Not to mention the environmental and human impact of mining precious metals for smartphones (which disproportionately impacts the global south) and the incredibly high emissions released by training a single algorithm. The list, sadly, goes on and on.

The idea that human beings are biased is hardly a surprise. Most of us make “implicit associations”, unconscious assumptions and stereotypes about the things and the people that we see in the world. According to some scientists, there are evolutionary reasons for this: such snap judgements helped our ancestors to distinguish between friends and foes. These biases, however, become problematic when they lead to unfair or discriminatory treatment – certain groups being surveilled more closely, censored more frequently, or punished more harshly. In the context of human rights in the online environment, this matters because everyone has a right to equal access to privacy, to free speech, and to justice.

States are the actors responsible for respecting and protecting their citizens’ human rights. In the past, and in most cases still today, representatives of a state (such as social workers, judges, police and parole officers) would make the decisions that impact citizens’ rights: working out the amount of benefits a person will receive, deciding on the length of a prison sentence, or predicting the likelihood of re-offending. Increasingly, these decisions are being made by algorithms.

Many well-meaning people have fallen into the trap of thinking that tech, with its structured 1s and 0s, removes humans’ messy bias, and allows us to make better, fairer decisions. Yet technology is made by humans, and we unconsciously build our world views into the technology that we produce. This encodes and amplifies underlying biases, whilst outwardly giving the appearance of being “neutral”. Even the data that is used to train algorithms or to make decisions reflects a particular social history. And if that history is racist, or sexist, or ableist? You guessed it: this past discrimination will continue to impact the decisions that are made today.
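
To make the point concrete, here is a minimal, purely illustrative sketch (the groups, numbers and approval rates below are invented, not taken from any real system): a trivial “model” fitted to historically biased decisions simply reproduces the old approval gap when it scores new cases.

    # Hypothetical historical decisions: group "B" was systematically disadvantaged.
    history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

    def train(records):
        # A naive "model": learn each group's historical approval rate.
        rates = {}
        for group in {g for g, _ in records}:
            outcomes = [y for g, y in records if g == group]
            rates[group] = sum(outcomes) / len(outcomes)
        return rates

    def decide(model, group):
        # New decisions inherit the old bias: 80% approval for A, 30% for B.
        return model[group]

    model = train(history)
    print(decide(model, "A"), decide(model, "B"))  # 0.8 0.3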

The decisions made by social workers, police and judges are, of course, frequently difficult, imperfect, and susceptible to human bias too. But they are made by state representatives with an awareness of the social context of their decision, and crucially, an ability to be challenged by the impacted citizen – and overturned if an appropriate authority feels they have judged incorrectly. Humans also have a nifty way of being able to learn from mistakes so that they do not repeat them in the future. Machines making these decisions do not “learn” in the same way as humans: they “learn” to get more precise with their bias, and they lack the self-awareness to know that it leads to discrimination. To make things worse, many algorithms that are used for public services are currently protected under intellectual property laws. This means that citizens do not have a route to challenge decisions that an algorithm has made about them. Recent cases such as Loomis v. Wisconsin, in which a citizen challenged a prison sentence informed by a risk score from the US COMPAS algorithm, have worryingly ruled in favour of upholding the algorithm’s proprietary protections, refusing to reveal how the sentencing decision was reached.

Technology is not just a tool, but a social product. It is not intrinsically good or bad, but it is embedded with the views and biases of its makers. It uses flawed data to make assumptions about who you are, which can impact the world that you see. Another example of this is the use of highly personalised adverts in the EU, which may breach our fundamental right to privacy. Technology cannot – at least for now – make fair decisions that require judgement or assessment of human qualities. When it comes to granting or denying access to services and rights, this is even more important. Humans can be aware of their bias, work towards mitigating it, and challenge it when they see it in others. For anyone creating, buying or using algorithms, active consideration of how the tech will impact social justice and human rights must be at the heart of design and use.

Hate speech online: Lessons for protecting free expression (29.10.2019)
https://edri.org/hate-speech-online-lessons-for-protecting-free-expression/

Millions of black people affected by racial bias in health-care algorithms (24.10.2019)
https://www.nature.com/articles/d41586-019-03228-6

Anatomy of an AI System
https://anatomyof.ai/

Profiling the unemployed in Poland: Social and political implications of algorithmic decision making
https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf

Project Implicit
https://implicit.harvard.edu/implicit/takeatest.html

Digital dystopia: how algorithms punish the poor (14.10.2019)
https://www.theguardian.com/technology/2019/oct/14/automating-poverty-algorithms-punish-poor

(Contribution by Ella Jakubowska, EDRi intern)

06 Nov 2019

Danish data retention: Back to normal after major crisis

By IT-Pol

The Danish police and the Ministry of Justice consider access to electronic communications data to be a crucial tool for investigation and prosecution of criminal offences. Legal requirements for blanket data retention, which originally transposed the EU Data Retention Directive, are still in place in Denmark, despite the judgments from the Court of Justice of the European Union (CJEU) in 2014 and 2016 that declared general and indiscriminate data retention illegal under EU law.

In March 2017, in the aftermath of the Tele2 judgment, the Danish Minister of Justice informed the Parliament that it was necessary to amend the Danish data retention law. However, when it comes to illegal data retention, the political willingness to uphold the rule of law seems to be low – every year the revision is postponed by the Danish government with consent from Parliament, citing various formal excuses. Currently, the Danish government is officially hoping that the CJEU will revise the jurisprudence of the Tele2 judgment in the new data retention cases from Belgium, France and the United Kingdom which are expected to be decided in May 2020. This latest postponement, announced on 1 October 2019, barely caught any media attention.

However, data retention has been almost constantly in the news for other reasons since 17 June 2019 when it was revealed to the public that flawed electronic communications data had been used as evidence in up to 10,000 police investigations and criminal trials since 2012. Quickly dubbed the “telecommunications data scandal” by the media, the ramifications of the case have revealed severely inadequate data management practices by the Danish police for almost ten years. This is obviously very concerning for the functioning of the criminal justice system and the right to a fair trial, but also rather surprising in light of the consistent official position of the Danish police that access to telecommunications data is a crucial tool for investigation of criminal offences. The mismatch between the public claims of access to telecommunications data being crucial, and the attention devoted to proper data management, could hardly be any bigger.

According to the initial reports in June 2019, the flawed data was caused by an IT system used by the Danish police to convert telecommunications data from different mobile service providers to a common format. Apparently, the IT system sometimes discarded parts of the data received from mobile service providers. During the summer of 2019, a new source of error was identified. In some cases, the data conversion system had modified the geolocation position of mobile towers by up to 200 metres.

Based on the new information about this involuntary evidence tampering, the Director of Public Prosecutions decided on 18 August 2019 to impose a temporary two-month ban on the use of telecommunications data as evidence in criminal trials and pre-trial detention cases. Somewhat inconsistently, the police could still use the potentially flawed data for investigative purposes. Since telecommunications data are frequently used in criminal trials in Denmark, for example as evidence that the indicted person was in the vicinity of the crime scene, the two-month moratorium caused a number of criminal trials to be postponed. Furthermore, about 30 persons were released from pre-trial detention, something that generated media attention even outside Denmark.

In late August 2019, the Danish National Police commissioned the consultancy firm Deloitte to conduct an external investigation of its handling of telecommunications data and to provide recommendations for improving the data management practices. The report from Deloitte was published on 3 October 2019, together with statements from the Danish National Police, the Director of Public Prosecutions, and the Ministry of Justice.

The first part of the report identifies the main technical and organisational causes for the flawed data. The IT system used for converting telecommunications data to a common format contained a timer which sometimes submitted the converted data to the police investigator before the conversion job was completed. This explains, at least at a technical level, why parts of the data received from mobile service providers were sometimes discarded. The timer error mainly affected large data sets, such as mobile tower dumps (information about all mobile devices in a certain geographical area and time period) and access to historical location data for individual subscribers.
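
The report does not publish the code, but the failure mode it describes is a classic race condition between a fixed timer and a long-running job. A minimal sketch (hypothetical names, data and timings) shows how a timer that fires on a schedule, rather than on completion, hands over only part of a large conversion job:

    import threading
    import time

    def convert(raw_records, converted, per_record=0.01):
        # Simulates a slow conversion job, e.g. a large mobile tower dump.
        for rec in raw_records:
            time.sleep(per_record)
            converted.append(rec.upper())

    raw = [f"record-{i}" for i in range(100)]
    converted = []

    job = threading.Thread(target=convert, args=(raw, converted))
    job.start()

    # Bug: a fixed timer delivers the result after 0.5 seconds,
    # whether or not the conversion has actually finished.
    deliver = threading.Timer(0.5, lambda: print(f"delivered {len(converted)} of {len(raw)} records"))
    deliver.start()

    job.join()
    print(f"conversion eventually produced {len(converted)} records")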

The flaws in the geolocation information for mobile towers that triggered the August moratorium were traced to errors in the conversion of geographical coordinates. Mobile service providers in Denmark use two different systems for geographical coordinates, and the police use a third system internally. During a short period in 2016, the conversion algorithm was applied twice to some mobile tower data, which moved the geolocation positions by a couple of hundred metres.
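
The exact coordinate systems are not named here, but the bug is easy to reproduce in principle. In the purely illustrative sketch below, a datum conversion is approximated as a fixed offset in degrees (the offset values and tower position are invented); applying it a second time shifts an already-converted position by roughly the same distance again:

    import math

    # Invented offset, roughly the size of a datum shift at Danish latitudes.
    LAT_SHIFT, LON_SHIFT = -0.00090, -0.00125

    def to_internal_datum(lat, lon):
        return lat + LAT_SHIFT, lon + LON_SHIFT

    tower = (55.6761, 12.5683)                 # a mobile tower, Copenhagen area
    once = to_internal_datum(*tower)           # correct, single conversion
    twice = to_internal_datum(*once)           # bug: converted a second time

    # Approximate displacement in metres caused by the double conversion.
    dlat_m = abs(twice[0] - once[0]) * 111_320
    dlon_m = abs(twice[1] - once[1]) * 111_320 * math.cos(math.radians(55.7))
    print(f"extra shift: ~{math.hypot(dlat_m, dlon_m):.0f} m")   # on the order of 100-200 m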

On the face of it, these errors in the IT system should be relatively straightforward to correct, but the Deloitte report also identifies more fundamental deficiencies in the police practices of handling telecommunications data. In short, the report describes the IT systems and the associated IT infrastructure as complex, outdated, and difficult to maintain. The IT system used for converting telecommunications data was developed internally by the police and maintained by a single employee. Before December 2018, there were no administrative practices for quality control of the data conversion system, not even simple checks to ensure that the entire data set received from mobile service providers had been properly converted.
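
The “simple checks” the report refers to could be as basic as the following sketch (the data model and field name are hypothetical): confirming that every record received from a provider is still present in the converted output before it reaches an investigator.

    def check_conversion(raw_records, converted_records, key="call_id"):
        # Minimal completeness check: nothing received from the provider may go missing.
        missing = {r[key] for r in raw_records} - {c[key] for c in converted_records}
        if missing:
            raise ValueError(f"conversion dropped {len(missing)} of {len(raw_records)} records")
        return True

    raw = [{"call_id": i} for i in range(1000)]
    converted = raw[:990]            # a flawed conversion silently lost 10 records
    try:
        check_conversion(raw, converted)
    except ValueError as error:
        print(error)                 # "conversion dropped 10 of 1000 records"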

The only viable solution for the Danish police, according to the assessment in the report, is to develop an entirely new infrastructure for handling telecommunications data. Deloitte recommends that the new infrastructure should be based on standard software elements which are accepted globally, rather than internally developed systems which cannot be verified. Concretely, the report suggests using POL-INTEL, a big data policing system supplied by Palantir Technologies, for the new IT infrastructure. In the short term, some investment in the existing infrastructure will be necessary in order to improve the stability of the legacy IT systems and reduce the risk of creating new data flaws. Finally, the report recommends systematic independent quality control and data validation by an external vendor. The Danish National Police has accepted all recommendations in the report.

Deloitte also delivered a short briefing note about the use of telecommunications data in criminal cases. The briefing note, intended for police investigators, prosecutors, defence lawyers and judges, explains the basic use cases of telecommunications data in police investigations, as well as how the data is generated in mobile networks. The possible uncertainties and limitations of telecommunications data are also mentioned. For example, it is pointed out that mobile devices do not necessarily connect to the nearest mobile tower, so it cannot simply be assumed that the user of the device is close to the mobile tower, let alone located with near “GPS level” accuracy. This addresses a frequent criticism of the police and prosecutors for overstating the accuracy of mobile location data – an issue that was covered in depth by the newspaper Information in a series of articles in 2015. Quite interestingly, the briefing note also mentions the possibility of spoofing telephone numbers, so that the incoming telephone call or text message may originate from a different source than the telephone number registered by the mobile service provider under its data retention obligation.

On 16 October 2019, the Director of Public Prosecutions decided not to extend the moratorium on the use of telecommunications data. Along with this decision, the Director issued new and more specific instructions for prosecutors regarding the use of telecommunications data. The Deloitte briefing note should be part of the criminal case (and distributed to the defence lawyer), and police investigators are required to present a quality control report to prosecutors with an assessment of possible sources of error and uncertainty in the interpretation of the telecommunications data used in the case. Documentation of telecommunications data evidence should, to the extent possible, be based on the raw data received from mobile service providers and not the converted data.

For law enforcement, the 16 October decision marks the end of the data retention crisis which erupted in public four months earlier. However, only the most immediate problems at the technical level have really been addressed, and several of the underlying causes of the crisis are still lurking beneath the surface, for example the severely inadequate IT infrastructure used by the Danish police for handling telecommunications data. The Minister of Justice has announced further initiatives, including investment in new IT systems, organisational changes to improve the focus on data management, improved training for police investigators in the proper use and interpretation of telecommunications data, and the creation of a new independent supervisory authority for technical investigation methods used by the police.

Denmark: Our data retention law is illegal, but we keep it for now (08.03.2017)
https://edri.org/denmark-our-data-retention-law-is-illegal-but-we-keep-it-for-now/

Denmark frees 32 inmates over flaws in phone geolocation evidence, The Guardian (12.09.2019)
https://www.theguardian.com/world/2019/sep/12/denmark-frees-32-inmates-over-flawed-geolocation-revelations

Response from the Minister of Justice to the reports on telecommunications data (in Danish only, 03.10.2019)
http://www.justitsministeriet.dk/nyt-og-presse/pressemeddelelser/2019/justitsministerens-reaktion-paa-teledata-redegoerelser

Can cell tower data be trusted as evidence? Blog post by the journalist covering telecommunications data for the newspaper Information (26.09.2015)
https://andreas-rasmussen.dk/2015/09/26/can-cell-tower-data-be-trusted-as-evidence/

(Contribution by Jesper Lund, EDRi member IT-pol, Denmark)

06 Nov 2019

Portuguese ISPs ignore telecom regulator’s recommendations

By D3 Defesa dos Direitos Digitais

In 2018, the Portuguese telecom regulator ANACOM told the three major Portuguese mobile Internet Service Providers (ISPs) to change offers that were in breach of EU net neutrality rules. Among other things, the regulator recommended that ISPs publish their terms and conditions, and increase the data volume of their mobile data packs in order to bring it closer to their zero-rating offer. In Portugal, average mobile data volumes are small, yet prices are among the highest in Europe. ANACOM’s net neutrality report, published in June 2019, reveals how the ISPs reacted to the regulator’s intervention.

While operators have complied with ANACOM’s decision on differential treatment of traffic after the general data ceiling has been exhausted, that was as far as they went. Regarding the increase of data volume, all three major operators simply ignored ANACOM’s demand. None of them changed their offers. One of the operators claimed, instead, that “the current ceiling is adjusted to the demand”.

ANACOM had also asked the ISPs to publish the terms and conditions under which other companies and their applications can be included in their zero-rating packages. The result: all operators ignored this recommendation, too.

Surprisingly, the regulator’s reaction was lukewarm, at best. Instead of strongly criticising the ISPs for not complying with its recommendations, it stated that it “will continue to monitor all matters concerning these recommendations”, and that this will be followed up with “further analysis in the context of net neutrality […]”.

Portuguese EDRi observer D3 Defesa dos Direitos Digitais regrets the lack of will and courage on the part of ANACOM to put an end to the harmful practices of ISPs. Zero-rating harms consumers and free competition by tilting the playing field in favour of a few selected, dominant applications, and it constitutes a threat to a free and neutral internet. By not acting against price discrimination practices between applications and restricting its action to technical discrimination of traffic, ANACOM shows no intention to act on the underlying problem of zero-rating offers.

The result is that in Portugal, mobile data volumes are on average small, and the prices are among the highest in Europe. Users suffer from an over-concentrated market – three major ISPs share 98% of the market. In this setting, the leading companies can afford to ignore the regulator’s public recommendations without practical consequences. The legislator has still not introduced the fines for net neutrality infringements that have been mandatory under EU law since 2015.

This article is an adaptation of an article published at:
https://en.epicenter.works/content/zero-rating-in-portugal-permissive-regulator-allows-isp-to-get-away-with-offering-some-of

D3 Defesa dos Direitos Digitais
https://www.direitosdigitais.pt/

epicenter.works
https://en.epicenter.works/

Portuguese ISPs given 40 days to comply with EU net neutrality rules (07.03.2018)
https://edri.org/portuguese-isps-given-40-days-to-comply-with-eu-net-neutrality-rules/

Civil society urges Portuguese telecom regulator to uphold net neutrality (23.04.2018)
https://edri.org/civil-society-urges-portuguese-telecom-regulator-uphold-net-neutrality/

(Contribution by Eduardo Santos, EDRi observer D3 Defesa dos Direitos Digitais, Portugal)

06 Nov 2019

Twitter banning political ads – the tip of the iceberg

By Chloé Berthélémy

Twitter seems to have learnt the lessons of the 2016 US elections. After the revelation of the Cambridge Analytica scandal, the link between targeted political advertising on social media and the voting behaviour of specific groups of people has been explored and explained again and again. We now understand how social media platforms like Facebook and Twitter play a decisive role in our elections and other democratic processes, and how misleading information, which spreads faster and further than true stories on those platforms, can manipulate voters to a remarkable degree.

When Facebook CEO Mark Zuckerberg was grilled by Representative Alexandria Ocasio-Cortez in a hearing of the United States House Committee on Financial Services on 23 October, he admitted that if Republicans paid to spread a lie on its services, this would probably not be prohibited. Political advertisements are not subject to any fact-checking review that could lead to such promoted content being refused or blocked. In Zuckerberg’s vision, if a politician lies, an open public debate helps expose these lies and the electorate holds the politician accountable by rejecting her or his ideas. The principle of free speech rests on this very idea: all statements should be debated, and the bad ones will naturally be set aside. The only problem is that neither Facebook nor Twitter provides an infrastructure for such an open public debate.

These companies do not display content in a neutral and universal way to everybody. What users see reflects what their personal data reveal about their lives, preferences and habits. Information is broadcast to each user in a selective, narrowly defined manner, in line with what the algorithms have concluded about that person’s past online activity. Hence, so-called “filter bubbles”, combined with the human inclination towards confirmation bias, trap individuals in restricted information environments. These prevent people from forming opinions based on diversified sources of information – a core principle of open public debate.

Some parties in this discussion would like to officially acknowledge the critical infrastructure status that dominant social media have in our societies, treating their platforms as the new home of the public sphere. This would imply applying to social media platforms the existing laws on TV channels and radio broadcasters that require them to carry certain types of content and to exclude others. Considering the amount of content posted every minute on each of those platforms, recourse to automatic filtering measures would be inevitable. This would also cement the platforms’ power over people’s speech and thoughts.

Banning political ads is a positive step towards reducing the harm caused by the amplification of false information. However, this measure still misses the point: the most crucial problem is micro-targeting. A ban on political ads is unlikely to stop micro-targeting, since that is the business model of all the main social media companies, including Twitter.

The first step of micro-targeting is profiling. Profiling consists of collecting as much data as possible on each user to build behavioural tracking profiles – it has been proven that Facebook has expanded this collection even to people who do not use its platform. Profiling is fed by keeping the user trapped on the platform and extracting as much attention and “engagement” as possible. The “attention economy” relies on content that keeps us scrolling, commenting and clicking. Which content does the job is predicted based on our tracking profiles; usually it is offensive, shocking and polarising content. This is why political content is one of the most effective at maximising profits. It does not even need to be paid for.
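
As an illustration only (the topics and scores below are invented, not any platform’s actual system), a feed ordered purely by predicted engagement looks something like this: whatever a user’s tracking profile says will keep them clicking is pushed to the top, regardless of whether it is paid for or accurate.

    # Engagement probabilities inferred from one user's tracking profile (invented).
    profile = {"politics/outrage": 0.31, "sports": 0.08, "cooking": 0.05}

    candidate_posts = [
        {"id": 1, "topic": "cooking"},
        {"id": 2, "topic": "politics/outrage"},
        {"id": 3, "topic": "sports"},
    ]

    def rank_feed(posts, profile):
        # Order by predicted engagement, not by accuracy or relevance.
        return sorted(posts, key=lambda p: profile.get(p["topic"], 0.0), reverse=True)

    print([p["id"] for p in rank_feed(candidate_posts, profile)])   # [2, 3, 1]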

Twitter CEO Jack Dorsey is right to affirm that this is not a freedom of expression issue, but rather a question of paid reach, to which no fundamental right attaches. The rights to data protection and to privacy, by contrast, are human rights, and it is high time for the European Union to give them substance against harmful profiling practices. A step towards that would be to adopt a strong ePrivacy Regulation. This piece of legislation would reinforce the safeguards the General Data Protection Regulation (GDPR) introduced. It would ensure that privacy by design and by default are guaranteed. Finally, it would tackle the pervasive model of online tracking.

Right a wrong: ePrivacy now! (9.10.2019)
https://edri.org/right-a-wrong-eprivacy-now/

Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)
https://edri.org/tag/eprivacy-regulation/

Civil society calls Council to adopt ePrivacy now (5.12.2018)
https://edri.org/civil-society-calls-council-to-adopt-eprivacy-now/

EU elections – protecting our data to protect us from manipulation (08.05.2019)
https://edri.org/eu-elections-protecting-our-data-to-protect-us-from-manipulation/

(Contribution by Chloé Berthélémy, EDRi)

06 Nov 2019

Greece: The new data protection law raises concerns

By Homo Digitalis

On 29 August 2019, the much-awaited new Greek data protection law came into force. This law (4624/2019) implements the provisions of both the EU Law Enforcement Directive (LED, 2016/680) and the General Data Protection Regulation (GDPR) at national level. However, from the first days after the law was adopted, a lot of criticism was voiced concerning the lack of conformity of its provisions with the GDPR.

The Greek data protection law was adopted following the European Commission’s decision of July 2019 to refer Greece to the Court of Justice of the European Union (CJEU) for not transposing the LED on time. Thus, the national authorities acted fast in order to adopt a new data protection law. Unfortunately, the process was rushed through. As a result, the new data protection law suffers from important shortcomings and includes Articles that challenge the provisions of the LED or even the GDPR.

In September 2019, Greek EDRi observer Homo Digitalis, together with the Greek consumer protection organisation EKPIZO, sent a joint request to the Hellenic data protection authority (DPA) asking it to issue an Opinion on the conformity of the Greek law with the provisions of the LED and the GDPR. The DPA issued a press statement in early October 2019 announcing that it will deliver an Opinion in due time. Moreover, on 24 October 2019, Homo Digitalis filed a complaint with the European Commission regarding the provisions of the Greek data protection law that challenge the EU data protection regime.

In order to acquire a thorough view of the Greek law, Homo Digitalis also reached out to one of the most prominent privacy and data protection law experts in Greece, Professor Lilian Mitrou, who kindly shared her thoughts on the positive and negative aspects of the new data protection law.

Professor Mitrou states that, on the positive side, the Greek legislator has introduced further limitations to the processing of sensitive data (genetic data, biometric data or data concerning health). Thus, according to Article 23 of the new Greek law, the processing of genetic data for health and life insurance is expressly prohibited. “In this respect the Greek law, by stipulating prohibition on the use of genetic findings in the sphere of insurance, precludes the risk of results of genetic diagnosis being used to discriminate against people,” she says.

However, a strong point of criticism relates to the provisions on using data for purposes other than those for which they were collected. The Greek law introduces very wide and vague exceptions to the purpose limitation principle, which prohibits the further use of data for incompatible purposes. “For example, private entities are allowed to process personal data for preventing threats against national or public security upon request of a public entity. Serious concerns are raised also with regard to the limitations of the data subjects’ rights,” Professor Mitrou points out.

She recalls that the Greek legislator “has made extensive use of the limitations permitted by Article 23 of the GDPR to restrict the right to information, the right to access and the right to rectification and erasure”. However, these restrictions have been adopted without fully complying with the safeguards provided in Article 23, para 2 GDPR. Moreover, the Greek law introduces provisions that allow the data controller not to erase data upon request of the data subject, in case the controller has reason to believe that erasure would adversely affect legitimate interests of the data subject. Thus, the Greek legislator allows the data controller to substitute its own judgement for the will of the data subject.

“The Greek law has not respected the GDPR as standard borderline and has (mis)used ‘opening clauses’ and Member State discretion not to enhance but to reduce the level of data protection,” Professor Mitrou concludes.

Homo Digitalis
https://www.homodigitalis.gr/

Professor Lilian Mitrou
https://www.icsd.aegean.gr/group/members-data.php?group=L1&member=47&fbclid=IwAR3LWksLRO0Yp1JCNWaGp-UODEeyALxtDHYOUo7Tg7kQ_CtGXfS2l8Z-cxw

The data protection law 4624/2019 (only in Greek 29.08.2019)
https://www.kodiko.gr/nomologia/document_navigation/552084/nomos-4624-2019

Official Request to the Hellenic Data Protection Authority for the issuance of legal opinion on Law 4624/2019 (20.09.2019)
https://www.homodigitalis.gr/en/posts/4217

Homo Digitalis on a seminar discussion regarding Law 4624/2019 (24.09.2019)
https://www.homodigitalis.gr/en/posts/4232

Homo Digitalis’ complaint to the European Commission against the new data protection law 4624/2019 (24.10.2019)
https://www.homodigitalis.gr/source_content/uploads/2019/11/Complaint-Form-for-breach-of-EU-law_24October2019.pdf

(Contribution by Eleftherios Chelioudakis, EDRi observer Homo Digitalis, Greece)

29 Oct 2019

Hate speech online: Lessons for protecting free expression

By Ella Jakubowska

On 21 October, David Kaye – UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression – released the preliminary findings of his sixth report on information and communication technology. They include tangible suggestions to internet companies and states whose current efforts to control hate speech online are failing to comply with the fundamental principles of human rights. The EU Commission should consider Kaye’s recommendations when creating new rules for the internet and – most importantly – when drafting the Digital Services Act (DSA).

The “Report of the Special Rapporteur to the General Assembly on online hate speech” (docx) draws on international legal instruments on civil, political and non-discrimination rights to show how human rights law already provides a robust framework for tackling hate speech online. The report offers an incisive critique of platform business models which, supported by States, profit from the spread of “hateful content” whilst violating free expression by wantonly deleting legal content. Instead, Kaye offers a blueprint for tackling hate speech in a way which empowers citizens, protects online freedom, and puts the burden of proof on States, not users. Whilst the report outlines a general approach, the European Commission should incorporate Kaye’s advice when developing the proposed Digital Services Act (DSA) and other related legislation and non-legal initiatives, to ensure that the regulation of hate speech does not inadvertently violate citizens’ digital rights.

Harmful content removal: under international law, there is a better way

Sexism, racism and other forms of hate speech (which Kaye defines as “incitement to discrimination, hostility or violence”) in the online environment are quite rightly areas of attention for global digital policy and law makers. But the report offers a much-needed reminder that restricting freedom of expression online through deleting content is not just an ineffective solution, but in fact threatens a multitude of rights and freedoms that are vital for the functioning of democratic societies. Freedom of expression is, as Kaye states, “fundamental to the enjoyment of all human rights”. If curtailed, it can open the door for repressive States to systematically suppress their citizens. Kaye gives the example of blasphemy laws: profanity, whilst offensive, must be protected – otherwise such laws can be used to punish and silence citizens who do not conform to a particular religion. And others, such as journalist Glenn Greenwald, have already pointed out how “hate speech” legislation is used in the EU to suppress left-wing viewpoints.

Fundamental rules for restricting freedom of expression online

The report is clear that restrictions of online speech “must be exceptional, subject to narrow conditions and strict oversight”, with the burden of proof “on the authority restricting speech to justify the restriction”. Any restriction is thus subject to three criteria under human rights law:

Firstly, under the legality criterion, Kaye uses human rights law to show that speech may only be restricted online (as offline) if it is genuinely unlawful, not just offensive or harmful. It must be regulated in a way that does not give “excessive discretion” to governments or private actors, and that gives independent routes of appeal to impacted individuals. Conversely, the current situation gives de facto regulatory power to internet companies by allowing (and even pressuring) them to act as the arbiters of what does and does not constitute free speech. Coupled with error-prone automated filters and short takedown periods incentivising over-removal of content, this is a free speech crisis in motion.

Secondly, on the question of legitimacy, the report requires laws and policies on online hate speech to treat it in the same way as any other speech. This means ensuring that freedom of expression is restricted only for legitimate interests, and not curtailed for “illegitimate purposes” like suppressing criticism of States. Potential illegal suppression is enabled by overly broad definitions of hate speech, which can act as a catch-all for content that States find offensive, despite being legal. A lack of strict definitions in the counter-terrorism policy field has already had a strong impact on freedom of expression in Spain, for example. “National security” has been shown to be abusively invoked to justify measures interfering with human rights, and used as a pretext to adopt vague and arbitrary limitations.

Lastly, necessity and proportionality are violated by current moderation practices including “nearly immediate takedown” requirements and automatic filters which clumsily censor legal content, turning it into collateral damage in the war against hate speech. This violates the rights to due process and redress, and unnecessarily puts the burden of justifying content on users. Worryingly, Kaye adds that “such filters disproportionately harm historically under-represented communities.”

A rational approach to tackling hate speech online

The report offers a wide range of solutions for tackling hate speech whilst avoiding content deletion or internet shutdowns. Guided by human rights documents including the so-called “Ruggie Principles” (the 2011 UN Guiding Principles on Business and Human Rights), the report emphasises that internet companies need to exercise a greater degree of human rights due diligence. This includes transparent review processes, human rights impact assessments, clear routes of appeal and human, rather than algorithmic, decision-making. Crucially, Kaye calls on internet platforms to “de-monetiz[e] harmful content” in order to counteract the business models that profit from viral, provocative, harmful content. He stresses that the biggest internet companies must bear the cost of developing solutions, and share them with smaller companies to ensure that fair competition is protected.

The report is also clear that States must take more responsibility, working in collaboration with the public to put in place clear laws and standards for internet companies, educational measures, and remedies (both judicial and non-judicial) in line with international human rights law. In particular, they must take care when developing intermediary liability laws to ensure that internet companies are not forced to delete legal content.

The report gives powerful lessons for the future DSA and other related policy initiatives. In the protection of fundamental human rights, we must limit content deletion (especially automated) and avoid measures that make internet companies de facto regulators: they are not – and nor would we want them to be – human rights decision-makers. We must take the burden of proof away from citizens, and create transparent routes for redress. Finally, we must remember that the human rights rules of the offline world apply just as strongly online.

Report of the Special Rapporteur on the promotion and protection of the freedom of opinion and expression, A/74/486 (Advanced unedited report)
https://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/Annual.aspx

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

In Europe, Hate Speech Laws are Often Used to Suppress and Punish Left-Wing Viewpoints (29.08.2017)
https://theintercept.com/2017/08/29/in-europe-hate-speech-laws-are-often-used-to-suppress-and-punish-left-wing-viewpoints/

EU copyright dialogues: The next battleground to prevent upload filters (18.10.2019)
https://edri.org/eu-copyright-dialogues-the-next-battleground-to-prevent-upload-filters/

Spain: Tweet… if you dare: How counter-terrorism laws restrict freedom of expression in Spain (13.03.2018)
https://www.amnesty.org/en/documents/eur41/7924/2018/en/

CCBE Recommendations on the protection of fundamental rights in the context of ‘national security’ 2019
https://www.ccbe.eu/fileadmin/speciality_distribution/public/documents/SURVEILLANCE/SVL_Guides_recommendations/EN_SVL_20190329_CCBE-Recommendations-on-the-protection-of-fundamental-rights-in-the-context-of-national-security.pdf

(Contribution by Ella Jakubowska, EDRi intern)

23 Oct 2019

#PrivacyCamp20: Technology and Activism

By Dean Willis

The 8th annual Privacy Camp will take place in Brussels on 21 January 2020.

With the focus on “Technology and Activism”, Privacy Camp 2020 will explore the significant role digital technology plays in activism, enabling people to bypass traditional power structures and fostering new forms of civil disobedience, but also enhancing the surveillance power of repressive regimes. Together with activists and scholars working at the intersection of technology and activism, this event will cover a broad range of topics from surveillance and censorship to civic participation in policy-making and more.

The call for panels invites classical panel submissions, but also interactive formats such as workshops. We are particularly interested in providing space for discussions on and around social media and political dissent, hacktivism and civil disobedience, the critical public sphere, data justice and data activism, as well as commons, peer production, platform cooperativism, and citizen science. The deadline for proposing a panel or a workshop is 10 November 2019.

In addition to traditional panel and workshop sessions, this year’s Privacy Camp invites critical makers to join the debate on technology and activism. We are hosting a Critical Makers Faire for counterculture and DIY artists and makers involved in activism. The Faire will provide a space to feature projects such as biohacking, wearables, bots, glitch art, and much more. The deadline for submissions to the Makers Faire is 30 November 2019.

Privacy Camp is an annual event that brings together digital rights advocates, NGOs, activists, academics and policy-makers from Europe and beyond to discuss the most pressing issues facing human rights online. It is jointly organised by European Digital Rights (EDRi), Research Group on Law, Science, Technology & Society at Vrije Universiteit Brussel (LSTS-VUB), the Institute for European Studies at Université Saint-Louis – Bruxelles (USL-B), and Privacy Salon.

Privacy Camp 2020 takes place on 21 January 2020 in Brussels, Belgium. Participation is free and registrations open in December.

Privacy Camp 2020: Call for submissions
https://privacycamp.eu/?p=1601

Privacy Camp
https://privacycamp.eu/

(Contribution by Dean Willis, EDRi intern)

23 Oct 2019

Net neutrality overhaul: 5G, zero-rating, parental control, DPI

By Epicenter.works

The Body of European Regulators for Electronic Communications (BEREC) is currently in the process of overhauling its guidelines on the implementation of Regulation (EU) 2015/2120, which forms the legal basis of the EU’s net neutrality rules. At its most recent plenary, BEREC produced new draft guidelines and opened a public consultation on this draft. The proposed changes to the guidelines seem like a mixed bag.

5G network slicing

The new mobile network standard 5G specifies the ability of network operators to provide multiple virtual networks (“slices”) with different quality characteristics over the same network infrastructure, a capability called “network slicing”. Because end-user equipment can be connected to multiple slices at the same time, providers could use the introduction of 5G to create new products where different applications make use of different slices with their associated quality levels. In its draft guidelines, BEREC clarifies that it is the user who has to be able to choose which application makes use of which slice. This is a welcome addition.
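
In practice, this could amount to something as simple as a per-device policy along the lines of the sketch below (the slice names and applications are invented, and this is only a conceptual illustration, not how any operator’s products work): the mapping from application to slice is set by the user, with a default for everything else.

    # Hypothetical user-defined policy: which application uses which 5G slice.
    user_slice_policy = {
        "video-call-app": "low-latency-slice",
        "backup-app": "bulk-data-slice",
    }
    DEFAULT_SLICE = "best-effort-slice"

    def slice_for(app):
        return user_slice_policy.get(app, DEFAULT_SLICE)

    print(slice_for("video-call-app"))   # low-latency-slice
    print(slice_for("web-browser"))      # best-effort-slice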

Zero-rating

Zero-rating is the practice of billing the traffic used by different applications differently, in particular not deducting the traffic created by certain applications from a user’s available data volume. This practice has been criticised because it reduces consumers’ choice regarding which applications they can use, and disadvantages new, small application providers against the big, already established players. These offers broadly come in two types: “open” zero-rating offers, where application providers can apply to become part of the programme and have their application zero-rated, and “closed” offers, where that is not the case. The draft outlines specific criteria according to which open offers can be assessed.
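
A minimal sketch of the billing logic being described (the application names and volumes are invented): traffic from programme members never touches the user’s small data cap, while everything else does.

    ZERO_RATED = {"partner-music-app", "operator-tv-app"}   # hypothetical programme members

    def bill(remaining_mb, app, used_mb):
        # Deduct traffic from the user's volume unless the app is zero-rated.
        if app in ZERO_RATED:
            return remaining_mb
        return max(remaining_mb - used_mb, 0)

    quota = 2000                                            # MB in the monthly plan
    quota = bill(quota, "partner-music-app", 500)           # still 2000 MB
    quota = bill(quota, "independent-podcast-app", 500)     # down to 1500 MB
    print(quota)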

Parental control filters

While content- and application-specific pricing is an additional challenge for small content and application providers, content-specific blocking can create even greater problems. Nevertheless, the draft contains new language that creates a carve-out for products such as parental control filters operated by the access provider from the provisions of the Regulation that prohibit such blocking, instead subjecting them to a case-by-case assessment by the regulators (as is the case for zero-rating). The language does not clearly exclude filters that are sold in conjunction with the access product and are on by default, and the rules can even be read as to require users who do not want to be subjected to the filtering to manually reconfigure each of their devices.

Deep Packet Inspection

Additionally, BEREC is running a consultation on two paragraphs in the guidelines to which it has not yet proposed any changes. These paragraphs establish important privacy protections for end-users. They prohibit access providers from using Deep Packet Inspection (DPI) when applying traffic management measures in their networks, and thus protect users from having the content of their communications inspected. However, according to statements made during the debriefing session of the latest BEREC plenary, some actors want to allow providers to look at domain names, which can themselves reveal very sensitive information about the user, and which require DPI to extract from the data stream.
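
A simplified sketch of why this matters (the packet below is invented and unencrypted; for today’s TLS traffic the equivalent field would be the SNI in the handshake): ordinary traffic management only needs the addressing headers, whereas pulling out the domain name means parsing the content of the user’s communication.

    packet = {
        "src_ip": "192.0.2.17", "dst_ip": "203.0.113.80", "dst_port": 80,
        "payload": b"GET /results?q=hiv+test HTTP/1.1\r\nHost: health-clinic.example\r\n\r\n",
    }

    def without_dpi(pkt):
        # Header fields are enough for normal traffic management.
        return pkt["src_ip"], pkt["dst_ip"], pkt["dst_port"]

    def with_dpi(pkt):
        # Extracting the domain requires reading the application payload itself.
        for line in pkt["payload"].split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                return line.split(b":", 1)[1].strip().decode()

    print(without_dpi(packet))   # ('192.0.2.17', '203.0.113.80', 80)
    print(with_dpi(packet))      # health-clinic.example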

EDRi member epicenter.works will respond to BEREC’s consultation and encourages other stakeholders to participate. The proposed changes are significant: clearer language is required, and users’ privacy needs to remain protected. The consultation period ends on 28 November 2019.

epicenter.works
https://epicenter.works/

Public consultation on the document on BEREC Guidelines on the Implementation of the Open Internet Regulation (10.10.2019)
https://berec.europa.eu/eng/news_consultations/ongoing_public_consultations/5947-public-consultation-on-the-document-on-berec-guidelines-on-the-implementation-of-the-open-internet-regulation

Zero rating: Why it is dangerous for our rights and freedoms (22.06.2016)
https://edri.org/zero-rating-why-dangerous-for-our-rights-freedoms/

NGOs and academics warn against Deep Packet Inspection (15.05.2019)
https://edri.org/ngos-and-academics-warn-against-deep-packet-inspection/

Net Neutrality vs. 5G: What to expect from the upcoming EU review? (05.12.2018)
https://edri.org/net-neutrality-vs-5g-what-to-expect-from-the-upcoming-eu-review/

(Contribution by Benedikt Gollatz, EDRi member epicenter.works, Austria)

23 Oct 2019

Austrian Passenger Name Records complaint – the key points

By Epicenter.works

In August 2019, Austrian EDRi member epicenter.works filed a complaint about Passenger Name Records (PNR) with the Austrian data protection authority (DPA), with the aim of overturning the EU PNR Directive. On 6 September, the DPA rejected the complaint, which was good news, because the rejection was the only way to bring the case before the Federal Administrative Court.

The complaint: Objections

Epicenter.works’ complaint about the PNR system to the Federal Administrative Court contains a number of objections. The largest and most central one concerns the entire PNR Directive itself. The Court of Justice of the European Union (CJEU) has already repeatedly declared similar mass surveillance measures to be contrary to fundamental rights, for example in the case of data retention or in its Opinion on the PNR agreement with Canada.

A complaint cannot be lodged directly with the CJEU, but the Administrative Court must submit questions on the interpretation of the law to the CJEU, as epicenter.works suggested in the complaint. The first suggested question can be summarised as follows: “Does the PNR Directive contradict the fundamental rights of the EU?”

Moreover, Austria has not correctly implemented the PNR Directive: it has partially extended its application and has not implemented important restrictions from the Directive. For example, the Directive requires all automatic hits, such as when someone is identified as a potential terrorist, to be checked by a person. This has not been implemented in the Austrian PNR Act. The question to the CJEU proposed in the complaint is therefore: “If the PNR Directive is valid in principle, is the processing of PNR data permitted even though the automatic hits do not have to be checked by a person?”

Where the Austrian PNR Act goes beyond the Directive, epicenter.works suggests that the Court should request the Constitutional Court to repeal certain provisions.

The Austrian PNR Act goes further than the Directive

According to the PNR Directive, PNR data may only be processed for the purpose of prosecuting terrorist offences and certain serious criminal offences. These serious crimes are listed in an annex to the Austrian PNR Act, directly translated from the PNR Directive. However, some of these crimes have no equivalent in Austrian criminal law, leaving the entire provision unclear. Because of this flaw, the complaint asks the Constitutional Court to repeal this provision of the PNR Act. The list of terrorist offences in the PNR Act also goes much further than the Directive.

The PNR Directive only requires EU Member States to record flights to or from third countries, leaving the recording of intra-EU flights optional for Member States. Many countries have also extended this to domestic flights. In Austria, the Minister of the Interior can do this by decree without giving any specific reason. The complaint suggests that the Constitutional Court should delete this provision, because it has a strong impact on the fundamental rights of millions of people — without any justification of its necessity or proportionality.

Finally, the PNR Act also provides for the possibility for customs authorities and even the military to have access to PNR data. This is neither provided for in the PNR Directive, nor necessary for the prosecution of alleged terrorists and those suspected of serious crimes, and it is therefore an excessive measure. Here, too, the complaint suggests that the Constitutional Court should delete the provisions that give these authorities access to PNR data.

epicenter.works
https://en.epicenter.works/

Our PNR complaint to the Federal Administrative Court
https://en.epicenter.works/content/our-pnr-complaint-to-the-federal-administrative-court

PNR: EU Court rules that draft EU/Canada air passenger data deal is unacceptable (26.07.2017)
https://edri.org/pnr-eu-court-rules-draft-eu-canada-air-passenger-data-deal-is-unacceptable/

Why EU passenger surveillance fails its purpose (25.09.2019)
https://edri.org/why-eu-passenger-surveillance-fails-its-purpose/

Passenger surveillance brought before courts in Germany and Austria (22.05.2019)
https://edri.org/passenger-surveillance-brought-before-courts-in-germany-and-austria/

(Contribution by EDRi member epicenter.works, Austria)

23 Oct 2019

The sixth attempt to introduce mandatory SIM registration in Romania

By ApTI

A tragic failure by the police to save a teenage girl, who was abducted and managed to call the 112 emergency number three times before she was murdered, led to the adoption of a new Emergency Ordinance in Romania. The Ordinance introduces several measures to improve the 112 system, one of which is mandatory SIM card registration for all prepaid users. Currently, approximately ten million prepaid SIM cards are in use in Romania.

This is the sixth legislative attempt in the last eight years to pass legislation for registering SIM card users despite a Constitutional Court decision in 2014 deeming it illegal. The measure was adopted through a fast legislative procedure and is supposed to enter into effect on 1 January 2020.

The main reason for introducing mandatory SIM card registration seems to be that the authorities want to be able to locate calls to the emergency number and punish false emergency calls. However, this measure is unlikely to be effective for that purpose, as anyone who buys a SIM card can obviously give it to someone else. Another reason is to identify the caller in real emergency situations, in order to locate them more easily and send help.

Romania is one of the few countries in the European Union where calling the emergency number without a SIM card is not possible. This has been a deliberate decision taken by Romanian authorities to limit the number of “non-urgent” calls.

What happened?

After the Emergency Ordinance was proposed, EDRi member ApTI, together with two other Romanian NGOs, launched a petition to the Ombudsman and the government calling for this law not to be adopted. After civil society’s calls for a public debate, the Ministry of Communications organised an oral hearing in which the participants were given no more than five minutes to express their views, without the possibility of an actual dialogue. The Emergency Ordinance was adopted shortly after the hearing, despite the fact that the Romanian Constitution explicitly states that laws which affect fundamental rights cannot be adopted by emergency ordinances (Article 115 of the Romanian Constitution).

What did the court say in 2014?

In 2014, the Constitutional Court held that the “retention and storage of data is an obvious limitation of the right to personal data protection and to the fundamental rights protected by the Constitution on personal and family privacy, secrecy of correspondence and freedom of speech” (para. 43 of Decision nr. 461/2014, unofficial translation). The Court explained that restricting fundamental rights is possible only if the measure is necessary in a democratic society. The measure must also be proportionate, and must be applicable without discrimination and without affecting the essence of the right or liberty.

Collecting and storing the personal data of all citizens who buy prepaid SIM cards for the mere reason of punishing those who might abusively call the emergency number seems like a blatantly disproportionate measure that unjustifiably limits the right to private life. At the same time, such a measure inverts the presumption of innocence and automatically assumes that all prepaid SIM card users are potentially guilty.

What’s the current status?

The Ombudsman listened to civil society’s concerns, and challenged the Ordinance at the Constitutional Court. Together with human rights NGO APADOR-CH, ApTI is preparing an amicus curiae to support the unconstitutionality claims.

In the meantime, the Ordinance moved on to parliamentary approval and the provisions related to mandatory SIM card registration were rejected in the Senate, the first chamber to debate the law. The Chamber of Deputies can still introduce modifications.

Asociatia pentru Tehnologie si Internet (ApTI)
https://www.apti.ro/

Petition against Emergency Ordinance on mandatory sim card registration (only in Romanian, 12.08.2019)
https://www.apti.ro/petitie-cartele-prepay-initiativa2019/

ApTI’s response to the public consultation on Emergency Ordinance on mandatory SIM card registration (only in Romanian, 21.08.2019)
https://www.apti.ro/raspuns-apti-inregistrare-prepay-112

Constitutional Court decision nr. 461/2014 (only in Romanian)
https://privacy.apti.ro/decizie-curtea-constitutionala-prepay-461-2014/

Timeline of legislative initiatives to introduce mandatory SIM card registration (only in Romanian)
https://apti.ro/Ini%C5%A3iativ%C4%83-legislativ%C4%83-privind-%C3%AEnregistrarea-utilizatorilor-serviciilor-de-comunica%C5%A3ii-electronice-tip-Prepay

(Contribution by Valentina Pavel, EDRi member ApTI, Romania)
