06 Nov 2018

Welcoming our new Executive Director Claire Fernandez!

By EDRi

EDRi is happy to announce that we have found a new Executive Director! Claire Fernandez will join the organisation on 19 November 2018 and will be in charge of the leadership, mission and strategy of the organisation, its financial sustainability and oversight, and the daily management of operations. Claire’s arrival is part of a wider leadership change and transition in our Brussels office team.

Since February 2013, Claire has worked as the Deputy Director of the European Network Against Racism (ENAR). EDRi and ENAR partnered earlier this year to draw up core principles for the fight against illegal content online.

Prior to her role at ENAR, she worked as an independent human rights consultant, leading the Open Society Foundations’ campaign on the reform of the European Court of Human Rights and revising the Council of Europe Commissioner for Human Rights’ report on the human rights of Roma. Before that, Claire was an adviser to the Council of Europe Commissioner for Human Rights. From 2008 to 2010, she represented the Organization for Security and Co-operation in Europe (OSCE) in Bosnia and Kosovo, advising local authorities on good governance and minorities’ rights. She holds a Master’s degree in Human Rights from the Robert Schuman University in Strasbourg, France.

“I am grateful for the opportunity to work with this impressive network and staff on digital rights, which are now increasingly recognised as the cornerstone of human rights, rule of law and democracy,” said Claire.

The Brussels office staff, the EDRi board and the EDRi members warmly welcome Claire. We all look forward to working with her!

 

Read more:

Upcoming EDRi leadership change: A message from Joe and Kirsten (29.03.2018)
https://edri.org/upcoming-edri-leadership-change-message-joe-kirsten/

25 Oct 2018

The GDPR Today – Stats, news and tools to make data protection a reality

By EDRi

25 October 2018 marks the launch of GDPR Today – your online hub for staying up-to-date with the (real) life of the new EU data protection law, the General Data Protection Regulation (GDPR). The project will monitor the implementation of the law across Europe by publishing statistics and sharing relevant news around key subjects.

GDPR Today, led by several EDRi member organisations, aims to complement our association’s past support for the data protection reform.

Katarzyna Szymielewicz, vice-president of EDRi and co-founder and president of Panoptykon Foundation

The initiative will prioritise building knowledge around legal guidelines and decisions, data breaches, new codes of conduct, tools facilitating individuals’ exercise of rights, important business developments and governmental support for data protection authorities. The GDPR Today is an instrument aimed at data protection experts, activists, journalists, lawyers, and anyone interested in the protection of personal data.

Our goal with GDPR Today is to present facts to the public on the implementation of the law, so that those interested can follow how the GDPR is both shaping the EU digital market and helping people regain control over their personal data.

Estelle Massé, Senior Policy Analyst and Global Data Protection Lead at Access Now

The GDPR has so far often been portrayed as a burden, and the focus has been on so-called non-functional elements, which remain untested and have often created misunderstanding around the functional ones. The GDPR Today will put facts about the implementation of the law at the centre of the debate.

Read the first edition of the GDPR Today here: https://www.gdprtoday.org/

24 Oct 2018

ENDitorial: YouTube puts uploaders, viewers & itself in a tough position

By Bits of Freedom

A pattern is emerging. After blocking a controversial video, YouTube nonpologises for doing so, and reinstates the video… just to block it again a few months later. The procedures around content moderation need to improve, but that’s not all: more needs to change.

Support our work – make a recurrent donation! https://edri.org/supporters/

In June 2018, EDRi member Bits of Freedom reported that YouTube had already taken down the accounts of Women on Waves, a Dutch pro-choice NGO, three times in 2018, each time without proper justification. As if that wasn’t ridiculous enough, the organisation’s account was taken down a fourth time just as it was being interviewed by the Dutch television programme Nieuwsuur about the previous takedowns, again without notice and without a satisfactory explanation. YouTube subsequently did what it has done many times before: the company issued a nonpology and reinstated the account. Based on experience, it is a question of when, not if, the account gets removed again.

It’s odd that an account can be wrongfully blocked several times over the course of just a few months. One would expect that, after an account has been wrongfully blocked once or, at worst, twice, moderators would receive a warning that triggers a process in which an additional person is involved as soon as the account is again recommended for blocking. However, at best, this would only prevent the most obvious mistakes. Whether or not there is a properly functioning process in place for blocking videos or accounts, there will always be controversies. The company will not be able to prevent the occasional moderation error.

YouTube is in a near-monopoly position when it comes to uploading and watching videos, and it has a huge reach. Every decision YouTube makes about whether a video can be accessed through its platform can have an enormous impact. This becomes especially clear with videos that deal with controversial topics. Nieuwsuur gives a few examples: bodily integrity, sexual freedom, and cannabis. Of course, you’ll always be able to find someone somewhere in the world who has a problem with these topics, which is probably why YouTube bans certain videos about them upfront, and quickly removes others as soon as someone complains. Videos and accounts disappear if one or more viewers report them as offensive, or if YouTube’s computers detect certain images or combinations of words.

This puts everyone in a tough position: the creator, the viewer and the platform itself. Creators see their videos fall off the internet from time to time and can’t do anything about it. Viewers can’t watch the videos they want to watch, regardless of their feelings about certain topics. Platforms will never be able to please everyone; opinions will continue to differ. Moreover, due to public and political pressure, a company can no longer decide for itself how to run its platform.

The only solution to all this lies in ensuring that everyone – the uploader, the viewer and the platform – has options to choose from. The only way to do that is to ensure that multiple platforms exist side by side, each with its own interests, considerations, and audience. That enables creators to choose the platform that fits them best. As a viewer, you can choose a platform that is as open-minded as you are. And the platform can go back to making its own decisions about what it deems acceptable and what not.

And the beauty of it all: in this scenario, the procedures for moderating content become less crucial. If a platform handles complaints sloppily, users can simply choose a better-functioning alternative, because they are no longer dependent on that particular platform.

YouTube puts uploaders, viewers and itself in a tough position (25.10.2018)
https://www.bitsoffreedom.nl/2018/10/24/youtube-puts-uploaders-viewers-and-itself-in-a-tough-position/

Women on Waves’ three YouTube suspensions this year show yet again that we can’t let internet companies police our speech (28.06.2018)
https://www.bitsoffreedom.nl/2018/06/28/women-on-waves-three-youtube-suspensions-this-year-show-yet-again-that-we-cant-let-internet-companies-police-our-speech/

YouTube censors Dutch organizations’ videos (only in Dutch)
https://nos.nl/nieuwsuur/artikel/2244146-youtube-censureert-video-s-nederlandse-organisaties-kanaal-weer-op-zwart.html

(Contribution by Rejo Zenger, EDRi member Bits of Freedom, the Netherlands)

24 Oct 2018

Council continues limbo dance with the ePrivacy standards

By Yannic Blaschke

It has been 652 days since the European Commission launched its proposal for an ePrivacy Regulation. The European Parliament took a strong stance on the proposal when it adopted its position a year ago, but the Council of the European Union is still only taking baby steps towards finding its own.


In its latest proposal, the Austrian Presidency of the Council unfortunately continues the trend of presenting the Council with suggestions that lower the privacy protections proposed by the Commission and strengthened by the Parliament. The latest working document, published on 19 October 2018, makes it apparent that we are far from having reached the bottom of what the Council sees as acceptable in treating our personal data as a commodity.

Probably the gravest change to the text is to allow tracking technologies to be stored on an individual’s computer without consent for websites that partly or wholly finance themselves through advertising, provided they have informed the user of the existence and use of such processing and the user “has accepted this use” (Recital 21). This “acceptance” is far from the informed consent that the General Data Protection Regulation (GDPR) established as the standard in the EU. The Austrian Presidency text puts cookies that are necessary for regular use (such as language preferences and the contents of a shopping basket) on the same level as the very invasive tracking technologies pushed by the Google/Facebook duopoly in the current commercial surveillance framework. This opens a Pandora’s box of ever more sharing, merging and reselling of citizens’ data in huge online commercial surveillance networks, and of micro-targeting citizens with commercial and political manipulation, without the knowledge of the person whose private information is being shared with a large number of unknown third parties.

One of the great added values of the ePrivacy Regulation (which was originally intended to enter into force at the same time as the GDPR) is that it is supposed to raise the bar for companies and other actors who want to track citizens’ behaviour on the internet by placing tracking technologies on users’ computers. Currently, such an accumulation of potentially highly sensitive data about an individual mostly happens without the real knowledge of the individuals concerned, often through coerced (not freely given) consent, and the data is shared and resold extensively within opaque advertising networks and data-broker services. In a strong and future-proof ePrivacy Regulation, the collection and processing of such behavioural data therefore needs to be tightly regulated and based on the informed consent of the individual – an approach that is now increasingly jeopardised as the Council appears ever more favourable to tracking technologies.

The detrimental change to Recital 21 is only one of the bad ideas through which the Austrian Presidency seeks to strike a consensus. In addition, there is, for instance, the undermining of the protections around “compatible further processing” (itself a bad idea introduced by the Council) in Article 6 2aa (c), and the watering down of the requirements for regulatory authorities in Article 18, which causes significant friction with the GDPR. With one disappointing “compromise” after another, the ePrivacy Regulation is in increasing danger of falling short of its ambition to end the unwanted stalking of individuals on the internet.

EDRi will continue to observe the development of the legislation closely, and calls on everyone in favour of a solid EU privacy regime that protects citizens’ rights and competition to voice their demands to their member states.

Five Reasons to be concerned about the Council ePrivacy draft (26.09.2018)
https://edri.org/five-reasons-to-be-concerned-about-the-council-eprivacy-draft/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Your ePrivacy is nobody else’s business (30.05.2018)
https://edri.org/your-eprivacy-is-nobody-elses-business/

e-Privacy revision: Document pool (10.01.2017)
https://edri.org/eprivacy-directive-document-pool/

(Contribution by Yannic Blaschke, EDRi intern)

24 Oct 2018

EU’s flawed arguments on terrorist content give big tech more power

By EDRi

On 12 September 2018, the European Commission proposed yet another attempt to empower the same big tech companies it claims are already too powerful: a draft Regulation on preventing the dissemination of terrorist content online. The proposal encourages private companies to delete or disable access to “terrorist content”.

Support our work with a one-off donation! https://edri.org/donate/

The implementation deadline of the so-called Terrorism Directive’s provisions on blocking and removal of terrorist-related content online has only just passed (on 8 September 2018), but the Commission has already rushed to launch yet another proposal ahead of the upcoming EU elections. The proposed draft is so flawed that the Commission is unable to properly justify it in the 146 pages of the almost comical Impact Assessment it produced.

What does the Commission think “terrorist content” is?

The proposed draft Regulation provides a very broad definition of terrorist content that is similar to – but different from – the definition in the recently adopted Terrorism Directive (currently being transposed into 27 national EU legal frameworks). The definition includes the following activities:

  • inciting or advocating, including by glorifying, the commission of terrorist offences, thereby causing a danger that such acts be committed;
  • encouraging the contribution to terrorist offences;
  • promoting the activities of a terrorist group, in particular by encouraging the participation in or support to a terrorist group.

While the Terrorism Directive required “intention” to be part of all elements constituting terrorist offences, this draft Regulation omits this necessary requirement. Without considering people’s intentions, we risk that any communication of terrorist-related content, whether for confrontation, reporting, research or historical purposes, will be automatically deleted – with associated personal data being subject to long-term storage. In a democratic society, this is not acceptable.

What measures does the Terrorism Regulation contain?

The draft Regulation establishes three main measures:

  1. Upload filters (“proactive measures”) to be implemented by companies;
  2. Orders issued by (undefined) national authorities to remove or disable access to terrorist content within an hour; and
  3. Referrals by national authorities, Europol or a competent Union body on the basis of violations of companies’ terms of service (not the law), subject to the “voluntary consideration” of the online hosting providers themselves. This will lead to de facto pressure by states on companies, without any accountability or due regard for the rule of law.

What is the Impact Assessment saying to justify this proposal?

Unlike with the Terrorism Directive, the European Commission presented an Impact Assessment alongside its proposal for a Terrorism Regulation. The Commission has filled 146 pages with unsupported claims, misreadings of the public consultation on illegal content online, and many arguments that in fact advocate against having this proposal in the first place. The impact assessment recognises that:

  • Only 6% of respondents to a recent public consultation had encountered terrorist content online – and yet the Commission claims that we need a new Regulation to prevent its dissemination. As approximately 75% of reports to national hotlines are incorrect, the actual figure is more likely to be below 2%.
  • 75% of respondents considered the internet to be safe – but even that is not enough to stop the Commission’s political drive to push more “terrorism” legislation.
  • It is difficult to find a harmonised definition of “terrorist propaganda” – and yet, instead of conducting a public consultation on how to better define it, the Commission launches a new instrument that won’t solve the actual issue.
  • Member States have claimed that removal of content “can impair an investigation and reduce the chances of disrupting criminal activity and obtaining the necessary evidence for prosecution purposes” – and yet some of the proposed measures would lead to companies unilaterally deciding to remove content.
  • There is “rich literature” on the biases, inherent errors and discrimination that can lead to erroneous algorithmic decision-making – and yet the Commission proposes a Regulation to implement exactly this type of measure.

Another example of non-evidence-based policy-making is that the impact assessment does not analyse the costs entailed in setting up the hash databases necessary for automated removal of content. And yet, the draft Regulation suggests this as one of the measures to be implemented in all EU Member States.
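To make the mechanism in question concrete: a hash database lets platforms detect exact re-uploads of content that has already been removed. The sketch below is purely illustrative (all names and data are invented, and the proposal does not specify any particular design); it shows the matching step whose infrastructure costs the impact assessment leaves unquantified, and how trivially an exact-hash match is evaded.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying a file byte-for-byte."""
    return hashlib.sha256(data).hexdigest()

class HashDatabase:
    """Toy stand-in for a shared database of hashes of removed content."""

    def __init__(self):
        self._known = set()

    def flag(self, data: bytes) -> None:
        # Record the hash of content that has been taken down.
        self._known.add(fingerprint(data))

    def is_flagged(self, data: bytes) -> bool:
        # An upload is blocked only if its hash matches exactly.
        return fingerprint(data) in self._known

db = HashDatabase()
db.flag(b"previously removed video bytes")

print(db.is_flagged(b"previously removed video bytes"))   # True
print(db.is_flagged(b"previously removed video bytes."))  # False: one changed byte defeats an exact-hash match
```

Because a cryptographic hash only matches byte-identical files, real deployments rely on perceptual hashing, which tolerates small changes but introduces exactly the error-prone algorithmic decision-making that the “rich literature” cited in the impact assessment warns about.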

Why does the Commission propose a new Regulation now?

Despite the lack of evidence that these measures would prevent terrorist attacks, or that they would be appropriate and proportionate, the proposal is here. It is almost as if the decision had been made before any assessment of the impact this proposal would have on the actual fight against terrorism.

The Directive on combating terrorism obliges the European Commission to present, by 2021, a report on the impact of the legislation on “fundamental rights and freedoms, including on non-discrimination, on the rule of law, and on the level of protection and assistance provided to victims of terrorism”. On that basis, the Commission is supposed to consider whether follow-up actions are needed. Instead of first checking the impact of the existing legislation, the Commission has rushed into a new proposal and is aiming to finalise it before the European elections in May 2019.

It is regrettable that legislation is exploited to give citizens a false sense of security, while it is actually undermining their rights and freedoms.

EDRi is following this dossier very closely. As a first step, we will publish a policy paper and suggested amendments to the proposed Regulation in the coming weeks, as well as a document pool gathering all documentation around this file.

Joint Press Release: EU Terrorism Regulation – an EU election tactic (12.09.2018)
https://edri.org/press-release-eu-terrorism-regulation-an-eu-election-tactic/

Proposal for a Regulation on preventing the dissemination of terrorist content online (12.09.2018)
http://www.europarl.europa.eu/RegData/docs_autres_institutions/commission_europeenne/com/2018/0640/COM_COM(2018)0640_EN.pdf

Impact Assessment accompanying the Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online (12.09.2018)
https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-preventing-terrorist-content-online-swd-408_en.pdf

EU Parliament’s anti-terrorism draft Report raises major concerns (10.10.2018)
https://edri.org/eu-parliaments-anti-terrorism-draft-report-raises-major-concerns/

Terrorism Directive: Document pool (24.11.2016)
https://edri.org/terrorism-directive-document-pool/

(Contribution by Diego Naranjo and Maryant Fernández Pérez, EDRi)

24 Oct 2018

CJEU introduces new criteria for law enforcement access to data

By IT-Pol and EDRi

On 2 October 2018, the Court of Justice of the European Union (CJEU) delivered a new ruling in the “Ministerio Fiscal” case on access to data retained by electronic communications service providers under the scope of the ePrivacy Directive.


While investigating the robbery and theft of a mobile phone, the Spanish police asked an investigating magistrate to order various providers of electronic communications services to disclose the telephone numbers that had been activated during a twelve-day period with the International Mobile Equipment Identity (IMEI) code of the stolen device, as well as the names and addresses of the subscribers of the SIM cards used for this activation. The magistrate denied the request on the grounds that the criminal offence did not fulfil the requirements for serious offences in the Spanish Law 25/2007 on the retention of data relating to electronic communications and to public communication networks. On appeal by the prosecutor, a Spanish court referred the case to the CJEU.

The CJEU ruled that access to retained data for the purpose of determining the owners of the SIM cards used to activate a mobile device entails an interference with the owners’ fundamental rights to privacy and personal data protection. However, the CJEU clarified that if the purpose of accessing the retained data is solely to obtain the subscriber identity, Article 15(1) of the ePrivacy Directive allows restrictions of the rights provided for by the Directive for the prevention, investigation, detection and prosecution of criminal offences in general – not just serious criminal offences.

What is interesting about this ruling is that in its previous Tele2/Watson judgment, the CJEU had ruled that access to the retained data is limited to cases involving serious crime. To reconcile the two rulings, the CJEU explains that this is because the objective pursued by the access must be proportionate to the seriousness of the interference with the fundamental rights that the access entails. The Tele2 case is concerned with access to retained data which, taken as a whole, allows precise conclusions to be drawn regarding the private lives of the persons concerned. Such access constitutes a serious interference with fundamental rights and can be justified only by the objective of fighting serious crime. If, however, the access to retained data is a non-serious interference, as in the present case involving access to the subscriber’s identity, access can be justified by the objective of fighting criminal offences generally.

The question that immediately comes to mind is whether this new case in any way departs from the strict conditions for access to retained data set forth in the Tele2/Watson judgment, and, in particular, whether the Ministerio Fiscal case waters down some of these conditions, thus allowing for access to retained data by law enforcement authorities in a greater number of scenarios.

First and foremost, it is important to note that the overlap between the two judgments is fairly small since they are concerned with very different questions:

The object of the Tele2/Watson case is the retention of data which, taken as a whole, is liable to allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained (first part of the judgment) and access to such data retained by electronic communications service providers (second part).

In contrast, the Ministerio Fiscal case is concerned with the presumably very narrow situation where accessing data does not constitute a serious interference. This includes obtaining a subscriber identity. However, the CJEU confirms that access to retained data which reveals the date, time, duration and recipients of the communications, or the locations where the communications took place, must be regarded as a serious interference since that data allows precise conclusions to be drawn about the private lives of the persons concerned (cf. paragraph 60 of the ruling). In these situations, access to the retained data must be limited to cases involving serious crimes, as in the Tele2 case.

There is, however, one scenario where the new judgment may add some confusion to the interpretation of the Tele2 judgment. According to paragraphs 108-111 of the Tele2 judgment, targeted data retention requirements for the purpose of fighting serious crime are compatible with EU law (unlike general and undifferentiated data retention which is illegal under EU law). Moreover, it would be natural to read paragraph 115 of the Tele2 judgment as always limiting the access to such retained data to cases involving serious crime because the targeted data retention requirement in itself constitutes a serious interference with fundamental rights that can only be justified by the objective of fighting serious crime. Allowing access to the retained data in cases not involving serious crime would arguably undermine the purpose limitation at the retention stage.

The CJEU did not define what can constitute a serious crime. Similarly, the Ministerio Fiscal ruling does not clearly refer to why the data was retained in the first place or whether that should affect the conditions for access to the retained data.

Because there is no apparent connection to why the data is retained, the CJEU now seems to say in paragraphs 54-61 of the Ministerio Fiscal ruling that if access is only sought to minor parts of the retained data, for example only for the purpose of obtaining the subscriber identity, accessing that data does not constitute a serious interference, even if the data is only available in the first place because of a (targeted) data retention order that can only be justified by the objective of fighting serious crime. This situation could arise in practice if the data retention order includes all data items in the (annulled) Data Retention Directive for a targeted group of persons, but access to the retained data is only requested for the purpose of determining the identity of a subscriber who has been assigned a specific dynamic IP address.
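The dynamic-IP scenario just described boils down to a timestamped lookup in retained assignment logs. A minimal sketch, with invented records and field names, of what such a query looks like:

```python
from datetime import datetime

# Illustrative only: retained IP-assignment records an ISP might keep.
# Dynamic addresses are reassigned, so each record carries a validity window.
retained_assignments = [
    {"ip": "203.0.113.7", "subscriber": "subscriber-A",
     "start": datetime(2018, 3, 1, 10, 0), "end": datetime(2018, 3, 1, 14, 0)},
    {"ip": "203.0.113.7", "subscriber": "subscriber-B",
     "start": datetime(2018, 3, 1, 14, 0), "end": datetime(2018, 3, 1, 18, 0)},
]

def subscriber_for(ip: str, at: datetime):
    """Resolve a dynamic IP at a given moment to the subscriber it was assigned to."""
    for rec in retained_assignments:
        if rec["ip"] == ip and rec["start"] <= at < rec["end"]:
            return rec["subscriber"]
    return None  # no retained assignment covers that moment

print(subscriber_for("203.0.113.7", datetime(2018, 3, 1, 12, 0)))  # subscriber-A
```

The query touches a single record and reveals only an identity, which is why the Court can treat such access as a non-serious interference, even though the full log it is drawn from would support far more revealing queries about a person’s private life.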

Leaving aside this potential weakening of the strict Tele2 conditions for access to retained data, there are three main positive aspects of the new judgment from a digital rights perspective:

  1. The judgment clarifies that traffic data under the ePrivacy Directive includes the subscriber name and the IMEI number of the mobile device (cf. paragraphs 40-42). This implies that access to such data falls within the scope and safeguards of the ePrivacy Directive, and that the ePrivacy Directive cannot be circumvented by attempts to expand the definition of subscriber data.
  2. The judgment notes in paragraph 51 with reference to the Court’s Opinion on the EU-Canada Passenger Name Records (PNR) agreement that access to any retained data, including subscriber identity, constitutes an interference with the fundamental right to the protection of personal data. Therefore, the CJEU requires substantive and procedural conditions based on objective criteria for the access to the retained PNR data, and the access must be subject to prior review by a court or an independent administrative body. In the Ministerio Fiscal case, the CJEU was not asked to consider substantive and procedural conditions for access. Nonetheless, paragraph 51 of the judgment has potential implications for other parts of EU law, most notably the proposed e-Evidence Regulation, which allows for access to not just subscriber data, but also so-called access data (data necessary to identify the user of a service) for all criminal offences and without any requirements of prior review by a court (a prosecutor’s approval can be sufficient) or an independent administrative body.
  3. In paragraphs 34-37 of the Ministerio Fiscal judgment, the CJEU reiterates what it said in the Tele2/Watson judgment – that national legislation permitting access by competent authorities to personal data retained by electronic communications service providers cannot be regarded as an activity of the state falling outside the scope of Article 15(1) of the ePrivacy Directive, since access by competent authorities necessarily presupposes processing of personal data by the electronic communications service providers.

CJEU judgment in case C-207/16 Ministerio Fiscal (02.10.2018)
http://curia.europa.eu/juris/document/document.jsf?docid=206332&mode=req&pageIndex=1&dir=&occ=first&part=1&text=&doclang=EN&cid=252986

CJEU judgment in joined Cases C‑203/15 and C‑698/15 (Tele2/Watson)
http://curia.europa.eu/juris/document/document.jsf?text=&docid=186492&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=2525180

(Contribution by Jesper Lund, IT-Pol, Denmark, and Maryant Fernández Pérez, EDRi)

24 Oct 2018

ePrivacy: Public benefit or private surveillance?

By Yannic Blaschke

92 weeks after the proposal was published, the EU is still waiting for an ePrivacy Regulation. The Regulation is supposed to replace the current ePrivacy Directive, aligning it with the General Data Protection Regulation (GDPR).

While the GDPR regulates the ways in which personal data is processed in general, the ePrivacy Regulation specifically regulates the protection of privacy and confidentiality of electronic communications. The data in question not only includes the content and the “metadata” (data on when, where and to whom a person communicated) of communications, but also other identifiers such as “cookies” that are stored on users’ computers. To make the legislation fit for its purpose in regard to technological developments, the European Commission (EC) proposal addresses some of the major changes in communications of the last decade, including the use of so-called “over the top” services, such as WhatsApp and Viber.


The Regulation is currently facing heavy resistance from certain sectors of the publishing and behavioural advertising industry. After an improved text was adopted by the European Parliament (EP), it is now being delayed at the Council of the European Union level, where EU Member States are negotiating the text.

One of the major obstacles in the negotiations is the question of the extent to which providers such as telecommunications companies can use metadata for purposes other than the original service. Some private companies – the same ones that questioned the need for user consent in the GDPR – have now re-wrapped their argument, saying that an “overreliance” on consent would substantially hamper future technologies. Over-reliance on anything is by definition not good, and neither is under-reliance, but such sophistry is a mainstay of lobby language.

However, this lobby attack omits the fact that compatible further processing would not lead only to benign applications in the public interest: since the proposal does not limit further processing to statistical or research purposes, it could just as well be used for commercial ends such as commercial or political manipulation.

But even with regard to the potentially more benevolent applications of AI, it should be kept in mind that automated data processing has in some cases been shown to be highly detrimental to parts of society, especially vulnerable groups. This should not be ignored when evaluating the safety and privacy of aggregate data. For instance, while using location data for “smart cities” can make sense in some narrowly defined circumstances, such as traffic control or natural disaster management, it gains a much more chilling undertone when it leads, for instance, to racial discrimination in company delivery services or law enforcement activities. It is easy to imagine metadata, one of the most revealing and most easily processed forms of personal data, being used for equally crude or misaligned applications, with highly negative outcomes for vulnerable groups. Moreover, where aggregate, pseudonymised data produces adverse outcomes for an individual, not even rectification or deletion of that person’s data will lead to an improvement, as long as the accumulated data of similar individuals is still available.

Another pitfall of this supposedly private, ostensibly pseudonymised way of processing is that even if individual users are not targeted, companies may need to maintain citizens’ metadata in identifiable form in order to link existing data sets with new ones. This could essentially lead to a form of voluntary data retention, which might soon attract the interest of public security actors rapaciously seeking new data sources and new powers. If such access were granted, individuals would essentially be identifiable. Even retaining “only” aggregate data on certain societal groups or minorities might already be enough to spark discriminatory treatment.
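The linking risk described above can be sketched in a few lines of Python. This is an illustrative toy only: the phone numbers, location records and truncated-hash scheme are invented for the example and do not describe any actual provider’s practice. It shows why a stable pseudonym, while hiding the identifier itself, still lets data sets be joined and reversed by anyone who can enumerate the original identifiers.

```python
import hashlib

def pseudonymise(identifier: str) -> str:
    # Replace a direct identifier with a stable hash: a common but
    # weak form of pseudonymisation (illustrative scheme only)
    return hashlib.sha256(identifier.encode()).hexdigest()[:12]

# Hypothetical monthly location records held by a provider, keyed by pseudonym
january = {pseudonymise("+32475000001"): "cell_tower_A"}
february = {pseudonymise("+32475000001"): "cell_tower_B"}

# Because the pseudonym is stable, records link trivially across data sets ...
linked = {p: (january[p], february.get(p)) for p in january}

# ... and anyone able to enumerate the original identifiers (e.g. every
# number in a national numbering plan) can reverse the pseudonyms
rainbow = {pseudonymise(n): n for n in ("+32475000001", "+32475000002")}
for pseudonym, movements in linked.items():
    print(rainbow[pseudonym], movements)
```

The point is structural, not cryptographic: as long as the same pseudonym is reused so that new data can be joined to old, the data set behaves like retained, identifiable data for anyone with access and a list of candidate identifiers.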

Although the Austrian Presidency of the Council of the European Union did include some noteworthy safeguards for compatible further processing in its most recent draft compromise, most notably the obligation to consult the national supervisory authority or to conduct a data protection impact assessment, the current proposal does not adequately empower individuals. Given that interpretations of what constitutes “compatible” further processing may vary significantly among Member States (which would lead to years of litigation), it should be up to citizens to decide (and up to industry to prove) which forms of metadata processing are safe, fair and beneficial to society.

Five Reasons to be concerned about the Council ePrivacy draft (26.09.2018)
https://edri.org/five-reasons-to-be-concerned-about-the-council-eprivacy-draft/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Your ePrivacy is nobody else’s business (30.05.2018)
https://edri.org/your-eprivacy-is-nobody-elses-business/

e-Privacy revision: Document pool (10.01.2017)
https://edri.org/eprivacy-directive-document-pool/

(Contribution by Yannic Blaschke, EDRi intern)

24 Oct 2018

New standards for networking challenges regulators & digital rights

By Article 19

On 17 October, the European body of telecommunications regulators (BEREC) organised a stakeholder meeting in Brussels, inviting industry, consumers, regulators and citizens’ rights groups to reflect on the BEREC Work Programme 2019.


Notwithstanding BEREC’s strong commitment to exploring new ways to boost consumer empowerment, the lack of consumer and human rights voices in the room did not go unnoticed. BEREC received criticism for having prepared public consultations in haste, and many stakeholders called for greater transparency. It also received praise for its handling of net neutrality and roaming to date.

Industry voices were predictably split along familiar lines. Vertically integrated operators – entities that exercise commercial control over both the physical network and the provision of services to consumers over that network – continue to feel threatened by web services and vertically separated services, while challengers and new entrants continue to prioritise competition as the vehicle to drive investment. Unsurprisingly, the recent advances in 5G cellular technologies bring the question of market access to the foreground of BEREC’s concerns.

Some stakeholders present at the forum expressed strong enthusiasm for a European industrial policy view of 5G and called for roll-out and investment. BEREC itself appears cognisant of the fact that it does not set industrial policy for the EU but implements legislation decided by the legislators. The new European Electronic Communications Code (EECC) is clear: there will be a continued focus on effective competition, with a more harmonised approach across the EU.

While in fixed networks vertical separation increasingly looks like the road to more investment and better infrastructure, vertical integration is still present in mobile networks. 5G standards development, in fact, appears to build vertical integration into the technical architecture of the network. It will be imperative for BEREC to work not only on the aspects of effective competition that arise from economic reports and market surveys, but also to engage with the way in which technical designs shape the market. As operators attempt to innovate themselves out of competition, forces for the public good should instead consider which innovations may enable better competition. The open standards and architecture of the internet itself are a testament to the ability of technical standards to enable competition, innovation and access.

Places to start looking for a more competition-friendly technical architecture include the mechanisms for authenticating to the network. A network operator should not have a monopoly, by technical design, on granting access to competitors or consumers – having separable technical layers in the network ensures long-term sustainability. Mechanisms for hand-over between network operators must also combine security and interoperability: the internet shows that these problems can be solved, but it requires determination and will. For a software-defined network, it will be crucial to understand where the power to configure the characteristics of the network resides. We may ask what the appropriate defaults should be in a network operator market dominated by a few actors.

6th BEREC Stakeholder Forum
https://berec.europa.eu/eng/events/berec_events_2018/173-6th-berec-stakeholder-forum

Draft BEREC Work Programme 2019
https://berec.europa.eu/eng/document_register/subject_matter/berec/public_consultations/8249-draft-berec-work-programme-2019

Proposed Directive establishing the European Electronic Communications Code (14.09.2016)
https://ec.europa.eu/digital-single-market/en/news/proposed-directive-establishing-european-electronic-communications-code

Public consultation on draft BEREC WP 2019
https://berec.europa.eu/eng/news_consultations/ongoing_public_consultations/5140-public-consultation-on-draft-berec-wp-2019

(Contribution by Amelia Andersdotter and Maria Luisa Stasi, EDRi member Article 19)

24 Oct 2018

Closed-doors discussions to filter the internet continue

By Diego Naranjo

On 12 September 2018, the European Parliament (EP) adopted the worst imaginable amendments to the copyright Directive proposal. After this disastrous vote, discussions moved behind closed doors, to the informal trilogue discussions, where the Council of the European Union (EU Member States), representatives of the Parliament and the European Commission (EC) are trying to reconcile the two positions on the text (the Council proposal and the EP text). Whether they will do so soon is now less clear.


The Italian government has expressed its intention to move away from the text previously agreed in the Council, since the new government does not support some aspects of it, namely the upload filters. As several other Member States were hardly enthusiastic about the proposal to start with, there is a real possibility that the Council will end up revising its own version. The Council text explicitly asks for upload filters in Article 13 of the Directive, while the EP text “only” leads to the same result by changing the liability of platforms.

Given the concerns around Article 13, it is possible that the Council will decide to review its position, and Member States need to discuss their positions further. The text has faced strong criticism from academics, civil society, librarians, the United Nations Special Rapporteur on Freedom of Expression, and many others. If the EU wants to achieve a successful reform that won’t be challenged immediately in the Court of Justice of the European Union (CJEU), this further debate is crucial. If the worst parts of the text are not amended, the EU could be rushing to adopt a text that is wrong on many levels. We could end up with a closed, filtered and censored internet, where the side effects of the measures are far worse than the alleged benefits they will bring for the music industry and collecting societies.

What’s next for Europe’s internet censorship plan? (10.10.2018)
https://edri.org/whats-next-for-europes-internet-censorship-plan/

Press Release: EU Parliament flip-flops backwards on copyright (12.09.2018)
https://edri.org/press-release-eu-parliament-flip-flops-backwards-on-copyright/

Deconstructing an MEP’s support for the Copyright Directive (12.09.2018)
https://edri.org/deconstructing-an-meps-support-for-the-copyright-directive/

(Contribution by Diego Naranjo, EDRi)

19 Oct 2018

Civil society calls for evidence-based solutions to disinformation

By EDRi

Human and digital rights organisations Access Now, Civil Liberties Union for Europe and European Digital Rights (EDRi) published a joint report on 18 October 2018 evaluating the European Commission’s online disinformation and propaganda initiatives.

The report encourages good policy development based on thorough research and evidence. Neither the European Commission nor Member States should propose binding policies until evidence and accurate benchmarks have been identified.

“We urge the European Commission to refrain from issuing any binding policy, simply because there is not enough meaningful data to underpin evidence-based policy. Research is needed to evaluate the impact of online disinformation and propaganda on society, and to develop measures according to the fact-based findings of that research. Any measures should respect freedom of expression and data protection”, said Éva Simon, Freedom of Expression and Privacy Advocacy Officer of Liberties.

“Any measure to tackle the complex topic of online disinformation must not be blindly reliant on automated means, artificial intelligence or similar emerging technologies without ensuring that the design, development and deployment of such technologies are individual centric and respect human rights”, said Fanny Hidvégi, European Policy Manager with Access Now.

The EU should move away from superficial solutions and propose practical, proportionate solutions to tackle the root causes of online disinformation and manipulation, such as the dominant data-hungry business models in the market,

said Maryant Fernández Pérez, Senior Policy Advisor at European Digital Rights (EDRi).

The three organisations warn against some of the solutions proposed by the Commission. Examples of such flawed solutions are institutionalised fact-checking, blind faith in artificial intelligence and emerging technologies, the “EU vs. Disinformation” campaign, and limits on anonymity.

As a possible way forward, the report advocates for three more meaningful solutions:

  1. Address the business model of online manipulation through appropriate data protection, privacy and competition laws.
  2. Prevent the misuse of personal data in elections.
  3. Increase media information and literacy.

With this analysis and these solutions, the report aims to feed into the European Commission’s Action Plan on Disinformation, which the Commission is expected to present by the end of the year.

EDRi, Liberties and Access Now issue this report today following their common understanding on addressing disinformation in the digital age.

To read all our recommendations, download the full report here.

Questions and media inquiries should be addressed to:

Fanny Hidvégi
Access Now – European Policy Manager
fanny@accessnow.org
+32489825097

Éva Simon
Civil Liberties Union for Europe – Freedom of Expression and Privacy Advocacy Officer
eva.simon@liberties.eu
+49 3091566653

Andreea Belu
European Digital Rights – Campaigns and Communications Manager
andreea.belu@edri
+32 2 274 25 70
