18 Dec 2019

Support our work by investing in a piece of e-clothing!

By EDRi

Your privacy is increasingly under threat. European Digital Rights works hard to have you covered. But there’s only so much we can do.

Help us help you. Help us get you covered.


Check out our 2020 collection!*

*The items listed below are e-clothes. That means they are electronic. Not tangible. But still very real – like many other things online.

Your winter stock(ings) – 5€
A pair of hot winter stockings can really help one get through cold and lonely winter days. Help us fight for your digital rights by investing in a pair of these superb privacy-preserving fishnet stockings. This delight also makes a lovely gift for someone special.


A hat you can leave on – 10€
Keep your head undercover with this marvellous piece of surveillance resistance. Adaptable to all temperatures and – for the record – to several CCTV models, this item really lives up to its value. The hat is an indispensable accessory when visiting your favourite public space packed with facial recognition technologies.


Winter/Summer Cape – 25€
Are you feeling heroic yet? Our flamboyant Winter/Summer cape is designed to keep you warm and cool. This stylish accessory takes the weight off your shoulders – purchase it and let us take care of fighting for your digital rights!


Just another White T-Shirt – 50€
A white t-shirt can do wonders when you’re trying to blend in with a white wall. This wildly unexciting but versatile classic is one of the uncontested fundamental pillars of your privacy enhancing e-wardrobe.


THE privacy pants ⭐️ – 100€
This ultimate piece of resistance is engineered to keep your bottom warm in the coldest winter, but airy during the hottest summer days. Its colour guarantees the ultimate tree (of knowledge) look. The item comes with a smart zipper.


Anti-tracksuit ⭐️ – 250€
Keep your digital life healthy with the anti-tracking tracksuit. The fabric is engineered to bounce out any attempt to get your privacy off track. Plus, you can impress your beloved babushka too.


Little black dress ⭐️ – 500€
Whether at a work cocktail party, a funeral, a shopping spree or a Christmas party – this dress will turn you into the centre of attention, in a (strangely) privacy-respecting manner.


Sew your own ⭐️ – xxx€
Not convinced by any of the items above? Set your inner tailor free, customise your very own unique designer garment, and put a price tag of your choice on it.



⭐️ Items priced 100€ and above are delivered with an (actual, analog, non-symbolic) EDRi iron-on privacy patch that you can attach to your existing (actual, analog, non-symbolic) piece of clothing or accessory. If you wish to receive this additional style and privacy enhancer, don’t forget to provide us with your postal address (either via the donation form, or in your bank transfer message)!


Questions? Remarks? Ideas? Please contact us at brussels [at] edri [dot] org!

18 Dec 2019

The many faces of facial recognition in the EU

By Ella Jakubowska

We previously launched the first article and case study in a series exploring the human rights implications of facial recognition technology. In this post, we look at how different EU Member States, institutions and other countries worldwide are responding to the use of this tech in public spaces.

Live facial recognition technology is increasingly used to identify people in public, often without their knowledge or properly-informed consent. Sometimes referred to as face surveillance, concerns about the use of these technologies in public places are gaining attention across Europe. Public places are not well-defined in law, but can include open spaces like parks or streets, publicly-administered institutions like hospitals, spaces controlled by law enforcement such as borders, and – arguably – any other places that people wanting to take part in society have no ability to opt out from entering. As it stands, there is no EU consensus on either the legitimacy or the desirability of using facial recognition in such spaces.

Public face surveillance is being used by many police forces across Europe to look out for people on their watch-lists; for crowd control at football matches in the UK; and in tracking systems in schools (although so far, attempts to do this in the EU have been stopped). So-called “smart cities” – where technologies that involve identifying people are used to monitor environments with the outward aim of making cities more sustainable – have been implemented to some degree in at least eight EU Member States. Outside the EU, China is reportedly using face surveillance to crack down on the civil liberties of pro-democracy activists in Hong Kong, and there are mounting fears that Chinese surveillance tech is being exported to the EU and even used to influence UN facial recognition standards. Such issues have brought facial recognition firmly onto the human rights agenda, raising awareness of its (mis)use by both democratic and authoritarian governments.

How is the EU grappling with the facial recognition challenge?

Throughout 2019, a number of EU Member States responded to the threat of facial recognition, although their approaches reveal many inconsistencies. In October 2019, the Swedish Data Protection Authority (DPA) – the national body responsible for personal data under the General Data Protection Regulation (GDPR) – approved the use of facial recognition technology for criminal surveillance, finding it legal and legitimate (subject to clarification of how long the biometric data will be kept). Two months earlier, it had levied a fine of 20 000 euro over an attempt to use facial recognition in a school. Similarly, the UK DPA has advised police forces to “slow down” due to the volume of unknowns – but has stopped short of calling for a moratorium. UK courts have failed to see their DPA’s problem with facial recognition, despite citizens’ fears that it is highly invasive. In the only European ruling so far, the high court in Cardiff found police use of public face surveillance cameras to be proportionate and lawful, despite accepting that the technology infringes on the right to privacy.

The French DPA took a stronger stance than its UK counterpart, advising a school in the city of Nice that the intrusiveness of facial recognition means its planned face recognition project cannot be implemented legally. It emphasised the “particular sensitivity” of facial recognition due to its association with surveillance and its potential to violate rights to freedom and privacy, and highlighted the enhanced protections required for minors. Importantly, France’s DPA concluded that legally-compliant and equally effective alternatives to face recognition, such as using ID badges to manage student access, can and should be used instead. Echoing this stance, the European Data Protection Supervisor, Wojciech Wiewiórowski, issued a scathing condemnation of facial recognition, calling it a symptom of rising populist intolerance and “a solution in search of a problem.”

A lack of justification for the violation of fundamental rights

However, as in the UK, the French DPA’s views have frequently clashed with those of other public bodies. For example, the French government is pursuing the controversial Alicem digital identification system despite warnings that it does not comply with fundamental rights. There is also an inconsistency in the differentiation made between the surveillance of children and adults. The reason given by both France and Sweden for rejecting facial recognition of children is that it will create problems for them in adulthood. By this same logic, it is hard to see how the justification for any form of public face surveillance – especially when it is unavoidable, as in public spaces – would meet legal requirements of legitimacy or necessity, or comply with the GDPR’s necessarily strict rules for biometric data.

The risks and uncertainties outlined thus far have not stopped Member States from accelerating their uptake of facial recognition technology. According to the EU’s Fundamental Rights Agency (FRA), Hungary is poised to deploy an enormous facial recognition system for multiple reasons including road safety and the Orwellian-sounding purpose of “public order”; the Czech Republic is increasing its facial recognition capacity at Prague airport; “extensive” testing has been carried out by Germany and France; and EU-wide facial recognition for migration control is in the works. EDRi member SHARE Foundation has also reported on its illegal use in Serbia, where the interior ministry’s new system has failed to meet the most basic requirements under law. And of course, private actors have a vested interest in influencing and orchestrating European face recognition use and policy: lobbying the EU, tech giant IBM has promoted its facial recognition technology to governments as “potentially life-saving” and even funded research that dismisses concerns about the ethical and human impacts of AI as “exaggerated fears.”

As Interpol admits, “standards and best practices [for facial recognition] are still in the process of being created.” Despite this, facial recognition continues to be used in both public and commercial spaces across the EU – unlike in the US, where four cities including San Francisco have proactively banned facial recognition for policing and other state uses, and a fifth, Portland, has started legislative proceedings to ban facial recognition for both public and private purposes – the widest ban so far.

The need to ask the big societal questions

Once again, these examples return to the idea that the problem is not technological, but societal: do we want the mass surveillance of our public spaces? Do we support methods that will automate existing policing and surveillance practices – along with the biases and discrimination that inevitably come with them? When is the use of technology genuinely necessary, legitimate and consensual, rather than just sexy and exciting? Many studies have shown that – despite claims by law enforcement and private companies – there is no link between surveillance and crime prevention. Even when studies have concluded that “at best” CCTV may help deter petty crime in parking garages, this has only been with exceptionally narrow, well-controlled use, and without the need for facial recognition. And as explored in our previous article, there is overwhelming evidence that rather than improving public safety or security, facial recognition creates a chilling effect on a shocking smorgasbord of human rights.

As in the case of the school in Nice, face recognition cannot be considered necessary and proportionate when there are many other ways to achieve the same aim without violating rights. FRA agrees that general reasons of “crime prevention or public security” are neither legitimate nor legal justifications per se, and so facial recognition must be subject to strict legality criteria.

Human rights exist to help redress the imbalance of power between governments, private entities and citizens. The highly intrusive nature of face surveillance, in contrast, opens the door to mass abuses of state power. DPAs and civil society must therefore continue to pressure governments and national authorities to stop the illegal deployment and unchecked use of face surveillance in Europe’s public spaces. Governments and DPAs must also take a strong stance towards the private sector’s development of face surveillance technologies, demanding and enforcing GDPR and human rights compliance at every step.

Facial Recognition and Fundamental Rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/

In the EU, facial recognition in schools gets an F in data protection (10.12.2019)
https://www.accessnow.org/in-the-eu-facial-recognition-in-schools-gets-an-f-in-data-protection/

Data-Driven Policing: The Hardwiring of Discriminatory Policing Practices across Europe (05.11.2019)
https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf

Facial recognition technology: fundamental rights considerations in the context of law enforcement (27.11.2019)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper.pdf

Serbia: Unlawful facial recognition video surveillance in Belgrade (04.12.2019)
https://edri.org/serbia-unlawful-facial-recognition-video-surveillance-in-belgrade/

At least 10 police forces use face recognition in the EU, AlgorithmWatch reveals (11.12.2019)
https://algorithmwatch.org/en/story/face-recognition-police-europe/

(Contribution by Ella Jakubowska, EDRi intern)

18 Dec 2019

Austrian government hacking law is unconstitutional

By Epicenter.works

On 11 December 2019, the Austrian Constitutional Court ruled that the surveillance law permitting the use of spying software to read encrypted messages violates the fundamental right to respect for private life (Article 8 ECHR), the fundamental right to data protection (§ 1 of the Austrian data protection law) and the constitutionally guaranteed prohibition of unreasonable searches (Article 9 of the Austrian bill of rights – Staatsgrundgesetz).

This judgement comes after the legalisation of government spyware in Austria had already been prevented twice. In 2016, a draft bill was withdrawn by the Minister of Justice after heavy criticism from civil society, technical experts and academics. In a second attempt in 2017, the legalisation of government spyware was included in a broader surveillance package. That draft bill had already reached committee stage in the Parliament, but was withdrawn after a record number of consultation responses from individuals and high-profile institutions such as the economic chamber, the high court and the data protection board. In 2018, the far-right government adopted the contested surveillance package, including government spyware and the indiscriminate use of licence plate recognition in Austria.

The constitutionality of this law was subsequently challenged by a third of the Members of Parliament. In the judgement published on 11 December, the court pointed out that there is a huge difference between traditional wiretapping and the infiltration of a computer system in order to read encrypted messages. Information about the personal use of computer systems provides insight into all areas of life and allows conclusions to be drawn about the user’s thoughts, preferences, views and disposition. The court especially criticised that the law allowed the spying software to be used to prosecute property offences that carry a relatively low maximum penalty, such as burglary (maximum penalty of five years).

Further, the court emphasised that the control mechanisms were insufficient. The law required judicial approval at the beginning of the measure, and oversight by the legal protection officer during the measure. The legal protection officer is a special Austrian institution that is supposed to protect the rights of those affected by secret investigations. Given the peculiarities and sensitivity of the surveillance measure, this control mechanism was not enough of a safeguard for the Constitutional Court. The court required effective independent supervision by an institution equipped with the appropriate technical means and human resources, not only at the beginning of the measure, but for the entire duration of the surveillance.

The other provision challenged before the Constitutional Court was the mandatory retention of data on car movements on Austria’s streets. The recording of licence plates, car types and driver pictures in a centralised database of the Ministry of the Interior was struck down as a form of indiscriminate data retention. A similar type of mass surveillance of telecommunications metadata was struck down in 2014. Austria is now one of very few EU countries without telecommunications data retention and government spyware. Uniquely, the debate in Austria focused on the security risks inherent in government spyware. Through years of campaigning, most people have come to understand that the vulnerabilities required to infect a target device are a risk for everybody using the same operating system or application.

epicenter.works
https://en.epicenter.works

Summary of epicenter.works’ campaign against government spyware
https://en.epicenter.works/thema/state-trojan

Summary of epicenter.works’ campaign against the surveillance package
https://en.epicenter.works/thema/surveillance-package

Judgement of the Austrian Constitutional Court (only in German, 11.12.2019)
https://www.vfgh.gv.at/medien/Kfz-Kennzeichenerfassung_und__Bundestrojaner__verfass.de.php

(Contribution by Alina Hanel and Thomas Lohninger, EDRi member epicenter.works, Austria)

18 Dec 2019

Spain: New law threatens internet freedoms

By Xnet

On 5 November 2019, Royal Decree-Law 14/2019, adopted on 31 October, was published in the Spanish Official State Gazette (BOE). This was just five days before the general elections of 10 November. The Decree was adopted under an undefined “exceptionality and urgency”, justified by the “challenges posed by new technologies from the point of view of public security” – those challenges being, according to the Decree, “disinformation activities” and “interference in political participation processes”.

This Royal Decree-Law modifies the regulation of the internet and electronic communications in order to grant the government greater powers to control these technologies in a range of vaguely defined situations. It establishes network access as increasingly administered by the state, with no judicial ruling required to limit that access. This could pose a threat to human rights, particularly to freedom of expression.

From now on, and without any judicial intervention to prevent possible abuses and safeguard citizens’ rights, the Government in office, by a decision of the Ministry of Economy and Business, has the power to intervene in, lock, or shut down the internet and electronic communication networks or services, including the “infrastructures capable of hosting public electronic communications networks, their associated resources or any element or level of the network or service necessary”. This intervention is defined in the following terms (in *bold*, what has been included or modified through the Royal Decree-Law):

1. Prior to the beginning of the sanctioning procedure, the competent body of the Ministry of Economy and Business may order, by means of a resolution without a prior hearing, the cessation of the alleged infringing activity when there are compelling reasons of urgency based on any of the following assumptions:
(a) *When there is an immediate and serious threat to public order, public security or national security.*
(b) When there is an immediate and serious threat to *public health* (replaces “endangering human life”).
(c) When the alleged infringing activity may cause serious damage to the functioning of public security, civil protection and emergency services.
(d) When other electronic communications services or networks are seriously interfered with.
(e) *When it creates serious economic or operational problems for other providers or users of electronic communications networks or services or other users of the radio spectrum.*

In addition, other modifications have been introduced that reinforce the protection of private monopolies and prohibit the development of, and research into, blockchain technologies as identification systems.

Overall, the Decree’s content contradicts its own explanatory memorandum, which lists the protection and improvement of “the privacy and digital rights of the citizen” among the Decree’s objectives. In its adopted form, the Decree could, on the contrary, easily be used to control and silence domestic political opposition, and to prevent mass demonstrations and strikes against unpopular government policies all around Spain. Finally, it prohibits certain uses of blockchain and similar distributed technologies until the EU publishes guidelines on the subject – a rule detrimental to research and innovation in Spain.

To ensure legal certainty in the protection of its citizens’ human rights, and to regain their trust, the Spanish Government should reconsider this Royal Decree-Law, which was rushed through and signed without considering its human rights implications.

Xnet
https://xnet-x.net

Today in Spain as in China (only in Spanish, 07.11.2019)
https://xnet-x.net/en/hoy-en-espana-como-en-china/

Today in Spain as in China (only in Catalan, 06.11.2019)
https://www.vilaweb.cat/noticies/avui-espanya-com-xina-opinio-simona-levi/

(Contribution by Simona Levi, EDRi member Xnet, Spain)

18 Dec 2019

Online content moderation: Where does the Commission stand?

By Chloé Berthélémy

The informal discussions (trilogues) between the European Parliament, the Council of the European Union and the European Commission are progressing on the Terrorist Content Regulation (TCO, aka “TERREG”). While users’ safeguards and rights-protective measures remain the Parliament’s red lines, the Commission presses the co-legislators to adopt what was a pre-election public relations exercise rather than an urgently needed piece of legislation. Meanwhile, the same European Commission has just delivered a detailed opinion to France criticising its currently debated hate speech law (the “Avia law”). The contrast between the Commission’s positions supporting certain measures in the Terrorist Content Regulation and opposing similar ones in the French Avia law is so striking that it is difficult to believe they come from the same institution.

Scope of targeted internet companies

In its letter to the French government, the Commission mentions that “it is not certain that all online platforms in the scope of the notified project […] pose a serious and grave risk” in light of the objective of fighting hate speech online. The Commission also notes that the proportionality of the envisaged measures is doubtful and that a clear impact assessment is missing, especially regarding small and medium-sized enterprises (SMEs) established in other EU Member States.

These considerations for proportionate and targeted legislative measures have completely vanished in the context of the Terrorist Content Regulation. The definition set out in the Commission’s draft Regulation is too broad and covers an extremely large, diverse and unpredictable range of entities. Notably, it covers even small communications platforms with a very limited number of users. The Commission asserts that terrorist content is currently being disseminated over smaller sites, and therefore, the Regulation obliges them “to take responsibility in this area”.

What justifies these two very different approaches to a similar problem? That is not clear. On the one hand, the Commission denounces the missing evaluation of the impact that an obligation to adopt measures preventing the redistribution of illegal content (“re-upload filters”) in the Avia law would have on European SMEs. On the other hand, its impact assessment on the Terrorist Content Regulation provides no analysis of the costs that setting up hash databases for the automated removal of content would entail – and yet it still pushes for such “re-upload filters” in the trilogues.
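To see why such costs and error risks matter, it helps to look at how a hash-database filter works in principle. The sketch below is a minimal illustration, assuming simple exact-match hashing with Python’s standard library; the toy database is invented for the example, and real deployments typically rely on proprietary perceptual-hashing systems that also match re-encoded or slightly altered copies.

```python
# Minimal sketch of a hash-based "re-upload filter" (illustrative only).
# Real systems use perceptual hashes; this toy uses exact SHA-256 matching.
import hashlib

# Hypothetical database of hashes of content previously removed.
# (This entry is the SHA-256 of the bytes b"test", for demonstration.)
removed_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_reupload(upload: bytes) -> bool:
    """Check an upload against the database of previously removed content."""
    return hashlib.sha256(upload).hexdigest() in removed_hashes

# Every single upload must be scanned before publication - which is why
# such filters amount to general monitoring of all user content.
print(is_reupload(b"test"))          # True: matches the database entry
print(is_reupload(b"fresh upload"))  # False: unknown content passes
```

Even in this toy form, the trade-off is visible: the filter must inspect every upload from every user, and a one-byte change defeats exact matching – which is why deployed systems move to fuzzier perceptual hashes, with the false positives that entails.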

Expected reaction time frame for companies

The European Commission criticises the 24-hour deadline the French proposal introduces for companies to react to illegal content notifications. The Commission held that “any time limit set during which online platforms are required to act following notification of the presence of illegal content must also allow for flexibility in certain justified cases, for example where the nature of the content requires a more substantial assessment of its context that could not reasonably be made within the time limit set”. Considering the high fines in cases of non-compliance, the Commission believes it could place a disproportionate burden on companies and lead to an excessive deletion of content, thus undermining freedom of expression.

A year ago, in the original TCO proposal, the Commission strongly supported the deletion of terrorist content online within one hour of receipt of a removal order. No exception for small companies was foreseen despite their limited resources to react in such a short time frame, leaving them with no other choice than to pay the fines or to apply automated processing if they have the means to do so. Although removal orders do not technically require the platform to review the notified content within one hour, the Commission’s proposal allows any competent authority to issue such orders, even if it is not independent.

Terrorist content is as context-sensitive as hate speech

In the letter sent to the French government on the Avia law, the Commission argues that the French proposal could lead to a breach of Article 15(1) of the E-Commerce Directive, as it would risk forcing online platforms to engage in an active search for hosted content in order to comply with the obligation to prevent the re-upload of already identified illegal hate speech. Again, the Commission regrets that the French authorities did not provide sufficient evidence that this measure is proportionate and necessary in relation to the impact on fundamental rights including the rights to privacy and data protection.

At the same time, the Commission (and the Council) seemed uncompromising in the TCO Regulation on the obligation for platforms to use “proactive measures” (aka upload filters). As in the copyright Directive discussions, EDRi maintains strong reservations against the mandatory use of upload filters, since they are error-prone, invasive and likely to produce “false positives” – nothing less than a profound danger for freedom of expression. For example, filters currently used voluntarily by big platforms have taken down documentation of human rights violations and awareness-raising material against radicalisation.

The shift in the Commission’s position on the Avia law sets a positive precedent for the regulation of online content, including the upcoming Digital Services Act (DSA). We hope that the brand new Commission can keep a similarly sensible approach in future proposals.

Recommendations for the European Parliament’s Draft Report on the Regulation on preventing the dissemination of terrorist content online (December 2018)
https://edri.org/files/counterterrorism/20190108_EDRipositionpaper_TERREG.pdf

Trilogues on terrorist content: Upload or re-upload filters? Eachy peachy. (17.10.2019)
https://edri.org/trilogues-on-terrorist-content-upload-or-re-upload-filters-eachy-peachy/

EU’s flawed arguments on terrorist content give big tech more power (24.10.2018)
https://edri.org/eus-flawed-arguments-on-terrorist-content-give-big-tech-more-power/

How security policy hijacks the Digital Single Market (02.10.2019)
https://edri.org/how-security-policy-hijacks-the-digital-single-market/

(Contribution by Chloé Berthélémy, EDRi)

18 Dec 2019

Casual attitude in intelligence sharing is troubling

By Bits of Freedom

A recent report by the CTIVD, the Dutch oversight body for the intelligence and security services, shows that the Dutch secret services regularly violate the law when sharing intelligence with foreign services. For the sake of privacy and freedom of communication, it is crucial that data sharing safeguards are both tightened and more strictly enforced.

A report issued by the CTIVD revealed that the secret services do not necessarily act in accordance with the law when it comes to sharing (sometimes sensitive) information with the intelligence agencies of other countries. Ten instances were found in which the Dutch secret services had illegally provided raw data to foreign services, disregarding what is already a fairly weak legal regime for information sharing. The services’ casual attitude towards existing legal frameworks and their reluctance to be more meaningfully regulated may set a dangerous precedent for the relationship between intelligence agencies and democratic oversight in the Netherlands.

The Dutch secret services routinely exchange data with foreign secret services. Dutch EDRi member Bits of Freedom argues that the services should always know what they are exchanging, because they are tasked with protecting the citizens, and part of that task includes not giving away risky information about them. Sadly, services’ internal guidelines to that effect are missing, while legal provisions are insufficient and often ignored.

A lack of internal policy

The Dutch secret services’ internal policy for sharing data with other services is porous and vague: it does not distinguish between different legal bases, the assessments against the requirements of necessity, propriety and due care are missing, and two legal bases lack additional requirements entirely. It further does not stipulate that weighting notes, which assess the trustworthiness of any cooperating agency, need to be taken into account when deciding whether to share information. There are also no standard procedures on whether foreign services are allowed to use the information provided (or that they must act in accordance with international law if they do so) or to pass it on without thinking twice about it.

Non-compliance with already limited legal provisions

Aside from the services’ internal policy, or lack thereof, the law provides a rough framework for the sharing of raw data. Raw data are data which have not been processed, filtered or analysed by the services in any form, based on their nature or content. Under the current regulation, whenever the services wish to share raw data, they must obtain permission from the responsible minister. This permission is subject to a set of conditions. According to the CTIVD, however, the services not only fail to pay sufficient attention to these conditions, they are also given the leeway to do so.

Sometimes, the services even have their very own ideas about sharing raw data, deeming an internal assessment of whether the information they seek to share is relevant to the receiving body to be a sufficient benchmark. Not only is an assessment of potential relevance significantly more abstract than the legal criterion of evaluation, it is also simply not how the law works, and the CTIVD therefore objects to this line of reasoning.

Additional instances in which the clearance protocol for raw data has been violated include: classified data incorrectly labelled as evaluated; data incorrectly added to an existing permission without obtaining separate consent; and missing references to the weighting notes that would have classified the other service as a risk. If the services want to share raw data with a service that is known to be problematic, the risk, and what the services do to manage it, must be outlined in the permission request. Failure to provide this greatly undermines the functioning of the minister’s permission as a safeguard. After all, how can the minister properly consider a request if they are not informed of the risks involved?

Safeguards are needed

An important stipulation when providing data to a foreign service is that that service may not pass on the data – also known as the third-party rule. Both secret services (MIVD and AIVD) structurally fail to set this condition when providing data to foreign services, even though it is required by law. The AIVD, for instance, neglected to do so three times over the last year, despite the weighting note on the receiving foreign service indicating risks that would indeed have required it.

According to the CTIVD, the agreements with other countries on which the services currently base their data sharing practices are also unsatisfactory. Some date back to the 1960s or do not cover the data sharing in question, others are still in draft form, or exist only with one country while the data is shared with several. Moreover, during its inquiry, the CTIVD was unable to find the agreements, and the services were unable to state where they were recorded.

Oversight is hindered

To make oversight possible, the services have a duty to record what they do, including what information is given to foreign services. Because records are kept at different levels, however, there is no comprehensive overview of the data shared with foreign services. Furthermore, the services are neglecting their reporting duties. Every time the secret services provide raw data to foreign services, they must inform the CTIVD accordingly. They failed to do so eight times in eight months.

How to address the problem

Bits of Freedom is deeply concerned about the provision of raw data to foreign services. It seems irresponsible that the secret services are allowed to collect data in bulk, and share it with foreign services without properly evaluating the request. Against that backdrop, the CTIVD report has raised a whole range of important questions around the services’ due diligence in risk assessment and their regard for ministerial permission protocol, civil liberties protection, and oversight.

The upcoming implementation of the dragnet, which will allow for the untargeted, systematic, and large-scale interception and analysis of citizens’ online communication, likely means that even more raw data will be shared with foreign countries. Bits of Freedom argues that this is highly problematic. How can citizens’ rights be guaranteed when the services share information (also about them) without even knowing what it is exactly that they are sharing? The House of Representatives will soon discuss proposed amendments to the Dragnet Act and while the dragnet itself seems inevitable, the Parliament should at least take into account the following points to defend privacy and freedom of communication:

  1. The services should be obligated to show that the sharing of raw data entails minimal risk for civilians and organisations, following a “least-intrusive-means” doctrine for data sharing, as it were.
  2. The sharing of raw data should be taken more seriously. As is the case with the services’ other special powers, the “Assessment Committee for the Deployment of Powers” (TIB) should review the request of the services and the approval of the minister before the sharing of data is ultimately cleared.
  3. The services have shown that they do not always comply with the law. As a result, the CTIVD, as the body tasked with reviewing the lawfulness of the services’ activities, should be given more power. When the services violate the law they should be stopped immediately by the CTIVD.

Bits of Freedom
https://www.bitsoffreedom.nl/

Casual attitude in intelligence sharing is troubling
https://aboutintel.eu/intelligence-sharing-troubling/

Dutch Senate votes in favour of dragnet surveillance powers (26.07.2017)
https://edri.org/dutch-senate-votes-in-favour-of-dragnet-surveillance-powers/

(Contribution by Lotte Houwing, EDRi member Bits of Freedom, the Netherlands)

18 Dec 2019

Say “no” to cookies – yet see your privacy crumble?

By noyb

Cookie banners on large French websites turn a clear “no” into “fake consent”. EDRi member noyb has filed three General Data Protection Regulation (GDPR) complaints with the French Data Protection Authority (CNIL).

Relying on the open source extension “Cookie Glasses”, developed by researchers of the French institute Inria, noyb identified numerous violations of European and French cookie privacy laws. noyb found that the French e-commerce site CDiscount, the movie guide Allociné and the fashion magazine Vanity Fair all turn a rejection of cookies by users into “fake consent”. On 10 December 2019, noyb filed three formal complaints with the French Data Protection Authority (CNIL).

Despite users going through the trouble of “rejecting” countless cookies on CDiscount, Allociné and Vanity Fair, these websites have sent digital signals to tracking companies claiming that users have agreed to being tracked online. As the analysis of the data flows shows, CDiscount has sent “fake consent” signals to 431 tracking companies per user, Allociné to 565 and Vanity Fair to 375.

Among the recipients of this “fake consent” are Facebook and the online advertising companies AppNexus and PubMatic. These companies have consequently placed tracking cookies after users have clearly objected to all tracking.

The main association for online tracking businesses, the Interactive Advertising Bureau (IAB), created a framework that plays a key role in this. All three websites use the “IAB Transparency and Consent Framework”, an industry standard behind most cookie banners, to communicate what noyb believes is “fake consent”. Only Facebook does not currently use the IAB Framework – but it still placed cookies without consent.
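In the IAB framework, the user’s choice travels downstream as a machine-readable consent record listing the purposes and vendors the user supposedly agreed to. The sketch below is a simplified, hypothetical model of that idea – the field names and the reject_all() helper are invented for illustration and do not follow the real TCF bit-level encoding – but it shows the invariant the complaints describe being broken: after a clear “no”, the record passed to trackers should be empty.

```python
# Toy model of a consent record as a cookie banner might assemble it.
# Illustrative only: the real IAB TCF uses a compact bit-encoded string.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    purposes: set = field(default_factory=set)  # purpose IDs the user accepted
    vendors: set = field(default_factory=set)   # vendor IDs the user accepted

def reject_all() -> ConsentRecord:
    # A genuine "reject" must produce an empty record downstream.
    return ConsentRecord()

record = reject_all()
assert not record.vendors, "a rejection must not name any vendor"

# What noyb's analysis describes instead: signals naming hundreds of
# vendors (e.g. 431 in CDiscount's case) despite the user's refusal.
print(f"vendors a tracker should see after 'reject all': {len(record.vendors)}")
```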

Every user is entitled to clear information regarding the setting of cookies on their device, and each data controller must ensure that the user’s choice – refusal or acceptance of such cookies – is respected.

Article 80 of the General Data Protection Regulation (GDPR) foresees that data subjects can be represented by a non-profit association. On this basis, noyb filed the complaints against the “fake consent” on behalf of the data subjects with the French Data Protection Authority (CNIL).

noyb
https://noyb.eu/

Say “NO” to cookies – yet see your privacy crumble? (10.12.2019)
https://noyb.eu/say-no-to-cookies-yet-see-your-privacy-crumble/

Complaint, CDiscount (10.12.2019)
https://noyb.eu/wp-content/uploads/2019/12/Complaint-CDiscount-Facebook-REDACTED-EN.pdf

Complaint, Allociné.fr (only in French, 10.12.2019)
https://noyb.eu/wp-content/uploads/2019/12/Complaint-Allocine-AppNexus-REDACTED-FR.pdf

Complaint, Vanity Fair (only in French, 10.12.2019)
https://noyb.eu/wp-content/uploads/2019/12/Complaint-Vanity-Fair-Pubmatic-REDACTED-FR.pdf

Do Cookie Banners Respect my Choice? Measuring Legal Compliance of Banners from IAB Europe’s Transparency and Consent Framework
https://arxiv.org/abs/1911.09964

IAB: TCF – Transparency & Consent Framework
https://iabeurope.eu/transparency-consent-framework/

(Contribution by EDRi member noyb, Austria)

18 Dec 2019

Bits of Freedom celebrates its 20th anniversary

By Bits of Freedom

EDRi member Bits of Freedom celebrates its 20th anniversary. Bits of Freedom believes an open and just society is only possible when people can participate in public life without fear of repercussions. For this, every person needs to be free to share information, and their private life needs to be respected. The rights to privacy and freedom of expression are at the core of this. Bits of Freedom fights for these fundamental rights by contributing to strong legislation, by championing the emancipatory potential of the internet, and by holding those in power to account.

During this anniversary year, Bits of Freedom helped thousands of individuals in Europe gain more control over their data, called out Facebook for lying to the Dutch Parliament, fought for a better copyright law, analysed the state of play with regard to the sharing of unevaluated (bulk) data by the Dutch secret services, published a plea to fix the communications ecosystem instead of focusing on symptoms, and combined free tech and a freely accessible webcam stream to create the ultimate stalker tool – raising awareness of the problems around facial recognition in public spaces. Bits of Freedom also published, of course, its annual review of 2018 and, to commemorate the 20-year milestone, launched its very first online shop, including a new line of merchandise modelled by four individuals who have all, in their own way, contributed to Bits of Freedom’s work.

Looking back over twenty years, some things don’t seem to have changed much. Data retention, internet filters and breaking encryption are still among the go-to “solutions” policy makers propose to not-very-clearly defined problems. Although Bits of Freedom’s core arguments in response to these knee-jerk reactions remain the same, the environment in which they are put forward is ever-evolving. Similarly, even though the Netherlands was the first European country to commit net neutrality to law, it is still necessary to keep fighting for the equal treatment of all internet traffic.

Some things do change. There is growing awareness of the work of digital rights organisations, and the movement is gaining in size and strength. More and more people are willing to take action. 2020 will most likely be the year in which more than 50% of Bits of Freedom’s work is funded by individuals. And not a moment too soon: Bits of Freedom’s field of work is expanding, and civil society actors concerned with its topics are still few and far between. The organisation needs to grow to be able to deal with the increasing attention for these topics, especially at the European level.

To celebrate the 20 years of Bits of Freedom, and to support their work, you can now visit the online shop.

Bits of Freedom
https://www.bitsoffreedom.nl/

My Data Done Right
https://www.mydatadoneright.eu/

Facebook lies to Dutch Parliament about election manipulation (21.05.2019)
https://www.bitsoffreedom.nl/2019/05/21/facebook-lies-to-dutch-parliament-about-election-manipulation/

The Netherlands, aim for a more ambitious copyright implementation! (11.09.2019)
https://edri.org/the-netherlands-aim-for-more-ambitious-copyright-implementation/

Casual attitude in intelligence sharing is troubling (18.12.2019)
https://edri.org/casual-attitude-in-intelligence-sharing-is-troubling

Regulating online communications: fix the system, not the symptoms (21.06.2019)
https://www.bitsoffreedom.nl/2019/06/21/regulating-online-communications-fix-the-system-not-the-symptoms/

Amazon’s Rekognition shows its true colors (12.12.2019)
https://www.bitsoffreedom.nl/2019/12/12/amazons-rekognition-shows-its-true-colors/

Bits of Freedom Annual Report 2018
https://2018.bitsoffreedom.nl/

(Contribution by EDRi member Bits of Freedom, the Netherlands)

17 Dec 2019

Letter to Member States calls for safeguards in Terrorist Content Regulation

By EDRi

On 16 December 2019, EDRi and Access Now sent a letter to EU Member States urging them to ensure key safeguards in the proposed Regulation regarding removal orders, the cross-border mechanism, and crucial exceptions for educational, journalistic and research materials in the ongoing trilogue discussions. This letter is another step in the EDRi network’s work on this piece of legislation, which could have a tremendous impact on freedom of expression online.

The trilogues (informal discussions between the European Parliament, Member States and the Commission) will resume in January under the Croatian Presidency of the Council of the European Union (January – July 2020).

You can read the letter below:

Dear Minister,

I am writing to you on behalf of Access Now, a non-profit organisation that defends and extends digital rights of users at risk around the world, and European Digital Rights (EDRi), a network of 42 NGOs that promote and defend human rights in the online environment.

We urge you to develop and support a position in the Council on the proposed Terrorist Content Regulation that respects the constitutional traditions of the Member States and maintains the exception for protected forms of expression. We particularly highlight the need for judicial redress for both hosting service providers and content providers, and the need to prevent situations where a competent authority of one Member State is bound by removal orders issued by another Member State.

The LIBE Committee delivered a balanced and well-rounded report with important improvements made to the original Commission’s text (see https://edri.org/terrorist-content-libe-vote/), based on great collaborative work with the Committees on Internal Market and Consumer Protection, and Culture and Education.

We seize the opportunity before the holiday season to recall the crucial elements of the European Parliament’s position that will keep the Regulation in line with the Charter of Fundamental Rights and the EU acquis, notably an adequate cross-border cooperation mechanism that respects the constitutional traditions of the Member States and the exception for protected forms of expression.

First, the EU Charter as well as the European Convention on Human Rights (ECHR) provide every individual with a right to an effective remedy before the competent national tribunal against any measure that potentially violates their fundamental rights. In cases of cross-border removal orders, the current proposal creates a system in which a competent authority of any Member State can issue a removal order to any hosting service provider established or represented in the EU. However, under the current Council proposal, removal orders can only be challenged in the Member State whose authority issued the order. The Regulation should allow removal orders to be contested in the Member State in which the hosting service provider has its legal establishment, to ensure meaningful access to an effective remedy.

Second, the proposed system limits the possibilities and effectiveness of judicial redress for both hosting service providers and content providers. The proposal should include a cross-border mechanism that enables online users as well as hosting service providers to challenge removal orders before a competent authority of the Member State in which the hosting service provider is established or represented. Such a mechanism will help to prevent situations where a competent authority of one Member State is bound by removal orders issued by other Member States, which is contrary to the constitutional traditions of several Member States of the Union.

Finally, it is crucial that the exception for certain protected forms of expression, such as educational, journalistic and research materials, is maintained in the proposal. The jurisprudence of the European Court of Human Rights (ECtHR) specifically requires particular caution with regard to such protected forms of speech and expression. Even content that initially appears unlawful can in certain cases be used for legitimate purposes, especially when informing the public about matters of public interest, or promoting education, scientific and academic research, and artistic expression.

We remain at your disposal for any support you may need from us in the future.

Best wishes,
Eliska Pirkova, Access Now
Fanny Hidvegi, Access Now
Diego Naranjo, EDRi

04 Dec 2019

Facial recognition and fundamental rights 101

By Ella Jakubowska

This is the first post in a series about the fundamental rights impacts of facial recognition. Private companies and governments worldwide are already experimenting with facial recognition technology. Individuals, lawmakers, developers – and everyone in between – should be aware of the rise of facial recognition, and the risks it poses to rights to privacy, freedom, democracy and non-discrimination.

In November 2019, an online search for “facial recognition” turned up over 241 million hits – and the suggested results imply that many people are unsure about what facial recognition is and whether it is legal. Although the first uses that come to mind might be e-passport gates or phone apps, facial recognition has a much broader and more complex set of applications, and is becoming increasingly ubiquitous in both public and private spaces – which can impact a wide range of fundamental rights.

What the tech is facial recognition all about?

Biometrics is the process that makes data out of the human body – literally, your unique “bio”-logical qualities become “metrics.” Facial recognition is a type of biometric application which uses statistical analysis and algorithmic predictions to automatically measure and identify people’s faces in order to make an assessment or decision. Facial recognition can broadly be categorised in terms of the increasing complexity of the analytics used: from verifying a face (this person matches their passport photo), to identifying a face (this person matches someone in our database), to classifying a face (this person is young). Not all uses of facial recognition are the same and, therefore, neither are the associated risks. Facial recognition can be done live (e.g. analysing CCTV feeds to see if someone on the street matches a criminal in a police database) or non-live (e.g. matching two photos), with non-live matching having a higher rate of accuracy.
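For readers who want a concrete picture of the difference between verification (1:1) and identification (1:N), here is a minimal sketch under simplifying assumptions: the faces have already been turned into embedding vectors by some model, and the 0.6 threshold, the function names and the toy database are all invented for illustration.

```python
# Minimal sketch of verification (1:1) vs identification (1:N) over face
# embeddings. Assumes a separate model already maps face images to vectors.
import numpy as np

def similarity(a, b):
    # Cosine similarity between two embeddings: 1.0 = same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, reference, threshold=0.6):
    # 1:1 check ("does this face match this passport photo?").
    return similarity(probe, reference) >= threshold

def identify(probe, database, threshold=0.6):
    # 1:N search ("does this face match anyone on the watch-list?").
    name, emb = max(database.items(), key=lambda kv: similarity(probe, kv[1]))
    return name if similarity(probe, emb) >= threshold else None

alice = np.array([0.10, 0.90, 0.20])
probe = np.array([0.12, 0.88, 0.19])
print(verify(probe, alice))               # True: score is above the threshold
print(identify(probe, {"alice": alice}))  # "alice"
```

Note that both functions return a judgement derived from a continuous score, never a fact: moving the threshold simply trades false matches against false non-matches, which is the probabilistic nature the FRA observation quoted below makes explicit.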

There are opportunities for error and inaccuracy in each category of facial recognition, with classification being the most controversial because it claims to judge a person’s gender, race, or other characteristics. This categorisation can lead to assessments or decisions that infringe on the dignity of gender non-conforming people, embed harmful gender or racial stereotypes, and lead to unfair and discriminatory outcomes.

Furthermore, facial recognition is not about facts. According to the European Union Agency for Fundamental Rights (FRA), “an algorithm never returns a definitive result, but only probabilities” – and the problem is exacerbated because the data on which the probabilities are based reflects social biases. When these statistical likelihoods are interpreted as if they were neutral certainties, this can threaten important rights to fair and due process. This in turn has an impact on individuals’ ability to seek justice when facial recognition infringes on their rights. Digital rights NGOs warn that facial recognition can harm privacy, security and access to services, especially for marginalised communities. A powerful example of this is when facial recognition is used in migration and asylum systems.

A question of social justice and democracy

Whilst discrimination resulting from technical issues or biased data-sets is a genuine problem, accuracy is not the crux of why facial recognition is so concerning. A facial recognition system claiming to identify terrorists at an airport, for example, could be considered 99% accurate even if it did not correctly identify a single terrorist. And greater accuracy is not necessarily the answer either, as it can make it easier for police to target or profile people of colour based on damaging racialised stereotypes. The real heart of the problem lies in what facial recognition means for our societies, including how it amplifies existing inequalities and violations, and whether it fits with our conceptions of democracy, freedom, privacy, equality, and social good. Facial recognition by definition raises questions about the balance of personal data protection, mass surveillance, commercial interests and national security which societies should carefully consider. Technology is frequently incredible, impressive, and efficient – but this should not be confused with its use being necessary, beneficial, or useful for us as a society. Unfortunately, these important questions and key issues are often decided out of public sight, with little accountability and oversight.
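The airport claim above is worth making concrete with a back-of-the-envelope calculation, using invented illustrative numbers: because genuine targets are vanishingly rare among passengers, headline accuracy says almost nothing about whether the system finds them.

```python
# Worked example of the accuracy paradox, with hypothetical numbers.
passengers = 10_000_000   # people screened (illustrative assumption)
targets = 10              # actual persons of interest among them (assumption)

# A system that flags nobody at all still scores near-perfect accuracy,
# because the non-targets it "correctly" ignores dominate the metric.
correct_decisions = passengers - targets
accuracy = correct_decisions / passengers
print(f"accuracy: {accuracy:.5%}, targets identified: 0")  # ~99.99990%, none found
```

The same arithmetic explains why vendors’ accuracy figures deserve scrutiny: the metric can be driven almost entirely by the overwhelming majority of people the system was never looking for.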

What’s in a face?

Your face has a particular sensitivity in the context of surveillance, says France’s data protection watchdog – and as a very personal form of personal data, both live and non-live images of your face are already protected from unlawful processing under the General Data Protection Regulation (GDPR).

Unlike a password, your face is unique to you. Passwords can be kept out of sight and reset if needed – but your face cannot. If your eye is hacked, for example, there is no way to wipe the slate clean. And your face is also distinct from other forms of biometric data such as fingerprints, because it is almost impossible to avoid being subject to facial surveillance when such technology is used in public places. Unlike having your fingerprints taken, your face can be surveilled and analysed without your knowledge. Your face can also be a marker of characteristics that are protected under international law, such as the religion you freely practice. For these reasons, facial recognition is highly intrusive and can infringe on the rights to privacy and personal data protection, among many other rights.

Researchers have highlighted the frightening assumptions underpinning much of the current hype about facial recognition, especially when it is used to categorise emotions or qualities based on individuals’ facial movements or dimensions. This harks back to the discredited pseudo-science of physiognomy – a favourite of Nazi eugenicists – and can have massive implications for individuals’ safety and dignity when used to make judgements about things like their sexuality or whether they are telling the truth about their immigration status. Its use in recruitment also increases discrimination against people with disabilities. Experts warn that there is no scientific basis for these assertions – but that has not stopped tech companies from churning out facial classification systems. When used in authoritarian societies, or where being LGBTQI+ is a crime, this sort of mass surveillance threatens the lives of journalists, human rights defenders, and anyone who does not conform – which in turn threatens everyone’s freedom.

Why can’t we open the Black Box?

The statistical analysis underpinning facial recognition and similar technologies is often referred to as a “black box”. Sometimes this is because the technological complexity of deep learning systems means that even data scientists do not fully understand how the algorithmic models make decisions. Other times, it is because the private companies creating the systems use intellectual property or other commercial protections to hide their models. This means that individuals and even states are prevented from scrutinising the inner workings and decision-making processes of facial recognition technology, even though it impacts so many fundamental rights – a violation of the principles of transparency and informed consent.

Facial recognition and the rule of law

If this article has felt like a roll-call of human rights violations – that’s because it is. Mass surveillance through facial recognition technology threatens not just the right to privacy, but also democracy, freedom, and the opportunity to develop one’s self with dignity, autonomy and equality in society. It can have what is known as a “chilling effect” on legal dissent, stifling legitimate criticism, protest, journalism and activism by creating a culture of fear and surveillance in public spaces. Different uses of facial recognition will have different rights implications – depending not only on what and why they are analysing people’s faces, but also because of the justification for the analysis. This includes whether the system meets legal requirements for necessity and proportionality – which, as the next article in this series will explore, many current applications do not.

The rule of law is of vital importance across the European Union, applying to both national institutions and private companies – and facial recognition is no exception. The EU can contribute to protecting people from the threats of facial recognition by strongly enforcing GDPR and by considering how existing or future legislation may impact upon facial recognition too. The EU should foster debates with citizens and civil society to help illuminate important questions including the differences between state and private uses of facial recognition and the definition of public spaces, and undertake research to better understand the human rights implications of the wide variety of uses of this technology. Finally, prior to deploying facial recognition in public spaces, authorities need to produce human rights impact assessments and ensure that the use passes the necessity and proportionality test.

When it comes to facial recognition, just because we can use it does not necessarily mean that we should. But what if we continue to be seduced by the allure of facial recognition? Well, we must be prepared for the violations that arise, implement safeguards for protecting rights, and create meaningful avenues for redress.

Facial recognition technology: fundamental rights considerations in the context of law enforcement (27.11.2019)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper.pdf

Why ID (2019)
https://www.accessnow.org/whyid-letter/

Ban Face Surveillance (2019)
https://epic.org/banfacesurveillance/

Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System (16.08.2018)
https://ihrp.law.utoronto.ca/sites/default/files/media/IHRP-Automated-Systems-Report-Web.pdf

Declaration: A Moratorium on Facial Recognition Technology for Mass Surveillance Endorsements
https://thepublicvoice.org/ban-facial-recognition/endorsement/

The surveillance industry is assisting state suppression. It must be stopped (26.11.2019)
https://www.theguardian.com/commentisfree/2019/nov/26/surveillance-industry-suppression-spyware

(Contribution by Ella Jakubowska, EDRi intern, with many ideas gratefully received from or inspired by members of the EDRi network)
