13 May 2020

Ban biometric mass surveillance!

By EDRi

Across Europe, highly intrusive and rights-violating facial recognition and biometric processing technologies are quietly becoming ubiquitous in our public spaces. As the European Commission consults the public on what to do, EDRi calls on the Commission and EU Member States to ensure that such technologies are comprehensively banned in both law and practice.

Keep walking. Nothing to see here…

By the end of 2019, at least 15 European countries had experimented with invasive biometric mass surveillance technologies, such as facial recognition. These are designed to watch, track or analyse people, score them, and make judgements about them as they go about their daily lives.

Worse still, many governments have done this in collaboration with secretive tech companies, in the absence of public debate, and without having demonstrated that the systems meet even the most basic thresholds of accountability, necessity, proportionality, legitimacy, legality or safeguarding.

A few thousand cameras to rule them all

Without privacy, you do not have the right to a private chat with your friends, your family, your boss or even your doctor. Your activism to save the planet becomes everyone’s business. You will be caught when blowing the whistle on abuse and corruption, or when attending a political march that your government does not want you to attend. You lose the right to go to a religious service or Trade Union meeting without someone keeping an eye on you; to hug your partner without someone snooping; or to wander freely without someone thinking you are being suspicious.

With constant mass surveillance, you lose any way to ever be truly alone. Instead, you are constantly watched and controlled.

COVID-1984?

Since the start of the Coronavirus pandemic, governments and companies have proposed apps and other measures that would rapidly expand bodily and health surveillance under the guise of public health. However, there is a real risk that the damage caused by widening surveillance measures will last long after the pandemic is over. For example, will employers remove the temperature-check cameras from offices once the pandemic has passed?

Biometric mass surveillance systems can exacerbate structural inequalities, accelerate unlawful profiling, have a chilling effect on people’s freedoms of expression and assembly, and limit everyone’s ability to participate in public and social activities.

Fanny Hidvégi, Europe Policy Manager at EDRi member Access Now (AN) explains:

Human rights apply in emergencies and health crises. We don’t have to choose between privacy and health: protecting digital rights also promotes public health. The suspension of data protection rights in Hungary shows why the EU needs to step up to protect fundamental rights.

Biometric surveillance – an architecture of oppression

Described as an “architecture of oppression”, the untargeted capture or processing of sensitive biometric data makes it possible for governments and companies to build up incredibly detailed, permanent records of who you meet, where you go, and what you do. Moreover, it allows these actors to use those records against you – whether for law enforcement, public authority or even commercial purposes. By linking them to faces and bodies, these permanent records become quite literally carved into your skin. The increased capacity of states to track and identify individuals through facial recognition and other biometric processing is likely to disproportionately impact populations that are already highly policed, surveilled and targeted by abuse, including people of colour, Roma and Muslim communities, social activists, LGBTQ+ people and people with irregular migration status. There can be no place for this in a democratic, rights-based, rule-of-law-respecting society.

Ioannis Kouvakas, Legal Officer at EDRi member Privacy International (PI) warns that:

The introduction of facial recognition into cities is a radical and dystopian idea which significantly threatens our freedoms and poses fundamental questions about the kind of societies we want to live in. As a highly intrusive surveillance technique, it can provide authorities with new opportunities to undermine democracy under the cloak of defending it. We need to permanently ban its roll-out now, before it’s too late.

EDRi is therefore calling for an immediate and indefinite ban on biometric mass surveillance across the European Union.

Biometric mass surveillance is unlawful

This ban is grounded in the rights and protections enshrined in the Charter of Fundamental Rights of the European Union, the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED) which are currently under the spotlight for their two-year anniversary reviews. Together, these instruments guarantee that the people of the EU can live without fear of arbitrary treatment or abuse of power; with respect for their autonomy and self-development; and in safety and security by setting strong data protection and privacy standards. Biometric mass surveillance constitutes a violation of the essence of these instruments, and a contravention of the very heart of the EU’s fundamental rights.

Once systems are in place that normalise and legitimise the 24/7 watching of everyone, all the time, it’s a slippery slope towards authoritarianism. The EU must ensure, therefore, through legislative and non-legislative means, that biometric mass surveillance is comprehensively banned in law and in practice. Lotte Houwing, Policy Advisor at EDRi member Bits of Freedom (BoF) cautions that:

We are shaping the world of tomorrow with the measures we are taking today. It is of utmost importance that we keep this in mind and do not let the COVID-19 crisis scare us into a (mass) surveillance state. Surveillance is not a medicine.

The EU regulates everything from medicines to children’s toys. It is unimaginable that a drug which has not been shown to be effective, or a toy which poses significant risks to children’s wellbeing, would be allowed onto the market. However, when it comes to biometric data capture and processing, in particular in an untargeted way in public spaces (i.e. mass surveillance), the EU has been a haven for unlawful biometric experimentation and surveillance. This has happened despite the fact that a 2020 study demonstrated that over 80% of Europeans are against sharing their facial data with authorities.

EDRi calls on the EU Commission, European Parliament and Member States to stick to their values and protect our societies by banning biometric mass surveillance. Failing to do so will increase the risk of an uncontrolled and uncontrollable descent into a digital dystopia.

Read more:

EDRi paper: Ban Biometric Mass Surveillance (13. 05. 2020)
https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

Explainer: Ban Biometric Mass Surveillance (13. 05. 2020)
https://edri.org/wp-content/uploads/2020/05/Explainer-Ban-Biometric-Mass-Surveillance.pdf

EDRi calls for fundamental rights-based responses to COVID-19 (20. 03. 2020)
https://edri.org/covid19-edri-coronavirus-fundamentalrights/

Emergency responses to COVID-19 must not extend beyond the crisis (15. 04. 2020)
https://edri.org/emergency-responses-to-covid-19-must-not-extend-beyond-the-crisis/

COVID-19 & Digital Rights: Document Pool (04. 05. 2020)
https://edri.org/covid-19-digital-rights-document-pool/

13 May 2020

COVID-Tech: COVID infodemic and the lure of censorship

By Chloé Berthélémy

In EDRi’s series on COVID-19, #COVIDTech, we explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the pandemic. Each post in this series tackles a specific issue at the intersection of digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised the principle that states must “defend freedom of expression and information”. In this second post of the series, we take a look at the impact that measures to fight the spread of misinformation could have on freedom of expression and information. Automated tools, content-analysing algorithms and state-sponsored content moderation have all become normalised under COVID-19, and they threaten many of our essential fundamental rights.

We already knew that social media companies perform poorly when it comes to moderating content on their platforms. Regardless of the measures they deploy (whether automated processes or human moderators), they make discriminatory and arbitrary decisions. They fail to understand context and cultural and linguistic nuances. And they provide no effective access to remedies.

In times of a global health crisis where accessing vital health information, keeping social contact and building solidarity networks are so important, online communications, including social media and other content hosting services, have become even more essential tools. Unfortunately, they are also vectors of disinformation and misinformation that erupt in such exceptional situations and threaten public safety and governmental responses. However, private companies – whether voluntarily or pressured by governments – should not impose over-strict, vague, or unpredictable restrictions on people’s conversations about important topics.

Automated tools don’t work: what a surprise!

As the COVID-19 crisis broke out, emergency health guidelines forced big social media companies to send their content moderators home. Facebook and the like promised to live up to expectations by basing daily content moderation on their so-called artificial intelligence. It only took a few hours to observe glitches in the system.

Their “anti-spam” system was striking down quality COVID-19 content from trustworthy sources as violations of the platforms’ community guidelines. Sharing newspaper articles, links to official governmental websites or simply mentioning the term “coronavirus” in a post would result in having your content preemptively blocked.

This whole trend perfectly demonstrates why relying on automated processes can only be detrimental to freedom of expression and to freedom of receiving and imparting information. The current context led even the Alan Turing Institute to suggest that content moderators should be considered “key workers” in the context of the COVID-19 pandemic.

Content filters show high margins of error and are prone to over-censoring. Yet the European Parliament adopted a resolution on the EU’s response to the pandemic which calls on social network companies to proactively monitor and “stop disinformation and hate speech”. In the meantime, the European Commission continues its “voluntary approach” with the social media platforms and is contemplating proposing a regulation soon.

Criminalising misinformation: a step too far

In order to respond swiftly to the COVID-19 health crisis, some Member States are desperately trying to control the flow of information about the spread of the virus. In their efforts, they have been seduced into adopting hasty legislation that criminalises disinformation and misinformation, which may ultimately lead to state-sponsored censorship and the suppression of public discourse. For instance, Romania granted new powers to its National Authority for Administration and Regulation in Communications to order take-down notices for websites containing “fake news”. Draft legislation in neighbouring Bulgaria originally included the criminalisation of the spread of “internet misinformation”, with fines of up to 1,000 euros and even imprisonment of up to three years. In Hungary, new emergency measures include the prosecution and potential imprisonment of those who spread “false” information.

The risks of abuse of such measures and unjustified interference with the right to freedom of expression directly impair the media’s ability to provide objective and critical information to the public, which is crucial for individuals’ well-being in times of national health crisis. While extraordinary situations definitely require extraordinary measures, they have to remain proportional, necessary and legitimate. Both the EU and Member States must refrain from undue interference and censorship and instead focus on measures that promote media literacy and protect and support diverse media both online and offline.

None of the approaches taken so far shows a comprehensive understanding of the mechanisms that enable the creation, amplification and dissemination of disinformation as a result of curation algorithms and online advertising models. It is extremely risky for a democratic society to rely on only a very few communication channels, owned by private actors whose business model feeds on sensationalism and shock.

The emergency measures adopted in the fight against the COVID-19 health crisis will determine what European democracies look like in its aftermath. The upcoming Digital Services Act (DSA) is a great opportunity for the EU to address the monopolisation of our online communication space. Further action should be taken specifically in relation to the micro-targeting practices of the online advertising industry (AdTech). This crisis has also shown us that the DSA needs to create meaningful transparency obligations, for a better understanding of the use of automation and for future research – starting with transparency reports that include information about content blocking and removal.

What we need for a healthy public debate online is not gatekeepers empowered by governments to restrict content in a non-transparent and arbitrary manner. Instead, we need diversified, community-led and user-empowering initiatives that allow everyone to contribute and participate.

Read more:

Joint report by Access Now, Civil Liberties Union for Europe, European Digital Rights, Informing the “disinformation” debate (18.10.18)
https://edri.org/files/online_disinformation.pdf

Access Now, Fighting misinformation and defending free expression during COVID-19: Recommendations for States (21.04.20) https://www.accessnow.org/cms/assets/uploads/2020/04/fighting-misinformation-and-defending-free-expression-during-covid-19-recommendations-for-states-1.pdf

Digital rights as a security objective: Fighting disinformation (05.12.18)
https://edri.org/digital-rights-as-a-security-objective-fighting-disinformation/

ENDitorial: The fake fight against fake news (25.07.18)
https://edri.org/enditorial-the-fake-fight-against-fake-news/

(Contribution by Chloé Berthélémy, EDRi Policy Advisor)

13 May 2020

Member in the spotlight: D3 – Defesa dos Direitos Digitais

By EDRi

This is the tenth article of the series “EDRi member in the Spotlight” in which our members introduce themselves and their work in an in-depth highlight in interview format.

Today we introduce our Portuguese member: D3 – Defesa dos Direitos Digitais.

1. Who are you and what is your organisation’s goal and mission?
We are a volunteer-run association dedicated to the defense of fundamental rights in the digital context. Our focus is to ensure autonomy and freedom of choice; uphold privacy and free access to information, knowledge and culture; and defend digital rights as a reinforcement to the principles of a democratic society.


2. How did it all begin, and how did your organisation develop its work?

For many years there was a gap in Portuguese civil society: no entity was dedicated to digital rights issues. Some people would fight for them in their individual capacity, and a few organisations would cover them when absolutely needed, but often went beyond their original scope of action in doing so (such as free software organisations).

Around 2016, some people got together and started to discuss how to start such an organisation. The objective was to coordinate the already existing civil society efforts in an organised and dedicated way, with a scope wide enough to allow us to cover any issue within the field – not because we wanted to cover everything, but because we wanted the freedom to tackle any of it. It finally happened in March 2017, and we have been active ever since.

3. The biggest opportunity created by advancements in information and communication technology is…

It can enable broader universal access to many rights, such as education, participation in democratic life, and access to more information and data, providing tools both for day-to-day life and to support individuals and society in exceptional moments.

4. The biggest threat created by advancements in information and communication technology is…

Lack of oversight and disregard for the dangers associated with technology and its usage; excessive optimism about the capacity of technology to solve problems for both individuals and society; and the magical thinking that comes with a lack of understanding of technology and science in general.

5. Which are the biggest victories/successes/achievements of your organisation?

We managed to become the first divergent voice in matters where no entity used to represent the public interest. We brought new issues to the public debate, such as data retention, copyright, net neutrality, electronic voting, public surveillance, and more. Our biggest achievement was having our data retention complaint to the Justice Ombudsman reach the Constitutional Court (the decision is still pending).

6. If your organisation could now change one thing in your country, what would that be?

Exclude non-policy influences from public policy decision making.

7. What is the biggest challenge your organisation is currently facing in your country?

Internally we face the usual issues related to the small scale of the country; for example, we could use more volunteers lending a hand.

Externally, right now it is impossible to escape the subject of COVID-19, which is making some people fall prey to questionable tech-solutionism promises which are inspired by practices of non-democratic countries.

8. How can one get in touch with you if they want to help as a volunteer, or donate to support your work?

You can reach us at geral@direitosdigitais.pt .

If you speak Portuguese, you can find more information on our website, including how to volunteer and how to donate. A smaller-scale English-language version is also in our plans.

Read more:

EDRi Member in the spotlight series
https://edri.org/member-in-the-spotlight/

D3 Home Page
https://direitosdigitais.pt/

13 May 2020

Austria’s biggest privacy scandal: residential addresses made public

By Epicenter.works

Nobody took data protection into account for the so-called “Supplementary Register for Other Concerned Parties” (Ergänzungsregister für sonstige Betroffene). The Ministry for the Economy and the Finance Ministry are responsible for a data breach to which the Austrian Economic Chambers were an accomplice.

Personal data of at least one million people have been publicly posted on the Internet for years without any protective measures, as NEOS and epicenter.works explained in a joint press conference on 8 May. This is a gift from the Republic to every data dealer and identity thief. “The technical and organisational measures necessary for protecting the rights of the affected persons according to GDPR are completely absent”, adds epicenter.works’ managing director Thomas Lohninger. In contrast to the Central Register of Residents (ZMR), all protective mechanisms are missing here, such as requiring identification of the querying person or charging a fee for the release of data, or the option to protect one’s own data with an informational release block.

Private residential addresses are particularly sensitive

“We do not yet know exactly how many people are affected by this data scandal and which groups are involved,” Lohninger continues. “According to our estimates, there must be about one million concerned people.” It could also be deduced from the data when tax returns were filed or whether, for example, state assistance was received. “What is even more dramatic is that the private residential addresses of these people are publicly available on the Internet and there is no way to defend oneself against it. From the Federal President downwards, almost everyone can be found there who has and has had income other than from non-self-employment”, the data protection expert adds.

No purpose, no information block, no protective measures

“The purpose of this public register is not apparent. Public registers regularly entail rights and obligations, such as entries in the Civil Register, Register of Companies or Register of Associations. Although the internal provision of source numbers within the administration may be the reason for the creation of the supplementary register, this does not explain its years of public and barrier-free access,” says epicenter.works’ lawyer Lisa Seidl. In many cases, the scope of the accessible data goes beyond what can be retrieved from the ZMR and, in contrast, there are no protective mechanisms, such as requiring the identification of the querying person, charging a fee for the release of information, or providing the option of setting up an informational release block. Even if the 2009 regulation provides a legal basis for the publication of the register, the regulation itself could constitute a violation of the fundamental right to data protection, said Seidl.

On the basis of redacted excerpts from this database, we can show that the data of journalists, politicians and other persons who are particularly concerned about the confidentiality of their private data were included. For example, out of 183 members of Parliament, 100 were visible with their private addresses. You can find a corresponding list here. Furthermore, many Public Broadcasting (ORF) journalists could be found easily.

How is this different from the Commercial Register?

The Commercial Register is easily accessible, but it costs quite a lot – 12.90€ per extract – and is essential (i.e. it has an important purpose), because you have to and should know about the economic risk you are taking when you sign contracts with other companies. In any case, it does not contain private residential addresses, but the business addresses of the companies.

Is the regulation potentially even illegal or unconstitutional?

In principle, the Austrian state must comply with the GDPR, but it is exempt from penalties. If data that are not already publicly accessible (e.g. tax data of private individuals – not companies!) are in the register, this requires its own legal basis (in this case a decree), and only then can the data be processed in accordance with the GDPR. However, this decree could still be unconstitutional (§1 of the Data Protection Act (DSG) has constitutional status). Justified constraints on fundamental rights always require a legitimate objective and must be necessary and proportionate. The register falls at this first hurdle, as making tax data accessible to the public is not a legitimate objective. The constraint on fundamental rights is therefore unjustified and a violation of the fundamental right to data protection. Nevertheless, as long as the Austrian Constitutional Court has not repealed the regulation on the grounds of unlawfulness or unconstitutionality, it remains applicable.

Chronology of the register

  • 2004/2009: decision to establish the register publicly (Schüssel and Faymann governments)
  • December 2018: register transferred to the Austrian Ministry for the Economy without question; no protective measures established, and no enforcement of the rights of those affected despite the introduction of the GDPR
  • The Austrian Finance Ministry continuously sends data to the register; it is unclear from which sources

Read more:

Größter Datenskandal der Republik: Über eine Million Wohnadressen öffentlich (08.05.2020)
en.epicenter.works/content/grosster-datenskandal-der-republik-uber-eine-million-wohnadressen-offentlich

Austrian government hacking law is unconstitutional (18.02.2019)
edri.org/austrian-government-hacking-law-is-unconstitutional

Austrian postal service involved in a data scandal (28.01.2019)
edri.org/austrian-postal-service-involved-in-a-data-scandal/

(Contribution by Thomas Lohninger, from EDRi Member epicenter.works)

13 May 2020

Xnet issues two complaints to improve data protection in Spain

By Xnet

Xnet highlights gaps in Spain’s adaptation of the EU General Data Protection Regulation (GDPR). The Spanish member of EDRi has opened two complaints to the European Commission related to the lack of effective adaptation of the data minimisation principle and the lack of conciliation between personal data protection and freedom of expression and information in the Spanish legislation.

The COVID-19 crisis has forcefully demonstrated just how far the extraction and use of citizens’ personal data can reach.

These problems had already been detected and explained in a February 2020 report by Xnet, “Privacy, Data Protection and Institutionalised Abuses” and with the campaign #DatosPorLiebre.

Xnet believes that the use of personal data in the general interest is necessary. However, it should never conflict with the respect for the fundamental rights to privacy and intimacy.

The procedures that Xnet is now starting are a consequence of the report, but the EDRi member believes they will also be useful for designing post-COVID-19 policies. The European Commission has published a position that supports Xnet’s point of view, which could positively influence the new Spanish Secretary-General for Digital Transformation. This is why Xnet considers this a good moment to start the two procedures.

As Xnet explained in the report “Privacy, Data Protection and Institutionalised Abuses”, they consider that the “Organic Law on Data Protection and the Guarantee of Digital Rights”, which aims to adapt the GDPR to the Spanish system, contains gaps that are detrimental to fundamental rights.

The report and the procedures explain the collision between the principle of minimisation, which is fundamental in the GDPR, and other laws in force that prevent its enforcement and the control of personal data, their use and destination by individuals.

Specifically, the identification requirements imposed on citizens whenever they want to carry out any type of procedure, however simple, with a public administration or a company are abusive and disproportionate. These identification requirements in Spanish legislation are no longer justified in the new framework established by the GDPR. The principle of minimisation establishes that no one should ask for or extract more data than necessary: privacy must be ensured by design and by default.

The second procedure highlights the lack of transposition of Article 85 of the Regulation into national law, thus failing to comply with the obligation that it establishes to reconcile the right to personal data protection with the freedoms of expression and information. This makes it difficult to uncover cases of abuse or corruption, which is very necessary in a situation such as this one.

Read more:

[In Spanish]: Xnet abre procedimientos ante la Comisión Europea para la mejora de la protección de datos en la legislación española (04.05.2020)
https://xnet-x.net/lagunas-legislacion-espanola-ce-proteccion-datos

ApTI submits complaint on Romanian GDPR implementation (27.02.2019):
https://edri.org/apti-submits-complaint-on-romanian-gdpr-implementation

One Year Under the GDPR. An implementation progress report:
https://www.accessnow.org/cms/assets/uploads/2019/07/One-Year-Under-GDPR-report.pdf

(Contribution by Simona Levi, from EDRi member Xnet)

04 May 2020

COVID-19 & Digital Rights: Document Pool

By EDRi

The Coronavirus (COVID-19) pandemic poses a global public health challenge of unprecedented proportions. In order to tackle it, countries around the world need to engage in coordinated, evidence-based responses grounded in solidarity, support and respect for human rights. This means that measures cannot lead to disproportionate and unnecessary actions. It is also vital that measures are not extended once we are no longer in a state of emergency. Otherwise, the actions taken under exceptional circumstances today can have significant repercussions on human rights both today and tomorrow.

In this document pool we will be listing relevant articles and documents related to the intersection of the COVID-19 crisis and digital rights. This will allow you to follow the developments of surveillance measures, content moderation, tracking and privacy-threatening actions in Europe as they relate to the coronavirus pandemic, as well as offer the set of perspectives and recommendations put forth by a host of digital rights watchdog organisations across Europe and the world. The document pool is updated regularly to ensure the delivery of the most up-to-date information.

  1. EDRi’s Analysis and Recommendations
  2. EDRi Articles, blog posts and press releases
  3. Mapping Exercises
  4. Official EU Documents
  5. Other Useful Resources

1. EDRi’s Analysis and Recommendations

Official EDRi statement on COVID-19 and Digital Rights

EDRi Members’ Responses and Recommendations on COVID-19

Analysing Tracking & Tracing Apps


2. EDRi’s Articles, blog posts and press releases

EDRi Reporting

#COVIDTech – An EDRi Blog Series


3. Mapping Exercises

EDRi Members Mapping

Other Mapping Exercises


4. Official EU Documents


5. Other Useful Resources

With huge thanks to the individuals and organisations across the EDRi network who have shared resources for this document pool.

29 Apr 2020

#WhoReallyTargetsYou: DSA and political microtargeting

By Panoptykon Foundation

Europe is about to overhaul its 20-year-old e-Commerce Directive and it is a once-in-a-decade chance to correct the power imbalance between platforms and users. As part of this update, the Digital Services Act (DSA) must address the issue of political microtargeting (PMT).

Microtargeting, and PMT in particular, has the alarming power to derail democracy, and should be regulated. According to self-assessment reports, political advertisers spent €31 million on Facebook (excluding the UK) and only €5 million on Google between March and September 2019. Facebook’s role in developing and targeting adverts goes far beyond that of a simple presentation medium – its tools for optimising ad delivery, targeting audiences and defining delivery criteria are far beyond the capacity of most political parties alone. A detailed report by Panoptykon and partners, based on data collected during two Polish election campaigns in 2019, sheds critical light on the role of the company, and what it revealed was extremely informative:

The study found that the transparency and control tools Facebook offers to researchers and users to explain how ad targeting works are “insufficient and superficial”. Users are targeted by Facebook’s algorithm based on potentially thousands of distinct selectors, following a set of criteria that only the company knows. Advertisers on Facebook can select audiences based on obvious factors such as age, gender, language spoken and location. But the Facebook machine also steers them towards increasingly narrow criteria such as interests (political affiliation, sexual orientation, musical taste, etc.), “life events” and behaviour, as well as more than 250,000 free-text attributes including, for example, “Adult Children of Alcoholics” or “Cancer Awareness”, which constitute a deeper privacy concern.

Facebook is not merely a passive intermediary: its algorithms interpret the criteria selected by advertisers, deliver ads in a way that fulfils advertisers’ objectives, and actively curate the content that users see in their timelines based on those assumptions. In 2016, the company introduced a feature allowing advertisers to target “lookalikes” – profiles similar to a target audience. It also supports A/B testing, so advertisers can compare which ads are more effective.

But Facebook’s “why am I seeing this ad?” transparency tool can be misleading, revealing only the “lowest common denominator” attribute. For example, according to the report, during the European elections campaign in Poland in May 2019, a person who was pregnant saw a political ad referring to prenatal screenings and perinatal care. “Why am I seeing this ad?” informed her that she was targeted because she was interested in “medicine” (potential reach 668 million) rather than “pregnancy” (potential reach of 316 million). Users can only verify (check, delete, or correct) a short list of interests that the platform is willing to reveal.
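The “lowest common denominator” effect described in the report can be illustrated with a short sketch. This is a hypothetical reconstruction, not Facebook’s actual code: it assumes an explanation tool that, given all the attributes an advertiser targeted, discloses only the one with the broadest potential reach. The reach figures are the two from the report’s example.

```python
# Hypothetical sketch (not Facebook's actual logic) of a "lowest common
# denominator" explanation: of all attributes used to target an ad, the
# tool reveals only the one with the widest potential reach.

# Illustrative reach figures taken from the Panoptykon report's example.
AUDIENCE_REACH = {
    "medicine": 668_000_000,
    "pregnancy": 316_000_000,
}

def why_am_i_seeing_this(targeting_attributes: list) -> str:
    """Return the most generic (widest-reach) attribute as the 'explanation'."""
    return max(targeting_attributes, key=lambda attr: AUDIENCE_REACH[attr])

# A user targeted on both interests is told only about the broad one:
print(why_am_i_seeing_this(["pregnancy", "medicine"]))  # -> medicine
```

Under this assumption, the pregnant user in the example is told she was targeted for an interest in “medicine”, while the far more sensitive and specific “pregnancy” attribute stays hidden.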

Here is where upcoming regulation comes into play: at the very least, the Digital Services Act should prohibit PMT based on characteristics which expose our mental or physical vulnerabilities (e.g. depression, anxiety, addiction, illness). But if the EU wants to be ambitious and tackle many of the problems associated with the current business model, the DSA should go further and regulate any sort of advertising based on the profiling of users, particularly as there appears to be a gap between ads labelled as “political” by the platform and ads perceived as political by researchers.

Regulating targeted ads; requiring greater transparency for researchers and users; making targeting opt-in rather than opt-out; tightening requirements for political advertising; and recognising PMT as an application of AI that poses serious risks to human rights will not solve every problem of political disinformation in society, but these measures would certainly eliminate some of the worst practices in use today.

Read more:

Who (really) targets you? Facebook in Polish election campaigns
https://panoptykon.org/political-ads

Annual self-assessment reports of signatories to the Code of Practice on Disinformation 2019 (29.10.2019)
https://ec.europa.eu/digital-single-market/en/news/annual-self-assessment-reports-signatories-code-practice-disinformation-2019

(Contribution by Karolina Iwańska, from EDRi member Panoptykon)

29 Apr 2020

Member in the spotlight: Homo Digitalis

By EDRi

This is the tenth article in the series “EDRi member in the Spotlight”, in which our members introduce themselves and their work through an in-depth interview.

Today we introduce our Greek member: Homo Digitalis.

1. Who are you and what is your organisation’s goal and mission?

Homo Digitalis is the only digital rights civil society organisation in Greece. Our goal is the protection of human rights and freedoms in the digital age. We strive to influence legislators and policy makers at the national level, and to raise awareness among the people of Greece about digital rights issues. Moreover, when digital rights are jeopardised by public or private actors, we carry out investigations, conduct studies and take legal action.

2. How did it all begin, and how did your organisation develop its work?

Homo Digitalis was founded in 2018 by six tech lawyers with a strong passion for the protection and promotion of digital rights. No digital rights organisation existed in Greece before, so we wanted to create one that could bring like-minded people together and shake things up. After two years of voluntary work, we have grown into an organisation with more than 100 members, who bring together a wide variety of disciplines such as law, computer science, the humanities and the social sciences.

We aim to transform Homo Digitalis from an organisation based on voluntary work into a strong watchdog with a long-term strategy and full-time staff. It will be a long and difficult path, but we have started to acquire our first grants, and we are confident that we will grow, gaining more recognition and support for our work and our vision.

3. The biggest opportunity created by advancements in information and communication technology is…

…facilitating access to information all around the globe, and building bridges between people. These advancements constitute a driver for positive change in our societies, and could lead to enhanced equality and transparency.

4. The biggest threat created by advancements in information and communication technology is…

…mass surveillance of our societies and power asymmetry in the information economy.

5. Which are the biggest victories/successes/achievements of your organisation?

Becoming a full member of EDRi is certainly a great success of Homo Digitalis so far!

Additionally, Homo Digitalis has achieved important accomplishments over the last two years. We have increased public awareness of digital rights issues by generating media interest in our actions, visiting educational institutions, and participating in events, campaigns and talks all around Greece. Moreover, we were instrumental in shaping the public debate around data protection reform in Greece by cooperating with relevant stakeholders and by filing complaints and requests before EU and national authorities.

Also, through access to information requests, complaints and investigations, we have brought a high level of scrutiny to projects on technology-led policing and border management in Greece. In addition, we have collaborated with investigative journalists to reveal important facts. Even though we are an organisation run solely by volunteers, we do our best to respond quickly to the challenges that arise.

Furthermore, we have been fortunate enough to participate shoulder to shoulder with powerful digital rights organisations in EU-wide projects and campaigns and to learn from their expertise and knowledge. Finally, we also had the great opportunity to present our views and opinions in important fora, such as the UN Human Rights Council 39th session in Geneva or the European Parliament in Brussels.

All these accomplishments over the last two years give us the strength to continue our work towards the protection and promotion of human rights in the digital age.

6. If your organisation could now change one thing in your country, what would that be?

Active participation of people in collective activities such as digital rights activism. If individuals could devote part of their knowledge and time to such activities, we would have a stronger voice to push policy makers and legislators towards political decisions that respect our rights and freedoms rather than violate them.

7. What is the biggest challenge your organisation is currently facing in your country?

After 10 years of financial crisis and austerity measures in Greece that limited public spending, we have witnessed in recent years an increase in funds for technology-led policing and border management projects. Thus, we must stay wide awake in order to challenge and fight back against the deployment in our societies of intrusive tools and technologies that limit our rights and freedoms.

8. How can one get in touch with you if they want to help as a volunteer, or donate to support your work?

You can visit our website to help us as a volunteer or to donate and support our work.

Also, we always appreciate a good conversation, so feel free to reach out to info@homodigitalis.gr. Last but not least, you can subscribe to our newsletter here.

Read more:

EDRi member in the spotlight series
https://edri.org/member-in-the-spotlight/

Join Homo Digitalis as member/supporter/volunteer
https://www.homodigitalis.gr/en/join-us

Donate to Homo Digitalis
https://www.homodigitalis.gr/en/donations/help-us-grow

29 Apr 2020

Why COVID-19 is a Crisis for Digital Rights

By Guest author

The COVID-19 pandemic has triggered an equally urgent digital rights crisis.

New measures being hurried in to curb the spread of the virus, from “biosurveillance” and online tracking to censorship, are potentially as world-changing as the disease itself. These changes aren’t necessarily temporary, either: once in place, many of them can’t be undone.

That’s why activists, civil society and the courts must carefully scrutinise questionable new measures, and make sure that – even amid a global panic – states are complying with international human rights law.

Human rights watchdog Amnesty International recently commented that human rights restrictions are spreading almost as quickly as coronavirus itself. Indeed, the fast-paced nature of the pandemic response has empowered governments to rush through new policies with little to no legal oversight.

There has already been a widespread absence of transparency and regulation when it comes to the rollout of these emergency measures, with many falling far short of international human rights standards.

Tensions between protecting public health and upholding people’s basic rights and liberties are rising. While it is of course necessary to put in place safeguards to slow the spread of the virus, it’s absolutely vital that these measures are balanced and proportionate.

Unfortunately, this isn’t always proving to be the case. What follows is an analysis of the impact of the COVID-19 pandemic on the key subset of policy areas related to digital rights:

a) The Rise of Biosurveillance

A panopticon world on a scale never seen before is quickly materialising.

“Biosurveillance”, which involves the tracking of people’s movements, communications and health data, has already become a buzzword, used to describe some of the worrying measures being deployed to contain the virus.

The means by which states, often aided by private companies, are monitoring their citizens are increasingly extensive: phone data, CCTV footage, temperature checkpoints, airline and railway bookings, credit card information, online shopping records, social media data, facial recognition, and sometimes even drones.

Private companies are exploiting the situation and offering rights-abusing products to states, purportedly to help them manage the impact of the pandemic. One Israeli spyware firm has developed a product it claims can track the spread of coronavirus by analysing two weeks’ worth of data from people’s personal phones, and subsequently matching it up with data about citizens’ movements obtained from national phone companies.

In some instances, citizens can also track each other’s movements, leading not only to vertical, but also horizontal, sharing of sensitive medical data.

Not only are many of these measures unnecessary and disproportionately intrusive, they also give rise to secondary questions, such as: how secure is our data? How long will it be kept for? Is there transparency around how it is obtained and processed? Is it being shared or repurposed, and if so, with whom?

b) Censorship and Misinformation

Censorship is becoming rife, with many arguing that a “censorship pandemic” is surging in step with COVID-19.

Oppressive regimes are rapidly adopting “fake news” laws. This is ostensibly to curb the spread of misinformation about the virus, but in practice, this legislation is often used to crack down on dissenting voices or otherwise suppress free speech. In Cambodia, for example, there have already been at least 17 arrests of people for sharing information about coronavirus.

At the same time, many states have themselves been accused of fuelling disinformation to their citizens to create confusion, or are arresting those who express criticism of the government’s response.

As well as this, some states have restricted free access to information on the virus, either by blocking access to health apps, or cutting off access to the internet altogether.

c) AI, Inequality and Control

The deployment of AI can have consequences for human rights at the best of times, but now, it’s regularly being adopted with minimal oversight and regulation.

AI and other automated learning technologies are the foundation of many surveillance and social control tools. Because of the pandemic, they are increasingly relied upon to fight misinformation online and to process the huge increase in applications for emergency social protection, which are, naturally, more urgent than ever.

Prior to the COVID-19 outbreak, the digital rights field had consistently warned about the human rights implications of these inscrutable “black boxes”, including their biased and discriminatory effects. The adoption of such technologies without proper oversight or consultation should be resisted and challenged through the courts, not least because of their potential to exacerbate the inequalities already experienced by those hardest hit by the pandemic.

d) Eroding Human Rights

Many of the human rights-violating measures that have been adopted to date are taken outside the framework of proper derogations from applicable human rights instruments, which would ensure that emergency measures are temporary, limited and supervised.

Legislation is being adopted by decree, without clear time limitations, and technology is being deployed in a context where clear rules and regulations are absent.

This is of great concern for two main reasons.

First, this type of “legislating through the back door” of measures that are not necessarily temporary avoids going through a proper democratic process of oversight and checks and balances, resulting in de facto authoritarian rule.

Second, if left unchecked and unchallenged, this could set a highly dangerous precedent for the future. This is the first pandemic we are experiencing at this scale – we are currently writing the playbook for global crises to come.

If it becomes clear that governments can use a global health emergency to institute rights-infringing measures without being challenged, and without having to reverse those measures, making them permanent instead of temporary, we will essentially be handing authoritarian regimes a blank cheque to wait until the next pandemic and impose whatever measures they want.

Therefore, any and all measures that are not strictly necessary, sufficiently narrow in scope, and of a clearly defined temporary nature need to be challenged as a matter of urgency. If they are not, we will be unable to push back against an otherwise certain path towards a dystopian surveillance state.

e) Litigation: New Ways to Engage

In tandem with advocacy and policy efforts, we will need strategic litigation to challenge the most egregious measures through the court system. Going through the legislature alone will be too slow and, with public gatherings banned, public demonstrations will not be possible at scale.

The courts will need to adapt to the current situation – and are in the process of doing so – by offering new ways for litigants to engage. Courts are still hearing urgent matters, and questions concerning fundamental rights and our democratic system will fall within that remit. This has already been demonstrated by the first cases requesting oversight of government surveillance in response to the pandemic.

These issues have never been more pressing, and it’s abundantly clear that action must be taken.

If you want to read more on the subject, follow EDRi’s new series #COVIDTech here: https://edri.org/emergency-responses-to-covid-19-must-not-extend-beyond-the-crisis/

This article was originally published at: https://digitalfreedomfund.org/why-covid-19-is-a-crisis-for-digital-rights/

Read more:

Tracking the Global Response to COVID-19:
https://privacyinternational.org/examples/tracking-global-response-covid-19

Russia: doctor who called for protective equipment detained (03.04.2020)
https://www.amnesty.org.uk/press-releases/russia-doctor-who-called-protective-equipment-detained

A project to demystify litigation and artificial intelligence (06.12.2019)
https://digitalfreedomfund.org/a-project-to-demystify-litigation-and-artificial-intelligence/

Making Accountability Real: Strategic Litigation (30.01.2020)
https://digitalfreedomfund.org/making-accountability-real-strategic-litigation/

Accessing Justice in the Age of AI (09.04.2020)
https://digitalfreedomfund.org/accessing-justice-in-the-age-of-ai/

(Contribution by Nani Jansen Reventlow, Digital Freedom Fund)

29 Apr 2020

Everything you need to know about the DSA

By Chloé Berthélémy

In her political guidelines, the President of the European Commission, Ursula von der Leyen, committed to “upgrade the Union’s liability and safety rules for digital platforms, services and products, with a new Digital Services Act” (DSA). The upcoming DSA will revise the rules contained in the E-Commerce Directive of 2000 that affect how intermediaries regulate and influence user activity on their platforms, including people’s ability to exercise their rights and freedoms online. This is why reforming those rules has the potential to be either a big threat to fundamental rights or a major improvement of the current situation online. It is also an opportunity for the European Union to decide how central aspects of the internet will look over the next ten years.

The European Commission plans to launch a public consultation in May 2020, and legislative proposals are expected in the first quarter of 2021.

In the meantime, three different Committees of the European Parliament have announced or published own-initiative reports and opinions with a view to setting the agenda for what the DSA should regulate and how it should achieve its goals.

We have created a document pool in which we will be listing relevant articles and documents related to the DSA. This will allow you to follow the developments of content moderation and regulatory actions in Europe.

Read more:

Document pool: Digital Services Act (27.04.2020)
https://edri.org/digital-service-act-document-pool/
