10 Jun 2020

COVID-Tech: the sinister consequences of immunity passports

By Ella Jakubowska

In EDRi’s series on COVID-19, COVIDTech, we explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the pandemic. Each post in this series tackles a specific issue at the intersection of digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised the principle that states must “defend freedom of expression and information”. In this fourth post of the series, we take a look at the issue of immunity passports, their technological appeal and their potentially sinister consequences for social inequality and fundamental rights.

The dangerous allure of science fiction

Early in the coronavirus outbreak, the pandemic guilty-pleasure film Contagion skyrocketed to the top of streaming sites’ most-watched lists. One of the film’s most interesting plot points (mild spoiler alert) is the suggestion of a simple form of immunity passport. Wristbands for people who have been vaccinated are presented as an obvious solution – and why wouldn’t they be? Various forms of immunity passport are a compelling idea. They sound as if they could allow us to get back to a more normal life. But the reality is not as clear-cut as in the movies, and the threats to how we live our lives – in particular, to the people who could be most harmed by such schemes – mean that we must be incredibly cautious. Consequently, as it stands now, the lack of evidence, combined with the size of the threat that these schemes pose to fundamental rights and freedoms, reveals that – digital or otherwise – immunity passports must not be rolled out.

Immunity passports – science fact says “no”

In the last few weeks, “digital immunity passports”, certificates, apps, and other similar ideas have become prominent in discussions about how to exit from global lockdowns, with proposals popping up in Germany, Italy, Colombia, Argentina and the US, to name a few. It is a legitimate policy goal to help people find safe ways to exist in this “new normal”. Yet these proposals are all founded on the dangerous fallacy that we know and understand what coronavirus “immunity” looks like.

The WHO have been clear in their assessment that there is “currently no evidence” for immunity, and that such schemes may in fact incentivise risky behaviour. Medical journal The Lancet adds that such proposals are “impractical, but also pose considerable equitable and legal concerns even if such limitations [due to our lack of knowledge about immunity] are rectified.” And science journal Nature warns that immunity passports can actually harm public health. If public health experts are warning against immunity passports – even once we know more about COVID-19 immunity – then why are governments and private actors still pushing them as a silver bullet?

As with controversial tracking and contact tracing apps, there are a host of privacy and data protection concerns when such schemes become “digital”. Individual health data is very sensitive, as is data about our locations and interactions. As it is often private companies that are aggressively pushing these proposals (hello TransferWise and Bolt in Estonia), there are serious concerns about transparency, accountability, and who really benefits. EDRi has warned that public health tools should be open for public scrutiny, and limited in scope, purpose and time. With private companies rushing to profit from this crisis, can we be confident that this will happen? The lessons learned from digital identification programmes suggest we have reason to be very sceptical.

A new generation of “haves” and “have nots”

The crux of the problem with immunity passports is that they will likely be used to decide who is and who is not allowed to participate in public life: who can go to work – and therefore earn money to support themselves and their family; who can go to school; and even who can stay in hotels. In essence, these “passports” could decide who can and who cannot exercise their fundamental rights.

As both Privacy International (PI) and Access Now explain, the law tells us that any restrictions on people’s rights must be properly justified, meeting high levels of necessity and proportionality, and must also have a clear legal basis. These criteria mean that measures limiting people’s rights must be demonstrably effective, have no viable alternative, not violate the essence of fundamental rights and have clear safeguards. This is a very high bar to meet. In the context of an absence of scientific proof, significant risks created by false positives and false negatives, and big concerns about data protection and privacy, the idea of digital immunity passports becomes even more sinister. This hasn’t stopped tech companies like Onfido from lobbying their national health services or governments to adopt their services for biometric immunity passports.

Biometric surveillance and the risks of hyper-connected data

In a wider sense, digital immunity passports – especially those linked to people’s sensitive biometric data – are part of a growing mass surveillance infrastructure which can watch, analyse and control people across time and place. Such systems rely on holding mass databases on people (which in itself comes with big risks of hacking and unauthorised sharing) and are damaging to the very core of people’s rights to dignity, privacy and bodily integrity. The combining of health data with biometric data further increases the ability of states and private actors to build up highly detailed, intrusive and intimate records of people. This can, in turn, have a chilling effect on freedom of expression and assembly by disincentivising people from joining protests, suppressing political opposition, and putting human rights defenders and journalists at risk. As Panoptykon Foundation have explained, such systems are ripe for abuse by governments looking to control people’s freedoms.

Discrimination and unequal impacts creating a segregated society

It is foreseeable that the introduction of immunity passports will have unequal and disproportionate impacts upon those who already face the highest levels of poverty, exclusion and discrimination in society. Those with the smallest safety nets, such as people in precarious and low-waged jobs, will be the ones who are least able to stay at home. The pressure to be allowed outside – and the impacts of not being allowed to do so – will therefore be unequally distributed. We know that some people are more at risk if they do contract the virus: those with underlying health conditions, older people and, in the UK, black people. This inequality in who suffers the most will replicate the unequal distributions that already exist in our societies. And if immunity passports are administered digitally, then those without access to a device will be automatically excluded. This stratification of society by biological and health characteristics, as well as by access to tech, is dangerous and authoritarian.

Don’t let science fiction become reality

Digital immunity passports are no longer the preserve of science fiction. There is a very real risk that these schemes put innovation and appearance above public health, in a move often called “technosolutionism”. Digital and biometric immunity passports not only threaten the integrity of our sensitive bodily and health data, but create a stratified society in which those who can afford to prove their immunity will have access to spaces and services that the rest will not – with the latter de facto becoming second-class citizens. The New York Times calls this “immunoprivilege”.

When the time comes that we have solid scientific evidence about immunity, it will be up to public health officials to work out how this can translate into certification, and to data protection and privacy authorities and experts to help guide governments to ensure that any measures strictly respect and promote fundamental rights and freedoms. Until then, let us instead focus on improving our national health systems, ensuring that research goes into preventing this and future pandemics (despite the push-back from Big Pharma), and building a new society free of viruses such as COVID-19 – and of surveillance capitalism.

Read more:

COVID-19 & Digital Rights: Document Pool (04.05.2020)
https://edri.org/covid-19-digital-rights-document-pool/

Ban Biometric Mass Surveillance (13.05.2020)
https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

Exit through the App Store? (20.04.2020)
https://www.adalovelaceinstitute.org/wp-content/uploads/2020/04/Ada-Lovelace-Institute-Rapid-Evidence-Review-Exit-through-the-App-Store-April-2020-2.pdf

Ten reasons why immunity passports are a bad idea (21.05.2020)
https://www.nature.com/articles/d41586-020-01451-0

(In Polish) Certyfikaty odporności przepustką do normalnego życia? Nie idźmy tą drogą! (29.05.2020)
https://panoptykon.org/certyfikaty-odpornosci-covid

10 Jun 2020

UK: Stop social media monitoring by local authorities

By Privacy International

Would you like your local government to judge you by your Facebook activity? In a recent study, we investigated how local authorities (Councils) in Great Britain are looking at social media accounts as part of their investigation tactics on issues such as benefits, debt recovery, fraud, environmental investigations, and children’s social care.

Social media platforms are a vast trove of information about individuals and collectives, including their personal preferences, political and religious views, physical and mental health and the identity of their friends and families. Social media monitoring or social media intelligence (SOCMINT) are the techniques and technologies that allow the monitoring and gathering of information on social media platforms such as Facebook and Twitter.

Life-changing decisions could be made on the basis of this intelligence, yet no quality check on the effectiveness of this form of surveillance is currently in place. This has particular consequences and a disproportionate negative impact on certain individuals and communities.

What PI found out

In October 2019, Privacy International sent a Freedom of Information request to every local authority in Great Britain, asking not only whether they had conducted an audit, but also seeking to uncover the extent to which ‘overt’ social media monitoring in particular was being used, and for which local authority functions.

We have analysed 136 responses to our Freedom of Information requests, specifically those that were received by November 2019. All responses are publicly available on the platform “What Do They Know”.

Our investigation has found that:

  • A significant number of local authorities are now using ‘overt’ social media monitoring as part of their intelligence gathering and investigation activities. This substantially outpaces the use of ‘covert’ social media monitoring.
  • If you don’t have good privacy settings, your data is fair game for overt social media monitoring.
  • There is no quality check on the effectiveness of this form of surveillance on decision making.
  • Your social media profile could be used by a local authority, without your knowledge or awareness, in a wide variety of their functions; predominantly intelligence gathering and investigations.

The UK Surveillance Commissioner’s Guidance defines overt social media monitoring as looking at ‘open source’ data, that is, publicly available data, and data where privacy settings are available but not applied. This may include: “List of other users with whom an individual shares a connection (friends/followers); Users’ public posts: audio, images, video content, messages; “likes”, shared posts and events”. According to the Guidance, “[r]epetitive examination/monitoring of public posts as part of an investigation” constitutes instead ‘covert’ monitoring and “must be subject to assessment.”

Who is being targeted?

Everyone is potentially targeted: at some point in our lives, we all interact with local authorities through some of the processes listed above. The difference, however, is that we are all affected differently.

As in many other instances of digitalisation and the use of new technologies, those belonging to already marginalised and precarious groups, who are already subject to additional monitoring and surveillance, are once again bearing the brunt of such practices.

Particular groups of the population are being impacted dramatically by the use of such techniques because they are dependent on, and subject to, the functions of local authorities: for example, individuals receiving social assistance or welfare, as well as migrants.

We have seen similar developments in the migration sector, where governments are resorting to social media intelligence for immigration enforcement purposes. Some of these activities are undertaken directly by governments themselves, but in some instances governments call on companies to provide them with the tools and/or know-how to undertake this sort of activity.

How to protect those most vulnerable

As local authorities in Great Britain and elsewhere seize on the opportunity to use this treasure trove of information about individuals, their use of social media is set to rise, and in the future we are likely to see more sophisticated tools used to analyse this data, automate decision-making, and generate profiles and assumptions.

The collection and processing of personal data obtained from social media as part of local authority investigations and intelligence gathering must be strictly necessary and proportionate to make a fair assessment of an individual. There needs to be effective oversight over the use of social media monitoring, both overt and covert, to ensure that particular groups of people are not disproportionately affected, and that where violations of guidance and policies do occur, they are effectively investigated and sanctioned.

It is urgent to ensure that the necessary and adequate safeguards are in place to protect those in the most vulnerable and precarious positions, where such information could lead to tragic, life-altering decisions such as the denial of welfare support.

Therefore, we urge local authorities to:

  • Refrain from using social media monitoring, and avoid it entirely where they do not have a clear, publicly accessible policy regulating this activity

When exceptionally used:

  • Local authorities should use social media monitoring only if and when in compliance with their legal obligations, including data protection and human rights.
  • Ensure that every time a local authority employee views a social media platform, this is recorded in an internal log (a minimal sketch of such a log entry is given after this list) including, but not limited to, the following information:
    • Date/time of viewing, including duration of viewing of a single page
    • Reason/justification for viewing and/or relevance to internal investigation
    • Information obtained from social platform
    • Why it was considered that the viewing was necessary
    • Pages saved and where saved to
  • Local authorities should develop internal policies creating audit mechanisms, including:
    • The availability of a designated staff member to address queries regarding the prospective use of social media monitoring, as well as her/his contact details;
    • A designated officer to review the internal log at regular intervals, with the power to issue internal recommendations
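
Purely as an illustration (not part of Privacy International's recommendations, nor a description of any existing council system), the kind of log entry described above could be kept as a simple structured record. The sketch below is in Python; the field names are hypothetical and chosen only to mirror the list of information above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class SocialMediaViewingRecord:
    """One entry in a local authority's internal log of social media viewing.

    Illustrative only: the fields mirror the information listed above,
    not any existing council system or statutory schema.
    """
    viewed_at: datetime        # date/time of viewing
    duration: timedelta        # duration of viewing of a single page
    justification: str         # reason/justification and relevance to an internal investigation
    information_obtained: str  # information obtained from the social platform
    necessity: str             # why the viewing was considered necessary
    pages_saved: List[str] = field(default_factory=list)  # pages saved and where they were saved to

# Example: recording one viewing so that a designated officer can review it later.
example = SocialMediaViewingRecord(
    viewed_at=datetime(2019, 11, 5, 14, 30),
    duration=timedelta(minutes=3),
    justification="Relevance to an internal benefits investigation",
    information_obtained="Public post suggesting an undeclared second address",
    necessity="Could not be verified through less intrusive means",
    pages_saved=["profile-page.pdf, saved to the case file drive"],
)
```

Keeping each viewing as a discrete, timestamped record of this kind is what would make the regular review by a designated officer, recommended above, practicable.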

Whilst we may post publicly, we don’t expect local authorities to look at our photos, screenshot our thoughts, and use this information without our knowledge to make decisions that could have serious consequences for our lives.

The growing intrusion by government authorities – without a public and parliamentary debate – also risks impacting what people say online, leading to self-censorship, with a potentially deleterious effect on free speech. We may have nothing to hide, but if we know our local authority is looking at our social media accounts, we are likely to self-censor.

Social media platforms should not be reframed as spaces for the state to freely gather information about us and treat people as suspects.

Read more:

When Local Authorities aren’t your Friends (24.05.2020)
https://privacyinternational.org/long-read/3586/when-local-authorities-arent-your-friends

Social Media Monitoring Freedom of Information Act Request to Local Authorities (24.05.2020)
https://privacyinternational.org/long-read/3585/social-media-monitoring-freedom-information-act-request-local-authorities

The use of social media monitoring by local authorities – who is a target? (24.05.2020)
https://privacyinternational.org/explainer/3587/use-social-media-monitoring-local-authorities-who-target

Is your Local Authority looking at your Facebook likes? (01.05.2020)
https://privacyinternational.org/sites/default/files/2020-05/Is%20Your%20Local%20Authority%20Looking%20at%20your%20Facebook%20Likes_%20May2020_0.pdf

Social Media Monitoring – a batch request (07.10.2019)
https://www.whatdotheyknow.com/info_request_batch/858

Social Media Intelligence (23.10.2017)
https://privacyinternational.org/explainer/55/social-media-intelligence

Security Through Human Rights: New Liberties Report (18.10.2017)
https://www.liberties.eu/en/news/security-through-human-rights-liberties-report/13238

(Contribution by Antonella Napolitano from EDRi member Privacy International)

10 Jun 2020

Cryptocurrency scammers flood Facebook users with manipulative ads

By Metamorphosis

This article was originally published by Metamorphosis in Global Voices.

Scammers using fake Forbes articles and anti-EU disinformation as bait continue to target Facebook users across Europe, the EDRi member Metamorphosis Foundation has warned.

The Skopje-based Metamorphosis Foundation is a civil society organisation from North Macedonia promoting digital rights and media literacy.

Its monitoring of social networks has revealed that scammers continue to use Facebook advertisements masked as links to articles from the reputable Forbes.com, continuing disinformation trends involving not only China, but also European Union members like Sweden.

On 19 May, the Ministry of Interior Affairs of North Macedonia warned citizens that scammers use social networks and e-mail to distribute links misrepresented as articles from Forbes.com to promote the purchase of a supposed new Chinese cryptocurrency.

Citizens who click on the links and provide personal data to the scammers are then targeted by phone calls persuading them to start ‘investing’ by paying installments of $250.

Other manipulation techniques are then deployed to make users increase the fee.

The cyber-crime unit of the Macedonian police claimed the malicious links led to a website hosted in Ukraine, allegedly run by a Russian citizen, in a manner similar to the debunked OneCoin Ponzi scheme run by Bulgarian fraudster Ruja Ignatova, which inflicted over $4 billion in damage worldwide.

Data publicly provided by Facebook about the geographic reach of the advertisements promoting these links suggest they go far beyond the borders of North Macedonia, activists warn.

Manipulative ads help scammers gather personal data from victims

Metamorphosis identified several similar ads that are active on social networks. Users who click on these ads are redirected to addresses such as this one instead of pages on the Forbes.com website.

Bardhyl Jashari, Executive Director of Metamorphosis, explained: “Misleading advertisements continue to target social network users across the world. Using the public data provided by Facebook about the ads targeting the audience based in North Macedonia as a starting point, the Metamorphosis team revealed that the same ads are served in almost all European countries, as well as countries in the Middle East. Scammers use pages about culture, even about cookies (the edible ones), to launch ads that lead the users to web pages and blogs that look almost the same as the ones the Macedonian police warned about.”

This dangerous trend also touches upon another of Metamorphosis’ areas of involvement. Since its founding in 2004, Metamorphosis has been working on promoting child safety online.

Jashari also noted:

“A very worrisome development is that these organised crime networks also use pages aimed at children and teenagers to camouflage their malicious content. For instance, a page branded as a community for the popular game Minecraft (titled Minecraft) had been running ads that continue to disseminate disinformation about Sweden, aimed at users in Russia, Austria, Belgium, but also in Singapore, Qatar and the United Arab Emirates, and dozens of other countries.”

Users clicking on these ads are taken to a page providing an incentive for them to leave their personal data. In the case of Sweden this was disguised as a discount coupon.

While Minecraft has a huge adult following, it is a particularly popular game among children aged between 9 and 11. This practice helps condition future audiences who are particularly susceptible to both disinformation and scamming.

What is Metamorphosis doing to combat these tactics?

In November 2019, Metamorphosis’ Critical Thinking for Media-wise Citizens (CriThink) project warned that scammers benefit from established disinformation narratives about Sweden.

Sponsored Facebook posts lure people who had previously been primed, through right-wing populist propaganda media networks based in North Macedonia, to believe media manipulations about unrest in Sweden and the European Union (EU), originally published by pro-Kremlin media.

In the same manner, these articles promoted fake news claiming that Sweden had introduced a cryptocurrency in opposition to the Euro.

To launch these geo-targeted ads, scammers used a series of pages with general interest topics, including some branded as unofficial fan clubs of Western celebrities like actors Liam Neeson and Anthony Michael Hall.

CriThink, which is an initiative supported by the EU Delegation in North Macedonia, educated local social media users on how to use the transparency features of Facebook pages used by the scammers, in order to flag and report the suspicious pages using the mechanisms provided by the platform.

In order to boost citizen engagement in raising media literacy levels, CriThink articles related to social networks provide instructions on how users can use reporting features to alert administrators about harmful content, ranging from hate speech to scams.

Several weeks later, in December 2019, Facebook informed some of the users who participated in the online action that it had removed the ads reported as scams.

Read more:

Cryptocurrency scammers flood Facebook users with ads for fake Forbes.com articles (29.05.2020)
https://globalvoices.org/2020/05/29/cryptocurrency-scammers-flood-facebook-users-with-ads-for-fake-forbes-com-articles/

Cyber-crime unit of Macedonian police warns about a new scam involving fake Chinese cryptocurrency (19.05.2020)
https://meta.mk/en/macedonian-cyber-police-warns-about-a-new-scam-involving-fake-chinese-cryptocurrency/

The £4bn OneCoin scam: how crypto-queen Dr Ruja Ignatova duped ordinary people out of billions — then went missing (15.12.2019)
https://www.thetimes.co.uk/article/the-4bn-onecoin-scam-how-crypto-queen-dr-ruja-ignatova-duped-ordinary-people-out-of-billions-then-went-missing-trqpr52pq

Disinfo: Crime-infested no-go zones exist in multiple European countries (17.10.2019)
https://euvsdisinfo.eu/report/crime-infested-no-go-zones-exist-in-multiple-european-countries/

(In Macedonian) Дезинформации за Шведска се користат како мамка за корисници на „Фејсбук“ од Северна Македонија (18.11.2019)
https://crithink.mk/dezinformaczii-za-shvedska-se-koristat-kako-mamka-za-korisniczi-na-fejsbuk-od-makedonija/

(Contribution by Filip Stojanovski from EDRi member Metamorphosis)

10 Jun 2020

SHARE’s campaign bears fruit: Google appoints Serbian representatives

By SHARE Foundation

Serbian citizens can now bring their objections and requests regarding Google’s use of their private data to the tech giant’s new representative in the country. Google, as one of the first tech giants to comply with the new Serbian law, wrote a letter to the Commissioner for Information of Public Importance and Personal Data Protection, i.e. Serbia’s Data Protection Authority, on 21 May 2020, stating that its representative would be the Belgrade-based independent law firm BDK Advokati. Over a year ago, SHARE Foundation, a member of the European Digital Rights (EDRi) network, had asked Google and other global tech companies to take this very step and comply more closely with EU regulation.

YouTube, Chrome, Android, Gmail, Maps and many other digital products without which the internet is unimaginable are an important segment of an industry that relies entirely on processing personal data. With significant delay and numerous difficulties, states have begun to bring some order to this field, which directly interferes with basic human rights. The European Union has set this standard by adopting the General Data Protection Regulation (GDPR), while the new Law on Personal Data Protection in Serbia, in place since August 2019, followed this model too.

Although they have been operating in Serbia for a long time, global tech corporations treat most developing countries as territories for the unregulated exploitation of citizens’ data. At the end of May 2019, three months before the new Law on Personal Data Protection began to apply, SHARE sent the aforementioned request to 20 of the biggest tech companies from around the world, reminding them of their obligations towards Serbian citizens and of the parameters of the new national law.

Twitter responded to us by saying that they were working on it. A global platform for booking airline tickets, eSky, also contacted us and appointed their representative in Serbia. In December 2019, when Google and Facebook were dragging their feet on the issue of appointing representatives in the country, SHARE filed misdemeanor charges with the Serbian Commissioner.

Read more:

Open letter to Commissioner for Information of Public Importance and Personal Data Protection (21.05.2020)
https://www.poverenik.rs/images/stories/dokumentacija-nova/razno/GoogleLLCletter-21052020.pdf

(In Serbian) Twitter imenuje predstavnika u Srbiji (17.07.2019)
https://www.sharefoundation.info/sr/odgovorio-nam-twitter/

SHARE files complaints against Facebook and Google (05.12.2019)
https://www.sharefoundation.info/en/share-files-complaints-against-facebook-and-google/

SHARE calls Facebook and Google to appoint their representatives in Serbia (21.05.2019)
https://www.sharefoundation.info/en/share-calls-facebook-and-google-to-appoint-their-representatives-in-serbia/

Organisations from across Europe insist on a transparent appointment of the Commissioner in Serbia (04.12.2018)
https://www.sharefoundation.info/en/organisations-from-across-europe-insist-on-a-transparent-appointment-of-the-commissioner-in-serbia/

(Contribution by Bojan Perkov from EDRi member SHARE Foundation)

08 Jun 2020

Open Letter: ending gag lawsuits in Europe – protecting democracy and fundamental rights

By EDRi

The European Digital Rights network joined 118 civil society organisations from across the globe in signing an open letter (the latest act of a longstanding movement) addressing the need to end gag lawsuits that threaten the public interest by allowing powerful actors to silence those that would speak against them.

Read the letter here or find it below:

The problem: gag lawsuits against public interest defenders

The EU must end gag lawsuits used to silence individuals and organisations that hold those in positions of power to account. Strategic Lawsuits Against Public Participation (SLAPPs) are lawsuits brought forward by powerful actors (e.g. companies, public officials in their private capacity, high profile persons) to harass and silence those speaking out in the public interest. Typical victims are those with a watchdog role, for instance: journalists, activists, informal associations, academics, trade unions, media organisations and civil society organisations.

Recent examples of SLAPPs include PayPal suing SumOfUs for a peaceful protest outside PayPal’s German headquarters; co-owners of Malta’s Satabank suing blogger Manuel Delia for a blog post denouncing money laundering at Satabank; and Bollore Group suing Sherpa and ReAct in France to stop them from reporting human rights abuses in Cameroon. In Italy, more than 6,000 defamation lawsuits, or two-thirds of those filed against journalists and media outlets annually, are dismissed as meritless by a judge. When Maltese journalist Daphne Caruana Galizia was brutally killed, there were 47 SLAPPs pending against her.

SLAPPs are a threat to the EU legal order, and, in particular:

  • A threat to democracy and fundamental rights. The EU is founded on the rule of law and respect for human rights. SLAPPs impair the right to freedom of expression, to public participation and to assembly of those who speak out in the public interest, and have a chilling effect on the exercise of these rights by the community at large.
  • A threat to access to justice and judicial cooperation. Cross-border judicial cooperation relies on the principles of effective access to justice across the Union and mutual trust between legal systems. That trust must be based on the legally enforceable upholding of common values and minimum standards. To the extent that they distort and abuse the system of civil law remedies, SLAPPs undermine the mutual trust between EU legal systems: member states must be confident that rulings issued by other member states’ courts are not the result of abusive legal strategies and are adopted as the outcome of genuine proceedings.
  • A threat to the enforcement of EU law, including in connection with the internal market and the protection of the EU budget. The effective enforcement of EU law, including the proper functioning of the internal market, depends on the scrutiny of the behaviour of individual entities by the EU, member states and – crucially – informed individuals. Watchdogs, be they media or civil society actors, play a key enforcement role. Therefore, the absence of a system which safeguards public scrutiny is a threat to the enforcement of EU law. The same reasoning applies to the management of EU programmes and budget, which cannot be monitored through the sole vigilance of the European Commission.
  • A threat to freedom of movement. The absence of rules to protect watchdogs from SLAPP has an impact on the exercise of the Treaty’s fundamental freedoms, since it affects the ability of media, civil society organisations and information services providers to confidently operate in jurisdictions where the risk of SLAPPs is higher, and discourages people from working for organisations where they can be the target of SLAPPs.

The solution: an EU set of anti-SLAPP measures

The EU can and must end SLAPPs by adopting the following complementary measures to protect all those affected by SLAPPs:

1. An anti-SLAPP directive

An anti-SLAPP directive is needed to establish a Union-wide minimum standard of protection against SLAPPs, by introducing exemplary sanctions to be applied to claimants bringing abusive lawsuits, procedural safeguards for SLAPP victims – including special motions to contest the admissibility of certain claims and/or rules shifting the burden to the plaintiff to demonstrate a reasonable probability of succeeding in such claims – as well as other types of preventive measures. The Whistle-Blower Directive sets an important precedent protecting those who report a breach of Union law in a work-related context. Now the EU must ensure a high standard of protection against gag lawsuits for everyone who speaks out – irrespective of the form and the context – in the public interest.

The legal basis for an anti-SLAPP directive is to be found in multiple provisions of the Treaty; for example, Article 114 TFEU on the proper functioning of the internal market, Article 81 TFEU on judicial cooperation and effective access to justice and Article 325 TFEU on combating fraud related to EU programmes and budgets.

2. The reform of Brussels I and Rome II Regulations
Brussels I Regulation (recast) contains rules which grant claimants the ability to choose where to make a claim. This must be amended to end forum shopping in defamation cases, which forces defendants to hire and pay for defence in countries whose legal systems are unknown to them and where they are not based. This is beyond the means of most and falls foul of the principles of fair trial and equality of arms.

Rome II Regulation does not regulate which national law will apply to a defamation case. This allows claimants to select the most favourable substantive law and therefore leads to a race to the bottom. Today, victims may be subject to the lowest standard of freedom of expression applicable to their case.

3. Support all victims of SLAPPs
Funds are needed to morally and financially support all victims of SLAPPs, especially with legal defence. Justice Programme funds should be used to train judges and practitioners, and a system to publicly name and shame the companies that engage in SLAPPs, for example in an EU register, should be created.

Finally, the EU must ensure that the scope of anti-SLAPP measures includes everybody affected by SLAPPs, including journalists, activists, trade unionists, academics, digital security researchers, human rights defenders, media and civil society organisations, among others.

This paper was signed by 119 media and civil society organisations.

You can find the original letter and the full list of signatories here.

04 Jun 2020

EDRi submits response to the European Commission AI consultation – will you?

By EDRi

Today, 4th June 2020, European Digital Rights (EDRi) submitted its response to the European Commission’s public consultation on artificial intelligence (AI). In addition, EDRi released its recommendations for a fundamental rights-based Artificial Intelligence Regulation.

AI is a growing concern for all who care about digital and human rights. AI systems have the ability to exacerbate mass surveillance and intrusion into our personal lives, reflect and reinforce some of the deepest societal inequalities, fundamentally alter the delivery of public and essential services, undermine data protection legislation, and disrupt the democratic process.

In Europe, we have already seen the negative impacts of automated systems at play at the border, in predictive policing systems which only increase over-policing of racialised communities, in ‘fraud detection’ systems which target poor, working class and migrant areas, and countless more examples. Read more in our explainer.

Therefore, EDRi calls for the European Commission to set clear red-lines for impermissible use, ensure democratic oversight, and include the strongest possible human rights protection.

We encourage all people, collectives and organisations to respond to the consultation and make sure these issues are addressed. Need help answering the consultation? Read EDRi’s answering guide for the public here.

Will you make your voice heard in a crucial moment for the future of our societies? Submit your own response to the consultation online here.

Read more:

EDRi Consultation response: European Commission consultation on Artificial Intelligence (04.06.2020)
https://edri.org/wp-content/uploads/2020/06/AI_EDRiConsultationResponse.pdf

EDRi Recommendations for a fundamental rights-based Artificial Intelligence Regulation: addressing collective harms, democratic oversight and impermissible use (04.06.2020)
https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf

EDRi Explainer AI and fundamental rights: How AI impacts marginalised groups, justice and equality (04.06.2020)
https://edri.org/wp-content/uploads/2020/06/AI_EDRiExplainer.pdf

EDRi Answering Guide to the European Commission consultation on AI (04.06.2020)
https://edri.org/wp-content/uploads/2020/06/AI_EDRiAnsweringGuide.pdf

04 Jun 2020

Can the EU make AI “trustworthy”? No – but they can make it just

By EDRi

Today, 4 June 2020, European Digital Rights (EDRi) submitted its answer to the European Commission’s consultation on the AI White Paper. On top of our response, in our additional paper we outline recommendations to the European Commission for a fundamental rights-based AI regulation. You can find our consultation response, recommendations paper, and answering guide for the public here.

How to ensure “trustworthy AI” has been hotly debated since the European Commission launched its White Paper on AI in February this year. Policymakers and industry have hosted numerous conversations about “innovation”, “Europe becoming a leader in AI”, and promoting a “Fair AI”.

Yet, a “fair” or “trustworthy” artificial intelligence seems a long way off. As governments, institutions and industry swiftly move to incorporate AI into their systems and decision-making processes, grave concerns remain as to how these changes will impact people, democracy and society as a whole.

EDRi’s response outlines the main risks AI poses for people, communities and society, and sets out recommendations for an improved, truly ‘human-centric’ legislative proposal on AI. We argue that the EU must reinforce the protections already embedded in the General Data Protection Regulation (GDPR), outline clear legal limits for AI by focusing on impermissible use, and foreground principles of collective impact, democratic oversight, accountability, and fundamental rights. Here’s a summary of our main points.

Put people before industrial policy

A ‘human centric’ approach to AI requires that considerations of safety, equality, privacy, and fundamental rights are the primary factors underpinning decisions as to whether to promote or invest in AI.

However, the European Commission’s White Paper proposal takes as its point of departure the inherent economic benefits of promoting AI, particularly in the public sector. Promoting AI in the public sector as a whole, without requiring scientific evidence to justify the need for or purpose of such applications in potentially harmful situations, is likely to have the most direct consequences on everyday people’s lives, particularly on marginalised groups.

Despite wide-ranging applications that could advance our societies (such as some uses in the field of health), we have also seen the vast negative impacts of automated systems at play at the border, in predictive policing systems which exacerbate over-policing of racialised communities, in ‘fraud detection’ systems which target poor, working class and migrant areas, and countless more examples. All such examples highlight the potentially devastating consequences AI systems can have in the public sector, contesting the case for ‘promoting the uptake of AI’. These examples highlight the need for AI regulation to be rooted in a human-centric approach.

The development of artificial intelligence technology offers huge potential opportunities for improving our economies and societies, but also extreme risks. Poorly-designed and governed AI will exacerbate power imbalances and inequality, increase discrimination, invade privacy and undermine a whole host of other rights. EU legislation must ensure that cannot happen. Nobody’s rights should be sacrificed on the altar of innovation.

said Chris Jones, Statewatch

Address collective harms of AI

The vast potential scale and impact of AI systems challenge existing conceptions of harm. Whilst in many ways we can view the challenges posed by AI as fundamental rights issues, often the harms perpetrated are much broader, disadvantaging communities, the economy, democracy and entire societies. From the impending threat of mass surveillance as a result of biometric processing in publicly accessible spaces, to the use of automated systems or ‘upload filters’ to moderate content on social media, to severe disruptions to the democratic process, we see that the impact goes far beyond the level of the individual. One specificity of regulating AI is the need to address societal-level harms.

Prevent harms by focusing on impermissible use

Just as the problems with AI are collective and structural, so must be the solutions. The European Commission’s White Paper outlines some safeguards to address ‘high-risk’ AI, such as training data to correct for bias and ensuring human oversight. Whilst these safeguards are crucial, they will not address the irreparable harms which will result from a number of uses of AI.

“The EU must move beyond technical fixes for the complex problems posed by AI. Instead, the upcoming AI regulation must determine the legal limits, impermissible uses or ‘red-lines’ for AI applications. This is a necessary step for a people-centered, fundamental rights-based AI”

says Sarah Chander, Senior Policy Adviser, EDRi.

The EDRi network lists some of the impermissible uses of AI:

  • indiscriminate biometric surveillance and biometric capture and processing in public spaces [1]
  • use of AI to solely determine access to or delivery of essential public services (such as social security, policing, migration control)
  • uses of AI which purport to identify, analyse and assess emotion, mood, behaviour, and sensitive identity traits (such as race, disability) in the delivery of essential services
  • predictive policing
  • autonomous lethal weapons and other uses which identify targets for lethal force (such as law and immigration enforcement)

“The EU must ensure that states and companies meet their obligations and responsibilities to respect and promote human rights in the context of automated decision-making systems. EU institutions and national policymakers must explicitly recognise that there are legal limits to the use and impact of automation. No safeguard or remedy would make indiscriminate biometric surveillance or predictive policing acceptable, justified or compatible with human rights”

said Fanny Hidvegi, Europe Policy Manager at Access Now

Require democratic oversight for AI in the public sphere

The rapidly increasing deployment of AI systems presents a major governance issue. Due to the (designed) opacity of the systems, the complete lack of transparency from governments when such systems are deployed for use in public, essential functions, and the systematic lack of democratic oversight and engagement, AI is furthering the ‘power asymmetry between those who develop and employ AI technologies, and those who interact with and are subject to them’ [2].

As a result, decisions impacting public services will be more opaque, increasingly privately owned, and even less subject to democratic oversight. It is vital that the EU’s regulatory proposal on AI addresses this by implementing mandatory measures of democratic oversight for the procurement and deployment of AI in the public sector and essential services. Moreover, the EU must explore methods of direct public engagement on AI systems. In this regard, authorities should be required to specifically consult marginalised groups likely to be disproportionately impacted by automated systems.

Implement the strongest possible fundamental rights protections

Regulation on AI must reinforce, rather than replace, the protections already embedded in the General Data Protection Regulation (GDPR). The European Commission has the opportunity to complement these protections with safeguards for AI. To put people first and provide the strongest possible protections, all systems should complete mandatory human rights impact assessments. This assessment should evaluate the collective, societal, institutional and governance implications the system poses, and outline adequate steps to mitigate them.

“The deployment of such systems for predictive purposes comes with high risks of human rights violations. Introducing ethical guidelines and standards for the design and deployment of these tools is welcome, but not enough. Instead, we need the European Union and Member States to ensure compliance with the applicable regulatory frameworks, and draw clear legal limits to ensure AI is always compatible with fundamental rights.”

says Eleftherios Chelioudakis – Homo Digitalis

EDRi’s position calls for fundamental rights to be prioritised in the regulatory proposal for all AI systems, not only those categorised as ‘high-risk’. We argue AI regulation should avoid creating loopholes or exemptions based on sector, size of enterprise, or whether or not the system is deployed in the public sector.

“It is crucial for the EU to recognize that the adoption of AI applications is not inevitable. The design, development and deployment of systems must be tested against human rights standards in order to establish their appropriate and acceptable use. Red lines are thus an important piece of the AI governance puzzle. Recognizing impermissible use at the outset is particularly important because of the disproportionate, unequal and sometimes irreversible ways in which automated decision making systems impact societies.”

said Vidushi Marda, Senior Programme Officer, at ARTICLE 19

The rapid uptake of AI will fundamentally change our society. From a human rights’ perspective, AI systems have the ability to exacerbate surveillance and intrusion into our personal lives, fundamentally alter the delivery of public and essential services, vastly undermine vital data protection legislation, and disrupt the democratic process.

For some, AI will mean reinforced, deeper harms as such systems feed and embed existing processes of marginalisation. For all, the route to remedies, accountability, and justice will be ever-more unclear, as this power asymmetry further shifts to private actors, and public goods and services will be not only automated, but privately owned.

There is no “trustworthy AI” without clear red-lines for impermissible use, democratic oversight, and a truly fundamental rights-based approach to AI regulation. The European Union’s upcoming legislative proposal on artificial intelligence (AI) is a major opportunity to change this: to protect people and democracy from the escalating economic, political and social issues posed by AI.

Footnotes:

1. EDRi (2020). ‘Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and Member States’, https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

2. Council of Europe (2019). ‘Responsibility and AI’, DGI(2019)05, Rapporteur: Karen Yeung, https://rm.coe.int/responsability-and-ai-en/168097d9c5

Read more:

EDRi Consultation Response: European Commission Consultation on the White Paper on Artificial Intelligence
https://edri.org/wp-content/uploads/2020/06/AI_EDRiConsultationResponse.pdf

EDRi Recommendations for a fundamental rights-based Artificial Intelligence Regulation: Addressing collective harms, democratic oversight and impermissible use
https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf

Access Now Consultation Response: European Commission Consultation on the White Paper on Artificial Intelligence
https://www.accessnow.org/EU-white-paper-consultation

Bits of Freedom (2020). ‘Facial recognition: A convenient and efficient solution, looking for a problem?’
https://www.bitsoffreedom.nl/2020/01/29/facial-recognition-a-convenient-and-efficient-solution-looking-for-a-problem/

EDRi (2020). ‘Ban Biometric Mass Surveillance: A set of fundamental rights demands for the European Commission and Member States’
https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

Privacy International and Article 19 (2018). ‘Privacy and Freedom of Expression in the Age of Artificial Intelligence’
https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf

27 May 2020

COVID-Tech: Surveillance is a pre-existing condition

By Guest author

In EDRi’s series on COVID-19, COVIDTech, we will explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the pandemic. Each post in this series will tackle a specific issue about digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised that “measures taken should not lead to discrimination of any form, and governments must remain vigilant to the disproportionate harms that marginalised groups can face.” In this third post of the series, we look at surveillance, situating the measures in their longer-term trajectory, particularly the surveillance of marginalised communities.

One minor highlight in this otherwise bleak public health crisis is that privacy is trending. Now more than ever, conversations about digital privacy are reaching the general public. This is a vital development as states and private actors pose ever greater threats to our digital rights in their responses to COVID-19. The more they watch us, the more we need to watch them.

One concern, however, is that these debates have siphoned this new attention to privacy into a highly technical, digital realm. The debate is dominated by the mechanics of digital surveillance, whether we should have centralised or decentralised contact tracing apps, and how Zoom traces us as we work, learn and do yoga at home.

Although important, this is only a partial framing of how privacy and surveillance are experienced during the pandemic. Less prominently featured are the various other privacy infringements being ushered in as a result of COVID-19. We should not forget that for many communities, surveillance is not a COVID-19 issue – it was already there.

The other sides of COVID surveillance

Very real concerns about digital measures proposed as pandemic responses should not overshadow the broader context of mass-scale surveillance emerging before our eyes. Governments across Europe are increasingly rolling out measures to physically track the public, via telecommunications and other data, without explicit reference to how this will impede the spread of the virus, or when the use and storage of this data will end.

We are also seeing the emergence of bio-surveillance dressed in a public health response’s clothing. From the Polish government’s app mandating the use of geo-located selfies, to talks of using facial biometrics to create immunity passports to facilitate the return of workers in the UK, governments have used, and will continue to use, the pandemic as cover to get into our homes, and closer to us.

Yet physical surveillance techniques feature less prominently in media coverage. Such measures are – in many European countries – coupled with heightened punitive powers for law enforcement. Police have deployed drones in France, Belgium and Spain, and communities in cities across Europe are feeling the pressure of an increased police presence. Heightened measures of physical surveillance cannot be accepted at face value or ignored. Instead, they must be viewed in tandem with new digital developments.

Who can afford privacy?

These measures are not neutrally harmful. In unequal societies, surveillance will always target racialised [1] people, migrants, and the working classes. These people bear the burden of heightened policing powers and punitive ‘public health’ enforcement, being more likely to need to leave the house for work, take public transport, live in over-policed neighbourhoods, and, in general, be perceived as suspicious, criminal, and necessitating surveillance.

This is a privacy issue as much as it is an issue of inequality. Except that, for some, the consequences of intensified surveillance under COVID-19 mean heightened exposure to the virus through direct contact with police, increased monitoring of their social media, the anxiety of constant sirens and, in the worst cases, the real bodily harm of police brutality.

In the last few days, Romani communities in Slovakia reported numerous cases of police brutality, some against children playing outside. Black, brown and working class communities across Europe are experiencing the physical and psychological effects of being watched even more than normal. In Brussels, where EDRi is based, a young man has died in contact with the police during raids.

This vulnerability is economic, too – for many, privacy is a scarce commodity. It is purchased by those who live in affluent neighbourhoods, by those with ‘work from home’ jobs. Those who cannot afford privacy in this more basic sense will, unfortunately, not be touched by debates about contact tracing. For many, digital exclusion means that measures such as contact-tracing apps are completely irrelevant. Worse, if future measures in response to COVID-19 are designed with the assumption that we all use smartphones, or have identity documents, they will be immensely harmful.

These measures are being portrayed as ‘new’, at least in our European ‘liberal’ democracies. But for many, surveillance is not new. Governmental responses to the virus have simply brought to the general public a reality that has been reserved for people of colour and other marginalised communities for decades. Prior to COVID-19, European governments were already deploying technology and other data-driven tools to identify, ‘risk-score’ and experiment on groups at the margins, whether by way of predicting crime, forecasting benefit fraud, or assessing whether or not asylum applicants are telling the truth based on their facial movements.

We need to integrate these experiences of surveillance into the mainstream privacy debate. These conversations have been sidelined or explained away with the logic of individual responsibility. For example, last year, in a public debate on technology and the surveillance of marginalised communities, one participant swiftly moved the conversation away from police profiling and towards privacy literacy. They asked the room of anti-racist activists: “Does everybody here use a VPN?”

Without a holistic picture of how surveillance affects people differently – the vulnerabilities of communities and the power imbalances that produce this – we will easily fall into the trap that quick fix solutions can guarantee our privacy, and that surveillance can be justified.

Is surveillance a price worth paying?

If we don’t root our arguments in people’s real-life experiences of surveillance, not only do we devalue the right to privacy for some, but we also risk losing the argument to those who believe that surveillance is a price worth paying.

This narrative is a direct consequence of an abstract, technical and neutral framing of surveillance and its harms. Through this lens, infringements of privacy are minor, necessary evils. As a result, privacy will always lose in the false ‘privacy vs health’ trade-off. We should challenge the trade-off itself, but we can also ask: who really will pay the price of surveillance? How do people experience breaches of privacy?

Another question we need to ask is: who profits from surveillance? Numerous companies have shown their willingness to enter public-private alliances, using COVID-19 as an opportunity to market surveillance-based ‘solutions’ to issues of health (often with dubious claims). Yet, again, this is not new – companies like Palantir, contracted by the UK government to process confidential health data during COVID-19, have a much longer-standing role in the surveillance of migrants and people of colour, and in facilitating deportations. Other large tech companies will use COVID-19 to continue their expansion into areas like ‘digital welfare’. Here, deeply uneven power relationships will be further cemented with the introduction of digitalised tools, making them harder to challenge and posing ever greater risks to those who rely on the state. If unchallenged, this climate of techno-solutionism will only increase the risk of new technology testing and data extraction from marginalised groups for profit.

A collective privacy

There is a danger in viewing surveillance as exceptional, a feature of COVID-19 times. It suggests that protecting privacy is only newsworthy when it concerns ‘everyone’ or ‘society as a whole’. What that means, though, is that we don’t actually mind if a few don’t have privacy.

Surveillance measures and other threats to privacy have countless times been justified in the name of the ‘public good’. Privacy – framed in abstract, technical and individualistic terms – simply cannot compete, and ever greater surveillance will be justified. This surveillance will be digital, physical and everything in between, and profits will be made. Alternatively, we can fight for privacy as a collective vision – something everybody should have. Collective privacy is not exclusive or abstract – it means looking further than how individuals might adjust their privacy settings, or how privacy can be guaranteed in contact tracing apps.

A collective vision of privacy means contesting ramped-up police monitoring and the use of marginalised groups as guinea pigs for new digital technologies, as well as ensuring new technologies have adequate privacy protections. It also requires us to ask: who will be the first to feel the impact of surveillance? How do we support them? To answer these questions, we need to recognise surveillance in all its manifestations, including long before the outbreak of COVID-19.

Original illustration by Miguel Brieva, licensed under CBNA 2020, La Imprenta, included in “Que No Haya Sido en Vano”.

Read more:

Telco data and Covid-19: A primer (21.04.20)
https://privacyinternational.org/explainer/3679/telco-data-and-covid-19-primer

Slovak police officer said to have beaten five Romani children in Krompachy settlement and threatened to shoot them (29.04.20)
http://www.romea.cz/en/news/world/slovak-police-officer-said-to-have-beaten-five-romani-children-in-krompachy-settlement-and-threatened-to-shoot-them

Amid COVID-19 Lockdown, Justice Initiative Calls for End to Excessive Police Checks in France (27.03.20)
https://www.justiceinitiative.org/newsroom/amid-covid-19-lockdown-justice-initiative-calls-for-end-to-excessive-police-checks-in-france

Digital divide ‘isolates and endangers’ millions of UK’s poorest (28.04.20)
https://www.theguardian.com/world/2020/apr/28/digital-divide-isolates-and-endangers-millions-of-uk-poorest

The EU is funding dystopian Artificial Intelligence projects (22.01.20)
https://www.euractiv.com/section/digital/opinion/the-eu-is-funding-dystopian-artificial-intelligence-projects

A Price Worth Paying: Tech, Privacy and the Fight Against Covid-19 (24.04.20)
https://institute.global/policy/price-worth-paying-tech-privacy-and-fight-against-covid-19

COVID-Tech: Emergency responses to COVID-19 must not extend beyond the crisis (15.04.20)
https://edri.org/emergency-responses-to-covid-19-must-not-extend-beyond-the-crisis/

COVID-Tech: COVID infodemic and the lure of censorship (13.04.2020)
https://edri.org/covid-infodemic-and-the-lure-of-censorship/

Footnotes

  1. This term refers to racial, ethnic and religious minorities, emphasising that racialisation is a structural process inflicted on people, groups and communities.

(Contribution by Sarah Chander, EDRi senior policy advisor)

27 May 2020

Competition law: Big Tech mergers, a dominance tool

By Laureline Lemoine

This is the third article in a series dealing with competition law and Big Tech. The aim of the series is to look at what competition law has achieved when it comes to protecting our digital rights, where it has failed to deliver on its promises, and how to remedy this. Read the first article on the impact of competition law on your digital rights here and the second article on what to do against Big Tech’s abuse here.

One way Big Tech has been able to achieve a dominant position in our online life is through mergers and acquisitions. In recent years, the five biggest tech companies (Amazon, Apple, Alphabet – the parent company of Google – Facebook and Microsoft) spent billions to strengthen their position through acquisitions that shaped our digital environment. Notorious acquisitions that made headlines include Facebook/WhatsApp, Facebook/Instagram, Microsoft/LinkedIn, Google/YouTube and, more recently, Amazon/Twitch.

Beyond infamous social media platforms and big deals, Big Tech companies also acquire lesser-known companies and start-ups, which also contribute greatly to their growth. While not making big, newsworthy acquisitions, Apple still “buys a company every two to three weeks on average”, according to its CEO. Google-Alphabet has acquired over 250 companies since 2001, while Facebook has acquired over 90 since 2007. Big Tech’s intensive acquisition policy particularly applies to artificial intelligence (AI) start-ups. This is worrying because reducing competitors also means reducing diversity, leaving Big Tech in charge of developing these technologies at a time when AI technologies are increasingly used in decisions affecting individuals and are known to be susceptible to bias.

Big Tech’s intensive acquisition policy can serve different goals, sometimes at the same time. These companies acquire competitors who were offering, or could have offered, consumers an alternative, in order to eliminate or shelve them (“killer acquisitions”), to consolidate a position in the same or a neighbouring market, or to acquire their technical or human skills (“talent acquisitions”). See for example this overview of Google and Facebook’s acquisitions.

And in times of economic trouble, Big Tech lurks all the more. In the US, Senator Warren wants to introduce a moratorium on COVID-era acquisitions.

Big Tech’s mergers are mostly unregulated

While mergers and acquisitions are part of business life, the issue is that most of Big Tech’s acquisitions are not subject to any control. And the few that are reviewed have been authorised without conditions. This has led to debates on the state of competition law: are the current rules fit for today’s age of data-driven acquisitions and technology takeovers?

While some have already called for a ban on acquisitions by certain companies, others are debating the thresholds set in competition law that trigger review by competent authorities, as well as, more fundamentally, the criteria used to review mergers.

The issue with thresholds is that they depend on monetary turnover, which many companies and start-ups do not reach, either because they have not yet monetised their innovations or because their value is not reflected in their turnover but, for example, in their data trove. Despite the targets’ low turnovers, Facebook was still willing to spend 1 billion and 19 billion dollars for, respectively, Instagram and WhatsApp. These data-driven mergers allowed these companies’ data sets to be aggregated, increasing Facebook’s (market) power.

The French competition authority suggests, for example, introducing an obligation to inform the EU and/or national competition authorities of all mergers implemented in the EU by “structuring” companies. These “structuring” companies would be defined according to clear, objective criteria, and in cases of risk the authorities would ask these players to notify the mergers for review.

However, although the acquisition of WhatsApp by Facebook was reviewed by the European Commission thanks to a referral from national competition authorities, the operation was still authorised. This points to another issue: the place of data protection and privacy in merger control. Competition authorities assume that, since there is a data protection framework, data protection rights are respected and individuals are exercising their rights and choices. But this assumption does not take into account the reality of the power imbalance between users and Big Tech. In this regard, some academics, such as Orla Lynskey, suggest solutions such as increased cooperation between competition, consumer and data protection authorities in order to understand and examine the actual barriers to consumer choice in data-driven markets. Moreover, where it is found that consumers value data privacy as a dimension of quality, the competitive assessment should reflect whether a given operation would deteriorate that quality.

A wind of change might already be coming from the US, as the Federal Trade Commission issued “Special Orders” last February to the five Big Tech companies, “requiring them to provide information about prior acquisitions not reported to the antitrust agencies”, including how acquired data has been treated.

Google/Fitbit: the quest for our sensitive data

The debate recently resurfaced when Google’s proposed acquisition of Fitbit was announced. Immediately, a number of concerns were raised, both in terms of competition and of privacy (see for example the European Consumer Organisation BEUC, and the Electronic Frontier Foundation (EFF)’s concerns). From a fundamental rights perspective, the most worrying issue lies in the fact that Google would be acquiring Fitbit’s health data. As Privacy International warns, “a combination of Google / Alphabet’s potentially extensive and growing databases, user profiles and dominant tracking capabilities with Fitbit’s uniquely sensitive health data could have pervasive effects on individuals’ privacy, dignity and equal treatment across their online and offline existence in future.”

Such concerns are also shared beyond civil society, as the announcement led the European Data Protection Board to issue a statement, warning that “the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”

It is a fact that Google cannot be trusted with our personal data. As well as having a long history of competition and data protection infringements, Google is controversially pushing into the healthcare market and is already breaking patients’ trust.

Beyond these concerns, this operation will be an opportunity for the European Commission to adopt a new approach after the Facebook/WhatsApp debacle. Google is acquiring Fitbit for its data, and the competitive assessment should reflect that. Moreover, the Commission should use this case as an opportunity to consult with consumer and data protection authorities.

Read more:

Google wants to acquire Fitbit, and we shouldn’t let it! (13.11.19)
https://privacyinternational.org/news-analysis/3276/google-wants-acquire-fitbit-and-we-shouldnt-let-it

GOOGLE-FITBIT MERGER: Competition concerns and harms to consumers (07.07.20)
http://www.beuc.eu/publications/beuc-x-2020-035_google-fitbit_merger_competition_concerns_and_harms_to_consumers.pd

Considering Data Protection in Merger Control Proceedings (06.06.18)
https://one.oecd.org/document/DAF/COMP/WD(2018)70/en/pdf

Competition law: what to do against Big Tech’s abuse? (01.04.2020)
https://edri.org/competition-law-what-to-do-against-big-tech-abuse/

The impact of competition law on your digital rights (19.02.2020)
https://edri.org/the-impact-of-competition-law-on-your-digital-rights/

(Contribution by Laureline Lemoine, EDRi senior policy advisor)

27 May 2020

More than the sum of our parts: a strategy for the EDRi Network

By Claire Fernandez

It took over a year: from an EDRi members’ survey in early 2019 to the vote by the (online) General Assembly of members at the end of April 2020. In those months we held workshops, webinars, calls, several rounds of comments, draft iterations and about 50 consultations. We won’t lie, it was a lengthy, challenging and resource-consuming process. But it was worth it: we are proud and excited to announce the adoption of the EDRi Network 2020-2024 Strategy (link to summary).

Throughout the process, we learned a great deal about the context EDRi operates in and how the network is situated in European societies. We also learned how strategic planning processes can unveil larger questions about a network’s identity and health, and about what brings people together.

Values vs practices

There are many diverse visions of EDRi and of what a strategy is. The EDRi network comprises a wide-ranging constellation of distinct voices. There is no ‘one size fits all’ narrative that encompasses some of the most complex issues. Some, like Richard D. Bartlett, would argue that people would rather align around a community of practices than around shared ‘values’. In EDRi’s case, what practices bring us together? ‘EDRis’ share a passion for working in a community based on expertise, trust and hard work. We therefore worked to strike a balance and design a strategy that would give an overarching sense of common purpose and direction while leaving enough space for people to carry on with their work.

The strategy

It feels daring and risky to put our vision and assumptions on paper and to boil down what EDRi is all about. The strategy starts by highlighting the problems EDRi faces and conveying a sense of urgency for action. While technologies represent opportunities, the near-total digitisation and permanent recording of our lives pose a significant risk to our autonomy and to our democracies.

A significant piece of the strategy is the power analysis, which describes the context in which EDRi operates. Our world is characterised by power asymmetries between state and private actors on the one hand, and people on the other. These power imbalances threaten democracy and shape people’s behaviour. There is a lot to be done to change the power structures that allow for injustice and human rights violations in the digital age, and EDRi will not succeed alone. We play a contributing role based on our mission, identity and strengths as a digital rights network. We aim for a world in which people live with dignity and vitality, and for a fair and open digital environment that enables everyone to flourish and thrive to their fullest potential. This is part and parcel of many other social justice causes, as mobilisation and democratic change are highly dependent on technologies.

For EDRi that means that we will work in the next five years to influence decision-makers to regulate and change surveillance-based practices.

What’s next?

Now that our shared vision and purpose are articulated for a range of audiences, implementation work can start. In the coming months, our work as a network will focus on human rights-based responses to the COVID-19 pandemic, on meaningful platform regulation and on requesting bans on invasive and risky biometric technologies.

A strategy is a frame, the start of a process rather than a document. We will therefore need to test our assumptions, reflect, iterate and build trust to advance digital rights for all. EDRi’s mission is ambitious. To succeed, we need a healthy network, fierce EDRi member organisations and empowered people. Our vehicle for change is a sustainable and resilient field that combats burnout and toxicity and relies on both personal relationships and professional processes.

The pandemic is an absolute turning point that marks the beginning of a different era. It can leave us feeling vulnerable and afraid for ourselves and our loved ones, but it also reminds us that we are part of a broader community. What better time than this crisis for a new beginning, for EDRi and the societies we live in, to create a world of dignity in the digital age?

Read more:

Strategy summary
https://edri.org/wp-content/uploads/2020/05/EDRi_Strategy_Summary.pdf

EDRi calls for fundamental rights-based responses to COVID-19 (20.03.20)
https://edri.org/covid19-edri-coronavirus-fundamentalrights/

DSA: Platform Regulation Done Right (09.04.20)
https://edri.org/dsa-platform-regulation-done-right/

Ban biometric mass surveillance! (13.05.20)
https://edri.org/blog-ban-biometric-mass-surveillance/
