In our ongoing work on technology and migration, we examine the impacts of the current COVID-19 pandemic on the rights of people on the move and the increasingly worrying use of surveillance technology and AI at the border and beyond.
Refugees, immigrants, and people on the move have long been linked with bringing disease and illness. People crossing borders, whether by force or by choice, are often talked about in apocalyptic terms like ‘flood’ or ‘wave,’ underscored by growing xenophobia and racism. Not only are these links blatantly incorrect, they also legitimise far-reaching state incursions and increasingly hardline policies of surveillance and techno-solutionism to manage migration.
These practices become all the more apparent in the current global fight against the COVID-19 pandemic.
In a matter of days, we saw Big Tech present a variety of ‘solutions’ for fighting the coronavirus sweeping the globe. Coupled with extraordinary state powers, the incursion of the private sector leaves open the possibility of grave human rights abuses and far-reaching effects on civil liberties, particularly for communities on the margins. While emergency powers can be legitimate if grounded in science and the need to protect health and safety, history shows that states commit abuses in times of exception. New technologies can often facilitate these abuses, particularly against marginalised communities.
As more and more states move towards a model of bio-surveillance to contain the spread of the pandemic, we are seeing an increase in tracking, automated drones, and other types of technologies developed by the private sector purporting to help manage migration and stop the spread of the virus. However, if previous use of technology is any indication, refugees and people on the move will be disproportionately targeted. Once tools like virus-killing robots, cellphone tracking, and ‘artificially intelligent thermal cameras’ are turned on, they will be used against people crossing borders, with far-reaching ramifications. Our research has repeatedly shown that technological experiments on migration are often discriminatory, breach privacy, and even endanger lives.
Pandemic responses are political. Making people on the move more trackable and detectable justifies the use of more technology and data collection in the name of public health and national security. Even before the current pandemic, we were already documenting a worldwide roll-out of migration “techno-solutionism.” These technological experiments occur at many points in a person’s migration journey. Well before you even cross a border, Big Data analytics are used to predict your movements and biometric data is collected about refugees. At the border, AI lie detectors and facial recognition have started to scan people’s faces for signs of deception. Beyond the border, algorithms have made their way into complex immigration and refugee determinations normally undertaken by human officers. A host of fundamental human rights are affected, including freedom from discrimination, the right to privacy, and even life and liberty.
In some cases, increased technology at the border has sadly already meant increased deaths. In late 2019, the European Border and Coast Guard Agency, commonly known as Frontex, announced a new border strategy under EUROSUR which relies on increased staff and new technology like drones. This strategy is similar to the Horizon 2020 ROBORDER project, which ‘aims to create a fully functional autonomous border surveillance system with unmanned mobile robots including aerial, water surface, underwater and ground vehicles.’ In the U.S., similar ‘smart-border’ technologies have been called a more ‘humane’ alternative to the Trump Administration’s calls for a physical wall. However, these technologies can have drastic results. For example, border control policies that use new surveillance technologies along the US–Mexico border have actually doubled migrant deaths and pushed migration routes towards more dangerous terrain through the Arizona desert, creating what anthropologist Jason De León calls a ‘land of open graves’. Given that the International Organization for Migration (IOM) has reported that over 20,000 people have died trying to cross the Mediterranean since 2014, including in recent shipwrecks, we can only imagine how many more bodies will wash upon the shores of Europe as the situation worsens in Greece and Turkey.
While technology can offer the promise of novel solutions to an unprecedented global crisis, we must ensure that COVID technology does not unfairly target refugees, racialised communities, Indigenous communities, and other marginalised groups, or make discriminatory inferences that can lead to detention, family separation, and other irreparable harms. Technological tools can quickly become tools of oppression and surveillance, denying people agency and dignity and contributing to a global climate that is increasingly hostile to people on the move. Most importantly, technological solutions do not address the root causes of displacement, forced migration, and economic inequality, all of which exacerbate the spread of global pandemics like COVID-19. Unless all of us are healthy, including marginalised communities, no one is.
In times of exception like a global pandemic, the hubris of Big Tech thinking it has all the answers is not the solution to a complex global health crisis.
Cases of arbitrary arrests, surveillance, phone tapping, privacy breaches and other digital rights violations have drastically increased in Central and Southeast Europe as governments started imposing emergency legislation to combat the COVID-19 outbreak. Belgrade-based Balkan Investigative Reporting Network (BIRN) and the digital rights organization SHARE Foundation have started a blog titled “Digital Rights in the Time of COVID-19” documenting these developments.
“The information gathered by the two organizations so far shows that the most problematic [violations] are, essentially, multiple issues involving the privacy of people who are put under quarantine, the spread of disinformation and the dangerous misconceptions regarding the virus in the online and social media networks, as well as the increase of internet scams.”
The data gathered by the two organizations through the blog’s database feature indicate that over just the last two weeks, 80 people have been arrested, some of them jailed, for spreading fake news and disinformation, with the most draconian examples in Turkey, Serbia, Hungary and Montenegro.
One noteworthy example occurred in the Serbian city of Novi Sad, where Nova.rs journalist Ana Lalić was arrested for “upsetting the public.” Lalić had published an article describing the chaotic conditions at the Clinical Center of Vojvodina, its “chronic lack of equipment” and under-preparedness. It was the Center that then filed the complaint against her, which led to her 48-hour detention. Her arrest provoked reactions from organisations across Europe, such as EDRi member Article 19 and Freedom House.
Governments in Montenegro and Moldova made public the personal health data of people infected with COVID-19, while official websites and hospital computer systems suffered cyber-attacks in Croatia and Romania. Some countries, like Slovakia, are considering lifting rights enshrined under the EU General Data Protection Regulation (GDPR), while Serbia imposed surveillance and phone tracking to limit freedom of movement.
Potentially infected citizens have been obliged by law to submit to new forms of control. In Serbia, since the declaration of a state of emergency, all citizens arriving from abroad must undergo quarantine. During a March 19 press conference, Serbian President Aleksandar Vučić stated that the police are “following” Italian telephone numbers, checking which citizens use roaming and constantly tracking their locations. This was specifically aimed at members of the Serbian diaspora who returned from Italy and are supposed to self-isolate in their homes. He also warned people who leave their phones behind that the state has “another way” of tracking them if they violate quarantine, but didn’t explain the method.
In neighboring Montenegro, the National Coordination Body for Infectious Diseases decided to publish online the names and surnames of people who must undergo quarantine, after it determined that certain persons had violated the measure, thereby “exposing the whole of Montenegro to risk.” Civic Alliance challenged this measure through a complaint submitted to the Constitutional Court of Montenegro.
In Croatia, concerned citizens developed a website, samoizolacija.hr (meaning “self-isolation”), which allegedly enabled anyone to anonymously report quarantine violators to the police. The site was subsequently shut down, and the Ministry of Interior initiated criminal investigations against suspected violators of privacy rights.
The Crisis Headquarters of the Federation of Bosnia and Herzegovina issued a recommendation on how to publish the personal data of citizens who violate the prevention measures, as government institutions at cantonal and local level started publishing data about people in isolation and self-isolation, including lists of people identified as infected by the coronavirus. In response, on March 24, the Personal Data Protection Agency of Bosnia and Herzegovina issued a decision forbidding the publication of personal data of citizens who tested positive for the coronavirus or were subjected to isolation and self-isolation measures.
Perkov also raised the issue of whether these measures are effective, particularly as they put people in danger. In Montenegro, infected people whose identities were revealed on social networks have been subjected to hate speech.
“Furthermore, is the idea behind such measures the public shaming of people who disrespect the obligation for self-isolation, or the reduction of the number of violations? The criteria of proportionality and necessity have not been properly respected and their adequacy has not been justified.”
The above cases of publication of health data online directly violate laws that protect such data at the highest legal level. In other words, these violations go against laws of the highest order that protect fundamental rights in the digital environment, and they do so under the guise of the COVID-19 crisis response, as if it were an open invitation to break the rules of free and protected societies.
European Digital Rights (EDRi) is a network of over 42 civil and human rights organisations from across Europe. We defend rights and freedoms in the digital environment.
Project Description
EDRi seeks a consultant to conduct research (a “mapping” exercise) into the existing points of engagement with anti-discrimination issues within the digital rights field. This project will involve providing a comprehensive picture of the key actors, initiatives and organisations related to this topic, and forming an analysis of the progress needed to reach a robust level of engagement with anti-discrimination. The deadline to apply is Thursday 30th April 2020.
Objective: To advance EDRi’s understanding of the intersections between digital rights and anti-discrimination by outlining the most relevant work being conducted in the field, specifically the key actors, organisations and activities.
Person specification: The ideal candidate has experience conducting similar sectoral mapping exercises; a familiarity with the European digital rights field; and a sound understanding of discrimination issues in the European context.
Activities
This project will consist of the following:
Conducting a mapping exercise outlining the key actors, organisations and activities across Europe focused on anti-discrimination issues in digital spaces. The scoping should span a wide range of data points including, but not limited to:
actors, including key individuals, projects and initiatives, firms, institutions, organisations and collectives;
forms of discrimination and exclusion, including on race/ethnicity, migration status, class, age, gender, sexual orientation, gender identity, disability and religion, from an intersectional perspective;
geographies (the mapping should include input from as many European countries as possible);
related areas: we foresee a non-exhaustive range of areas the consultant may uncover, from social welfare, policing and employment to issues with digital platforms, digital inclusion and diversity.
Providing an assessment of the extent of engagement with anti-discrimination issues in the digital rights field according to the research findings.
Engaging on a regular basis with EDRi to report on progress and findings.
Outputs: We ask that the consultant provide one report summarising the results of the mapping exercise and analysis.
Working methods: We propose that the research is conducted by a combination of:
desk research;
(online) interviews;
other methodologies according to the researchers’ discretion.
Timeline
The deadline to apply to the call is Thursday 30th April 2020. The project duration is 2 months, ideally between May and June 2020. The final timeline is to be decided upon agreement. The key phases of the project are:
Scoping (early May 2020): consultant and EDRi hold introductory discussions and define the scope and methodologies
Research and drafting (May 2020): consultant carries out and drafts mapping
Feedback and Revisions (June 2020): feedback from EDRi and revisions.
Background
This project forms part of a broader process in which EDRi aims to progress toward greater inclusivity in the European digital rights movement, ensuring interconnections with a broad range of social justice issues and fully delivering on our commitment to protect the digital rights of all.
Great strides have been made to highlight the need for the technology and digital rights field to reflect on broader social justice issues, such as discrimination, state violence, inequalities and social exclusion, and how they relate to digital rights. However, many such initiatives reside in the United States, with minimal reflection in the mainstream European digital rights environment.
Further details: The consultant will report directly to Sarah Chander (Senior Policy Adviser at EDRi). Remuneration is negotiable depending on experience.
How to apply
Send an email to Sarah at sarah.chander(at)edri(dot)org by Thursday 30th April 2020 with “Anti-Discrimination Consultant” in the subject line, along with a CV and a brief paragraph outlining your suitability for the project.
We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We therefore encourage individual members of groups at risk of discrimination to apply for this post.
In 2019, the President of the European Commission committed to upgrading the Union’s liability and safety rules for digital platforms, services and products with a new Digital Services Act (DSA). The upcoming proposal, expected at the end of 2020, would, among other things, regulate how platforms should deal with potentially illegal content that they host on their servers.
In its position paper ‘Digital Services Act: Platform Regulation Done Right’, European Digital Rights (EDRi) releases its first fundamental rights-based recommendations for the upcoming DSA. The recommendations represent the voice of 42 digital rights organisations active in Europe.
The DSA is a unique opportunity to improve the functioning of platforms as a public space in our democratic societies, to uphold people’s rights and freedoms, and to shape the internet as an open, safe and accountable infrastructure for everybody. These recommendations are the result of 8 months of collaboration in the EDRi network and beyond, including with groups that represent victims of illegal content. We look forward to engaging on this very important piece of legislation in the coming period.
EDRi encourages other civil society organisations and citizens to reply to the upcoming Commission consultation and support the protection of fundamental rights online.
In a recent statement released on 20 March 2020, European Digital Rights (EDRi) calls on the Member States and institutions of the European Union (EU) to ensure that, while developing public health measures to tackle COVID-19, they:
Strictly uphold fundamental rights;
Protect data for now and the future;
Limit the purpose of data processing to the COVID-19 crisis only;
Implement exceptional measures for the duration of the crisis only;
Condemn racism and discrimination;
Defend freedom of expression and information.
EDRi’s Head of Policy, Diego Naranjo, explains that:
EDRi supports necessary, proportionate measures, fully in line with national and international human rights and data protection and privacy legislation, taken in order to tackle the COVID-19 global pandemic. These measures must not, however, set a precedent for rolling back the fundamental rights obligations enshrined in European law.
EDRi recognises that Coronavirus (COVID-19) disease poses a global public health challenge of unprecedented proportions. The use of good-quality data can support the development of evidence-based responses. However, we are witnessing a surge of emergency-related policy initiatives, some of them risking the abuse of sensitive personal data in an attempt to safeguard public health. When acting to address such a crisis, measures must comply with international human rights law and cannot lead to disproportionate and unnecessary actions. It is also vital that measures are not extended once we are no longer in a state of emergency.
In times of crisis, our authorities and communities must show responsibility, resilience, solidarity, and offer support to healthcare systems in order to protect our lives. States’ emergency responses to the COVID-19 pandemic must be proportionate, however, and be re-evaluated at specified intervals. By doing this, states will prevent the normalisation of rights-limiting measures, scope creep, data retention or enhanced surveillance that will otherwise be harmful long after the impacts of the pandemic have been managed.
In these times of pandemic and emergency measures, EDRi expresses solidarity towards collective protection and support for our health systems. We will continue monitoring and denouncing abuses of human rights in times when people are particularly vulnerable.
On 15 March, Section 215 of the USA PATRIOT Act, and several other similar legal provisions, were due to expire and begin the process of reform and review to incorporate new legal protections of privacy. However, as a result of a coordinated effort by both chambers of the US Congress, the provisions may be extended for at least 77 days.
Section 215 was originally introduced in 2001 as part of the USA PATRIOT Act, a landmark piece of legislation passed soon after the September 11th attacks as an amendment to the Foreign Intelligence Surveillance Act of 1978 (FISA). The PATRIOT Act was designed to strengthen national security and law enforcement capabilities. It gave federal agencies like the Federal Bureau of Investigation (FBI) new and expanded powers, such as permission to search a home or business without consent from the owner and the indefinite detention of immigrants.
Section 215 is a provision of the PATRIOT Act known as the “business records” provision. It allows the government and law enforcement agencies to order third parties to produce “specific and tangible” things such as books, records, papers, documents, and other items, when the FBI is conducting either a foreign intelligence investigation or an investigation to protect against “international terrorism” or “clandestine intelligence activities” (even if the investigation targets US citizens).
It has been at the centre of many controversies over government overreach and privacy violations. As EDRi member the Electronic Frontier Foundation (EFF) explained:
In the hearings last year, witnesses confirmed that the 215 ‘business records’ provision may allow the government to collect sensitive information, like medical records, location data, or even possibly footage from a Ring camera.
Section 215 was the centrepiece of Edward Snowden’s leaks to The Guardian in 2013, in which he revealed that the Bush and Obama administrations had been abusing the provision to obtain phone data of US citizens in bulk. It was the most egregious violation of privacy by the US government in recent history; and it happened in secret. The Snowden leaks provoked a legislative reaction by Congress with the passage of the USA FREEDOM Act, which took several measures to curtail the authority of law enforcement agencies, though it extended Section 215 almost in its entirety to the end of 2019, and later to March 2020.
The threat has not gone away
Section 215, along with at least two other provisions (the roving wiretap and lone wolf surveillance authorities), was meant to be included in FISA reform legislation designed to introduce amendments and changes that would increase protections of individual privacy against governmental intrusion. This was the hope of a host of activist groups and non-profit organizations that saw the expiration of these provisions as a chance to overhaul the information access system in the US. The reforms were timed to take advantage of FISA’s expiration date of March 15, 2020.
However, last week the House of Representatives passed a bill that essentially extended Section 215 for three more years, through 2023. The House bill did include several minor changes that took some of the criticism into account, such as increasing prison penalties for engaging in secret surveillance. When the bill went to the Senate for final approval, however, Majority Leader Mitch McConnell (Republican) and the Senate, instead of voting on the bill and debating its proposed changes, decided to punt any decision on the proposal and unanimously passed an extension of Section 215 of the USA PATRIOT Act for 77 days, though the extension would still be subject to opposition from recessed House members and to presidential approval. What would this extension mean? It would essentially delay any discussion of whether Section 215 will be allowed to expire and what kind of replacement parameters will be introduced.
What happens now?
It remains unclear what will happen to Section 215 now that the COVID-19 crisis has thrown the political landscape into disarray. But, as the USA FREEDOM Act’s bipartisan effort demonstrates, the push to maintain this overbearing and invasive legislation endures. EDRi member EFF, which has been regularly advocating for privacy and legislative reform, is actively pushing for change:
It is past time for reform. Congress has already extended these authorities without reform once, without debate and without consideration of any meaningful privacy and civil liberties safeguards. If Congress attempts to extend these authorities again without significant reform, we urge members to vote no and to allow the authorities to sunset entirely.
What matters now is that this landmark legislative provision is allowed to sunset, and the reform process for the authority to access private data by law enforcement agencies begins anew. Whether we will see this hope come to fruition, however, remains to be seen.
This is the second article in a series dealing with competition law and Big Tech. The aim of the series is to look at what competition law has achieved when it comes to protecting our digital rights, where it has failed to deliver on its promises, and how to remedy this.
Read the first article on the impact of competition law on your digital rights here.
Almost everybody uses products or online services from Big Tech companies. These companies make up a considerable part of our online life.
This concentration of power in some sectors of the digital market (think search, social media, operating systems) by a small number of companies is having devastating effects on our rights. These companies are able to grow exponentially by constantly watching us and harvesting our personal data, which they then sell to data brokers, governments and dodgy third parties. With billions of users, these companies acquire an unprecedented level of knowledge about people’s most intimate lives.
They were able to achieve this by nudging people into giving up their personal data and by artificially creating powerful network effects linked to their dominant position that keep users on a platform despite its intrusiveness. Accessing large quantities of data and creating locked-in user communities gives dominant platforms a strong competitive advantage while creating barriers to entry for competitors.
While being in a dominant position is not illegal, abusing that position is. And most Big Tech companies have been fined for abuses or are currently under investigation. Google alone had to pay 8 billion euros in fines in only three years.
And yet, in an interview given in December of 2019, Competition Commissioner Margrethe Vestager admitted that her fines have been unable to restore competition between Big Tech and smaller rivals because companies had “already won the market”.
So if fines do not work, what does? Have current antitrust laws reached their limits?
Traditional antitrust law assesses the abuse of a dominant position ex-post, after the harm has been done and through lengthy investigations. Several ideas to bring antitrust law up to speed with the digital economy are being discussed and are worth considering.
Giving back the freedom to choose
Speed alone, however, is unlikely to solve the problem. Policy recommendations at EU and national levels highlight the need for new ex-ante measures “to ensure that markets characterised by large platforms with significant network effects acting as gate-keepers, remain fair and contestable for innovators, businesses, and new market entrants”.
The new Digital Services Act (DSA) announced by the European Commission provides an opportunity for the EU to put in place the most urgent ex-ante measures without having to go through a full reform of its long-standing antitrust rules. One key measure that EDRi and many others have been pointing at is to make dominant social media and messaging platforms interoperable. Interoperability would require platforms to open their ‘walled gardens’ to other comparable services so that different users from different platforms can connect and communicate with each other.
This would enable competitors to challenge the huge user bases of incumbent social media platforms which allow their dominance to persist, and allow a fairer redistribution of power among competitors as well as with users. Combined with the right to data portability under the General Data Protection Regulation (GDPR), consumers could regain control over their personal data, as they would no longer feel obliged to use a second-best service just because all their friends and family use it. Interoperability has already been used as a competition remedy in the past: in the Microsoft case, the European Commission required Microsoft to open up its operating system to enable third parties to offer Windows-compatible software programmes.
Moreover, mandatory interoperability would directly strengthen healthy competition among platforms and could even create whole new markets of online services built downstream or upstream, such as third-party client apps or content moderation plug-ins.
The DSA presents a huge opportunity for the EU to decide what central aspects of the internet will look like in the coming decade. By including requirements for Big Tech such as interoperability, the DSA would inject new competition and drive into a broken market, limit the downsides of user lock-in and reduce negative network effects.
A special status for Big Tech?
Interoperability measures could also be implemented as part of a broader mechanism or scheme for dominant players. In its contribution to the debate on competition policy and digital challenges, the French competition authority draws on suggestions from several reports and the current reform bill being discussed in Germany to propose a new mechanism for “structuring players”. It suggests defining these players in three cumulative stages: 1. companies providing online intermediation services; 2. which hold structural market power; and 3. which play a role in access to and in the functioning of certain markets with regard to competitors, users or third parties.
This new status could also allow for new ex-post measures. Whenever one of these players implemented a practice that raises competitive concerns, the competition authority would be able to intervene, penalise the company, or prohibit the practice in the future. Such triggering practices could consist of hindering access to markets, preferencing their own services, using data to hamper access to a market, or making interoperability or data portability more difficult.
Beyond competition law, because of the effect they have on our rights, these companies should be required to limit some of their harmful practices, such as data extraction or message amplification. To this effect, other sets of obligations could be imposed on them, such as obligations of transparency, access, non-discrimination or device neutrality. Some of these obligations already exist in the P2B Regulation addressing relations between online platforms and businesses and could be extended for public scrutiny. Others should be explicitly written into the planned Digital Services Act. Together with strong ex-ante measures, they will help the EU limit the most damaging behaviour that dominant platforms engage in today.
In the spring of 2019, the Hellenic Police signed a €4 million contract with Intracom Telecom, a global telecommunication systems and solutions vendor, for a smart policing project. Seventy-five percent of the project is funded by the Internal Security Fund (ISF) 2014-2020 of the European Commission. The Hellenic Police published a press release on the signature of this contract in December 2019, while the vendor had publicly announced it earlier, in July 2019.
Based on the technical specifications of the contract, the vendor will develop and deliver to the Hellenic Police smart devices with integrated software enabling facial recognition and automated fingerprint identification, among other functionalities. The devices will be the size of a smartphone, and police officers will be able to use them during police stops and patrols in order to check and identify on the spot individuals who do not carry identification documents with them. The police officers will also be able to take a close-up photograph of an individual’s face and collect her/his fingerprints. The fingerprints and photographs collected will then immediately be compared with data already stored in central databases, after which the police officers will get the identification results on their devices.
The Hellenic Police claims that this will be a more “efficient” way to identify individuals in comparison to the current procedure, i.e. bringing any individuals who do not carry identification documents to the nearest police station. Based on the timetable for the implementation of the project, the devices and the related systems should be fully functional and ready for use within 20 months of signing the contract. Thus, it is anticipated that the Hellenic Police will be able to use these devices by the beginning of 2021.
Once the Hellenic Police published its press release in December 2019, EDRi observer Homo Digitalis addressed an Open Letter to the corresponding Greek minister requesting clarifications about the project. More precisely, based on the provisions of the Directive 2016/680 (LED) and the Greek Law 4624/2019 implementing it, Homo Digitalis asked the Minister of Citizen’s Protection whether or not the Hellenic Police has consulted the Hellenic Data Protection Authority (DPA) on this matter and/or conducted a related Data Protection Impact Assessment (DPIA) and what the applicable safeguards are, as well as to clarify the legal provisions that allow for such data processing activities by the Hellenic Police.
In February 2020, the Hellenic Police replied but neither confirmed nor denied that a prior consultation with the Hellenic DPA took place or that a DPIA was conducted. Moreover, Homo Digitalis claims that the Hellenic Police did not adequately reply about the applicable safeguards and the legal regime that justifies such data processing activities.
As a result of this inaction from public authorities, on March 19, 2020 Homo Digitalis filed a request for an opinion with the Hellenic DPA regarding this smart policing contract. The request is based on the national provisions implementing Article 47 of the LED, which provides for the investigatory, corrective and advisory powers of the DPAs.
With this request, Homo Digitalis claims that the processing of biometric data, such as the data described in the contract, is allowed only when three criteria are met: 1. it is authorised by Union or Member State law, 2. it is strictly necessary, and 3. it is subject to appropriate safeguards for the rights and freedoms of the individuals concerned. None of the above-mentioned criteria is met in this case. Specifically, there are no special legal provisions in place allowing for the collection of such biometric data during police stops by the Hellenic Police. Moreover, the use of these devices cannot be justified as strictly necessary, since the identification of an individual is adequately achieved by the current procedure. Nevertheless, such processing activities use new technologies and are very likely to result in a high risk to the rights and freedoms of the data subjects. Therefore, the Hellenic Police is obliged to carry out a data protection impact assessment prior to the processing and to consult the Hellenic DPA.
The 8th edition of Privacy Camp, held in 2020, revolved around the topic of Technology and Activism, with a schedule composed of ten sessions in different formats. What were these about? Read below a summary of each discussion, with references to full session recordings.
This session took the format of a series of three stories from four activists. Jeff Deutch from The Syrian Archive opened the chat by pointing at the role of emerging tech in documenting conflicts, from the war in Vietnam to the rise of citizen journalism in the context of the Tunisian revolution in 2011 and the Syrian conflict. Given platforms’ policies of content removal in this context, he pointed at three areas of work he’s currently invested in as part of The Syrian Archive: archival, verification, and searching of content. Sergey Boyko from the Internet Protection Society Russia continued the session by talking about his experience of using the internet while hiding from law enforcement, who aimed to arrest him in order to stop a street protest he was organising against the Russian government’s pension reform. He described tactics for securing his communications, accommodation, and use of social media while in hiding, and concluded that it is possible to use the internet outside the government’s eyes if you understand how the internet works and what the limitations of government surveillance capabilities are. Finally, Finn Sanders and Jan-Niklas Niebisch from Fridays for Future (FFF) Germany focused on the use of social media in FFF to attract people to protests, with Instagram as instrumental in targeting young people. They outlined the tools used in national coordination, the cooperation with the police forces, as well as the moderation arrangements ensuring the content shared via these tools is legal and not harmful.
With a background in free software and free culture, journalist Rula Asad from the Syrian Female Journalists Network kicked off the session by mentioning how defending digital resilience is key to defending activists. She explained how her organisation does this in cases of internet shutdowns and spyware, or by explaining to activists how to use certain security tools. As someone advocating for human rights defenders (HRDs), with a focus on women, she helped build the security helpline in the IT security club at her university. Specifically, she mentioned the threat for men arising from power relations, and that speaking out is more difficult for women, since they are more often silenced online than offline. Some of the risks she brought up are stress, burnout, and self-censorship because of lack of solidarity. Hassen Selmi from Access Now, on the other hand, mentioned phishing as a very common threat for HRDs, as well as physical attacks, arrests, searches of devices, and ransomware. Finally, Alexandra Hache from the Digital Defenders Partnership at Hivos mentioned the rise of mass surveillance and of crackdowns through internet shutdowns and slow-downs. She also pointed at the rise of privacy-friendly technologies, but also at the increased difficulty for users to control their data. Moreover, she mentioned that activists often use tools that are not designed for activists, such as social media. One of the key issues raised was the role of tailored training sessions for activists – with adequate follow-ups to ensure that good practices become part of the culture of the organisation.
In this lively and insightful debate, moderator Sofija Todorovic from the Balkan Investigative Reporting Network (BIRN) led the panel through an exploration of how the context of state power, in particular the presence or absence of democratic controls, can change what it means to protect investigative journalists. Andrej, Director of Tech at SHARE Foundation, launched the discussion with an explanation of how attacks on journalists are becoming less technical and more focused on social engineering or smear campaigns. Drawing on the Serbian context, he noted that replacing the control of public actors with private actors shifts, but does not solve, the problem. Peter Erdelyi, Senior Editor at 444.hu, continued that civil spaces in Hungary are shrinking, with systematic government pressure on independent media to stop investigations into corruption. Domagoj Zovak, Editor and Anchor at Prime Time, finished by talking about the monopoly of media control in Croatia, and how it has led to a culture of fear. The conclusion of the panel offered a powerful reminder that increasing internet regulation is not a panacea, as in some parts of the EU it is the state that poses the biggest threat to free expression, not private platforms.
The “How To Parltrack” workshop gave participants the opportunity to understand Parltrack and how to use it and its data more efficiently. The workshop started with a presentation of the European institutions and the legislative system and processes. Parltrack was presented as a European initiative to improve the transparency of legislative processes. Although it is not a perfect tool (it can be hard to obtain data and to render amendments), participants were shown how Parltrack combines information on dossiers, representatives, vote results and committee agendas into a unique database and allows the tracking of dossiers using email and RSS.
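For readers who want to explore the data themselves, here is a minimal sketch of how a locally downloaded Parltrack dossier dump might be filtered with Python. The file name and field names used below are illustrative assumptions, not a documented schema; check the actual dump format published on parltrack.org before relying on them.

    import json

    # Hypothetical sketch: filter a locally downloaded Parltrack dossier dump.
    # The file name and field names are assumptions for illustration only;
    # verify them against the actual dumps published on parltrack.org.
    with open("ep_dossiers.json", encoding="utf-8") as f:
        dossiers = json.load(f)

    # Collect dossiers whose title mentions a keyword of interest.
    matches = [
        d for d in dossiers
        if "terrorist content" in d.get("title", "").lower()
    ]

    for dossier in matches:
        print(dossier.get("reference", "?"), "-", dossier.get("title"))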
Jen Persson, Director of DefendDigitalMe, opened the discussion with the remark that children are perceived as an “Other” to be protected. It is under this protection regime, she argued, that children often lose their rights. She talked about schools monitoring pupils under their responsibility to identify extremist affiliations, as well as the commercial re-appropriation of school census data. Further, Daniel Carey from civil rights and judicial review law firm Deighton Pierce Glynn focused on his case of how a pupil took back control of their data after being referred under the UK’s pre-crime Prevent programme. He summed up how easily data generated by children can be used against them. The third intervention came from Liberty’s Policy and Campaigns Manager Gracie Bradley. She talked about the Against Borders for Children (ABC) coalition and the Boycott School Census action. She situated the topic within the UK’s “hostile environment policy”, under which the UK Government introduced entitlement checks into essential public services and data sharing schemes between those public services and the Home Office. Finally, Gloria Gonzales Fuster, Research Professor at the Law, Science, Technology and Society (LSTS) Research Group at Vrije Universiteit Brussel (VUB), argued that anyone who cares about data protection and privacy in general cannot place children on a lower level of protection because of their age. She mentioned the GDPR’s preamble, which states that children deserve specific protection, as well as the strategies often used to circumvent the legal protections for children and their data under current data protection legal frameworks.
The Privacy Camp hosted the European Data Protection Supervisor (EDPS) civil society summit, which gave participants the opportunity to debate the rising threat of facial recognition with the EDPS himself, Wojciech Wiewiórowski, and members of his team. From across the EDRi network and beyond, attendees gathered at the roundtable to talk about violations of the principles of proportionality and necessity, and other rights impacts. This included examples of the deployment of facial surveillance systems in France, Serbia and Wales. The summit allowed participants to debate the merits of a ban compared to the benefits of a moratorium, and also to consider whether civil society should focus on improving enforcement of existing legislation instead. It also gave everyone the chance to consider the nuances between different uses of facial recognition – for example, whether it is used in public spaces or not. The EDPS closed the roundtable with a nod to the old CCTV privacy/security debates, and a recognition that the current approach to facial recognition is very fractured across Member States. He warned civil society not to focus on the accuracy question, and instead to look at tools to address the fundamental rights risks, such as impact assessments.
Following an explanation of the right of access under the General Data Protection Regulation (GDPR), the moderator Joris van Hoboken from the Law, Science, Technology and Society (LSTS) Research Group at Vrije Universiteit Brussel (VUB) introduced the first speaker, Gaëtan Goldberg from noyb, who presented their activities: representing individuals before their Data Protection Authorities (DPAs). Noyb focuses on comparing the response people get when submitting a data access request to what the company says in its privacy policy and marketing material. Taking as an example the results from a series of subject access requests directed at streaming service providers such as Netflix and Spotify, Gaëtan concluded that data subject access requests are a good tool, but GDPR enforcement is much needed. Karolina Iwanska of Panoptykon Foundation explained that they approach the topic through industry, and the power and influence that governments and companies have over our decisions. She explained that Panoptykon uses data subject access requests with a focus on the interpretations of collected data in the areas of advertising and banking credit scoring. Finally, René Mahieu from LSTS presented the subject of his PhD, built around the question “Is the right of access effective in practice?”. Adding to the uses of data requests by digital rights organisations, René mentioned the spread of data subject requests as a tool for labour rights and consumer rights organisations. Moreover, he pointed out that access is not easily given, but as soon as a public spotlight exists, companies are quick to reply to such requests.
The moderator Francesca Musiani from the Centre for Internet and Society (CNRS) started the debate by briefly describing the new set of juridical measures that impact internet infrastructures and therefore the civil liberties of the Russian population. She also listed some new ways of circumventing those limits that might be surprising for Western activists. Ksenia Ermoshina from CNRS talked about her research on the use and development of encryption protocols in the region, and how cryptographic researchers were surprised that endangered journalists were using Facebook, WhatsApp and similar tools. She continued by stressing that the perception of security vs privacy there is very different than in Western Europe. Anna Zaytseva from LLA CREATIS at the University of Toulouse-Jean Jaurès gave examples of why some activists use Google services. The main reason was that, according to the Google Transparency Report, Google never replied to information requests from Belarusian or Russian authorities. She stressed that, due to geopolitics, if you want to be an activist in the United States, you should use the Russian social network VK, whereas if you are an activist in Russia, you should use Facebook. Sergey Boyko, co-founder of the Internet Protection Society, highlighted that aspect: hundreds of VK users have been jailed for their opinions or posts, whereas there have been only two cases of people jailed for their posts on Facebook. In that sense, Facebook is relatively safe for Russian activists. Services like mail.ru and VK give information to the SPB (Saint Petersburg Police) directly, in real time. Boyko also mentioned that the Russian authorities cannot use the Chinese way – they couldn’t simply ban Facebook and Google. They use other methods: they intimidate those companies with high fines and threats of blocking, and work very closely with them to get sensitive content removed. Activists are afraid that eventually Google and Facebook will start to collaborate much more closely with the Russian government. That’s why it’s important for activists to work with those big platforms to make them understand the dangers of collaborating with the Russian government.
The discussion was started by Amber Macintyre from Tactical Tech, who pointed out that the rise of data-driven tools in the NGO sector informs, but does not determine, long-term decision making. Michael Hulet from Extinction Rebellion Belgium mentioned that, even though activist circles are aware of the dangers of exploitative data flows, privacy-friendly tools can slow down a grassroots movement. Tools used must, according to him, be accessible and global. Further, Glyn Thomas, a digital strategy consultant working with NGOs, shared his thoughts on the privacy-related behaviour of organisations of different sizes, also focusing on the dangers each type of NGO faces in this respect. Moderator Jan Tobias Muehlberg facilitated the Q&A, addressing issues such as trust, platform censorship of activists, usage habits and ways to transition to alternatives, among others. The discussion concluded with the idea that activists need concrete visibility of the threats arising from a lack of privacy in order to be motivated to change their tools and practices.
This powerful panel drew attention to the need for digital rights work to better incorporate diverse, intersectional experiences in order to protect all internet users. EDRi’s Chloe Berthélémy, as moderator, noted that this is important for upcoming work on the Digital Services Act (DSA). Oumayma Hammadi from Rainbow House Brussels launched the panel by raising the issue of the disproportionate censoring of LGBTQ+ online spaces and bodies. Alejandro Moledo from European Disability Forum (EDF) continued that platforms are an important part of self-determination for people with disabilities, but they receive enormous online abuse. Štefan Balog from Romea revealed how the internet has exacerbated hatred of Roma people and has been responsible for even inciting physical violence. Lastly, Pamela Morinière from the International Federation of Journalists talked about how our gendered society affects women journalists, leading to hate and violence both on- and offline. She explained that online anonymity protects abusers from accountability.
What was your favourite session this year? Let us know by tweeting your thoughts with the hashtag #PrivacyCamp20.
On 27 March 2020, European Digital Rights (EDRi) and 12 of its member organisations sent an open letter to representatives of Member States in the Council of the EU. In the letter, we voice our deep concern over the proposed legislation on the regulation of terrorist content online and what we view as serious potential threats to fundamental rights such as privacy and freedom of expression.
Dear representatives of Member States in the Council of the EU,
We hope that you are keeping well in this difficult time.
We are writing to you to voice our serious concerns with the proposed Regulation on preventing the dissemination of terrorist content online (COM/2018/640 final). We have raised these concerns before, and many similar critiques have been expressed in letters opposing the Regulation from human rights officials, civil society groups, and human rights advocates.[i]
We firmly believe that any common position on this crucial file must respect fundamental rights and freedoms, the constitutional traditions of the Member States and existing Union law in this area. For this to happen, we urge you to ensure that the rule of law in cross-border cases is respected and that the competent authorities tasked with ordering the removal of illegal terrorist content are independent, to refrain from adopting mandatory (re)upload filters, and to guarantee that the exceptions for certain protected forms of expression, such as educational, journalistic and research materials, are maintained in the proposal. We explain why in more detail below.
First, we ask you to respect the principle of territoriality and ensure access to justice in cases of cross-border takedowns by ensuring that only the Member State in which the hosting service provider has its legal establishment can issue removal orders. The Regulation should also allow removal orders to be contested in the Member State of establishment to ensure meaningful access to an effective remedy. As recent CJEU case law has established, “efficiency” or “national security” reasons cannot lead to short-cuts around rule of law mechanisms and safeguards.[ii]
Secondly, the principle of due process demands that the legality of content be determined by a court or independent administrative authority. This important principle should be reflected in the definition of ‘competent authorities’. For instance, we note that in the Digital Rights Ireland case, the Court of Justice of the European Union considered that the Data Retention Directive was invalid, inter alia, because access to personal data by law enforcement authorities was not made dependent on a prior review carried out by a court or independent administrative authority.[iii] In our view, the removal of alleged terrorist content entails a very significant interference with freedom of expression and, as such, calls for the application of the same safeguards.
Thirdly, the Regulation should not impose the use of upload or re-upload filters (automated content recognition technologies) on the services under the scope of the Regulation. As the coronavirus crisis makes abundantly clear, filters are far from accurate. In recent days alone, Twitter, Facebook and YouTube have moved to fully automated removal of content, leading to scores of legitimate articles about the coronavirus being removed.[iv] The same will happen if filters are applied to alleged terrorist content. There is also mounting data suggesting that algorithms are biased and have a discriminatory impact, which is a particular concern for communities affected by terrorism and whose counter-speech has proven to be vital against radicalisation and terrorist propaganda. Furthermore, a provision imposing specific measures on platforms should favour a model that gives service providers room for manoeuvre on which actions to take to prevent the dissemination of illegal terrorist content, taking into account their capacities and resources, size and nature (whether not-for-profit, for-profit or community-led).
Finally, it is crucial that certain protected forms of expression, such as educational, artistic, journalistic and research materials, are exempted from the proposal, and that it includes feasible measures to ensure this can be successfully implemented. The determination of whether content amounts to incitement to terrorism or even glorification of terrorism is highly context-specific. Research materials should be defined to include content that serves as evidence of human rights abuses. The jurisprudence of the European Court of Human Rights (ECtHR)[v] specifically requires particular caution towards such protected forms of speech and expression. It is vital that these principles are reflected in the Terrorist Content Regulation, including through the adoption of specific provisions protecting freedom of expression as outlined above.
We remain at your disposal for any support you may need from us in the future.
David Kaye, Joseph Cannataci and Fionnuala Ní Aoláin, Mandates of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression; the Special Rapporteur on the right to privacy and the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, 2018. Available at: https://spcommreports.ohchr.org/TMResultsBase/DownLoadPublicCommunicationFile?gId=24234
See Digital Rights Ireland v. Minister for Communications, Marine and Natural Resources, Joined Cases C‑293/12 and C‑594/12, 08 April 2014 at para. 62.
In cases involving the dissemination of “incitement to violence” or terrorism by the press, the ECtHR’s starting point is that it is “incumbent [upon the press] to impart information and ideas on political issues just as on those in other areas of public interest. Not only does the press have the task of imparting such information and ideas: the public also has a right to receive them.” See Lingens v Austria, App. No. 9815/82, 8 July 1986, para 41.
The ECtHR also repeatedly held that the public enjoyed the right to be informed of different perspectives, e.g. on the situation in South East Turkey, however unpalatable they might be to the authorities. See also Özgür Gündem v. Turkey, no. 23144/93, 16 March 2000, paras. 60 and 63, and the Council of Europe handbook on protecting the right to freedom of expression under the European Convention on Human Rights, summarizing the Court’s case law on positive obligations of States with regards to the protection of journalists (pp. 90-93), available at: https://rm.coe.int/handbook-freedom-of-expression-eng/1680732814