European Digital Rights (EDRi) is an international not-for-profit association of 44 digital human rights organisations. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.
Join EDRi now and become a superhero for the defence of our rights and freedoms online!
The EDRi Brussels office is currently looking for an intern to support our communications. The internship will focus on website, social media, publications, visual design, press work, and the production of written materials.
This is your chance to work in a fast-paced environment with a passionate team of digital rights activists supporting grassroots efforts across Europe to build a better digital future.
The internship will begin in September 2020 for a period of 4-6 months. The chosen communications champion will receive a monthly remuneration of at least 750 EUR (under the “convention d’immersion professionnelle”).
Key tasks:
Social media: drafting and scheduling posts, managing followers’ inquiries, monitoring, reporting
Publications: designing and editing publication layouts and visuals
Press: drafting press releases and briefings
Website: assisting in making website changes, updates and migrating content from the previous website
Newsletter: formatting WordPress posts and designing visuals for the bi-weekly EDRi-gram
Dissemination strategy: mapping key digital rights and tech for good organisations
Ad-hoc: assisting in other communications tasks, such as maintenance of mailing lists, monitoring media visibility, updating and analysing communications statistics
Needed:
Experience in social media community management and social media performance reporting
Strong layout, photo and visual editing skills
Excellent command of spoken and written English
Knowledge of website management (WordPress)
Ability to multi-task and strong time management skills
Strong communication and relationship-building skills
Proactive problem-solver
Desired:
Experience with journalism, media or public relations
Interest in online activism and campaigning for digital human rights
Excellent storytelling skills
Knowledge of open source software such as Matomo, Thunderbird, LibreOffice
How to apply:
To apply please send a maximum one-page cover letter and a maximum two-page CV (only PDFs are accepted) by email to gail >dot< rego >at< edri >dot< org. Closing date for applications is midnight on Sunday, 23 August 2020. Due to limited capacity, we are only able to contact successful candidates.
The first phase of the recruitment process involves a written exercise and is expected to take place in the last week of August. Interviews with selected candidates will take place in the second recruitment phase during the first week of September. The internship is scheduled to start in the first weeks of September, with the possibility of remote working.
We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.
SHARE Foundation has recently released a short documentary on the controversial use of the mass surveillance system in Belgrade, Serbia. Various digital experts and activists took part in the documentary, including the national Data Protection Authority in Serbia and EDRi’s own Policy and Campaigns Officer, Ella Jakubowska.
The 10-minute video addresses the key questions around the use of smart surveillance technology in public spaces, as well as the arguments of the Thousands of Cameras initiative, which demands respect for the Constitution and the laws.
The Government of Serbia in cooperation with Huawei has been actively working on the implementation of the “Safe City” project in Belgrade. This project involves the installation of thousands of smart surveillance cameras with object and face recognition features. The procurement also involves an artificial intelligence system used for the analytics of the feed captured with these cameras.
A civic initiative, #hiljadekamera [Thousands of Cameras], is tracking the development of the mass surveillance system in Belgrade and has so far collected and verified data on 689 facial recognition cameras across the city. Composed of concerned citizens, experts and digital rights organisations, the initiative has been vocal about the deterioration of privacy as a result of this project for over a year. The website hiljade.kamera.rs, with a map showing the locations of smart cameras, was launched in mid-May, together with social media accounts.
In the first two months of this crowdsourcing action, the citizen map revealed twice as many smart cameras as appear on the official police list. Major discrepancies were noted in Novi Beograd, Zvezdara and Stari Grad, but also in other municipalities of Belgrade.
The Thousands of Cameras initiative gathers citizens, activists and human rights organisations asking for transparency and a wide public debate on this system of indiscriminate invasion of constitutional rights. At the same time, the hiljade.kamera.rs website serves as a portal where citizens can inform themselves about their rights and the legal and technical aspects of the use of facial recognition technologies, or join the activities of the initiative.
The initiative, led by SHARE Foundation, a Belgrade-based digital rights organisation, has carried out numerous activities in crowdmapping the infrastructure, community building, research, advocacy and content production.
On 8 June 2020, IBM’s CEO announced to the US Congress that – on the grounds of “justice and racial equity” – the company would “sunset” its “general purpose” facial recognition technologies. EDRi addressed the company in a letter, but IBM’s response suggests the company is motivated by public relations rather than fundamental rights.
The EDRi network has reported on the serious risks posed by facial recognition to the full spectrum of fundamental rights and the rule of law. Besides facial recognition, biometric surveillance includes gait recognition (the way we walk) and voice recognition. Governments continue to use this surveillance to track us, turning public spaces into perpetual police line-ups.
Instead of enabling free, vibrant and democratic societies, facial recognition as a form of surveillance creates societies of suspicion, control and discrimination.
These risks are so severe that EDRi has called on the European Commission to ban biometric mass surveillance in both law and practice across the EU.
On 25 June, EDRi sent IBM a letter asking the company to provide more information about their commitment to stopping facial surveillance. It included questions like “Which contracts will be stopped as a result? Which contracts won’t? How does IBM define general purpose? Has IBM engaged fundamental rights experts? Do these steps apply only to the US, or to IBM’s global activities?”
On 8 July, IBM’s Chief Privacy Officer sent a short response to EDRi’s letter. Their one-page reply reiterated the general commitment of their earlier statement, and elaborated on IBM’s participation in various initiatives on artificial intelligence and ethics. Their response did not answer a single one of EDRi’s fifteen questions.
This response suggests that IBM’s motivation is driven by public relations and not fundamental rights. They have failed to provide any information that could enable us to substantively assess their commitments. Our questions still stand.
How can we know that Europe’s people and communities are protected from the threats of biometric surveillance across our public spaces, when there is a toxic trio of:
a lack of public transparency by IBM;
a failure at European level to provide national data protection authorities with adequate resources to hold IBM (not to mention the more prolific facial recognition players such as ClearviewAI, PimEyes, NEC, 3M and many more) to account; and
a failure to ban the development, procurement and deployment of these harmful tools?
The short answer is: we can’t.
IBM’s statement in reply to EDRi’s letter shows that relying on the self-regulation or ethical principles of the companies developing these technologies can never be sufficient.
It is clear that corporate PR is not and can never be a policy solution, as exemplified by Amazon pausing the sale of its facial recognition technologies to law enforcement, despite having aggressively pushed its sinister Rekognition technology to police and communities across the US in recent years.
It is high time that the European Commission and EU member states take the necessary steps to protect the EU’s democracy and commitment to fundamental rights. We strongly urge decision makers to permanently and comprehensively ban biometric mass surveillance in upcoming rules on AI.
Today, 16 July 2020, the Court of Justice of the European Union (CJEU) invalidated the EU-US Privacy Shield. The ruling is a major victory for EU residents regarding how their personal data is processed and used by platforms like Facebook. The decision underlines the need for strong privacy legislation in the US and, more generally, for close scrutiny of the data protection systems in place, in order to avoid the misuse and unnecessary handling of the private data of EU residents.
The huge power of US intelligence services, as disclosed by Edward Snowden in 2013, proved that the data protection and privacy rights of EU residents are not sufficiently protected. We cannot allow any foreign agency to track and surveil our communities with such a disregard for fundamental rights.
“Today’s European Court of Justice ruling is a victory for privacy against mass surveillance”, says Diego Naranjo, Head of Policy at EDRi. “This is a win both for Europeans, whose personal data will be better protected, and a call for US authorities to reform the way intelligence services operate,” he adds.
At its core, this case is about a conflict of law between US surveillance laws, which demand surveillance, and EU data protection laws, which require privacy. The CJEU decided today to bin the Privacy Shield and instead reinforced that Standard Contractual Clauses (SCCs), one of the ways in which companies can make data transfers, need very close scrutiny and should be suspended if protections in the third country cannot be ensured. As noyb notes in their first reaction, Facebook and similar companies may also not use SCCs to transfer data, as the Irish DPC must stop transfers under this instrument. The ruling is great news for all of those defending human rights online.
The background
In 2013, Edward Snowden publicly disclosed that US intelligence agencies use surveillance programmes such as PRISM to access the personal data of Europeans. The disclosed documents listed several US companies, such as Apple, Microsoft, Facebook, Google and Yahoo, as sharing data with the US government for surveillance programmes.
Based on this whistleblowing case, Mr Max Schrems (currently of EDRi member, noyb) filed a complaint against Facebook Ireland Ltd before the Irish Data Protection Commissioner (DPC). The complaint argued that under the EU-US Safe Harbor Decision 2000/520/EC, Mr Schrems’ (and therefore any European platform user’s) personal data should not be sent from Facebook Ireland Ltd (serving Facebook users outside of the US and Canada) to Facebook Inc. (the US parent company), given that Facebook has to grant the US National Security Agency access to such data.
Next steps
Today’s CJEU ruling is just the beginning. It is now up to the EU to start negotiating a new framework with the US and ensure deep reforms in order for the new framework to be valid and respectful of fundamental rights.
Read more:
CJEU invalidates “Privacy Shield” in US Surveillance case. SCCs cannot be used by Facebook and similar companies (16.07.20) https://noyb.eu/en/cjeu
In this article we set out the background to EDRi’s work on anti-discrimination in the digital age. Here we take a first step to explore anti-discrimination as a digital rights issue, and ask what EDRi can do about it. The project is motivated by the need to recognise how oppression, discrimination and inequality impact the enjoyment of digital rights, and to live up to our commitment to uphold the digital rights of all.
The first half of 2020 has brought with it challenges and shifts of a global scale. From COVID-19 to #BlackLivesMatter – these events necessarily impact EDRi’s work as issues of digital and human rights – our privacy, our safety, and our freedoms, online and off. Not only have these events brought issues of privacy and surveillance to the forefront of global politics, they also teach us about vulnerability.
Vulnerability is not a new concept to digital rights. It is core to the fight to defend rights and freedoms online – we are vulnerable to targeted advertising, to exploitation of our personal data, to censorship, and to increased surveillance. Particularly in times of crisis, this vulnerability is at the same time exposed as it is exacerbated, with increased surveillance justified for the public good.
How exactly can we understand vulnerability in terms of digital rights? In many senses, this vulnerability is universal. Ever-encroaching state surveillance, the mining of data on our personal lives for profit, and other threats to our privacy face every individual in a digital age.
Yet – just as we have seen the myth of universal vulnerability in the face of Coronavirus debunked – we are also learning that we are not equally vulnerable to threats to privacy, censorship and surveillance. State and private actors abuse their power in ways that exacerbate injustice and threaten democracy and the rule of law. The way technologies are deployed often amplifies inequalities, especially when location and/or biometric data are used. Taking a leaf out of the book of anti-racism movements: instead of being ‘vulnerable’ to discrimination, exploitation and other harms, we know they are imposed on us. Rather than vulnerable, some groups are marginalised – an active process with people, institutions and structures of power as its cause.
Going forward, an awareness of how marginalised groups enjoy their digital rights is crucial to a better defence and protection for all. From the Black, brown and Roma communities likely to be impacted by data-driven profiling, predictive policing, and biometric surveillance; the mother who only sees online job advertisements that fit her low-income profile; the child whose online learning experience should not be tainted by harmful content; the undocumented person who does not access health services for fear of deportation and data-sharing; the queer and trans people who rely on anonymity to ensure a safe experience online; the Black woman whose account was suspended for using anti-racist terminology; to the protester worried about protecting their identity – infringements of ‘digital rights’ manifest differently. Often, the harm cannot be corrected with a GDPR fine alone. It cannot be resolved with better terms and conditions. This is not just a matter of data protection, but of broader violations of human rights in a digital context.
These wider questions of harms and infringements in the digital age will challenge our existing frameworks. Is there a universal ‘subject’ for digital rights? Who are we referring to most often under the term ‘user’? Does this fully recognise the varying degrees of harm we are exposed to? Will the concept of rights holders as ‘users’ help or hinder this nuanced approach? Beyond ‘rights’, how do ideas of equality and justice inform our work?
EDRi members such as Privacy International have denounced data exploitation and how marginalised groups are disproportionately affected by digital rights violations. Panoptykon have explored how algorithmic profiling systems impact the unemployed in Poland, and integrate the risks of discrimination into their analysis of why the online advertising system is broken. At Privacy Camp, EDRi members are reflecting on how children’s rights and hate speech online impact our work as a digital rights network. Building on this work, EDRi is mapping the organisations, projects and initiatives in the European digital rights field that include a discrimination angle, or that explore how people in different life situations experience digital rights. Once we have a picture of the ongoing work in the field and the main gaps, we will explore how EDRi can move forward, potentially including further research, campaigns, or efforts to connect digital and non-digital organisations.
We hope that this project will help us to meet our commitment to uphold digital rights for all, and to challenge power imbalance. We are learning that a true universal approach recognises marginalisation in order to contest it. In order to protect digital rights for all we must understand these differences, highlight them, and then fight for collective solutions.
There is an ongoing mantra among law enforcement authorities in Europe according to which private companies are indispensable partners in the fight against “cyber-enabled” crimes, as they are often in possession of personal data relevant to law enforcement operations. For that reason, police authorities increasingly attempt to lay their hands on data held by companies – sometimes in disregard of the safeguards imposed by long-standing judicial cooperation mechanisms. Several initiatives at European Union (EU) level, like the proposed Regulation on European Production and Preservation Orders for electronic evidence in criminal matters (the so-called “e-evidence” Regulation), seek to “facilitate” that access to personal data by national law enforcement authorities. Now it’s Europol’s turn.
The Europol Regulation entered into force in 2017, authorising the European Police Cooperation Agency (Europol) to directly “receive” (but not request) personal data from private parties like Facebook and Twitter. The goal was to enable Europol to gather personal data, feed it into its databases and support Member States in their criminal investigations. The Commission was supposed to specifically evaluate this practice of receiving and transferring personal data with private companies after two years of implementation (in May 2019). However, there is no public information on whether the Commission actually conducted such an evaluation, what its modalities were, or what its results showed.
Despite the absence of this assessment’s results and of a fully-fledged evaluation of Europol’s mandate, the Commission and the Council consider the current legal framework too limiting and have therefore decided to revise it. The legislative proposal for a new Europol Regulation is planned for release at the end of this year.
One of the main policy options foreseen is to lift the ban on Europol’s ability to proactively request data from private companies or query databases managed by private parties (e.g. WHOIS). However, disclosures by private actors would remain “voluntary”. Just as the EU Internet Referral Unit operates without any procedural safeguards or strong judicial oversight, this extension of Europol’s executive powers would barely comply with the EU Charter of Fundamental Rights, which requires that restrictions of fundamental rights (in this case, the right to privacy) be necessary, proportionate and “provided for by law” (rather than based on ad hoc “cooperation” arrangements).
This is why, in light of the Commission’s consultation call, EDRi shared the following remarks:
EDRi recommends first carrying out a full evaluation of the 2016 Europol Regulation before expanding the agency’s powers, in order to base the revision of its mandate on proper evidence;
EDRi opposes the Commission’s proposal to expand Europol’s powers in the field of data exchange with private parties as it goes beyond Europol’s legal basis (Article 88(2));
The extension of Europol’s mandate to request personal data from private parties promotes the voluntary disclosure of personal data by online service providers, which goes against the EU Charter of Fundamental Rights and national and European procedural safeguards;
The procedure by which Europol accesses EU databases should be reviewed and include the involvement of an independent judicial authority;
The Europol Regulation should grant the Joint Parliamentary Scrutiny Group real oversight powers.
Read our full contribution to the consultation here.
There are widespread web tracking practices that undermine users’ human rights. However, safeguards against web tracking can be, and are being, deployed by various service providers. EDRi member ARTICLE 19, and EDRi as a whole, support these initiatives to protect user privacy and anonymity as part of a wider shift toward a more rights-respecting sector.
Web browsers are our guide across the internet. We use them to connect with others around the globe, orient ourselves, and find what we need or want online. The resulting trail of data that we generate of our preferences and actions has been exploited by the increasingly interdependent business models of the online advertising industry and web browsers. As advertising publishers, agencies, and service providers aim to maximise profit from advertisers by delivering increasingly personalised content to users, web browsers have strong incentives to collect as much data as possible about what each user searches, visits, and clicks on to feed into these targeted advertising models.
These practices not only threaten users’ right to privacy, but can also undermine other fundamental rights, such as freedom of expression and access to information and non-discrimination.
How we are tracked online
A number of mechanisms used by web browsers for ad targeting and tracking can also be used to cross-reference and track users, block access to websites, or discriminate among users based on profiles generated about them from their online activities and physical location. These mechanisms include:
Web usage mining, where the underlying data, such as pages visited and time spent on each page, is collected as clickstreams;
Fingerprinting, where information such as a user’s OS version, browser version, language, time zone, and screen settings is collected to identify the device (a minimal sketch of this technique follows the list);
Beacons, which are graphic images placed on a website or email to monitor the behaviour of the user and their remote device; and
Cookies, which are small files holding client and website data that can remain in browsers for long periods of time and are often used by third parties.
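To make the fingerprinting mechanism concrete, here is a minimal, hypothetical TypeScript sketch of how a script running in a web page could combine a few browser properties into a stable device identifier. Real fingerprinting scripts collect far more signals (canvas rendering, installed fonts, WebGL capabilities, and so on); this is an illustration, not any tracker's actual code.

```typescript
// Minimal browser-fingerprinting sketch (illustrative only): combines a
// handful of standard browser properties into a compact hash-like key.
async function sketchFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                               // browser and OS version
    navigator.language,                                // preferred language
    Intl.DateTimeFormat().resolvedOptions().timeZone,  // time zone
    `${screen.width}x${screen.height}x${screen.colorDepth}`, // screen settings
  ].join('|');

  // Hash the concatenated signals so the identifier is compact and stable
  // across visits, without storing anything on the user's device.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

sketchFingerprint().then((id) => console.log('device key:', id));
```

Note that, unlike cookies, nothing here can be deleted by the user: the identifier is recomputed from the device's own characteristics on every visit.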
Being subject to these practices should not be the non-negotiable price of using the internet. An increasing number of service providers are developing and implementing privacy-oriented approaches to serve as alternatives – or even the new default – in web browsing. These changes range from stronger, more ubiquitous encryption of data to the configuration and use of trusted servers for different tasks. These safeguards may be deployed as entirely new architectures and protocols by browsers and applications, and are being deployed at different layers of the internet architecture.
Encrypting the Domain Name System (DNS)
One advancement has been the development and deployment of internet protocols that support greater and stronger encryption of the data generated by users when they visit websites, redressing historical vulnerabilities in the Domain Name System (DNS). Encrypted Server Name Indication (eSNI) encrypts each domain’s identifiers when multiple domains are hosted by a single IP address, so that it is more difficult for Internet Service Providers (ISPs) and eavesdroppers to pinpoint which sites a user visits. DNS-over-HTTPS (DoH) sends encrypted DNS traffic over the Hypertext Transfer Protocol Secure (HTTPS) port and looks up encrypted queries made in the browser using the servers of a trusted DNS provider. These protocols make it difficult to detect, track, and block users’ DNS queries and therefore introduce needed privacy and security features to web browsing.
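As an illustration of how DoH works in practice, the sketch below performs a DNS lookup as an ordinary HTTPS request. It assumes a resolver that offers the JSON wire format; Cloudflare's public resolver at cloudflare-dns.com is used here purely as an example of such a trusted DNS provider.

```typescript
// Minimal DNS-over-HTTPS (DoH) lookup sketch using the JSON format offered
// by some public resolvers. The query travels inside ordinary HTTPS traffic
// on port 443, so on-path observers cannot read which name is being resolved.
async function dohLookup(name: string): Promise<string[]> {
  const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`;
  const res = await fetch(url, {
    headers: { accept: 'application/dns-json' }, // request the JSON wire format
  });
  if (!res.ok) throw new Error(`DoH query failed: ${res.status}`);
  const body = await res.json();
  // "Answer" holds the resource records; "data" is each A record's IP address.
  return (body.Answer ?? []).map((rr: { data: string }) => rr.data);
}

dohLookup('example.org').then((ips) => console.log(ips));
```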
Privacy-oriented web browsers
Another shift is in the architectures and advertising models of web browsers themselves. Increasingly popular privacy browsers such as Tor and Brave help protect user data and identity. Tor encrypts and anonymises users’ traffic by routing it through the Tor network, while Brave anonymises user authentication using the Privacy Pass protocol, which allows users to prove that they are trusted without revealing identifying information to the browser. Brave’s efforts to develop a privacy-centric model for web advertising – including a protocol that confirms when a user observes an ad without revealing who they are, and an anonymised, blockchain-based system to compensate publishers – have been closely followed by Apple and Google, which aim to standardise their own web architectures, including Apple WebKit’s ad click attribution technology and Google Chrome’s Conversion Measurement API.
Although there are some differences, Brave’s, Apple’s, and Google’s advertising models all include mechanisms to limit the amount of data passed between parties and the amount of time this data is kept in their systems, disallow data such as cookies for reporting purposes, delay reports randomly to prevent identifiability through timestamp cross-referencing, and prevent arbitrary third parties from registering user data. As such, they not only protect users’ privacy and anonymity, but also prevent cross-site tracking and user profiling.
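To illustrate the reporting mechanisms described above, the following sketch shows what a privacy-preserving ad click report could look like. The endpoint and field names are invented for illustration and do not correspond to any vendor's actual API.

```typescript
// Illustrative sketch of a privacy-preserving click report: minimal data,
// no cookies, and a random delay so the report's timestamp cannot be
// cross-referenced with the click event itself.
interface ClickReport {
  campaignId: string; // which ad was clicked - no user identifier included
}

function reportClick(report: ClickReport): void {
  // A random delay of up to 24h decorrelates the report from the click time.
  // (Real implementations persist pending reports across browser sessions.)
  const delayMs = Math.floor(Math.random() * 24 * 60 * 60 * 1000);
  setTimeout(() => {
    fetch('https://ads.example/report', {
      method: 'POST',
      credentials: 'omit', // no cookies attached to the report
      body: JSON.stringify(report),
    });
  }, delayMs);
}

reportClick({ campaignId: 'campaign-42' });
```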
Despite protocols such as eSNI and DoH and recent privacy advances in web browser advertising models and architectures, tracking of online activities continues to be the norm. For this reason, service providers that are working toward industry change are advocating for the widespread adoption of secure protocols and the standardisation of web browsing privacy models to redress existing vulnerabilities that have been exploited to monetise users’ data without their knowledge, monitor and profile them, and restrict the availability of content.
If privacy-oriented protocols and privacy-respecting web browsing models are standardised and widely adopted by the sector, respect for privacy will become an essential parameter for competition among not only web browsers, but also ISPs and DNS servers. This change can stimulate innovation and provide users with the choice between more and better services that guarantee their fundamental rights.
Challenges for privacy-enhancing initiatives
While these protocols and models have been welcomed by a number of stakeholders, they have also been challenged. Critics claim that these measures make it more difficult, if not impossible, to perform internet blocking and filtering. They claim that, as a result, privacy models undermine features such as parental controls and thwart the ability of ISPs and governments to identify malware traffic and malicious actors. These challenges rest on the assumption that there is a natural trade-off between the power of parties who retain control of the internet and the privacy of individual users.
In reality, however, technological advancement constantly occurs as a whole; updated models lead to updated tools and mechanisms. Take DoH and its impact on parental controls as an example. DoH encrypts DNS queries, rendering most current DNS-filtering mechanisms used for parental controls obsolete; these mechanisms rely on DNS packet inspection that cannot be done on encrypted data without intercepting and decrypting the stream first. In response, both browsers and DNS servers are developing new technologies and services. Mozilla launched its “Canary Domains” mechanism, where queries for ISP-restricted domains are flagged and trigger DoH to be disabled. DoH-compatible DNS server providers like cleanbrowsing.org implement their own filtering policies at the resolver level. While these responses do not mitigate the need to ensure users’ privacy and access to information rights through strong legal and regulatory protections, accountability and transparency of service providers to users, and meaningful user choice, they demonstrate that the real benefits of browser privacy and security measures should not be thwarted on the basis of perceived threats to the status quo.
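For illustration, the canary-domain check can be sketched in a few lines. The sketch below assumes a Node.js environment and uses Mozilla's documented signalling domain, use-application-dns.net: if the network-assigned resolver answers NXDOMAIN for it, the browser treats that as a request to keep DoH disabled.

```typescript
// Sketch of Mozilla's "canary domain" mechanism. A resolver that returns
// NXDOMAIN for this domain is signalling "disable DoH" (e.g. because the
// network runs DNS-based parental controls or malware filtering).
import { promises as dns } from 'node:dns';

const CANARY = 'use-application-dns.net';

async function dohAllowedByNetwork(): Promise<boolean> {
  try {
    await dns.resolve(CANARY, 'A'); // plain DNS via the system resolver
    return true;                    // domain resolves: no opt-out signal
  } catch (err: any) {
    // ENOTFOUND (NXDOMAIN) or ENODATA means the canary was triggered.
    return !(err.code === 'ENOTFOUND' || err.code === 'ENODATA');
  }
}

dohAllowedByNetwork().then((ok) =>
  console.log(ok ? 'enable DoH' : 'canary triggered: fall back to plain DNS'));
```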
Leadership opportunity for the EU
In the European Union, the adoption of the General Data Protection Regulation (GDPR) has obliged all stakeholders in the debate to recognise and comply with data protection and privacy-by-design principles. Moreover, the Body of European Electronic Communication Regulators, whose main task is to contribute to the development and better functioning of the EU internal market for electronic communications networks and services, has identified users’ empowerment among its priorities. These dynamics create an opportunity for EU actors to advance global leadership in efforts toward a privacy-oriented internet infrastructure.
Recommendations
ARTICLE 19 strongly supports initiatives to advance browser privacy, including the implementation of protocols such as eSNI and DoH that facilitate stronger, more ubiquitous encryption of the Domain Name System and privacy-centric web advertising models for browsers. We believe these initiatives will lead to greater respect for privacy and human rights across the sector. In particular, we recommend that:
ISPs must help decentralise the encrypted DNS model by deploying their own DoH-compatible servers and encrypted services, taking advantage of the relatively low number of devices currently using DoH and the easy adoption curve this implies;
Browsers and DNS service providers should not override users’ configurations regarding when to enable or disable encryption services and which DNS service provider to use. Meaningful user choice should be facilitated by clear terms of service and accessible and clearly defined default, opt-in, and opt-out settings and options;
Browsers must additionally ensure that, even as they build privacy-friendly revenue generation schemes and move away from targeted ad models, all of these practices are transparent and clearly defined for users, both in the terms of service and codebase;
Internet standards bodies should encourage the inclusion of strong privacy and accountability considerations in the design of protocol specifications themselves, acknowledging the effects of these protocols in real-life testing and deployment; and
Civil society must promote the widespread adoption of secure tools, designs, and protocols through information dissemination, to help educate the community and empower users’ choices.
Finally, ARTICLE 19 urges internet users to support the development and application of privacy-based tools that do not monetise their data, by demanding from their service providers products that better protect their privacy.
The Catalan Department of Education has signed an agreement accepting the plan proposed by Xnet, EDRi member from Spain, titled “Privacy and Democratic Digitization of Educational Centers”, to guarantee data privacy and the democratic digitisation of schools. The plan foresees the creation of a software pack and protocols that ensure educational establishments have alternatives to what until now seemed the only option: technological dependence on Google and its attached elements, with worrying consequences for personal data.
Things can be different. With this plan, Xnet seeks to create an organic system in educational institutions that guarantees the use of auditable and accessible technologies and that said technologies contribute to preserving the rights of the educational community.
The key points of the project are:
Safe and human rights-compliant servers;
Auditable tools already in use, added in a stable pack;
Training that updates the culture in educational centers in favor of the use of digital technologies that respect human rights.
In Spain and in many other places, COVID-19 has shown how far behind institutions are in digitisation, and how little will there is to understand it. Digital-related public policies often range from carefree technosolutionism to technophobic neo-Luddism. The result of these policies, in which the educational community is lectured about the dangers of technology while forced to bend to the will of large digital corporations, is that dominant platforms already control the vast majority of educational establishments … and therefore the behaviour of students, their families and teachers.
To be a society fit for the digital age in which we live, it is not necessary to know about technology, nor to fear it more than any other tool. This means that digitisation should be undertaken in a way that is accessible and rational for everyone; a truly democratic digitisation that improves society. Books have served to build our societies, yet nobody expects that whoever wants to use or teach them has to know bookbinding. Perhaps this is where the initial problem arises. If other subjects are taught by experts in those subjects, why, in the field of digitisation, do we so often resort only to “technicians” and security officers warning of its dangers?
The notions of network and connectivity allow us to operate in an agile way, with the ability to start processes with few people but huge impact, even without advanced technological knowledge.
European Digital Rights is proud to announce that Gail Rego has joined the team at the Brussels office as the new Senior Communications and Media Manager. Gail is responsible for promoting the work of the EDRi network, improving communications and strengthening the public identity of EDRi, as well as developing stronger relationships with the media and press.
Gail has a decade of experience in communications and community-building roles in the UAE, Colombia, Malaysia, Kenya, and Belgium. She started working on tech and child rights related projects and campaigns in her role as Head of Communications and Membership at Missing Children Europe. This included campaigns against child tracking apps, a multi-stakeholder project to improve missing children investigations using blockchain, geofencing and social media analysis, and the NotFound web app that replaces websites’ 404 pages with posters of missing children. Previously, she worked as the Communications and Partnerships Manager at the European Venture Philanthropy Association. She is a member of Young Feminist Europe and the People of Colour Brussels group.
Gail is a human rights activist passionate about dismantling systems of oppression that continue to silence and threaten women, people of colour, migrants, and other marginalised groups. She hopes to help bridge the gap between inclusive communities, intersectional identities and technologies such as AI in order to face the growing inequality, misinformation and polarisation of societies. Read her blogpost on how algorithmic bias prevents access to jobs here.
On 24 June, the European Commission published its Communication reviewing two years of application of the General Data Protection Regulation (GDPR). The Communication received input from the multistakeholder expert group on the application of the GDPR, to which EDRi members Access Now and Privacy International belong. EDRi welcomes the publication of the review at a time when data protection needs to be reinforced, not only celebrated.
The GDPR is considered one of the “crown jewels” of European legislation. However, two years after the Regulation entered into force, it has been receiving increasing criticism: from data protection activists, citing the Regulation’s lack of “teeth”, and from the Big Tech side, with accusations that the GDPR stifles innovation.
What’s the GDPR Impact Assessment?
The Commission review report highlights many of the same points that civil society groups have raised during the last two years, namely:
There are not enough joint operations or investigations for cross-border cases, which could have led to more harmonised enforcement.
Member states need to allocate “sufficient human, financial and technical resources to national data protection authorities”.
Despite the “harmonised” legislation, different implementations still exist in areas such as the age of children’s consent for processing data, the balancing of freedom of expression and information with data protection rights, and derogations from the general prohibition on processing certain categories of personal data.
Individuals are not yet fully empowered, for example in the case of the right to data portability.
It is unclear how to adapt the GDPR to “new” technologies, such as contact tracing apps and facial recognition.
Alexa, tell me where to go from here
EDRi welcomes the Commission Communication’s call for stronger enforcement: asking DPAs and member states to ensure harmonised enforcement, adequate funding for DPAs, and the creation of specific guidelines when needed. If there is no adequate progress, we agree with the Commission that infringement procedures to ensure that Member States comply with the GDPR are an adequate tool at their disposal.
The GDPR was the best possible outcome achievable in the political context of its time. Now it is time to ensure that all the work by activists, policy makers and academics was worth the effort. We must ensure that the GDPR’s complementary legislation, the ePrivacy Regulation, is strengthened and adopted during the German Presidency of the Council of the EU.