Today, 17 April 2019, the European Parliament (EP) adopted its Report on the proposed Terrorist Content Regulation. Although it has been questioned whether this additional piece of law is necessary to combat the dissemination of terrorist content online, the European Union (EU) institutions are determined to make sure it sees the light of day. The Regulation defines what “terrorist content” is and what the take-down process should look like. Fortunately, Members of the European Parliament (MEPs) have decided to include some necessary safeguards to protect fundamental rights against overbroad and disproportionate censorship measures. The adopted text follows suggestions from other EP committees (IMCO and CULT), the EU’s Fundamental Rights Agency, and UN Special Rapporteurs.
“The European Parliament has fixed most of the highest risks that the original proposal posed for fundamental rights online,” said Diego Naranjo, Senior Policy Advisor at EDRi. “We will follow the next stages of the process closely, since any change to today’s Report could be a potential threat to freedom of expression under the disguise of unsubstantiated ‘counter-terrorism’ policies,” he added.
European Digital Rights (EDRi) and Access Now welcome the improvements to the initial European Commission (EC) proposal on this file. Nevertheless, we doubt the proposal’s objectives will be achieved, and point out that no meaningful evidence has yet been presented on the need for a new European counter-terrorism instrument. Across Europe, the inflation of counter-terror policies has had a disproportionate impact on journalists, artists, human rights defenders, and innocent groups at risk of racism.
“The proposed legislation is another worrying example of a law that looks nice, politically, in an election period because its stated objective is to prevent horrendous terrorist content from spreading online. But the law runs the severe risk of undermining freedoms and fundamental rights online without any convincing proof that it will achieve its objectives,” said Fanny Hidvegi, Europe Policy Manager at Access Now. “During the rest of the process, the very least the EU co-legislators must do is to maintain the basic human rights safeguards provided by the European Parliament’s adopted text,” she added.
The next step in the process is trilogue negotiations between the European Commission, the European Parliament, and the Member States. Negotiations are expected to start in September or October 2019.
European Digital Rights (EDRi) is an international not-for-profit association of 42 civil society organisations. We defend and promote human rights and freedoms in the digital environment, such as the right to privacy, freedom of expression, and access to information.
We are looking for an interim Executive Director to replace our current Executive Director during her maternity leave (6 months from mid-July 2019 to mid-January 2020).
The Executive Director provides overall leadership and management of the strategy, policy, resources, operations, and communications of EDRi. The Executive Director is responsible for the management of the organisation and all aspects of its operations. While the Interim Executive Director is not expected to be a specialist in specific operations (campaigns, fundraising, HR, administration, finance, etc.), s/he has a sufficient grasp of all domains to ensure that staff members can achieve their objectives and that they and the EDRi members can work well together to achieve the organisation’s mission.
We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.
Job title: Interim Executive Director
Start and end dates: 15 July 2019 – 15 January 2020
Reports to: Board of Directors (human resources task force)
Line-manages: policy, advocacy, campaigning, communications, fundraising, and organisational support teams
Scope: 10 staff members, annual budget of approx. 830k euro
RESPONSIBILITIES
1. Leadership, organisation mission and strategy
steer the consultation phase of the strategic planning process
provide leadership and management for the organisation
implement the annual work plan and ensure rigorous evaluation
start preparations for the 2020 general assembly
support the Board, and prepare quarterly financial and narrative reports
represent the organisation at events as necessary
support development of policy strategy and taking of tactical decisions
2. Financial sustainability and oversight
prepare the yearly budget, oversee expenditure
oversee and contribute to the raising of funds from foundations, corporations and individual donors
maintain good relations with donors and oversee reporting to them
oversee fiscal management, operating within the approved budget
ensure that sound book-keeping and accounting procedures are followed
ensure that the organisation complies with relevant legislation and grant contracts
3. Organisation operations
ensure the implementation of Board decisions
ensure that the Board is made aware of all matters requiring a Board decision
inform the Board of all developments of major significance to the organisation
oversee internal human resources policies and ensure staff retention
provide oversight of all staff and organise weekly meetings with staff
foster effective teamwork and establish a positive work environment
evaluate the individual objectives with staff members
undertake regular one to one meetings with all staff
sign contracts and other agreements on behalf of EDRi
give or refuse final approval for any unforeseen use of resources
QUALIFICATIONS
senior management experience, preferably in a non-governmental organisation
solid, hands-on financial and budget management skills
strong organisational abilities, especially for planning, delegation and project management
ability to convey the vision of EDRi’s strategic future to staff, Board, network and donors
ability to build trusted relationships with, and to collaborate with and oversee, all staff
knowledge of EU policy-making processes
knowledge and/or experience of the NGO sector
awareness and knowledge of the EU’s political environment
knowledge of the human rights and digital rights field and affinity with EDRi’s values and mission
knowledge and/or experience in the field of human resources management
knowledge and/or experience in fundraising in the nonprofit sector
knowledge and/or experience in conflict resolution
public speaking skills
ability to interface and engage with EDRi’s main stakeholders
Attitude
Passionate, idealistic, enduring, team player, diplomatic, discreet, patient, mission-driven, self-directed, and committed to knowledge-sharing and high-integrity leadership.
Technical
fluency in written and spoken English
strong written and verbal communication skills
budgeting (oversight, presenting, monitoring)
knowledge of free and open source operating systems and software are a plus
HOW TO APPLY
To apply please send a maximum one-page cover letter and a maximum two-page CV (only PDFs are accepted) by email to applications[at]edri.org. Closing date for applications is 30 April 2019. Interviews with selected candidates will take place around mid-May, with a start date of (ideally) 15 July.
Today, on 15 April 2019, European Union Member States gave their final approval to the text of the copyright Directive as it was adopted by the European Parliament on 26 March. This vote in the Council of the EU was the last procedural requirement in the EU law-making process. Now the Directive, once translated and published in the Official Journal of the EU, will become law.
19 Member States voted in favor of the Directive and effectively ignored hundreds of thousands of people who went on the streets in Europe to protest against upload filters and a petition signed by five million people. Six Member States (Finland, Italy, Luxembourg, the Netherlands, Poland, and Sweden) voted against the Directive text, while three (Belgium, Estonia, and Slovenia) abstained, showing how controversial the text is. You can find the full results of the vote on the Save Your Internet campaign website.
Member States will now have two years to implement the Directive in their legislation. The only way to prevent, in practice, upload filters for copyright purposes in the EU is to influence the national level implementation. To do this, we encourage you to support civil rights groups working to defend digital rights in your country!
After months of waiting and speculation, the United Kingdom government Department for Digital, Culture, Media and Sport (DCMS) has finally published its White Paper on Online Harms – now appearing as a joint publication with the Home Office. The expected duty of care proposal is present, but substantive detail on what this actually means remains sparse: it would perhaps be more accurate to describe this paper as pasty green.
Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain online content, and to make Facebook in particular liable for the uglier and darker side of its user-generated material.
DCMS talks a lot about the “harm” that social media causes, but its proposals fail to explain how harmful impacts on free expression would be avoided.
On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the mechanisms to deliver this protection and the issues at play are not explored in any detail at all.
In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such measures in practice is open to debate, terms and conditions, active moderation of posts, and algorithmic choices about what content is pushed or downgraded are all geared towards ousting illegal activity and creating open and welcoming shared spaces. In the White Paper, DCMS has not elaborated on what its proposed duty would entail. If it’s drawn narrowly, so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it’s drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.
If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread prior restraint. Platforms can’t always know in advance the real-world harm that online content might cause, nor can they accurately predict what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater speech restrictions, given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.
DCMS’s policy is underpinned by societally-positive intentions, but in its drive to make the internet “safe”, the government seems not to recognise that ultimately its proposals don’t regulate social media companies, they regulate social media users. The duty of care is ostensibly aimed at shielding children from danger and harm, but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.
Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.
The duty of care seems to be broadly about whether systemic interventions reduce overall “risk”. But must the risk always be to an identifiable individual, or can it be broader – to identifiable vulnerable groups? To society as a whole? What evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered.
DCMS’s approach appears to be that it will be up to the regulator to answer these questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with, allowing government to distance itself from taking full responsibility over the fine detailing of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the government to opt not to create a transparent, judicially reviewable legislative framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will affect UK citizens’ free speech, both in the immediate future and for years to come.
How the government decides to legislate and regulate in this instance will set a global norm.
The UK government is clearly keen to lead international efforts to regulate online content. It knows that if the outcome of the duty of care is to change the way social media platforms work, that change will apply worldwide. But to be a global leader, DCMS needs to stop basing policy on isolated issues and anecdotes, and engage with a broader conversation around how we as a society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerges from this process as a blueprint for more widespread internet censorship.
The UK House of Lords report on the future of the internet, published in early March 2019, set out ten principles it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its support for the fundamental right to freedom of expression – and in a way that goes beyond mere expression of principle – this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack.
The White Paper expresses a clear desire for tech companies to “design in safety”. As the process of consultation now begins, EDRi member Open Rights Group (ORG) calls on DCMS to “design in fundamental rights”. Freedom of expression is itself a framework, and must not be lightly glossed over. ORG welcomes the opportunity to engage with DCMS further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone.
In February 2019, the Digital Freedom Fund (DFF) strategy meeting took place in Berlin. The meeting was the perfect occasion for experts, activists, and litigators from the broad digital and human rights movement to explore ways of working together and of levelling up the field.
The group held discussions on several methods and avenues for social change in our field, such as advocacy and litigation. Public campaigning came up as an interesting option – many organisations want to achieve massive mobilisation, while few have managed to develop the tools and means needed for fulfilling this goal. One of the breakout group discussions therefore focused on mapping the needs for pan-European campaigns on digital rights.
First, we need to define our way of doing campaigns, which might differ from that of other movements. A value-based campaigning method should look into questions such as: Who funds us? Do we take money from the big tech companies, and if so, under what conditions and to what amount? Who are we partnering with: a large, friendly civil society and industry coalition, or a restricted core group of digital rights experts? Are we paying for advertising campaigns on social media, or do we rely on privacy-friendly mobilising techniques? It was agreed that being clear on how we campaign and what our joint message is are crucial elements for the success of a campaign. A risk-management system should also be put in place to anticipate criticisms and attacks.
Second, proper field mapping is important. Pre- and post-campaign public opinion polls and focus groups are useful. Too often, we tend to go ahead with our own plans without consulting the affected groups, such as those targeted by online hate speech, child abuse, and so on.
Third, unsurprisingly, the need for staff and resources was ranked as a priority. These include professional campaigners, support staff, graphic designers, project managers and coordinators, communication consultants, and a central hub for a pan-European campaign.
Finally, we need to build and share campaigning tools that include visuals, software, websites, videos, celebrities and media contacts. Participants also mentioned the need for a safe communication infrastructure to exchange tools and coordinate actions.
At EDRi, all the above resonate as we embark on the journey of building our campaigning capacity to lead multiple pan-European campaigns. For instance, one of the current campaigns we have been involved in − the SaveYourInternet.eu campaign on the European Union Copyright Directive − has revealed the importance of fulfilling these needs. Throughout this particular campaign, human rights activists have faced unprecedented accusations of being paid by Google and similar actors, and of being against the principle of fair remuneration for artists. Despite disinformation waves, distraction tactics and our small resources, the wide mobilisation of the public against problematic parts of the Directive such as upload filters has been truly impressive. We witnessed over five million petition signatures, over 170 000 protesters across Europe, dozens of activists meeting Members of the European Parliament, and impressive engagement rates on social media. The European Parliament vote, in favour of the whole Copyright Directive including controversial articles, was only won by a very narrow margin, which shows the impact of the campaign.
The EDRi network and the broader movement need to learn lessons from the Copyright campaign and properly build our campaigning capacity. EDRi started this process during its General Assembly on 7-8 April in London. The DFF strategy workshop held in Berlin gave us a lot of food for thought for this process.
On 26 March 2019, the European Parliament (EP) adopted the new copyright Directive. The music industry and collecting societies celebrated it as a victory for authors and creators, despite actual authors (along with civil society groups) being worried about the outcome.
Article 17 of the Directive (referred to as Article 13 in the previous draft text) includes a change in platforms’ responsibility that will lead to the implementation of upload filters on a vast number of internet platforms. In effect, Article 17 represents a threat to our fundamental right to freedom of expression.
We tried hard to stop the legalisation of the first EU internet filter. Read below a summary of what happened.
It all started in 2002
EDRi has been involved in copyright discussions since the beginning of our network’s existence. We’ve promoted a positive agenda aimed at fixing the main problems within the existing framework, and supported a copyright reform that included a request for authors and artists to receive fair remuneration for their work. We published handbooks, series of blogposts, responded to public consultations, spoke in numerous public events, and met with all key policy makers in Brussels and at national level. We participated in different joint actions and were involved in the inception and development of SaveYourInternet.eu along with Copyright for Creativity (C4C).
Civic engagement vs industry lobby
During the debates, the individuals’ and civil society groups’ participation was crucial in order to balance the massive lobby efforts by industries. In July 2018, thanks to the pressure of thousands of people calling their Members of the European Parliament (MEPs), the European Parliament rejected the mandate to proceed with a flawed proposal. This gave us hope that citizens’ voice can be heard, if we shout loud enough.
During the Copyright Action Week in March 2019, ahead of the final vote on the Directive in the European Parliament, a team of 17 people from all across Europe made it all the way to Brussels and Strasbourg. They all parked their studies or jobs for a few days in order to meet their elected representatives and make a final push to delete upload filters from the copyright Directive. We were impressed with their dedication, and their thorough knowledge of the consequences Article 13 could have on the internet. Moreover, hundreds of thousands of people went on the streets in Europe to protest against upload filters.
The latest actions taken by all of those opposing internet filters were not in vain. In the vote adopting the Directive on 26 March, 55 MEPs who had supported Article 17 (former Article 13) in September 2018 changed their position and were willing to delete it from the final text of the Directive. The deletion could have happened through an amendment proposed by several MEPs. In order for this amendment to be adopted and Article 17 deleted, a vote on whether the text should first be opened to amendments took place during the March 2019 plenary.
The vote: Blue pill, or red pill?
On 26 March, the possibility to have a discussion on the amendments to remove Articles 11 and 13 (15 and 17 in the final text) was voted down by a margin of five votes. Thirteen MEPs claimed that they had wished to open the debate to remove both Articles, but were confused by the previous change in the voting order and the obvious lack of clarity with which this procedural vote was introduced, and so failed to vote “yes”. The vote was corrected only in the records; this does not affect the actual results. After this “mistake” that made it impossible for MEPs to vote on deleting Article 13/17, the text of the Directive (including Article 13/17) was adopted with 338 votes in favor, 283 against, 36 abstentions, and 93 MEPs not attending the session.
Despite some policy-makers repeatedly stating that the Directive would not lead to upload filters, it turned out it was all about filters. The day after the Directive was adopted, France hurried to declare that it will ensure that “content recognition technologies” will be a key aspect of the upcoming laws implementing the Directive.
With the adoption of Article 17 as part of the Copyright directive text, the European Union is setting a terrible precedent for the rest of the world, encouraging the implementation of upload filters. Initially under the pretext of copyright infringement, filters are already being discussed also in the framework of online “terrorist content”.
Next steps: EU Council and implementation
The final vote in the Council of the European Union, where EU Member States are represented, is scheduled for 15 April. This is traditionally a merely procedural vote – after all, the Council already agreed, before the European Parliament’s final vote, to the text on which it will be voting. However, this is technically the last chance to get rid of the upload filters. If the Member States currently opposing the “censorship machine” (Finland, Luxembourg, Poland, the Netherlands, Italy and perhaps Sweden) remain on the side of their citizens, the only glimmer of hope is that a country representing around 9,5% of the population of the whole EU also rejects the text. Out of those countries (Germany, France, Spain), the only realistic candidate is Germany. Will the German government respect the coalition agreement which prohibits it from implementing upload filters? Will other EU countries stand up for their citizens, taking into consideration the upcoming European Parliament (and some national) elections? We’ll find out soon.
In the case of the copyright Directive becoming law, civil rights groups are set to reject upload filters in the national implementation phase. Planned actions include potential referrals to the Court of Justice of the European Union (CJEU).
Today, on 8 April 2019, the European Parliament Committee on Civil Liberties, Justice and Home Affairs (LIBE) adopted its Report on the proposed Regulation for moderation of terrorist content online.
Released by the European Commission in September 2018, the proposal was welcomed in the Council of Member States, which rapidly concluded a political agreement a few months later. Stronger reservations were, however, expressed in the different Committees in charge of the file in the European Parliament, which led to substantial changes to the Commission’s original proposal.
The most critical points for the protection of fundamental rights concerning the proposed Regulation were taken on board by the LIBE Committee in its Report:
The definitions of “terrorist content” and “hosting service providers” are clarified and brought in line with the counter-terrorism acquis. Exceptions are provided for educational, journalistic or research material, and LIBE has limited the scope of the Regulation to only cover hosting service providers that make content available to the public at the application layer, leaving out infrastructures providers, as well as cloud and messaging services.
Amendments to the first instrument, removal orders, require that a single judicial or functionally independent administrative competent authority should be appointed. Unfortunately, the one-hour time frame to respond to removal orders, which is simply not feasible for smaller service providers with limited capacities, was not changed by LIBE, despite the blatant lack of evidence supporting this deadline.
The possibility for national authorities to refer content to service providers for deletion on the basis of their terms and conditions is now removed from the text. This is a major step forward because this instrument would amount to increased online policing by platforms and a circumvention of legal safeguards attached to removal orders in order to tackle content that is not illegal.
The LIBE Committee also deleted the obligation of pro-activity, involving the use of automated tools like upload filters. The Parliament is clearly reasserting the prohibition to oblige platforms to generally monitor the user-generated content they host on their services (Article 15 of the e-Commerce Directive).
Lastly, the principles of the rule of law and the protection of fundamental rights are substantiated with additional transparency requirements falling on competent authorities and stronger redress mechanisms for both hosting service providers and content providers.
After the European Parliament elections in May 2019, and once a new EU Commission has been set up, the text will be subject to several rounds of trilogue negotiations between the Parliament, the Council and the Commission. These closed-door meetings aim at finding a middle ground between the diverging positions of the three negotiators. Considering that the Council position did not depart a lot from the Commission’s proposal, there is a significant risk that the “damage control” conducted by the Parliament will be partly rolled back in the next phase of the policy-making process.
A note produced by the Romanian Presidency of the Council of the European Union sets out the EU’s response to terrorism since 2015. It highlights the main measures adopted and calls for a “reflection process on the way forward” in a number of areas including “interoperability and extended use of biometrics”; implementing the EU Passenger Name Record (PNR) Directive and possibly extending its scope beyond air travel; and “synergies” between internal and external policies.
The issues highlighted in the document were discussed by the Justice and Home Affairs (JHA) Council on 7-8 March. It was noted that “the process of reflecting on the way forward will continue at technical level”.
On the issue of “interoperability and extended use of biometrics”, the paper says (emphasis added): “The package on interoperability should be fully implemented. Existing databases should be filled with good quality data, and tools (such as biometrics and facial recognition) should be improved to enable querying with data across more EU information systems. All relevant competent authorities in the CT [counter-terrorism] area should have direct access to relevant information systems (notably SIS II and Prüm) to avoid information and security gaps. Connecting more systems could be explored in parallel to implementation.”
This implies an appetite for further expanding the interoperability initiative before there has been any opportunity to fully assess how it functions in practice – despite serious data protection and privacy concerns raised by specialists and some Members of the European Parliament (MEPs).
The Council and Parliament recently provisionally agreed a text on two key Regulations underpinning the interoperability plans.
Regarding PNR, the note recalls the importance of all Member States fully implementing the EU PNR Directive, agreed in 2016, and says: “The collection and processing of PNR data is crucial to detect, prevent and prosecute terrorist offences, and the effective connection of the PIUs of the Member States for information exchange is a priority. The further broadening of the scope of PNR (to other means of transportation) could be explored.”
Regarding internal-external “synergies”, the Presidency highlights: “The nexus between internal and external security has become increasingly prominent, and progress has been made in better connecting the two areas. Together with the Commission, the EEAS [European External Action Service] and the EU CTC [Counter-Terrorism Coordinator], the Presidency is further exploring ways to strengthen the links between the external and internal dimensions of security in relation to CT [counter-terrorism]. This includes focusing on the use of internal instruments to promote EU security interests related to CT in priority third countries (e.g. Western Balkans, Turkey and the MENA [Middle East and North Africa] region)…”
Other ongoing work outlined in the document concerns “violent extremism and radicalisation”; data retention; the financing of terrorism; “chemical, biological, radiological and nuclear (CBRN) risks, in particular chemical risks”; cooperation between EU agencies; and “emerging threats”: “Evolving technologies such as UAVs (unmanned aerial vehicles), artificial intelligence (AI), blockchain or the Internet of Things, could be misused by terrorist groups. Tackling these threats requires high-tech expertise, meaning that more efforts at national and EU level are required to address the emerging threats, including through public-private partnerships and research and development. At the same time, the opportunities of the new technologies for security need to be explored and mobilised.”
The document also includes a list of adopted counter-terrorism measures, measures awaiting formal adoption, and measures under discussion.
On 20 March, the European Commission imposed yet another massive fine, 1,5 billion euro, on Google. The Commission’s Directorate-General for Competition stated that the data company has abused its dominant position in the online advertising market by imposing restrictive contracts on third-party websites, which prevented rivals from placing their search adverts on these websites.
Competition Commissioner Margrethe Vestager said that “Google has cemented its dominance in online search adverts and shielded itself from competitive pressure”. According to her findings, Google’s misconduct lasted over ten years and prevented other companies from competing in the ad market.
The fine was imposed for the way Google uses its “AdSense for Search” product, which delivers online ads to large third-party websites, such as newspapers and travel sites, that embed Google Search into their online presence. This embedding took place via agreements, according to the Commission’s press release, and Vestager’s team says it “reviewed hundreds of such agreements in the course of its investigation”. What it found is quite alarming: as of 2006, Google’s agreements prohibited publishers from placing search ads from competitors on their search result pages. This clause was later replaced with one reserving the most valuable ad space for Google’s own ads and requiring that any changes publishers wanted to make be pre-approved by Google.
Google hasn’t denied the charges. In a press statement, Google’s Senior Vice President of Global Affairs, Kent Walker, said: “We’ve always agreed that healthy, thriving markets are in everyone’s interest. We’ve already made a wide range of changes to our products to address the Commission’s concerns. Over the next few months, we’ll be making further updates to give more visibility to rivals in Europe.”
Although Google ceased those practices a few months after the Commission issued a so-called statement of objections in July 2016, the EU authority still decided to impose this fine, which represents 1,29 % of Google’s turnover in 2018. It follows two previous Commission decisions imposing fines of 4,3 billion euro in 2018 and 2,4 billion euro in 2017 for abuses of dominant positions in mobile and shopping search. Google is currently appealing both decisions in court.
Fines such as this one are paid into the general EU budget and will be deducted from next year’s Member State contributions to the EU budget. The fines therefore co-finance operations of the EU. The Commission’s Directorate-General for Competition is probably the only part of the EU administration that regularly makes more money than it costs.
The right to freedom of information (FOI) has been protected by law in North Macedonia since 2006. In theory, the law complies with international standards and creates a solid basis for a system protecting this right. In practice, however, the past 12 years have revealed legal gaps, inconvenient practices, and the inefficiency of the national authority in implementing the law.
The urgent reform priorities set by the European Union in 2015 as preconditions for North Macedonia’s accession to the EU specifically require that the government fundamentally improve access to information. Some improvements have been made, advancing active transparency by declassifying and publishing documents online and by allowing access to data on the spending of public money.
Meanwhile, the Commission for the Protection of the Right to Free Access to Public Information (KOMSPI), which is in charge of monitoring the implementation of the law, has not been functioning. A huge backlog of unresolved complaints has accumulated because the parliament failed to appoint new commissioners and replenish the body’s ranks.
In December 2017, an initiative for a new FOI law was launched. After a year and a half, Macedonian citizens finally received the proposed text of the new law.
EDRi member Foundation for Internet and Society – Metamorphosis endorses the process of passing the new Law on Free Access to Public Information, which would provide more efficient protection of the fundamental right to access information.
With regard to specific provisions of the proposed text, Metamorphosis suggests the following:
Article 1, paragraph 1: Defining political parties as public information holders with regard to their income and expenditures is one of the key positive novelties of the Law on Free Access to Public Information. Metamorphosis believes that the funding of political parties should be considered public information, in order to increase transparency regarding political parties’ spending of public money.
Article 3, paragraph 1, indent 7: The draft text attempts to define the cases where access to information would be of public interest by establishing a fixed list of criteria. Metamorphosis does not recommend using a closed list to define public interest, since a narrow definition risks limiting the exercise of the right to access information. To avoid such a limited definition, we suggest introducing a mandatory injury test to assess whether a public interest exists whenever information is requested, rather than defining public interest in the law itself.
Article 10: Metamorphosis deems the definition of public information detailed and considers that it provides legal certainty for public information holders. In addition, beyond the scope of the information covered, its availability on websites should reduce the number of access requests, giving holders the opportunity to be more efficient in fully implementing the law.
Article 21, paragraph 1: Shortening the deadline for holders to respond to a request from 30 to 20 days is a change Metamorphosis believes will not drastically improve the implementation of the law, especially when journalists request public information. Moreover, in its 2017-2022 work plan, the Government of the Republic of North Macedonia states that it will implement the open government concept in full to further increase transparency, and that it will propose amendments halving the deadline for responding to public information requests from 30 to 15 days, as recommended in the Open Government Partnership plan.
Article 31: Metamorphosis welcomes the change of status of the authority responsible for implementing the Law on Free Access to Public Information from a commission, a collective body, to an agency, an independent body, especially with regard to leading complaint procedures.
The positions listed above were defined following a public debate held in the Assembly of the Republic of North Macedonia. The Parliament is currently working on amendments, and the final text is expected to be submitted to Members of Parliament soon.