Article 13 — The Dangerous New Invention of Clueless Politicians

Bozhidar Bozhanov
Apr 9, 2018

If you are not involved in EU debates, you might have missed the upcoming Copyright Directive. Sadly, there are too many issues with it for it to stay under the radar. So many that I won’t be able to cover them in one article.

I’ll focus on Article 13, which introduces the so-called “upload filters”. The proposal of the European Commission can be found here. Article 13 in the proposal is as follows (don’t read it, it’s boring; just check the bold part):

Article 13
Use of protected content by information society service providers storing and giving access to large amounts of works and other subject-matter uploaded by their users

1.Information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users shall, in cooperation with rightholders, take measures to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter or to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers. Those measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate. The service providers shall provide rightholders with adequate information on the functioning and the deployment of the measures, as well as, when relevant, adequate reporting on the recognition and use of the works and other subject-matter.

2.Member States shall ensure that the service providers referred to in paragraph 1 put in place complaints and redress mechanisms that are available to users in case of disputes over the application of the measures referred to in paragraph 1.

3.Member States shall facilitate, where appropriate, the cooperation between the information society service providers and rightholders through stakeholder dialogues to define best practices, such as appropriate and proportionate content recognition technologies, taking into account, among others, the nature of the services, the availability of the technologies and their effectiveness in light of technological developments.

Now that’s legalspeak even more unintelligible than usual, and one can easily get lost in the euphemisms. But what it is accepted to mean in policy-making circles is this: a user uploads content, the service provider checks whether the content matches something copyrighted, and blocks it if it does. In short: upload filters.
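To make the mechanics concrete, here is a minimal sketch of what such a filter boils down to on the service provider’s side. Everything in it is illustrative: real systems use perceptual fingerprints rather than cryptographic hashes, and none of the names below refer to any actual provider’s API.

```python
import hashlib

# Reference fingerprints supplied by rightholders. In reality these are
# perceptual fingerprints of audio/video segments, not hashes of whole
# files; the entries here are hypothetical placeholders.
RIGHTHOLDER_FINGERPRINTS = {
    "3f7a9c...": "Label X - Song Y",
}

def fingerprint(content: bytes) -> str:
    # Stand-in for a perceptual fingerprinting algorithm. A real matcher
    # must survive re-encoding, cropping, pitch shifts and background
    # noise; a cryptographic hash only matches byte-identical files.
    return hashlib.sha256(content).hexdigest()

def handle_upload(content: bytes) -> str:
    match = RIGHTHOLDER_FINGERPRINTS.get(fingerprint(content))
    if match is not None:
        # No human reviews this decision before it takes effect.
        return f"blocked: matched '{match}'"
    return "published"
```

The hard part is the fingerprint() step: making it robust enough to catch re-uploads is exactly what makes it blind to context such as parody, quotation or licensed use.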

The copyright lobby has managed to convince most people in Brussels and in important member states that this is the only way they can improve their bargaining power with Google (YouTube) and that it is the only way for authors and artists to get their fair share. And you can’t argue with the “fair share” bit, so let’s start with the main “players” in this area:

Authors, artists, creative people — they should be getting money for their work. Culture is an important aspect of human life and even though it might not have direct value that you can calculate, it’s still work worth getting paid for. How much is for customers to decide.

Digital companies — YouTube, Vimeo, Spotify, SoundCloud, etc. should be getting paid for the service they provide. They give millions a way not only to enjoy creative content, but also to create it themselves.

Consumers — they should be able to choose the easiest way to consume content, but also not be subjected to algorithmic censorship if they decide to author content.

The main issue with the directive is that it tries to reverse the eCommerce Directive principle that service providers are not liable for illegal content until they are notified of it. If that changes, service providers will become overly cautious, and this will inevitably lead to filtering or even blocking uploads from non-trusted users.

In the minds of politicians and bureaucrats it’s all very simple: you force everyone to have content filters and things work like a charm. This couldn’t be further from the truth.

Recently I gave a talk at a small conference related to Article 13, in an attempt to both outline the issues and propose a different mode of thinking. My slides can be found here, but the main points are:

  • content recognition is far from perfect. Algorithms fail to recognize context and make mistakes. Here’s a detailed report on the state of content recognition technologies, and here’s another paper. There are numerous examples where authors themselves fall victim to filters, e.g. when a royalty-free sample is used by a number of authors, the first one to upload it effectively blocks the rest.
  • content recognition is not cheap (YouTube spent $40–60 million developing Content ID) and forcing every service provider to have it will make it harder for startups and small companies to compete. In the current debates there are attempts to limit the scope of affected organizations, but this requires tedious definitions that lead to even more uncertainty: “does this apply to me?”
  • we should think of different architectures: taking content recognition out of the service providers and moving it to rightholders or their intermediaries (see the sketch after this list)
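To illustrate the third point, here is a rough sketch of the flow I proposed in the talk: the service provider does not host any recognition technology itself, but asks a service operated by rightholders or their intermediary. The endpoint, payload and response format below are my illustrative assumptions, not an existing API.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint run by rightholders or their intermediary.
INTERMEDIARY_URL = "https://recognition.example-intermediary.eu/check"

def check_with_rightholders(content_fingerprint: str) -> dict:
    resp = requests.post(
        INTERMEDIARY_URL,
        json={"fingerprint": content_fingerprint},
        timeout=5,
    )
    resp.raise_for_status()
    # Assumed response shape, e.g.
    # {"match": true, "work": "...", "policy": "monetize"}
    return resp.json()

def handle_upload(content_fingerprint: str) -> str:
    try:
        result = check_with_rightholders(content_fingerprint)
    except requests.RequestException:
        # If the rightholder-side service is unavailable, publish and
        # fall back to notice-and-takedown instead of blocking by default.
        return "published (check pending)"
    if result.get("match"):
        return f"action: {result.get('policy', 'manual review')}"
    return "published"
```

The point of this design is that the cost and the reference content stay with the parties who already have both: rightholders hold the catalogues they currently refuse to hand over to small providers, so obliging every small provider to rebuild Content ID locally makes little sense.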

Many of the details are unaccounted for in the proposal: how will rightholders provide content to services (currently they refuse to give it to small ones, yet the small ones will now be obligated to filter it: a catch-22), how would the appeals process work, should it be “block first, ask later”, or should there be a fixed timeframe for handling disputes during which the content stays online?

And ultimately: is filtering needed at all, given that currently 98% of rightholders choose the “monetize” option on YouTube? Why regulate around the remaining 2% as if they were the norm? We wouldn’t be having this debate at all if Article 13 simply required service providers to offer rightholders a “monetize” option. The whole thing looks more and more misguided the more you look into it.

Politicians are not thinking in these details. They like putting everyone into boxes and then ignoring or accepting your argument based on which box you are in. For example, I’ll now link to MEP Julia Reda’s blog, where she lists a few absurd examples of content filtering. This risks automatically putting me into the “copyright taliban” box, so that everything I write can be ignored, just because I agree with Reda on some points.

But as you’ll see, upload filters pose a risk to freedom of speech. Algorithms will decide whether your content is allowed to be published or not. And this has already been abused: there are reports of journalists being silenced through YouTube’s filtering options.

I’m not saying content recognition should not be used; it probably should. But if it is to be regulated, that should be done very carefully.

Currently the debate in the Council is trench warfare: every member state is sticking to its position, there are proposals for phrasing modifications, but overall, no agreement is on the horizon.

And that’s actually good news. Initially I thought there could be a meaningful compromise, with some clever software architecture and a few exceptions here and there. But the whole point of the legislation (to harmonize rules across Europe so that service providers do not need a separate legal team in each country where they provide services) is lost.

Go back and read the proposed text. The currently proposed compromises are even more complicated and convoluted. And then each member state’s parliament will have to take this text and introduce local legislation to adhere to the general principles of the Directive. The mess will be even bigger than the one we started with.

So although I attempted to propose an alternative architecture, my plea to the European Commission, to Commissioner Gabriel (who in this case just follows the legacy of her predecessor), to MEPs and to the Council (and its Bulgarian presidency) would be: drop it and start over. Back to the drawing board. Leave it to the next Commission.

There is no meaningful way out of this debacle. And next time, please be more concerned with the details than with what fancy lobbyists are showing you in shiny PowerPoint slides that make no technical or business sense.
