How to turn a disaster into an Internet miracle

Last Monday, the EU failed us all.

It disregarded a record-breaking petition, thousands of protesters and Internet experts; it ignored the advice of the UN High Commissioner for Human Rights, as well as that of the fucking inventor of the World Wide Web. It would probably still have brashly gone ahead if Jesus himself had floated down from the Heavens and said that this was a bad idea. And it passed into law its infamous Directive on Copyright in the Digital Single Market, upload filters and link taxes and all.

I have been following the unfolding dumbassery for over a year, since only the faintest whispers on Y Combinator were echoing the words "Article 13". Though there is little one can be sure of regarding the future of this vaguely written piece of shit legislation, my extensive reading leads me to believe that the following is most likely to happen moving forward:

Over the next two years, EU states will transpose the directive into national law. They have some leeway, so some will do so with a silk touch, and some will blunder into setting rules even more over the top than the directive mandates.

Yes, France, you can sit down. I do mean you.

As all but the tiniest and youngest of websites will be impacted, pretty much every platform with user-generated content that you know, from Google to Instagram and from Medium to Newgrounds, is already taking steps to make sure it'll be compliant when the day comes.

On that day, at the flick of a switch, the upload filters will turn on. Suddenly, the Internet that we know today will adopt a "guilty until proven innocent" mentality. The automated content filters will suck at their job, so some of the perfectly legal things you attempt to share online will never see the light of day, because they'll trigger alarm bells in the copyright monitoring system before they're even published.

Incentives will have been realigned for the platforms. As they will now be responsible for what their users publish, they will have much less of a reason to stand up for freedom of speech. On the contrary, they will choose to overfilter. This is sensible on the corporations' part. Rather than risk expensive lawsuits, it is more profitable to simply cast a net so wide that it also captures lots of innocents, as long as all the actually infringing material is suppressed. The age of careless copyright-motivated automatic censorship will have begun.
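The overfiltering incentive can be made concrete with a toy expected-cost calculation. All of the numbers below are invented for illustration, not real figures; the point is only that when liability is this lopsided, wrongly blocking an innocent upload is the cheaper mistake.

```python
# Toy expected-cost model of a platform's filtering decision.
# Every number here is an illustrative assumption.

p_infringing = 0.05          # fraction of uploads that actually infringe
cost_lawsuit = 1_000_000.0   # expected cost of one infringement slipping through
cost_false_block = 5.0       # cost (lost goodwill) of wrongly blocking a legal upload

# Aggressive filter: misses nothing, but wrongly blocks 20% of legal content.
overfilter = 0.20 * (1 - p_infringing) * cost_false_block    # expected cost per upload

# Lenient filter: never blocks legal content, but lets 1% of infringements through.
underfilter = 0.01 * p_infringing * cost_lawsuit             # expected cost per upload

print(f"overfiltering:  {overfilter:.2f} per upload")
print(f"underfiltering: {underfilter:.2f} per upload")
```

Even with a generous 20% false-positive rate, the aggressive filter comes out hundreds of times cheaper per upload than the lenient one under these assumptions. That asymmetry is the whole argument of the paragraph above.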

A separate effect will be that the internet platforms will pay licences to big media companies, so they can legally show thumbnails, titles and descriptions of their content when it is shared. All the myriad small companies, websites and blogs will get fucked over as a result, since the platforms won't risk lawsuits by showing their materials as well. The big platforms won't pay licences to all the small website owners either, simply because they are too many. It is too much effort for too little reward. The loss in diversity, creativity and little-guy brilliance will impact us all.

Many of the effects will spill over outside of the EU, for two reasons. First, the Internet is transnational. Second, there is a nifty little thing called the "Brussels effect": the EU usually has the world's strictest laws regarding product quality standards, and because any international company that wishes to do business in the EU has to comply, corporations usually end up adopting EU standards for all their products, everywhere in the world. It is simply cheaper to run a single production pipeline that's up to scratch regarding the strictest laws in the world than to run two separate pipelines labeled "EU" and "everywhere else". As a result, it's likely that many countries outside the EU will end up with upload filters and link taxes as well.

And so the copyright censorship filtering machine will have been set into place.

In the 2020s, the Copyright Directive and its constellation of national laws will likely be challenged in court on multiple fronts. The lawsuits will eventually make their way up to the European Court of Justice. Given that it is on shaky ground when it comes to both human rights and conflicting older EU laws, it is likely that the directive will be finally shot down.

Overnight, the reason for the upload filters will disappear. The filters themselves, however, will not. Because they will have already invested heavily in them, and in order to appease the media giants they do business with, the internet platforms will leave the censorship machine in place.

The law will be gone, yet its effects will linger well into the 2030s and possibly beyond.

That is a likely story of the future, if we collectively roll over and accept an Internet that's so controlled that it resembles cable television.

Certainly, we have sailed over the edge. The legal means to significantly oppose this fiasco are exhausted. It's over.

Fortunately, legal process is not the only weapon we have. It is easy to forget sometimes, but the Internet isn't made of people, nor companies, nor even wires. It is made of software. The laws that govern it, as inevitable as thermodynamics, are the code that underpins it. If we can no longer fight Article 13 legally, perhaps it is time to code.

In the last decade, several technologies have been quietly gaining steam. Some of them have become well known, like torrents or the blockchain. Others, like the Interplanetary File System, the IndieWeb, the dat:// protocol or the Fediverse, have so far flown under most people's radars. All of these technologies point toward the same tech geek wet dream: the fully decentralized Web. In short, the d(ecentralized)Web means that all information is somehow spread out amongst everyone in a network. Moreover, everyone contributes a little bit of computing power to the network. There are no middlemen or gatekeepers, just a roiling swarm of users. dWebsites are everywhere and nowhere. The network is neutral, agnostic and uncensorable.
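The "everywhere and nowhere" property rests on content addressing: data is located by the hash of its bytes rather than by the server hosting it, so any peer holding a copy can serve it and any peer can verify it. Here is a minimal sketch of the idea; real systems like IPFS use multihash/CID encodings and chunking, so the plain SHA-256 hex digest below is only a stand-in.

```python
import hashlib

def content_address(data: bytes) -> str:
    # Address = hash of the content itself. Whoever serves these bytes,
    # the receiver can recompute the hash and check it matches the address.
    return hashlib.sha256(data).hexdigest()

page = b"<h1>Hello, dWeb</h1>"
addr = content_address(page)

# Integrity is verifiable by anyone; tampering changes the address:
assert content_address(page) == addr
assert content_address(b"tampered bytes") != addr
print(addr)
```

Because the address is derived from the content, there is no single host to subpoena or filter; censoring a dWebsite would mean chasing every peer that holds a copy.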

That's the ideal, anyway. I have previously written at length about the mad potential of the dWeb/Web 3.0/[insert preferred buzzword] here.

Right now, though, I'd like to talk about something different:

I think that the rationale for adopting the decentralized Web has changed. Because of this EU copyright bill, the world is suddenly a very different place, and the dWeb may be exactly the idea whose time has come. In the next few years, the Web is going to become less vibrant, less diverse and altogether more controlled. People are going to notice. What if, alongside the centralized internet platforms that embody this control (Google, Facebook, Twitter, YouTube etc.), an increasingly enticing alternative rises? An alternative driven by all the enthusiasm, freedom and technological magnitude of the early Web. What's more, an alternative immune to the effects of the Copyright Directive, for it would not be driven by the centralized platforms that this law has domain over, but by open protocols. (Effectively, fleshing out the decentralized Web would mean building around this regulation, avoiding its grasp. dWeb platforms would not really have an owner and would not fulfill the definition of a "platform" considered by the Directive.)

The dilemma of the dWeb has long been the adoption paradox: no one will use it until a lot of people use it. Just now, the European Union may have unwittingly handed us a very compelling reason for people to adopt it en masse. If, as developers, as coders and geeks, we rise to the challenge and build the dWeb into something worth using, then I believe now may be the best time we could hope for. Ironically, Article 13 may be the spark that ignites an Internet renaissance.

Opportunity knocks. We mustn't keep it waiting.

The Ferridge

Wry writer and profligate pixelsmith

This Post Has 6 Comments

  1. KevinOnEarth [via Twitter]

    I look forward to it, though it doesn’t address how Intellectual Property creators would get paid for their work through the dWeb. That said, I believe I have a simple, comprehensive & effective (but radical) solution, which would work on the dWeb (or any Web).

  2. jaboja [via Reddit]

    Very good point.

However I would argue with the "no one will use it until a lot of people use it" part. We should just reuse existing protocols that, even if not so popular nowadays, are already adopted (e-mail, XMPP, RSS etc.) instead of reinventing the wheel (Matrix etc.). This way we would already have some potential to start with, bringing us closer to the critical mass. We would just need to fill the gaps, e.g. one of the decentralized social networking protocols would need to become the de facto standard. And maybe move the web from HTTP(S) to something like IPFS.

There is however one issue we may want to rethink now: the power of corporations lies not only in the servers but also in slowly dissolving existing technologies. Quite a few already-solved problems became problems again just because G####e intentionally broke them for profit:

We had a unified multiplatform UI (GTK, Qt, Java, even HTML), but G####e destroyed it by designing A#####d in a way that breaks existing UI conventions, and Ch###e in a way that breaks old websites on mobile (scaling down websites unless a viewport meta tag is present – basically breaking the lex retro non agit rule on the grounds of web standards).

We had broken the Windows desktop OS monopoly with Linux, only to have it back with the A#####d / i## duopoly.

We had an efficient network for idea discussion in the form of blogs interconnected with RSS, pingbacks and blogroll sections. As it was hosted on diverse platforms it was essentially uncensorable – but then F######k came and destroyed it by pulling everybody, through the network effect, into their walled garden. And nowadays they abuse that power to empower the radical left in the US and the right wing in eastern Europe (the former intentionally, the latter unintentionally, by walling off fake profile creation in a way that blocks laypeople but is obviously no obstacle for "russian trolls").

IMO, we as open-source developers should take some countermeasures now, as the previous generation did when they started the OSS movement and licenses like the GNU GPL appeared. Maybe explicitly state in our licenses that rendering old apps with blur applied (hello MS and 150% scaling in Windows), or scaled in an unpredictable way, constitutes alteration of the original work and therefore copyright infringement, and that therefore our open-source tools cannot be used for that? Or maybe developing tools that don't do that would be enough?

At this point in history we may also want to think about preserving all the older works. I have books from the 1920s that are obviously still readable, but if something was released electronically in the 1990s it is often already unreadable. For that reason I think the future web should not only introduce ways to avoid Article 13 censorship, but also ways of preserving all existing content. We already have web archives; we still need, however, to bring back Flash and Java-applet compatibility to our browsers instead of lying about technological progress – the fact that we disabled some of the technologies that were widely used in the past is basically a modern form of burning the libraries.

    1. The Ferridge

      @jaboja I agree with you on pretty much everything. The devs at Newgrounds have been doing some good work on preserving Flash, btw, but it’s still ongoing.
