How IPFS solves the Internet’s speed-of-light problem

Eli Dourado · Published in Plain Text · Aug 31, 2016


In the early days of the Web, there was a dream of deterritorialization. In the view of Grateful Dead lyricist and Internet activist John Perry Barlow, governments of the industrial world were obsolete. “You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear,” he wrote in A Declaration of the Independence of Cyberspace.

It is easy to understand why Barlow and his comrades were so excited back in 1996 when the Declaration was written. The Internet connected the whole world. Information published online from anywhere in the world was accessible anywhere else. All you needed was one server anywhere on Earth to reach a global audience. Speech was uninhibited as long as there was at least one place that would tolerate it. There was nothing that the rest of the world could do to bottle it up.

In practice, it didn’t work out that way. The Internet, it turns out, is a physical thing. There are servers and routers and cables, and, perhaps most importantly, a pesky limit known as the speed of light.

If you want to serve a global audience, you can’t really do it from a single server or even a single data center. It takes hundreds of milliseconds for data to flow halfway around the world and back. Some of that time may represent routing or other signal propagation delays, but a good chunk of it is due simply to the fact that information can’t travel faster than light.
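
To see where those hundreds of milliseconds come from, here is a back-of-the-envelope sketch (the figures for Earth’s circumference and the speed of light in fiber are rough approximations):

```python
# Rough lower bound on round-trip latency halfway around the world.
EARTH_CIRCUMFERENCE_KM = 40_000    # approximate
ONE_WAY_KM = EARTH_CIRCUMFERENCE_KM / 2
LIGHT_VACUUM_KM_S = 300_000        # speed of light, approximate
LIGHT_FIBER_KM_S = 200_000         # light in optical fiber travels at ~2/3 c

rtt_vacuum_ms = 2 * ONE_WAY_KM / LIGHT_VACUUM_KM_S * 1000
rtt_fiber_ms = 2 * ONE_WAY_KM / LIGHT_FIBER_KM_S * 1000
print(f"Best case: {rtt_vacuum_ms:.0f} ms; in fiber: {rtt_fiber_ms:.0f} ms")
# Best case: 133 ms; in fiber: 200 ms -- before any routing delays at all.
```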

A delay of 100 milliseconds is perceptible to ordinary Web users. When, as is common, a Web application requires multiple round trips between the browser and the server, the delays can add up to seconds. That is enough to make users reconsider whether they really want the information they just requested; often, they will simply close the browser tab. If you’re running any kind of global online business, it’s suicide to serve all of your content from a single location while forgoing offices, personnel, and equipment around the world.

Unfortunately, this global entanglement brings territorial governments, those “weary giants of flesh and steel,” back into the picture. The poster child for this fact is Yahoo! In 2000, the company was sued in France for allowing auctions of Nazi memorabilia on yahoo.com (the company had already banned this material from the yahoo.fr auctions site). French law is clear—one may not wear or exhibit Nazi insignias except for artistic or historical exhibitions. Yahoo! lost the case, and, despite its claim that it was protected by the US Constitution’s First Amendment, eventually complied with the French court’s order to remove Nazi paraphernalia from its US auction site.

[Image: Nazi memorabilia from the 1930s at Berlin’s Museum of Things. Photo by user henrytapia on Flickr.]

It’s worth underscoring exactly why Yahoo! was ultimately forced to comply with a French order. Yahoo! was a global company operating in many countries around the world, including France. Even though its main auction servers were in the US, its global operations made it vulnerable to court orders in every country in which it operated. Without following local court orders, assets could be seized or executives arrested.

To revive the early Internet’s dream of deterritorialization, then, we need to make it possible to serve a global audience from a single location. To start with, that means fixing the Internet’s speed-of-light problem, which is exactly what a new Web-like system, IPFS, does.

When you request a piece of content on the Web, you supply your browser with a URL, which is information that can be used to locate the resource you’re looking for. That location process is remarkably physical. A Web URL usually has a domain name, which can be resolved to an IP address, which is ultimately used to make a connection between your browser and a specific server somewhere in the world. Your browser then supplies that server with the filename of the piece of content you seek, and the server responds with the content.
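
In code, that location-based lookup looks roughly like this (a sketch using Python’s standard library, with example.com standing in for any real site):

```python
import socket
import urllib.request

host = "example.com"                  # domain name from the URL
ip = socket.gethostbyname(host)       # DNS maps the name to an IP address
print(f"{host} lives at {ip}")        # one specific machine, somewhere physical

# The browser then connects to that server and names the resource it wants:
with urllib.request.urlopen(f"http://{host}/") as response:
    content = response.read()         # whatever that one server sends back
```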

IPFS works differently. Instead of identifying content by location and filename, the system identifies content by a cryptographic hash of the content itself. To fetch content, you connect to a peer-to-peer swarm and ask whether anyone has the content matching a specific hash. This hash is a tamper-proof digital fingerprint: a 256-bit number that nearly uniquely identifies each piece of content.¹
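
A minimal sketch of content addressing, using plain SHA-256 (IPFS itself wraps the digest in a self-describing multihash encoding, but the gist is the same):

```python
import hashlib

content = b"Hello, interplanetary world!"
content_id = hashlib.sha256(content).hexdigest()
# content_id is a 64-character hex fingerprint derived from the bytes
# themselves; change one byte of `content` and it changes completely.
print(content_id)
```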

It may be that nobody has the content that matches the hash you are looking for other than the original publisher. In that case, you will ultimately connect to the publisher’s computer, which may be located on the other side of the globe, with all the speed-of-light-driven latency that implies. But on the other hand, someone closer to you may have a copy of the content and may be the first to respond to your request. Because cryptographic hashes are tamper-proof, you can download the copy from your neighbor and know with certainty that it is the same content you were requesting. Your system recalculates the hash of the content automatically to verify that it matches what you asked for.
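
Verification is the cheap part: any node can check a downloaded copy with one comparison, no trust in the peer required (again a sketch using plain SHA-256):

```python
import hashlib

def verify(requested_hash: str, data: bytes) -> bool:
    """True only if `data` is exactly the content that was requested."""
    return hashlib.sha256(data).hexdigest() == requested_hash

# Whether the bytes came from the original publisher or from a neighbor's
# cache, a matching hash proves they are the same content, byte for byte.
```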

The benefit of this content-based addressing is even more stark if we consider its use on a nascent Mars colony. Imagine a colonist trying to connect to servers on Earth, with one-way latencies of between 4 and 24 minutes, depending on the planets’ relative orbital positions. Each round-trip request therefore takes between 8 and 48 minutes. With that kind of delay, it makes sense to cache everything you get back from Earth. That way, if another colonist wants the same content, they can get it locally without going through another interplanetary request, neatly sidestepping the speed-of-light problem. Indeed, this use case is what inspired IPFS’s name—Inter-Planetary File System.

But as the failure of the existing Web to disempower territorial governments demonstrates, you don’t have to be on Mars to benefit from content-based addressing. If IPFS were widely adopted, it would become possible for a single jurisdiction to become a data haven able to serve the globe—and indeed the solar system—at low latency. Alternatively, servers in orbit could be seeders of uncensored content on the IPFS network. The structure of the IPFS network decentralizes content distribution so that people who want to serve a global audience need not actually have a physical presence in multiple jurisdictions, at least not for the purpose of literally serving static files.

There is a lot more that is technically interesting about the IPFS project—it breaks files down into blocks arranged in a Merkle tree, it tracks file version history, and it is integrated with a new human-readable naming system. It has been described as “similar to a single bittorrent swarm exchanging git objects.” The project is also planning to adopt a mechanism for incentivizing (i.e., paying for) the caching and serving of files, so that users don’t have to rely on the goodwill of other nodes.
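
As a taste of the block-and-tree idea, here is a toy sketch: split a file into fixed-size blocks, hash each block, then hash pairs of hashes until a single root remains. (IPFS’s real structure is a Merkle DAG with richer links and different chunking, so treat this only as the shape of the idea.)

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(data: bytes, block_size: int = 256 * 1024) -> str:
    # Split the file into fixed-size blocks; each block gets its own hash.
    blocks = [data[i:i + block_size]
              for i in range(0, len(data), block_size)] or [b""]
    level = [h(block) for block in blocks]
    # Hash pairs of hashes upward until a single root hash remains.
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])    # duplicate the odd hash out
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]
```

Because each block is independently addressed by its own hash, peers can fetch different pieces of the same file from different sources and verify each piece on arrival.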

And in combination with other decentralized computing projects, IPFS is even more useful. IPFS distributes content, but application logic is also content. Whole apps can be distributed via IPFS, and where they need to, access a blockchain like Ethereum for what used to be server-side logic.

The Web has changed the world—just not as much as we initially hoped. The IPFS project is one of several in a broader crypto-renaissance that give me renewed hope for a world in which old, Westphalian dinosaurs have much less power online.

Footnote

¹ Nearly, because in theory it’s possible for there to be a “hash collision,” two totally different pieces of content that cryptographically hash to the same value. But with 2²⁵⁶ possibilities (that’s on the order of 10⁷⁷) for the hash value, hash collisions are rare to say the least. The sun will likely explode before we find a 256-bit hash collision.
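
For the curious, the arithmetic checks out (a quick sanity check in Python):

```python
n = 2 ** 256
print(len(str(n)))  # 78, so n is on the order of 10**77
# Even with the birthday paradox working in an attacker's favor, finding a
# collision takes roughly 2**128 hash evaluations: far beyond feasibility.
```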

Eli Dourado is a research fellow at the Mercatus Center at George Mason University and director of its Technology Policy Program. Follow @elidourado on Twitter.
