Beholden to Our Network Overlords

All Your Base Are Belong To Us

Houston, we have a problem. What would you say if I told you that our interaction with information, networks, and technology is undermining the economy, democracy, and the very fabric of society? And what if I told you that the solution is smarter payments?

You’d probably call me crazy. But that’s exactly what Jaron Lanier—named one of Time’s 100 most influential people in 2010—posits in his book, Who Owns the Future?, a work Joe Nocera from the New York Times described as “the most important book I read in 2013.”

Within those pages, Lanier—the computer scientist and classical music composer credited with popularizing the term virtual reality—builds the case for why “our network architecture is shrinking the economy and impoverishing the middle class”—and how it’s also responsible for the financial crisis and the rise of the surveillance state.

But how?

The crux of his argument is bound up with the arrival of the Information Age and is best reflected in the architecture of the internet. It’s here that we readily submit ourselves to the whims of network overlords—think Facebook—who in turn leverage the spoils of our collective participation to enrich themselves and further their stranglehold over the system. It’s the reason why a two-year-old company like Instagram, which had no revenue, a dozen or so employees, and a piece of software even a novice programmer could reproduce in short order, was apparently worth $1 billion—a figure observers later decried as “cheap” with the benefit of hindsight.

In today’s logic-defying landscape of ones and zeros, profit and power are distributed not to those who provide material value to the systems we inhabit—the majority of us—but to the handful of individuals sitting at the top, lucky enough to be in the right place at the right time to claim the throne. Sounds a bit like our economy, doesn’t it? This notion is confirmed by a recent Princeton study. America is no democracy, the researchers concluded. Much like the state of social networking, the US is ruled by an oligarchy.

There’s a technological explanation for this, says Lanier, who substantiates his case in a podcast with IEEE Spectrum:

Okay, so let us hypothesize that in the future there would be robotic devices that would create bread, or perhaps bread could be 3-D printed, or perhaps bread might go from some tiny seedling on its own, automatically. In a sense, it already does that, of course. But let us just suppose, at any rate, that there are technologies for making bread which require vastly less human labor than in our present times. And this might include the workers in the field who gather the grains and process the grains. The whole thing might become what we call more “automated,” right?

And so then, the usual line of thinking is that, “Well, it’s sad that all the people who might have had jobs making bread before or making the components for bread or transporting the bread, it’s sad that all those jobs went away. But we can count on new jobs coming about, or at least new paths to sustenance, because technology always creates new opportunities.” And that’s something that I think is true. However, it all really depends on how we think about the technologies.

If we think about the technologies purely in the terms of sort of an artificial intelligence framework, where we say, “Well, if the machine does it, then it’s as if nobody has to do anything anymore,” then we create two problems that are utterly unnecessary. There’s a microeconomic problem, and there’s a macroeconomic problem. The microeconomic problem is that we’re pretending that the people who do the real work don’t exist anymore. But then the macroeconomic level also has to be considered. If we are saying that we’re automating the world—which is what happens when you make technology more advanced, and therefore there will be more and more use of these corpora driving artificial intelligence algorithms to do everything, including bread making—if we’re saying that the information that drives all this is supposed to be off the books, if we’re saying that it’s the free stuff, it’s not part of the economy, it’s only the sort of starter material or the promotional material or whatever ancillary thing it might be, if the core value is actually treated as an epiphenomenon, what will happen is the better technology gets, the smaller the economy will get, because more and more of the real value will be forced off the books. So the real economy will start to shrink. And it won’t just shrink uniformly; it’ll shrink around whoever has the biggest routing computers that manage that data.

The fundamental catalyst for this phenomenon, according to Lanier, is how we treat information—that we’ve been conditioned to perceive it as free. Except that it’s not—for the same reason that perpetual motion machines don’t exist. Information might feel free, but there will always be costs associated with its creation, processing, and distribution—even if those costs aren’t explicit; they’re priced into the equation one way or another.

But if we are to vote with our dollars in a capitalist society and market-based economy, then in buying into this falsehood, what we are really doing is absolving ourselves of any say or power—which, increasingly, is becoming the only payment option available. It’s a particularly foolish trade—tyranny for a “free lunch”—given that someone is eventually going to have to make good on that debt no matter what. And going by history, it will more often than not still be you and me:

Let’s suppose that you want to be Maxwell’s demon. And Maxwell’s demon is this character who operates a little tiny door letting hot molecules pass one way and cold molecules pass the other way. And if you can operate this little guy, he could eventually, just as a matter of by opening and closing this little door, separate the hot from the cold. And then at that point, he could open up another big door and let them mix again, running a turbine, and then repeat, and then you get perpetual motion. And, indeed, every perpetual motion machine boils down to an attempt to make a Maxwell’s demon.

And so if we ask, Why doesn’t this work? Why don’t we have perpetual motion? the answer is that the very act of computing, the act of discrimination, the act of measurement, the act of even the smallest manipulation in response to those things, the act of keeping track of it all so you know what you’re doing, all that stuff is real work. And it takes energy. It radiates its own waste heat. It’s real work. And you can never get ahead. There’s no free lunch. The work involved in doing that will always be greater than the work you can earn by doing it. So that’s an aspect of thermodynamics in a nutshell.

So what I’m proposing is that finance, and indeed consumer Internet companies and all kinds of other people using giant computers, are trying to become Maxwell’s demons in an information network. The easiest way to understand it is to think about an insurance company. So an American health insurance company, before big computing came along, would hire actuaries to set rates. But the idea of, on a person-by-person basis, attempting to decide who should be in the plan so that you could only insure the people who need it the least on an individual basis, that wasn’t really viable. But with big computing and the ability to compute huge correlations with big data, it becomes irresistible. And so what you do is you start to say, “I’m going to…”—you’re like Maxwell’s demon with the little door—“I’m going to let the people who are cheap to insure through the door, and the people who are expensive to insure have to go the other way until I’ve created this perfect system that’s statistically guaranteed to be highly profitable.”

And so what’s wrong with that is that you can’t ever really get ahead. What you’re really doing then is you’re radiating waste heat. I mean, for yourself you’ve created this perfect little business, but you’ve radiated all the risk, basically, to the society at large. And if the society was infinitely large and could absorb it, it would work. There’s nothing intrinsically faulty about your scheme except for the assumption that the society can absorb the risk. And so what we’ve seen with big computing in finance is a repeated occurrence of people using a big computer to radiate risk away from themselves until the society can’t absorb it. And then there’s some giant bailout and some huge breakage. And so it happened with Long-Term Capital [Management] in the ’90s. It happened with Enron, and we saw a repeat of it in the events leading to the Great Recession in the late aughts. And we’ll just see it happening again and again until it’s recognized that this pattern is just not sustainable.
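
(A brief aside for the physics-minded, since Lanier only gestures at it: the standard modern resolution of Maxwell’s demon, usually credited to Landauer and Bennett, is that the demon has to record and then erase its measurements, and erasing each bit of that record dissipates at least k_B·T·ln 2 of energy, roughly 3 × 10⁻²¹ joules per bit at room temperature. Summed over every molecule sorted, that bookkeeping cost always meets or exceeds the work the demon can extract, which is why the books never balance and there is no free lunch.)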

So it might seem like information is free or that these derivative positions are risk-free, but all we’re doing is tucking those costs and risks into other places. With the internet, these costs might manifest in abstract ways, like spam, the loss of privacy, or lower quality information—such as when journalists have to resort to sensationalist headlines or titillating slideshows just to grab your attention and inflate their page impressions. In the case of Wall Street, once the house of cards fell, it was the rest of society left holding the bill.

Central to this conundrum is the lack of a clear solution. During the Industrial Age, Henry Ford, to his great credit, realized that his business model would only be sustainable if he empowered the middle class—by paying wages high enough that his employees could afford a Model T. It’s a far cry from our situation today, where the go-to strategy for this latest tech boom has been the ol’ bait-and-switch, where guys like Mark Zuckerberg lure the masses with a free service only to sell them out once they’ve become untouchable, their monopoly unshakeable.

Of course, selling out isn’t all that straightforward. The ad-based model is itself a paradox. Feeling exposed and betrayed—instead of empowered and motivated, as they once did—disillusioned users become less inclined to produce and share information. On the receiving end, our News Feed becomes cluttered with sponsored stories and ads. As quality deteriorates, we naturally flee to the next best, still untainted thing: Instagram, for instance. Things might feel better, for a fleeting minute—because the variables are slightly tweaked and we appreciate the smell of new things—but history is doomed to repeat itself. We become enslaved to this wild goose chase, a perpetual search for the Next Big Thing—much like the booms and busts of our bubble economy. Well aware of its own ephemeral relevance, a fading Facebook reaches for its wallet—and the cycle continues.

Lanier, however, has a better solution, one that breaks the cycle. It’s not that we have to stop Liking, he argues; it’s that we have to stop copying—and move toward a system of “two-way” links:

Well, copying is a strange idea if you think about it from first principles. And, indeed, the first concept of digital networking, dating back to Ted Nelson’s work in the ’60s, didn’t include copying. And the reason why is if you have a network, the original’s right there, so why would you copy it? I mean, you know, it’s really that simple. In the book I tell a story about when I was a kid, visiting Xerox PARC the first time in the ’70s and asking people, you know, “Why the hell are you copying files here when they had created Ethernet?”

And it was sort of strange because by rights it should have made—I mean copying was only necessary when computers weren’t connected because what you’d do is you’d copy the file to a disk or card or tape or whatever it had been in the old days and move it to another computer and reload it. So there was, like, a practical reason for it. But once you had computers connected, why on earth would you still copy files?

And the answer was really interesting. They said, “You know, we’re sponsored by a copying machine company, Xerox, and so we simply cannot say that even in the abstract copying will become obsolete.” So, in a sense, it was an anachronism used to please a sponsor.

But the problem with copying is—well, there are multiple problems. One is that it makes information less valuable because it loses the context. So if you don’t know what—like information is only information in context. So there’s a way in which copying intrinsically degrades the quality of information.

But economically, the problem is very simply that copying severs the link to where the information came from, so it creates this illusion that the information just came from the sky or from angels or sirens or some imaginary place. And that creates this economic falsehood that people didn’t really create.

Any time you have a no-copy network, there’s bidirectional links to the network, obviously, because you need to know where the thing was—you know, in order to have a local cache and a single logical copy, there has to be a back link as a matter of course.

So we severed those in protocols like HTML, where you can just copy things, and there’s a one-way link and there’s no way to really know who’s watching who. So what does that mean? It means that companies like Google had to come along to scrape the entire global network constantly to reconstitute the back links to try to contextualize things so they could do things like sort them to give search results that are meaningful.

Most of our online activity today is defined by one-way links—such as when we send an email, download an mp3, or share a photo—and it’s the reason why we have unwieldy spam and rampant piracy, why the Washington Post can’t make ends meet without a sugar daddy, and why the NSA is predisposed to unlimited spying.

Contrasting this is the world of two-way links, where information, no longer deemed “free,” would suddenly have a clear market value. Such an architecture, says Lanier, “creates a higher-quality network, and it creates a possibility of an economy in an information society”—one where every connection we make would have a tiny cost associated with it. Spam and piracy wouldn’t be as financially feasible. The Post could conveniently charge its readers, who would readily pay for credibility and quality, much as they once did, throwing a quarter to the paperboy on the way to work without a second thought. And notably, the NSA would be kept in check:

We don’t give the police—we don’t issue the police an arbitrary number of free batons and police cars and guns; they have to pay for them out of a budget. And that’s a critical idea that’s been a part of all democracies, and the reason why is the citizenry isn’t a citizenry unless it controls the power of the purse. And, you know, if we say that the government doesn’t have to pay for information, that’s the same as giving them a license for infinite spying, which eventually means infinite power as technology becomes more advanced. It’s an absolutely unviable alternative.

So, yes, they must pay. And the reason that’ll be enforced is because lawyers and accountants will be on their ass. And just to answer some obvious things, yeah, if they have a specific criminal investigation, they don’t have to tell the people in advance that they’re getting paid, because that would reveal it. Yeah, that would be under court order; that should be an exception, as it always has to be in a democracy. They will not be able to do omni spying anymore. They won’t be able to spy in advance without people knowing they’re being spied on, because the people will get money, and that’s proper. It is actually a totally reasonable solution.

And so you can’t have democracy in a highly evolved information society if information is free. You just can’t. I mean, because you’ll be giving the government an infinite spying license. And it might sound like an odd idea, but I hope once you roll it over in your brain, you’ll start to see that it’s just a very simple and sensible idea.

The idea of two-way links, then, in Lanier’s eyes, is a means to preserve democracy, both economically and politically—and in turn, empower the middle class, tilting the balance of power from the network’s operators back in favor of the system’s users. Wouldn’t it be neat if—beyond taxation, regulation, and wealth redistribution—there was a technological solution to societal inequality?
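
To make the contrast concrete, here is a minimal sketch in Python of what a provenance-carrying, two-way link might look like next to today’s bare URL. It is purely illustrative and not drawn from Lanier or Nelson; the names (TwoWayLink, Node, reference, micro_fee) are invented for this example.

```python
# A toy model (invented for this post, not Lanier's or Ted Nelson's design)
# contrasting today's one-way link with a provenance-carrying two-way link.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TwoWayLink:
    source: str        # the page doing the referencing
    target: str        # where the single logical copy of the original lives
    author: str        # provenance: who created the original
    micro_fee: float   # hypothetical per-access payment, in cents

@dataclass
class Node:
    url: str
    author: str
    back_links: List[TwoWayLink] = field(default_factory=list)

def reference(source: Node, target: Node, fee_cents: float = 0.01) -> TwoWayLink:
    """Link to the original without copying it, and register a back-link
    so the original always knows who points at it and whom to pay."""
    link = TwoWayLink(source.url, target.url, target.author, fee_cents)
    target.back_links.append(link)  # the back-link Lanier says HTML discarded
    return link

blog = Node("myblog.example/post", "alice")
article = Node("news.example/story", "bob")
reference(blog, article)

# With plain one-way URLs, the target never learns who references it;
# reconstructing this list is exactly what Google's crawlers do for the whole web.
```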

Which is all fine and dandy, you might say, but how do we get there? Believe it or not, the answer lies in Bitcoin. As my colleague, Phil Rapoport, cannily pointed out, the concept of no-copying and two-way links sounds an awful lot like the double-spend problem, which Satoshi Nakamoto elegantly solved with cryptography and decentralization.

If you think about it, the central weakness of one-way links, effortless and untraceable copying, is precisely what Nakamoto addresses. How can we ensure that our transaction is unique? How do we know that someone hasn’t made a copy? How do we verify that the money is good? At its core, Bitcoin is a way to give information value in a digital world. By doing so, it effectively creates a model for two-way links. And without the need for a central authority, that value is distributed among the network’s participants.
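
As a rough illustration of that idea, and nothing like Bitcoin’s actual machinery of blocks, mining, digital signatures, and peer-to-peer consensus, here is a toy ledger that simply refuses to let the same digital coin be spent twice; every name in it is invented for this sketch.

```python
# A toy ledger illustrating double-spend prevention. This is a drastic
# simplification invented for this post, not Bitcoin's actual protocol.
import hashlib

class Ledger:
    def __init__(self):
        self.spent = set()   # coins that have already been used
        self.history = []    # append-only list of (record, running hash)

    def transfer(self, coin_id: str, sender: str, recipient: str) -> bool:
        """Accept a transfer only if this coin has never been spent before."""
        if coin_id in self.spent:
            return False     # a "copy" of already-spent money is rejected
        self.spent.add(coin_id)
        record = f"{coin_id}:{sender}->{recipient}"
        prev = self.history[-1][1] if self.history else ""
        # Chain each entry to the last so history can't be quietly rewritten.
        digest = hashlib.sha256((prev + record).encode()).hexdigest()
        self.history.append((record, digest))
        return True

ledger = Ledger()
print(ledger.transfer("coin-42", "alice", "bob"))    # True: first spend is accepted
print(ledger.transfer("coin-42", "alice", "carol"))  # False: the duplicate is rejected
```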

Lanier’s utopia, then, could conceivably be built upon the technology that drives the cryptocurrency movement—a world where our digital connections with each other are, by definition, socially and economically meaningful; where businesses are incentivized to serve their customers, and networks are owned by the people who power them; and where privacy isn’t just a pipe dream and we control our personal information.

Having long been conditioned to expect “free,” some of us might find the initial transition jarring. We might have to spend a tiny fraction of a penny to send an email or watch the latest viral video, and maybe a few cents more to view the front page of the New York Times. These transactions would occur seamlessly in real time through our browser, which would have a built-in wallet. In practice, we would go about business as usual, except with the knowledge that our online activity came with a cost—perhaps no more, in aggregate, than what we already pay in the form of all-inclusive subscriptions, intrusive marketing, and compromised privacy. But those costs would be explicit and transparent, and we would pay only for exactly what we consume.
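
To picture how frictionless that could feel, here is a hypothetical sketch of a browser wallet debiting tiny per-request fees. Everything in it, from the class names to the prices, is made up for illustration; no such API exists in any browser today.

```python
# A hypothetical sketch of a wallet-equipped browser metering per-request
# micropayments. Classes, prices, and endpoints are invented for illustration.
class BrowserWallet:
    def __init__(self, balance_cents: float):
        self.balance_cents = balance_cents
        self.ledger = []  # what we paid, to whom, and for what

    def pay(self, payee: str, amount_cents: float, item: str) -> bool:
        if amount_cents > self.balance_cents:
            return False  # decline rather than overdraw
        self.balance_cents -= amount_cents
        self.ledger.append((payee, amount_cents, item))
        return True

def fetch(wallet: BrowserWallet, url: str, price_cents: float, publisher: str) -> str:
    """Serve the resource only if the micro-fee clears."""
    if wallet.pay(publisher, price_cents, url):
        return f"200 OK: {url} ({price_cents} cents to {publisher})"
    return f"402 Payment Required: {url}"

wallet = BrowserWallet(balance_cents=50.0)
print(fetch(wallet, "nytimes.com/frontpage", 2.0, "NYT"))     # a few cents for the front page
print(fetch(wallet, "mail.example/send", 0.01, "mail host"))  # a fraction of a penny per email
```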

In the long run, the benefits would far outweigh the learning curve. No longer would we be subject to the whims of network dictators. Content creators would finally get paid directly for their work without signing their lives away to a record label or selling out to advertisers (unless, of course, that’s their preference). And as sharers of information, we would all be artists in a way—and compensated accordingly—where an unexpected burst of wit or an off-hand tweet could mean that you’ll be covering the bar tab at happy hour.

Above all, it’s about taking back control of our destiny. Because when people are properly and fairly acknowledged for their contributions, as Lanier suggests, the quality and quantity of what they produce will invariably go up. When people believe they have a voice, they will likely make themselves heard. And when people know they can make a difference, they are moved to action. Isn’t that what democracy is all about?

Related:

The Greatest Social Network of All Time
Why Payments Matter

Follow Alec on Twitter: @sfnuop