Now that history has returned with the vengeance of the long-dismissed, few analyses of our present moment are complete without a ritual mockery of Fukuyama’s seemingly naive assumptions. The also-rans of the 1990s, Samuel P. Huntington’s The Clash of Civilizations thesis and Robert D. Kaplan’s The Coming Anarchy, which predicted a paradigm of growing disorder, tribalism, and the breakdown of state authority, now seem more immediately prescient than Fukuyama’s offering. Yet nearly thirty years later, reading what Fukuyama actually wrote, as opposed to the dismissive précis of his ideas, reveals that he was right all along. Where Huntington and Kaplan saw the threat to the Western liberal order coming from outside its cultural borders, Fukuyama discerned the weak points from within, predicting, with startling accuracy, our current moment.
Consider this paragraph from the book:
Experience suggests that if men cannot struggle on behalf of a just cause because that just cause was victorious in an earlier generation, then they will struggle against the just cause. They will struggle for the sake of struggle. They will struggle, in other words, out of a certain boredom: for they cannot imagine living in a world without struggle. And if the greater part of the world in which they live is characterized by peaceful and prosperous liberal democracy, then they will struggle against that peace and prosperity, and against democracy.
It was hard not to think of that paragraph as scenes emerged from last week’s invasion of the U.S. Capitol in an attempt to overturn a democratic election, particularly those members of the mob LARPing a special forces military operation, and in the following days, when it became clear just how many members of the mob were otherwise well-off members of society. Was the belief that President Trump won the election sufficient motivation to attack the Capitol, or, underneath it all, was there something more?
I won’t pretend to know the answer to that question — this is a blog about technology and strategy, not philosophy and history. The events that followed Wednesday, though, bring to mind Fukuyama’s warning that history may be restarted by those unsatisfied with its end.
The End of the Beginning
One year ago I wrote The End of the Beginning, which posited that the history of information technology was not, as popularly believed, one of alternating epochs disrupted by new paradigms, but rather a continuous shift along two parallel axes:
The place we compute shifted from a central location to anywhere
The time in which we compute shifted from batch processes to continuous computing
The implication of viewing the shift from mainframe computing, to personal computing on a network, to mobile connections to the cloud, as manifestations of a single trend was just as counterintuitive:
What is notable is that the current environment appears to be the logical endpoint of all of these changes: from batch-processing to continuous computing, from a terminal in a different room to a phone in your pocket, from a tape drive to data centers all over the globe. In this view the personal computer/on-premises server era was simply a stepping stone between two ends of a clearly defined range.
Another way to think about the current state of affairs is that it is the inevitable economic endpoint of the technological underpinnings of the Internet.
Internet 1.0: Technology
The vast majority of the technologies undergirding the Internet were in fact developed decades ago. TCP/IP, for example, which undergirds the World Wide Web, email, and a whole host of familiar technologies, was first laid out in a paper in 1974; DNS, which translates domain names to numerical IP addresses, was introduced in 1983; HTTP, the application layer for the Web, was introduced in 1991. The year these technologies came together from an end user perspective, though, was 1993, with the introduction of Mosaic, a graphical web browser developed by Marc Andreessen at the University of Illinois.
Over the next few years websites proliferated rapidly, as did dreams about what this new technology might make possible. This mania led to the dot-com bubble, which, critically, fueled massive investments in telecoms infrastructure. Yes, companies like WorldCom, NorthPoint, and Global Crossing that made these investments went bankrupt, but the foundation had been laid for widespread high-speed connectivity.
Internet 2.0: Economics
Google was founded in 1998, in the middle of the dot-com bubble, but it was the company’s IPO in 2004 that, to my mind, marked the beginning of Internet 2.0. This period of the Internet was about the economics of zero friction; specifically, unlike the assumptions that undergird Internet 1.0, it turned out that the Internet does not disperse economic power but in fact centralizes it. This is what undergirds Aggregation Theory: when services compete without the constraints of geography or marginal costs, dominance is achieved by controlling demand, not supply, and winners take most.
Aggregators like Google and Facebook weren’t the only winners, though; the smartphone market was so large that it could sustain a duopoly of platforms with multi-sided networks of developers, users, and OEMs (in the case of Android; Apple was both OEM and platform provider for iOS). Meanwhile, public cloud providers could provide back-end servers for companies of all types, with scale economics that not only lowered costs and increased flexibility, but also justified far greater investments in R&D, which were immediately deployable by said companies.
The network effects of iOS and Android are so strong, and the scale economics of Amazon, Microsoft, and Google so overwhelming, that I concluded in The End of the Beginning:
The implication of this view should at this point be obvious, even if it feels a tad bit heretical: there may not be a significant paradigm shift on the horizon, nor the associated generational change that goes with it. And, to the extent there are evolutions, it really does seem like the incumbents have insurmountable advantages: the hyperscalers in the cloud are best placed to handle the torrent of data from the Internet of Things, while new I/O devices like augmented reality, wearables, or voice are natural extensions of the phone.
This, though, is where I am reminded of The End of History and the Last Man; Fukuyama writes in the final chapter:
If it is true that the historical process rests on the twin pillars of rational desire and rational recognition, and that modern liberal democracy is the political system that best satisfies the two in some kind of balance, then it would seem that the chief threat to democracy would be our own confusion about what is really at stake. For while modern societies have evolved toward democracy, modern thought has arrived at an impasse, unable to come to a consensus on what constitutes man and his specific dignity, and consequently unable to define the rights of man. This opens the way to a hyperintensified demand for the recognition of equal rights, on the one hand, and for the re-liberation of megalothymia on the other. This confusion in thought can occur despite the fact that history is being driven in a coherent direction by rational desire and rational recognition, and despite the fact that liberal democracy in reality constitutes the best possible solution to the human problem.
Megalothymia is “the desire to be recognized as superior to other people”, and “can be manifest both in the tyrant who invades and enslaves a neighboring people so that they will recognize his authority, as well as in the concert pianist who wants to be recognized as the foremost interpreter of Beethoven”; successful liberal democracies channel this desire into fields like entrepreneurship or competition, including electoral politics.
In the case of the Internet, we are at the logical endpoint of technological development; here, though, the impasse is not the nature of man, but the question of sovereignty, and the potential re-liberation of megalothymia is the likely refusal by people, companies, and countries around the world to be lorded over by a handful of American giants.
Big Tech’s Power
Last week, in response to the violence at the Capitol and the fact that it was incited by Trump, first Facebook and then Twitter de-platformed the President; a day later Apple and Google kicked Parler, another social network where Trump supporters congregated and in part planned Wednesday’s action, out of their app stores, and Amazon kicked it off its hosting service, effectively killing the service.
After years of defending Facebook and Twitter’s decisions to keep Trump on their services, I called for him to be kicked off last Thursday, and I explained yesterday why tech’s collective action in response to last Wednesday’s events was a uniquely American solution to a genuine crisis:
So Facebook and Twitter and Apple and Google and Amazon and all of the rest were wrong, right? Well, again, context matters, and again, the context here was an elected official encouraging his supporters to storm the Capitol to overturn an election result and his supporters doing so. What I believe happened this weekend was a uniquely American solution to the problem of Trump’s refusal to concede and attempts to incite violence: all of corporate America collectively decided that enough was enough, and did what Congress has been unable to do, effectively ending the Trump presidency. Parler, to be honest, was just as much a bystander casualty as it was a direct target. That the tech sector is the only one with the capabilities to actually make a difference is what makes the industry stand out.

I am not, to be clear, saying that this is some sort of ideal solution. As I noted last week, impeachment is the way this is supposed to go, and hopefully that still occurs. And, as I also noted last week, if this triggers a debate about the power of tech companies, all the better. This solution was, though, a pragmatic and ultimately effective one, even if the full costs will take years to materialize (again, more on the long-term repercussions soon).
Soon is today; this Article is not about the rightness and wrongness of these decisions — again, please see the two articles I just linked — but rather about the implications of tech companies taking the actions they did last weekend.
Start with Europe; from Bloomberg:
Germany and France attacked Twitter Inc. and Facebook Inc. after U.S. President Donald Trump was shut off from the social media platforms, in an extension of Europe’s battle with big tech. German Chancellor Angela Merkel objected to the decisions, saying on Monday that lawmakers should set the rules governing free speech and not private technology companies.

“The chancellor sees the complete closing down of the account of an elected president as problematic,” Steffen Seibert, her chief spokesman, said at a regular news conference in Berlin. Rights like the freedom of speech “can be interfered with, but by law and within the framework defined by the legislature — not according to a corporate decision.”

The German leader’s stance is echoed by the French government. Junior Minister for European Union Affairs Clement Beaune said he was “shocked” to see a private company make such an important decision. “This should be decided by citizens, not by a CEO,” he told Bloomberg TV on Monday. “There needs to be public regulation of big online platforms.”

Finance Minister Bruno Le Maire earlier said that the state should be responsible for regulations, rather than “the digital oligarchy,” and called big tech “one of the threats” to democracy.
Make no mistake, Europe is far more restrictive on speech than the U.S. is, including strict anti-Nazi laws in Germany, the right to be forgotten, and other prohibitions on broadly defined “harms”; the difference from the German and French perspective, though, is that those restrictions come from the government, not private companies.
This sentiment, as I noted yesterday, is completely foreign to Americans, who, whatever their differences on the degree to which online speech should be policed, are united in their belief that the legislature is the wrong place to start; the First Amendment isn’t just a law, but a culture. The implication of American tech companies serving the entire world, though, is that this American culture, so familiar to Americans yet anathema to most Europeans, is the only choice for the latter.
Politicians from India’s ruling party expressed similar reservations; from The Times of India:
BJP leaders expressed concern on Saturday over the permanent suspension of US President Donald Trump’s Twitter account by the social media giant, saying it sets a dangerous precedent and is a wake-up call for democracies about the threat from unregulated big tech companies…”If they can do this to the President of the US, they can do this to anyone. Sooner India reviews intermediaries’ regulations, better for our democracy,” BJP’s youth wing president Tejaswi Surya said in a tweet.
Tech companies would surely argue that the context of Trump’s removal was exceptional, but when it comes to sovereignty it is not clear why U.S. domestic political considerations are India’s concern, or any other country’s. The fact that the capability exists for their own leaders to be silenced by an unreachable and unaccountable executive in San Francisco is all that matters, and it is completely understandable to think that countries will find this status quo unacceptable.
Companies, meanwhile, will note the fate of Parler. Sure, few have any intention of dealing with user-generated content, but the truth is that here the shift has already started: most retailers, for example, have been moving away from AWS for years; this will be another reminder that when push comes to shove, the cloud providers will act in their own interests first.
Meanwhile, there remain the tens of millions of Americans who voted for Trump, and the (significantly) smaller number that were on Parler; sure, they may be (back) on Twitter or Facebook, but this episode will not soon be forgotten: Congress may not have made a law abridging the freedom of speech, but Mark Zuckerberg and Jack Dorsey did, and Apple, Google, and Amazon soon fell in line. That all of those companies will be viewed with a dramatically heightened sense of suspicion should hardly be a surprise.
Internet 3.0: Politics
This is why I suspect that Internet 2.0, despite its economic logic predicated on the technology undergirding the Internet, is not the end-state. When I called the current status quo The End of the Beginning, it turns out “The Beginning” I was referring to was History. The capitalization is intentional; Fukuyama wrote in the Introduction of The End of History and the Last Man:
What I suggested had come to an end was not the occurrence of events, even large and grave events, but History: that is, history understood as a single, coherent, evolutionary process, when taking into account the experience of all peoples in all times…Both Hegel and Marx believed that the evolution of human societies was not open-ended, but would end when mankind had achieved a form of society that satisfied its deepest and most fundamental longings. Both thinkers thus posited an “end of history”: for Hegel this was the liberal state, while for Marx it was a communist society. This did not mean that the natural cycle of birth, life, and death would end, that important events would no longer happen, or that newspapers reporting them would cease to be published. It meant, rather, that there would be no further progress in the development of underlying principles and institutions, because all of the really big questions had been settled.
It turns out that when it comes to Information Technology, very little is settled; after decades of developing the Internet and realizing its economic potential, the entire world is waking up to the reality that the Internet is not simply a new medium, but a new maker of reality. I wrote in The Internet and the Third Estate:
What makes the Internet different from the printing press? Usually when I have written about this topic I have focused on marginal costs: books and newspapers may have been a lot cheaper to produce than handwritten manuscripts, but they are still not-zero. What is published on the Internet, meanwhile, can reach anyone anywhere, drastically increasing supply and placing a premium on discovery; this shifted economic power from publications to Aggregators.

Just as important, though, particularly in terms of the impact on society, is the drastic reduction in fixed costs. Not only can existing publishers reach anyone, anyone can become a publisher. Moreover, they don’t even need a publication: social media gives everyone the means to broadcast to the entire world. Read again Zuckerberg’s description of the Fifth Estate:

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences.

It is difficult to overstate how much of an understatement that is. I just recounted how the printing press effectively overthrew the First Estate, leading to the establishment of nation-states and the creation and empowerment of a new nobility. The implication of overthrowing the Second Estate, via the empowerment of commoners, is almost too radical to imagine.
It is difficult to believe that the discussion of these implications will remain confined to posts on niche sites like Stratechery; the printing press transformed Europe from a continent of city-states loosely tied together by the Catholic Church into a continent of nation-states with their own state churches. To the extent the Internet is as meaningful a shift (and I think it is), the transformation that will follow has only just begun. And, after last week, the world is awake to the stakes; politics, not economics, will decide, and be decided by, the Internet.
The Return of Technology
Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols.
This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively and forked individually, gaining both the benefits of scale and the guarantee of sovereignty and self-determination.
This process will take years; I would expect governments in Europe in particular to initially try to build their own centralized alternatives. Those efforts, though, will founder for lack of R&D capabilities, and be outstripped by open alternatives that are perhaps not as full-featured and easy to use as Big Tech offerings, at least in the short to medium term, but that possess the killer feature of not having a San Francisco kill switch.