Over the last few years, the idea has taken hold that “big data” is driving far-reaching, and typically positive, change. “How Big Data Changes the Banking Industry,” “Big Data Is Transforming Medicine,” and “How Big Data Can Improve Manufacturing,” are characteristic headlines. “Big data” has become ubiquitous, powering everything from models of climate change to the advertisements sent to Web searchers.

Even in a society in which acronyms and sound-bites pass for knowledge, this familiar formulation stands out as vacuous. It offers us a reified name rather than an explanation of what the name means. What is the phenomenon denoted by “big data”? Why and when did it emerge? How is “it” changing things? Which things, in particular, are being changed – as opposed to merely being hyped? And last, but hardly least, are these changes desirable and, if so, for whom?

Big data is usually defined as data sets so large and complex – both structured and unstructured – that they challenge existing forms of statistical analysis. For instance, Google alone processes more than 40,000 search queries every second, which equates to 3.5 billion per day and 1.2 trillion searches per year;[1] every minute, Facebook users post 31.25 million messages and view 2.77 million videos, and 347,222 tweets are generated; by the year 2020, 1.8 megabytes of new information is expected to be created every second for every person on the planet.[2]

The compounding production of data – “datafication,” in one account [3] – is tied to proliferating arrays of digital sensors and probes, embedded in diverse arcs of practice. New means of storing, processing, and analyzing these data are the needed complement.

A quick etymological search finds that the term “big data” began to circulate during the years just before and after 2000.[4] Its deployments then quickened; but this seemingly sharp-edged transition into what Andrejevic and Burdon call a “sensor society”[5] actually possesses a deeper-rooted history.

The uses of statistics in prediction and control have long been entrenched, and have increased rapidly throughout the last century[6] – as is pointed out by a working group on “Historicizing Big Data” established at the Max Planck Institute for the History of Science. The group emphasizes that big data must not be stripped out of “a Cold War political economy,” in that “many of the precursors to 21st century data sciences began as national security or military projects in the Big Science era of the 1950s and 1960s.”[7]


After 11 years of battle between Google — now a unit of the just-named Alphabet conglomerate — and the Authors Guild,[1] a professional organization of published writers, the Second U.S. Circuit Court of Appeals has ruled that Google’s book project is protected under fair use [paper trail]. Responding to this judgment, the American Library Association (ALA) announced that “Libraries laud appeals court affirmation that mass book digitization by Google is ‘fair use’.”[2] Larry Alford, president of the Association of Research Libraries (ARL), concurred, stating that “ARL applauds this victory for fair use regarding the Google Books project, which involved partnerships with many of our libraries.”[3]

Is this outcome truly a cause for celebration? For whom is this a victory? What this verdict actually signifies may be understood only by clarifying the nature of the conflict; and analysis needs to go beyond “access,” as if “access” constitutes an absolute virtue – the be-all and end-all.

Google’s search engine has long been the premier gateway to the Web, granting Google a dominant market position online (more than 60 percent of all computer Web searches). This has enabled Google to seize a disproportionate share of Web advertising. However, Google’s dominance is not guaranteed. Competition to control the Internet has intensified between Google and other Web giants — Amazon, Facebook, and Apple. From Google’s perspective the question of how to maintain its market position could hardly be more vital.

Its superb search technology — its secret algorithm — isn’t enough, in itself, to ensure supremacy.  An additional element is also required. Only by protecting and, if possible, expanding its user-base, to feed streams of data into the company’s means of production — its search algorithm — will Google’s dominance in Web advertising be protected. Expanding its user-base in turn may be accomplished only by introducing, or taking over, services and content with which to draw additional users, and with which to target ads at what the industry calls “most-needed audiences.” (more…)

China has launched its China International Payment System (CIPS), which is intended eventually to provide cross-border transactions denominated in its own currency, the Yuan or Renminbi.[1] Why couldn’t China, and international Renminbi users, simply rely on the already existing and well-established telecommunication system — the Society for Worldwide Interbank Financial Telecommunications (SWIFT) — which has functioned as a global network for bank transactions for over forty years?

To understand China’s CIPS initiative requires a closer look at how specialized financial telecommunications are embedded in global power structures.

Corporate trade and investment generate enormous volumes of financial data to accompany transactions of many kinds. As U.S. businesses moved into transnational markets throughout the postwar decades, they turned to big banks to help them exchange payment data across national jurisdictions. Some leading U.S. banks addressed this opportunity by developing proprietary computer systems and linking them to their corporate customers. A more encompassing option was established in the early 1970s through SWIFT, a global system for sending and receiving instructions about payments and other financial transactions. No actual money transits the network: the money itself is sent via separate electronic funds transfer networks. By standardizing the format for such messages and winning over a growing fraction of international financial institutions, however, SWIFT surpassed individual banks’ proprietary systems.[2] Today, nearly 11,000 financial institutions and corporations located in over 200 countries use SWIFT to exchange millions of messages each day. SWIFT has grown into an essential infrastructure, not only of international finance but also of world trade and investment.

One might think that such a mechanism would be above controversy, in that it provides only a technical means for conducting cross-border financial exchanges; but one would be mistaken. Politics has impinged continually on the network. This reflects its unbalanced control. (more…)

An earlier version of this article was presented to the Technology and Democracy project at CRASSH, Cambridge University, 29 September 2015.  A version without full footnotes was published at openDemocracy.

The International Monetary Fund warned last week that, nearly eight years after the global financial crisis, still-sluggish economic growth might combine with a new financial shock to tip the world back into recession.[1] Meanwhile, however, investment in information and communications technologies – long venerated as a growth dynamo – remains high. How may we resolve this apparent contradiction?

The concept of “digital capitalism” [2] may help.  Great changes are ramifying around digital networks; at the same time, abiding political economic processes carry forward.  I developed the idea of digital capitalism for this reason: a phase-change is occurring within capitalism.

Before turning to consider the problems that confront digital capitalism today, I will trace four features of its industrial profile and give an account of its political development.

The Structural Profile of Digital Capitalism

Unevenly, across many decades, the industrial revolution gripped not only manufacturing but also agriculture, trade and, indeed, information.[3]  Aileen Fyfe captures this change in characterizing 19th-century British publishing as “steam-powered knowledge.”[4]  Similarly, the applications of digital networks today stretch beyond a discrete information sector.  Digital systems and services have been bolted to all parts of the political economy.

This carries ramifications. One is that it is shortsighted to equate the digital merely with familiar consumer markets, as for search engines, smart phones and social networks. Consumer expenditures on digital goods and services account for an estimated 30 percent of the worldwide total.[5] Nor is it sufficient to grasp the digital by focusing only on vendors – whether IBM and AT&T in the 1970s; or Microsoft and Intel a decade or two later; or Google, Amazon, Facebook and Apple today. Digital capitalism has been constructed, and reconstructed, not only by suppliers but also by corporate users on the demand side: the likes of Wal-Mart and GM and Exxon-Mobil and Monsanto and JP Morgan Chase. Later, we’ll see that these business users’ importance has been political as well as economic.

A second point pertains to the chronology and character of capital investment in what the Census Bureau now enumerates under the category of “Information Processing Equipment and Software.” U.S. capital investment in networks of course began to escalate long ago – well before the postwar period favored by most conceptions of the information society, and well before the arrival of microelectronics.[6] Private line telegraphs – lines entirely dedicated to traffic sent and received by a single business user, which leased them from a carrier on a monthly basis – saw widening use between the 1880s and the 1910s, especially by large banks, meat packers, and the Standard Oil Company. So-called “industrial radio services” thereupon proceeded to apply new wireless technology to the operations of railroads, power transmission systems, department store chains, airplane operating companies and newspapers. By 1947, for example, U.S. oil companies had built 500 radio stations and obtained licenses to 49 radio frequencies in their search for oil deposits; by the 1950s, they were using microwave radio to send data produced by their new offshore drilling rigs to geologists and engineers at their headquarters.[7] The political scientist Murray Edelman and the economist Dallas Smythe sought to place these neglected network services on the academic agenda during the 1950s.[8]

Capital investment in information processing equipment and software, however, accelerated sharply after the recession of the early 1970s.  Expenditures on computing and accounting equipment, office machines, communication equipment, and instruments soon became the largest single category of capital investment in equipment – and a key driver of economic growth in their own right.[9] Data carriage took several forms:  by adding specialized equipment the national voice telecommunications infrastructure could be adapted to enable limited data transfer; and big organizational users also could lease dedicated private circuits and contract for commercial packet switched service. Data networking was fractured, however, as companies and even individual corporate departments became locked in by competing vendors’ proprietary products.  With the growth of the Internet after the mid-1980s, this fragmentation diminished somewhat, and businesses again increased their outlays.  James Cortada cites an internal IBM estimate that ICT investment escalated from 38 to 55 percent of all equipment spending between 1990 and 2001.[10]

Sectoral variation was, and remains, considerable. Forestry, fishing and agricultural services account for a small portion of the 2013 U.S. total ($153 million). Manufacturing claims a much larger share ($37 billion). Professional, scientific and technical services also are very considerable ($30 billion). The second-largest category is finance and insurance ($60 billion). The largest spender is the information industry itself, inclusive of everything from publishing to computer services: its collective outlay has spiraled to $86 billion.[11]

Digital networks did not free capitalism from its crisis tendencies, but became embedded in them: this is my third point.

By the early 2000s, financial intermediaries had been spearheading the development of digital systems and services for decades. The largest U.S. financial companies – such as JP Morgan Chase – spent billions of dollars each year on ICTs, and boasted IT and software staffs numbering in the thousands. This titanic build-up of networked finance occurred as banks pushed debt on every institution and packaged it in a staggering variety of instruments.[12] Banks institutionalized fee-based products, own-account trades, and off-the-books investment vehicles, even as they continually increased their own leverage – and, with it, risk. This opaque, complex, and rickety system crashed when some U.S. residential mortgage holders ceased to make payments. Leverage – debt – was the fuel that stoked this fire, and networked finance had spread debt literally everywhere. In the aftermath of the crisis, menacingly, networked finance and the volatility that accompanies it have carried forward.

That networks were bearers of crisis again became apparent as the crisis cascaded into the wider economy.  Manufacturers had deployed digital networks to automate and to outsource production tasks[13]; and to disperse their operations in order to improve market access and/or to cheapen the cost of labor.[14]  This great buildup of manufacturing networks provided no guarantee of corporate profitability.  GM spent tens of billions of dollars on ICTs between 1970 and 2007, but after the crisis erupted the biggest U.S. automaker still had to be rescued by the U.S. Government. The underlying problem was that manufacturers’ networks had been folded into another sweeping crisis tendency as international competition engendered overcapacity – that is, more plants and factories than were needed to supply the global market.

This reorganization of manufacturing production contributed to what David Harvey calls “wage repression”[15] throughout working-class communities in the wealthy countries, as automakers responded by cutting high-wage union jobs. Reciprocal hits to demand were predictable. The average age of vehicles on the road in the United States hit an all-time high this year: 11.5 years.  This has been attributed to quality improvements; but for many consumers, it seems certain that hard times were the decisive factor: by 2015 both new and used car owners were holding onto their vehicles for more than two full years beyond what had been the average in 2006.[16]

And again, the underlying problem of overcapacity has not abated.  The big carmakers stampeded into China, and have come to depend on Chinese buyers for around one-third of their global sales.  As China’s economy has slowed during the year just past, however, they have now also turned to reduce capacity at their dozens of joint-venture Chinese plants.[17]

A fourth and final point: For decades – again, beginning well before the Internet’s take-up – political leaders recognized that the U.S. information industry was a spectacular pole of growth within the capitalist world economy.  Heads of state made a point of paying homage to leading U.S. tech executives.  Notably, China’s Deng Xiaoping in 1979, Jiang Zemin in 1993, and Hu Jintao in 2006 each made high-profile visits to U.S. tech companies.  France’s Jacques Chirac led a presidential entourage to Silicon Valley in 1996 and, nearly twenty years on, Brazil’s Dilma Rousseff and India’s Narendra Modi followed suit.  How has the U.S. worked its digital magic?  Can it be replicated?  Failing that, as the McKinsey consultancy puts it, “How should you tap into Silicon Valley?”[18] Because it is patent that these questions command the attention of governments, I turn next to consider digital capitalism’s political dimension.