This summer, I bought my wife a vintage watch—a model called the Big Crown Pointer Date, made by the Swiss company Oris. The watch was manufactured in 1995, and is small, elegant, and mechanical, which means that it doesn’t contain a battery; instead, you wind it, and it tells the time using an ingenious system of gears. The Pointer Date takes its name from what watch people call a “complication”—an added feature beyond timekeeping. It has a fourth hand, which reaches out to the edge of its face, where the numbers one to thirty-one are arranged. At midnight, the hand ticks forward, making it possible to see one’s progress through the month as a movement around a circle.
Even though the watch was assembled by hand nearly thirty years ago, it still works perfectly. But “perfectly” has a particular meaning for a mechanical watch. Not every month has thirty-one days, and so the date must be periodically adjusted. Moreover, its timekeeping drifts by a few seconds each day; as a result, my wife must occasionally synchronize it to the time on her phone. Whether all this is annoying or charming depends on your point of view. “Mechanical timekeeping devices were among our first complex machines,” the science fiction novelist William Gibson has observed, and among “the first to be miniaturized”; when wristwatches were new, it seemed remarkable that each was its own little gear-based world. Today, mechanical watches have come to feel “archaic in the singularity of their function, their lack of connectivity.” Yet the fact that they tell time in isolation, Gibson writes, also makes them “heroic.” By contrast, a smartphone’s primary purpose isn’t telling time but being “a node in a distributed network.”
The distance between a mechanical watch and a modern smartphone seems to embody the divide between the pre-digital and digital worlds. We imagine that people used to live among eccentric, fiddly, physical gizmos, whereas now we navigate a network of infallible devices animated by code. But the digital age is often more fiddly than it seems. In 2022, Nate Hopper wrote for this magazine about the complicated system that keeps all of the Internet’s digital clocks synchronized. At its center are atomic clocks, which measure the passage of time by tracking the quantum transitions of electrons. Atomic clocks are unimaginably precise. Unfortunately, Hopper writes, the Earth isn’t: its rotational speed “is affected by a variety of atmospheric and geologic factors, including the behavior of the planet’s inner layers; the reshaping of its crust, such as through the growth of mountains or bodies of magma; and the friction of the ocean’s tides against the seafloor.” As a result, each year, the planet spins a little more slowly—and this “risks opening a rift between the time as told by atoms and the time as told by astronomy.” Like my wife, the world’s timekeepers have been forced to adjust their clocks, adding twenty-seven leap seconds since 1972.
In 1990, Gibson and Bruce Sterling wrote “The Difference Engine,” an alternative-history novel, set in the nineteenth century, in which computers are built about a hundred years earlier than in reality, using quirky systems including gears, wheels, and levers. The novel helped popularize the genre of steampunk, in which nineteenth-century aesthetics are merged with futuristic technology. Arguably, Jules Verne and H. G. Wells wrote steampunk avant la lettre, simply by crafting science fiction in the late nineteenth century; the genre’s aesthetic markers—valves, pipes, airships, monocles—have since informed the imaginative worlds of films and television shows like “Snowpiercer” and “Silo,” among much else. Steampunk mounts an imaginative protest against the apparent seamlessness of the high-tech world; it’s an antidote to the ethos of Jony Ive. It’s also fun because it’s counterfactual. It’s fascinating to imagine, implausibly, how ravishing technology could be constructed out of yesterday’s parts.
But what if the world really is constructed that way? In that case, it could be a mistake to put too much faith in digital perfection. We might need to fiddle with our technology more than we think. And we might also want to see it differently—less as an emanation from the future, and more like an inheritance from the past, with all the problems that entails.
On a Wednesday morning last January, every departing plane in America was grounded for more than an hour, in an outage that cost airlines and travellers millions and ultimately delayed more than nine thousand flights. The Federal Aviation Administration issued the “ground stop”—the first nationwide order since September 11, 2001—because of the sudden failure of a system called Notice to Air Missions, or NOTAM, meant to alert pilots to unusual or hazardous conditions, such as storms or closed runways. An investigation later found that the system had malfunctioned because a contractor, while working to synchronize two databases, had accidentally deleted files. But the incident also brought to light the strangeness of NOTAM itself, which was created in 1920. Its notifications, which follow a format established in 1924, are written in ALL CAPS and filled with cryptic abbreviations. (A closed runway at O’Hare, for example, is rendered as “!ORD 06/001 ORD RWY 04L/22R CLSD 2106231700-2106232300.”) Airports and agencies issue more than a million NOTAMs each year, and so pilots routinely receive a flood of inscrutable notifications before each flight.
This ancient system, invented the same year as the Band-Aid, hides within the larger edifice of global aviation, which is often taken to symbolize the interconnectedness of the modern world. It’s not atypical. Much of the New York City subway depends on mechanical switching systems installed in the nineteen-thirties; many banks rely on software written in COBOL, a basically obsolete programming language. Ultimately, it’s no surprise that old systems contain old parts. But new systems can be old-school, too, often in unseen ways. Last year, in a story for New York magazine and The Verge, Josh Dzieza described how a workforce of “annotators” employed in Kenya, Nepal, and the United States was tasked with labelling images, video clips, tweets, and the like, so that A.I. systems could learn to understand them. “There are people classifying the emotional content of TikTok videos,” Dzieza wrote, or “checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt.” Another report, by Billy Perrigo, at Time, explained how, in making ChatGPT, OpenAI had relied on an army of low-paid Kenyan workers charged with labelling “toxic” content, including accounts of “child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest”; the company then used the labelled data to teach its models what kinds of material to avoid. Today’s advanced A.I. systems can figure out a lot on their own. But they can do this only because they’ve analyzed, in detail, the judgments and actions of real human beings.
In a particularly steampunk episode of “Doctor Who,” the show’s heroes visit a futuristic spaceship, and discover that its engine is actually just a giant, whale-like animal, which is continually being tortured; like a horse being spurred, it responds to pain by accelerating. Most of the ship’s passengers assume that it moves by means of some high-tech propulsion; the few people who get curious are discouraged through a combination of social pressure, outright punishment, and elaborate technological distraction. Steampunk stories often involve Victorian patterns of influence and power—they might include villainous barons, say, who enforce ignorance by insisting upon decorum, or evil bosses who promote a supposedly better future while hiding the churning machinery or bedraggled workers that make it possible. These Dickensian tropes evoke a prior age, but they’re also provocations about the present day. What, or who, is working in the engine room or cockpit? Are they living in the future their superiors are selling to the rest of us? How much of what’s being presented as new is really new, and how much is just the old, repackaged?
Steampunk machines are often extraordinary, improbable, and romantic—they represent the triumph of imagination over reality. At the same time, they’re a little sad, because their parts are always wearing out; the price of continued functionality is vigilance. In 2005, the science-fiction writer Alastair Reynolds published a novel called “Pushing Ice,” in which a camouflaged, moon-size alien spaceship is found lurking in our solar system. A team of researchers decides to land on it and study it, only to find themselves trapped on its surface as it suddenly accelerates into deep space. Stranded, they must find a way of surviving on what amounts to an alien world. They discover a giant building the size of a mountain, which for some reason is rotating at an almost imperceptible speed, and harness its energy by constructing a miles-long chain of gears, each smaller than the last, which leads from the side of the building to a generator. Each day, someone must walk the length of the chain, inspecting each gear, making sure that they all mesh together. If even one of the gears wears down, or becomes misaligned, the scientists will be plunged into icy darkness and die.
“There are 7.8 billion people alive on this planet—a stupendous social and technological achievement that’s unnatural and unstable,” the novelist Kim Stanley Robinson wrote, in a 2020 essay for this magazine. “When disaster strikes, we grasp the complexity of our civilization—we feel the reality, which is that the whole system is a technical improvisation that science keeps from crashing down.” The improvisation is, ultimately, human: it’s individual people who tend to the machines that sustain us (by fixing them when their inherent flaws wreak havoc, or laboriously mining the materials that make them work). People are vulnerable and fallible. And machines age and decay like everything else.
This steampunkish vision of the technological world as an aging contraption hurtling into the future might be especially vital now, when the newest technologies often encourage us to turn our attention away from their inner workings. Last week, the A.I. company Anthropic released a series of videos in which Claude, its A.I. model, takes over a person’s computer and uses it on their behalf. In one video, an Anthropic engineer asks Claude to plan a sunrise hike with a view of the Golden Gate Bridge; following her instructions, it Googles around, navigating various windows on her screen, and then enters a reminder into her calendar telling her exactly when she should leave her apartment for the optimal sunrise view. (“Bring warm layers as it can be chilly in the morning!” the A.I. suggests.) In another video, Claude fills out an annoying form by scrolling through a spreadsheet and searching a company database. When it finds what it needs, the A.I. system “autonomously starts transferring the information across without me having to do anything, and goes through the steps and fills out all the information needed, and then submits the form,” a researcher explains. “This example is representative of a lot of drudge work that people have to do.”
It’s easy to imagine a future in which we take our hands off the wheel. Just as it’s possible, today, to order something from Amazon and have it arrive instantly at your house from who knows where, so we may soon be able to ask an A.I. “agent” to do something for us, caring mainly that it completes our task, and not inquiring about how. (One YouTuber test-drove Claude’s new feature by asking it to try to “make $1000,” and to reply to a Reddit comment on his behalf; the A.I. made a decent start on the first request and, with a little help, succeeded at the second.) The risks of adopting such a relationship to technology are self-evident. But it’s not so hard to adjust our imaginations. After clicking Buy or Submit, we can take a moment to picture a labyrinthine, sometimes obscure process in action—people and machines working together, perhaps under unpleasant circumstances, to obey our command. After making a request of an A.I., we might imagine a forest of gears turning, intricately enmeshed, and wonder if they’re the right gears, and if they fit as they should, and if they’ve been inspected. We can remember that even atomic clocks need adjustment. It’s all less perfect than it seems. ♦