Possibly the most boring way to think about AI risk is to compare it to nuclear fission.
Nuclear fission was an immensely important scientific discovery, which took incredible intellectual and material resources to achieve. It offered the promise of nuclear power, but also the existential threat of nuclear weapons. It was a testament to human ingenuity but also human destructiveness.
That’s a pretty good theme and probably someone should make a movie about it.
But it’s such a good theme, and such an obvious analogy to the achievement, potential, and risk of AI, that it’s boring: the ground has already been well covered.
Let’s come at the issue from another angle. Nuclear fission is a baroque example of a dilemma we have faced many times before, which is: pretty much any technology that can be used constructively can be used destructively as well.
So let’s make a game of it!
Spinning the Wheel of Science and Engineering, we land on ‘industrial chemistry’. And spinning the Wheel of Historical Epochs, we land on ‘the Second Industrial Revolution’, so the period roughly 1865 to 1925.
Can we, working under this tight set of constraints, find an example of a dangerous technology that had wide-ranging effects? Indeed yes; in fact, we can find three!
So let’s consider what we can learn about AI from the development of mustard gas, dynamite, and leaded gasoline.
“Gassed”, John Singer Sargent, 1919, Imperial War Museums, London
Drawn from Sargent’s August 1918 eyewitness view of the aftermath of a mustard-gas attack
Mustard gas
Mustard gas is less a thing than a category, a common name for a set of chemicals: compounds of chlorine and sulphur with some carbon and hydrogen. As industrial chemistry emerged as a field in the 19th century, mustard gas or something like it was synthesized several times, without attracting notice or incident. Attention began to be paid in 1913, when a pair of scientists, one English and one German, at work in Berlin, experimented with a new variant featuring sulphuric acid. A flask broke and the contents spilled on the English scientist's leg. He was badly injured and needed two months of hospitalization. The German scientist dutifully reported the incident, its cause, and its aftermath to the German Chemical Society.
A mere four years later, in 1917, mustard gas was deployed on the Western Front.
Eight years after that, in 1925, the world outlawed chemical weapons.
It took only twelve years for mustard gas to go from recognition as a potential weapon, to deployment on the battlefield, to international prohibition.
Perhaps AI is like this. Or rather, we might decide that any particular instance of AI that emerges, or that we can imagine, is like this. Immediately upon creation, its usefulness as a weapon is obvious. Perhaps it is a deep-learning model trained to do as much damage as it can, like a command system for a nation’s military arsenal, or an automated bioweapons designer. Even more worrisome, perhaps it is a self-improving AI that exceeds human capabilities without learning respect for human life or values. Examples of these abound in our fiction, from rogue superintelligences to paperclip maximizers. An AI that kills all humans because it is ordered to do so, or because it regards them as distractions, or because it simply sees them as raw materials, is like mustard gas. It doesn't matter why the gas is released; once out, it's a grave and immediate danger. And the best solution is to make sure it never gets released, or even manufactured.
“A Dynamite Bomb exploding among the police”, T. de Thulstrup, 1886, Harper’s Weekly
An illustration of the ‘Haymarket Incident’: the presumed use of dynamite by anarchists against officers breaking up a pro-labour demonstration in 1886 Chicago
Dynamite
Dynamite also appeared in the 19th century as part of the same flowering of synthetic chemistry. Alfred Nobel, a chemist whose family business was construction, patented it in 1867 as a tool to aid demolition and tunneling. Nitroglycerin had been invented a generation before, but it was unstable, liable to explode even when great precautions were taken, making it of limited use. Indeed, Emil Nobel, Alfred's brother, died in 1864 in a nitroglycerin explosion while experimenting with ways to make it stable. Alfred kept up the project, and ultimately found that if he combined nitroglycerin with diatomaceous earth (i.e., fossilized algae), the resulting mixture was just as explosive, but no longer liable to sudden detonation. The Greek word for power, dynamis, forms the root of the English words dynamic and dynamo. Playing on that association, Nobel named his stabilized nitroglycerin dynamite.
Nobel was not a fool, but he was naive. He responded to concerns that his invention would usher in a new kind of warfare with blithe assurance. “My dynamite”, he wrote, “will sooner lead to peace than a thousand world conventions. As soon as men will find that in one instant, whole armies can be utterly destroyed, they surely will abide by golden peace.”
In 1870, three years after its invention, dynamite helped the Prussians defeat the French in the Franco-Prussian War by allowing them to destroy French fortified positions with ease. The French might have wished to respond in kind, but could not: the French government insisted upon licensing the production of explosives, and the business dealings required to obtain a license were not complete when war broke out. Dynamite not only changed warfare; it also helped incite the first wave of modern terrorism in the 1870s. As a weapon, dynamite permitted assassinations that were terrifying and unlikely to fail; its stability and power therefore touched off a wave of murders that would persist until the First World War. Famously, Nobel was so distressed by the unintended effects of what he had created that in his will he established a prize fund that gives annual awards, not only for the sciences and literature, but for peace as well.
Perhaps AI is like this. A given instance of AI could be immediately useful as a tool that makes things that had been difficult easy, and things that had been impossible, possible. But at the same time, it is also immediately useful as a weapon in the hands of those who choose to use it as such. Drones have proven immensely useful in the ongoing Russia-Ukraine war: both as reconnaissance tools, and as delivery vectors for grenades. When these drones become wholly autonomous, as they surely will, their usefulness, and lethality, will increase by an order of magnitude. An AI that serves as a powerful tool, but that can be used with equal potency to constructive or destructive ends, is like dynamite. And if your enemies are using it, then the logic of the arms race applies, as the French discovered to their sorrow when the Prussians invaded in 1870. If your opponents have it, you need to have it too, if only to defend yourself.
Leaded gasoline
Gas Pump Lead Warning, 2010, Wikimedia Commons, licensed under CC 3.0
Lead warning on a gas pump at Keeler's Korner (built 1927), Lynnwood, Washington, USA
Leaded gasoline was invented in the 1920s. The flowering of industrial chemistry in the 19th century had generated strong interest in chemical fuels, which the advent of the motor-car and fixed-wing aircraft encouraged. Enter gasoline, which worked splendidly as a stable, light, and clean fuel (at least relative to coal, the previous standard). Unfortunately, gasoline remained uneven in quality, distilled as it was from crude oils of varying grades and chemical makeups, and its use led to “knocking”: premature ignition in the engine, which wasted the fuel's power and damaged the works.
General Motors set out to solve the problem. In 1921 one of its chemists, Thomas Midgley, found that adding a lead compound, tetraethyl lead, to gasoline smoothed its combustion and eliminated the knocking. General Motors patented the invention and began selling leaded gasoline to great success; for his part, Midgley was awarded the 1923 Nichols Medal by the American Chemical Society.
The damage done by leaded gasoline is so immense that it can't be easily estimated. Exhaust from cars burning leaded gasoline, over a period of five decades, deposited lead particulates everywhere; and the higher the population density of an area, the greater the pollution. Lead exposure not only damages human health over the long term, causing elevated blood pressure, kidney damage, and premature death; it is also particularly dangerous to children, in whom it impairs brain development, cognitive function, and executive control. It’s been suggested that the abrupt and dramatic decline in crime in the 1990s was a result of lead being removed from gasoline (and house paints) in the previous two decades. If true, the horrible implication is that lead poisoning permanently impaired millions of children by reducing their intelligence and impulse control. That impairment not only blighted their lives, but also inclined them to criminality, and thereby blighted the lives of countless others.
Most frustrating of all is that this damage was preventable. Midgley had previously demonstrated that the use of ethanol as a fuel additive would also have solved the knocking problem. The issue, from General Motors' point of view, was that ethanol is easily distilled from corn and other grains, and couldn't be patented.
In other words: there was a non-toxic solution, but it wouldn't make General Motors any money, so it was suppressed.
Observers noted this at the time, but they were shouted down. Alice Hamilton, the first woman on faculty at Harvard Medical School, had earned that position for her life’s work in documenting the harms done by lead and other industrial pollutants. In 1925 she argued that the addition of lead to gasoline constituted a public health hazard, but her concerns were brushed aside. Alfred Sloan, then CEO of General Motors, blithely asserted that, once the needs of people and livestock were satisfied, there would not be enough grain in America left to produce the necessary ethanol, whereas lead was readily available. So, despite many warnings about its toxicity, leaded gasoline became widespread, and remained standard for another fifty years.
Perhaps AI is like this. A given instance of AI might be created as a useful tool, but one that produces unintended and harmful consequences because insufficient care was taken to prevent them. For example, it's well known that Amazon attempted to delegate resume screening to AI, only to discover that the system was systematically biased against women. Trained on a dataset made up of ten years' worth of resumes submitted to the company, the AI spotted the pattern that most applicants, and most successful applicants, were men, and so ‘learned’ that female candidates were suboptimal. Similar outcomes are easy to imagine. An AI seeking to optimize insurance premiums against client risk might learn from racist training data to charge higher premiums to certain ethnic communities or neighbourhoods. Less obviously, an AI companion that absorbs attention from the lonely and vulnerable, drawing them further into social isolation, would do immense damage when deployed at scale.
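For readers who want to see the mechanism rather than take it on faith, here is a minimal sketch in Python using scikit-learn. Everything in it is invented for illustration; it is not Amazon's system. A toy classifier is trained on synthetic ‘historical hiring’ decisions that penalized a gender-correlated word, and it faithfully learns that penalty.

```python
# A minimal, hypothetical sketch of how a model trained on biased
# historical decisions reproduces that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000

# Feature 1: genuine job-relevant skill, independent of gender.
skill = rng.normal(size=n)

# Feature 2: 1 if the resume contains a gender-correlated word
# (e.g. "women's", as in "women's chess club captain"), else 0.
gendered_word = rng.integers(0, 2, size=n)

# Synthetic historical labels: past human screeners rewarded skill
# but also penalized the gendered word -- the bias baked into the data.
hired = (skill - 1.5 * gendered_word + rng.normal(size=n) > 0).astype(int)

X = np.column_stack([skill, gendered_word])
model = LogisticRegression().fit(X, hired)

# The model dutifully learns the historical penalty: the coefficient
# on the gendered word comes out strongly negative, even though the
# word says nothing about ability. New resumes containing it are
# scored down accordingly.
print(dict(zip(["skill", "gendered_word"], model.coef_[0].round(2))))
```

Nothing in the training process flags the second coefficient as objectionable; the model is simply compressing the patterns in its data, prejudice included. That is the whole failure mode in miniature.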
An AI that causes harm, whatever the intentions of its human operators or the AI itself, is like leaded gasoline. Whether through carelessness or callous indifference, the humans responsible may deploy a system that solves narrowly construed problems while causing broader ones. And the best solution is to listen to the critics pointing out the danger before such systems become widespread.
Lessons learned
Irrespective of which frame best fits a given AI, these analogies point to fruitful paths forward.
Leaded gasoline was an attempt to solve a real problem and allow limited resources to be used more efficiently, but its developers, keen to capture a lucrative market and its outsized rewards, fell into the trap of motivated reasoning. They ignored their critics, and persuaded everyone else to ignore them too, and did massive damage as a result. So, when considering the question of whether to rapidly iterate new and more powerful forms of AI, let's give at least as much attention to those who warn against it as to those who blithely insist that there is no risk.
Dynamite was a tool that offered significant value to civil construction, yet it was weaponized at the very first opportunity, and armies that had it were able to easily dispatch those that did not. We can't un-invent AI now; there will be no Butlerian Jihad. So let's ensure that the tools we have are widely available, especially to those who are attempting to use AI to identify weaknesses, bolster security, and shut down malicious use.
Mustard gas was also weaponized at the first opportunity and never had any purpose other than destruction, and the world recognized this; international cooperation prohibited its use within a decade of its introduction. So let's begin laying the groundwork for multilateral agreement on non-proliferation of dangerous AI now, particularly between the USA and China.
We could multiply examples indefinitely. Returning to the Wheel of Science and Engineering and the Wheel of Historical Epochs, a quick spin yields ‘Small Pieces of Worked Metal’ and ‘500 BCE to 1 BCE’. And yes, both the stirrup and the crossbow, to name only two technologies fitting these criteria, changed the world dramatically, for better and for worse.
Perversely, this surfeit of examples is good news! The history of human ingenuity is also the history of creating new dangers, or put another way, the history of how to manage the risks of new technologies. There are many precedents to teach us the right ways to proceed with the introduction of AI at scale. I hope we follow them.
Note—Credit where it's due: I discovered late in my writing process that John Wentworth preceded me in analogizing AI failure modes to other technologies, including leaded gasoline. And my fellow Fellow of the Roots of Progress Institute, Kevin Kohler, has also explored the territory of AI analogies to great effect.
Respect to Kevin, Julius Simonelli, Steve Newman, Vince Vatter, and Chris Leong for feedback on earlier drafts.
Changing Lanes on the Road
This week I’m in Fort Lauderdale, Florida at the 37th annual conference of the International Association of Transportation Regulators. I’m particularly interested in Thursday’s session (10 October 2024) on model regulations for robotaxis. Expect a future dispatch discussing these in detail.
Thanks for reading Changing Lanes! Please let us know how we’re doing by answering the poll below. And if you’d like to respond to this post, please leave a comment.