Sting, artificial intelligence, and that time the world almost ended

Jason Thomas
6 min read · Dec 4, 2020
Photo by Alexander Smagin on Unsplash

In June 1985, Sting released his first solo album, The Dream of the Blue Turtles. The third track on the record, “Russians,” is one of my favorite songs of all time.

Sting went on to earn Grammy nominations for Album of the Year and Best Male Pop Vocal Performance for that record. The backing music on “Russians” sounds unmistakably Russian; in fact, Sting claims he borrowed it directly from Russian composer Sergei Prokofiev. Interestingly, there is audio of a concert, posted to YouTube back in 2010 (though I’m not sure when it was originally recorded), in which Sting explains that the song originated from watching pirated Soviet TV with his friend Kenny.

This song resonates with me for so many reasons. It reminds me of my childhood, it has a connection to hacking pirated TV, and it highlights a theme I’ve been obsessed with as of late: the relationship between the United States and the Soviet Union back in the ’80s and its impact on modern Russian/US relations, specifically as it relates to technologies like artificial intelligence.

In January of 2019, Vladimir Putin, the current President of Russia, instructed the Russian government to create a national strategy for the development of artificial intelligence. According to my Google translation of the directive, Putin wants the Russian government to:

“develop approaches to the national strategy for the development of artificial intelligence and submit appropriate proposals.”

Speaking to a group of students back in 2017, Putin predicted that whichever country led AI research globally would eventually dominate the rest of the planet. “Artificial intelligence is the future, not only for Russia, but for all humankind,” said Putin. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Similarly, China is pursuing AI development, as is the United States. Most nations realize the potential for its use in all areas of human activity. And to be sure, there is REAL promise. Imagine systems that improve human decision-making, allowing us to discover cures for disease and reduce famine and poverty. These are real, near-term challenges that AI can address. But artificial intelligence also has a soft underbelly that, if not addressed quickly, will create more problems than it solves.

What is most frightening to me is not that an AI can make decisions faster than people, or maybe even better. What worries me is how it learns to make those decisions. What data is it learning from? Is it biased? Like anything that learns (you, me, my dog), we can and do become biased. So can an AI. Amazon learned this lesson pretty clearly in 2018, when its experimental résumé-screening tool was found to penalize résumés that included the word “women’s,” because the model had been trained on a decade of applications that came mostly from men.
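
To make that concrete, here is a minimal, hypothetical sketch in Python. None of this is Amazon’s actual system or data; it’s just a toy scorer that learns word weights from which past résumés led to a hire. Because the historical outcomes are skewed, the learned weights are skewed too, and two otherwise identical candidates end up with different scores.

```python
# A toy illustration (not Amazon's actual system): a naive resume scorer
# that learns word weights purely from which past resumes led to a hire.
# Because the historical outcomes are skewed, the learned weights are too.

from collections import Counter

# Hypothetical historical data: (words on the resume, was the candidate hired?)
history = [
    ({"java", "chess", "captain"}, True),
    ({"java", "rowing"}, True),
    ({"python", "chess"}, True),
    ({"python", "womens", "chess", "captain"}, False),  # skewed outcomes,
    ({"java", "womens", "rowing"}, False),              # not skewed skill
]

def learn_weights(history):
    """Estimate P(hired | word) by simple counting."""
    seen, hired = Counter(), Counter()
    for words, was_hired in history:
        for word in words:
            seen[word] += 1
            if was_hired:
                hired[word] += 1
    return {word: hired[word] / seen[word] for word in seen}

def score(resume_words, weights):
    """Average the learned weights of the words we recognize."""
    known = [weights[w] for w in resume_words if w in weights]
    return sum(known) / len(known) if known else 0.0

weights = learn_weights(history)

# Two candidates with identical skills; one resume also contains "womens".
print(score({"python", "java", "chess"}, weights))            # ~0.61
print(score({"python", "java", "chess", "womens"}, weights))  # ~0.46
```

The scorer is never given a rule telling it to penalize anyone; it simply inherits the pattern baked into its training data, which is exactly the failure mode Amazon reportedly ran into.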

Machines can make mistakes. Sometimes those errors result from the data a machine is fed; sometimes they result from the type of algorithm it uses to process that data. Whatever the reason, machines can and do get it wrong. But then again, so do people. The key difference is generally the speed at which the mistake is made and the amount of harm it can do. Of course, this shouldn’t strike anyone as a new thing. Machines have been making mistakes forever.

Consider the Soviet early warning system responsible for detecting nuclear attacks. Oko is its name. It’s also the Russian word for “eye,” and it’s a system of satellites that detects the launch of ballistic missiles by identifying engine exhaust via infrared light.
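
For illustration only, here is a deliberately naive Python sketch of that kind of threshold logic. The numbers, names, and rules are all hypothetical and far simpler than whatever Oko actually does; the point is just that a rule saying “bright infrared source equals launch” has no way to ask why the source is bright.

```python
# A naive, hypothetical sketch of threshold-based launch detection.
# Nothing here reflects Oko's real logic; it only shows how a fixed rule
# can confuse a reflection with a rocket plume.

LAUNCH_THRESHOLD = 0.8  # hypothetical normalized infrared intensity

def is_launch(intensity: float) -> bool:
    """Flag any sufficiently bright infrared source as a launch."""
    return intensity >= LAUNCH_THRESHOLD

readings = [
    ("missile exhaust plume", 0.95),
    # Sunlight glinting off high-altitude clouds is the widely reported
    # cause of the September 1983 false alarm.
    ("sunlight reflecting off high-altitude clouds", 0.88),
    ("background terrain", 0.12),
]

for source, intensity in readings:
    verdict = "LAUNCH DETECTED" if is_launch(intensity) else "clear"
    print(f"{source}: {verdict}")
```

Both the real plume and the reflected sunlight cross the threshold; nothing in the rule itself can tell them apart.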

In September of 1983, just three weeks after the Soviet downing of KAL 007, which we discussed last time, one of Oko’s satellites detected the launch of five nuclear missiles from the United States. Imagine you’re the Soviets. You shoot down a civilian airliner, killing 61 Americans and a sitting US Congressman. Then three weeks later, your nuclear attack early warning system detects five nuclear missiles headed your way. What do you do? The screens are flashing red. Sirens are going off. Five warheads are inbound. You’ve murdered innocent civilians. The US has a reason to attack. Is it real? What should you do?

If you’re Stanislav Petrov, the lieutenant colonel in the Soviet Air Defence Force on duty that night, you do nothing. That’s right. You sit tight. You don’t say a word. In an interview with the BBC thirty years later, this is how Petrov described that night:

“I had all the data [to suggest there was an ongoing missile attack]. If I had sent my report up the chain of command, nobody would have said a word against it. The siren howled, but I just sat there for a few seconds, staring at the big, back-lit, red screen with the word ‘launch’ on it. A minute later the siren went off again. The second missile was launched. Then the third, and the fourth, and the fifth. Computers changed their alerts from ‘launch’ to ‘missile strike’. There was no rule about how long we were allowed to think before we reported a strike. But we knew that every second of procrastination took away valuable time; that the Soviet Union’s military and political leadership needed to be informed without delay. All I had to do was to reach for the phone; to raise the direct line to our top commanders — but I couldn’t move. I felt like I was sitting on a hot frying pan.”

In an interview with Wired Magazine in 2007, Petrov claimed that he had a funny feeling about the launch. That it wasn’t real. That the sensors and computers had made a mistake. He felt that if the US had REALLY launched an attack, they would’ve launched more missiles, not just five. As Wired describes it:

“Petrov’s gut feeling was due in large part to his lack of faith in the Soviet early-warning system, which he subsequently described as “raw.” He reported it as a false alarm to his superiors, and hoped to hell he was right.”

Thankfully he was right. The machine had been mistaken. And thanks to Petrov’s gut feeling, the earth still exists. I say that in the most serious way possible. There was a very real chance that his decision could have resulted in the destruction of the planet. The question to ask, then, is: what if it hadn’t been Petrov making that decision? What if it had been an AI, operating autonomously? Would it have known to take a breath? To stop and consider what its decision would lead to?

To be clear, I’m really not comfortable using the term AI to describe what we’re talking about here. Intelligence is more than rule-following. It’s knowing when to break the rules, it’s knowing when to color outside the lines. It’s knowing the limitations you possess and seeking out new information to improve on those limitations. It’s being intelligent about intelligence, artificial or not.

And in my mind, that’s why we continue to face ethical challenges related to autonomous thinking machines. While they are arguably good at solving problems using data, they still lack the fundamental ability to reason about the data they’re using.

Truthfully, all this talk of modern AI and the errors it can make feels eerily familiar, doesn’t it? It feels like WarGames all over again. It feels like we’re back in 1985 listening to Sting sing about Russians. It feels like we’re one hack away from out-of-control killer robots. And while I’m an optimist when it comes to AI research and development, I am a realist when it comes to the consequences of its creation. Like Sting said, I hope the Russians love their children too.
