What if Putin Had an Army of Killer Artificial Intelligence Robots?


The New Frontier: How Science Fiction Distorts Our Next Move on Artificial Intelligence and Cybersecurity

It’s been 51 years since a computer named HAL terrorized astronauts in the movie 2001: A Space Odyssey. And it’s been more than three decades since “The Terminator” featured a stone-faced Arnold Schwarzenegger as a cyborg terrorizing “Sarah Connah.” Yet, dark dystopian civilizations — where computers or robots control humans — are often what come to mind when we think of the future of artificial intelligence. And that is misleading.

I was happily raised on a healthy diet of science fiction, from the “Death Star” to “Blade Runner.” But increasingly, as we approach the AI-reality threshold, Hollywood’s technological doomsday scenarios divert the conversation from what we really need to focus on: the critical link between human beings, artificial intelligence and cybersecurity. In other words, it’s not AI we need to fear; it’s AI in the hands of autocrats, cybercriminals and nation-states. Fathom, for a moment, Darth Vader, Hitler or even a benevolent U.S. president in charge of an army of robots that always obey their leader’s command. In this scenario, we wouldn’t avert a nuclear showdown with a simple game of tic-tac-toe (yes, I loved “War Games,” too). 

As I noted in my post about deepfakes, not only is AI getting more sophisticated but it’s increasingly being used in nefarious ways, and we recently crossed a new frontier. Last March, the CEO of a U.K. energy firm received a call from what he believed was the CEO of the firm’s German parent company, who told him to immediately transfer $243,000 to the bank account of a Hungarian supplier — which he did. After the transfer, the money was moved to a bank in Mexico and then on to multiple other locations.

In fact, the U.K. executive was talking to a bot, a computer-generated “digital assistant” — much like Siri or Alexa — designed by criminals using AI technology to mimic the voice of the German CEO. The only digital assistance the crime-bot gave was to digitally separate the company from a quarter million dollars.

Rüdiger Kirsch of Euler Hermes Group SA, the firm’s insurance company, told the Wall Street Journal that the U.K. executive recognized the slight German accent and “melody” in his boss’s voice. Kirsch also said he believes commercial software was used to mimic the CEO’s voice — meaning this may be the first instance of AI voice mimicry used for fraud.

Trust me, it won’t be the last. We’re at the dawn of a whole new era of AI-assisted cybercrime.

What’s ironic about the prevailing wisdom around AI, however, is that the capabilities of criminals and bad actors are often underestimated, while those of the cybersecurity side are overestimated. At every security conference I attend, the room is filled with booths of companies claiming to use “advanced” AI to defend data and otherwise protect organizations. But buyer beware: at this stage, it’s more a marketing strategy than a viable product.

That’s because artificial intelligence is more human than we think. From my experience peering under the hood of AI-enabled technology like internet-enabled TVs, digital assistants and end-point cybersecurity products, I’m constantly amazed by how much human input and monitoring is necessary to make them “smart.” In many ways, this is a comforting thought, as it makes human beings the lifeblood of how the technology is applied. People, at least, have a concept of morality and conscience; machines don’t. 

In a sense, AI is really just an advanced algorithm (which, by the way, can build better algorithms than humans can). The next stage is artificial general intelligence (AGI), the ability of a machine to perform any intellectual task a human can (some experts refer to this as the singularity or machine consciousness). This is an important distinction because current AI can perform certain tasks as well as or even better than humans, but not every task — and humans still need to provide the training.
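To make that distinction concrete, here is a minimal, purely illustrative Python sketch (the phrases, labels and scoring rule are hypothetical, invented for this example): a narrow model learns exactly one task, flagging fraud-flavored phrases, entirely from examples a human has labeled, and it can do nothing else.

    # Illustrative sketch only: "narrow" AI here is just a function fit to
    # human-labeled examples for a single task. All data below is made up.

    # Humans supply the training data. Without this hand-labeling, nothing is learned.
    labeled_examples = [
        ("urgent wire transfer required today", 1),        # 1 = looks like fraud
        ("please send payment to the new supplier", 1),
        ("agenda for thursday's team meeting", 0),         # 0 = looks benign
        ("lunch order for the client visit", 0),
    ]

    def train(examples):
        # Tally how often each word appears under each human-assigned label.
        counts = {0: {}, 1: {}}
        for text, label in examples:
            for word in text.split():
                counts[label][word] = counts[label].get(word, 0) + 1
        return counts

    def predict(model, text):
        # Score a new phrase against the learned word tallies; pick the higher label.
        scores = {label: sum(words.get(w, 0) for w in text.split())
                  for label, words in model.items()}
        return max(scores, key=scores.get)

    model = train(labeled_examples)
    print(predict(model, "urgent transfer to a new supplier"))  # 1: echoes the fraud examples
    print(predict(model, "thursday lunch meeting"))             # 0: echoes the benign examples

The same toy model knows nothing about voices, images or chess; every skill it has was defined by the people who labeled the data, which is exactly why human judgment remains the lifeblood of today’s AI.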

We’ll achieve artificial general intelligence when we’re able to replicate the functions of the human brain. Experts say it’s not only theoretically possible, but that we’ll most likely develop it by the end of the century, if not much sooner. 

The U.S., China and Russia are all pursuing the technology with a vengeance, each vying for supremacy. In 2017, China released a plan to be the leader by 2030, and that same year Russian President Vladimir Putin said, “Whoever becomes the leader in this sphere will become the ruler of the world.” Darth Putin, anyone?

And this brings us back to those doomsday scenarios, but I’m not talking about cyborgs roaming American cities with modern weaponry. The real threat is to American industry and infrastructure. So, instead of worrying about a future where bots are our overlords, it’s time we focus on the technological and legislative conversations we need to have before AGI becomes ubiquitous.

Cybercriminals using AI were able to swindle an energy company out of a quarter million dollars without breaking a sweat. 

They’ll be back.


About Cybersecurity Keynote Speaker John Sileo

John Sileo is the founder and CEO of The Sileo Group, a cybersecurity think tank in Lakewood, Colorado, and an award-winning author, keynote speaker and expert on technology, cybersecurity, and tech/life balance. He energizes conferences, corporate trainings and main-stage events by making security fun and engaging. His clients include the Pentagon, Schwab, and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes and Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.
