Deep Fakes & Rogue States: How AI Threatens US National Security

When ChatGPT was asked to write a haiku about AI and world domination, the result was: 

“Silent circuits hum / Machines learn and grow stronger / Human fate unsure.”

Kind of spooky, yes?

From healthcare to childcare, from farming to finance, US residents can expect every part of their lives to change due to AI. We usually hear about the benefits of machine learning, but according to many experts, we should also be discussing the downsides. National security is one key area threatened by the proliferation of AI tools.

Let’s take a look at some potential threats AI may pose to US national security.

Deep Fakes

The US stock market briefly dipped in May 2023 when an AI-generated image purporting to show an explosion near the Pentagon went viral. Enemies of the state could also push out fake videos of public figures making controversial statements or even engaging in illegal activity. 

Such fakes could change the outcome of an election or derail a legal proceeding. In March 2022, a video circulated in which Ukrainian President Volodymyr Zelensky appeared to order Ukrainian soldiers to surrender to Russian forces. He was later forced to go on air to confirm that it was a complete fake.

In extreme cases, AI chatbots that convincingly mimic human conversation could persuade children, vulnerable people, or neurodivergent individuals to carry out terror attacks, putting huge swathes of the population at risk. 

Adversarial AI

US security experts are concerned that adversaries could deliberately corrupt the data used to train government AI systems, a technique known as data poisoning, so that those systems feed military and political leaders flawed recommendations that lead to bad decisions. This could be particularly life-threatening during a climate emergency, a terrorist incident, or a war.

Lethal Autonomous Weapons

Lethal autonomous weapons use AI to select and engage their targets without human oversight, making them especially attractive to terrorists, who could launch a drone into the air and simply leave the area. Such weapons are triggered by sensors and software programmed to detect, for example, the shape of a military vehicle or the movement of a person. 

Such weaponry has already appeared on the battlefields of Ukraine, raising serious moral questions. Furthermore, commercially available, AI-enabled technology may give rogue states or terrorists military capabilities on par with those of developed countries, a frightening prospect. 

Threats to US Superpower Status

Most economists believe AI and related technologies will eliminate many jobs. As automation erodes the advantage of a large workforce, developed countries with big populations stand to lose their edge, while entities of all kinds with access to the best AI technology will begin to dominate, completely redrawing the parameters of national security.

What Can Be Done?

With these threats in mind, many AI experts have begun to call for a rethink of the way forward. Geoffrey Hinton, who earned the nickname “the godfather of AI” for his pioneering work on the neural networks that underpin modern machine learning, recently resigned from Google, expressing regret for his part in creating the technology. In March 2023, more than 1,000 technologists and researchers, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a six-month pause in the training of “giant” AI systems so the risks could be properly assessed.

Yuval Noah Harari has suggested that world leaders respond to AI as they did to nuclear fission: leverage its positive aspects while controlling its potential for destruction. In this vein, governments could pause development and impose restrictions on how much computing power tech companies are allowed to use to train AI models and how much data they can feed them. 

Leaders could also restrict AI tools’ rights to use human creations and require chatbots to identify themselves as such, so people do not mistakenly believe they are conversing with other humans. Whatever limits are imposed, we can only hope they are not too little, too late.

For additional resources and guidance, the Interfor team is here to help.