
The New Cold War: Military AI Systems

The world of artificial intelligence is closer to war than ever before (photo: CC0 Public Domain)

While footage of the fighting in Ukraine often featured vintage camo-green trucks and explosions, the conflict spurred the development of new technologies — artificial intelligence combat systems — that might not have been rapidly developed under other circumstances.

Two weeks after the start of the military conflict between Russia and Ukraine in February 2022, Alexander Karp, CEO of data analytics company Palantir, gave a presentation to European leaders. Europeans must urgently modernize their military arsenals with the help of Silicon Valley, he argued in an open letter.

For Europe to remain “strong enough to defeat the threat of foreign occupation,” Karp writes, countries must embrace “the nexus between technology and the state.” He suggests that government departments with funding should target young companies developing ultra-modern technologies and “dislodge the grip of entrenched providers”.

Whether because of him or for other reasons, the military establishment appears to be moving in that direction. On June 30, NATO announced it was creating a $1 billion innovation fund that would invest in early-stage startups and venture capital funds developing “priority” technologies such as artificial intelligence, big data and automation. Great Britain and Germany have allocated generous, multimillion-dollar funds to the development of artificial intelligence specifically for defense. All the allies are investing.

“War is a catalyst for change,” says Kenneth Payne, who heads defense research at King’s College London.

War as a catalyst

What’s happening in Ukraine has accelerated the drive to get as many AI tools on the battlefield as possible. Those with the most to gain are startups like Palantir, which hope to attract attention (and money) as the military races to update its arsenals with the latest technology.

Predictably, this also brings to the fore long-standing ethical concerns about the use of AI in military operations. The technology is now advanced enough to wreak havoc and cause death.

Back in 2018, after protests and employee outrage, Google pulled out of the Pentagon’s Project Maven, which was essentially an attempt to build image recognition systems to improve military drone strikes. The case sparked a heated debate about human rights and morality in the development of AI for autonomous weapons.

At the time, some of the most authoritative scientists in the field of artificial intelligence took a pledge not to work on lethal AI.

But that turned out to be only surface-level turbulence. Four years later, the tech world is closer to war than ever. All sorts of companies are racing to develop and deliver AI to governments, from entrenched IT behemoths to daring startups with mind-blowing ideas.

IT as critical infrastructure

Companies that sell military AI make big claims about what their technology is capable of. They promise everything from screening resumes to processing satellite data. Image recognition software can help identify targets. Autonomous drones can be used for surveillance or attacks on land, in the air or at sea.

Some of these technologies are still at an early stage on the battlefield and remain largely experimental, sometimes without much success.

But the investments do not stop, the developments do not stop and the experiments do not stop. Meredith Whittaker, senior adviser on AI at the US Federal Trade Commission, says the push to develop military AI systems is “inflaming Cold War rhetoric” and trying to create a situation that positions Big Tech as “critical national infrastructure.”

If this happens, the developers of military AI systems will enjoy a special status, as military suppliers always have. These businesses will enjoy protection and be able to avoid a number of regulations that might otherwise be imposed on them in the civilian world.

China’s long shadow

Over the whole picture looms the long, silent shadow of China, which in recent years has demonstrated how highly developed a technological industry it maintains and has even launched its own orbital space station.

China’s military is likely spending at least $1.6 billion a year on AI, according to a report by the Center for Security and Emerging Technology at Georgetown University. The U.S., for its part, “has already made significant progress toward parity,” said Lauren Kahn, a fellow at the Council on Foreign Relations. The US Department of Defense requested $874 million for artificial intelligence for 2022, although that figure does not reflect the department’s overall investment in AI, a March 2022 report said.

In a report last year outlining the steps the United States must take to stay competitive in AI by 2025, the National Security Commission on Artificial Intelligence (NSCAI) urged the US military to invest $8 billion a year in these technologies or risk falling behind China.

In parallel, the European Commission has allocated $1 billion for the development of new defense technologies.

The pressure of the “small”

In fact, how these investments are distributed is far from transparent, although the lion’s share seems to go to a small number of large corporations that are “entrenched” suppliers in the military business. Newer players with newer technologies, however, are exerting pressure of their own.

AI is developing so quickly that governments cannot afford to keep relying on the old approach of long-term partnerships with decades-long contracts, says Arnaud Guerin, CEO of Preligens, a French startup developing AI surveillance technology.

Startups and venture capital funds are expressing frustration that the process is moving so slowly, says Katherine Boyle, general partner at venture capital firm Andreessen Horowitz. In her words, “talented engineers will leave disillusioned and look for jobs at Facebook and Google, and startups will go bankrupt waiting for contracts” in the defense sector.

Humans will not always have control over AI

And while governments, militaries, corporations, and startups carry on this debate, AI continues its steady advance onto the battlefield, and the ethical concerns grow ever more pressing.

Almost all governments around the world are rushing to declare frameworks and guidelines for using artificial intelligence in a way that is “lawful, accountable, reliable and traceable and seeks to mitigate the distortions resulting from algorithms.”

One of the key principles is that humans should always remain in control of AI systems. But as the technology advances, that won’t really be possible, says Kenneth Payne.

“The whole point of an autonomous system is to make decisions faster and more accurately than a human could, and at a scale that is overwhelming for humans,” he says. According to him, it will soon be unthinkable for humanity to say, “No, we will think and evaluate every single decision.”
