Ethical AI: The Minefield Of Lethal Autonomous Weapons

POSTED BY Michele Baker
2nd June 2018

The automated era is here. Algorithms already run a large number of everyday aspects of our lives, often behind the scenes where we do not even notice them. The stage we are at right now is only the tip of the iceberg. More, and more far-reaching, developments are coming, and the ramifications of many of them have the potential to alter the face of modern life forever.

Many experts are therefore calling for serious consideration of ethics in artificial intelligence. ‘Ethics’, though, is a broad term; the issue has many facets, touching different areas of life in different ways. In this post, I’ll be covering the loaded gun of the debate, as it were: the arguments for and against (though mainly against) the use of autonomous weapons.

 

The Ethical Minefield Of Lethal Autonomous Weapons Systems

 

 

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

– Isaac Asimov, ‘Runaround’ (1942)

 

 

In 2017, The Future of Life Institute published an open letter to the United Nations Convention on Certain Conventional Weapons. It was signed by over 160 experts, including Elon Musk and – interestingly – both founders of Cognition X, Tabitha Goldstaub and Charlie Muirhead. The letter was a plea to the High Contracting Parties participating in the convention to work hard to prevent an autonomous weapons arms race, “to protect civilians from their misuse, and to avoid the destabilising effects of these technologies”.

The letter went on to say that “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”.

Two years earlier, the same convention had covered the topic of autonomous weapons as part of its agenda. An immediate ban was suggested by several countries, including Germany, which stated that it “will not accept that the decision over life and death is taken solely by an autonomous system”, and Japan, which was explicit in saying that it “has no plans to develop robots with humans out of the loop, which may be capable of committing murder”.

The general consensus among the states participating in the CCW is that there needs to be “meaningful human control” over how autonomous weapons make targeting and engagement decisions. However, we must examine this word, ‘meaningful’, a little more closely.

What does ‘meaningful’ mean in this context? It seems a somewhat subjective term – who, ultimately, gets to decide what constitutes ‘meaningful human control’? And though the largely upstanding members of the CCW attendee list may agree that ‘meaningful control’ is necessary, history tells us that there will always be malignant parties prepared to use devastating weaponry against the wishes of the rest of the world.

For, unlike nuclear weapons, which require expensive and rare raw materials, the components required to manufacture even rudimentary autonomous weapons are not terribly costly. As development grows, prices fall. They will be cheap to mass produce, and it’s likely that, even with stringent laws in place, the technology will quickly fall into the hands of despots, warlords, dictators and eventually to terrorists and general nutcases on the black market.

 

Do automated weapons really equate to fewer fatalities in war?

The 1949 Geneva Conventions did set out firmly-worded rules on wartime conduct. Attacks, they stipulate, must satisfy military necessity, discriminate between soldiers and civilians, and balance the value of the objective against how much damage the attack is likely to do. The Martens Clause, reaffirmed in the 1977 Additional Protocols, goes further, effectively banning any weapon that violates the ‘principles of humanity and the dictates of public conscience’. Again, we see a worrying undercurrent of subjective judgment here – how can artificially intelligent systems meet these rules, when we can scarcely decide ourselves what constitutes the ‘principles of humanity’, let alone the ‘dictates of public conscience’, in this day and age?

Whilst lip-service is being paid to the issue of autonomous weapon regulation, actions are speaking louder – and what they’re saying is that it’s all going ahead regardless.

In May 2018, a raft of Google employees quit their jobs in protest over the company’s collaboration with the Pentagon on Project Maven, a programme applying AI to the analysis of military drone footage. Last year, drone swarms were tested in the US, and in 2015, Business Insider reported on Lockheed Martin’s LRASM (Long-Range Anti-Ship Missile), which can “be fired from a ship or plane and can autonomously travel to a specified area, avoiding obstacles it might encounter outside the target area. The missile will then choose one ship out of several possible options within its destination based on a pre-programmed algorithm”. The US, perhaps unsurprisingly, is one of the countries most ardently at work on discovering how best to use AI to its battlefield advantage. However, it is not alone.

On the South Korean border (well, in the 2.5-mile-wide Korean demilitarised zone) stands the Samsung SGR-A1 sentry gun. This gun, developed by the same company that brought you your gorgeous smartphone, is reportedly capable of firing autonomously. It performs surveillance, voice recognition and tracking, and is armed with a machine gun and grenade launcher.

The UK is not entirely innocent, either. BAE Systems is developing its ‘Taranis’ drone, which took its first test flight in 2013 but won’t be operational until the 2030s. It is designed to replace human-piloted Tornado GR4 warplanes, delivering air-to-air and air-to-ground ordnance across continents, autonomously.

Is the development of lethal autonomous weapons a ticking time-bomb?

One argument in favour of autonomous weapons goes thus: fewer human soldiers on the ground means fewer ally casualties, and a benevolently-engineered AI-driven autonomous weapon system may even be able to significantly curb civilian casualties, too. With fewer human deaths to worry about, however, the incentive to avoid battle is all but removed – meaning more battles may happen overall. And even if civilians do not die in these battles, destruction of infrastructure, including homes, hospitals and food sources, may not be reduced, resulting in inevitable famine, disease and refugee crises.

“Machines cannot feel hate,” argued The Guardian in 2015. “And they cannot lie about the causes of their actions.” On the flipside, this means that machines also cannot feel love, empathy, or compassion. They can therefore kill indiscriminately, developing strategies without pause for the human factor, if left without human supervision.

The ultimate question is whether we instigate a total ban now (as we did in 1995 with blinding laser weapons) or allow an AI arms race to continue. There are so many ways that autonomous weapons could go wrong; it is highly debatable whether the technology is really ready to be let loose. It may be a grave sin as a tech writer to invoke the well-worn figure of the Terminator, but I’m sure we can agree that that’s the last thing we want.


