How can we programme machines to make moral decisions which humans can’t even agree on?
Only an imperceptible fraction of technology news is reported in the mainstream media, and the vast majority of that focuses on social media, personal data, and Google. While these issues are important, they are not the elements of technological disruption which promise to change the course of civilisation.
For the most part, such elements rarely get the airtime or column inches they deserve but, recently, artificial intelligence has come firmly into the spotlight thanks to the debate around morality and driverless cars.
The BBC has reported on the efforts being made to give driverless vehicles a moral compass in order to dictate, in moments of impending collision, who the car tries to protect and who it chooses to sacrifice in scenarios where avoiding one collision puts it on course for another.
Should the car continue ahead and collide with the person crossing the road, or should it swerve and likely hit a different person on the pavement?
As part of the process behind designing this moral compass, researchers from MIT have orchestrated a colossal survey, called the Moral Machine, gathering some 40 million decisions from participants around the world in the hope of gaining insight into what importance the human race places on preserving certain lives over others.
In a replication of the classic Trolley Problem, the survey presented participants with hundreds of different scenarios, asking them to state which death the car should preferably cause. For example, should the car continue ahead and kill the child crossing the road, or swerve, avoid the child, but kill the old lady on the pavement? Or, with impending collisions, should the car try to preserve the life of its passengers or the pedestrians?
The moral compass is an essential part of the driverless car – it is argued that they simply cannot be allowed into the world without it. Such a compass, however, will have to be dictated and programmed by humans. It will also have to be standardised, at the very least, to a national level. This is where the problems begin.
My question is, how can we enable a machine to make certain life or death decisions if we as human beings cannot even agree on the right course of action among ourselves? One person will think a teenage boy is more worthy of saving than a group of three old ladies, but others will believe that killing one person is better than killing three. Some people might think passengers should be sacrificed over pedestrians because they are the ones who got into the car and therefore assumed all of the risk. But others will say that the car should protect its passengers first and foremost because that is, after all, what it is designed to do.
If we can’t agree on these issues, I don’t understand how we can programme machines to make life or death decisions based on them.
The Protected and The Doomed
In order to programme a moral compass into artificial intelligence, a small number of people must be tasked with designing it and setting its parameters.
Because humans cannot possibly agree on collision scenarios (see the MIT results listed in the BBC article for proof of this), the people who decide on the morals of technology will inevitably programme it to make decisions which many, many of us will be in direct disagreement with.
This means that a select few, most likely representatives from government bodies and large corporations, will be given the power to decide who is protected and who is doomed.
Not only is this a vulgar concept in itself – yet more power being placed in the hands of the powerful – it is also destined to create a new cultural and societal divide. The Protected and The Doomed: we will all have to be labelled one or the other.
(The labels below are presumed; they are the ones which seem most obvious to me.)
- Child under 18 years old – Protected.
- Pregnant woman – Protected.
- Man – Doomed.
- Convicted criminal – Doomed.
- Nurse – Protected.
- Woman with cerebral palsy – Doomed.
- Minister for Transport – Protected.
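To see just how crude such a scheme would have to be, imagine it reduced to code. The sketch below is purely hypothetical: every label and score is invented by me for illustration, and no real manufacturer has published anything like it. The point is that somebody, somewhere, would have to choose these numbers.

```python
# A purely hypothetical priority table. The labels and scores are
# invented for illustration only; the discomfort of writing them
# down is precisely the problem.
PRIORITY = {
    "child": 10,
    "pregnant woman": 9,
    "nurse": 7,
    "minister for transport": 6,
    "man": 3,
    "woman with cerebral palsy": 2,
    "convicted criminal": 1,
}

def who_is_spared(person_a: str, person_b: str) -> str:
    """Return whichever label the table ranks higher.

    A single fixed ordering like this is exactly what a standardised
    moral compass amounts to, however it is dressed up.
    """
    return person_a if PRIORITY[person_a] >= PRIORITY[person_b] else person_b
```

However sophisticated the real implementation, once standardised it collapses to something like this: a fixed ranking, written by a select few, applied to everyone.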
From here, cracks will start to show because we will all know what our label is. To complicate things further, and create more divides, it won't be as simple as Protected or Doomed, because your fate will be entirely dependent on who you come up against in each collision scenario.
Me, for example. I might be saved if my opponent is a 66-year-old woman with Stage 4 leukaemia, but I’m pretty sure I would lose out to a Group Captain in the RAF.
And, not only will we be left with the divide between The Protected and The Doomed, there will be another between those of us who agree with the moral compass that technology has been given, and those of us who don’t. Just look at the conflict, hatred, and violence that occurs from disagreements around issues like a presidential election or Brexit referendum, and then account for the fact that this will literally be a case of life or death. Chaos will ensue.
No man is born equal
The final result of all of this will be undeniable proof that no man or woman is born equal. We will know it because we will be able to measure how protected each of us is. Some newborn babies will grow into children with greater protection than others, and vice versa.
In 2017, there were 1,710 road deaths in the UK alone. Staggering when you remember that there are only 365 days in a year. Imagine how this number will rise when we introduce driverless cars onto the roads to integrate with traditional vehicles. With each death involving an autonomous car, there will be uproar and division before, inevitably, the levee breaks.
We already know how slow humans are to implement obvious safety changes when it comes to technology. The automobile, for example, was introduced in 1885, but wearing a seatbelt didn't become a legal requirement until 1983, despite the undeniable evidence of its life-saving capabilities. It just wasn't in the interest of car manufacturers to include belts, so they successfully resisted doing so for years. Think of how many lives ended as a result of their reluctance.
If we mirror this sluggish, self-interested behaviour with AI and autonomous vehicles, many people will die before the cars become truly safe.
This means that while the technology remains flawed, it is preordained that the moral compass will kill thousands in order to try and save thousands of others. This split will create societal unrest the likes of which we’ve never seen.
Can we live like this? Can we live with this knowledge? Will The Protected be able to make a little extra cash by becoming Protectors and accompanying The Doomed as they cross the road? Will those Protectors eventually strike over worker-rights in the gig economy?
Will a person be able to change their status from Doomed to Protected, and vice versa? If a neurosurgeon grows weary of their job and quits to follow their dreams of becoming a beat poet, will their status change accordingly from Protected to Doomed? If a former criminal wins a parliamentary seat, will the opposite happen? And to solve the issue of overcrowded prisons, will courts start removing Protected status as due punishment for crime?
How will a driverless car know a person’s health and employment status, age, and criminal record as it approaches them at speed? While not possible today, it certainly will be soon. Whether it’s because we all agree to have microchips sewn into our necks which contain all of our details as a human and civilian and are easily scanned by the car or, more realistically, facial recognition software becomes capable of instantaneous, dark analytics, autonomous vehicles will be able to drive down the street, fully aware of everybody around them.
Still seems too far-fetched? Then consider how welcoming we as a society are being towards technology and, more specifically, how loose we are being with our personal data. It has already been proven that our online footprint is such that AI can pull together a stunningly accurate social media profile page for each of us without our permission or input. We live in an age where all of this information is for sale.
And it goes even deeper, too. The number of people who have sent their saliva to online genealogy companies is nearly at the point where, by analysing the millions of submitted samples, AI will be able to infer the DNA of almost all of us, regardless of whether we submitted our gob. It's fairly rudimentary machine learning.
As such, when the autonomous vehicle comes careering towards you, it might even know that you suffer from a hereditary heart condition before you do, and thus deem you expendable.
There is no end to the questions and concerns with placing morality inside machines.
The MIT Survey shows us that the moral compass shifts dramatically from country to country – does this mean that cars should be programmed to save pedestrians in one country but passengers in another?
If a human kills someone whilst driving, they are criminally investigated regardless of their level of fault in the crash: who is investigated when a driverless car kills someone in order to save another? And if the moral compass that led it to do so was compliant with the law, are the manufacturers immune to prosecution?
Here in England, we live in a democracy. The cynic inside me would tell you that the true purpose of democracy is to preserve power. When given the power to design a moral compass, will those powerful few not programme it to behave, once put into action, in a way that preserves their existing power?
There is no right answer to any of these questions; no facts, just opinions. How exactly are we going to programme millions of driverless cars to behave in accordance with an agreed upon truth when no such truth can possibly exist?