Mark has come across some groundbreaking news this week. A draft report has been published by the European Parliament Committee on Legal Affairs, relating to how we will legally manage AI as it grows in capability. With robots and AI becoming an increasingly normalised addition to our lives, is it time that Asimov’s Laws of Robotics actually come to fruition?
The Laws of Robotics Come to the EU
With robots infiltrating our everyday lives, MEPs have called for the introduction of a comprehensive set of rules governing how humans are to engage with artificial intelligence (AI) and robots. We will have no choice but to embrace the fact that the robot revolution is set to engulf every part of society.
A draft report by the European Parliament’s Committee on Legal Affairs makes it clear that we are about to cross the boundary into a new industrial revolution. The report goes into great detail on whether or not robots should have legal status and be classified as “electronic persons”.
The report recommends that engineers building our metal friends install a kill switch in all humanoid robots, meaning that any robot could be shut down if things were to go wrong. MEPs have suggested that humans should be able to use robots “without risk or fear of physical or psychological harm”.
A Lawyer’s View
Lorna Brazell is a practising law partner at Osborne Clarke, specialising in Intellectual Property. She is also a published author. Her book Nanotechnology Law: Best Practices was published in 2012, and the second edition of her Intellectual Property Handbook was published in mid-2013. The third edition of her book on Electronic Signatures and Identities is now due.
She was a little astonished at how wide-ranging these rules are, and questioned the real need to give future robots legal status. Lorna said, “Blue whales and gorillas don’t have personhood but I would suggest that they have as many aspects of humanity as robots, so I don’t see why we should jump into giving robots this status”.
It’s Happening… So What Next?
According to the report, robots and other forms of artificial intelligence are ready, and are poised to infiltrate every level of society.
AI has the potential to bring in an abundance of wealth, but as robots take jobs, where does that leave their human counterparts? Governments will need to think carefully about some form of fallback income as industries change.
Looking 10–20 years down the line, how will a “care robot” in a care home cope with the privacy, dignity and physical safety of the patients it looks after if its systems fail or are hacked?
Prepare for the Singularity?
The report goes on to say that within the space of a few decades, AI could surpass human intellectual capacity. If the designers of such machines do not prepare properly, robots could come to define their own destiny and ensure the survival of their own kind.
Long before this potential disaster loomed, famed sci-fi writer Isaac Asimov came up with a set of rules to be adhered to in the event that robots became self-aware. Since the laws cannot be converted into machine code, the report says they must instead be directed at the designers, producers and operators of robots.
The Three Laws of Robotics state:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given by human beings except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
Whilst Asimov’s laws were created as part of the narrative of his 1942 short story, ‘Runaround’, they have been widely embraced by those working in the field of robotics ever since.
The European Parliamentary report goes on to recommend that all robots should be built in the interests of humans… see above!
Designers will be required to register their robots and provide access to the source code for accident investigation. A research ethics committee will have the final say on whether a new robotic design can go ahead, and MEPs have called for a European Agency for Robotics and Artificial Intelligence that can provide technical, ethical and regulatory expertise, and a report on how this new technology will impact jobs.
Interestingly, the report also explores the legal liabilities of robots, suggesting that liability should be proportionate to the level of instructions given to the robot. Producers, for example, would have to take out specific insurance cover.
The World Economic Forum has said robots and AI could replace 5.1 million jobs by 2020.
Finally, the report says: “The greater a robot’s learning capability or autonomy is, the lower other parties’ responsibilities should be and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be.”
We look forward to seeing whether MEPs vote in favour of this statute. It will then be up to individual governments to ratify it before it becomes EU law.
What are your thoughts?
If you have any thoughts on today’s article or the laws of robotics in general, then why not drop us a tweet? Or use #AskTDMB to strike up a discussion!
Mark Grayson is the Paid Social Account Manager at TDMB. He comes from an Education background, having previously worked as Head of IT in secondary education, hence his interest in Technology in Education. He is also a gifted pianist, as well as being skilled in digital marketing, and is possibly the happiest, most positive person on Earth.