Ethical AI: The Need For Cultural Transparency

POSTED BY Michele Baker
1st June 2018

The automated era is here. Algorithms already run a great many everyday aspects of our lives, often behind the scenes where we do not even notice them. The stage we are at now is only the tip of the iceberg. Deeper developments are coming, and many of them have the potential to alter the face of modern life forever.

Many experts are therefore calling for serious consideration of ethics in artificial intelligence. ‘Ethics’, though, is a broad term; the issue has multiple facets, and it touches different areas of life in different ways. In this post, I’ll be covering the ways in which public access to knowledge and information about meaningful technological innovation is blocked by a combination of fake news and opaque academic attitudes.

You may also be interested in Ethical AI: Automation & The Post-Work Utopia (Or Dystopia)

 


 

The news headlines, driven as they are by the need to encourage clicks, foreground the negative sides of automation and technological advancement. About one in five of the articles I have clicked on over the last two years features a picture of the Terminator. This is unhelpful.

“We need better evidence about the impacts of technology,” argues Tim Gardam in an open letter from the Ada Lovelace Institute to the Financial Times. “Much of the current research is either being led by investigative journalism – which is inevitably drawn to the worst cases – or by industry – which profiles the best outcomes.”

Rather than focusing on warnings about impending joblessness, autonomous weapons and so on, it is vital that we make the conversation a two-way street. Most of the best work in artificial intelligence, machine learning and beyond is taking place in academic institutions or privately within the big technology companies. The latter do release news when something significant happens, but the stories that surface tend – once again – to be those that unsettle the general public.

The triumph of DeepMind’s AlphaGo over leading Go champion Lee Sedol in March 2016 was one such milestone. It was a groundbreaking leap for artificial intelligence – but it also scared a lot of people. Chess was mastered by an AI back in 1997, when IBM’s Deep Blue beat Garry Kasparov, but Go is a far more complex game.

There are reportedly more potential positions in Go than there are atoms in the universe – and even if each of those atoms contained an entire universe of atoms within it, the total number of atoms across our universe and all its constituent ‘miniverses’ would only begin to approach the number of possible positions in Go. AlphaGo beat Sedol four games to one.
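To put rough numbers on that comparison (these are commonly cited estimates, not figures from this article): the observable universe contains on the order of 10^80 atoms, while the number of legal positions on a full-size 19×19 Go board is around 10^170. Nesting a whole universe of atoms inside each atom gives

10^80 × 10^80 = 10^160

which is still some ten orders of magnitude short of 10^170. So the comparison, extravagant as it sounds, roughly holds up.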

Since then, we have heard news of everything from two Google Home devices holding a conversation to AIs developing their own language that researchers could not understand. This week, Engadget reported on a rather irresponsible artist who rigged up a device enabling a Google Home to fire a gun, under the headline ‘Google Assistant Fired A Gun: We Need To Talk‘. The fact that the device had no ‘inclination’ of its own to fire a weapon – a human built the rig and gave the command – makes the headline unhelpful and infuriating.

Speaking of unhelpful and infuriating, Sophia the Robot springs to mind too – a robot granted Saudi Arabian citizenship in 2017 despite being incredibly rudimentary in AI terms, with most of its verbal output scripted by David Hanson and his team.

Sophia annoys me because it is deliberately constructed to evoke ‘uncanny valley’ unease in a general public largely unaware of how basic its AI is. As a PR device for raising awareness of robotics and artificial intelligence, it is utterly counterproductive: a circus sideshow designed to alienate and to provoke morbid curiosity.

All these grumbles of mine aside, such reports and demonstrations do serve the function of driving public engagement with technology. That engagement is, however, pretty one-sided.

 


 

Sophia and its spooky counterparts take up a lot of news space, whilst the very real and useful developments in medical technology, care, education and the environmental sectors take second place. We must seek to engage the public in a meaningful, informative way on developments in these areas, and that engagement must come directly from academia, as Sabine Hauert, Assistant Professor in Robotics at the University of Bristol, argues:

“Many researchers have never tweeted, blogged or made a YouTube video. Second, outreach is ‘yet another thing to do’ and time is limited. Third, it can take years to build up a social media following that makes the effort worthwhile. And fourth, engagement work is rarely valued in research assessments or regarded seriously by tenure committees. Citations cannot be the only measure of success for grants and academic progression; we must also value shares, views, comments or likes”.

 

Hauert points to a well-known 1986 paper by MIT robotics researcher Rodney Brooks to illustrate her point. Brooks’ paper is considered a classic in academic circles; in the thirty-two years since, it has gathered nearly 10,000 citations. By contrast, a video of a robot developed by Brooks’ company, Rethink Robotics, received more than 60,000 views in a single month.

The academy is a closed circle, in which knowledge is almost exclusively accessible to those in the ‘club’. That circle needs to be opened, and that knowledge passed on to the public.

 


 

It is ethically problematic to keep knowledge and insight about major developments that will shape our society away from the very people they will affect. Doing so propagates elitism and the privilege of the few over the many. We particularly need to engage more young people in STEM subjects and encourage them to pursue careers in these sectors: talent is already scarce, much of the best of it is being snapped up by the major tech companies, often abroad, and we need far more people to fill the gap.

With half the top scientists in academia and the other half in private companies (and that is a generous split – the numbers lean much further towards the tech companies), the closed circle is also a dangerous breeding ground for bias. As the Financial Times argues:

“The hybrid public-private identities of top scientists could bias discussions about the societal and moral implications of new technologies. Experts are scarce. If all the leaders in the key fields are affiliated with companies whose profits are directly affected by the regulatory environment, who is left to speak for the common good?”

 

Without access to real knowledge of the latest developments, an entire population is prevented from learning about the changes happening in our world. There are problems requiring solutions, and there may be minds out there able and willing to solve them, if only they knew what those problems were.








Michele Baker

Michele Baker is the Senior Content Strategist at TDMB. She began her journey into tech marketing via a Masters in Creative Writing, evolving from a prize-winning poet and short story writer to a futuristic content guru. Michele now writes endlessly about all aspects of technology, hosts the TDMB Presents… tech podcast, and speaks at numerous tech and marketing events.


Get in Touch With Michele Baker

01306 632 854
michele@thedigitalmarketingbureau.com
