GOOD MORNING, MY LITTLE CYBORGS!
You may have noticed that I have been curiously absent for the last two weeks. I have little excuse other than a full workload, both job and parenting-wise. But I’m back now, having found some time to write this whilst my daughter and friends enjoy a swim at the local pool. I am sitting with all the other parents in the viewing area, all of whom are enjoying a little peace and quiet in the midst of the weekend rush.
So, what have I been up to since the last time I wrote?
Well, a few bits…
I’ve spent some time with the team at TRA Robotics, who are doing some simply awesome work in combining AI with industrial robotics to revolutionise production processes, particularly in the autonomous electric vehicles market. Their work stands to speed up time to market, remove human workers from hazardous manual jobs, and lower the barriers to entry for new innovators in the automotive sector (and, potentially, beyond).
In other news, I had the fantastic opportunity to talk with Luke Robert Mason of Virtual Futures about the issue of cyborg rights. I wrote this piece that summarises what Mason taught me during our conversation, and also goes further into exploring the many facets of the cyborg and transhumanist movements. I found the whole thing very enlightening, and I think you will, too.
Oh, and the other great thing that’s been going on recently is all the preparation for the upcoming CogX 2018 event. In case you didn’t already know, Cognition X is a market intelligence platform and a key player in the European AI space. CogX is their offering to the annual AI events calendar – and it’s an unparalleled extravaganza that everyone with an interest in AI and other far-reaching technologies clamours to be a part of.
I’m able to offer my readers 40% off tickets to the event, where I will be a panel speaker, on 11th and 12th June. Just type in the promo code CX18DMB40 to claim your discount. You can buy your ticket here or click on the banner.
All that aside, it’s time to get on to the news from the world of AI this week!
Top 5 Most Socially-Shared AI Articles of the Week
Top of the pops this week is the news that Google employees have made good on their protests about the company’s involvement in helping the Pentagon to militarise AI. Voting with their feet, several Google staff have resigned, stating that humans, not algorithms, should be administering the finer points of warfare. The former employees claim that their objections were disregarded and that they were left unimpressed with the ethical position taken by their employer. In addition to those who resigned, 4,000 employees have registered their objection to Project Maven. It’s unlikely that this dissent will deter Google from continuing the relationship. To my mind, it seems unclear whether or not the company – in spite of all its power – even has much of a choice.
Anyone who’s been following this review for some time will have gathered my weakness for doomsday prophecies involving AI. This article of that ilk is especially delicious, having been written by Henry Kissinger. His observations about the early omens posed by social media’s fake news/echo chamber epidemic, those well-trodden concerns raised after the DeepMind AlphaGo milestone, and the reduction of humans to data, are not exactly original thoughts, but they’re well articulated. I was hoping this would be in the top five, and lo and behold, here it is. Enjoy.
Ooh, I actually tweeted a quote from this article today!
“If you deprive the robot of your intuition about cause and effect, you’re never going to communicate meaningfully. Robots could not say ‘I should have done better,’ as you and I do.”
It was in a recent email newsletter I received from the MIT Technology Review, but I didn’t have a chance to read the actual article in Quanta until now.
As you can probably gather, it’s about the need to instil in our creations a sense of human context. However, this is most certainly easier said than done. After all, as Kissinger actually stated in his Atlantic article (above):
“To what extent is it possible to enable AI to comprehend the context that informs its instructions? What medium could have helped Tay define for itself offensive, a word upon whose meaning humans do not universally agree? Can we, at an early stage, detect and correct an AI program that is acting outside our framework of expectation? Or will AI, left to its own devices, inevitably develop slight deviations that could, over time, cascade into catastrophic departures?”
Judea Pearl, on whose opinions this article is built, asserts that the answer may lie in causal reasoning. It’s another good article, this one.
What happens when deep learning software and self-executing code are responsible for making legal decisions? Nothing good, I reckon. But perhaps I’m being too reductionist. Lawyer and philosopher Mireille Hildebrandt, a professor at the research group for Law, Science, Technology and Society at Vrije Universiteit Brussel in Belgium (i.e. a greater mind than mine), has been awarded a grant of €2.5 million by the European Research Council to conduct foundational research with a dual technology focus: artificial legal intelligence and legal applications of blockchain. The answer is, of course, pretty nuanced – hence the huge grant to examine the matter in more depth.
DOOM! DOOM! DOOOOOOOM!
Of course not. Silly billy.
Quote of the Week
“If Google wants to get in the business of doing classified things for the military, then the public has the right to be concerned about what kind of company Google is becoming.”
– Gary Marcus, an AI researcher at New York University