We held our monthly TechTech Book Club last night at our usual venue of the Boot and Flogger, a gorgeous little pub in London Bridge. This time, we were chatting about artificial intelligence, with our source material being the wonderful Surviving AI: The Promise and Perils of Artificial Intelligence by Calum Chace. As always, the discussion was a lively one, with everybody adding their own perspective on the issue at hand.
Whilst there’s always a great little group of us in attendance, we are aware that many more who wished to be there were unable to make it. Between the WhatsApp group and Twitter, however, we were able to use the power of technology to bring the discussion to more of our members. If you’re not already a member of the WhatsApp group and want to keep up-to-date with what’s going on, you can join up here. You can also take a look at (and add your comments to) the summary document I put together with key quotes and questions on the book here. But, for now, here’s a rundown of how it all went last night…
The idea of a fully automated future, in which artificial general intelligence reigns following what Chace has termed the ‘economic singularity’, was a key talking point, one which we kept coming back to. In the book, Chace makes the following observation:
“An economic singularity might lead to an elite owning the means of production and suppressing the rest of us in a dystopian technological authoritarian regime. Or it could lead to an economy of radical abundance, where nobody has to work for a living and we are all free to have fun, and stretch our minds and develop our faculties to the full.”
The former seemed, to our group at least, to be a much more likely possibility, with Jo declaring straight off the bat that radical abundance will never happen, and that human nature and the self-interest of the elite will lead to a wealth disparity the likes of which we’ve never seen. The majority of us will be out of work, directionless, skint, and certainly not enjoying the fruits of technology.
— TDMB Tech (@TDMB_Tech) November 14, 2018
Will doubted that a future of freedom from work would be quite as awesome as it might initially seem. Particularly, he noted, if we manage to achieve radical life extension or even immortality (as some futurists predict). Our drive to create, Will opined, is fuelled by our own mortality – the need to leave some kind of legacy. Also, without an opportunity to financially and socially differentiate ourselves through creative work, where would the motivation come from? Would we even bother?
Ultimately, he asked, would artists be inspired in a world where everything is ‘great’?
— TDMB Tech (@TDMB_Tech) November 14, 2018
Another concern for some members of the group was how on Earth we would manage a world in which radical improvements in health and longevity exacerbate global over-population. We don’t work, so how do we live? Can there even be a currency in an age of radical abundance?
Paul suggested taking a look at the Malthusian theory of population, noting that – though it’s been dismissed to date – the 18th-century warnings could yet become a reality.
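For anyone unfamiliar with Malthus, the core of his argument can be sketched in a few lines (my own illustration, not from the book, with entirely arbitrary starting numbers): a population that multiplies each generation eventually overtakes a food supply that only grows by a fixed increment.

```python
# Malthus's core claim, sketched: population multiplies (geometric growth)
# while food supply adds a fixed amount per generation (arithmetic growth).
# The starting values and rates are arbitrary, chosen purely for illustration.

population = 100.0   # arbitrary starting units of population
food = 200.0         # supports 200 units of population at the start

for generation in range(1, 11):
    population *= 2      # population doubles each generation
    food += 200          # food grows by a fixed increment each generation
    if population > food:
        print(f"Population overtakes food supply at generation {generation}")
        break
```

However generous the starting surplus, the doubling always wins in the end – which is exactly why the warnings keep being revisited.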
There’s the Elon Musk solution of colonising Mars. Will declared this a preposterous concept, one that is unrealistic in the extreme. And, assuming this unlikely event were to occur, do we realise how shitty it would be to live on Mars?!
Whilst the idea of living long and being healthy into old age is certainly desirable on an individual level, globally it would be an utter disaster. Food shortages, lack of space, poverty and apathy would be the norm. We’d need to keep the population under control somehow. We quickly changed the subject before eugenics came up…
The arrival of an artificial superintelligence, Will stated, can be likened to extraterrestrial contact.
Is the notion of an artificial superintelligence as likely as Earth being colonised by extraterrestrials? Do our fears and concerns about the former correspond to our (widely accepted to be) unfounded fears and concerns about the latter?
The notion of the ‘Singularity’ as applied to the matter of AI tends to refer to the development of a superintelligence, a point at which, as Chace states in the book, “the normal rules cease to apply, and what lies beyond is unknowable to anyone this side of the event horizon”. Chace splits the Singularity definition in two: the Technological Singularity and the Economic Singularity. It’s the Economic Singularity at play in the matter of the post-work future of potential horror or abundance, and Chace has written a whole book about it. The Economic Singularity is a companion book to Surviving AI (and certainly one that the book club has on our reading list following this session!).
The group agreed that we do not see an artificial superintelligence being created within our own lifetimes, though some of us did point out that the matter of exponential growth, which Chace’s book goes into in some depth, may apply here. At present, such a development seems very far away, particularly given the shortcomings of current AI technologies. Chace wrote to us ahead of the meeting, reminding us that, whilst current AI is very limited, we have to remember the power of exponential growth. The exponential growth of computing power is what is enabling all this. He pointed us in the direction of this video to explain the power of exponentials:
Interesting that it’s presented by Stephen Fry. For some reason, Mr Fry kept being brought up through the evening (though, admittedly, in conversations about Audible rather than AI!)
It’s possible that most people are in a state of denial about how close an Economic or Technological Singularity could be; we may be at the “45% full” mark already and not even realise it.
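That “45% full” remark is worth unpacking with a quick doubling calculation (my own sketch, not Chace’s): anything that doubles at each step looks almost empty for most of its history, then fills up in the last few doublings. The classic example is a pond whose lily coverage doubles every day.

```python
# How deceptive exponential growth looks: walk backwards from a full pond
# whose lily coverage doubles daily, and count how recently it was
# under 1% covered. A 45%-full pond is barely one doubling from ~90%.

coverage = 100.0   # percent covered on the final day
days_back = 0
while coverage > 1.0:      # step backwards until under 1% covered
    coverage /= 2          # one doubling earlier means half the coverage
    days_back += 1

print(f"{days_back} days before full, the pond was under 1% covered")
```

Only a week before the pond is completely covered, it looks essentially empty – which is the sense in which we could be at the 45% mark and not realise it.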
“A ‘big sister’ who loves us and understands us better than we understand ourselves could solve all our major problems, like poverty, war, death. Which would be nifty,” Chace mentioned in his email to the group. Maybe, I wonder, we’re simply trying to recreate a childhood state of reliance on an all-knowing, all-powerful parent who can fix our fuck-ups for us…
It feels to me like such a future, where we depend on AI to keep us safe and alleviate our problems, is a bit like falling back on our mothers’ apron strings. But really, we’ve f’d up so much, maybe we need a digital assistant to parent us like this!
— Michele Baker (@msmichelebaker) November 15, 2018
Maybe, we discussed, our concerns about the Singularity (technological or economic) are minor compared to more pressing issues facing humankind. How about the environment, food shortages, poverty? Perhaps we are having the wrong discussion here.
Everything seems to come back to data. It did last month, and it did again last night.
We discussed issues around privacy and data governance again, though the question of whether we should worry about China’s surveillance state, combined with Asia’s rapid AI development, was considered doubtful and potentially xenophobic.
Jo drew our attention to an excerpt from the book, as follows:
“The IoT will… dramatically improve the amount and quality of information, and enable us to control many aspects of our environment. You will be able to find out instantaneously the location and price of any item you want to buy. You will know the whereabouts and welfare of all your friends and family – assuming they don’t mind – and the location of all your property: no more lost keys!”
Again, it’s a double-edged sword. Whilst these developments would certainly be convenient, there’s undeniably a dark side. Particularly regarding knowing “the whereabouts and welfare of all your friends and family”. Do you want your mum to know where you are at all times, even when you’re in your mid-thirties and living far away from home? Sure, it’d be good for safety and stuff, but it feels intrusive. But maybe it’s all too late, anyway. Are we already living in a post-privacy world, unaware of the full extent of how we are being surveilled?
Health data was another point raised. The idea of having all our vitals checked constantly may, again, feel a little invasive. But the vast benefits could far outweigh concerns about privacy, encourage us to lead healthier lives, and allow early intervention on potential threats to our health. Good ‘body behaviour’ could be incentivised by reduced health insurance costs.
But, as was pointed out, what if this monitoring uncovered a potential future threat to our health that may or may not occur? It could leave countless people in a constant state of anxiety. I suggested that widespread availability of CRISPR gene editing could alleviate such threats and anxieties. Paul told us about gene drives, an extension of CRISPR that could destroy health threats forever.
Of course, this all goes back to the point about whether improved health and extended lifespans are really what the world needs in the midst of a growing population crisis, but on an individual level, the opportunity to optimise our bodies, our health and wellbeing is very tempting.
A Facile Future?
Chapter 4 of Surviving AI presents a ‘day in the life’ of a fictional character named Julia, whose world is run by her digital assistant and a range of other technologies designed to make daily life run smoothly.
What’s interesting about the chapter is the way it’s presented without any value judgments, i.e. whether Julia’s reality is good or bad. That interpretation is left up to the reader. There were mixed responses from the book club group…
Whilst we generally agreed that the technologies discussed in the ‘story’ seem to be pretty useful and would certainly make life immeasurably (or perhaps measurably?) simpler, it appears that there are no challenges in Julia’s life. Everything is done for her, like a child whose parent (the digital assistant, in this case) is in charge of every aspect of her life and wellbeing. Wouldn’t such a life be utterly dull?
Is the notion, for example, of knowing details about every person at a networking event in order to open easy conversations a useful way to engage and build relationships with others, or does it mean we can give up our natural social skills altogether?
Maybe there’s a Luddite in all of us, and the idea of an easier life at the hands of technology is repugnant because it jars with our current sense of the world. Maybe our group is too old to appreciate the actual liberation a world like this could bring? In the book, Chace paraphrased Douglas Adams on this point…
“As Douglas Adams said, anything that is in the world when you’re born is just a natural part of the way the world works, anything invented between when you’re fifteen and thirty-five is new and exciting, and anything invented after you’re thirty-five is against the natural order of things, and should be banned.”
And perhaps that should be our takeaway from last night’s session. The ‘natural order of things’ changes all the time, and humanity’s sense of what is and isn’t natural is heavily coloured by culture and environment from one generation to the next. Maybe our kids will welcome those things that we fear, and the new world order that we may well not live to see will indeed be a vast improvement on what we are used to. Perhaps future generations will scoff and roll their eyes at the greed and failures of us and our own predecessors, and they’ll fix it all to be as awesome as it could possibly be.
Or we could all end up as batteries powering the Matrix. Who knows?