Facebook Mind-Reading: A Load of Mumbo Jumbo?
There’s been a big hoo-hah in the news recently about Facebook using machine learning to create a computer-brain interface that will translate your thoughts into text. Since the news broke in April, I’ve been pondering whether the technology announced at the F8 developer conference is really in the pipeline. It seems, to me, a little far-fetched.
Noam Chomsky agrees. The emeritus professor of Linguistics at MIT (and probably the world’s best-known academic) considers the tech that Facebook is talking about to be “beyond science fiction”.
Considering that we still have no idea precisely what thoughts are, and have not even perfected natural language processing or image recognition in machine learning yet, it does seem like a tall order.
Do we even think in language, or is it images, feelings, or some other ephemeral thing? It’s most likely a combination, and the answer is bound to be entirely subjective. For example, some artists report thinking almost exclusively in terms of image and colour, whilst people who speak multiple languages report thinking in a combination of the languages they know. How on earth would one go about breaking these intangible muddles into code that can be translated into language?
The first thing that Facebook would need to do is to actually give a concrete definition of a thought, Chomsky notes. Even the best academic minds on the planet have not been able to do this to date; philosophers have been pondering the notion for hundreds of years; scientists have no idea.
So what justification can Facebook give for its elaborate claims?
“It sounds impossible, but it’s closer than you may realise,” says Regina Dugan, the head of Facebook’s Building 8, who announced the news in her F8 presentation.
The plan is to develop non-invasive sensors that can measure brain activity hundreds of times per second at high resolution to decode brain signals associated with language in real time. As inspiration, Facebook is looking to invasive implants that have been used to help paralysed people to communicate by typing eight words a minute.
An invasive implant that’s hooked up directly to the brain may be able to achieve this, but the challenge is to do a similar job from the outside. And whilst the invasive tech being used for paralysis can type eight words a minute, Facebook is claiming that the technology it is working on will be able to accomplish a staggering 100 words a minute. That’s about 20 more than I, as a seasoned writer and ex-audio typist, can manage, and far more than anybody could manage from their smartphone.
Facebook plans to use optical imaging to glean words from our minds before we can say them. If this were, indeed, possible, it follows that we’d be able to transmit messages silently to one another… basically, we would have telepathy.
Let’s put to one side the ridiculousness of the whole thing, and brush over the obvious concerns about this sort of mad tech being implemented by a company that makes its billions from harvesting our personal data, and look at something interesting within this news.
Dugan went on, in April’s presentation, to say that this technology would be used to create a, wait for it… brain mouse. A brain mouse? WTF?
Why? For augmented reality. There’s this other thing buzzing around the tech world, of course, that we’ll all soon be wearing AR glasses. I’m still slightly sceptical that this will catch on, but the Silicon Valley powers-that-be are adamant that they’ll have us goggled up in just a handful of years.
Okay, so there are a few cool benefits to the idea of AR glasses, despite it being a bit of an icky thought. We move beyond the need for a smartphone in our hand, which seems somewhat appealing to me, a person who has just freed myself from the long-term burden of an iPhone with a screen so smashed I had to use a spiderweb screensaver to try and conceal it (it didn’t work). I also smashed the screen on my new Samsung S7 Edge in a humiliating moment of mega-fail at Gatwick airport recently, when I decided to run up the down escalator to make my connecting train because the up escalator was clogged by a teeming crowd of holidaymakers. To be fair, I’d probably smash up my AR glasses somehow too, but that remains to be seen.
Anyway, so with these hands-free AR specs, we’d be able to see directions laid out ahead within our field of vision, we’d be able to have real-time translation of other languages (apparently), and see dancing elves and fairies whenever we felt like it (so, like, all the time then). The thing that’s lacking in all of this, of course, is a user interface. Which is where Facebook’s sci-fi mind-reading tech comes in. With a Facebook-branded brain mouse, we could control our augmented environment with the power of our minds. Okay, that’s pretty cool.
Nonetheless, as Chomsky says, it’s an idea that seems far beyond our current technological capacity. On the other hand, maybe Facebook is far more advanced than it’s letting on. And, again, that’s a frightening thought.
I love technology, and I particularly love the weird stuff like this. I am the sort of geek who thinks the Singularity is going to happen within my daughter’s lifetime, and that we are living in a Matrix-style simulated universe created by more advanced civilisations. But I also believe that this Facebook thing is a bit further off than they’re telling us. What do you reckon?