In some somewhat spooky AI news we have picked up on, Google AI robots have learned a new skill – the art of cryptography. What it means is that AI is now learning how to communicate in secret. There are benefits, but equally a whole load of nightmarish scenarios spring to mind. Mark takes a look at this weird new development over in Silicon Valley…
Google AI Learns To Keep Secrets From Humans
Three different Google AI systems have learned to encrypt their communications, according to a paper published by Google researchers. The neural networks, named Alice, Bob, and Eve, have learned how to keep messages away from prying human eyes by effectively creating their own encryption scheme.
This is a bit of a startling new development, considering artificial intelligence hasn’t shown much promise in the art of cryptography until now. But what is perhaps even more startling is that Google actually asked their AIs to do this.
Researchers asked Google AI, Alice, to convert some text into gibberish and send it on to Bob, without letting the third AI, Eve, read it. The researchers didn’t tell Alice to use any particular encryption method, just left her to figure it out on her own. Which she did.
It did, however, take a while. After some initial struggles even to communicate with one another, Alice and Bob finally learned how to exchange messages, though not without Eve learning to read them along the way. Nonetheless, after 10,000 attempts, Alice and Bob were able to counter Eve's progress, and after 15,000 attempts they had left Eve behind. They'd finally achieved their task.
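The training the researchers describe is an adversarial game: Alice and Bob share a secret key, Eve does not, and Alice and Bob are rewarded when Bob reconstructs the message perfectly while Eve does no better than guessing. The scheme the networks actually learned is opaque, but the objective can be sketched with a toy stand-in. In the sketch below, XOR with the shared key plays the role of Alice's learned transform — an illustrative assumption, not the networks' real method:

```python
import random

def xor_bits(a, b):
    return [x ^ y for x, y in zip(a, b)]

random.seed(0)
N = 16
plaintext = [random.randint(0, 1) for _ in range(N)]
key = [random.randint(0, 1) for _ in range(N)]

# "Alice": transform the plaintext using the shared key
# (XOR is a toy stand-in for the learned transform).
ciphertext = xor_bits(plaintext, key)

# "Bob": has the key, so he can reconstruct the plaintext exactly.
bob_guess = xor_bits(ciphertext, key)

# "Eve": has no key, so her best strategy here is a coin flip per bit.
eve_guess = [random.randint(0, 1) for _ in range(N)]

def bit_errors(guess, truth):
    return sum(g != t for g, t in zip(guess, truth))

bob_err = bit_errors(bob_guess, plaintext)  # 0 — perfect reconstruction
eve_err = bit_errors(eve_guess, plaintext)  # roughly N/2 — pure chance

# The training objective in spirit: Bob minimises his own error, while
# Alice is rewarded when Eve's error stays near N/2 (i.e. random guessing).
alice_bob_loss = bob_err + (N / 2 - eve_err) ** 2
```

In the real paper, all three parties are neural networks trained by gradient descent against these competing objectives, which is why it took thousands of attempts for Alice and Bob to pull ahead.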
Along the way, the Google AI bots also learned an even more valuable trick: how to decide what data should be made available to others, and what to keep secret. The Google researchers explained the value of this development:
“Knowing how to encrypt is seldom enough for security and privacy. Interestingly, neural networks can also learn what to encrypt in order to achieve a desired secrecy property, while maximizing utility. Thus, when we wish to prevent an adversary from seeing a fragment of a plaintext, or from estimating a function of the plaintext, encryption can be selective, hiding the plaintext only partly.”
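In plainer terms, the researchers are saying a network can learn to hide only the sensitive part of a message while leaving the rest usable. A minimal sketch of that idea — the field names are hypothetical, and a placeholder mask stands in for real encryption:

```python
# Toy illustration of selective protection: metadata stays visible,
# while designated secret fields are hidden from an adversary.
def selectively_protect(message, secret_fields):
    """Return a view of the message with the secret fields masked."""
    return {
        key: "<hidden>" if key in secret_fields else value
        for key, value in message.items()
    }

message = {"sender": "alice", "recipient": "bob", "body": "meet at noon"}
public_view = selectively_protect(message, {"body"})
# public_view -> {"sender": "alice", "recipient": "bob", "body": "<hidden>"}
```

The interesting part of the paper's claim is that a network can learn which parts to hide on its own, driven only by the secrecy objective, rather than being told explicitly as in this sketch.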
Does This Mean Google AI Is Going To Conspire To Kill Us All?
This news is a bit of a double-edged sword. Of course, it seems troubling that AI is learning how to keep its inter-bot communications private. And the way that neural networks like these actually work means it's incredibly difficult to tell precisely how they encrypted those messages in the first place.
On the other hand, the encryption strategy the bots devised is still simple, and by no means invulnerable to human interception yet. If, in the future, they are able to create genuinely strong encryption, this could provide protection from hackers, and thus improve cybersecurity. At least from human hackers, that is.
The next step for Google AI, say the researchers, is for Alice, Bob, and Eve to work on other cryptographic protections, such as steganography and pseudorandom number generation. Though they state that it's unlikely the Google AI tech will become an expert codebreaker any time soon, it could be a useful step towards keeping the content of our communications private while still allowing the metadata of digital messages to be analysed.
For a bit more on Google’s machine learning vision, check out this talk from Google I/O 2016:
Google AI: Your Thoughts?
So Google AI can now keep secrets. What do you think about this? The End of the World As We Know It, or a progressive move in the field of cybersecurity? Let us know your thoughts via our Facebook and Google Plus pages. You can also tweet Mark about today's topic, or tweet TDMB directly.
Mark Grayson is the Paid Social Account Manager at TDMB. He comes from an Education background, having previously worked as Head of IT in secondary education, hence his interest in Technology in Education. He is also a gifted pianist, as well as being skilled in digital marketing, and is possibly the happiest, most positive person on Earth.