Dr Matthew Putman is the co-founder and Chief Executive Officer of Nanotronics. He received a Ph.D. in Applied Physics and Mathematics from Columbia University, and was previously President for Development at Tech Pro Inc. He has also published poetry, produced plays and films, and is an accomplished musical composer. I could go on.
Putman is, despite his many accomplishments, best known for his work on nanotechnology, along with expertise in artificial intelligence and robotics.
It is, therefore, a great honour to have the opportunity to sit down with Dr Putman over Skype for an early morning chat about big data, AI, and – of course – a little nanotech.
From your experience, what do you think we need to do in order to get the most out of the data we feed our AI?
The time is right to sculpt Big Data the way a sculptor would cut away much of the marble to create art. Michelangelo famously said, “The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material.” There is a lot in this statement that applies to AI for industry, what we are calling Artificial Intelligence Process Control (AIPC). A complex process is a mountain of information, and the right answer for generating a perfect part is in that data, but an AIPC factory needs to learn how to carve away the useless data to find what is important. A human is just providing the chisel.
How do you see our development into Artificial General Intelligence through deep learning?
So many in the community of AI researchers are focused on the emergence of human-level AI, what is called AGI (Artificial General Intelligence). A group of these scientists see it from the inverse of a computer scientist’s perspective.
Gary Marcus, founder of Geometric Intelligence and a cognitive scientist at NYU, looks at how a human develops, rather than trying to model every neuron in the brain, and finds that a child gathers far less information than a deep learning algorithm does. The team at the company Vicarious has concluded much the same, and used this insight to become the first group to crack Captcha with an extremely sparse data set. This is so important. Our abilities to gather data aside, more data might not be the right way to think about it.
Every neuroscientist I have ever spoken to points out that the human brain is the most complex thing in the universe, but that may not be true. And more importantly, it may not matter in creating AGI. I think about this from the perspective of someone who is building an entire factory to learn, and that requires not an AGI, but rather narrow AI.
I wonder if even sparser data is more valuable. This may need to involve a variety of techniques, which is not completely unusual. Often the sparse-data proponents, like those I mentioned, criticize deep learning approaches, and deep learning experts criticize older probabilistic methods, on the grounds that each relies on ever larger data sets. My feeling is that such a binary discourse is going to limit AGI.
There are times when we do not mind a computational solution that simply computes quickly in order to inform a learning system. I am fairly sure that DeepMind knew this when creating AlphaGo and AlphaGo Zero. They used the same impressive reinforcement learning techniques that they had used when demonstrating that their AI could teach itself to win at Atari 8-bit games, with one system learning them all.
The game of Go has a near-infinite number of possible moves, however, so the task required putting a “decision tree” on top of the reinforcement learning tools. In a factory, we will see much the same. We will be morphing images from sparse data, which is already done, in order to create large data sets even though the actual observations never occurred.
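The idea of expanding sparse observations into a larger training set is the standard data-augmentation trick. A minimal sketch of the simplest version, using only symmetry transforms on a NumPy array (this is an illustration, not Nanotronics’ actual pipeline; real morphing would also use elastic deformations or generative models):

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Expand one observation into eight by rotation and mirroring."""
    variants = []
    for k in range(4):                       # the four 90-degree rotations
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # plus each rotation mirrored
    return variants

# A tiny 3x3 "image" standing in for a real inspection frame.
sample = np.arange(9).reshape(3, 3)
augmented = augment(sample)
print(len(augmented))  # 8 training views from a single observation
```

Each synthetic view is a plausible observation the camera never actually made, which is exactly the point: the learner sees a larger data set than was ever collected.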
Different mechanisms can be used when taking in information that is plentiful. Basically, it seems to me that an AIPC factory will start as superhuman, rather than go through the step of first being as good as humans. We will see; but I am fairly confident, given not only the successes of DeepMind and others, but also those from the world of generative design.
We are at a stage with AI where lateral thinking may be more important than rigorous adherence to systems and structures. How much do you consider it necessary to stick to the rules in order to maximise output?
In a very important moment in my life, the Pulitzer Prize-winning jazz improviser (those two things do not normally go together) Ornette Coleman gave me some advice that I think of every day, and sometimes many times in one day. I went to his house, which was basically an open loft for anyone who wanted to come by and play.
When I nervously started to play, he stopped me and told me to “throw away the music”. Though I had no music in front of me I began to know what he meant as I closed my eyes and started playing with the other musicians in the room by listening and reacting, not by remembering scales or songs.
This is very close to an effective AI system. There is a lot of training in the system that we want to remove so that both the human and the machine are free to listen and respond.
There are two priors in the playing of music that are analogous here. There is the human’s inborn ability to recognize tonality and a natural curiosity for music. Then there is the training, and what finally emerges is instinct.
The first equivalent prior for an AI is to have programmers create the innate structure of the algorithm so that goals to optimize for a desired outcome can emerge. The second is to have operators teach the AI, and finally a play of machine-to-machine crosstalk where morphing, experimentation, and generative outcomes arise in ways that humans alone could not have predicted.
Where do you see artificial intelligence leading us?
There are assumptions that nearly all humans have come to accept. In some ways we have to keep breaking them down, or at least looking at them with fresh eyes, to see if they are really the way that they have to be. If our science and technological capabilities have progressed in dramatic ways, and our art is a new reflection of society and personal exploration, it makes a lot of sense to think about what major changes to the planet mean.
Probably iterative steps will make the most sense, but looking towards the final desired outcome, like defining an AI utility function, will be the only way to achieve dramatic results. A team effort and belief in progress is required before progress can ever be made. So, I think of a few things that are assumed will not happen, even though Gene Roddenberry thought they could when creating Star Trek.
We have seen so much abundance created over the last hundred years, while not fully accepting what abundance really is.
Maslow’s Hierarchy of Needs is often cited, and makes sense, except that in some ways the very bottom of the triangle, food and shelter, can be readdressed even for those lucky enough to have them. Food should not be local, it should be hyperlocal. It should be growing in your home, replacing the refrigerator with a garden and replicator. Furniture and electronics should not be made in large and expensive factories, but in the place we want to use them.
This happens through the combination of AI and something that seems even more like science fiction; molecular nanotech or Atomically Precise Manufacturing. This is not a pie-in-the-sky dream. The two work very closely together to fulfil the theoretical models created by Richard Feynman, Eric Drexler, and others.
The principles of mechanosynthesis and AI computation are known. We can build from the bottom up. If this happens, the concept of money changes. Political and military power changes, and the ability to be creative also changes.
Days of laboring for the sake of earning a living to buy things disappear and, along with them, traditional currency completely. I know that I will lose some readers with these claims, as they will assume I am overreaching. It is true that they might not immediately see this, but they could, and the future of the planet certainly can.
For more insight into Dr Putman’s extensive knowledge base, you can find him regularly publishing articles on Medium. You can follow Dr Putman on Twitter, and Nanotronics itself, by following the links.