There were two pieces of news from the world of robotics that piqued my interest this week. The first, from Wired, is the story of a robot developed by researchers at Gallaudet University—in collaboration with Yale, the University of Southern California, and Italy’s University of D’Annunzio. What’s so special about this robot?
Well, its function is to help deaf children communicate — particularly those who have had little access to natural language in their early years. The robot communicates with the child through facial cues, directing the child’s attention to a human avatar on a screen who is ‘speaking’ in sign language.
First and foremost, the ability of a robot like this to help young deaf children communicate is great news. For parents of deaf children, learning to sign is an added task of parenting that they must master quickly. A system like this, combining robotics, algorithms and brain science to teach this crucial skill, could be a welcome breakthrough that helps those children develop their communication skills as early as possible.
Nonetheless, the use for deaf children is not the only one mentioned in this article. Wired goes on to explain how such a robot could be used to aid communication development in hearing children. “In an ideal world,” the article notes, “every child would get enough face-to-face communication during early development to build solid language skills, be that by way of sign language or the spoken word. The reality is, not all parents have the time to sit down and read to their kids.”
Though the article stresses that a robot should not be considered a replacement for parental interaction, it does raise the question: in what sort of home can parents afford a robot yet have no time to communicate with their children? I can see that such a robot could be a useful, if occasional, tool for children in special needs schools, homes, or hospices, but if we are talking about using a robot to communicate with our children at home, I have some concerns. Perhaps I’m missing something here?
Putting these concerns to one side, however, there is another angle to this development that feeds into the next piece of news…
The other piece of news that caught my eye this week was a report from Science Daily about a breakthrough by researchers at Columbia Engineering. The researchers claim to have created an untethered soft robot whose actions can ‘mimic natural biological systems’.
The 3D-printable, synthetic soft muscle, or ‘artificial active tissue’ as the article puts it, has a strain density 15 times that of natural muscle and is capable of lifting 1,000 times its own weight.
“We’ve been making great strides toward making robot minds, but robot bodies are still primitive,” said Hod Lipson, professor of mechanical engineering at the Creative Machines Lab, who led the project. “This is a big piece of the puzzle and, like biology, the new actuator can be shaped and reshaped a thousand ways. We’ve overcome one of the final barriers to making lifelike robots.”
We’ve been seeing news about developments in artificial intelligence based on the natural neural networks of the human brain for some time. The creation of ever more sophisticated AI is the tech news world’s daily bread. Building the bodies to go with artificial intelligence, on the other hand, has so far been, as Hod Lipson says, primitive. But the development of synthetic muscle that feels natural and, crucially, is significantly stronger than human muscle should give us all pause. Combine this with the robot mentioned above, which can communicate using visual cues, and we’re on our way to something more ‘embodied’ for our AI creations.
Are we about to see Blade Runner-style replicants walking the streets? Maybe not quite yet, but these early movements suggest that embodied artificial beings capable of communication, intelligence, and strength could be closer than we previously thought. Or am I jumping the gun? Let me know your thoughts…