Amongst the array of robots on display today at Robotech, out at Tokyo Big Sight, Kagawa University's artificial voice system robot was possibly one of the most interesting and bizarre-looking. The silicone mouth, complete with moving lips and tongue, aims to replicate human speech without using speakers or digital waveforms, in order to come as close to a real-life sound as possible.
The silicone mouth robot uses airflow and control valves to replicate a human trachea and vocal cords, and a resonance tube, or silicone throat, further shapes the air into distinct sounds. The lips and tongue then mold those sounds just as ours do in everyday speech. A microphone records the sounds emerging from the silicone lips, and a computer automatically analyzes their pitch and frequency against those of a human voice. The computer then adjusts the output on its own, learning the valve settings and throat compression needed to match a human voice, much like tuning an instrument.
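The feedback loop described above can be sketched in code. This is only an illustration of the principle, not the Kagawa University team's actual controller (which has not been published): measure the produced pitch, compare it with a target human pitch, and nudge a control valve until the two match. The function names and the toy acoustic model below are entirely hypothetical.

```python
def produced_pitch(valve_opening):
    """Hypothetical stand-in for the physical throat: pitch (Hz) as a
    simple monotonic function of how far the air valve is open (0..1)."""
    return 80.0 + 300.0 * valve_opening


def tune_valve(target_hz, valve=0.5, gain=0.002, tolerance=1.0, max_steps=200):
    """Proportional feedback loop: adjust the valve until the measured
    pitch is within `tolerance` Hz of the target, much like tuning an
    instrument by ear."""
    for _ in range(max_steps):
        error = target_hz - produced_pitch(valve)
        if abs(error) < tolerance:
            break
        valve += gain * error              # open the valve more if pitch is too low
        valve = min(max(valve, 0.0), 1.0)  # a physical valve has hard limits
    return valve


# Tune toward 220 Hz (the note A3, within a typical human vocal range).
valve = tune_valve(target_hz=220.0)
```

In the real robot the "acoustic model" is the physical silicone throat itself, so the computer must learn the valve-to-pitch relationship from microphone measurements rather than from a formula, but the tune-compare-adjust cycle is the same.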
The robot can currently utter a number of sounds from the Japanese alphabet as well as sing a basic song, although it's not quite at pop star level yet!
The silicone throat is usually controlled by machine valves, but it was opened up on display today so visitors could see the actual working parts that produce the sounds. Although its vocabulary is only rudimentary at present (and it bears a certain resemblance to a cow!), the fact that it generates and learns to shape sound without any speakers certainly makes for a more natural voice than current digital synthesizers.