Without doubt, speech is an essential aspect of life, especially in a society built on communication. Losing the ability to speak greatly diminishes quality of life. Scientists have pushed the boundaries of research in trying to replicate the brain's control of the vocal muscles, so that someone who has lost the ability to speak may speak again. Though far from complete success, the studies described here are signs that the goal is within reach.
One of the earliest recorded attempts to emulate speech sounds was that of the Russian professor Christian Kratzenstein in 1779. Kratzenstein created acoustic resonators shaped to match how he envisioned the vocal cavity being positioned when producing the vowel sounds. Not long after, Wolfgang von Kempelen of Vienna published a book demonstrating a machine he had built that replicated speech. His work copied the structure of the entire respiratory system, including the lungs. These early mechanical and semi-electrical attempts at replicating speech inspired many later studies, including those of Alexander Graham Bell, who went on to invent the telephone.
Some early studies focused entirely on electrical structures to replicate vocal sounds. Pioneering works in this area were those of Stewart in 1922 and Homer Dudley in 1939. Stewart's device combined an array of electrical resonators whose outputs were adjusted to the proper amplitudes to reproduce vocal sounds.
Dudley's invention, characterized as the first electrical speech synthesizer, used bandpass filters to shape the signals produced from an operator's input actions.
The demonstration of the invention was so successful that scientists around the world began to develop their own versions of the voice synthesizer.
As sophisticated technology has become more available, studies of speech synthesis have grown more complex but also more rewarding. From purely mechanical to electrical, speech synthesis research has now stepped into the realms of neurology and machine learning. A recent study demonstrated how the electrical pulses representing the words the brain is thinking can be translated into audible sounds. The study used electrodes to read brain activity and record the patterns representing words or sentences; these electrical patterns were then translated into sound using computer algorithms. Such advances in this field are considered extremely beneficial for those who have lost the ability to speak due to stroke or other neurological ailments.
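The core idea behind such brain-to-speech systems, stripped of all the neuroscience, is a learned mapping from recorded neural activity to acoustic features. The toy sketch below illustrates that idea only: it fits a linear decoder by least squares on simulated data. Everything here (the channel counts, the simulated "neural" vectors, the linear model) is invented for illustration; the actual study relied on real electrode recordings and far more sophisticated algorithms.

```python
import numpy as np

# Toy illustration of decoding, NOT the study's actual pipeline:
# map simulated "neural feature" vectors to spectrogram-like frames
# with a linear decoder fit by least squares.

rng = np.random.default_rng(0)
n_frames, n_channels, n_freq_bins = 200, 16, 8  # invented sizes

# Hidden ground-truth mapping from neural features to acoustic frames.
true_decoder = rng.normal(size=(n_channels, n_freq_bins))

neural = rng.normal(size=(n_frames, n_channels))          # simulated recordings
spectrogram = neural @ true_decoder                       # target acoustic features
spectrogram += 0.01 * rng.normal(size=spectrogram.shape)  # measurement noise

# Fit the decoder: least-squares solution of neural @ W ~= spectrogram.
W, *_ = np.linalg.lstsq(neural, spectrogram, rcond=None)

# Decode the recorded activity into predicted acoustic frames.
predicted = neural @ W
error = np.abs(predicted - spectrogram).mean()
print(f"mean reconstruction error: {error:.4f}")
```

In a real system the predicted acoustic frames would then be passed to a vocoder to produce audible speech; here the point is only that decoding reduces to learning a mapping between two streams of measurements.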