Google Translate is one of the company’s most widely used products. It lets users translate typed text, photos, and spoken words via speech-to-text technology. Now the company has unveiled a new project called Translatotron, which performs speech-to-speech translation directly, without going through text.
A post on Google’s AI blog explains that traditional systems work in stages: a neural network converts speech to text, the text is translated, and the translated text is then synthesized back into speech. Translatotron instead uses a single model that maps input speech directly to translated speech.
Because Translatotron does not split the task into separate stages, it avoids errors compounding between them. It can also preserve the properties of the speaker’s voice and accent when translating sounds from one language to another.
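The difference between the staged cascade and the direct approach can be sketched in toy form. Everything below is my own illustration, not Google's code: "speech" is faked as a list of (word, voice) pairs, and the tiny lexicon stands in for real translation models.

```python
# Toy contrast between a cascaded pipeline and direct speech-to-speech
# translation. All functions are hypothetical stand-ins.

def asr(audio):
    """Speech -> source text (the voice information is discarded here)."""
    return " ".join(tok for tok, _ in audio)

def translate(text):
    """Source text -> target text via a toy lexicon."""
    lexicon = {"hello": "bonjour", "world": "monde"}
    return " ".join(lexicon.get(w, w) for w in text.split())

def tts(text):
    """Target text -> speech, but with a generic synthetic voice."""
    return [(w, "synthetic-voice") for w in text.split()]

def cascade(audio):
    """Classic three-stage pipeline: speech -> text -> text -> speech."""
    return tts(translate(asr(audio)))

def direct(audio):
    """One end-to-end mapping: the voice attached to each frame survives."""
    lexicon = {"hello": "bonjour", "world": "monde"}
    return [(lexicon.get(tok, tok), voice) for tok, voice in audio]

speech = [("hello", "alice"), ("world", "alice")]
print(cascade(speech))  # [('bonjour', 'synthetic-voice'), ('monde', 'synthetic-voice')]
print(direct(speech))   # [('bonjour', 'alice'), ('monde', 'alice')]
```

The point of the toy is the second tuple element: the cascade loses the speaker's identity at the text bottleneck, while the direct mapping can carry it through, which is why Translatotron can retain a speaker's voice.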
This capability could be valuable for dubbing studios that translate movies and TV dramas into other languages.
Experts say the accuracy of the new model is not yet very good, but they are confident it will improve over time.
Browse, fill in a form field, click a button, all in the right order: rather easy for a human, but not for an AI. Google has therefore developed new methods that should allow artificial intelligences to learn these tasks on their own.
The task may seem trivial, yet the progress needed to achieve it is enormous. Google’s own teams (not DeepMind, Alphabet’s AI subsidiary) have published a new scientific paper called “Learning to Navigate the Web”.
In it, they describe how they trained a neural network through reinforcement learning to understand how a web page works so that it can then navigate on its own. In reinforcement learning, a neural network searches for the solution to a problem: it is “rewarded” when it makes a good choice, and it improves through iteration at every step.
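The reward loop described above can be sketched with a minimal tabular learner. This is my own illustration, not the paper's method: the "web page" is reduced to a three-field form, and the numbers (four candidate inputs, the correct choices) are invented for the example.

```python
# Minimal reward-driven learning sketch: an agent learns by trial and
# error which input fills each field of a hypothetical 3-field form.
import random

random.seed(0)
N_FIELDS, N_ACTIONS = 3, 4   # assumed: 4 candidate inputs per field
CORRECT = [2, 0, 3]          # assumed correct choice for each field

# One value estimate per (field, action) pair, all starting at zero.
Q = [[0.0] * N_ACTIONS for _ in range(N_FIELDS)]
alpha, epsilon = 0.5, 0.2    # learning rate and exploration rate

for episode in range(500):
    for field in range(N_FIELDS):
        # Mostly pick the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[field][a])
        # The "reward": +1 for the right input, a small penalty otherwise.
        reward = 1.0 if action == CORRECT[field] else -0.1
        # Nudge the estimate toward the observed reward.
        Q[field][action] += alpha * (reward - Q[field][action])

policy = [max(range(N_ACTIONS), key=lambda a: Q[f][a]) for f in range(N_FIELDS)]
print(policy)  # with this seed, the learned policy recovers [2, 0, 3]
```

Good choices raise their value estimates while bad ones are pushed down, so over many iterations the greedy policy converges on the rewarded behavior, which is the core mechanism the article describes.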
For example, the researchers confronted their agents, QWeb and INET, with seemingly simple instructions such as booking a plane ticket or interacting with a social networking site.
Previous work used human demonstrations to guide the algorithm and speed up its learning from errors. However, as the Google researchers note, it is difficult to obtain demonstrations for every type of site. They therefore opted for two reinforcement learning methods.
The first, used when demonstrations are available, is a training regime that begins with simplified task sequences and becomes progressively more complex. The second lets the algorithm treat a random navigation as if it had been described by instructions.
All this thanks to a meta-trainer called INET: an artificial instructor able to generate instructions and demonstrations from a random web page given in the form of a DOM (Document Object Model). According to the researchers, it is easier to generate instructions for a page than it is to follow them and interact with that page.
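To make the DOM-to-instruction idea concrete, here is a minimal sketch of my own. The class name, the instruction template, and the example values are hypothetical and not taken from the paper; the only point is that, given a page's DOM, producing an instruction is a simple matter of picking a field and a value.

```python
# Hedged sketch: generating an instruction from a page's DOM, the easy
# direction that a meta-trainer like INET exploits. Names are invented.
import random
from html.parser import HTMLParser

class FieldCollector(HTMLParser):
    """Walks an HTML page and collects the names of its input fields."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            attrs = dict(attrs)
            if "name" in attrs:
                self.fields.append(attrs["name"])

# A toy flight-booking form standing in for a random web page.
page = '<form><input name="departure"><input name="destination"></form>'
collector = FieldCollector()
collector.feed(page)

# Generating an instruction is trivial: pick any field and any value.
random.seed(1)
field = random.choice(collector.fields)
value = random.choice(["Paris", "Tokyo"])
instruction = f'type "{value}" into the {field} field'
print(collector.fields)  # ['departure', 'destination']
print(instruction)
```

Following that instruction, by contrast, requires an agent that can locate the right element on an arbitrary page, which is exactly the hard problem QWeb is trained on.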
In addition, the researchers use another approach, called curriculum learning, which breaks complex tasks into smaller steps to make life easier for the neural network. Another technique, called shallow encoding, helps the AI gain a better understanding of the web page and the information it contains.
This new work appears to have achieved better results than previous attempts: the agents came out victorious in trials where other AIs had so far failed.