An article by Blair Hanley Frank from VentureBeat.
"Google quietly released an academic paper that could provide a blueprint for the future of machine learning. Called “One Model to Learn Them All,” it lays out a template for how to create a single machine learning model that can address multiple tasks well.
The MultiModel, as the Google researchers call it, was trained on a variety of tasks, including translation, language parsing, speech recognition, image recognition, and object detection. While its results don’t show radical improvements over existing approaches, they illustrate that training a machine learning system on a variety of tasks could help boost its overall performance.
For example, the MultiModel improved its accuracy on machine translation, speech, and parsing tasks when trained on all of the operations it was capable of, compared to when the model was just trained on one operation."
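The improvement described above is the core idea of multi-task learning: gradients from every task update a shared core, so features learned for one task can help another. The sketch below is purely illustrative and is not the paper's MultiModel architecture (which uses modality-specific encoders around a shared encoder/decoder); all names, shapes, and the toy linear model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "core" weights used by every task (hypothetical toy model).
W_shared = rng.normal(size=(8, 4)) * 0.1
# Task-specific output heads, one per task (names are placeholders).
heads = {"task_a": rng.normal(size=(4, 3)) * 0.1,
         "task_b": rng.normal(size=(4, 2)) * 0.1}

def train_step(x, y, task, lr=0.05):
    """One SGD step on squared error. Both the shared core and the
    head for this task are updated, so tasks share learned features."""
    global W_shared
    h = np.tanh(x @ W_shared)          # shared representation
    pred = h @ heads[task]             # task-specific prediction
    err = pred - y
    # Backprop through the head, then through the shared layer.
    grad_head = h.T @ err
    grad_h = (err @ heads[task].T) * (1 - h ** 2)
    grad_shared = x.T @ grad_h
    heads[task] -= lr * grad_head
    W_shared -= lr * grad_shared
    return float((err ** 2).mean())

# Alternate minibatches from different tasks: the multi-task analogue of
# training the model "on all of the operations it was capable of".
for step in range(200):
    task = "task_a" if step % 2 == 0 else "task_b"
    x = rng.normal(size=(16, 8))
    y = rng.normal(size=(16, heads[task].shape[1]))
    loss = train_step(x, y, task)
```

The key design point is that only the heads are task-specific; everything upstream is shared, which is what lets training on one task influence performance on the others.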