Google announced today that it is open-sourcing the machine learning technology that powers a number of its products, including Google Photos search, speech recognition in the Google app, and the newly launched “Smart Reply” feature for its email app Inbox. Called TensorFlow, the technology helps make apps smarter, and Google says it is far more powerful than its first-generation system, allowing the company to build and train neural nets up to five times faster than before.
For Google, that means it’s able to improve its products more quickly, the company explains.
TensorFlow was originally a project developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purpose of conducting machine learning and deep neural networks research. But the technology is applicable to a number of other domains, as well, says Google.
In more technical terms, the deep learning framework is both a production-grade C++ backend, which can run on CPUs, Nvidia GPUs, Android, iOS, and OS X, and a Python front end that interfaces with NumPy, IPython notebooks, and other Python-based tooling, writes Vincent Vanhoucke, Tech Lead and Manager for the Brain Team, on his Google+ profile.
Any computation that you can express as a computational flow graph can be computed with TensorFlow, and any gradient-based machine learning algorithm will benefit from TensorFlow’s auto-differentiation and suite of first-rate optimizers, says Google.
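To make those two ideas concrete, here is a toy sketch in plain Python (this is not TensorFlow's actual API; the `Node`, `add`, `mul`, and `backward` names are invented for illustration) of what it means to express a computation as a flow graph and then auto-differentiate it by applying the chain rule backward through the graph:

```python
class Node:
    """One node in a computation graph: holds a value, a gradient slot,
    and a rule for propagating gradients back to its inputs."""
    def __init__(self, value, parents=(), backward_rule=None):
        self.value = value
        self.grad = 0.0
        self.parents = parents
        # backward_rule maps the upstream gradient to one gradient per parent
        self.backward_rule = backward_rule

def add(a, b):
    # d(a+b)/da = 1, d(a+b)/db = 1
    return Node(a.value + b.value, (a, b), lambda g: (g, g))

def mul(a, b):
    # d(a*b)/da = b, d(a*b)/db = a
    return Node(a.value * b.value, (a, b), lambda g: (g * b.value, g * a.value))

def backward(output):
    """Reverse-mode auto-differentiation: walk the graph from the output,
    accumulating d(output)/d(node) into each node's .grad. (A simplified
    walk; correct here because no intermediate node is reused.)"""
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        if node.backward_rule is None:
            continue  # a leaf input: nothing further to propagate
        for parent, g in zip(node.parents, node.backward_rule(node.grad)):
            parent.grad += g
            stack.append(parent)

# Build the graph for y = x*x + 3*x at x = 2, then differentiate it.
x = Node(2.0)
y = add(mul(x, x), mul(Node(3.0), x))
backward(y)
print(y.value)  # 10.0
print(x.grad)   # dy/dx = 2x + 3 = 7.0
```

The payoff Google describes is exactly this separation: you only write the forward graph, and the framework derives the gradients needed for training, at production scale and across the hardware targets listed above.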
“TensorFlow is what we use every day in the Google Brain team, and while it’s still very early days and there are a ton of rough edges to be ironed out, I’m excited about the opportunity to build a community of researchers, developers and infrastructure providers around it,” Vanhoucke says.
Source: TechCrunch - Sarah Perez