2015-11-11

Google aims to build computer that mimics human brain.

Greg Corrado, senior research scientist at Google, delivers a keynote speech during the company's annual Asia-Pacific press event on Nov 11, 2015 in Tokyo. [Photo provided to chinadaily.com.cn]

US tech giant Google held its annual Asia-Pacific press event on Tuesday in Tokyo to present the company's latest improvements to machine learning technology. The new system, TensorFlow, is "faster, smarter, and more flexible" than the previous iteration, allowing Google to scale down to a single smartphone or up to an entire datacenter full of computers. "We've seen firsthand what TensorFlow can do, and we think it could make an even bigger impact outside Google," wrote Google CEO Sundar Pichai. "So today we're also open-sourcing TensorFlow." By open-sourcing the system, Google believes researchers, developers and hobbyists will be able to share their ideas more efficiently than ever before. In releasing its artificial intelligence software into the wild, Google is allowing third-party developers to contribute to its evolution. Despite sounding like a sanitary product, TensorFlow is in fact behind some of Google's biggest recent advances, such as the improvements in speech recognition that have allowed Google Now to expand.

The theoretical goal of machine learning is to build technology that works like the human brain, and engineers have been working for years to develop "deep neural networks" designed to mimic people's minds. "Trying to manually program a computer to be clever is far more difficult than programming the computer to learn from experiences," said Greg Corrado, Google's senior research scientist, during the event, titled "The Magic in the Machine". From image recognition to voice translation and noise cancellation, Google uses machine learning in many of its products, and has pumped a huge amount of its research and development budget into improving these systems. Originally developed by the Google Brain team as a successor to its previous machine learning platform, DistBelief, TensorFlow has until now been an internal tool, but as the website explains: "TensorFlow is not complete; it is intended to be built upon, improved, and extended. We have made an initial release of the source code, and are currently moving our internal development efforts over to use a public repository for the day-to-day changes made by our team at Google. We hope to build an active open source community that drives the future of this library, both by providing feedback and by actively contributing to the source code." Earlier this year, a TensorFlow project made the news when Google's DeepDream showed us what computers dream about.
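Corrado's point about teaching a computer rather than hand-coding it is easy to see in miniature. Below is a minimal sketch using the graph-and-session API from TensorFlow's 2015 open source release: instead of writing the rule y = 3x + 1 into the program, we declare trainable parameters and let gradient descent recover the rule from noisy data. The data, learning rate, and step count are illustrative assumptions, not anything Google shipped.

```python
import numpy as np
import tensorflow as tf

# Toy "experience": noisy samples of y = 3x + 1 (made-up data).
x_data = np.random.rand(100).astype(np.float32)
y_data = (3.0 * x_data + 1.0 + np.random.normal(0, 0.05, 100)).astype(np.float32)

# Instead of hand-coding the rule, declare trainable parameters...
w = tf.Variable(0.0)
b = tf.Variable(0.0)
y = w * x_data + b

# ...and let gradient descent fit them to the observed data.
loss = tf.reduce_mean(tf.square(y - y_data))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for _ in range(200):
        sess.run(train_step)
    print(sess.run([w, b]))  # should approach [3.0, 1.0]
```

The same pattern, scaled up from two parameters to millions and from a line to a deep network, is what "learning from experiences" means in production systems.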


Corrado joined Google's deep learning research team in 2011; the team now counts more than 40 world-class engineers and research scientists, including Geoff Hinton, one of the field's founding fathers. TensorFlow is based on the same internal system Google has spent several years developing to support its AI software and other mathematically complex programs. As for DeepDream, it turns out that when you show neural networks Fear and Loathing in Las Vegas, they dream about some quite terrifying stuff that takes it to a whole other level.


In open sourcing its TensorFlow AI engine, Google can feed all sorts of machine learning research outside the company, and in many ways this research will feed back into Google. The Google Research blog explains: "Today we're proud to announce the open source release of TensorFlow – our second-generation machine learning system, specifically designed to correct these shortcomings [of DistBelief]. TensorFlow is general, flexible, portable, easy-to-use, and completely open source... We added all this while improving upon DistBelief's speed, scalability, and production readiness – in fact, on some benchmarks, TensorFlow is twice as fast as DistBelief." Google says TensorFlow is used today in a number of its most visible products, including image search in Google Photos, speech recognition systems, Gmail, and Google Search. By releasing TensorFlow, Google hopes the software will become more advanced and widespread as machine learning and artificial intelligence (AI) are used to make searching more accurate.
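To make "general, flexible, portable" concrete, here is a minimal sketch of TensorFlow's dataflow model as it looked at release: the computation is described once as a graph, and the runtime maps that same description onto whatever hardware is available. The layer sizes are illustrative assumptions (a flattened 28x28 image classifier), not a Google production model.

```python
import numpy as np
import tensorflow as tf

# Describe the computation once as a graph; TensorFlow can then map the
# same description onto a phone, a workstation, or a datacenter.
x = tf.placeholder(tf.float32, shape=[None, 784])   # e.g. flattened 28x28 images
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
probs = tf.nn.softmax(tf.matmul(x, w) + b)          # per-class probabilities

# Nothing computes until a Session executes the graph.
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    out = sess.run(probs, feed_dict={x: np.zeros([1, 784], np.float32)})
    print(out.shape)  # (1, 10)
```

Separating the graph description from its execution is the design choice that lets one model definition run across very different hardware.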


Machine learning makes entirely new product categories possible, from self-driving cars at Tesla and Google to new forms of entertainment in the virtual reality applications Facebook is developing for its Oculus headset. Inside Google, when tackling tasks like image recognition, speech recognition and language translation, TensorFlow depends on machines equipped with GPUs – graphics processing units, chips originally designed to render graphics for games and the like that have also proven adept at other tasks. Companies ranging from Google to Facebook and Microsoft are betting that by being open they can entice talented academics to work for them, while encouraging the wider community to work on new AI technologies. The news comes as research released by online marketing technology company Rocket Fuel reveals that almost twice as many people believe artificial intelligence can solve big world problems as think it is a threat to humanity.

Stephen Hawking has famously warned that the rise of artificial intelligence could see the human race become extinct, saying that technology will eventually "supersede" humanity because it develops faster than biological evolution. AI is playing an increasingly important role in the world's online services, and alternative chips are playing an increasingly important role in that AI. As Google's announcement puts it: "We hope this will let the machine learning community – everyone from academic researchers, to engineers, to hobbyists – exchange ideas much more quickly, through working code rather than just research papers." Christopher Manning, a professor of linguistics and computer science at Stanford University, said the system can perform operations much faster than other tools. "As a researcher, if a tool makes you faster, that's pretty compelling."

Meanwhile, despite reports that thousands of British jobs have already been replaced by machines, only 9 per cent of people believe that artificial intelligence will threaten their job, while 10 per cent think it will enhance it. Today, inside its massive data centers, Facebook uses GPUs to train its face recognition services, but when delivering those services to Facebookers – actually identifying faces on its social networks – it uses traditional computer processors, or CPUs. This basic setup is the industry norm, as Facebook CTO Mike "Schrep" Schroepfer recently pointed out during a briefing with reporters at the company's Menlo Park, California headquarters. But as Google seeks an ever greater level of efficiency, there are cases where the company both trains and executes its AI models on GPUs inside the data center.

The search giant has also been using an artificial intelligence programme called "RankBrain" to help determine the pecking order in its influential internet search results. Chinese search giant Baidu is building a new AI system that works in much the same way. "This is quite a big paradigm change," says Baidu chief scientist Andrew Ng.

Some Internet companies and researchers, however, are now exploring FPGAs, or field-programmable gate arrays, as a replacement for GPUs in the AI arena, and Intel recently acquired a company that specializes in these programmable chips. At places like Google, Facebook, Microsoft, and Baidu, GPUs have proven remarkably important to so-called “deep learning” because they can process lots of little bits of data in parallel.

Deep learning relies on neural networks – systems that approximate the web of neurons in the human brain – and these networks are designed to analyze massive amounts of data at speed. But, typically, when these companies put deep learning into action – when they offer a smartphone app that recognizes cats, say – the app is driven by a data center system that runs on CPUs, because individual requests don't supply enough data to keep the hardware busy: "The GPU never really gets going." That said, if you can consistently feed data into your GPUs during this execution stage, they can provide even greater efficiency than CPUs. And Google says there are already cases where TensorFlow runs on GPUs during the execution stage. "We sometimes use GPUs for both training and recognition, depending on the problem," confirms company spokesperson Jason Freidenfelds.
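The train-on-GPU, serve-on-CPU split described above maps directly onto TensorFlow's explicit device placement. The following is a hedged sketch: device placement and soft-placement fallback are features of the released API, but the workload, shapes, and device names here are illustrative assumptions, not Google's production setup.

```python
import tensorflow as tf

w = tf.Variable(tf.random_normal([1024, 1024]))

# Heavy, batched math (the training-style workload) is pinned to a GPU...
with tf.device('/gpu:0'):
    heavy = tf.matmul(w, w)

# ...while a lightweight serving-style op is pinned to the CPU.
with tf.device('/cpu:0'):
    served = tf.reduce_sum(heavy)

# allow_soft_placement falls back to the CPU on machines without a GPU;
# log_device_placement prints where each op actually ran.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(served))
```

Because placement is just an annotation on the graph, the same model can be shifted between CPUs and GPUs without rewriting the computation itself.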

Google now uses deep learning not only to identify photos, recognize spoken words, and translate from one language to another, but also to boost search results. When you bark a command into your Android phone, for instance, it must send your command to a Google data center, where it can be processed on one of those enormous networks of CPUs or GPUs. But Google has also honed its AI engine so that, in some cases, it can execute on the phone itself. "You can take a model description and run it on a mobile phone," says Google engineer Jeff Dean, "and you don't have to make any real changes to the model description or any of the code." This is how the company built its Google Translate app. Deep learning software will improve, and mobile hardware will improve as well. "The future of deep learning is on small, mobile, edge devices," says Chris Nicholson, the founder of a deep learning startup called Skymind. GPUs, for instance, are already starting to find their way onto phones, and hardware makers are always pushing to improve the speed and efficiency of CPUs.
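Dean's "take a model description and run it on a mobile phone" works because the model description is itself data that can be serialized. Below is a minimal sketch, under assumed paths and op names, of writing a graph definition to disk with tf.train.write_graph; the identity op is a hypothetical stand-in for where real model layers would go.

```python
import tensorflow as tf

# Build (or load) a model graph. The identity op is a stand-in for real
# layers; the 'input'/'output' names are illustrative assumptions.
x = tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name='input')
y = tf.identity(x, name='output')

with tf.Session() as sess:
    # Serialize the graph definition to disk. A TensorFlow runtime on a
    # phone can load this same description unchanged, which is the
    # portability Dean describes.
    tf.train.write_graph(sess.graph_def, '/tmp/model', 'graph.pbtxt')
```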
