2017-02-23

As PC World and a number of other news outlets have reported, Intel is “investing heavily” in “way out there” future technologies like neuromorphic computing. It’s an idea that could change the world.

In short, neuromorphic computing hopes to mimic the way the human brain works, replacing today’s transistor-based circuits with an architecture inspired by nerve cells or neurons — hence the name.
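To make that idea a little more concrete, here's a minimal sketch (in Python, purely for illustration and not modelled on any particular chip) of a "leaky integrate-and-fire" neuron, one of the simplest models used in neuromorphic research. The neuron accumulates input, leaks a little charge each step, and only "fires" when a threshold is crossed, staying silent the rest of the time.

```python
# A minimal sketch of a leaky integrate-and-fire (LIF) neuron.
# Parameters are illustrative, not taken from any real neuromorphic hardware.

def lif_neuron(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate incoming current, leak a little each step, and 'spike'
    when the membrane potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current   # integrate with leak
        if potential >= threshold:
            spikes.append(1)                     # fire a spike
            potential = reset                    # reset after firing
        else:
            spikes.append(0)                     # stay silent
    return spikes

# Example: a weak, steady input takes several steps to build up to a spike.
print(lif_neuron([0.3] * 10))   # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The appeal, in energy terms, is that a neuron like this only does significant work when it spikes; the rest of the time it is effectively idle.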

The benefits of such an approach go beyond speed. A 2012 paper by Charles Augustine, a scientist at Intel Circuit Research Labs, suggests that “neuromorphic designs can achieve 15X-300X lower computation energy” compared to state-of-the-art CMOS designs.

There’s nothing wrong with CMOS designs per se and, as Intel’s 7th generation Intel Core chips show, there’s still ample room for improvement. The current 14nm silicon is due to be superseded by a 10nm process in late 2017, with the first 10nm product code-named Cannon Lake.

Beyond that, Intel is investing $7 billion to prepare its Fab 42 factory for 7nm production. “It’s a statement of our belief in our future products and in our manufacturing capability,” says Ann Kelleher, the vice president and general manager of Intel’s Technology and Manufacturing Group. “Moore’s Law is alive and well.”

So where do neuromorphic chips fit into all this? Augustine’s paper suggests that the technology is ideal for analysis-based tasks such as data-sensing, adaptive AI, associative memory and cognitive computing.

Companies like Intel, Google, IBM and Facebook have had some success with cognitive computing using traditional architectures. But power consumption is always a problem. The Chinese Sunway TaihuLight supercomputer, for example, draws 15 megawatts at peak power. That’s roughly the output of five large wind turbines.

Low-power neuromorphic hardware might be perfect for future supercomputing systems, especially as we transition from the petascale era (machines measured in petaflops, i.e. a quadrillion calculations per second) to an exascale one (machines measured in exaflops, a quintillion calculations per second).
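For a rough sense of what that jump means, here's a back-of-the-envelope sketch (illustrative figures only, not benchmarks of any real machine): an exascale system is a thousand times faster than a petascale one.

```python
# Back-of-the-envelope scale comparison: illustrative numbers, not real benchmarks.
PETAFLOPS = 10**15   # one quadrillion floating-point operations per second
EXAFLOPS = 10**18    # one quintillion floating-point operations per second

workload = 10**21    # a hypothetical job of one sextillion operations

for label, rate in [("1 petaflop/s machine", PETAFLOPS),
                    ("1 exaflop/s machine", EXAFLOPS)]:
    seconds = workload / rate
    print(f"{label}: {seconds:,.0f} seconds (~{seconds / 3600:,.1f} hours)")
```

The same hypothetical job drops from around 278 hours to under 20 minutes, which is why the power bill for that extra speed matters so much.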

It might also be an efficient solution for analysing and processing the vast amounts of data generated by self-driving cars and sensor networks. French startup Chronocam, for example, is pursuing the idea of neuromorphic vision, a method of “sensing and processing inspired by the human eye.”
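The retina-like trick behind that kind of sensing can be sketched in a few lines: rather than streaming full frames, each pixel reports an "event" only when its brightness changes by more than a threshold. The sketch below is a rough illustration of that principle, not Chronocam's actual technology, and the frames and threshold are made up.

```python
# A minimal sketch of event-based ("neuromorphic") vision: report only the pixels
# whose brightness changed enough, instead of sending every pixel of every frame.
import numpy as np

def frame_to_events(prev_frame, new_frame, threshold=0.2):
    """Return (row, col, polarity) events for pixels that changed by more than the threshold."""
    diff = new_frame - prev_frame
    rows, cols = np.where(np.abs(diff) > threshold)
    return [(r, c, 1 if diff[r, c] > 0 else -1) for r, c in zip(rows, cols)]

# Example: two nearly identical 4x4 frames produce just two events,
# rather than 16 pixel values per frame.
prev = np.full((4, 4), 0.5)
new = prev.copy()
new[1, 2] = 0.9   # one pixel gets brighter
new[3, 0] = 0.1   # one pixel gets darker
print(frame_to_events(prev, new))   # -> [(1, 2, 1), (3, 0, -1)]
```

Because nothing is transmitted or processed when the scene isn't changing, the data rate and the power draw stay low.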

Neuromorphic chips might unleash a revolution in robotics, allowing future automatons to process the world around them while remaining power efficient. Nervana’s application-specific integrated circuits (ASICs) have already proved their worth in the cloud, training deep learning algorithms.

They might also transform the Internet of Things (IoT). Neuromorphic processors could enable all manner of gadgets, wearables and sensors to operate intelligently and independently without demanding a constant network connection to provide their thinking power.

As exciting as all this potential is, don’t expect neuromorphic technology to appear inside your smartphone or Amazon Echo any time soon. We’ve still got a long way to go before a silicon brain can rival the one in your head.
