Intel & Google and the Quest for Nervana

Intel is making further advancements in artificial intelligence with the announcement of the industry’s first neural network processor (NNP) designed for broad commercial enterprise use of AI -- the Intel Nervana Neural Network Processor. The Intel Nervana NNP is specifically designed for AI and optimized for deep learning applications. (Credit: Intel Corporation)

A couple of news stories written for RedShark last week highlight the brave new world of silicon development.

“Intel unleashes the industry’s first dedicated neural network silicon” looks at the announcement of the Nervana Neural Network Processor. Apart from Nervana being an atrocious name, the chip is remarkable for Intel’s claim that it will help deliver a 100x reduction in the time taken to train a deep learning model by 2020, compared with current GPU-based approaches. The piece then takes a quick look at the latest developments in quantum computing and neuromorphic chips from the company.

“Hidden chip in Pixel 2 is a huge leap forward in video technology” examines the Pixel Visual Core, a previously unheralded chip inside Google’s new phone that provides on-board machine learning for taking HDR+ pictures. That’s put in the context of some new research looking at the growth of on-device silicon.

Here’s a snippet:

On-board silicon is a very definite growing trend in the mobile world as, in the words of ABI Research, companies fear being “left behind as AI rockets beyond news headlines to both practical application and market interest.”

And ABI is name-dropped here rather than anyone else because it has identified a definite growing trend in on-device machine learning (which it also refers to as edge processing and/or edge learning) in everything from earbuds to cameras. It reckons that only about 3% of AI processing is being done on-device in 2017, with the balance taking place in the cloud, but that share will rocket to 49% by 2022.

That’s 2.7 billion devices. Which is a lot.

Ain’t it just.