Friday, 23 June 2017

Rice U. scientists slash computations for ‘deep learning’ by Jade Boyd via @RiceUNews

In a recent study, Rice University researchers adapted a widely used technique for rapid data lookup to slash the amount of computation required for deep learning. "This applies to any deep learning architecture, and the technique scales sublinearly, which means that the larger the deep neural network to which this is applied, the more the savings in computations there will be," said lead researcher Anshumali Shrivastava.
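The "technique for rapid data lookup" here is locality-sensitive hashing (LSH): rather than computing every neuron in a layer, the network hashes the incoming activation vector and computes only the neurons whose weight vectors land in the same bucket, since colliding vectors tend to have large inner products. Below is a minimal single-table sketch of that idea using SimHash (signed random projections); all names and parameters are illustrative, and the Rice implementation differs in its details.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N, K = 64, 10_000, 12      # input width, neurons in layer, hash bits

# SimHash: K random hyperplanes; a vector's signature records which
# side of each hyperplane it falls on. Similar vectors tend to collide.
planes = rng.standard_normal((K, D))

def simhash(v):
    code = 0
    for bit in (planes @ v) > 0:
        code = (code << 1) | int(bit)
    return code

# Hash every neuron's weight vector into a bucket once, up front.
W = rng.standard_normal((N, D))
buckets = {}
for i in range(N):
    buckets.setdefault(simhash(W[i]), []).append(i)

def sparse_forward(x):
    """Compute only the neurons whose weights collide with x.

    Collision under SimHash correlates with a large inner product,
    so these are the neurons most likely to fire strongly; the rest
    are treated as zero without ever being computed.
    """
    active = buckets.get(simhash(x), [])
    out = np.zeros(N)
    if active:
        out[active] = np.maximum(W[active] @ x, 0.0)  # ReLU on the sampled few
    return out, active

x = rng.standard_normal(D)
_, active = sparse_forward(x)
print(f"computed {len(active)} of {N} neurons for this input")
```

With 12 hash bits there are 4,096 buckets, so only a handful of the 10,000 neurons are touched for a given input. A real system would query several independent hash tables and union the results so that important neurons are not missed, and would tune the number of bits and tables as the layer grows, which is where the scaling behavior described in the quote comes from.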

This is very interesting. Some of us can already relate to hashing, as we remember using the technique on Oracle and Teradata.
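The database use is the same primitive in a different setting: build a hash table on a join key once, then probe it in near-constant time per row instead of scanning. A toy hash join, with made-up table contents, looks like this:

```python
# Toy hash join: build a hash table on the smaller table's key,
# then probe it once per row of the larger table.
orders = [(1, 'alice'), (2, 'bob'), (3, 'carol')]           # (order_id, customer)
items  = [(10, 1, 'book'), (11, 1, 'pen'), (12, 3, 'mug')]  # (item_id, order_id, product)

build = {order_id: customer for order_id, customer in orders}   # build phase
joined = [(item_id, build[oid], product)                        # probe phase
          for item_id, oid, product in items if oid in build]
print(joined)  # [(10, 'alice', 'book'), (11, 'alice', 'pen'), (12, 'carol', 'mug')]
```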
