10.25394/PGS.7539974.v1
Vineeth Chigarangappa Rangadhamap
Vineeth
Chigarangappa Rangadhamap
Fast Computation of Wide Neural Networks
Purdue University Graduate School
2019
Deep learning
Tensor Ring Nets (TRN) Compression
Targeted Rank Selection
Tensorflow Profiling
Quick Network Training
Layer Runtimes
Knowledge Representation and Machine Learning
2019-01-02 18:16:15
Thesis
https://hammer.purdue.edu/articles/thesis/Fast_Computation_of_Wide_Neural_Networks/7539974
<div>Recent advances in artificial neural networks have demonstrated performance of deep neural networks comparable with humans on tasks such as image classification, natural language processing, and time series classification. These large-scale networks pose an enormous computational challenge, especially on resource-constrained devices. The current work proposes a targeted-rank-based framework for accelerated computation of wide neural networks. It investigates the problem of rank selection for tensor ring nets to achieve optimal network compression. When applied to a state-of-the-art wide residual network, namely WideResnet, the framework yielded a significant reduction in computation time. The optimally compressed non-parallel WideResnet is almost 2x faster to compute on a CPU, with only 5% degradation in accuracy, compared to a non-parallel implementation of the uncompressed WideResnet.</div>
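The compression idea named in the abstract, tensor ring nets, replaces a large weight tensor with a closed ring of small low-rank cores, so the stored parameter count drops from the product of the mode sizes to a sum over cores. A minimal NumPy sketch of that accounting (the mode sizes, uniform rank, and helper names below are illustrative assumptions, not the thesis's actual configuration or rank-selection method):

```python
import numpy as np

def tensor_ring_params(dims, ranks):
    """Parameter count of a tensor ring factorization: core k has shape
    (r_k, n_k, r_{k+1}), with the ring closed by r_{d+1} = r_1."""
    d = len(dims)
    return sum(ranks[k] * dims[k] * ranks[(k + 1) % d] for k in range(d))

def tr_reconstruct(cores):
    """Reference contraction of a ring of cores back into the full tensor."""
    full = cores[0]  # shape (r1, n1, r2)
    for core in cores[1:]:
        # contract the shared bond rank between consecutive cores
        full = np.tensordot(full, core, axes=([-1], [0]))
    # close the ring: trace over the first and last bond dimensions
    return np.trace(full, axis1=0, axis2=-1)

# Illustrative example: a 64x64 dense weight reshaped to modes (4,)*6,
# factorized with a uniform ring rank of 3.
dims, rank = [4] * 6, 3
cores = [np.random.randn(rank, n, rank) for n in dims]
full = tr_reconstruct(cores)             # shape (4, 4, 4, 4, 4, 4)
dense_params = int(np.prod(dims))        # 4096 uncompressed parameters
tr_params = tensor_ring_params(dims, [rank] * len(dims))  # 6 * (3*4*3) = 216
```

With these toy numbers the ring stores 216 parameters versus 4096 for the dense weight, roughly a 19x reduction; the thesis's targeted rank selection is about choosing the per-core ranks so that such compression costs as little accuracy as possible.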