Fast Computation of Wide Neural Networks
Vineeth Chigarangappa Rangadhamap
DOI: 10.25394/PGS.7539974.v1
https://hammer.purdue.edu/articles/thesis/Fast_Computation_of_Wide_Neural_Networks/7539974

Abstract: Recent advances in artificial neural networks have demonstrated that deep neural networks can achieve performance comparable to humans on tasks such as image classification, natural language processing, and time series classification. These large-scale networks pose an enormous computational challenge, especially on resource-constrained devices. The current work proposes a targeted-rank-based framework for accelerated computation of wide neural networks. It investigates the problem of rank selection for tensor ring nets to achieve optimal network compression. When applied to a state-of-the-art wide residual network, namely WideResNet, the framework yielded a significant reduction in computation time. The optimally compressed non-parallel WideResNet is almost 2x faster to compute on a CPU, with only 5% degradation in accuracy, compared to a non-parallel implementation of the uncompressed WideResNet.

Published: 2019-01-02 18:16:15

Keywords: Deep learning; Tensor Ring Nets (TRN); Compression; Targeted Rank Selection; Tensorflow; Profiling; Quick Network Training; Layer Runtimes; Knowledge Representation and Machine Learning
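
For orientation, the tensor ring (TR) format the abstract refers to represents a full weight tensor by a closed chain of small third-order cores, so the parameter count is governed by the chosen ranks rather than the full tensor size. The sketch below is a minimal NumPy illustration of TR reconstruction and the resulting parameter count; the core shapes, the uniform rank of 3, and the helper name `tr_reconstruct` are assumptions for illustration, not the thesis's actual implementation or rank-selection method.

```python
import numpy as np

def tr_reconstruct(cores):
    """Reconstruct a full tensor from tensor-ring (TR) cores.

    cores[k] has shape (r_k, d_k, r_{k+1}), with the last rank equal
    to the first so the chain of cores closes into a ring (hence the
    trace at the end).
    """
    full = cores[0]  # shape (r0, d0, r1)
    for core in cores[1:]:
        # contract the trailing rank index of `full` with the
        # leading rank index of the next core
        full = np.tensordot(full, core, axes=([-1], [0]))
    # full now has shape (r0, d0, ..., d_{N-1}, r0); close the ring
    return np.trace(full, axis1=0, axis2=-1)

# toy example: a 4x6x8 weight tensor with uniform TR-rank 3
dims = [4, 6, 8]
ranks = [3, 3, 3, 3]  # r0..r3, with r3 == r0 closing the ring
cores = [np.random.randn(ranks[k], dims[k], ranks[k + 1])
         for k in range(len(dims))]

W = tr_reconstruct(cores)
print(W.shape)                        # (4, 6, 8)
n_full = np.prod(dims)                # 192 parameters uncompressed
n_cores = sum(c.size for c in cores)  # 162 parameters in the cores
print(n_full, n_cores)
```

In this toy case the savings are modest, but for the wide convolutional layers targeted in the thesis the full tensor grows multiplicatively with the layer dimensions while the TR cores grow only additively, which is where the rank choice determines the compression-accuracy trade-off the abstract describes.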