
Factorization tricks for LSTM networks


(Submitted on 31 Mar 2017)

Abstract: We present two simple ways of reducing the number of parameters and accelerating the training of large Long Short-Term Memory (LSTM) networks: the first is "matrix factorization by design", which factors the LSTM matrix into the product of two smaller matrices, and the second is partitioning of the LSTM matrix, its inputs, and its states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to state-of-the-art perplexity. On the One Billion Word Benchmark we improve single-model perplexity down to 24.29.
Comments: accepted to ICLR 2017 Workshop
Subjects: Computation and Language (cs.CL); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)
Cite as: arXiv:1703.10722 [cs.CL]
 (or arXiv:1703.10722v1 [cs.CL] for this version)
From: Oleksii Kuchaiev
[v1] Fri, 31 Mar 2017 00:50:37 GMT (474kb,D)
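
Both tricks are easy to express in code. Below is a minimal PyTorch sketch of the two ideas as described in the abstract, not the authors' implementation: the class names, the rank and num_groups hyperparameters, and the bias placement are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FactorizedLSTMCell(nn.Module):
    """Matrix factorization by design: the 4n x (p + n) LSTM weight
    matrix W is replaced by the product W2 @ W1 of two smaller
    matrices with inner dimension r, where r < p, n. (Sketch, not the
    paper's released code.)"""
    def __init__(self, input_size, hidden_size, rank):
        super().__init__()
        # W1 projects the concatenated [x, h] down to rank r ...
        self.w1 = nn.Linear(input_size + hidden_size, rank, bias=False)
        # ... and W2 projects back up to the four gate pre-activations.
        self.w2 = nn.Linear(rank, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state
        gates = self.w2(self.w1(torch.cat([x, h], dim=-1)))
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class GroupLSTMCell(nn.Module):
    """Group partitioning: the LSTM matrix, its inputs, and its states
    are split into k independent groups, each handled by its own
    smaller cell, cutting the weight count by roughly a factor of k."""
    def __init__(self, input_size, hidden_size, num_groups):
        super().__init__()
        assert input_size % num_groups == 0
        assert hidden_size % num_groups == 0
        self.k = num_groups
        self.cells = nn.ModuleList(
            nn.LSTMCell(input_size // num_groups, hidden_size // num_groups)
            for _ in range(num_groups)
        )

    def forward(self, x, state):
        h, c = state
        # Split the input, hidden state, and cell state into k groups.
        groups = zip(self.cells, x.chunk(self.k, dim=-1),
                     h.chunk(self.k, dim=-1), c.chunk(self.k, dim=-1))
        outs = [cell(xi, (hi, ci)) for cell, xi, hi, ci in groups]
        h = torch.cat([hi for hi, _ in outs], dim=-1)
        c = torch.cat([ci for _, ci in outs], dim=-1)
        return h, (h, c)

# Example: a rank-128 factorized cell in place of a full 512-unit cell.
cell = FactorizedLSTMCell(input_size=512, hidden_size=512, rank=128)
x = torch.randn(8, 512)                # batch of 8 inputs for one timestep
h0 = c0 = torch.zeros(8, 512)
h, state = cell(x, (h0, c0))
```

Parameter counts make the savings concrete: with input size p and hidden size n, a standard cell needs 4n(p + n) weights, the factorized cell needs r(p + n) + 4nr, and the k-group cell needs 4n(p + n)/k, so both variants shrink the dominant matrix multiply along with the parameter count.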
