nerv
Lua-based toolkit for high-performance deep learning

Branches: bugfix-matrixfree, fastnn, gh-pages, master, rnn, tnn, wrapped-handles
Commit log (commit message, author, date):
* removed flush_all for every mb in process_birnn (txh18, 2015-12-10)
* bug fix for recent changes in tnn (txh18, 2015-12-10)
* ... (txh18, 2015-12-10)
* bilstm_v2 did not run well (txh18, 2015-12-10)
* ... (txh18, 2015-12-09)
* ... (txh18, 2015-12-09)
* ... (txh18, 2015-12-09)
* small bug fixes for bilstm script (txh18, 2015-12-08)
* small bug fixes (txh18, 2015-12-07)
* ... (txh18, 2015-12-07)
* lm script change: user can set start_lr (txh18, 2015-12-06)
* some log changes (txh18, 2015-12-06)
* added extend_t for tnn to save GPU memory (txh18, 2015-12-06)
* small bug fix on lm train script (txh18, 2015-12-06)
* small bug fix in lm training script (txh18, 2015-12-06)
* small bug fix in lm training script (txh18, 2015-12-05)
* added twitter, added bilstmlm script, todo: test bilstmlm (txh18, 2015-12-05)
* changed lstm_t to a more standard version (txh18, 2015-12-05)
* small bug fix for lmptb.lstm_t_v2 (txh18, 2015-12-04)
* trying to use lstm_t_v2 for ptb (txh18, 2015-12-04)
* added one_sen_report to lm_process_file (txh18, 2015-12-04)
* ... (txh18, 2015-12-04)
* added testout command for lstmlm (txh18, 2015-12-04)
* added log_redirect to SUtil (txh18, 2015-12-04)
* applying dropout on lstm.h before combinerL seems bad, PPL only 95, trying another way (like in the paper) of dropout for lstm.h after combinerL (txh18, 2015-12-03)
* added two layers to lstmlm_ptb, todo: test it (txh18, 2015-12-03)
* added al_sen_start stat for lmseqreader (txh18, 2015-12-03)
* moved tnn to main nerv dir and added it to Makefile (txh18, 2015-12-03)
* ... (txh18, 2015-12-03)
* small bug fix in tnn for se_mode, todo: test it (txh18, 2015-12-02)
* added se_mode for lmseqreader, todo: check it (txh18, 2015-12-02)
* function name change in LMTrainer (txh18, 2015-12-02)
* added dropout_t layer (txh18, 2015-12-02)
* changed thres_mask function of matrix to a more standard API (txh18, 2015-12-02)
* added rand_uniform and thres_mask for cumatrix (txh18, 2015-12-01)
* got PPL 115 for ptb on h300lr1bat10wc1e-4 (txh18, 2015-12-01)
* bug fix for lstm_t layer, t not included in propagate! (txh18, 2015-11-30)
* small opt for initing tnn:clip_t (txh18, 2015-11-30)
* added ooutputGate for lstm_t (txh18, 2015-11-30)
* bug fix: tanh implementation could cause nan (txh18, 2015-11-29)
* added clip_t for tnn (txh18, 2015-11-27)
* lstm_tnn can be run, todo: testing (txh18, 2015-11-27)
* still working.. (txh18, 2015-11-26)
* working on lstm (txh18, 2015-11-26)
* changed auto-generating params, won't save in global_conf.param (txh18, 2015-11-25)
* added tanh operation for matrix (txh18, 2015-11-25)
* let affine support multiple inputs (txh18, 2015-11-24)
* added wcost for biasparam in lm_trainer (txh18, 2015-11-24)
* still working on dagL_T (txh18, 2015-11-24)
* completed layerdag_t, now testing... (txh18, 2015-11-23)