Commit message (Author, Date)
* ... (txh18, 2015-12-07)
|
* lm script change: user can set start_lr (txh18, 2015-12-06)
|
* some log change (txh18, 2015-12-06)
|
* added extend_t for tnn to save GPU memory (txh18, 2015-12-06)
|
* small bug fix on lm train script (txh18, 2015-12-06)
|
* small bug fix in lm training script (txh18, 2015-12-06)
|
* small bug fix in lm training script (txh18, 2015-12-05)
|
* added twitter, added bilstmlm script, todo: test bilstmlm (txh18, 2015-12-05)
|
* changed lstm_t to a more standard version (txh18, 2015-12-05)
|
* small bug fix for lmptb.lstm_t_v2 (txh18, 2015-12-04)
|
* trying to use lstm_t_v2 for ptb (txh18, 2015-12-04)
|
* added one_sen_report to lm_process_file (txh18, 2015-12-04)
|
* ... (txh18, 2015-12-04)
|
* added testout command for lstmlm (txh18, 2015-12-04)
|
* added log_redirect to SUtil (txh18, 2015-12-04)
|
* applying dropout on lstm.h before combinerL seems bad, PPL only 95, trying another way (like in the paper) of dropout for lstm.h after combinerL (txh18, 2015-12-03)
|
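The entry above weighs where dropout should sit relative to the LSTM output. Below is a minimal sketch of the two placements being compared, in generic NumPy rather than the NERV lstm_t/combinerL code; the combine function and variable names are illustrative assumptions, not the actual API.

```python
import numpy as np

def dropout(x, p, rng):
    # Inverted dropout: drop units with probability p, rescale the survivors.
    mask = (rng.uniform(size=x.shape) >= p) / (1.0 - p)
    return x * mask

def step_dropout_before(h, x_next, p, rng, combine):
    # Variant 1: dropout on the LSTM hidden output before it is combined
    # with the next-step input (the "before combinerL" case in the log).
    return combine(dropout(h, p, rng), x_next)

def step_dropout_after(h, x_next, p, rng, combine):
    # Variant 2: dropout applied to the combined result instead
    # (the "after combinerL" case the author switches to).
    return dropout(combine(h, x_next), p, rng)
```

Both variants take a caller-supplied combine(h, x) function, so the sketch stays agnostic about what combinerL actually computes.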
* added two layers to lstmlm_ptb, todo: test it (txh18, 2015-12-03)
|
* added al_sen_start stat for lmseqreader (txh18, 2015-12-03)
|
* moved tnn to main nerv dir and added it to Makefile (txh18, 2015-12-03)
|
* ... (txh18, 2015-12-03)
|
* small bug fix in tnn for se_mode, todo: test it (txh18, 2015-12-02)
|
* added se_mode for lmseqreader, todo: check it (txh18, 2015-12-02)
|
* function name change in LMTrainer (txh18, 2015-12-02)
|
* added dropout_t layer (txh18, 2015-12-02)
|
* changed thres_mask function of matrix to a more standard API (txh18, 2015-12-02)
|
* added rand_uniform and thres_mask for cumatrix (txh18, 2015-12-01)
|
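The rand_uniform and thres_mask pair added above is the standard recipe for building a dropout mask on the GPU: fill a matrix with uniform noise, then threshold it. A hedged NumPy sketch of that pattern follows; the real cumatrix method names and signatures may differ.

```python
import numpy as np

def make_dropout_mask(shape, keep_prob, rng):
    noise = rng.uniform(0.0, 1.0, size=shape)      # the rand_uniform step
    mask = (noise < keep_prob).astype(np.float32)  # the thres_mask step
    return mask / keep_prob                        # rescale so expectations are unchanged

rng = np.random.default_rng(0)
mask = make_dropout_mask((4, 8), keep_prob=0.8, rng=rng)
activations = rng.standard_normal((4, 8)).astype(np.float32) * mask
```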
* got PPL 115 for ptb on h300lr1bat10wc1e-4 (txh18, 2015-12-01)
|
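For context, the PPL figures quoted in these entries are word-level perplexity: the exponential of the average per-word negative log-likelihood on the evaluation text. A quick sketch of the computation (plain Python, not the NERV trainer's code):

```python
import math

def perplexity(word_log_probs):
    # word_log_probs: natural-log probability the model assigned to each
    # word of the evaluation text (including end-of-sentence tokens).
    avg_nll = -sum(word_log_probs) / len(word_log_probs)
    return math.exp(avg_nll)

# A model assigning every word probability 1/115 scores PPL 115:
print(perplexity([math.log(1.0 / 115.0)] * 1000))   # ~115.0
```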
* bug fix for lstm_t layer, t not included in propagate! (txh18, 2015-11-30)
|
* small opt for initing tnn:clip_t (txh18, 2015-11-30)
|
* added outputGate for lstm_t (txh18, 2015-11-30)
|
* bug fix: tanh implementation could cause NaN (txh18, 2015-11-29)
|
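A typical way a hand-rolled tanh kernel produces NaN is evaluating (e^x - e^-x) / (e^x + e^-x) directly, which overflows for large |x|. This may or may not be the exact bug fixed above; the sketch below only shows the general failure and the usual rewrite in terms of exp(-2|x|).

```python
import numpy as np

def tanh_naive(x):
    # For large |x|, np.exp(x) overflows to inf, and inf/inf yields NaN.
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def tanh_stable(x):
    # exp(-2|x|) can only underflow to 0, so the ratio stays finite.
    e = np.exp(-2.0 * np.abs(x))
    return np.sign(x) * (1.0 - e) / (1.0 + e)

print(tanh_naive(1000.0))   # nan (with an overflow warning)
print(tanh_stable(1000.0))  # 1.0
```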
* added clip_t for tnn (txh18, 2015-11-27)
|
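The clip_t option above presumably bounds the values propagated back through time, the usual guard against exploding gradients in BPTT. A hedged sketch of the two common flavors (NumPy; the actual tnn semantics may differ):

```python
import numpy as np

def clip_elementwise(grad, clip_t):
    # Clamp every entry of the backpropagated error to [-clip_t, clip_t].
    return np.clip(grad, -clip_t, clip_t)

def clip_by_norm(grad, max_norm):
    # Alternative: rescale the whole gradient when its L2 norm is too large.
    norm = np.linalg.norm(grad)
    return grad if norm <= max_norm else grad * (max_norm / norm)

g = np.array([3.0, -7.5, 0.2])
print(clip_elementwise(g, 5.0))   # [ 3.  -5.   0.2]
print(clip_by_norm(g, 5.0))       # rescaled so the norm is 5
```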
* lstm_tnn can be run, todo: testing (txh18, 2015-11-27)
|
* still working.. (txh18, 2015-11-26)
|
* working on lstm (txh18, 2015-11-26)
|
* changed auto-generating params, won't save in global_conf.param (txh18, 2015-11-25)
|
* added tanh operation for matrix (txh18, 2015-11-25)
|
* let affine support multiple inputs (txh18, 2015-11-24)
|
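An affine layer that accepts multiple inputs, as described above, typically keeps one weight matrix per input and sums the projections before adding the bias. A hedged NumPy sketch of that shape (names are illustrative, not the NERV affine layer API):

```python
import numpy as np

def multi_input_affine(inputs, weights, bias):
    # inputs:  list of (batch, d_i) arrays; weights: matching (d_i, d_out)
    # arrays; bias: (d_out,).  Output is the sum of all projections plus bias.
    out = bias
    for x, w in zip(inputs, weights):
        out = out + x @ w
    return out

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal((2, 5)), rng.standard_normal((2, 3))
w1, w2 = rng.standard_normal((5, 4)), rng.standard_normal((3, 4))
y = multi_input_affine([x1, x2], [w1, w2], np.zeros(4))   # shape (2, 4)
```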
* added wcost for biasparam in lm_trainer (txh18, 2015-11-24)
|
* still working on dagL_T (txh18, 2015-11-24)
|
* completed layerdag_t, now testing... (txh18, 2015-11-23)
|
* small bug fix (txh18, 2015-11-23)
|\
| * Merge branch 'master' of github.com:Nerv-SJTU/nerv (Determinant, 2015-11-23)
| |\
| * | correct the use of self.gconf (Determinant, 2015-11-23)
| | |
* | | merge in recent changes about param updates (txh18, 2015-11-23)
|\ \ \
| | | | Merge branch 'master' into txh18/rnnlm
| * | | small bug fix (txh18, 2015-11-23)
| | | |
| * | | Merge remote-tracking branch 'upstream/master' (txh18, 2015-11-23)
| |\ \ \
| | | |/
| | |/|
| | * | doc change (TianxingHe, 2015-11-23)
| | |/
| | * add cflag __NERV_FUTURE_CUDA_7 (Determinant, 2015-11-23)
| | |
| | * use consistent update calc; clean up code; no need for `direct_update` (Determinant, 2015-11-21)
| | |