Commit message  Author  Date
* applying dropout on lstm.h before combinerL seems bad, PPL only 95, trying an...  txh18  2015-12-03
* added two layers to lstmlm_ptb, todo:test it  txh18  2015-12-03
* added al_sen_start stat for lmseqreader  txh18  2015-12-03
* moved tnn to main nerv dir and added it to Makefile  txh18  2015-12-03
* ...  txh18  2015-12-03
* small bug fix in tnn for se_mode, todo:test it  txh18  2015-12-02
* added se_mode for lmseqreader, todo:check it  txh18  2015-12-02
* function name change in LMTrainer  txh18  2015-12-02
* added dropout_t layer  txh18  2015-12-02
* changed thres_mask function of matrix to a more standard api  txh18  2015-12-02
* added rand_uniform and thres_mask for cumatrix  txh18  2015-12-01
* got PPL115 for ptb on h300lr1bat10wc1e-4  txh18  2015-12-01
* bug fix for lstm_t layer, t not included in propagate!  txh18  2015-11-30
* small opt for initing tnn:clip_t  txh18  2015-11-30
* added outputGate for lstm_t  txh18  2015-11-30
* bug fix: tanh implementation could cause nan  txh18  2015-11-29
* added clip_t for tnn  txh18  2015-11-27
* lstm_tnn can be run, todo:testing  txh18  2015-11-27
* still working..  txh18  2015-11-26
* working on lstm  txh18  2015-11-26
* changed auto-generating params, won't save in global_conf.param  txh18  2015-11-25
* added tanh operation for matrix  txh18  2015-11-25
* let affine support multiple inputs  txh18  2015-11-24
* added wcost for biasparam in lm_trainer  txh18  2015-11-24
* still working on dagL_T  txh18  2015-11-24
* completed layerdag_t, now testing...  txh18  2015-11-23
* small bug fix  txh18  2015-11-23
|\
| * Merge branch 'master' of github.com:Nerv-SJTU/nerv  Determinant  2015-11-23
| |\
| * | correct the use of self.gconf  Determinant  2015-11-23
* | | merge in recent changes about param updates  txh18  2015-11-23
|\ \ \
| * | | small bug fix  txh18  2015-11-23
| * | | Merge remote-tracking branch 'upstream/master'  txh18  2015-11-23
| |\ \ \
| | | |/
| | |/|
| | * | doc change  TianxingHe  2015-11-23
| | |/
| | * add cflag __NERV_FUTURE_CUDA_7  Determinant  2015-11-23
| | * use consistent update calc; clean up code; no need for `direct_update`  Determinant  2015-11-21
| | * Merge pull request #12 from cloudygoose/txh18/rnnlm  Ted Yin  2015-11-18
| | |\
| | * \ Merge pull request #10 from cloudygoose/txh18/rnnlm  Ted Yin  2015-11-16
| | |\ \
| * | \ \ Merge branch 'txh18/rnnlm' of github.com:cloudygoose/nerv  txh18  2015-11-16
| |\ \ \ \
| | |/ / /
| |/| | |
* | | | | completed gate_fff layer  txh18  2015-11-23
* | | | | implementing GateFFF layer  txh18  2015-11-23
* | | | | added has_param api for param_repo  txh18  2015-11-20
* | | | | complete auto-generate params  txh18  2015-11-20
* | | | | working on automatic parameter for layers  txh18  2015-11-20
* | | | | changed work_dir setting  txh18  2015-11-18
| |_|_|/
|/| | |
* | | | small coding style change  txh18  2015-11-18
* | | | h300 and h400 worked well, log added  txh18  2015-11-18
* | | | switch to kernel update  txh18  2015-11-17
* | | | bug fix for select_linear layer-by-layer update  txh18  2015-11-17
* | | | added atomicAdd for select_linear update, however, the result still seems unr...  txh18  2015-11-17
* | | | using atomicAdd for select_linear update  txh18  2015-11-17