| Commit message | Author | Age |
* | small bug fix on lm train script | txh18 | 2015-12-06 |
* | small bug fix in lm training script | txh18 | 2015-12-06 |
* | small bug fix in lm training script | txh18 | 2015-12-05 |
* | added twitter, added bilstmlm script, todo: test bilstmlm | txh18 | 2015-12-05 |
* | changed lstm_t to a more standard version | txh18 | 2015-12-05 |
* | small bug fix for lmptb.lstm_t_v2 | txh18 | 2015-12-04 |
* | trying to use lstm_t_v2 for ptb | txh18 | 2015-12-04 |
* | added one_sen_report to lm_process_file | txh18 | 2015-12-04 |
* | ... | txh18 | 2015-12-04 |
* | added testout command for lstmlm | txh18 | 2015-12-04 |
* | added log_redirect to SUtil | txh18 | 2015-12-04 |
* | applying dropout on lstm.h before combinerL seems bad, PPL only 95, trying an... | txh18 | 2015-12-03 |
* | added two layers to lstmlm_ptb, todo:test it | txh18 | 2015-12-03 |
* | added al_sen_start stat for lmseqreader | txh18 | 2015-12-03 |
* | moved tnn to main nerv dir and added it to Makefile | txh18 | 2015-12-03 |
* | ... | txh18 | 2015-12-03 |
* | small bug fix in tnn for se_mode, todo:test it | txh18 | 2015-12-02 |
* | added se_mode for lmseqreader, todo:check it | txh18 | 2015-12-02 |
* | function name change in LMTrainer | txh18 | 2015-12-02 |
* | added dropout_t layer | txh18 | 2015-12-02 |
* | changed thres_mask function of matrix to a more standard api | txh18 | 2015-12-02 |
* | added rand_uniform and thres_mask for cumatrix | txh18 | 2015-12-01 |
* | got PPL115 for ptb on h300lr1bat10wc1e-4 | txh18 | 2015-12-01 |
* | bug fix for lstm_t layer, t not included in propagate! | txh18 | 2015-11-30 |
* | small opt for initing tnn:clip_t | txh18 | 2015-11-30 |
* | added outputGate for lstm_t | txh18 | 2015-11-30 |
* | bug fix: tanh implementation could cause NaN | txh18 | 2015-11-29 |
* | added clip_t for tnn | txh18 | 2015-11-27 |
* | lstm_tnn can be run, todo:testing | txh18 | 2015-11-27 |
* | still working.. | txh18 | 2015-11-26 |
* | working on lstm | txh18 | 2015-11-26 |
* | changed auto-generating params, will not save in global_conf.param | txh18 | 2015-11-25 |
* | added tanh operation for matrix | txh18 | 2015-11-25 |
* | let affine support multiple inputs | txh18 | 2015-11-24 |
* | added wcost for biasparam in lm_trainer | txh18 | 2015-11-24 |
* | still working on dagL_T | txh18 | 2015-11-24 |
* | completed layerdag_t, now testing... | txh18 | 2015-11-23 |
* | small bug fix | txh18 | 2015-11-23 |
|\
| * | Merge branch 'master' of github.com:Nerv-SJTU/nerv | Determinant | 2015-11-23 |
| |\
| * | | correct the use of self.gconf | Determinant | 2015-11-23 |
* | | | merge in recent changes about param updates | txh18 | 2015-11-23 |
|\ \ \
| * | | | small bug fix | txh18 | 2015-11-23 |
| * | | | Merge remote-tracking branch 'upstream/master' | txh18 | 2015-11-23 |
| |\ \ \
| | | |/
| | |/| |
| | * | | doc change | TianxingHe | 2015-11-23 |
| | |/
| | * | add cflag __NERV_FUTURE_CUDA_7 | Determinant | 2015-11-23 |
| | * | use consistent update calc; clean up code; no need for `direct_update` | Determinant | 2015-11-21 |
| | * | Merge pull request #12 from cloudygoose/txh18/rnnlm | Ted Yin | 2015-11-18 |
| | |\
| | * \ | Merge pull request #10 from cloudygoose/txh18/rnnlm | Ted Yin | 2015-11-16 |
| | |\ \
| * | \ \ | Merge branch 'txh18/rnnlm' of github.com:cloudygoose/nerv | txh18 | 2015-11-16 |
| |\ \ \ \
| | |/ / /
| |/| | |
* | | | | | completed gate_fff layer | txh18 | 2015-11-23 |