| Commit message | Author | Age |
|---|---|---|
| ... | | |
| added al_sen_start stat for lmseqreader | txh18 | 2015-12-03 |
| moved tnn to main nerv dir and added it to Makefile | txh18 | 2015-12-03 |
| ... | txh18 | 2015-12-03 |
| small bug fix in tnn for se_mode, todo: test it | txh18 | 2015-12-02 |
| added se_mode for lmseqreader, todo: check it | txh18 | 2015-12-02 |
| function name change in LMTrainer | txh18 | 2015-12-02 |
| added dropout_t layer | txh18 | 2015-12-02 |
| changed thres_mask function of matrix to a more standard API | txh18 | 2015-12-02 |
| added rand_uniform and thres_mask for cumatrix (see sketch 1 below) | txh18 | 2015-12-01 |
| got PPL 115 for PTB on h300lr1bat10wc1e-4 | txh18 | 2015-12-01 |
| bug fix for lstm_t layer: t not included in propagate! | txh18 | 2015-11-30 |
| small opt for initializing tnn:clip_t | txh18 | 2015-11-30 |
| added outputGate for lstm_t | txh18 | 2015-11-30 |
| bug fix: tanh implementation could cause NaN (see sketch 2 below) | txh18 | 2015-11-29 |
| added clip_t for tnn (see sketch 3 below) | txh18 | 2015-11-27 |
| lstm_tnn can be run, todo: testing | txh18 | 2015-11-27 |
| still working... | txh18 | 2015-11-26 |
| working on lstm | txh18 | 2015-11-26 |
| changed auto-generating params: they will not be saved in global_conf.param | txh18 | 2015-11-25 |
| added tanh operation for matrix | txh18 | 2015-11-25 |
| let affine support multiple inputs | txh18 | 2015-11-24 |
| added wcost for biasparam in lm_trainer | txh18 | 2015-11-24 |
| still working on dagL_T | txh18 | 2015-11-24 |
| completed layerdag_t, now testing... | txh18 | 2015-11-23 |
| small bug fix | txh18 | 2015-11-23 |
| Merge branch 'master' of github.com:Nerv-SJTU/nerv | Determinant | 2015-11-23 |
| correct the use of self.gconf | Determinant | 2015-11-23 |
| merge in recent changes about param updates (Merge branch 'master' into txh18/rnnlm) | txh18 | 2015-11-23 |
| small bug fix | txh18 | 2015-11-23 |
| Merge remote-tracking branch 'upstream/master' | txh18 | 2015-11-23 |
| doc change | TianxingHe | 2015-11-23 |
| add cflag __NERV_FUTURE_CUDA_7 | Determinant | 2015-11-23 |
| use consistent update calc; clean up code; no need for `direct_update` | Determinant | 2015-11-21 |
| Merge pull request #12 from cloudygoose/txh18/rnnlm (add atomicAdd for cukernel) | Ted Yin | 2015-11-18 |
| Merge pull request #10 from cloudygoose/txh18/rnnlm (add optimization for parameter update) | Ted Yin | 2015-11-16 |
| Merge branch 'txh18/rnnlm' of github.com:cloudygoose/nerv | txh18 | 2015-11-16 |
| completed gate_fff layer | txh18 | 2015-11-23 |
| implementing GateFFF layer | txh18 | 2015-11-23 |
| added has_param API for param_repo | txh18 | 2015-11-20 |
| completed auto-generating params | txh18 | 2015-11-20 |
| working on automatic parameters for layers | txh18 | 2015-11-20 |
| changed work_dir setting | txh18 | 2015-11-18 |
| small coding style change | txh18 | 2015-11-18 |
| h300 and h400 worked well, log added | txh18 | 2015-11-18 |
| switch to kernel update | txh18 | 2015-11-17 |
| bug fix for select_linear layer-by-layer update | txh18 | 2015-11-17 |
| added atomicAdd for select_linear update; however, the result still seems unreproducible, so I changed the select_linear update back to line-by-line (see sketch 4 below) | txh18 | 2015-11-17 |
| using atomicAdd for select_linear update | txh18 | 2015-11-17 |
| added small opt: use mmatrix in lm_trainer and reader | txh18 | 2015-11-17 |
| coding style change | txh18 | 2015-11-17 |
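
Sketch 1 (rand_uniform + thres_mask, 2015-12-01): these two cumatrix primitives are exactly what the dropout_t layer added the following day needs: fill a buffer with uniform random values, then threshold it into a mask. The code below is a hedged illustration of that pattern in CUDA, using cuRAND for the uniform fill; the kernel name, its signature, and the inverted-dropout scaling are assumptions, not NERV's actual API.

```cuda
// Sketch of a rand_uniform + thres_mask pipeline for a dropout mask.
// Build with: nvcc sketch1.cu -lcurand
#include <cuda_runtime.h>
#include <curand.h>

// thres_mask-style kernel (illustrative name and signature):
// mask[i] = 0 where r[i] < thres, else scale.
// With thres = dropout rate p and scale = 1/(1-p), multiplying
// activations elementwise by the mask gives inverted dropout.
__global__ void thres_mask(const float *r, float *mask,
                           float thres, float scale, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        mask[i] = (r[i] < thres) ? 0.0f : scale;
}

int main() {
    const int n = 1024;
    const float p = 0.5f;                  // dropout rate (assumed value)
    float *d_rand, *d_mask;
    cudaMalloc(&d_rand, n * sizeof(float));
    cudaMalloc(&d_mask, n * sizeof(float));

    // rand_uniform-style fill via cuRAND's host API.
    curandGenerator_t gen;
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
    curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
    curandGenerateUniform(gen, d_rand, n);  // values in (0, 1]

    thres_mask<<<(n + 255) / 256, 256>>>(d_rand, d_mask, p, 1.0f / (1.0f - p), n);
    cudaDeviceSynchronize();

    curandDestroyGenerator(gen);
    cudaFree(d_rand);
    cudaFree(d_mask);
    return 0;
}
```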
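Sketch 2 (tanh NaN fix, 2015-11-29): evaluating tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)) literally overflows single-precision expf near x = 88.7, and (inf - 0) / (inf + 0) is inf/inf = NaN. The standard repair is to factor out the dominant exponential so every expf argument is non-positive. The log does not show which formulation NERV used; this is a minimal sketch of the failure mode and a stable variant, with illustrative names.

```cuda
// Naive vs. numerically stable tanh on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

// Naive formulation: expf(x) overflows to inf for x > ~88.7f,
// and (inf - 0) / (inf + 0) evaluates to NaN.
__device__ float tanh_naive(float x) {
    float e = expf(x), ei = expf(-x);
    return (e - ei) / (e + ei);
}

// Stable formulation: the expf argument is always <= 0, so nothing overflows.
__device__ float tanh_stable(float x) {
    float e = expf(-2.0f * fabsf(x));      // in (0, 1]
    float t = (1.0f - e) / (1.0f + e);
    return x >= 0.0f ? t : -t;
}

__global__ void tanh_demo(const float *in, float *naive, float *stable, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        naive[i]  = tanh_naive(in[i]);
        stable[i] = tanh_stable(in[i]);
    }
}

int main() {
    const int n = 4;
    float h_in[n] = {0.5f, 10.0f, 100.0f, -100.0f}, h_naive[n], h_stable[n];
    float *d_in, *d_naive, *d_stable;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_naive, n * sizeof(float));
    cudaMalloc(&d_stable, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);
    tanh_demo<<<1, 32>>>(d_in, d_naive, d_stable, n);
    cudaMemcpy(h_naive, d_naive, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(h_stable, d_stable, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; i++)   // x = 100 prints nan for the naive version
        printf("x=%7.1f naive=%f stable=%f\n", h_in[i], h_naive[i], h_stable[i]);
    return 0;
}
```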
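Sketch 3 (clip_t, 2015-11-27): clipping time-propagated gradients is the usual guard against exploding gradients in BPTT. Assuming clip_t is a symmetric elementwise clamp to [-c, c] (the log does not specify the semantics), the kernel reduces to one fminf/fmaxf pair per element; the signature is guessed.

```cuda
#include <cuda_runtime.h>

// Hypothetical clip_t kernel: grad[i] <- min(max(grad[i], -c), c).
// Launch over the n elements of a gradient matrix, e.g.
//   clip_t<<<(n + 255) / 256, 256>>>(d_grad, 5.0f, n);
__global__ void clip_t(float *grad, float c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        grad[i] = fminf(fmaxf(grad[i], -c), c);
}
```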
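Sketch 4 (atomicAdd for select_linear, 2015-11-17): select_linear is an embedding lookup, so its gradient update is a scatter-add: several positions in a batch can select the same word id and must accumulate into the same weight row. atomicAdd makes that accumulation race-free, but float addition is not associative and atomicAdd imposes no ordering, so sums can differ bit-for-bit between runs, which is consistent with the "unreproducible" note and the revert to a sequential update. The kernel below is a hypothetical illustration, not NERV's cukernel code.

```cuda
#include <cuda_runtime.h>

// One thread per (batch position, embedding dimension) pair, e.g.
//   select_linear_grad<<<dim3(batch, (dim + 255) / 256), 256>>>(...);
// Plain "+=" on grad_w would race when two batch positions share a
// word id; atomicAdd is safe but applies the additions in an
// unspecified order, so float results are not bit-reproducible.
__global__ void select_linear_grad(const float *grad_out, // [batch x dim]
                                   const int *word_ids,   // [batch]
                                   float *grad_w,         // [vocab x dim]
                                   int batch, int dim) {
    int b = blockIdx.x;
    int d = blockIdx.y * blockDim.x + threadIdx.x;
    if (b < batch && d < dim)
        atomicAdd(&grad_w[word_ids[b] * dim + d], grad_out[b * dim + d]);
}
```

A deterministic middle ground is to fix the summation order, for example by sorting positions by word id and doing a segmented reduction, at the cost of extra kernel work.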