Commit log: message (author, date)
...
* Merge branch 'master' of github.com:Nerv-SJTU/nerv (Determinant, 2015-11-23)
* correct the use of self.gconf (Determinant, 2015-11-23)
* merge in recent changes about param updates (txh18, 2015-11-23)
  Merge branch 'master' into txh18/rnnlm
* small bug fix (txh18, 2015-11-23)
* Merge remote-tracking branch 'upstream/master' (txh18, 2015-11-23)
* doc change (TianxingHe, 2015-11-23)
* add cflag __NERV_FUTURE_CUDA_7 (Determinant, 2015-11-23)
* use consistent update calc; clean up code; no need for `direct_update` (Determinant, 2015-11-21)
* Merge pull request #12 from cloudygoose/txh18/rnnlm (Ted Yin, 2015-11-18)
  add atomicAdd for cukernel
* Merge pull request #10 from cloudygoose/txh18/rnnlm (Ted Yin, 2015-11-16)
  add optimization for parameter update
* Merge branch 'txh18/rnnlm' of github.com:cloudygoose/nerv (txh18, 2015-11-16)
* completed gate_fff layer (txh18, 2015-11-23)
* implementing GateFFF layer (txh18, 2015-11-23)
* added has_param api for param_repo (txh18, 2015-11-20)
* complete auto-generate params (txh18, 2015-11-20)
* working on automatic parameters for layers (txh18, 2015-11-20)
* changed work_dir setting (txh18, 2015-11-18)
* small coding style change (txh18, 2015-11-18)
* h300 and h400 worked well, log added (txh18, 2015-11-18)
* switch to kernel update (txh18, 2015-11-17)
* bug fix for select_linear layer-by-layer update (txh18, 2015-11-17)
* added atomicAdd for select_linear update; however, the result still seems unreproducible, so I changed the select_linear layer update back to line-by-line (txh18, 2015-11-17)
* using atomicAdd for select_linear update (txh18, 2015-11-17)
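The two atomicAdd commits above note that results were not reproducible from run to run. A plausible explanation (an assumption here, not stated in the log) is that floating-point addition is not associative, and a parallel atomicAdd reduction fixes no accumulation order, so the rounded total can differ between runs even on identical inputs. A minimal Python sketch of the underlying effect:

```python
# Floating-point addition is not associative: the same three addends
# summed in a different order can round to different results. Parallel
# atomicAdd reductions accumulate in a nondeterministic order, so the
# final sum can vary run to run even with identical inputs.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # cancellation happens first, then 1.0 is added -> 1.0
right = a + (b + c)   # 1.0 is absorbed into -1e16 by rounding -> 0.0

print(left, right)    # 1.0 0.0
```

Falling back to a serial, line-by-line update (as the commit above does) fixes the accumulation order and restores reproducibility, at the cost of parallelism.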
* added small opt: use mmatrix in lm_trainer and reader (txh18, 2015-11-17)
* coding style change (txh18, 2015-11-17)
* added LOG-tnn-h400 LOG (txh18, 2015-11-16)
* change updateEI to update_by_err_input (txh18, 2015-11-16)
* coding style changes (txh18, 2015-11-16)
* changed the updates for affine layer, now just use update will be okay (txh18, 2015-11-16)
  Merge branch 'txh18/rnnlm'
* unified param updates; now direct_update is the same speed as undirect_update (txh18, 2015-11-16)
* ... (txh18, 2015-11-16)
* ... (txh18, 2015-11-16)
* used os.clock() for timer (txh18, 2015-11-16)
* fixed direct update, did not know the result (txh18, 2015-11-16)
* added timer (txh18, 2015-11-15)
* merge lr schedule change (txh18, 2015-11-15)
  Merge branch 'txh18/rnnlm' of github.com:Nerv-SJTU/nerv into txh18/rnnlm
* got good PPL for H400... (cloudygoose, 2015-11-15)
* added msr_sc set (txh18, 2015-11-15)
* small bug: lr_half (txh18, 2015-11-13)
* ... (txh18, 2015-11-13)
* added random seed (txh18, 2015-11-13)
* added loadstring (txh18, 2015-11-13)
* saving param file for every iter (txh18, 2015-11-13)
* change ppl_net to ppl_all (txh18, 2015-11-13)
* added wcost for select_linear layer (txh18, 2015-11-12)
* cleaning files... (txh18, 2015-11-12)
* get good PPL for h300, see m-tests/LOG-tnn-h300 (txh18, 2015-11-12)
* added a little debug info in reader (txh18, 2015-11-11)
* got good result when batch_size=1, strange! (txh18, 2015-11-11)
* bug fix: changed zero-filling across borders (txh18, 2015-11-10)