author    Determinant <ted.sybil@gmail.com>    2016-03-13 17:02:58 +0800
committer Determinant <ted.sybil@gmail.com>    2016-03-13 17:02:58 +0800
commit    5d8f596e8fc5be4538f033405079584e4be00f38 (patch)
tree      3ea1080087dbb4c2d0b3c2f84504539e405ca793
parent    93eb84aca23526959b76401fd6509f151a589e9a (diff)
correct a typo (alpha-2)
-rw-r--r--  tutorial/howto_pretrain_from_tnet.rst  23
1 file changed, 12 insertions, 11 deletions
diff --git a/tutorial/howto_pretrain_from_tnet.rst b/tutorial/howto_pretrain_from_tnet.rst
index 7636478..b37a1a7 100644
--- a/tutorial/howto_pretrain_from_tnet.rst
+++ b/tutorial/howto_pretrain_from_tnet.rst
@@ -17,6 +17,7 @@ How to Use a Pre-trained Model from TNet
- To convert a TNet DNN model file:
::
+
# compile the tool written in C++:
g++ -o tnet_to_nerv <nerv_home>/speech/htk_io/tools/tnet_to_nerv.cpp
# convert the model (the third argument indicates the initial number used in naming the parameters)
@@ -31,18 +32,18 @@ How to Use a Pre-trained Model from TNet
- Create a copy of ``<nerv_home>/speech/htk_io/tools/nerv_to_tnet.lua``.
- - Modify the list named ``lnames`` to list the names of the layers you want to
- put into the output TNet parameter file in order. You may ask why the
- NERV-to-TNet conversion is so cumbersome. This is because TNet nnet is a
- special case of the more general NERV toolkit -- it only allows stacked DNNs
- and therefore TNet-to-NERV conversion is lossless but the other direction
- is not. Your future NERV network may have multiple branches and that's
- why you need to specify how to select and "stack" your layers in the TNet
- parameter output.
+ - Modify the list named ``lnames`` to list the names of the layers you want to
+ put into the output TNet parameter file in order. You may ask why the
+ NERV-to-TNet conversion is so cumbersome. This is because TNet nnet is a
+ special case of the more general NERV toolkit -- it only allows stacked DNNs
+ and therefore TNet-to-NERV conversion is lossless but the other direction
+ is not. Your future NERV network may have multiple branches and that's
+ why you need to specify how to select and "stack" your layers in the TNet
+ parameter output.
- - Do the conversion by:
+ - Do the conversion by:
- ::
+ ::
- <nerv_home>/install/bin/nerv --use-cpu nerv_to_tnet.lua <your_network_config>.lua <your_trained_params>.nerv <path_to_converted>.nnet
+ <nerv_home>/install/bin/nerv --use-cpu nerv_to_tnet.lua <your_network_config>.lua <your_trained_params>.nerv <path_to_converted>.nnet
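
For readers following the patched tutorial, the two conversion directions could be exercised roughly as below. This is only a sketch: the input/output file names and the argument order of ``tnet_to_nerv`` (beyond the third argument being the initial number used in naming the parameters, as the comment in the first hunk states) are assumptions, not something confirmed by the hunks shown here; the compile command and the ``nerv_to_tnet.lua`` invocation are taken from the diff itself. The blank line the first hunk adds after ``::`` is what reStructuredText requires before an indented literal block, which is why the patch inserts it.

::

    # compile the TNet-to-NERV converter (command from the first hunk)
    g++ -o tnet_to_nerv <nerv_home>/speech/htk_io/tools/tnet_to_nerv.cpp
    # hypothetical invocation: TNet model in, NERV parameter file out,
    # third argument = initial number used in naming the parameters
    ./tnet_to_nerv final.nnet converted.nerv 0
    # reverse direction, as shown in the second hunk of this patch
    <nerv_home>/install/bin/nerv --use-cpu nerv_to_tnet.lua <your_network_config>.lua <your_trained_params>.nerv <path_to_converted>.nnet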