Diffstat (limited to 'tutorial/howto_pretrain_from_tnet.rst')
-rw-r--r--  tutorial/howto_pretrain_from_tnet.rst  48
1 file changed, 48 insertions, 0 deletions
diff --git a/tutorial/howto_pretrain_from_tnet.rst b/tutorial/howto_pretrain_from_tnet.rst
new file mode 100644
index 0000000..7636478
--- /dev/null
+++ b/tutorial/howto_pretrain_from_tnet.rst
@@ -0,0 +1,48 @@
+How to Use a Pre-trained Model from TNet
+========================================
+
+:author: Ted Yin (mfy43) <ted.sybil@gmail.com>
+:abstract: Instructions on how to convert a pre-trained TNet model to the NERV
+           format, train the converted model, and finally convert it back to
+           the TNet format for subsequent decoding.
+
+- Note: this tutorial is the counterpart of "Plan B" for decoding in *How to
+  Use a Pre-trained nnet Model from Kaldi*. For more complete information,
+  please refer to that tutorial.
+
+- Note: in this tutorial, we use the following notation to denote the directory prefix:
+
+  - ``<nerv_home>``: the path to NERV (the location of the outermost directory ``nerv``)
+
+- To convert a TNet DNN model file:
+
+  ::
+
+    # compile the conversion tool written in C++:
+    g++ -o tnet_to_nerv <nerv_home>/speech/htk_io/tools/tnet_to_nerv.cpp
+    # convert the model (the third argument is the initial number used in
+    # naming the parameters)
+    ./tnet_to_nerv <path_to_tnet_nn>.nnet <path_to_converted>.nerv 0
+
+- Apply the method above to convert your global transformation file and network
+ file to NERV chunk files respectively.
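+
+  For example (the file names below are only illustrative placeholders):
+
+  ::
+
+    # hypothetical invocations: convert the global transformation and the main
+    # network separately, producing two NERV chunk files
+    ./tnet_to_nerv <path_to_global_transf>.nnet <path_to_converted_transf>.nerv 0
+    ./tnet_to_nerv <path_to_tnet_nn>.nnet <path_to_converted_network>.nerv 0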
+
+- Train the converted parameters. A network configuration file similar to the
+  one used in the Kaldi tutorial can be found at
+  ``<nerv_home>/nerv/examples/swb_baseline2.lua``.
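+
+  The converted chunk files should be loaded as the initial parameters of the
+  network. Below is a minimal sketch of the relevant part of the configuration;
+  the field name ``initialized_param`` follows the example configurations
+  shipped with NERV and is an assumption here -- please verify it against your
+  copy of ``swb_baseline2.lua``:
+
+  ::
+
+    -- hypothetical excerpt of the network configuration (Lua): point the
+    -- global settings to the chunk files converted in the previous steps
+    gconf = {
+        batch_size = 256,  -- example value; keep whatever the baseline uses
+        initialized_param = {
+            "<path_to_converted_transf>.nerv",   -- global transformation chunk
+            "<path_to_converted_network>.nerv",  -- main network chunk
+        },
+    }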
+
+- Create a copy of ``<nerv_home>/speech/htk_io/tools/nerv_to_tnet.lua``.
+
+  - Modify the list named ``lnames`` so that it lists, in order, the names of
+    the layers you want to put into the output TNet parameter file. You may
+    ask why the NERV-to-TNet conversion is so cumbersome. This is because a
+    TNet nnet is a special case of the more general NERV networks -- it only
+    allows stacked DNNs -- so the TNet-to-NERV conversion is lossless but the
+    other direction is not. Your NERV network may have multiple branches, and
+    that is why you need to specify how to select and "stack" your layers in
+    the TNet parameter output (see the sketch below).
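+
+    The layer names in the following sketch are purely hypothetical; use the
+    names defined in your own network configuration:
+
+    ::
+
+      -- layers to be stacked into the TNet file, listed from input to output
+      lnames = {"affine0", "sigmoid0",
+                "affine1", "sigmoid1",
+                "affine2"}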
+
+ - Do the conversion by:
+
+    ::
+
+      <nerv_home>/install/bin/nerv --use-cpu nerv_to_tnet.lua <your_network_config>.lua <your_trained_params>.nerv <path_to_converted>.nnet
+