author     Determinant <[email protected]>   2016-03-13 16:18:36 +0800
committer  Determinant <[email protected]>   2016-03-13 16:18:36 +0800
commit     93eb84aca23526959b76401fd6509f151a589e9a (patch)
tree       f9abef4f3bc9b49190ec2ec775344d5cdb52388c /tutorial
parent     ddc4545050b41d12cfdc19cea9ba31c940d3d537 (diff)
add TNet tutorial; support converting global transf from TNet format
Diffstat (limited to 'tutorial')
-rw-r--r--  tutorial/howto_pretrain_from_tnet.rst | 48
1 file changed, 48 insertions, 0 deletions
diff --git a/tutorial/howto_pretrain_from_tnet.rst b/tutorial/howto_pretrain_from_tnet.rst
new file mode 100644
index 0000000..7636478
--- /dev/null
+++ b/tutorial/howto_pretrain_from_tnet.rst
@@ -0,0 +1,48 @@
+How to Use a Pre-trained Model from TNet
+========================================
+
+:author: Ted Yin (mfy43) <[email protected]>
+:abstract: Instructions on how to convert a pre-trained TNet model to the NERV
+    format, train the converted model, and finally convert it back to the TNet
+    format for subsequent decoding.
+
+- Note: this tutorial is the counterpart to the "Plan B" decoding approach
+  described in *How to Use a Pre-trained nnet Model from Kaldi*. For more
+  complete information, please refer to that tutorial.
+
+- Note: in this tutorial, we use the following notation to denote the
+  directory prefix:
+
+  - ``<nerv_home>``: the path to NERV (the location of the outermost
+    directory ``nerv``)
+
+- To convert a TNet DNN model file:
+
+  ::
+
+      # compile the conversion tool (written in C++)
+      g++ -o tnet_to_nerv <nerv_home>/speech/htk_io/tools/tnet_to_nerv.cpp
+      # convert the model (the third argument gives the initial number used
+      # in naming the parameters)
+      ./tnet_to_nerv <path_to_tnet_nn>.nnet <path_to_converted>.nerv 0
+
+- Apply the method above to convert your global transformation file and your
+  network file into two separate NERV chunk files.
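+
+  For instance (a sketch -- the ``.nnet``/``.nerv`` file names below are
+  hypothetical placeholders for your own files)::
+
+      # convert the global transformation and the network; vary the third
+      # argument (the initial parameter-naming number) if the parameter
+      # names of the two files must not clash
+      ./tnet_to_nerv global_transf.nnet global_transf.nerv 0
+      ./tnet_to_nerv network.nnet network.nerv 0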
+
+- Train the converted parameters. A network configuration file similar to the
+  one used in the Kaldi tutorial can be found at
+  ``<nerv_home>/nerv/examples/swb_baseline2.lua``.
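+
+  A sketch of the training invocation, assuming the ``asr_trainer.lua`` entry
+  script shipped with NERV is used (the exact script and config paths are
+  assumptions -- adjust them to your setup)::
+
+      <nerv_home>/install/bin/nerv <nerv_home>/nerv/examples/asr_trainer.lua \
+          <your_network_config>.lua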
+
+- Create a copy of ``<nerv_home>/speech/htk_io/tools/nerv_to_tnet.lua``.
+
+  - Modify the list named ``lnames`` so that it lists, in order, the names of
+    the layers you want to put into the output TNet parameter file. You may
+    wonder why the NERV-to-TNet conversion is so cumbersome: a TNet nnet is a
+    special case of the more general NERV network -- it only allows stacked
+    (purely sequential) DNNs -- so the TNet-to-NERV conversion is lossless,
+    but the other direction is not. Your future NERV network may have
+    multiple branches, which is why you need to specify how to select and
+    "stack" your layers in the TNet parameter output.
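+
+    A minimal sketch of what the edited list might look like (the layer names
+    are hypothetical -- use the names from your own network configuration)::
+
+        -- layers to export, in the order they are stacked in the TNet file
+        lnames = {"affine0", "sigmoid0",
+                  "affine1", "sigmoid1",
+                  "affine2", "sigmoid2"}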
+
+ - Do the conversion by:
+
+ ::
+
+      <nerv_home>/install/bin/nerv --use-cpu nerv_to_tnet.lua \
+          <your_network_config>.lua <your_trained_params>.nerv \
+          <path_to_converted>.nnet
+