From 93eb84aca23526959b76401fd6509f151a589e9a Mon Sep 17 00:00:00 2001
From: Determinant
Date: Sun, 13 Mar 2016 16:18:36 +0800
Subject: add TNet tutorial; support converting global transf from TNet format

---
 tutorial/howto_pretrain_from_tnet.rst | 48 +++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100644 tutorial/howto_pretrain_from_tnet.rst

diff --git a/tutorial/howto_pretrain_from_tnet.rst b/tutorial/howto_pretrain_from_tnet.rst
new file mode 100644
index 0000000..7636478
--- /dev/null
+++ b/tutorial/howto_pretrain_from_tnet.rst
@@ -0,0 +1,48 @@
+How to Use a Pre-trained Model from TNet
+========================================
+
+:author: Ted Yin (mfy43)
+:abstract: Instructions on how to convert a pre-trained TNet model to the NERV
+           format, train the converted model, and finally convert it back to
+           the TNet format for subsequent decoding.
+
+- Note: this tutorial is the counterpart to "Plan B" of decoding in *How to Use
+  a Pre-trained nnet Model from Kaldi*. For more complete information, please
+  refer to that tutorial.
+
+- Note: in this tutorial, we use the following notation to denote the directory
+  prefix:
+
+  - ``<nerv_home>``: the path of NERV (the location of the outermost ``nerv``
+    directory)
+
+- To convert a TNet DNN model file:
+
+  ::
+
+    # compile the conversion tool written in C++:
+    g++ -o tnet_to_nerv <nerv_home>/speech/htk_io/tools/tnet_to_nerv.cpp
+    # convert the model (the third argument sets the initial number used in
+    # naming the parameters):
+    ./tnet_to_nerv <input>.nnet <output>.nerv 0
+
+- Apply the method above to convert your global transformation file and your
+  network file to NERV chunk files respectively (a consolidated walkthrough is
+  sketched at the end of this tutorial).
+
+- Train the converted parameters. A network configuration file similar to the
+  one used in the Kaldi tutorial can be found at
+  ``<nerv_home>/nerv/examples/swb_baseline2.lua``.
+
+- Create a copy of ``<nerv_home>/speech/htk_io/tools/nerv_to_tnet.lua``.
+
+  - Modify the list named ``lnames`` so that it lists, in order, the names of
+    the layers you want to put into the output TNet parameter file. You may
+    ask why the NERV-to-TNet conversion is so cumbersome. This is because a
+    TNet nnet is a special case of the more general NERV network -- it only
+    allows stacked DNNs, so the TNet-to-NERV conversion is lossless while the
+    other direction is not. Your future NERV network may have multiple
+    branches, which is why you need to specify how to select and "stack" your
+    layers in the TNet parameter output.
+
+  - Do the conversion by:
+
+    ::
+
+      <nerv_home>/install/bin/nerv --use-cpu nerv_to_tnet.lua <config>.lua <model>.nerv <output>.nnet
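+
+- A consolidated walkthrough of the conversion steps above is sketched below.
+  The file names (``global.transf``, ``final.nnet`` and their ``.nerv``
+  counterparts) are hypothetical placeholders, and the starting index ``4``
+  for the second conversion is an assumption: since the third argument sets
+  the initial number used in naming the parameters, giving the network a
+  later starting index should keep its parameter names from clashing with
+  those of the global transformation when the two chunk files are used
+  together.
+
+  ::
+
+    # compile the converter once
+    g++ -o tnet_to_nerv <nerv_home>/speech/htk_io/tools/tnet_to_nerv.cpp
+    # convert the global transformation, naming its parameters from index 0
+    ./tnet_to_nerv global.transf global_transf.nerv 0
+    # convert the network itself, starting at a later index (assumed here: 4)
+    ./tnet_to_nerv final.nnet final.nerv 4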
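+
+- To close the loop, the back-conversion might look as follows, assuming the
+  training step produced ``trained.nerv`` and the network configuration file
+  is ``config.lua`` (again, both names are hypothetical; remember to edit
+  ``lnames`` in your copy of ``nerv_to_tnet.lua`` first):
+
+  ::
+
+    <nerv_home>/install/bin/nerv --use-cpu nerv_to_tnet.lua config.lua trained.nerv trained.nnet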