How to Use a Pre-trained Model from TNet
========================================

:author: Ted Yin (mfy43)
:abstract: Instructions on how to convert a pre-trained TNet model to the
           NERV format, train the converted model and finally convert it
           back to the TNet format for subsequent decoding.

- Note: this tutorial is the counterpart to "Plan B" of decoding in *How to
  Use a Pre-trained nnet Model from Kaldi*. For more complete information,
  please refer to that tutorial.

- Note: in this tutorial, we use the following notation to denote the
  directory prefix:

  - ``<nerv_home>``: the path of NERV (the location of the outermost
    directory ``nerv``)

- To convert a TNet DNN model file:

  ::

    # compile the tool written in C++:
    g++ -o tnet_to_nerv <nerv_home>/speech/htk_io/tools/tnet_to_nerv.cpp
    # convert the model (the third argument indicates the initial number
    # used in naming the parameters)
    ./tnet_to_nerv <tnet_model>.nnet <nerv_model>.nerv 0

- Apply the method above to convert your global transformation file and your
  network file to NERV chunk files respectively (a sketch is given at the
  end of this tutorial).

- Train the converted parameters. A network configuration file similar to
  the one used in the Kaldi tutorial can be found at
  ``<nerv_home>/nerv/examples/swb_baseline2.lua`` (a training sketch is
  given at the end of this tutorial).

- Create a copy of ``<nerv_home>/speech/htk_io/tools/nerv_to_tnet.lua``.

- Modify the list named ``lnames`` so that it lists, in order, the names of
  the layers you want to put into the output TNet parameter file (a sketch
  is given at the end of this tutorial). You may ask why the NERV-to-TNet
  conversion is so cumbersome. This is because a TNet nnet is a special case
  of the more general NERV network -- it only allows stacked DNNs, so the
  TNet-to-NERV conversion is lossless but the other direction is not. Your
  future NERV network may have multiple branches, and that is why you need
  to specify how to select and "stack" your layers in the TNet parameter
  output.

- Do the conversion by:

  ::

    <nerv_home>/install/bin/nerv --use-cpu nerv_to_tnet.lua <config>.lua <nerv_model>.nerv <tnet_model>.nnet
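For concreteness, here is a minimal sketch of the TNet-to-NERV conversion of
both files. The file names ``final.feature_transform`` and ``final.nnet``
are hypothetical examples of a TNet global transformation file and network
file; substitute your own paths, and choose the starting numbers so that the
parameter names produced by the two conversions do not collide:

::

  # compile the converter once
  g++ -o tnet_to_nerv <nerv_home>/speech/htk_io/tools/tnet_to_nerv.cpp
  # convert the global transformation file; parameter naming starts at 0
  ./tnet_to_nerv final.feature_transform global_transf.nerv 0
  # convert the network file; 2 assumes the global transformation produced
  # parameters 0 and 1 -- adjust to your case
  ./tnet_to_nerv final.nnet network.nerv 2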
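The training invocation itself is not spelled out above. As a sketch,
assuming the generic trainer script ``asr_trainer.lua`` from the Kaldi
tutorial also drives this configuration (see that tutorial for the exact
command and for how the converted chunk files are referenced from the
configuration file):

::

  # train using the network configuration; the configuration is expected
  # to reference the converted .nerv chunk files
  <nerv_home>/install/bin/nerv <nerv_home>/nerv/examples/asr_trainer.lua \
      <nerv_home>/nerv/examples/swb_baseline2.lua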
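Finally, a sketch of the ``lnames`` modification in your copy of
``nerv_to_tnet.lua``. The layer names below are hypothetical; use the names
from your own network configuration, listed in the order in which the layers
should be stacked in the TNet output:

::

  -- layers to be stacked, in order, into the output TNet parameter file
  lnames = {"affine0", "sigmoid0",
            "affine1", "sigmoid1",
            "affine2", "softmax0"}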