How to Use a Pre-trained Model from TNet
========================================

:author: Ted Yin (mfy43) <[email protected]>
:abstract: Instructions on how to convert a pre-trained TNet model to the
           NERV format, train the converted model, and finally convert it
           back to the TNet format for subsequent decoding.

- Note: this tutorial is the counterpart to "Plan B" of decoding in *How to Use
  a Pre-trained nnet Model from Kaldi*. For more complete information, please
  refer to that tutorial.

- Note: in this tutorial, we use the following notation to denote the
  directory prefix:

  - ``<nerv_home>``: the path of NERV (the location of the outermost ``nerv``
    directory)

- To convert a TNet DNN model file:

  ::

     # compile the conversion tool (written in C++):
     g++ -o tnet_to_nerv <nerv_home>/speech/htk_io/tools/tnet_to_nerv.cpp
     # convert the model (the third argument indicates the initial number
     # used in naming the parameters)
     ./tnet_to_nerv <path_to_tnet_nn>.nnet <path_to_converted>.nerv 0

- Apply the method above to convert your global transformation file and your
  network file to NERV chunk files, respectively.
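
  For instance, the two conversions might look like this (a sketch with
  hypothetical filenames; the different starting indices assume the tool
  names parameters by a running index, so starting the second conversion at
  a higher number keeps the names in the two chunk files from colliding)::

     ./tnet_to_nerv global_transf.nnet global_transf.nerv 0
     ./tnet_to_nerv final.nnet final.nerv 2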

- Train the converted parameters. A network configuration file similar to
  the one used in the Kaldi tutorial can be found at
  ``<nerv_home>/nerv/examples/swb_baseline2.lua``.
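
  The converted chunk files are then used as the initial parameters for
  training. A minimal sketch, assuming your configuration follows the
  pattern of the SWB example, where a ``gconf`` table lists the parameter
  files in ``initialized_param`` (the filenames are hypothetical)::

     gconf = {lrate = 0.8, wcost = 1e-6, momentum = 0.9,
              cumat_type = nerv.CuMatrixFloat,
              mmat_type = nerv.MMatrixFloat,
              -- the chunk files converted from TNet above
              initialized_param = {"global_transf.nerv", "final.nerv"}}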

- Create a copy of ``<nerv_home>/speech/htk_io/tools/nerv_to_tnet.lua``.

    - Modify the list named ``lnames`` to specify, in order, the names of the
      layers you want to put into the output TNet parameter file. You may ask
      why the NERV-to-TNet conversion is so cumbersome. This is because a
      TNet nnet is a special case of the more general NERV network -- it only
      allows stacked DNNs, so the TNet-to-NERV conversion is lossless but the
      other direction is not. Your future NERV network may have multiple
      branches, which is why you need to specify how to select and "stack"
      your layers in the TNet parameter output.
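
      For illustration, ``lnames`` might look like the following (a sketch
      only; the layer names are hypothetical and must match the names used
      in your own network configuration)::

         -- layers are written to the TNet file in this order
         lnames = {"affine0", "sigmoid0",
                   "affine1", "sigmoid1",
                   "affine2"}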

    - Do the conversion by:

      ::

         <nerv_home>/install/bin/nerv --use-cpu nerv_to_tnet.lua <your_network_config>.lua <your_trained_params>.nerv <path_to_converted>.nnet