Diffstat (limited to 'nerv/doc/nerv_nn.md')
-rw-r--r-- | nerv/doc/nerv_nn.md | 43 |
1 files changed, 22 insertions, 21 deletions
diff --git a/nerv/doc/nerv_nn.md b/nerv/doc/nerv_nn.md
index c57447d..63537fb 100644
--- a/nerv/doc/nerv_nn.md
+++ b/nerv/doc/nerv_nn.md
@@ -1,19 +1,19 @@
-#The Nerv NN Package#
+# The Nerv NN Package
 Part of the [Nerv](../README.md) toolkit.
-##Description##
-###Class hierarchy###
+## Description
+### Class hierarchy
 it contains __nerv.LayerRepo__, __nerv.ParamRepo__, and __nerv.DAGLayer__(inherits __nerv.Layer__).
-###Class hierarchy and their members###
-####nerv.ParamRepo####
+### Class hierarchy and their members
+#### nerv.ParamRepo
 Get parameter object by ID.
 * `table param_table` Contains the mapping of parameter ID to parameter file(__nerv.ChunkFile__)
 * __nerv.LayerRepo__ Get layer object by ID.
 * `table layers` Contains the mapping of layer ID to layer object. objects.
-####__nerv.DAGLayer__####
+#### __nerv.DAGLayer__
 Inherits __nerv.Layer__.
 * `layers`: __table__, a mapping from a layer ID to its "ref". A ref is a structure that contains reference to space allocations and other info of the layer.
 * `inputs`: __table__, a mapping from the inputs ports of the DAG layer to the input ports of the sublayer, the key is the port number, the value is `{ref, port}`.
@@ -21,17 +21,17 @@ Inherits __nerv.Layer__.
 * `parsed_conn`: __table__, a list of parsed connections, each entry is of format `{{ref_from, port_from}, {ref_to, port_to}}`.
 * `queue`: __table__, a list of "ref"s, the propagation of the DAGLayer will follow this order, and back-propagation will follow a reverse order.
-##Methods##
+## Methods
-###__nerv.ParamRepo__###
+### __nerv.ParamRepo__
-####nerv.ParamRepo:\_\_init(param\_files)####
+#### nerv.ParamRepo:\_\_init(param\_files)
 * Parameters:
 `param_files`: __table__
 * Description:
 `param_files` is a list of file names that stores parameters, the newed __ParamRepo__ will read them from file and store the mapping for future fetching.
-####nerv.Param ParamRepo.get_param(ParamRepo self, string pid, table global_conf)####
+#### nerv.Param ParamRepo.get_param(ParamRepo self, string pid, table global_conf)
 * Returns: __nerv.Layer__
 * Parameters:
@@ -41,8 +41,8 @@ Inherits __nerv.Layer__.
 * Description:
 __ParamRepo__ will find the __nerv.ChunkFile__ `pf` that contains parameter of ID `pid` and return `pf:read_chunk(pid, global_conf)`.
-###__nerv.LayerRepo__###
-####nerv.LayerRepo:\_\_init(layer\_spec, param\_repo, global\_conf)####
+### __nerv.LayerRepo__
+#### nerv.LayerRepo:\_\_init(layer\_spec, param\_repo, global\_conf)
 * Returns: __nerv.LayerRepo__.
 * Parameters:
@@ -60,7 +60,7 @@ Inherits __nerv.Layer__.
 __LayerRepo__ will merge `param_config` into `layer_config` and construct a layer by calling `layer_type(layerid, global_conf, layer_config)`.
-####nerv.LayerRepo.get\_layer(self, lid)####
+#### nerv.LayerRepo.get\_layer(self, lid)
 * Returns: __nerv.LayerRepo__, the layer with ID `lid`.
 * Parameters:
@@ -69,8 +69,8 @@ Inherits __nerv.Layer__.
 * Description:
 Returns the layer with ID `lid`.
-###nerv.DAGLayer###
-####nerv.DAGLayer:\_\_init(id, global\_conf, layer\_conf)####
+### nerv.DAGLayer
+#### nerv.DAGLayer:\_\_init(id, global\_conf, layer\_conf)
 * Returns: __nerv.DAGLayer__
 * Parameters:
@@ -89,7 +89,7 @@ Inherits __nerv.Layer__.
 }})
 ```
-####nerv.DAGLayer.init(self, batch\_size)####
+#### nerv.DAGLayer.init(self, batch\_size)
 * Parameters:
 `self`: __nerv.DAGLayer__
 `batch_size`: __int__
@@ -97,7 +97,7 @@ Inherits __nerv.Layer__.
 This initialization method will allocate space for output and input matrice, and will call `init()` for each of its sub layers.
-####nerv.DAGLayer.propagate(self, input, output)####
+#### nerv.DAGLayer.propagate(self, input, output)
 * Parameters:
 `self`: __nerv.DAGLayer__
 `input`: __table__
@@ -105,7 +105,7 @@ Inherits __nerv.Layer__.
 * Description:
 The same function as __nerv.Layer.propagate__, do propagation for each layer in the order of `self.queue`.
-####nerv.DAGLayer.back\_propagate(self, next\_bp\_err, bp\_err, input, output)####
+#### nerv.DAGLayer.back\_propagate(self, next\_bp\_err, bp\_err, input, output)
 * Parameters:
 `self`: __nerv.DAGLayer__
 `next_bp_err`: __table__
@@ -115,7 +115,7 @@ Inherits __nerv.Layer__.
 * Description:
 The same function as __nerv.Layer.back_propagate__, do back-propagation for each layer in the reverse order of `self.queue`.
-####nerv.DAGLayer.update(self, bp\_err, input, output)####
+#### nerv.DAGLayer.update(self, bp\_err, input, output)
 * Parameters:
 `self`: __nerv.DAGLayer__
 `bp_err`: __table__
@@ -124,7 +124,7 @@ Inherits __nerv.Layer__.
 * Description:
 The same function as __nerv.Layer.update__, do update for each layer in the order of `self.queue`.
-##Examples##
+## Examples
 * aaa
 ```
@@ -253,4 +253,5 @@ for l = 0, 10, 1 do
 ce_last = softmaxL.total_ce
 end
 --[[end training]]--
-```
\ No newline at end of file
+```
+
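For orientation, the methods documented in the patched file chain together roughly as follows. This is a hedged sketch of the call sequence only, using the signatures shown in the diff (`ParamRepo:__init(param_files)`, `LayerRepo:__init(layer_spec, param_repo, global_conf)`, `DAGLayer:__init(id, global_conf, layer_conf)`, `init`, `propagate`, `back_propagate`, `update`); every file name, ID, and config table here is an illustrative placeholder, not taken from the Nerv distribution, and the snippet is not runnable without the Nerv toolkit itself:

```lua
-- Hypothetical sketch of the documented call sequence (requires Nerv;
-- all file names, IDs, and config tables below are placeholders).
local gconf = {}                                        -- global_conf placeholder
local param_repo = nerv.ParamRepo({"params.nerv"})      -- ParamRepo:__init(param_files)
local layer_repo = nerv.LayerRepo(layer_spec, param_repo, gconf)
local dag = nerv.DAGLayer("dag", gconf, dag_conf)       -- DAGLayer:__init(id, global_conf, layer_conf)

dag:init(batch_size)          -- allocate input/output space; init() each sublayer
dag:propagate(input, output)  -- forward pass, in the order of self.queue
dag:back_propagate(next_bp_err, bp_err, input, output)  -- reverse order of self.queue
dag:update(bp_err, input, output)                       -- update, in the order of self.queue
```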