Diffstat (limited to 'doc/nerv_layer.md')
-rw-r--r-- doc/nerv_layer.md | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/doc/nerv_layer.md b/doc/nerv_layer.md
index ac6480c..de2fb12 100644
--- a/doc/nerv_layer.md
+++ b/doc/nerv_layer.md
@@ -15,7 +15,7 @@ __nerv.Layer__ is the base class and most of its methods are abstract.
* __nerv.BiasLayer__ inherits __nerv.Layer__, both `#dim_in` and `#dim_out` are 1.
* `BiasParam bias` The bias parameter.
* __nerv.SigmoidLayer__ inherits __nerv.Layer__, both `#dim_in` and `#dim_out` are 1.
-* __nerv.SoftmaxCELayer__ inherits __nerv.Layer__, `#dim_in` is 2 and `#dim_out` is 0. `input[1]` is the input to the softmax layer, `input[2]` is the reference distribution.
+* __nerv.SoftmaxCELayer__ inherits __nerv.Layer__, `#dim_in` is 2 and `#dim_out` is -1 (optional). `input[1]` is the input to the softmax layer, `input[2]` is the reference distribution. In its `propagate(input, output)` method, if `output[1] ~= nil`, the cross entropy value will be output.
* `float total_ce` Records the accumulated cross entropy value.
* `int total_frams` Records how many frames have passed.
* `bool compressed` The reference distribution can be a one-hot format. This feature is enabled by `layer_conf.compressed`.
@@ -43,6 +43,15 @@ Check whether `#self.dim_in == len_in` and `#self.dim_out == len_out`, if violat
Abstract method.
The layer should return a list containing its parameters.
+####nerv.Layer.get\_dim(self)####
+* Returns:
+ `dim_in`: __table__.
+ `dim_out`: __table__.
+* Parameters:
+ `self`: __nerv.Layer__.
+* Description:
+ Returns `self.dim_in, self.dim_out`.
+
##Examples##
* a basic example using __Nerv__ layers for linear classification.
@@ -168,4 +177,4 @@ for l = 0, 10, 1 do
end
end
--[[end training]]--
-``` \ No newline at end of file
+```
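The `get_dim` method documented above simply returns the two dimension tables as multiple values. A minimal, self-contained sketch of that contract (the `Layer` class below is a hypothetical stand-in for illustration, not the actual NERV implementation):

```lua
-- Hypothetical minimal Layer with the get_dim contract described above.
local Layer = {}
Layer.__index = Layer

function Layer.new(dim_in, dim_out)
    local self = setmetatable({}, Layer)
    self.dim_in = dim_in    -- table of input dimensions
    self.dim_out = dim_out  -- table of output dimensions
    return self
end

-- Returns self.dim_in, self.dim_out as two values, as in the doc above.
function Layer:get_dim()
    return self.dim_in, self.dim_out
end

-- Example dimensions are made up for illustration.
local l = Layer.new({429}, {2048})
local din, dout = l:get_dim()
print(din[1], dout[1])
```

Callers can then check `#din` and `#dout` against the expected number of input and output connections, as `check_dim_len` does.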