authorDeterminant <ted.sybil@gmail.com>2015-06-22 19:01:29 +0800
committerDeterminant <ted.sybil@gmail.com>2015-06-22 19:01:29 +0800
commit2497fd9e7a0fae5ee4887890d7a312e0e08a93b8 (patch)
tree382f97575bd2df9ee6abb1662b11b279fc22d72b /doc
parent196e9b48a3541caccdffc5743001cced70667091 (diff)
major change: use luarocks to manage project
Diffstat (limited to 'doc')
-rw-r--r--doc/nerv.md17
-rw-r--r--doc/nerv_class.md36
-rw-r--r--doc/nerv_io.md113
-rw-r--r--doc/nerv_layer.md180
-rw-r--r--doc/nerv_matrix.md165
-rw-r--r--doc/nerv_nn.md256
-rw-r--r--doc/nerv_param.md27
7 files changed, 0 insertions, 794 deletions
diff --git a/doc/nerv.md b/doc/nerv.md
deleted file mode 100644
index 28411f5..0000000
--- a/doc/nerv.md
+++ /dev/null
@@ -1,17 +0,0 @@
-#The Nerv utility functions#
-Part of the [Nerv](../README.md) toolkit.
-##Methods##
-* __string = nerv.typename(obj a)__
-A registered function; the original function is `luaT_lua_typename`. For an object of a class implemented in C (like __Nerv.CuMatrix__), `type(a)` only returns `"userdata"`; in this case you can use this method to get its actual class name.
-
----
-
-* __metatable = nerv.getmetatable(string tname)__
-A registered function; the original function is `luaT_lua_getmetatable`. `tname` should be a class name that has been registered in __luaT__.
-
-* __metatable = nerv.newmetatable(string tname, string parenttname, function constructor, function destructor, function factory)__
-A registered function; the original function is `luaT_newmetatable`. It returns the metatable of the newly created class with the name `tname`.
-* __string = nerv.setmetatable(table self, string tname)__
-A registered function; the original function is `luaT_lua_setmetatable`. It assigns the metatable registered in __luaT__ under the name *tname* to the table *self*, and returns *tname*.
-* __table = nerv.get_type(string typename)__
-Returns the class found by evaluating `loadstring("return " .. typename)`.
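-
-##Examples##
-* A short sketch (not from the original manual) of `nerv.typename`; it assumes __nerv.CuMatrixFloat__ from the matrix package is available.
-
-```
-m = nerv.CuMatrixFloat(2, 2)
--- type() only sees the raw userdata created in C...
-print(type(m))          -- "userdata"
--- ...while nerv.typename() reports the class registered in luaT
-print(nerv.typename(m)) -- "nerv.CuMatrixFloat"
-```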
diff --git a/doc/nerv_class.md b/doc/nerv_class.md
deleted file mode 100644
index 99f63e7..0000000
--- a/doc/nerv_class.md
+++ /dev/null
@@ -1,36 +0,0 @@
-#The Nerv OOP#
-Part of the [Nerv](../README.md) toolkit.
-##Methods##
-* __metatable mt, metatable mpt = nerv.class(string tname, string parenttname)__
-This method creates a class named `tname` that inherits `parenttname` in __Nerv__. You can then create an instance of this class by calling `obj = tname(...)`; the `tname.__init(...)` method (if defined) will be called during construction. The metatables of the class and of its parent class are returned.
-
-##Examples##
-* This example implements a simple `nerv.Counter` class which is inherited by `nerv.BetterCounter`.
-
-```
-do
- nerv.class("nerv.Counter")
- function nerv.Counter:__init(c)
- if (c) then
- self.c = c
- else
- self.c = 0
- end
- end
-end
-do
- local mt, mpt = nerv.class("nerv.BetterCounter", "nerv.Counter")
- function nerv.BetterCounter:__init(c, bc)
- mpt.__init(self, c)
- if (bc) then
- self.bc = bc
- else
- self.bc = 0
- end
- end
-end
-c1 = nerv.Counter(1)
-print(c1.c) -- 1
-bc1 = nerv.BetterCounter(1, 1)
-print(bc1.c, bc1.bc) -- 1 1
-```
\ No newline at end of file
diff --git a/doc/nerv_io.md b/doc/nerv_io.md
deleted file mode 100644
index 07589df..0000000
--- a/doc/nerv_io.md
+++ /dev/null
@@ -1,113 +0,0 @@
-#The Nerv IO Package#
-Part of the [Nerv](../README.md) toolkit.
-
-##Description##
-The main class that the user uses to store and read parameter objects to and from files is __nerv.ChunkFile__.
-In the file, a parameter object is saved in a standard format: first comes the length (in bytes) of the object, then a table containing its meta information, and finally a data area. Below is an example text file.
-```
-[0000000000202]
-{type="nerv.ExampleP",info={message="just-a-try"},id="exampleP1"}
-3 3
-5.000000 5.000000 5.000000
-5.000000 5.000000 5.000000
-5.000000 5.000000 5.000000
-1 3
-4.000000 4.000000 4.000000
-[0000000000202]
-{type="nerv.ExampleP",info={message="just-a-try"},id="exampleP2"}
-3 3
-4.000000 4.000000 4.000000
-4.000000 4.000000 4.000000
-4.000000 4.000000 4.000000
-1 3
-3.000000 3.000000 3.000000
-```
-
-##Methods##
-* __ChunkFile ChunkFile(string fn, string mode)__
-`mode` can be `r` or `w`, for reading or writing a file. The returned __ChunkFile__ will be ready to write or read objects that follow the __nerv.Param__ interface (using `write_chunk` and `read_chunk`).
-* __void ChunkFile.write_chunk(ChunkFile self, Param p)__
-Write `p` into the file. `p:write` will be called.
-* __Param ChunkFile.read_chunk(ChunkFile self, string id, table global_conf)__
-Read the __Param__ object with ID `id` from the file `self`. It will be constructed using `__init(id, global_conf)`, and `p:read` will be called.
-* __void ChunkFile.close(ChunkFile self)__
-Close the opened file.
-
-##Examples##
-* An example showing how to use __ChunkFile__ to store and read parameter objects.
-```
-require 'io'
-do
- local mt, mpt = nerv.class('nerv.ExampleP', 'nerv.Param')
- function nerv.ExampleP:__init(id, global_conf)
- self.id = id
- self.global_conf = global_conf
- self.matrix = nerv.MMatrixFloat(3, 3)
- for i = 0, 2, 1 do
- for j = 0, 2, 1 do
- self.matrix[i][j] = 3
- end
- end
- self.bias = nerv.MMatrixFloat(1, 3)
- for i = 0, 2, 1 do
- self.bias[i] = 2;
- end
- self:set_info({message = 'just-a-try'})
- end
- function nerv.ExampleP:addOne()
- for i = 0, 2, 1 do
- for j = 0, 2, 1 do
- self.matrix[i][j] = self.matrix[i][j] + 1
- end
- end
- for i = 0, 2, 1 do
- self.bias[i] = self.bias[i] + 1
- end
- end
- function nerv.ExampleP:read(pcdata)
- self.matrix = nerv.MMatrixFloat.load(pcdata)
- self.bias = nerv.MMatrixFloat.load(pcdata)
- end
- function nerv.ExampleP:write(pfhandle)
- self.matrix:save(pfhandle)
- self.bias:save(pfhandle)
- end
-end
-global_conf = {}
-do
- local f = nerv.ChunkFile('../tmp', 'w')
- local exampleP1 = nerv.ExampleP('exampleP1', global_conf)
- local exampleP2 = nerv.ExampleP('exampleP2', global_conf)
- exampleP1:addOne()
- exampleP1:addOne()
- exampleP2:addOne()
-
- f:write_chunk(exampleP1)
- f:write_chunk(exampleP2)
- f:close()
-end
-do
- local f = nerv.ChunkFile('../tmp', 'r')
- local exampleP1 = f:read_chunk('exampleP1', global_conf)
- local exampleP2 = f:read_chunk('exampleP2', global_conf)
- f:close()
- print(exampleP1.matrix)
- print(exampleP2.matrix)
-end
-```
-
-##Developer Notes##
-* There are four classes that deal with chunk data: __nerv.ChunkFile__, __nerv.ChunkFileHandle__, __nerv.ChunkInfo__ and __nerv.ChunkData__. Below are the underlying C structs.
-```
-typedef struct ChunkFileHandle {
- FILE *fp;
-} ChunkFileHandle;
-typedef struct ChunkInfo {
- off_t offset, length;
-} ChunkInfo;
-typedef struct ChunkData {
- FILE *fp;
- char *data;
-} ChunkData;
-```
-* In __Nerv.io__, a __nerv.ChunkFile__ returned by `ChunkFile.__init` will have a member `handle`, which is a __nerv.ChunkFileHandle__.
\ No newline at end of file
diff --git a/doc/nerv_layer.md b/doc/nerv_layer.md
deleted file mode 100644
index de2fb12..0000000
--- a/doc/nerv_layer.md
+++ /dev/null
@@ -1,180 +0,0 @@
-#The Nerv Layer Package#
-Part of the [Nerv](../README.md) toolkit.
-
-##Description##
-__nerv.Layer__ is the base class and most of its methods are abstract.
-###Class hierarchy and their members###
-* __nerv.Layer__.
- * `table dim_in` It specifies the dimensions of the inputs.
- * `table dim_out` It specifies the dimensions of the outputs.
- * `string id` ID of this layer.
- * `table gconf` Stores the `global_conf`.
-* __nerv.AffineLayer__ inherits __nerv.Layer__, both `#dim_in` and `#dim_out` are 1.
- * `MatrixParam ltp` The linear transform parameter.
- * `BiasParam bp` The bias parameter.
-* __nerv.BiasLayer__ inherits __nerv.Layer__, both `#dim_in` and `#dim_out` are 1.
- * `BiasParam bias` The bias parameter.
-* __nerv.SigmoidLayer__ inherits __nerv.Layer__, both `#dim_in` and `#dim_out` are 1.
-* __nerv.SoftmaxCELayer__ inherits __nerv.Layer__; `#dim_in` is 2 and `#dim_out` is -1 (meaning any number of outputs is allowed). `input[1]` is the input to the softmax layer and `input[2]` is the reference distribution. In its `propagate(input, output)` method, if `output[1] ~= nil`, the cross entropy value will be output.
- * `float total_ce` Records the accumulated cross entropy value.
- * `int total_frames` Records how many frames have passed.
- * `bool compressed` The reference distribution can be in one-hot format. This feature is enabled by `layer_conf.compressed`.
-
-##Methods##
-* __void Layer.\_\_init(Layer self, string id, table global_conf, table layer_conf)__
-Abstract method.
-The constructor should assign `id` to `self.id`, `global_conf` to `self.gconf`, `layer_conf.dim_in` to `self.dim_in` and `layer_conf.dim_out` to `self.dim_out`. `dim_in` and `dim_out` are lists specifying the dimensions of the inputs and outputs. `layer_conf` will also include the parameters, which should be properly saved as well.
-* __void Layer.init(Layer self)__
-Abstract method.
-Initialization method; in it, the layer should do some self-checking and allocate space for intermediate results.
-* __void Layer.update(Layer self, table bp_err, table input, table output)__
-Abstract method.
-`bp_err[i]` should be the error on `output[i]`. In this method the parameters of `self` are updated.
-* __void Layer.propagate(Layer self, table input, table output)__
-Abstract method.
-Given `input` and the current parameters, propagate and store the result in `output`.
-* __void Layer.back_propagate(Layer self, table next_bp_err, table bp_err, table input, table output)__
-Abstract method.
-Calculate the error on the inputs and store them in `next_bp_err`.
-
-* __void Layer.check_dim_len(Layer self, int len_in, int len_out)__
-Check whether `#self.dim_in == len_in` and `#self.dim_out == len_out`; if violated, an error will be raised.
-* __table Layer.get_params(Layer self)__
-Abstract method.
-The layer should return a list containing its parameters.
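-
-Below is a minimal sketch of a concrete layer implementing these abstract methods. It is not part of the original API reference: the class name `nerv.DoubleLayer` is hypothetical, and the sketch assumes the `Matrix.add` semantics described in [nerv_matrix.md](nerv_matrix.md).
-
-```
-do
-    nerv.class("nerv.DoubleLayer", "nerv.Layer")
-    function nerv.DoubleLayer:__init(id, global_conf, layer_conf)
-        self.id = id
-        self.gconf = global_conf
-        self.dim_in = layer_conf.dim_in
-        self.dim_out = layer_conf.dim_out
-        self:check_dim_len(1, 1) -- exactly one input and one output
-    end
-    function nerv.DoubleLayer:init()
-        if self.dim_in[1] ~= self.dim_out[1] then
-            error("input and output dimensions do not match")
-        end
-    end
-    function nerv.DoubleLayer:propagate(input, output)
-        -- output = 1 * input + 1 * input, i.e. twice the input
-        output[1]:add(input[1], input[1], 1, 1)
-    end
-    function nerv.DoubleLayer:back_propagate(next_bp_err, bp_err, input, output)
-        -- the layer is linear with slope 2, so errors are doubled as well
-        next_bp_err[1]:add(bp_err[1], bp_err[1], 1, 1)
-    end
-    function nerv.DoubleLayer:update(bp_err, input, output)
-        -- nothing to do: this layer has no parameters
-    end
-    function nerv.DoubleLayer:get_params()
-        return {}
-    end
-end
-```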
-
-####nerv.Layer.get\_dim(self)####
-* Returns:
- `dim_in`: __table__.
- `dim_out`: __table__.
-* Parameters:
- `self`: __nerv.Layer__.
-* Description:
- Returns `self.dim_in, self.dim_out`.
-
-##Examples##
-* A basic example using __Nerv__ layers for a linear classification task.
-
-```
-require 'math'
-
-require 'layer.affine'
-require 'layer.softmax_ce'
-
---[[Example using layers, a simple two-classification problem]]--
-
-function calculate_accurate(networkO, labelM)
- sum = 0
- for i = 0, networkO:nrow() - 1, 1 do
- if (labelM[i][0] == 1 and networkO[i][0] >= 0.5) then
- sum = sum + 1
- end
- if (labelM[i][1] == 1 and networkO[i][1] >= 0.5) then
- sum = sum + 1
- end
- end
- return sum
-end
-
---[[begin global setting and data generation]]--
-global_conf = {lrate = 10,
- wcost = 1e-6,
- momentum = 0.9,
- cumat_type = nerv.CuMatrixFloat}
-
-input_dim = 5
-data_num = 100
-ansV = nerv.CuMatrixFloat(input_dim, 1)
-for i = 0, input_dim - 1, 1 do
- ansV[i][0] = math.random() - 0.5
-end
-ansB = math.random() - 0.5
-print('displaying ansV')
-print(ansV)
-print('displaying ansB(bias)')
-print(ansB)
-
-dataM = nerv.CuMatrixFloat(data_num, input_dim)
-for i = 0, data_num - 1, 1 do
- for j = 0, input_dim - 1, 1 do
- dataM[i][j] = math.random() * 2 - 1
- end
-end
-refM = nerv.CuMatrixFloat(data_num, 1)
-refM:fill(ansB)
-refM:mul(dataM, ansV, 1, 1) --refM = dataM * ansV + ansB
-
-labelM = nerv.CuMatrixFloat(data_num, 2)
-for i = 0, data_num - 1, 1 do
- if (refM[i][0] > 0) then
- labelM[i][0] = 1
- labelM[i][1] = 0
- else
- labelM[i][0] = 0
- labelM[i][1] = 1
- end
-end
---[[global setting and data generation end]]--
-
-
---[[begin network building]]--
---parameters
-affineL_ltp = nerv.LinearTransParam('AffineL_ltp', global_conf)
-affineL_ltp.trans = nerv.CuMatrixFloat(input_dim, 2)
-for i = 0, input_dim - 1, 1 do
- for j = 0, 1, 1 do
- affineL_ltp.trans[i][j] = math.random() - 0.5
- end
-end
-affineL_bp = nerv.BiasParam('AffineL_bp', global_conf)
-affineL_bp.trans = nerv.CuMatrixFloat(1, 2)
-for j = 0, 1, 1 do
- affineL_bp.trans[j] = math.random() - 0.5
-end
-
---layers
-affineL = nerv.AffineLayer('AffineL', global_conf, {['ltp'] = affineL_ltp,
- ['bp'] = affineL_bp,
- dim_in = {input_dim},
- dim_out = {2}})
-softmaxL = nerv.SoftmaxCELayer('softmaxL', global_conf, {dim_in = {2, 2},
- dim_out = {}})
-print('layers initializing...')
-affineL:init()
-softmaxL:init()
---[[network building end]]--
-
-
---[[begin space allocation]]--
-print('network input&output&error space allocation...')
-affineI = {dataM} --input to the network is data
-affineO = {nerv.CuMatrixFloat(data_num, 2)}
-softmaxI = {affineO[1], labelM}
-softmaxO = {}
-output = nerv.CuMatrixFloat(data_num, 2)
-
-affineE = {nerv.CuMatrixFloat(data_num, 2)}
---[[space allocation end]]--
-
-
---[[begin training]]--
-ce_last = 0
-for l = 0, 10, 1 do
- affineL:propagate(affineI, affineO)
- softmaxL:propagate(softmaxI, softmaxO)
- output:softmax(softmaxI[1])
-
- softmaxL:back_propagate(affineE, {}, softmaxI, softmaxO)
-
- affineL:update(affineE, affineI, affineO)
-
- if (l % 5 == 0) then
- nerv.utils.printf("training iteration %d finished\n", l)
- nerv.utils.printf("cross entropy: %.8f\n", softmaxL.total_ce - ce_last)
- ce_last = softmaxL.total_ce
- nerv.utils.printf("accurate labels: %d\n", calculate_accurate(output, labelM))
- nerv.utils.printf("total frames processed: %.8f\n", softmaxL.total_frames)
- end
-end
---[[end training]]--
-```
diff --git a/doc/nerv_matrix.md b/doc/nerv_matrix.md
deleted file mode 100644
index 22971d2..0000000
--- a/doc/nerv_matrix.md
+++ /dev/null
@@ -1,165 +0,0 @@
-#The Nerv Matrix Package#
-Part of the [Nerv](../README.md) toolkit.
-
-##Description##
-###Underlying structure###
-In the beginning it could be useful to know something about the underlying structure of a __Nerv__ matrix. Please keep in mind that matrices in __Nerv__ are row-major.
-Every matrix object is an encapsulation of a C struct that describes the attributes of this matrix.
-```
-typedef struct Matrix {
- size_t stride; /* size of a row */
- long ncol, nrow, nmax; /* dimension of the matrix, nmax is simply nrow * ncol */
- union {
- float *f;
- double *d;
- long *i;
- } data; /* pointer to actual storage */
- long *data_ref;
-} Matrix;
-```
-It is worth mentioning that `data_ref` is a counter which counts the number of references to the matrix's memory space; mind that it will also be increased when a row of the matrix is referenced (`col = m[2]`). A __Nerv__ matrix will deallocate its space when this counter is decreased to zero.
-Also note that all assignment operations in __Nerv__ are reference copies; you can use the `copy_tod` or `copy_toh` methods to copy values. Also, row assignments like `m1[2] = m2[3]` are forbidden in __Nerv__.
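-The following short sketch (not from the original text) illustrates the difference, assuming the __nerv.MMatrixFloat__ methods described below:
-```
-m = nerv.MMatrixFloat(2, 2)
-for i = 0, 1 do
-    for j = 0, 1 do
-        m[i][j] = 1
-    end
-end
-row = m[0]       -- reference copy: row shares storage with m
-row[0] = 42
-print(m[0][0])   -- 42, the write through row is visible in m
-n = nerv.MMatrixFloat(2, 2)
-n:copy_fromh(m)  -- value copy: n gets its own storage
-n[0][0] = 7
-print(m[0][0])   -- still 42
-```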
-
-###Class hierarchy###
-The class hierarchy of the matrix classes can be clearly observed in `matrix/init.c`.
-First there is an abstract base class __Nerv.Matrix__, which is inherited by __Nerv.CuMatrix__ and __Nerv.MMatrix__ (also abstract).
-Finally, there are __Nerv.CuMatrixFloat__ and __Nerv.CuMatrixDouble__, inheriting __Nerv.CuMatrix__, and __Nerv.MMatrixFloat__, __Nerv.MMatrixDouble__ and __Nerv.MMatrixInt__, inheriting __Nerv.MMatrix__.
-
-##Methods##
-Mind that usually a matrix object can only do calculations with matrices of its own type (a __Nerv.CuMatrixFloat__ can only do an add operation with another __Nerv.CuMatrixFloat__).
-In the methods description below, __Matrix__ could be __Nerv.CuMatrixFloat__, __Nerv.CuMatrixDouble__, __Nerv.MMatrixFloat__ or __Nerv.MMatrixDouble__. __Element_type__ could be `float` or `double`, respectively.
-* __Matrix = Matrix(int nrow, int ncol)__
-Returns a __Matrix__ object of `nrow` rows and `ncol` columns.
-* __Element_type = Matrix.get_elem(Matrix self, int index)__
-Returns the element value at the given index (treating the matrix as a vector). The index should be less than the `nmax` of the matrix.
-* __void Matrix.set_elem(Matrix self, int index, Element_type value)__
-Set the value at `index` to be `value`.
-* __int Matrix.ncol(Matrix self)__
-Get `ncol`, the number of columns.
-* __int Matrix.nrow(Matrix self)__
-Get `nrow`, the number of rows.
-* __int Matrix.get_dataref_value(Matrix self)__
-Returns the value (not a pointer) of the space the `data_ref` pointer points to. This function is mainly for debugging.
-* __Matrix/Element\_type, boolean Matrix.\_\_index\_\_(Matrix self, int index)__
-If the matrix has more than one row, this returns the row at `index` as a __Matrix__. Otherwise, it returns the element value at `index`.
-* __void Matrix.\_\_newindex\_\_(Matrix self, int index, Element_type value)__
-Set the element at `index` to be `value`.
----
-* __Matrix Matrix.create(Matrix a)__
-Return a new __Matrix__ of `a`'s size (the same number of rows and columns).
-* __Matrix Matrix.colsum(Matrix self)__
-Return a new __Matrix__ of size (1,`self.ncol`), which stores the sum of all columns of __Matrix__ `self`.
-* __Matrix Matrix.rowsum(Matrix self)__
-Return a new __Matrix__ of size (`self.nrow`,1), which stores the sum of all rows of __Matrix__ `self`.
-* __Matrix Matrix.rowmax(Matrix self)__
-Return a new __Matrix__ of size (`self.nrow`,1), which stores the max value of all rows of __Matrix__ `self`.
-* __Matrix Matrix.trans(Matrix self)__
-Return a new __Matrix__ of size (`self.ncol`,`self.nrow`), which stores the transpose of __Matrix__ `self`.
-* __void Matrix.copy_fromh(Matrix self, MMatrix a)__
-Copy the content of a __MMatrix__ `a` to __Matrix__ `self`, they should be of the same size.
-* __void Matrix.copy_fromd(Matrix self, CuMatrix a)__
-Copy the content of a __CuMatrix__ `a` to __Matrix__ `self`, they should be of the same size.
-* __void Matrix.copy_toh(Matrix self, MMatrix a)__
-Copy the content of the __Matrix__ `self` to a __MMatrix__ `a`.
-* __void Matrix.copy_tod(Matrix self, CuMatrix a)__
-Copy the content of the __Matrix__ `self` to a __CuMatrix__ `a`.
-* __void Matrix.add(Matrix self, Matrix ma, Matrix mb, Element_type alpha, Element_type beta)__
-It sets the content of __Matrix__ `self` to be `alpha * ma + beta * mb`. __Matrix__ `ma`, `mb` and `self` should be of the same size.
-* __void Matrix.mul(Matrix self, Matrix ma, Matrix mb, Element_type alpha, Element_type beta, [string ta, string tb])__
-It sets the content of __Matrix__ `self` to be `beta * self + alpha * ma * mb`. `ta` and `tb` are optional; if `ta` is `'T'`, `ma` will be transposed, and likewise, if `tb` is `'T'`, `mb` will be transposed.
-* __void Matrix.add_row(Matrix self, Matrix va, Element_type beta)__
-Add `beta * va` to every row of __Matrix__ `self`.
-* __void Matrix.fill(Matrix self, Element_type value)__
-Fill the content of __Matrix__ `self` with `value`.
-* __void Matrix.sigmoid(Matrix self, Matrix ma)__
-Set the elements of __Matrix__ `self` to the element-wise sigmoid of `ma`.
-* __void Matrix.sigmoid_grad(Matrix self, Matrix err, Matrix output)__
-Set the elements of __Matrix__ `self` to `self[i][j] = err[i][j] * output[i][j] * (1 - output[i][j])`. This function is used to back-propagate the error through a sigmoid layer.
-* __void Matrix.softmax(Matrix self, Matrix a)__
-Calculate a row-by-row softmax of __Matrix__ `a` and save the result in `self`.
-* __void Matrix.mul_elem(Matrix self, Matrix ma, Matrix mb)__
-Calculate element-wise multiplication of __Matrix__ `ma` and `mb`, store the result in `self`.
-* __void Matrix.log_elem(Matrix self, Matrix ma)__
-Calculate element-wise log of __Matrix__ `ma`, store the result in `self`.
-* __void Matrix.copy_rows_fromh_by_idx(Matrix self, MMatrix ma, MMatrixInt idx)__
-`idx` should be a row vector. This function copies the rows of `ma` to `self` according to `idx`; in other words, it assigns `ma[idx[i]]` to `self[i]`.
-* __void Matrix.expand_frm(Matrix self, Matrix a, int context)__
-Treat each row of `a` as a speech feature and do a feature expansion. `self` should be of size `(a.nrow, a.ncol * (context * 2 + 1))`. `self[i]` will be the concatenation `(a[i-context] a[i-context+1] ... a[i] a[i+1] ... a[i+context])`. `a[0]` and `a[a.nrow - 1]` are repeated to extend the index range.
-* __void Matrix.rearrange_frm(Matrix self, Matrix a, int step)__
-Rearrange `a` according to its feature dimension, where `step` is the length of the context. `self[i][j]` will be assigned `a[i][j / step + (j % step) * (a.ncol / step)]`. `a` and `self` should be of the same size, and `a.ncol` should be divisible by `step`.
-* __void Matrix.scale_row(Matrix self, Matrix scale)__
-Scale each column of `self` according to the vector `scale`, which should be of size `1 * self.ncol`.
-* __Matrix Matrix.\_\_add\_\_(Matrix ma, Matrix mb)__
-Returns a new __Matrix__ which stores the result of `ma+mb`.
-* __Matrix Matrix.\_\_sub\_\_(Matrix ma, Matrix mb)__
-Returns a new __Matrix__ which stores the result of `ma-mb`.
-* __Matrix Matrix.\_\_mul\_\_(Matrix ma, Matrix mb)__
-Returns a new __Matrix__ which stores the result of `ma*mb`.
-* __CuMatrix CuMatrix.new_from_host(MMatrix m)__
-Return a new __CuMatrix__ which is a copy of `m`.
-* __MMatrix CuMatrix.new_to_host(CuMatrix self)__
-Return a new __MMatrix__ which is a copy of `self`.
-* __string Matrix.\_\_tostring\_\_(Matrix self)__
-Returns a string containing values of __Matrix__ `self`.
----
-* __MMatrix MMatrix.load(ChunkData chunk)__
-Return a new __MMatrix__ loaded from the file position in `chunk`.
-* __void MMatrix.save(MMatrix self, ChunkFileHandle chunk)__
-Write `self` to the file position in `chunk`.
-* __void MMatrix.copy_from(MMatrix ma, MMatrix mb, [int b_begin, int b_end, int a_begin])__
-Copy part of `mb` (the rows with indices in `[b_begin..b_end)`) to `ma`, beginning at row index `a_begin`. If not specified, `b_begin` defaults to `0`, `b_end` to `mb.nrow`, and `a_begin` to `0`.
-
-##Examples##
-* Use `get_dataref_value` to test __Nerv__'s matrix space allocation.
-```
-m = 10
-n = 10
-fm = nerv.MMatrixFloat(m, n)
-dm = nerv.MMatrixDouble(m, n)
-for i = 0, m - 1 do
- for j = 0, n - 1 do
- t = i / (j + 1)
- fm[i][j] = t
- dm[i][j] = t
- end
-end
-print("test fm:get_dataref_value:", fm:get_dataref_value())
-print("forced a garbade collect")
-collectgarbage("collect")
-print("test fm:get_dataref_value:", fm:get_dataref_value())
-print(fm)
-print(dm)
-```
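-* A sketch of `expand_frm` with made-up values (assuming __nerv.CuMatrixFloat__ implements it as described above).
-```
-a = nerv.CuMatrixFloat(3, 2) -- 3 frames of 2-dimensional features
-for i = 0, 2 do
-    for j = 0, 1 do
-        a[i][j] = i * 10 + j
-    end
-end
-b = nerv.CuMatrixFloat(3, 6) -- ncol = 2 * (2 * 1 + 1) = 6
-b:expand_frm(a, 1) -- expand with a context of 1 frame on each side
--- b[1] is now (a[0] a[1] a[2]) = (0 1 10 11 20 21)
-print(b)
-```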
-* Test some __Matrix__ calculations.
-```
-m = 4
-n = 4
-fm = nerv.CuMatrixFloat(m, n)
-dm = nerv.CuMatrixDouble(m, n)
-for i = 0, m - 1 do
- for j = 0, n - 1 do
- -- local t = math.random(10)
- t = i / (j + 1)
- fm[i][j] = t
- dm[i][j] = t
- end
-end
-print(fm)
-fs = fm:create()
-fs:softmax(fm)
--- print(fs)
-print(dm)
-ds = dm:create()
-ds:softmax(dm)
--- print(ds)
-print(fs)
-print(fs + fs)
-print(ds + ds)
-print(fs - fs)
-print(ds - ds)
-a = fs:create()
-a:mul_elem(fs, fs)
-print(a)
-a:log_elem(fs)
-print(a)
-```
\ No newline at end of file
diff --git a/doc/nerv_nn.md b/doc/nerv_nn.md
deleted file mode 100644
index c57447d..0000000
--- a/doc/nerv_nn.md
+++ /dev/null
@@ -1,256 +0,0 @@
-#The Nerv NN Package#
-Part of the [Nerv](../README.md) toolkit.
-
-##Description##
-###Class hierarchy###
-It contains __nerv.LayerRepo__, __nerv.ParamRepo__ and __nerv.DAGLayer__ (which inherits __nerv.Layer__).
-
-###Class hierarchy and their members###
-####nerv.ParamRepo####
-Gets parameter objects by ID.
-* `table param_table` Contains the mapping from parameter IDs to parameter files (__nerv.ChunkFile__).
-
-####nerv.LayerRepo####
-Gets layer objects by ID.
-* `table layers` Contains the mapping from layer IDs to layer objects.
-
-####__nerv.DAGLayer__####
-Inherits __nerv.Layer__.
-* `layers`: __table__, a mapping from a layer ID to its "ref". A ref is a structure that contains references to the space allocations and other info of the layer.
-* `inputs`: __table__, a mapping from the input ports of the DAG layer to the input ports of its sub-layers; the key is the port number and the value is `{ref, port}`.
-* `outputs`: __table__, the counterpart of `inputs`.
-* `parsed_conn`: __table__, a list of parsed connections; each entry is of the format `{{ref_from, port_from}, {ref_to, port_to}}`.
-* `queue`: __table__, a list of "ref"s; propagation of the DAGLayer will follow this order, and back-propagation will follow the reverse order.
-
-##Methods##
-
-###__nerv.ParamRepo__###
-
-####nerv.ParamRepo:\_\_init(param\_files)####
-* Parameters:
- `param_files`: __table__
-* Description:
- `param_files` is a list of names of files that store parameters; the newly created __ParamRepo__ will read them and store the mapping for future fetching.
-
-####nerv.Param ParamRepo.get_param(ParamRepo self, string pid, table global_conf)####
-* Returns:
- __nerv.Param__
-* Parameters:
- `self`: __nerv.ParamRepo__.
- `pid`: __string__.
- `global_conf`: __table__.
-* Description:
- __ParamRepo__ will find the __nerv.ChunkFile__ `pf` that contains the parameter with ID `pid` and return `pf:read_chunk(pid, global_conf)`.
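-
-    A short usage sketch: the parameter file `'../tmp'`, the chunk ID `'exampleP1'` and the class `nerv.ExampleP` are the ones from the example in [nerv_io.md](nerv_io.md), which must have been defined and written beforehand.
-
-    ```
-    global_conf = {}
-    paramRepo = nerv.ParamRepo({'../tmp'})
-    exampleP1 = paramRepo:get_param('exampleP1', global_conf)
-    print(exampleP1.matrix)
-    ```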
-
-###__nerv.LayerRepo__###
-####nerv.LayerRepo:\_\_init(layer\_spec, param\_repo, global\_conf)####
-* Returns:
- __nerv.LayerRepo__.
-* Parameters:
- `self`: __nerv.LayerRepo__.
- `layer_spec`: __table__.
- `param_repo`: __nerv.ParamRepo__.
- `global_conf`: __table__.
-* Description:
- __LayerRepo__ will construct the layers specified in `layer_spec`. Every entry in the `layer_spec` table should follow the format below:
-
- > layer_spec : {[layer_type1] = llist1, [layer_type2] = llist2, ...}
- > llist : {layer1, layer2, ...}
- > layer : layerid = {param_config, layer_config}
- > param_config : {param1 = paramID1, param2 = paramID2}
-
- __LayerRepo__ will merge `param_config` into `layer_config` and construct a layer by calling `layer_type(layerid, global_conf, layer_config)`.
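-
-    For instance, the example at the end of this page builds its repo as follows:
-
-    ```
-    layerRepo = nerv.LayerRepo({
-        ["nerv.AffineLayer"] =
-        {
-            ["AffineL"] = {{["ltp"] = "AffineL_ltp", ["bp"] = "AffineL_bp"}, {["dim_in"] = {input_dim}, ["dim_out"] = {2}}},
-        },
-        ["nerv.SoftmaxCELayer"] =
-        {
-            ["SoftmaxL"] = {{}, {["dim_in"] = {2, 2}, ["dim_out"] = {}}}
-        },
-    }, paramRepo, global_conf)
-    ```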
-
-####nerv.LayerRepo.get\_layer(self, lid)####
-* Returns:
- __nerv.Layer__, the layer with ID `lid`.
-* Parameters:
- `self`:__nerv.LayerRepo__.
- `lid`:__string__.
-* Description:
- Returns the layer with ID `lid`.
-
-###nerv.DAGLayer###
-####nerv.DAGLayer:\_\_init(id, global\_conf, layer\_conf)####
-* Returns:
- __nerv.DAGLayer__
-* Parameters:
- `id`: __string__
- `global_conf`: __table__
- `layer_conf`: __table__
-* Description:
- The `layer_conf` should contain `layer_conf.sub_layers`, which is a __nerv.LayerRepo__ storing the sub-layers of the DAGLayer. It should also contain `layer_conf.connections`, a string-to-string mapping table describing the DAG connections. See the example below:
-
- ```
- dagL = nerv.DAGLayer("DAGL", global_conf, {["dim_in"] = {input_dim, 2}, ["dim_out"] = {}, ["sub_layers"] = layerRepo,
- ["connections"] = {
- ["<input>[1]"] = "AffineL[1]",
- ["AffineL[1]"] = "SoftmaxL[1]",
- ["<input>[2]"] = "SoftmaxL[2]",
- }})
- ```
-
-####nerv.DAGLayer.init(self, batch\_size)####
-* Parameters:
- `self`: __nerv.DAGLayer__
- `batch_size`: __int__
-* Description:
- This initialization method will allocate space for the output and input matrices, and will call `init()` for each of its sub-layers.
-
-
-####nerv.DAGLayer.propagate(self, input, output)####
-* Parameters:
- `self`: __nerv.DAGLayer__
- `input`: __table__
- `output`: __table__
-* Description:
- The same function as __nerv.Layer.propagate__; does propagation for each sub-layer in the order of `self.queue`.
-
-####nerv.DAGLayer.back\_propagate(self, next\_bp\_err, bp\_err, input, output)####
-* Parameters:
- `self`: __nerv.DAGLayer__
- `next_bp_err`: __table__
- `bp_err`: __table__
- `input`: __table__
- `output`: __table__
-* Description:
- The same function as __nerv.Layer.back_propagate__; does back-propagation for each sub-layer in the reverse order of `self.queue`.
-
-####nerv.DAGLayer.update(self, bp\_err, input, output)####
-* Parameters:
- `self`: __nerv.DAGLayer__
- `bp_err`: __table__
- `input`: __table__
- `output`: __table__
-* Description:
- The same function as __nerv.Layer.update__; does an update for each sub-layer in the order of `self.queue`.
-
-##Examples##
-* An example using __nerv.DAGLayer__ on a simple two-class classification problem.
-
-```
-require 'math'
-
-require 'layer.affine'
-require 'layer.softmax_ce'
-
---[[Example using DAGLayer, a simple two-classification problem]]--
-
---[[begin global setting and data generation]]--
-global_conf = {lrate = 10,
- wcost = 1e-6,
- momentum = 0.9,
- cumat_type = nerv.CuMatrixFloat,
- }
-
-input_dim = 5
-data_num = 100
-param_fn = "../tmp"
-ansV = nerv.CuMatrixFloat(input_dim, 1)
-for i = 0, input_dim - 1, 1 do
- ansV[i][0] = math.random() - 0.5
-end
-ansB = math.random() - 0.5
-print('displaying ansV')
-print(ansV)
-print('displaying ansB(bias)')
-print(ansB)
-
-dataM = nerv.CuMatrixFloat(data_num, input_dim)
-for i = 0, data_num - 1, 1 do
- for j = 0, input_dim - 1, 1 do
- dataM[i][j] = math.random() * 2 - 1
- end
-end
-refM = nerv.CuMatrixFloat(data_num, 1)
-refM:fill(ansB)
-refM:mul(dataM, ansV, 1, 1) --refM = dataM * ansV + ansB
-
-labelM = nerv.CuMatrixFloat(data_num, 2)
-for i = 0, data_num - 1, 1 do
- if (refM[i][0] > 0) then
- labelM[i][0] = 1
- labelM[i][1] = 0
- else
- labelM[i][0] = 0
- labelM[i][1] = 1
- end
-end
---[[global setting and data generation end]]--
-
-
---[[begin network building]]--
---parameters
-do
- local affineL_ltp = nerv.LinearTransParam('AffineL_ltp', global_conf)
- affineL_ltp.trans = nerv.CuMatrixFloat(input_dim, 2)
- for i = 0, input_dim - 1, 1 do
- for j = 0, 1, 1 do
- affineL_ltp.trans[i][j] = math.random() - 0.5
- end
- end
- local affineL_bp = nerv.BiasParam('AffineL_bp', global_conf)
- affineL_bp.trans = nerv.CuMatrixFloat(1, 2)
- for j = 0, 1, 1 do
- affineL_bp.trans[j] = math.random() - 0.5
- end
-
- local chunk = nerv.ChunkFile(param_fn, 'w')
- chunk:write_chunk(affineL_ltp)
- chunk:write_chunk(affineL_bp)
- chunk:close()
-
- paramRepo = nerv.ParamRepo({param_fn})
-end
-
---layers
-layerRepo = nerv.LayerRepo({
- ["nerv.AffineLayer"] =
- {
- ["AffineL"] = {{["ltp"] = "AffineL_ltp", ["bp"] = "AffineL_bp"}, {["dim_in"] = {input_dim}, ["dim_out"] = {2}}},
- },
- ["nerv.SoftmaxCELayer"] =
- {
- ["SoftmaxL"] = {{}, {["dim_in"] = {2, 2}, ["dim_out"] = {}}}
- },
- }, paramRepo, global_conf)
-affineL = layerRepo:get_layer("AffineL")
-softmaxL = layerRepo:get_layer("SoftmaxL")
-print('layers initializing...')
-dagL = nerv.DAGLayer("DAGL", global_conf, {["dim_in"] = {input_dim, 2}, ["dim_out"] = {}, ["sub_layers"] = layerRepo,
- ["connections"] = {
- ["<input>[1]"] = "AffineL[1]",
- ["AffineL[1]"] = "SoftmaxL[1]",
- ["<input>[2]"] = "SoftmaxL[2]",
- }})
-dagL:init(data_num)
---affineL:init()
---softmaxL:init()
---[[network building end]]--
-
-
---[[begin space allocation]]--
-print('network input&output&error space allocation...')
-dagL_input = {dataM, labelM}
-dagL_output = {}
-dagL_err = {}
-dagL_ierr = {nerv.CuMatrixFloat(data_num, input_dim), nerv.CuMatrixFloat(data_num, 2)}
---[[space allocation end]]--
-
-
---[[begin training]]--
-ce_last = 0
-for l = 0, 10, 1 do
- dagL:propagate(dagL_input, dagL_output)
- dagL:back_propagate(dagL_ierr, dagL_err, dagL_input, dagL_output)
- dagL:update(dagL_err, dagL_input, dagL_output)
-
- if (l % 2 == 0) then
- nerv.utils.printf("training iteration %d finished\n", l)
- nerv.utils.printf("cross entropy: %.8f\n", softmaxL.total_ce - ce_last)
- --nerv.utils.printf("accurate labels: %d\n", calculate_accurate(output, labelM))
- nerv.utils.printf("total frames processed: %.8f\n", softmaxL.total_frames)
- end
- ce_last = softmaxL.total_ce
-end
---[[end training]]--
-```
\ No newline at end of file
diff --git a/doc/nerv_param.md b/doc/nerv_param.md
deleted file mode 100644
index 167cb11..0000000
--- a/doc/nerv_param.md
+++ /dev/null
@@ -1,27 +0,0 @@
-#The Nerv Parameter Package#
-Part of the [Nerv](../README.md) toolkit.
-
-##Description##
-###Class hierarchy###
-There is a base class __nerv.Param__ defined in `layer/init.lua`.
-
-###Class hierarchy and their members###
-* __nerv.MatrixParam__ inherits __nerv.Param__.
- * `Matrix trans` stores the parameter matrix.
-* __nerv.LinearTransParam__ inherits __nerv.MatrixParam__.
-* __nerv.BiasParam__ inherits __nerv.MatrixParam__.
-
-##Methods##
-* __void Param.\_\_init(Param self, string id, table global_conf)__
-Constructor of a __Param__; it will set `self.id` to `id` and `self.gconf` to `global_conf`.
-* __void Param.set_info(Param self, table info)__
-Set `self.info` to be `info`.
-* __table Param.get_info(Param self)__
-Returns `self.info`.
-* __void Param.read(Param self, ChunkData pcdata)__
-Abstract method.
-In this method, `self` should in turn call its members to load from `pcdata`.
-* __void Param.write(Param self, ChunkFileHandle pfhandle)__
-Abstract method.
-Save parameters to a file. In this method, `self` should in turn call its members to save to `pfhandle`.
-
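-
-##Examples##
-* A minimal sketch (not from the original manual) of a concrete parameter class. The class name `nerv.ToyParam` is hypothetical; the sketch follows the `MMatrix.load`/`MMatrix.save` interface described in [nerv_matrix.md](nerv_matrix.md).
-
-```
-do
-    local mt, mpt = nerv.class('nerv.ToyParam', 'nerv.Param')
-    function nerv.ToyParam:read(pcdata)
-        -- load the member matrix from the chunk data
-        self.trans = nerv.MMatrixFloat.load(pcdata)
-    end
-    function nerv.ToyParam:write(pfhandle)
-        -- save the member matrix to the file handle
-        self.trans:save(pfhandle)
-    end
-end
-```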