Posted to commits@singa.apache.org by wa...@apache.org on 2016/12/29 09:46:25 UTC

svn commit: r1776389 [7/13] - in /incubator/singa/site/trunk: en/ en/_sources/ en/_sources/docs/examples/caffe/ en/_static/ en/_static/fonts/ en/_static/js/ en/community/ en/develop/ en/docs/ en/docs/examples/ en/docs/examples/caffe/ en/docs/examples/c...

Modified: incubator/singa/site/trunk/en/docs/installation.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/installation.html?rev=1776389&r1=1776388&r2=1776389&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/installation.html (original)
+++ incubator/singa/site/trunk/en/docs/installation.html Thu Dec 29 09:46:24 2016
@@ -91,7 +91,7 @@
                 <ul class="current">
 <li class="toctree-l1"><a class="reference internal" href="../downloads.html">Download SINGA</a></li>
 <li class="toctree-l1 current"><a class="reference internal" href="index.html">Documentation</a><ul class="current">
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Installation</a><ul>
+<li class="toctree-l2 current"><a class="current reference internal" href="">Installation</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#install-pysinga">Install PySINGA</a><ul>
 <li class="toctree-l4"><a class="reference internal" href="#install-dependent-libraries">Install dependent libraries</a></li>
 <li class="toctree-l4"><a class="reference internal" href="#virtual-environment">Virtual environment</a></li>
@@ -193,7 +193,7 @@
 <div class="section" id="install-dependent-libraries">
 <span id="install-dependent-libraries"></span><h3>Install dependent libraries<a class="headerlink" href="#install-dependent-libraries" title="Permalink to this headline">¶</a></h3>
 <p>Python 2.7 is required to run PySINGA.</p>
-<div class="highlight-default"><div class="highlight"><pre># For Ubuntu
+<div class="highlight-python"><div class="highlight"><pre># For Ubuntu
 $ sudo apt-get install python2.7-dev python-pip
 
 # For Mac
@@ -206,14 +206,14 @@ $ brew install python
 <span id="virtual-environment"></span><h3>Virtual environment<a class="headerlink" href="#virtual-environment" title="Permalink to this headline">¶</a></h3>
 <p>We recommend using PySINGA in a Python virtual environment.</p>
 <p>To use pip with a virtual environment,</p>
-<div class="highlight-default"><div class="highlight"><pre># install virtualenv
+<div class="highlight-python"><div class="highlight"><pre># install virtualenv
 $ pip install virtualenv
 $ virtualenv pysinga
 $ source pysinga/bin/activate
 </pre></div>
 </div>
 <p>To use anaconda with a virtual environment,</p>
-<div class="highlight-default"><div class="highlight"><pre>$ conda create --name pysinga python=2
+<div class="highlight-python"><div class="highlight"><pre>$ conda create --name pysinga python=2
 $ source activate pysinga
 </pre></div>
 </div>
@@ -223,7 +223,7 @@ to avoid the conflicts of system path an
 <div class="section" id="from-wheel">
 <span id="from-wheel"></span><h3>From wheel<a class="headerlink" href="#from-wheel" title="Permalink to this headline">¶</a></h3>
 <p>Currently, we have three versions of wheel files,</p>
-<div class="highlight-default"><div class="highlight"><pre># Ubuntu 14.04 64-bit, CPU only, Python 2.7
+<div class="highlight-python"><div class="highlight"><pre># Ubuntu 14.04 64-bit, CPU only, Python 2.7
 $ pip install --upgrade http://comp.nus.edu.sg/~dbsystem/singa/assets/file/ubuntu1404/singa-1.0.0-cp27-none-linux_x86_64.whl
 
 # Ubuntu 14.04 64-bit, GPU enabled, Python 2.7, CUDA toolkit 7.5 and CuDNN v5
@@ -245,7 +245,7 @@ The option <code class="docutils literal
 <li>numpy(&gt;=1.11.0)</li>
 </ul>
 <p>They can be installed by</p>
-<div class="highlight-default"><div class="highlight"><pre>$ Ubuntu 14.04 and 16.04
+<div class="highlight-python"><div class="highlight"><pre>$ Ubuntu 14.04 and 16.04
 $ sudo apt-get install python-numpy
 # Ubuntu 16.04
 $ sudo apt-get install swig
@@ -253,18 +253,18 @@ $ sudo apt-get install swig
 </div>
 <p>Note that swig has to be installed from source on Ubuntu 14.04.
 After installing numpy, please export the path of the numpy header files as</p>
-<div class="highlight-default"><div class="highlight"><pre>$ export CPLUS_INCLUDE_PATH=`python -c &quot;import numpy; print numpy.get_include()&quot;`:$CPLUS_INCLUDE_PATH
+<div class="highlight-python"><div class="highlight"><pre>$ export CPLUS_INCLUDE_PATH=`python -c &quot;import numpy; print numpy.get_include()&quot;`:$CPLUS_INCLUDE_PATH
 </pre></div>
 </div>
 <p>Please compile SINGA from source (see the next section) with <code class="docutils literal"><span class="pre">cmake</span> <span class="pre">-DUSE_PYTHON=ON</span> <span class="pre">-DUSE_MODULES=ON</span> <span class="pre">..</span></code>,
 and then run the following commands,</p>
-<div class="highlight-default"><div class="highlight"><pre># under the build directory
+<div class="highlight-python"><div class="highlight"><pre># under the build directory
 $ cd python
 $ pip install .
 </pre></div>
 </div>
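 <p>To verify the installation, a minimal check (an illustrative sketch; it assumes the build enabled Python and that tensor.to_numpy is available) is</p>
 <div class="highlight-python"><div class="highlight"><pre>from singa import tensor
 # create a small tensor on the default (CPU) device and read it back
 x = tensor.Tensor((2, 3))
 x.set_value(1.0)
 print tensor.to_numpy(x)  # a 2x3 array of ones
 </pre></div>
 </div>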
 <p>Developers can build the wheel file via</p>
-<div class="highlight-default"><div class="highlight"><pre># under the build directory
+<div class="highlight-python"><div class="highlight"><pre># under the build directory
 $ cd python
 $ python setup.py bdist_wheel
 </pre></div>
@@ -275,12 +275,12 @@ $ python setup.py bdist_wheel
 <div class="section" id="build-singa-on-linux-and-mac-os">
 <span id="build-singa-on-linux-and-mac-os"></span><h2>Build SINGA on Linux and Mac OS<a class="headerlink" href="#build-singa-on-linux-and-mac-os" title="Permalink to this headline">¶</a></h2>
 <p>The source files can be downloaded either as a <a class="reference external" href="https://dist.apache.org/repos/dist/dev/incubator/singa/1.0.0/apache-singa-incubating-1.0.0-RC2.tar.gz">tar.gz file</a> or as a git repo</p>
-<div class="highlight-default"><div class="highlight"><pre>$ git clone https://github.com/apache/incubator-singa.git
+<div class="highlight-python"><div class="highlight"><pre>$ git clone https://github.com/apache/incubator-singa.git
 $ cd incubator-singa/
 </pre></div>
 </div>
 <p>cmake (&gt;=2.8) is used to compile SINGA; it can be installed by</p>
-<div class="highlight-default"><div class="highlight"><pre># For Ubuntu 14.04 and 16.04
+<div class="highlight-python"><div class="highlight"><pre># For Ubuntu 14.04 and 16.04
 $ sudo apt-get install cmake
 </pre></div>
 </div>
@@ -290,7 +290,7 @@ For Mac OS users, you can use either GCC
 <span id="compile-singa-together-with-dependent-libraries"></span><h3>Compile SINGA together with dependent libraries<a class="headerlink" href="#compile-singa-together-with-dependent-libraries" title="Permalink to this headline">¶</a></h3>
 <p>SINGA code uses CBLAS and Protobuf (&gt;=2.5, &lt;3).
 If they are not installed in your OS, you can compile SINGA together with them</p>
-<div class="highlight-default"><div class="highlight"><pre>$ In SINGA ROOT folder
+<div class="highlight-python"><div class="highlight"><pre>$ In SINGA ROOT folder
 $ mkdir build
 $ cd build
 $ cmake -DUSE_MODULES=ON ..
@@ -317,7 +317,7 @@ with SINGA.</p>
 <p>Most of the dependent libraries could be installed from source or via package managers like
 apt-get, yum, and homebrew. Please refer to the FAQ for problems caused by the path setting of the dependent libraries.</p>
 <p>The following instructions are tested on Ubuntu 14.04 for installing dependent libraries.</p>
-<div class="highlight-default"><div class="highlight"><pre># required libraries
+<div class="highlight-python"><div class="highlight"><pre># required libraries
 $ sudo apt-get install libprotobuf-dev libopenblas-dev protobuf-compiler
 
 # optional libraries
@@ -328,7 +328,7 @@ $ sudo apt-get install libopencv-dev lib
 <p>Please note that PySINGA requires swig &gt;=3.0, which can be installed via
 apt-get on Ubuntu 16.04, but has to be installed from source for other Ubuntu versions including 14.04.</p>
 <p>The following instructions are tested on Mac OS X Yosemite (10.10.5) for installing dependent libraries.</p>
-<div class="highlight-default"><div class="highlight"><pre># required libraries
+<div class="highlight-python"><div class="highlight"><pre># required libraries
 $ brew tap homebrew/science
 $ brew install openblas
 $ brew install protobuf260
@@ -342,16 +342,16 @@ $ brew install -vd glog lmdb
 </div>
 <p>By default, openblas is installed into /usr/local/opt/openblas. To let the compiler (and cmake) know the openblas
 path, please export</p>
-<div class="highlight-default"><div class="highlight"><pre>$ export CMAKE_INCLUDE_PATH=/usr/local/opt/openblas/include:$CMAKE_INCLUDE_PATH
+<div class="highlight-python"><div class="highlight"><pre>$ export CMAKE_INCLUDE_PATH=/usr/local/opt/openblas/include:$CMAKE_INCLUDE_PATH
 $ export CMAKE_LIBRARY_PATH=/usr/local/opt/openblas/lib:$CMAKE_LIBRARY_PATH
 </pre></div>
 </div>
 <p>To let the runtime know the openblas path, please export</p>
-<div class="highlight-default"><div class="highlight"><pre>$ export LD_LIBRARY_PATH=/usr/local/opt/openblas/library:$LD_LIBRARY_PATH
+<div class="highlight-python"><div class="highlight"><pre>$ export LD_LIBRARY_PATH=/usr/local/opt/openblas/library:$LD_LIBRARY_PATH
 </pre></div>
 </div>
 <p>With the dependent libraries installed, SINGA can be compiled via</p>
-<div class="highlight-default"><div class="highlight"><pre>$ mkdir build
+<div class="highlight-python"><div class="highlight"><pre>$ mkdir build
 $ cd build
 $ cmake ..
 $ make
@@ -359,7 +359,7 @@ $ make install
 </pre></div>
 </div>
 <p>After compiling SINGA, you can run the unit tests by</p>
-<div class="highlight-default"><div class="highlight"><pre>$ ./bin/test_singa
+<div class="highlight-python"><div class="highlight"><pre>$ ./bin/test_singa
 </pre></div>
 </div>
 <p>You can see all the testing cases with testing results. If SINGA passes all
@@ -367,8 +367,8 @@ tests, then you have successfully instal
 <p>You can use <code class="docutils literal"><span class="pre">ccmake</span> <span class="pre">..</span></code> to configure the compilation options.
 If some dependent libraries are not in the system default paths, you need to export
 the following environment variables</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">export</span> <span class="n">CMAKE_INCLUDE_PATH</span><span class="o">=&lt;</span><span class="n">path</span> <span class="n">to</span> <span class="n">the</span> <span class="n">header</span> <span class="n">file</span> <span class="n">folder</span><span class="o">&gt;</span>
-<span class="n">export</span> <span class="n">CMAKE_LIBRARY_PATH</span><span class="o">=&lt;</span><span class="n">path</span> <span class="n">to</span> <span class="n">the</span> <span class="n">lib</span> <span class="n">file</span> <span class="n">folder</span><span class="o">&gt;</span>
+<div class="highlight-python"><div class="highlight"><pre>export CMAKE_INCLUDE_PATH=&lt;path to the header file folder&gt;
+export CMAKE_LIBRARY_PATH=&lt;path to the lib file folder&gt;
 </pre></div>
 </div>
 </div>
@@ -380,13 +380,13 @@ get better performance.</p>
 <p>SINGA has been tested over CUDA (7 and 7.5), and CUDNN (4-5.1).  If CUDNN is
 decompressed into a non-system folder, e.g. /home/bob/local/cudnn/, the following
 commands should be executed for cmake and the runtime to find it</p>
-<div class="highlight-default"><div class="highlight"><pre>$ export CMAKE_INCLUDE_PATH=/home/bob/local/cudnn/include:$CMAKE_INCLUDE_PATH
+<div class="highlight-python"><div class="highlight"><pre>$ export CMAKE_INCLUDE_PATH=/home/bob/local/cudnn/include:$CMAKE_INCLUDE_PATH
 $ export CMAKE_LIBRARY_PATH=/home/bob/local/cudnn/lib64:$CMAKE_LIBRARY_PATH
 $ export LD_LIBRARY_PATH=/home/bob/local/cudnn/lib64:$LD_LIBRARY_PATH
 </pre></div>
 </div>
 <p>The cmake options for CUDA and CUDNN should be switched on</p>
-<div class="highlight-default"><div class="highlight"><pre># Dependent libs are install already
+<div class="highlight-python"><div class="highlight"><pre># Dependent libs are install already
 $ cmake -DUSE_CUDA=ON ..
 
 # Compile dependent libs together with SINGA
@@ -398,7 +398,7 @@ $ cmake -DUSE_CUDA=ON -DUSE_MODULES=ON .
 <span id="compile-singa-with-opencl-support-linux"></span><h3>Compile SINGA with OpenCL support (Linux)<a class="headerlink" href="#compile-singa-with-opencl-support-linux" title="Permalink to this headline">¶</a></h3>
 <p>SINGA uses opencl-headers and viennacl (version 1.7.1 or newer) for OpenCL support, which
 can be installed via</p>
-<div class="highlight-default"><div class="highlight"><pre># On Ubuntu 16.04
+<div class="highlight-python"><div class="highlight"><pre># On Ubuntu 16.04
 $ sudo apt-get install opencl-headers libviennacl-dev
 # On Fedora
 $ sudo yum install opencl-headers viennacl
@@ -416,7 +416,7 @@ $ sudo yum install opencl-headers, vienn
 <p>Clone <a class="reference external" href="https://github.com/viennacl/viennacl-dev">the repository from here</a>, checkout the <code class="docutils literal"><span class="pre">release-1.7.1</span></code> tag and build it.
 Remember to add its directory to <code class="docutils literal"><span class="pre">PATH</span></code> and the built libraries to <code class="docutils literal"><span class="pre">LD_LIBRARY_PATH</span></code>.</p>
 <p>To build SINGA with OpenCL support, you need to pass the flag during cmake:</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">cmake</span> <span class="o">-</span><span class="n">DUSE_OPENCL</span><span class="o">=</span><span class="n">ON</span> <span class="o">..</span>
+<div class="highlight-python"><div class="highlight"><pre>cmake -DUSE_OPENCL=ON ..
 </pre></div>
 </div>
 </div>
@@ -426,13 +426,13 @@ Remember to add its directory to <code c
 <p>For the dependent library installation, please refer to <a class="reference external" href="dependencies.md">Dependencies</a>.
 After all the dependencies are successfully installed, just run the following commands to
 generate the VS solution in cmd under the singa folder:</p>
-<div class="highlight-default"><div class="highlight"><pre>$ md build &amp;&amp; cd build
+<div class="highlight-python"><div class="highlight"><pre>$ md build &amp;&amp; cd build
 $ cmake -G &quot;Visual Studio 14&quot; -DUSE_CUDA=OFF -DUSE_PYTHON=OFF ..
 </pre></div>
 </div>
 <p>The default project generated by the command is a 32-bit version. You can also
 specify a 64-bit project by:</p>
-<div class="highlight-default"><div class="highlight"><pre>$ md build &amp;&amp; cd build
+<div class="highlight-python"><div class="highlight"><pre>$ md build &amp;&amp; cd build
 $ cmake -G &quot;Visual Studio 14 Win64&quot; -DUSE_CUDA=OFF -DUSE_PYTHON=OFF ..
 </pre></div>
 </div>
@@ -441,12 +441,12 @@ library missing, please configure your l
 For example, suppose you get an error &#8220;Could NOT find CBLAS&#8221; and you installed
 openblas header files at &#8220;d:\include&#8221; and the openblas library at &#8220;d:\lib&#8221;. You should run the
 following command to specify your cblas parameters in cmake:</p>
-<div class="highlight-default"><div class="highlight"><pre>$ cmake -G &quot;Visual Studio 14&quot; -DUSE_CUDA=OFF -DUSE_PYTHON=OFF -DCBLAS_INCLUDE_DIR=&quot;d:\include&quot; -DCBLAS_LIBRARIES=&quot;d:\lib\libopenblas.lib&quot; -DProtobuf_INCLUDE_DIR=&lt;include dir of protobuf&gt; -DProtobuf_LIBRARIES=&lt;path to libprotobuf.lib&gt; -DProtobuf_PROTOC_EXECUTABLE=&lt;path to protoc.exe&gt; -DGLOG_INCLUDE_DIR=&lt;include dir of glog&gt; -DGLOG_LIBRARIES=&lt;path to libglog.lib&gt; ..
+<div class="highlight-python"><div class="highlight"><pre>$ cmake -G &quot;Visual Studio 14&quot; -DUSE_CUDA=OFF -DUSE_PYTHON=OFF -DCBLAS_INCLUDE_DIR=&quot;d:\include&quot; -DCBLAS_LIBRARIES=&quot;d:\lib\libopenblas.lib&quot; -DProtobuf_INCLUDE_DIR=&lt;include dir of protobuf&gt; -DProtobuf_LIBRARIES=&lt;path to libprotobuf.lib&gt; -DProtobuf_PROTOC_EXECUTABLE=&lt;path to protoc.exe&gt; -DGLOG_INCLUDE_DIR=&lt;include dir of glog&gt; -DGLOG_LIBRARIES=&lt;path to libglog.lib&gt; ..
 </pre></div>
 </div>
 <p>To find out the parameters you need to specify for some special libraries, you
 can run the following command:</p>
-<div class="highlight-default"><div class="highlight"><pre>$ cmake -LAH
+<div class="highlight-python"><div class="highlight"><pre>$ cmake -LAH
 </pre></div>
 </div>
 <p>If you use the cmake GUI tool on Windows, please make sure you configure the right
@@ -467,23 +467,23 @@ steps show the solutions for different c
 <ol>
 <li><p class="first">Check the cudnn and cuda and gcc versions, cudnn5 and cuda7.5 and gcc4.8/4.9 are preferred. if gcc is 5.0, then downgrade it.
 If cudnn is missing or not match with the wheel version, you can download the correct version of cudnn into ~/local/cudnn/ and</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">echo</span> <span class="s">&quot;export LD_LIBRARY_PATH=/home/&lt;yourname&gt;/local/cudnn/lib64:$LD_LIBRARY_PATH&quot;</span> <span class="o">&gt;&gt;</span> <span class="o">~/.</span><span class="n">bashrc</span>
+<div class="highlight-python"><div class="highlight"><pre>echo &quot;export LD_LIBRARY_PATH=/home/&lt;yourname&gt;/local/cudnn/lib64:$LD_LIBRARY_PATH&quot; &gt;&gt; ~/.bashrc
 </pre></div>
 </div>
 </li>
 <li><p class="first">If it is the problem related to protobuf, then please download the newest whl files which have <a class="reference external" href="https://issues.apache.org/jira/browse/SINGA-255">compiled protobuf and openblas into the whl</a> file of PySINGA.
 Or you can install protobuf from source into a local folder, say ~/local/;
 Decompress the tar file, and then</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="o">./</span><span class="n">configure</span> <span class="o">--</span><span class="n">prefix</span><span class="o">=/</span><span class="n">home</span><span class="o">/&lt;</span><span class="n">yourname</span><span class="o">&gt;</span><span class="n">local</span>
-<span class="n">make</span> <span class="o">&amp;&amp;</span> <span class="n">make</span> <span class="n">install</span>
-<span class="n">echo</span> <span class="s">&quot;export LD_LIBRARY_PATH=/home/&lt;yourname&gt;/local/lib:$LD_LIBRARY_PATH&quot;</span> <span class="o">&gt;&gt;</span> <span class="o">~/.</span><span class="n">bashrc</span>
-<span class="n">source</span> <span class="o">~/.</span><span class="n">bashrc</span>
+<div class="highlight-python"><div class="highlight"><pre>./configure --prefix=/home/&lt;yourname&gt;local
+make &amp;&amp; make install
+echo &quot;export LD_LIBRARY_PATH=/home/&lt;yourname&gt;/local/lib:$LD_LIBRARY_PATH&quot; &gt;&gt; ~/.bashrc
+source ~/.bashrc
 </pre></div>
 </div>
 </li>
 <li><p class="first">If it cannot find other libs including python, then please create virtual env using pip or conda;
 and then install SINGA via</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">pip</span> <span class="n">install</span> <span class="o">--</span><span class="n">upgrade</span> <span class="o">&lt;</span><span class="n">url</span> <span class="n">of</span> <span class="n">singa</span> <span class="n">wheel</span><span class="o">&gt;</span>
+<div class="highlight-python"><div class="highlight"><pre>pip install --upgrade &lt;url of singa wheel&gt;
 </pre></div>
 </div>
 </li>
@@ -495,8 +495,8 @@ and then install SINGA via</p>
 <p>A: If you haven&#8217;t installed the libraries, please install them. If you installed
 the libraries in a folder that is outside of the system folder, e.g. /usr/local,
 please export the following variables</p>
-<div class="highlight-default"><div class="highlight"><pre>  <span class="n">export</span> <span class="n">CMAKE_INCLUDE_PATH</span><span class="o">=&lt;</span><span class="n">path</span> <span class="n">to</span> <span class="n">your</span> <span class="n">header</span> <span class="n">file</span> <span class="n">folder</span><span class="o">&gt;</span>
-  <span class="n">export</span> <span class="n">CMAKE_LIBRARY_PATH</span><span class="o">=&lt;</span><span class="n">path</span> <span class="n">to</span> <span class="n">your</span> <span class="n">lib</span> <span class="n">file</span> <span class="n">folder</span><span class="o">&gt;</span>
+<div class="highlight-python"><div class="highlight"><pre>  export CMAKE_INCLUDE_PATH=&lt;path to your header file folder&gt;
+  export CMAKE_LIBRARY_PATH=&lt;path to your lib file folder&gt;
 </pre></div>
 </div>
 </li>
@@ -513,23 +513,23 @@ $ export LD_LIBRARY_PATH=<path to your l
 <li><p class="first">Q: Error from header files, e.g. &#8216;cblas.h no such file or directory exists&#8217;</p>
 <p>A: You need to add the folder containing cblas.h to CPLUS_INCLUDE_PATH,
 e.g.,</p>
-<div class="highlight-default"><div class="highlight"><pre>  $ export CPLUS_INCLUDE_PATH=/opt/OpenBLAS/include:$CPLUS_INCLUDE_PATH
+<div class="highlight-python"><div class="highlight"><pre>  $ export CPLUS_INCLUDE_PATH=/opt/OpenBLAS/include:$CPLUS_INCLUDE_PATH
 </pre></div>
 </div>
 </li>
 <li><p class="first">Q:While compiling SINGA, I get error <code class="docutils literal"><span class="pre">SSE2</span> <span class="pre">instruction</span> <span class="pre">set</span> <span class="pre">not</span> <span class="pre">enabled</span></code></p>
 <p>A:You can try following command:</p>
-<div class="highlight-default"><div class="highlight"><pre>  $ make CFLAGS=&#39;-msse2&#39; CXXFLAGS=&#39;-msse2&#39;
+<div class="highlight-python"><div class="highlight"><pre>  $ make CFLAGS=&#39;-msse2&#39; CXXFLAGS=&#39;-msse2&#39;
 </pre></div>
 </div>
 </li>
 <li><p class="first">Q:I get <code class="docutils literal"><span class="pre">ImportError:</span> <span class="pre">cannot</span> <span class="pre">import</span> <span class="pre">name</span> <span class="pre">enum_type_wrapper</span></code> from google.protobuf.internal when I try to import .py files.</p>
 <p>A: You need to install the python binding of protobuf, which could be installed via</p>
-<div class="highlight-default"><div class="highlight"><pre>  $ sudo apt-get install protobuf
+<div class="highlight-python"><div class="highlight"><pre>  $ sudo apt-get install protobuf
 </pre></div>
 </div>
 <p>or from source</p>
-<div class="highlight-default"><div class="highlight"><pre>  $ cd /PROTOBUF/SOURCE/FOLDER
+<div class="highlight-python"><div class="highlight"><pre>  $ cd /PROTOBUF/SOURCE/FOLDER
   $ cd python
   $ python setup.py build
   $ python setup.py install
@@ -538,11 +538,11 @@ e.g.,</p>
 </li>
 <li><p class="first">Q: When I build OpenBLAS from source, I am told that I need a Fortran compiler.</p>
 <p>A: You can compile OpenBLAS by</p>
-<div class="highlight-default"><div class="highlight"><pre>  $ make ONLY_CBLAS=1
+<div class="highlight-python"><div class="highlight"><pre>  $ make ONLY_CBLAS=1
 </pre></div>
 </div>
 <p>or install it using</p>
-<div class="highlight-default"><div class="highlight"><pre>  $ sudo apt-get install libopenblas-dev
+<div class="highlight-python"><div class="highlight"><pre>  $ sudo apt-get install libopenblas-dev
 </pre></div>
 </div>
 </li>
@@ -555,20 +555,20 @@ must be told how to find the newer libst
 The simplest way to fix this is to find the correct libstdc++ and export it to
 LD_LIBRARY_PATH. For example, if GLIBC++_3.4.20 is listed in the output of the
 following command,</p>
-<div class="highlight-default"><div class="highlight"><pre>  $ strings /usr/local/lib64/libstdc++.so.6|grep GLIBC++
+<div class="highlight-python"><div class="highlight"><pre>  $ strings /usr/local/lib64/libstdc++.so.6|grep GLIBC++
 </pre></div>
 </div>
 <p>then you just set your environment variable as</p>
-<div class="highlight-default"><div class="highlight"><pre>  $ export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
+<div class="highlight-python"><div class="highlight"><pre>  $ export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
 </pre></div>
 </div>
 </li>
 <li><p class="first">Q: When I build glog, it reports that &#8220;src/logging_unittest.cc:83:20: error: ‘gflags’ is not a namespace-name&#8221;</p>
 <p>A: It may be that you have installed gflags with a different namespace such as &#8220;google&#8221;, so glog can&#8217;t find the &#8216;gflags&#8217; namespace.
 Since gflags is not necessary for building glog, you can change the configure.ac file to ignore gflags.</p>
-<div class="highlight-default"><div class="highlight"><pre>  <span class="mf">1.</span> <span class="n">cd</span> <span class="n">to</span> <span class="n">glog</span> <span class="n">src</span> <span class="n">directory</span>
-  <span class="mf">2.</span> <span class="n">change</span> <span class="n">line</span> <span class="mi">125</span> <span class="n">of</span> <span class="n">configure</span><span class="o">.</span><span class="n">ac</span>  <span class="n">to</span> <span class="s">&quot;AC_CHECK_LIB(gflags, main, ac_cv_have_libgflags=0, ac_cv_have_libgflags=0)&quot;</span>
-  <span class="mf">3.</span> <span class="n">autoreconf</span>
+<div class="highlight-python"><div class="highlight"><pre>  1. cd to glog src directory
+  2. change line 125 of configure.ac  to &quot;AC_CHECK_LIB(gflags, main, ac_cv_have_libgflags=0, ac_cv_have_libgflags=0)&quot;
+  3. autoreconf
 </pre></div>
 </div>
 <p>After this, you can build glog again.</p>
@@ -579,7 +579,7 @@ the virtual environment.</p>
 </li>
 <li><p class="first">Q: When compiling PySINGA from source, there is a compilation error due to the missing of &lt;numpy/objectarray.h&gt;</p>
 <p>A: Please install numpy and export the path of numpy header files as</p>
-<div class="highlight-default"><div class="highlight"><pre>  $ export CPLUS_INCLUDE_PATH=`python -c &quot;import numpy; print numpy.get_include()&quot;`:$CPLUS_INCLUDE_PATH
+<div class="highlight-python"><div class="highlight"><pre>  $ export CPLUS_INCLUDE_PATH=`python -c &quot;import numpy; print numpy.get_include()&quot;`:$CPLUS_INCLUDE_PATH
 </pre></div>
 </div>
 </li>

Modified: incubator/singa/site/trunk/en/docs/layer.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/layer.html?rev=1776389&r1=1776388&r2=1776389&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/layer.html (original)
+++ incubator/singa/site/trunk/en/docs/layer.html Thu Dec 29 09:46:24 2016
@@ -95,7 +95,7 @@
 <li class="toctree-l2"><a class="reference internal" href="software_stack.html">Software Stack</a></li>
 <li class="toctree-l2"><a class="reference internal" href="device.html">Device</a></li>
 <li class="toctree-l2"><a class="reference internal" href="tensor.html">Tensor</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Layer</a><ul>
+<li class="toctree-l2 current"><a class="current reference internal" href="">Layer</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="#module-singa.layer">Python API</a></li>
 <li class="toctree-l3"><a class="reference internal" href="#cpp-api">CPP API</a></li>
 </ul>
@@ -177,25 +177,24 @@
 <span id="python-api"></span><h2>Python API<a class="headerlink" href="#module-singa.layer" title="Permalink to this headline">¶</a></h2>
 <p>Python layers wrap the C++ layers to provide simpler construction APIs.</p>
 <p>Example usages:</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">layer</span>
-<span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">tensor</span>
-<span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">device</span>
-<span class="kn">from</span> <span class="nn">singa.model_pb2</span> <span class="k">import</span> <span class="n">kTrain</span>
+<div class="highlight-python"><div class="highlight"><pre><span class="kn">from</span> <span class="nn">singa</span> <span class="kn">import</span> <span class="n">layer</span>
+<span class="kn">from</span> <span class="nn">singa</span> <span class="kn">import</span> <span class="n">tensor</span>
+<span class="kn">from</span> <span class="nn">singa</span> <span class="kn">import</span> <span class="n">device</span>
 
-<span class="n">layer</span><span class="o">.</span><span class="n">engine</span> <span class="o">=</span> <span class="s">&#39;cudnn&#39;</span>  <span class="c"># to use cudnn layers</span>
+<span class="n">layer</span><span class="o">.</span><span class="n">engine</span> <span class="o">=</span> <span class="s1">&#39;cudnn&#39;</span>  <span class="c1"># to use cudnn layers</span>
 <span class="n">dev</span> <span class="o">=</span> <span class="n">device</span><span class="o">.</span><span class="n">create_cuda_gpu</span><span class="p">()</span>
 
-<span class="c"># create a convolution layer</span>
-<span class="n">conv</span> <span class="o">=</span> <span class="n">layer</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="s">&#39;conv&#39;</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="n">pad</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">input_sample_shape</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">))</span>
-<span class="n">conv</span><span class="o">.</span><span class="n">to_device</span><span class="p">(</span><span class="n">dev</span><span class="p">)</span>  <span class="c"># move the layer data onto a CudaGPU device</span>
+<span class="c1"># create a convolution layer</span>
+<span class="n">conv</span> <span class="o">=</span> <span class="n">layer</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="s1">&#39;conv&#39;</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="n">pad</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">input_sample_shape</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">))</span>
+<span class="n">conv</span><span class="o">.</span><span class="n">to_device</span><span class="p">(</span><span class="n">dev</span><span class="p">)</span>  <span class="c1"># move the layer data onto a CudaGPU device</span>
 <span class="n">x</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">Tensor</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">),</span> <span class="n">dev</span><span class="p">)</span>
 <span class="n">x</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
-<span class="n">y</span> <span class="o">=</span> <span class="n">conv</span><span class="o">.</span><span class="n">foward</span><span class="p">(</span><span class="n">kTrain</span><span class="p">,</span> <span class="n">x</span><span class="p">)</span>
+<span class="n">y</span> <span class="o">=</span> <span class="n">conv</span><span class="o">.</span><span class="n">foward</span><span class="p">(</span><span class="bp">True</span><span class="p">,</span> <span class="n">x</span><span class="p">)</span>
 
 <span class="n">dy</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">Tensor</span><span class="p">()</span>
 <span class="n">dy</span><span class="o">.</span><span class="n">reset_like</span><span class="p">(</span><span class="n">y</span><span class="p">)</span>
 <span class="n">dy</span><span class="o">.</span><span class="n">set_value</span><span class="p">(</span><span class="mf">0.1</span><span class="p">)</span>
-<span class="c"># dp is a list of tensors for parameter gradients</span>
+<span class="c1"># dp is a list of tensors for parameter gradients</span>
 <span class="n">dx</span><span class="p">,</span> <span class="n">dp</span> <span class="o">=</span> <span class="n">conv</span><span class="o">.</span><span class="n">backward</span><span class="p">(</span><span class="n">kTrain</span><span class="p">,</span> <span class="n">dy</span><span class="p">)</span>
 </pre></div>
 </div>
@@ -241,25 +240,6 @@ construct layer with input_sample_shapes
 </tbody>
 </table>
 <dl class="method">
-<dt id="singa.layer.Layer.caffe_layer">
-<code class="descname">caffe_layer</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Layer.caffe_layer" title="Permalink to this definition">¶</a></dt>
-<dd><p>Create a singa layer based on caffe layer configuration.</p>
-</dd></dl>
-
-<dl class="method">
-<dt id="singa.layer.Layer.param_names">
-<code class="descname">param_names</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Layer.param_names" title="Permalink to this definition">¶</a></dt>
-<dd><table class="docutils field-list" frame="void" rules="none">
-<col class="field-name" />
-<col class="field-body" />
-<tbody valign="top">
-<tr class="field-odd field"><th class="field-name">Returns:</th><td class="field-body">a list of strings, one for the name of one parameter Tensor</td>
-</tr>
-</tbody>
-</table>
-</dd></dl>
-
-<dl class="method">
 <dt id="singa.layer.Layer.setup">
 <code class="descname">setup</code><span class="sig-paren">(</span><em>in_shapes</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Layer.setup" title="Permalink to this definition">¶</a></dt>
 <dd><p>Call the C++ setup function to create params and set some meta data.</p>
@@ -277,6 +257,12 @@ in_shapes is a tuple of tuples, each for
 </dd></dl>
 
 <dl class="method">
+<dt id="singa.layer.Layer.caffe_layer">
+<code class="descname">caffe_layer</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Layer.caffe_layer" title="Permalink to this definition">¶</a></dt>
+<dd><p>Create a singa layer based on caffe layer configuration.</p>
+</dd></dl>
+
+<dl class="method">
 <dt id="singa.layer.Layer.get_output_sample_shape">
 <code class="descname">get_output_sample_shape</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Layer.get_output_sample_shape" title="Permalink to this definition">¶</a></dt>
 <dd><p>Called after setup to get the shape of the output sample(s).</p>
@@ -292,6 +278,19 @@ has multiple outputs</td>
 </dd></dl>
 
 <dl class="method">
+<dt id="singa.layer.Layer.param_names">
+<code class="descname">param_names</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Layer.param_names" title="Permalink to this definition">¶</a></dt>
+<dd><table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Returns:</th><td class="field-body">a list of strings, one for the name of one parameter Tensor</td>
+</tr>
+</tbody>
+</table>
+</dd></dl>
+
+<dl class="method">
 <dt id="singa.layer.Layer.param_values">
 <code class="descname">param_values</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Layer.param_values" title="Permalink to this definition">¶</a></dt>
 <dd><p>Return param value tensors.</p>
@@ -317,7 +316,8 @@ which would result in inconsistency.</p>
 <col class="field-body" />
 <tbody valign="top">
 <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
-<li><strong>flag</strong> (<em>int</em>) &#8211; kTrain or kEval</li>
+<li><strong>flag</strong> &#8211; True (kTrain) for training; False (kEval) for evaluating;
+other values for future use.</li>
 <li><strong>x</strong> (<em>Tensor or list&lt;Tensor&gt;</em>) &#8211; an input tensor if the layer is
 connected from a single layer; a list of tensors if the layer
 is connected from multiple layers.</li>
@@ -377,6 +377,36 @@ objective loss</li>
 </dd></dl>
 
 <dl class="class">
+<dt id="singa.layer.Dummy">
+<em class="property">class </em><code class="descclassname">singa.layer.</code><code class="descname">Dummy</code><span class="sig-paren">(</span><em>name</em>, <em>input_sample_shape=None</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Dummy" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <a class="reference internal" href="#singa.layer.Layer" title="singa.layer.Layer"><code class="xref py py-class docutils literal"><span class="pre">singa.layer.Layer</span></code></a></p>
+<p>A dummy layer that does nothing but forward/backward the data
+(the input/output is a single tensor).</p>
+<dl class="method">
+<dt id="singa.layer.Dummy.get_output_sample_shape">
+<code class="descname">get_output_sample_shape</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Dummy.get_output_sample_shape" title="Permalink to this definition">¶</a></dt>
+<dd></dd></dl>
+
+<dl class="method">
+<dt id="singa.layer.Dummy.setup">
+<code class="descname">setup</code><span class="sig-paren">(</span><em>input_sample_shape</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Dummy.setup" title="Permalink to this definition">¶</a></dt>
+<dd></dd></dl>
+
+<dl class="method">
+<dt id="singa.layer.Dummy.forward">
+<code class="descname">forward</code><span class="sig-paren">(</span><em>flag</em>, <em>x</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Dummy.forward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Return the input x</p>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.layer.Dummy.backward">
+<code class="descname">backward</code><span class="sig-paren">(</span><em>falg</em>, <em>dy</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Dummy.backward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Return dy, []</p>
+</dd></dl>
+
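+<p>A minimal usage sketch of Dummy (illustrative, following the signatures above; it assumes PySINGA is installed):</p>
+<div class="highlight-python"><div class="highlight"><pre>from singa import layer
+from singa import tensor
+lyr = layer.Dummy(&#39;dummy&#39;, input_sample_shape=(3,))
+x = tensor.Tensor((2, 3))
+x.set_value(1.0)
+y = lyr.forward(True, x)        # returns x unchanged
+dx, dp = lyr.backward(True, y)  # dx is y unchanged; dp is an empty list
+</pre></div>
+</div>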
+</dd></dl>
+
+<dl class="class">
 <dt id="singa.layer.Conv2D">
 <em class="property">class </em><code class="descclassname">singa.layer.</code><code class="descname">Conv2D</code><span class="sig-paren">(</span><em>name</em>, <em>nb_kernels</em>, <em>kernel=3</em>, <em>stride=1</em>, <em>border_mode='same'</em>, <em>cudnn_prefer='fatest'</em>, <em>data_format='NCHW'</em>, <em>use_bias=True</em>, <em>W_specs=None</em>, <em>b_specs=None</em>, <em>pad=None</em>, <em>input_sample_shape=None</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Conv2D" title="Permalink to this definition">¶</a></dt>
 <dd><p>Bases: <a class="reference internal" href="#singa.layer.Layer" title="singa.layer.Layer"><code class="xref py py-class docutils literal"><span class="pre">singa.layer.Layer</span></code></a></p>
@@ -678,12 +708,35 @@ inputs should be the same.</td>
 <dl class="method">
 <dt id="singa.layer.Merge.forward">
 <code class="descname">forward</code><span class="sig-paren">(</span><em>flag</em>, <em>inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Merge.forward" title="Permalink to this definition">¶</a></dt>
-<dd></dd></dl>
+<dd><p>Merge all input tensors by summation.</p>
+<p>TODO(wangwei): support other element-wise merge operations, e.g., avg, count.</p>
+<p>Parameters: <strong>flag</strong> &#8211; not used; <strong>inputs</strong> &#8211; a list of tensors.</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Returns:</th><td class="field-body">A single tensor as the sum of all input tensors</td>
+</tr>
+</tbody>
+</table>
+</dd></dl>
 
 <dl class="method">
 <dt id="singa.layer.Merge.backward">
 <code class="descname">backward</code><span class="sig-paren">(</span><em>flag</em>, <em>grad</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Merge.backward" title="Permalink to this definition">¶</a></dt>
-<dd></dd></dl>
+<dd><p>Replicate the grad for each input source layer.</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><strong>grad</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) &#8211; </td>
+</tr>
+<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body">A list of replicated grad, one per source layer</td>
+</tr>
+</tbody>
+</table>
+</dd></dl>
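+<p>A minimal usage sketch of Merge (illustrative; the input_sample_shape keyword is assumed to follow the convention of the other layers):</p>
+<div class="highlight-python"><div class="highlight"><pre>from singa import layer
+from singa import tensor
+m = layer.Merge(&#39;merge&#39;, input_sample_shape=(3,))
+x1 = tensor.Tensor((2, 3))
+x2 = tensor.Tensor((2, 3))
+x1.set_value(1.0)
+x2.set_value(2.0)
+y = m.forward(False, [x1, x2])  # element-wise sum; every entry is 3.0
+grads = m.backward(False, y)    # a list of replicated grads, one per source
+</pre></div>
+</div>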
 
 </dd></dl>
 
@@ -718,13 +771,173 @@ feature size.</li>
 <dl class="method">
 <dt id="singa.layer.Split.forward">
 <code class="descname">forward</code><span class="sig-paren">(</span><em>flag</em>, <em>input</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Split.forward" title="Permalink to this definition">¶</a></dt>
-<dd></dd></dl>
+<dd><p>Replicate the input tensor into multiple tensors.</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
+<li><strong>flag</strong> &#8211; not used</li>
+<li><strong>input</strong> &#8211; a single input tensor</li>
+</ul>
+</td>
+</tr>
+<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">a list a output tensor (each one is a copy of the input)</p>
+</td>
+</tr>
+</tbody>
+</table>
+</dd></dl>
 
 <dl class="method">
 <dt id="singa.layer.Split.backward">
 <code class="descname">backward</code><span class="sig-paren">(</span><em>flag</em>, <em>grads</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Split.backward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Sum all grad tensors to generate a single output tensor.</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><strong>grads</strong> (<em>list of Tensor</em>) &#8211; </td>
+</tr>
+<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body">a single tensor as the sum of all grads</td>
+</tr>
+</tbody>
+</table>
+</dd></dl>
+
+</dd></dl>
+
+<dl class="class">
+<dt id="singa.layer.Concat">
+<em class="property">class </em><code class="descclassname">singa.layer.</code><code class="descname">Concat</code><span class="sig-paren">(</span><em>name</em>, <em>axis</em>, <em>input_sample_shapes=None</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Concat" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <a class="reference internal" href="#singa.layer.Layer" title="singa.layer.Layer"><code class="xref py py-class docutils literal"><span class="pre">singa.layer.Layer</span></code></a></p>
+<p>Concatenate tensors vertically (axis = 0) or horizontally (axis = 1).</p>
+<p>Currently, only tensors with 2 dimensions are supported.</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple">
+<li><strong>axis</strong> (<em>int</em>) &#8211; 0 to concat rows; 1 to concat columns;</li>
+<li><strong>input_sample_shapes</strong> &#8211; a list of sample shape tuples, one per input tensor</li>
+</ul>
+</td>
+</tr>
+</tbody>
+</table>
+<dl class="method">
+<dt id="singa.layer.Concat.forward">
+<code class="descname">forward</code><span class="sig-paren">(</span><em>flag</em>, <em>inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Concat.forward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Concatenate all input tensors.</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
+<li><strong>flag</strong> &#8211; same as Layer::forward()</li>
+<li><strong>inputs</strong> &#8211; a list of tensors</li>
+</ul>
+</td>
+</tr>
+<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">a single concatenated tensor</p>
+</td>
+</tr>
+</tbody>
+</table>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.layer.Concat.backward">
+<code class="descname">backward</code><span class="sig-paren">(</span><em>flag</em>, <em>dy</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Concat.backward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Backward propagate gradients through this layer.</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
+<li><strong>flag</strong> &#8211; same as Layer::backward()</li>
+<li><strong>dy</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) &#8211; the gradient tensors of y w.r.t objective loss</li>
+</ul>
+</td>
+</tr>
+<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">&lt;dx, []&gt;, dx is a list tensors for the gradient of the inputs; []
+is an empty list.</p>
+</td>
+</tr>
+</tbody>
+</table>
+</dd></dl>
+
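+<p>A minimal usage sketch of Concat (illustrative, following the signature and the 2-d restriction above):</p>
+<div class="highlight-python"><div class="highlight"><pre>from singa import layer
+from singa import tensor
+# concatenate two 2-d tensors along axis 1 (columns)
+cat = layer.Concat(&#39;concat&#39;, 1, input_sample_shapes=[(3,), (2,)])
+x1 = tensor.Tensor((4, 3))
+x2 = tensor.Tensor((4, 2))
+x1.set_value(0.1)
+x2.set_value(0.2)
+y = cat.forward(True, [x1, x2])  # a single tensor of shape (4, 5)
+</pre></div>
+</div>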
+</dd></dl>
+
+<dl class="class">
+<dt id="singa.layer.Slice">
+<em class="property">class </em><code class="descclassname">singa.layer.</code><code class="descname">Slice</code><span class="sig-paren">(</span><em>name</em>, <em>axis</em>, <em>slice_point</em>, <em>input_sample_shape=None</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Slice" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <a class="reference internal" href="#singa.layer.Layer" title="singa.layer.Layer"><code class="xref py py-class docutils literal"><span class="pre">singa.layer.Layer</span></code></a></p>
+<p>Slice the input tensor into multiple sub-tensors vertially (axis=0) or
+horizontally (axis=1).</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple">
+<li><strong>axis</strong> (<em>int</em>) &#8211; 0 to slice rows; 1 to slice columns;</li>
+<li><strong>slice_point</strong> (<em>list</em>) &#8211; positions along the axis at which to slice; there are n-1
+points for n sub-tensors;</li>
+<li><strong>input_sample_shape</strong> &#8211; input tensor sample shape</li>
+</ul>
+</td>
+</tr>
+</tbody>
+</table>
+<dl class="method">
+<dt id="singa.layer.Slice.get_output_sample_shape">
+<code class="descname">get_output_sample_shape</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Slice.get_output_sample_shape" title="Permalink to this definition">¶</a></dt>
 <dd></dd></dl>
 
+<dl class="method">
+<dt id="singa.layer.Slice.forward">
+<code class="descname">forward</code><span class="sig-paren">(</span><em>flag</em>, <em>x</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Slice.forward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Slice the input tensor on the given axis.</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
+<li><strong>flag</strong> &#8211; same as Layer::forward()</li>
+<li><strong>x</strong> &#8211; a single input tensor</li>
+</ul>
+</td>
+</tr>
+<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">a list a output tensor</p>
+</td>
+</tr>
+</tbody>
+</table>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.layer.Slice.backward">
+<code class="descname">backward</code><span class="sig-paren">(</span><em>flag</em>, <em>grads</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.layer.Slice.backward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Concatenate all grad tensors to generate a single output tensor.</p>
+<table class="docutils field-list" frame="void" rules="none">
+<col class="field-name" />
+<col class="field-body" />
+<tbody valign="top">
+<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
+<li><strong>flag</strong> &#8211; same as Layer::backward()</li>
+<li><strong>grads</strong> &#8211; a list of tensors, one for the gradient of one sliced tensor</li>
+</ul>
+</td>
+</tr>
+<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">a single tensor for the gradient of the original user, and an empty
+list.</p>
+</td>
+</tr>
+</tbody>
+</table>
+</dd></dl>
+
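+<p>A minimal usage sketch of Slice (illustrative, following the signature above):</p>
+<div class="highlight-python"><div class="highlight"><pre>from singa import layer
+from singa import tensor
+# slice a (4, 5) tensor into sub-tensors of 2 and 3 columns
+slc = layer.Slice(&#39;slice&#39;, 1, [2], input_sample_shape=(5,))
+x = tensor.Tensor((4, 5))
+x.uniform(-1, 1)
+y1, y2 = slc.forward(True, x)          # shapes (4, 2) and (4, 3)
+dx, dp = slc.backward(True, [y1, y2])  # dx has shape (4, 5); dp is []
+</pre></div>
+</div>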
 </dd></dl>
 
 <dl class="class">
@@ -764,7 +977,8 @@ feature size.</li>
 <col class="field-body" />
 <tbody valign="top">
 <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
-<li><strong>kTrain or kEval.</strong> (<em>flag,</em>) &#8211; </li>
+<li><strong>flag</strong> &#8211; True (kTrain) for training; False (kEval) for evaluation;
+other values for future use.</li>
 <li><strong>inputs</strong> &#8211; &lt;x1, x2,...xn, hx, cx&gt;, where xi is the input tensor for the i-th position, its shape is (batch_size, input_feature_length);
 the batch_size of xi must be &gt;= that of xi+1; hx is the initial
 hidden state of shape (num_stacks * bidirection?2:1, batch_size,
@@ -775,15 +989,11 @@ data.</li>
 </ul>
 </td>
 </tr>
-<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last"><dl class="docutils">
-<dt>&lt;y1, y2, ... yn, hy, cy&gt;, where yi is the output tensor for the i-th</dt>
-<dd><p class="first last">position, its shape is (batch_size,
+<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">&lt;y1, y2, ... yn, hy, cy&gt;, where yi is the output tensor for the i-th
+position, its shape is (batch_size,
 hidden_size * bidirection?2:1). hy is the final hidden state
 tensor. cx is the final cell state tensor. cx is only used for
 lstm.</p>
-</dd>
-</dl>
-</p>
 </td>
 </tr>
 </tbody>
@@ -811,15 +1021,11 @@ data.</li>
 </ul>
 </td>
 </tr>
-<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last"><dl class="docutils">
-<dt>&lt;dx1, dx2, ... dxn, dhx, dcx&gt;, where dxi is the gradient tensor for</dt>
-<dd><p class="first last">the i-th input, its shape is (batch_size,
+<tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last">&lt;dx1, dx2, ... dxn, dhx, dcx&gt;, where dxi is the gradient tensor for
+the i-th input, its shape is (batch_size,
 input_feature_length). dhx is the gradient for the initial
 hidden state. dcx is the gradient for the initial cell state,
 which is valid only for lstm.</p>
-</dd>
-</dl>
-</p>
 </td>
 </tr>
 </tbody>

Modified: incubator/singa/site/trunk/en/docs/loss.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/loss.html?rev=1776389&r1=1776388&r2=1776389&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/loss.html (original)
+++ incubator/singa/site/trunk/en/docs/loss.html Thu Dec 29 09:46:24 2016
@@ -98,7 +98,7 @@
 <li class="toctree-l2"><a class="reference internal" href="layer.html">Layer</a></li>
 <li class="toctree-l2"><a class="reference internal" href="net.html">FeedForward Net</a></li>
 <li class="toctree-l2"><a class="reference internal" href="initializer.html">Initializer</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Loss</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="">Loss</a></li>
 <li class="toctree-l2"><a class="reference internal" href="metric.html">Metric</a></li>
 <li class="toctree-l2"><a class="reference internal" href="optimizer.html">Optimizer</a></li>
 <li class="toctree-l2"><a class="reference internal" href="data.html">Data</a></li>
@@ -173,17 +173,17 @@
 from C++ implementation, and the rest are implemented directly using python
 Tensor.</p>
 <p>Example usage:</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">tensor</span>
-<span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">loss</span>
-<span class="kn">from</span> <span class="nn">singa.proto</span> <span class="k">import</span> <span class="n">model_pb2</span>
+<div class="highlight-python"><div class="highlight"><pre><span class="kn">from</span> <span class="nn">singa</span> <span class="kn">import</span> <span class="n">tensor</span>
+<span class="kn">from</span> <span class="nn">singa</span> <span class="kn">import</span> <span class="n">loss</span>
+<span class="kn">from</span> <span class="nn">singa.proto</span> <span class="kn">import</span> <span class="n">model_pb2</span>
 
 <span class="n">x</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">Tensor</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span> <span class="mi">5</span><span class="p">))</span>
-<span class="n">x</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>  <span class="c"># randomly genearte the prediction activation</span>
-<span class="n">y</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">from_numpy</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">int</span><span class="p">))</span>  <span class="c"># set the truth</span>
+<span class="n">x</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>  <span class="c1"># randomly genearte the prediction activation</span>
+<span class="n">y</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">from_numpy</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">int</span><span class="p">))</span>  <span class="c1"># set the truth</span>
 
 <span class="n">f</span> <span class="o">=</span> <span class="n">loss</span><span class="o">.</span><span class="n">SoftmaxCrossEntropy</span><span class="p">()</span>
-<span class="n">l</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">forward</span><span class="p">(</span><span class="n">model_pb2</span><span class="o">.</span><span class="n">kTrain</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>  <span class="c"># l is tensor with 3 loss values</span>
-<span class="n">g</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">backward</span><span class="p">()</span>  <span class="c"># g is a tensor containing all gradients of x w.r.t l</span>
+<span class="n">l</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">forward</span><span class="p">(</span><span class="n">model_pb2</span><span class="o">.</span><span class="n">kTrain</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>  <span class="c1"># l is tensor with 3 loss values</span>
+<span class="n">g</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">backward</span><span class="p">()</span>  <span class="c1"># g is a tensor containing all gradients of x w.r.t l</span>
 </pre></div>
 </div>
 <dl class="class">
@@ -260,6 +260,11 @@ function must be called before calling f
 <p>This loss function is a combination of SoftMax and Cross-Entropy loss.</p>
 <p>It converts the inputs via the SoftMax function and then
 computes the cross-entropy loss against the ground truth values.</p>
+<p>For each sample, the ground truth could be an integer as the label index,
+or a binary array indicating the label distribution; the ground truth
+tensor is thus either 1d or 2d.
+The data/feature tensor could be 1d (for a single sample) or 2d (for a
+batch of samples).</p>
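+<p>Below is a minimal sketch of both cases, following the example usage at the
+top of this page (the float dtype used for the 2d ground truth here is an
+assumption, not specified by the API):</p>
+<div class="highlight-python"><div class="highlight"><pre>from singa import tensor
+from singa import loss
+from singa.proto import model_pb2
+import numpy as np
+
+x = tensor.Tensor((3, 5))  # a batch of 3 samples with 5 classes each
+x.uniform(0, 1)
+f = loss.SoftmaxCrossEntropy()
+
+# case 1: 1d ground truth, one integer label index per sample
+y1 = tensor.from_numpy(np.array([0, 1, 3], dtype=np.int))
+l1 = f.forward(model_pb2.kTrain, x, y1)
+
+# case 2: 2d ground truth, one binary label distribution per sample
+# (np.float32 is assumed here)
+y2 = tensor.from_numpy(np.array([[1, 0, 0, 0, 0],
+                                 [0, 1, 0, 0, 0],
+                                 [0, 0, 0, 1, 0]], dtype=np.float32))
+l2 = f.forward(model_pb2.kTrain, x, y2)
+</pre></div>
+</div>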
 </dd></dl>
 
 <dl class="class">

Modified: incubator/singa/site/trunk/en/docs/metric.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/metric.html?rev=1776389&r1=1776388&r2=1776389&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/metric.html (original)
+++ incubator/singa/site/trunk/en/docs/metric.html Thu Dec 29 09:46:24 2016
@@ -99,7 +99,7 @@
 <li class="toctree-l2"><a class="reference internal" href="net.html">FeedForward Net</a></li>
 <li class="toctree-l2"><a class="reference internal" href="initializer.html">Initializer</a></li>
 <li class="toctree-l2"><a class="reference internal" href="loss.html">Loss</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Metric</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="">Metric</a></li>
 <li class="toctree-l2"><a class="reference internal" href="optimizer.html">Optimizer</a></li>
 <li class="toctree-l2"><a class="reference internal" href="data.html">Data</a></li>
 <li class="toctree-l2"><a class="reference internal" href="image_tool.html">Image Tool</a></li>
@@ -173,16 +173,16 @@
 performance. The specific metric classes could be converted from C++
 implementation or implemented directly using Python.</p>
 <p>Example usage:</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">tensor</span>
-<span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">metric</span>
+<div class="highlight-python"><div class="highlight"><pre><span class="kn">from</span> <span class="nn">singa</span> <span class="kn">import</span> <span class="n">tensor</span>
+<span class="kn">from</span> <span class="nn">singa</span> <span class="kn">import</span> <span class="n">metric</span>
 
 <span class="n">x</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">Tensor</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span> <span class="mi">5</span><span class="p">))</span>
-<span class="n">x</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>  <span class="c"># randomly genearte the prediction activation</span>
-<span class="n">x</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">SoftMax</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>  <span class="c"># normalize the prediction into probabilities</span>
-<span class="n">y</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">from_numpy</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">int</span><span class="p">))</span>  <span class="c"># set the truth</span>
+<span class="n">x</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>  <span class="c1"># randomly genearte the prediction activation</span>
+<span class="n">x</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">SoftMax</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>  <span class="c1"># normalize the prediction into probabilities</span>
+<span class="n">y</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">from_numpy</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">int</span><span class="p">))</span>  <span class="c1"># set the truth</span>
 
 <span class="n">f</span> <span class="o">=</span> <span class="n">metric</span><span class="o">.</span><span class="n">Accuracy</span><span class="p">()</span>
-<span class="n">acc</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>  <span class="c"># averaged accuracy over all 3 samples in x</span>
+<span class="n">acc</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>  <span class="c1"># averaged accuracy over all 3 samples in x</span>
 </pre></div>
 </div>
 <dl class="class">

Modified: incubator/singa/site/trunk/en/docs/net.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/net.html?rev=1776389&r1=1776388&r2=1776389&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/net.html (original)
+++ incubator/singa/site/trunk/en/docs/net.html Thu Dec 29 09:46:24 2016
@@ -96,7 +96,7 @@
 <li class="toctree-l2"><a class="reference internal" href="device.html">Device</a></li>
 <li class="toctree-l2"><a class="reference internal" href="tensor.html">Tensor</a></li>
 <li class="toctree-l2"><a class="reference internal" href="layer.html">Layer</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">FeedForward Net</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="">FeedForward Net</a></li>
 <li class="toctree-l2"><a class="reference internal" href="initializer.html">Initializer</a></li>
 <li class="toctree-l2"><a class="reference internal" href="loss.html">Loss</a></li>
 <li class="toctree-l2"><a class="reference internal" href="metric.html">Metric</a></li>

Modified: incubator/singa/site/trunk/en/docs/neural-net.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/neural-net.html?rev=1776389&r1=1776388&r2=1776389&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/neural-net.html (original)
+++ incubator/singa/site/trunk/en/docs/neural-net.html Thu Dec 29 09:46:24 2016
@@ -168,33 +168,33 @@ category.</p>
 </div><p>Feed-forward models, e.g., CNN and MLP, can be easily configured, as their layer
 connections contain no cycles. The
 configuration for the MLP model shown in Figure 1 is as follows,</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">net</span> <span class="p">{</span>
-  <span class="n">layer</span> <span class="p">{</span>
-    <span class="n">name</span> <span class="p">:</span> <span class="s">&#39;data&quot;</span>
-    <span class="nb">type</span> <span class="p">:</span> <span class="n">kData</span>
-  <span class="p">}</span>
-  <span class="n">layer</span> <span class="p">{</span>
-    <span class="n">name</span> <span class="p">:</span> <span class="s">&#39;image&quot;</span>
-    <span class="nb">type</span> <span class="p">:</span> <span class="n">kImage</span>
-    <span class="n">srclayer</span><span class="p">:</span> <span class="s">&#39;data&#39;</span>
-  <span class="p">}</span>
-  <span class="n">layer</span> <span class="p">{</span>
-    <span class="n">name</span> <span class="p">:</span> <span class="s">&#39;label&quot;</span>
-    <span class="nb">type</span> <span class="p">:</span> <span class="n">kLabel</span>
-    <span class="n">srclayer</span><span class="p">:</span> <span class="s">&#39;data&#39;</span>
-  <span class="p">}</span>
-  <span class="n">layer</span> <span class="p">{</span>
-    <span class="n">name</span> <span class="p">:</span> <span class="s">&#39;hidden&quot;</span>
-    <span class="nb">type</span> <span class="p">:</span> <span class="n">kHidden</span>
-    <span class="n">srclayer</span><span class="p">:</span> <span class="s">&#39;image&#39;</span>
-  <span class="p">}</span>
-  <span class="n">layer</span> <span class="p">{</span>
-    <span class="n">name</span> <span class="p">:</span> <span class="s">&#39;softmax&quot;</span>
-    <span class="nb">type</span> <span class="p">:</span> <span class="n">kSoftmaxLoss</span>
-    <span class="n">srclayer</span><span class="p">:</span> <span class="s">&#39;hidden&#39;</span>
-    <span class="n">srclayer</span><span class="p">:</span> <span class="s">&#39;label&#39;</span>
-  <span class="p">}</span>
-<span class="p">}</span>
+<div class="highlight-python"><div class="highlight"><pre>net {
+  layer {
+    name : &#39;data&#39;
+    type : kData
+  }
+  layer {
+    name : &#39;image&#39;
+    type : kImage
+    srclayer: &#39;data&#39;
+  }
+  layer {
+    name : &#39;label&#39;
+    type : kLabel
+    srclayer: &#39;data&#39;
+  }
+  layer {
+    name : &#39;hidden&#39;
+    type : kHidden
+    srclayer: &#39;image&#39;
+  }
+  layer {
+    name : &#39;softmax&#39;
+    type : kSoftmaxLoss
+    srclayer: &#39;hidden&#39;
+    srclayer: &#39;label&#39;
+  }
+}
 </pre></div>
 </div>
 </div>
@@ -209,25 +209,25 @@ connections, as shown in Figure 3a. In o
 layer field should include each other&#8217;s name.
 The full <a class="reference external" href="rbm.html">RBM example</a> has
 detailed neural net configuration for an RBM model, which looks like</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">net</span> <span class="p">{</span>
-  <span class="n">layer</span> <span class="p">{</span>
-    <span class="n">name</span> <span class="p">:</span> <span class="s">&quot;vis&quot;</span>
-    <span class="nb">type</span> <span class="p">:</span> <span class="n">kVisLayer</span>
-    <span class="n">param</span> <span class="p">{</span>
-      <span class="n">name</span> <span class="p">:</span> <span class="s">&quot;w1&quot;</span>
-    <span class="p">}</span>
-    <span class="n">srclayer</span><span class="p">:</span> <span class="s">&quot;hid&quot;</span>
-  <span class="p">}</span>
-  <span class="n">layer</span> <span class="p">{</span>
-    <span class="n">name</span> <span class="p">:</span> <span class="s">&quot;hid&quot;</span>
-    <span class="nb">type</span> <span class="p">:</span> <span class="n">kHidLayer</span>
-    <span class="n">param</span> <span class="p">{</span>
-      <span class="n">name</span> <span class="p">:</span> <span class="s">&quot;w2&quot;</span>
-      <span class="n">share_from</span><span class="p">:</span> <span class="s">&quot;w1&quot;</span>
-    <span class="p">}</span>
-    <span class="n">srclayer</span><span class="p">:</span> <span class="s">&quot;vis&quot;</span>
-  <span class="p">}</span>
-<span class="p">}</span>
+<div class="highlight-python"><div class="highlight"><pre>net {
+  layer {
+    name : &quot;vis&quot;
+    type : kVisLayer
+    param {
+      name : &quot;w1&quot;
+    }
+    srclayer: &quot;hid&quot;
+  }
+  layer {
+    name : &quot;hid&quot;
+    type : kHidLayer
+    param {
+      name : &quot;w2&quot;
+      share_from: &quot;w1&quot;
+    }
+    srclayer: &quot;vis&quot;
+  }
+}
 </pre></div>
 </div>
 </div>
@@ -249,10 +249,10 @@ layers except the data layer, loss layer
 redundant configurations for the shared layers, users can use the <code class="docutils literal"><span class="pre">exclude</span></code>
 field to filter a layer in the neural net, e.g., the following layer will be
 filtered when creating the testing <code class="docutils literal"><span class="pre">NeuralNet</span></code>.</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">layer</span> <span class="p">{</span>
-  <span class="o">...</span>
-  <span class="n">exclude</span> <span class="p">:</span> <span class="n">kTest</span> <span class="c"># filter this layer for creating test net</span>
-<span class="p">}</span>
+<div class="highlight-python"><div class="highlight"><pre>layer {
+  ...
+  exclude : kTest # filter this layer for creating test net
+}
 </pre></div>
 </div>
 </div>
@@ -285,32 +285,32 @@ partitioned into two sub-layers.</p>
 <li><p class="first">Partitioning each singe layer into sub-layers on batch dimension (see
 below). It is enabled by configuring the partition dimension of the layer to
 0, e.g.,</p>
-<div class="highlight-default"><div class="highlight"><pre> <span class="c"># with other fields omitted</span>
- <span class="n">layer</span> <span class="p">{</span>
-   <span class="n">partition_dim</span><span class="p">:</span> <span class="mi">0</span>
- <span class="p">}</span>
+<div class="highlight-python"><div class="highlight"><pre> # with other fields omitted
+ layer {
+   partition_dim: 0
+ }
 </pre></div>
 </div>
 </li>
 <li><p class="first">Partitioning each singe layer into sub-layers on feature dimension (see
 below).  It is enabled by configuring the partition dimension of the layer to
 1, e.g.,</p>
-<div class="highlight-default"><div class="highlight"><pre> <span class="c"># with other fields omitted</span>
- <span class="n">layer</span> <span class="p">{</span>
-   <span class="n">partition_dim</span><span class="p">:</span> <span class="mi">1</span>
- <span class="p">}</span>
+<div class="highlight-python"><div class="highlight"><pre> # with other fields omitted
+ layer {
+   partition_dim: 1
+ }
 </pre></div>
 </div>
 </li>
 <li><p class="first">Partitioning all layers into different subsets. It is enabled by
 configuring the location ID of a layer, e.g.,</p>
-<div class="highlight-default"><div class="highlight"><pre> <span class="c"># with other fields omitted</span>
- <span class="n">layer</span> <span class="p">{</span>
-   <span class="n">location</span><span class="p">:</span> <span class="mi">1</span>
- <span class="p">}</span>
- <span class="n">layer</span> <span class="p">{</span>
-   <span class="n">location</span><span class="p">:</span> <span class="mi">0</span>
- <span class="p">}</span>
+<div class="highlight-python"><div class="highlight"><pre> # with other fields omitted
+ layer {
+   location: 1
+ }
+ layer {
+   location: 0
+ }
 </pre></div>
 </div>
 </li>
@@ -320,21 +320,21 @@ configuring the location ID of a layer,
 useful for large models. An example application is to implement the
 <a class="reference external" href="http://arxiv.org/abs/1404.5997">idea proposed by Alex</a>.
 Hybrid partitioning is configured like,</p>
-<div class="highlight-default"><div class="highlight"><pre> <span class="c"># with other fields omitted</span>
- <span class="n">layer</span> <span class="p">{</span>
-   <span class="n">location</span><span class="p">:</span> <span class="mi">1</span>
- <span class="p">}</span>
- <span class="n">layer</span> <span class="p">{</span>
-   <span class="n">location</span><span class="p">:</span> <span class="mi">0</span>
- <span class="p">}</span>
- <span class="n">layer</span> <span class="p">{</span>
-   <span class="n">partition_dim</span><span class="p">:</span> <span class="mi">0</span>
-   <span class="n">location</span><span class="p">:</span> <span class="mi">0</span>
- <span class="p">}</span>
- <span class="n">layer</span> <span class="p">{</span>
-   <span class="n">partition_dim</span><span class="p">:</span> <span class="mi">1</span>
-   <span class="n">location</span><span class="p">:</span> <span class="mi">0</span>
- <span class="p">}</span>
+<div class="highlight-python"><div class="highlight"><pre> # with other fields omitted
+ layer {
+   location: 1
+ }
+ layer {
+   location: 0
+ }
+ layer {
+   partition_dim: 0
+   location: 0
+ }
+ layer {
+   partition_dim: 1
+   location: 0
+ }
 </pre></div>
 </div>
 </li>
@@ -367,7 +367,7 @@ gradients will be averaged by the stub o
 <span id="advanced-user-guide"></span><h2>Advanced user guide<a class="headerlink" href="#advanced-user-guide" title="Permalink to this headline">¶</a></h2>
 <div class="section" id="creation">
 <span id="creation"></span><h3>Creation<a class="headerlink" href="#creation" title="Permalink to this headline">¶</a></h3>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">static</span> <span class="n">NeuralNet</span><span class="o">*</span> <span class="n">NeuralNet</span><span class="p">::</span><span class="n">Create</span><span class="p">(</span><span class="n">const</span> <span class="n">NetProto</span><span class="o">&amp;</span> <span class="n">np</span><span class="p">,</span> <span class="n">Phase</span> <span class="n">phase</span><span class="p">,</span> <span class="nb">int</span> <span class="n">num</span><span class="p">);</span>
+<div class="highlight-python"><div class="highlight"><pre>static NeuralNet* NeuralNet::Create(const NetProto&amp; np, Phase phase, int num);
 </pre></div>
 </div>
 <p>The above function creates a <code class="docutils literal"><span class="pre">NeuralNet</span></code> for a given phase, and returns a
@@ -380,23 +380,23 @@ function takes in the full net configura
 validation and test.  It removes layers for phases other than the specified
 phase based on the <code class="docutils literal"><span class="pre">exclude</span></code> field in
 <a class="reference external" href="layer.html">layer configuration</a>:</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">layer</span> <span class="p">{</span>
-  <span class="o">...</span>
-  <span class="n">exclude</span> <span class="p">:</span> <span class="n">kTest</span> <span class="c"># filter this layer for creating test net</span>
-<span class="p">}</span>
+<div class="highlight-python"><div class="highlight"><pre>layer {
+  ...
+  exclude : kTest # filter this layer for creating test net
+}
 </pre></div>
 </div>
 <p>The filtered net configuration is passed to the constructor of <code class="docutils literal"><span class="pre">NeuralNet</span></code>:</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">NeuralNet</span><span class="p">::</span><span class="n">NeuralNet</span><span class="p">(</span><span class="n">NetProto</span> <span class="n">netproto</span><span class="p">,</span> <span class="nb">int</span> <span class="n">npartitions</span><span class="p">);</span>
+<div class="highlight-python"><div class="highlight"><pre>NeuralNet::NeuralNet(NetProto netproto, int npartitions);
 </pre></div>
 </div>
 <p>The constructor first creates a graph representing the net structure in</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">Graph</span><span class="o">*</span> <span class="n">NeuralNet</span><span class="p">::</span><span class="n">CreateGraph</span><span class="p">(</span><span class="n">const</span> <span class="n">NetProto</span><span class="o">&amp;</span> <span class="n">netproto</span><span class="p">,</span> <span class="nb">int</span> <span class="n">npartitions</span><span class="p">);</span>
+<div class="highlight-python"><div class="highlight"><pre>Graph* NeuralNet::CreateGraph(const NetProto&amp; netproto, int npartitions);
 </pre></div>
 </div>
 <p>Next, it creates a layer for each node and connects layers if their nodes are
 connected.</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">void</span> <span class="n">NeuralNet</span><span class="p">::</span><span class="n">CreateNetFromGraph</span><span class="p">(</span><span class="n">Graph</span><span class="o">*</span> <span class="n">graph</span><span class="p">,</span> <span class="nb">int</span> <span class="n">npartitions</span><span class="p">);</span>
+<div class="highlight-python"><div class="highlight"><pre>void NeuralNet::CreateNetFromGraph(Graph* graph, int npartitions);
 </pre></div>
 </div>
 <p>Since the <code class="docutils literal"><span class="pre">NeuralNet</span></code> instance may be shared among multiple workers, the
@@ -408,12 +408,12 @@ connected.</p>
 is enabled by first sharing the Param configuration (in <code class="docutils literal"><span class="pre">NeuralNet::Create</span></code>)
 to create two similar (e.g., the same shape) Param objects, and then calling
 (in <code class="docutils literal"><span class="pre">NeuralNet::CreateNetFromGraph</span></code>),</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">void</span> <span class="n">Param</span><span class="p">::</span><span class="n">ShareFrom</span><span class="p">(</span><span class="n">const</span> <span class="n">Param</span><span class="o">&amp;</span> <span class="n">from</span><span class="p">);</span>
+<div class="highlight-python"><div class="highlight"><pre>void Param::ShareFrom(const Param&amp; from);
 </pre></div>
 </div>
 <p>It is also possible to share <code class="docutils literal"><span class="pre">Param</span></code>s of two nets, e.g., sharing parameters of
 the training net and the test net,</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">void</span> <span class="n">NeuralNet</span><span class="p">:</span><span class="n">ShareParamsFrom</span><span class="p">(</span><span class="n">NeuralNet</span><span class="o">*</span> <span class="n">other</span><span class="p">);</span>
+<div class="highlight-python"><div class="highlight"><pre>void NeuralNet:ShareParamsFrom(NeuralNet* other);
 </pre></div>
 </div>
 <p>It will call <code class="docutils literal"><span class="pre">Param::ShareFrom</span></code> for each Param object.</p>
@@ -422,10 +422,10 @@ the training net and the test net,</p>
 <span id="access-functions"></span><h3>Access functions<a class="headerlink" href="#access-functions" title="Permalink to this headline">¶</a></h3>
 <p><code class="docutils literal"><span class="pre">NeuralNet</span></code> provides a couple of access function to get the layers and params
 of the net:</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="n">const</span> <span class="n">std</span><span class="p">::</span><span class="n">vector</span><span class="o">&lt;</span><span class="n">Layer</span><span class="o">*&gt;&amp;</span> <span class="n">layers</span><span class="p">()</span> <span class="n">const</span><span class="p">;</span>
-<span class="n">const</span> <span class="n">std</span><span class="p">::</span><span class="n">vector</span><span class="o">&lt;</span><span class="n">Param</span><span class="o">*&gt;&amp;</span> <span class="n">params</span><span class="p">()</span> <span class="n">const</span> <span class="p">;</span>
-<span class="n">Layer</span><span class="o">*</span> <span class="n">name2layer</span><span class="p">(</span><span class="n">string</span> <span class="n">name</span><span class="p">)</span> <span class="n">const</span><span class="p">;</span>
-<span class="n">Param</span><span class="o">*</span> <span class="n">paramid2param</span><span class="p">(</span><span class="nb">int</span> <span class="nb">id</span><span class="p">)</span> <span class="n">const</span><span class="p">;</span>
+<div class="highlight-python"><div class="highlight"><pre>const std::vector&lt;Layer*&gt;&amp; layers() const;
+const std::vector&lt;Param*&gt;&amp; params() const;
+Layer* name2layer(string name) const;
+Param* paramid2param(int id) const;
 </pre></div>
 </div>
 </div>

Modified: incubator/singa/site/trunk/en/docs/optimizer.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/optimizer.html?rev=1776389&r1=1776388&r2=1776389&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/optimizer.html (original)
+++ incubator/singa/site/trunk/en/docs/optimizer.html Thu Dec 29 09:46:24 2016
@@ -100,7 +100,7 @@
 <li class="toctree-l2"><a class="reference internal" href="initializer.html">Initializer</a></li>
 <li class="toctree-l2"><a class="reference internal" href="loss.html">Loss</a></li>
 <li class="toctree-l2"><a class="reference internal" href="metric.html">Metric</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Optimizer</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="">Optimizer</a></li>
 <li class="toctree-l2"><a class="reference internal" href="data.html">Data</a></li>
 <li class="toctree-l2"><a class="reference internal" href="image_tool.html">Image Tool</a></li>
 <li class="toctree-l2"><a class="reference internal" href="snapshot.html">Snapshot</a></li>
@@ -171,17 +171,17 @@
 <span id="optimizer"></span><h1>Optimizer<a class="headerlink" href="#module-singa.optimizer" title="Permalink to this headline">¶</a></h1>
 <p>This module includes a set of optimizers for updating model parameters.</p>
 <p>Example usage:</p>
-<div class="highlight-default"><div class="highlight"><pre><span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">optimizer</span>
-<span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">tensor</span>
+<div class="highlight-python"><div class="highlight"><pre><span class="kn">from</span> <span class="nn">singa</span> <span class="kn">import</span> <span class="n">optimizer</span>
+<span class="kn">from</span> <span class="nn">singa</span> <span class="kn">import</span> <span class="n">tensor</span>
 
-<span class="n">sgd</span> <span class="o">=</span> <span class="n">optimizer</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">lr</span><span class="o">=</span><span class="mf">0.01</span><span class="p">,</span> <span class="n">momentum</span><span class="o">=</span><span class="mf">0.9</span><span class="p">,</span> <span class="n">weight_decay</span><span class="o">=</span><span class="mi">1</span><span class="n">e</span><span class="o">-</span><span class="mi">4</span><span class="p">)</span>
+<span class="n">sgd</span> <span class="o">=</span> <span class="n">optimizer</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">lr</span><span class="o">=</span><span class="mf">0.01</span><span class="p">,</span> <span class="n">momentum</span><span class="o">=</span><span class="mf">0.9</span><span class="p">,</span> <span class="n">weight_decay</span><span class="o">=</span><span class="mf">1e-4</span><span class="p">)</span>
 <span class="n">p</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">Tensor</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span><span class="mi">5</span><span class="p">))</span>
 <span class="n">p</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
 <span class="n">g</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">Tensor</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span><span class="mi">5</span><span class="p">))</span>
 <span class="n">g</span><span class="o">.</span><span class="n">gaussian</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mf">0.01</span><span class="p">)</span>
 
-<span class="n">sgd</span><span class="o">.</span><span class="n">apply</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">g</span><span class="p">,</span> <span class="n">p</span><span class="p">,</span> <span class="s">&#39;param&#39;</span><span class="p">)</span>  <span class="c"># use the global lr=0.1 for epoch 1</span>
-<span class="n">sgd</span><span class="o">.</span><span class="n">apply_with_lr</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mf">0.03</span><span class="p">,</span> <span class="n">g</span><span class="p">,</span> <span class="n">p</span><span class="p">,</span> <span class="s">&#39;param&#39;</span><span class="p">)</span>  <span class="c"># use lr=0.03 for epoch 2</span>
+<span class="n">sgd</span><span class="o">.</span><span class="n">apply</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">g</span><span class="p">,</span> <span class="n">p</span><span class="p">,</span> <span class="s1">&#39;param&#39;</span><span class="p">)</span>  <span class="c1"># use the global lr=0.1 for epoch 1</span>
+<span class="n">sgd</span><span class="o">.</span><span class="n">apply_with_lr</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mf">0.03</span><span class="p">,</span> <span class="n">g</span><span class="p">,</span> <span class="n">p</span><span class="p">,</span> <span class="s1">&#39;param&#39;</span><span class="p">)</span>  <span class="c1"># use lr=0.03 for epoch 2</span>
 </pre></div>
 </div>
 <dl class="class">

Modified: incubator/singa/site/trunk/en/docs/snapshot.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/snapshot.html?rev=1776389&r1=1776388&r2=1776389&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/snapshot.html (original)
+++ incubator/singa/site/trunk/en/docs/snapshot.html Thu Dec 29 09:46:24 2016
@@ -103,7 +103,7 @@
 <li class="toctree-l2"><a class="reference internal" href="optimizer.html">Optimizer</a></li>
 <li class="toctree-l2"><a class="reference internal" href="data.html">Data</a></li>
 <li class="toctree-l2"><a class="reference internal" href="image_tool.html">Image Tool</a></li>
-<li class="toctree-l2 current"><a class="current reference internal" href="#">Snapshot</a></li>
+<li class="toctree-l2 current"><a class="current reference internal" href="">Snapshot</a></li>
 <li class="toctree-l2"><a class="reference internal" href="converter.html">Caffe Converter</a></li>
 <li class="toctree-l2"><a class="reference internal" href="utils.html">Utils</a></li>
 <li class="toctree-l2"><a class="reference internal" href="examples/index.html">Examples</a></li>