Posted to commits@singa.apache.org by wa...@apache.org on 2017/02/13 05:13:22 UTC

svn commit: r1782721 [2/24] - in /incubator/singa/site/trunk/v1.1.0: ./ _sources/ _sources/community/ _sources/develop/ _sources/docs/ _sources/docs/examples/ _sources/docs/examples/caffe/ _sources/docs/examples/char-rnn/ _sources/docs/examples/cifar10...

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/installation.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/installation.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/installation.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/installation.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,505 @@
+# Installation
+
+
+## From wheel
+
+Users can download the pre-compiled wheel files to install PySINGA.
+PySINGA has been tested on Linux (Ubuntu 14.04 and 16.04) and Mac OS (10.11 and 10.12).
+
+### Pre-requisite
+
+Python 2.7 and pip are required.
+
+    # For Ubuntu
+    $ sudo apt-get install python2.7-dev python-pip
+
+    # For Mac
+    $ brew tap homebrew/python
+    $ brew install python
+
+Note that on Mac OS you need to configure the Python paths correctly if multiple Python versions are installed.
+Refer to the FAQ for the errors and solutions.
+
+### Virtual environment
+
+Users are recommended to use PySINGA in a Python virtual environment.
+
+To use pip with a virtual environment,
+
+    # install virtualenv
+    $ pip install virtualenv
+    $ virtualenv pysinga
+    $ source pysinga/bin/activate
+
+To use Anaconda with a virtual environment,
+
+    $ conda create --name pysinga python=2
+    $ source activate pysinga
+
+
+Note that in a Python virtual environment, you may need to reset `PYTHONPATH` to empty
+to avoid conflicts between the system path and the virtual environment path.
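+
+For example, to clear it for the current shell session,
+
+    $ export PYTHONPATH=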
+
+
+### Instructions
+
+Currently, the following wheel files are available,
+
+<table border="1">
+  <tr>
+    <th>OS</th>
+    <th>Device</th>
+    <th>CUDA/cuDNN</th>
+    <th>Link</th>
+  </tr>
+  <tr>
+    <td>Ubuntu14.04</td>
+    <td>CPU</td>
+    <td>-</td>
+    <td><a href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux/latest/ubuntu14.04-cpp/">latest</a>, <a href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux">history</a></td>
+  </tr>
+  <tr>
+    <td>Ubuntu14.04</td>
+    <td>GPU</td>
+    <td>CUDA7.5+cuDNN4</td>
+    <td><a href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux/latest/ubuntu14.04-cuda7.5-cudnn4/">latest</a>, <a href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux">history</a></td>
+  </tr>
+  <tr>
+    <td>Ubuntu14.04</td>
+    <td>GPU</td>
+    <td>CUDA7.5+cuDNN5</td>
+    <td><a href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux/latest/ubuntu14.04-cuda7.5-cudnn5/">latest</a>, <a href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux">history</a></td>
+  </tr>
+  <tr>
+    <td>Ubuntu16.04</td>
+    <td>CPU</td>
+    <td>-</td>
+    <td><a href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux/latest/ubuntu16.04-cpp/">latest</a>, <a href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux">history</a></td>
+  </tr>
+  <tr>
+    <td>Ubuntu16.04</td>
+    <td>GPU</td>
+    <td>CUDA8.0+cuDNN5</td>
+    <td><a href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux/latest/ubuntu16.04-cuda8.0-cudnn5/">latest</a>, <a href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/linux">history</a></td>
+  </tr>
+  <tr>
+    <td>MacOSX10.11</td>
+    <td>CPU</td>
+    <td>-</td>
+    <td><a href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/macosx/latest/macosx10.11-cpp/">latest</a>, <a href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/macosx">history</a></td>
+  </tr>
+  <tr>
+    <td>MacOSX10.12</td>
+    <td>CPU</td>
+    <td>-</td>
+    <td><a href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/macosx/latest/macosx10.12-cpp/">latest</a>, <a href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/wheel/macosx">history</a></td>
+  </tr>
+</table>
+
+Download the whl file and execute the following command to install PySINGA,
+
+    $ pip install --upgrade <path to the wheel file>
+
+To install a wheel file compiled with CUDA, you need to install CUDA and export `LD_LIBRARY_PATH` to include the cuDNN folder before running the above instruction.
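+
+For example, if cuDNN was decompressed into /home/<yourname>/local/cudnn (a hypothetical location, consistent with the FAQ below),
+
+    $ export LD_LIBRARY_PATH=/home/<yourname>/local/cudnn/lib64:$LD_LIBRARY_PATH
+    $ pip install --upgrade <path to the wheel file>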
+
+If you have sudo rights, you can run the above commands using `sudo pip install` without a Python virtual environment.
+The `--upgrade` option may sometimes cause errors, in which case you can omit it.
+
+## From Debian Package
+
+The following Debian packages (for the amd64 architecture) are available,
+
+<table border="1">
+  <tr>
+    <th>OS</th>
+    <th>Device</th>
+    <th>CUDA/cuDNN</th>
+    <th>Link</th>
+  </tr>
+  <tr>
+    <td>Ubuntu14.04</td>
+    <td>CPU</td>
+    <td>-</td>
+    <td><a href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/debian/latest/ubuntu14.04-cpp/python-singa.deb">latest</a>, <a href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/debian">history</a></td>
+  </tr>
+  <tr>
+    <td>Ubuntu14.04</td>
+    <td>GPU</td>
+    <td>CUDA7.5+cuDNN4</td>
+    <td>coming soon</td>
+  </tr>
+  <tr>
+    <td>Ubuntu14.04</td>
+    <td>GPU</td>
+    <td>CUDA7.5+cuDNN5</td>
+    <td>coming soon</td>
+  </tr>
+  <tr>
+    <td>Ubuntu16.04</td>
+    <td>CPU</td>
+    <td>-</td>
+    <td><a href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/debian/latest/ubuntu16.04-cpp/python-singa.deb">latest</a>, <a href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/debian">history</a></td>
+  </tr>
+  <tr>
+    <td>Ubuntu16.04</td>
+    <td>GPU</td>
+    <td>CUDA8.0+cuDNN5</td>
+    <td>coming soon</td>
+  </tr>
+</table>
+
+Download the deb file and install it via
+
+    apt-get install <path to the deb file, e.g., ./python-singa.deb>
+
+Note that the path must include `./` if the file is inside the current folder.
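+
+For example, if python-singa.deb is in the current folder,
+
+    $ sudo apt-get install ./python-singa.deb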
+
+## From source
+
+The source files can be downloaded either as a [tar.gz file](https://dist.apache.org/repos/dist/dev/incubator/singa/) or as a git repo
+
+    $ git clone https://github.com/apache/incubator-singa.git
+    $ cd incubator-singa/
+
+### Pre-requisite
+
+The following libraries are required
+* cmake (>=2.8)
+* gcc (>=4.8.1) or Clang
+* google protobuf (>=2.5,<3)
+* blas (tested with openblas >=0.2.10)
+* swig (>=3.0.10) for compiling PySINGA
+* numpy (>=1.11.0) for compiling PySINGA
+
+The following libraries are optional
+* opencv (tested with 2.4.8)
+* lmdb (tested with 0.9)
+* glog
+
+### Instructions
+
+1. create a `build` folder inside incubator-singa and go into that folder
+2. run `cmake [options] ..`;
+  by default all options are OFF except `USE_PYTHON`
+
+    * `USE_MODULES=ON`, used if protobuf and blas are not installed beforehand
+    * `USE_CUDA=ON`, used if CUDA and cuDNN are available
+    * `USE_PYTHON=ON`, used for compiling PySINGA
+    * `USE_OPENCL=ON`, used for compiling with OpenCL support
+3. compile the code, e.g., `make`
+4. go to the `python` subfolder
+5. run `pip install .`
+6. [optional] run `python setup.py bdist_wheel` to generate the wheel file
+
+Steps 4 and 5 install PySINGA.
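+Putting these steps together, a typical build with the default options looks like,
+
+    $ cd incubator-singa
+    $ mkdir build
+    $ cd build
+    $ cmake ..
+    $ make
+    $ cd python
+    $ pip install .
+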
+Details on the installation of dependent libraries and the instructions for each OS are given in the following sections.
+
+### Linux and Mac OS
+
+Most of the dependent libraries can be installed from source or via package managers like
+apt-get, yum, and homebrew. Please refer to the FAQ for problems caused by the path settings of the dependent libraries.
+
+The following instructions are tested on Ubuntu 14.04 and 16.04 for installing dependent libraries.
+
+    # required libraries
+    $ sudo apt-get install libprotobuf-dev libopenblas-dev protobuf-compiler
+
+    # optional libraries
+    $ sudo apt-get install python2.7-dev python-pip python-numpy
+    $ sudo apt-get install libopencv-dev libgoogle-glog-dev liblmdb-dev
+
+The following instructions are tested on Mac OS X (10.11 and 10.12) for installing dependent libraries.
+
+    # required libraries
+    $ brew tap homebrew/science
+    $ brew install openblas
+    $ brew install protobuf260
+
+    # optional libraries
+    $ brew tap homebrew/python
+    $ brew install python
+    $ brew install opencv
+    $ brew install -vd glog lmdb
+
+By default, openblas is installed into /usr/local/opt/openblas. To let the compiler (and cmake) know the openblas
+path,
+
+    $ export CMAKE_INCLUDE_PATH=/usr/local/opt/openblas/include:$CMAKE_INCLUDE_PATH
+    $ export CMAKE_LIBRARY_PATH=/usr/local/opt/openblas/lib:$CMAKE_LIBRARY_PATH
+
+To let the runtime know the openblas path,
+
+    $ export LD_LIBRARY_PATH=/usr/local/opt/openblas/lib:$LD_LIBRARY_PATH
+
+
+#### Compile with USE_MODULES=ON
+
+If protobuf and openblas are not installed, you can compile SINGA together with them
+
+    # in the SINGA root folder
+    $ mkdir build
+    $ cd build
+    $ cmake -DUSE_MODULES=ON ..
+    $ make
+
+cmake will download OpenBLAS and Protobuf (2.6.1) and compile them together
+with SINGA.
+
+After compiling SINGA, you can run the unit tests by
+
+    $ ./bin/test_singa
+
+You will see all the test cases with their results. If SINGA passes all
+tests, then you have successfully installed SINGA.
+
+You can use `ccmake ..` to configure the compilation options.
+If some dependent libraries are not in the system default paths, you need to export
+the following environment variables
+
+    export CMAKE_INCLUDE_PATH=<path to the header file folder>
+    export CMAKE_LIBRARY_PATH=<path to the lib file folder>
+
+#### Compile with USE_PYTHON=ON
+swig and numpy can be installed by
+
+    # Ubuntu 14.04 and 16.04
+    $ sudo apt-get install python-numpy
+    # Ubuntu 16.04
+    $ sudo apt-get install swig
+
+Note that swig has to be installed from source on Ubuntu 14.04; a sketch is given below.
+After installing numpy, export the header path of numpy.i as
+
+    $ export CPLUS_INCLUDE_PATH=`python -c "import numpy; print numpy.get_include()"`:$CPLUS_INCLUDE_PATH
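+
+As noted above, swig must be built from source on Ubuntu 14.04. A minimal sketch, assuming the
+swig-3.0.10 tarball from SourceForge and that the PCRE development package is available:
+
+    $ sudo apt-get install libpcre3-dev
+    $ wget http://prdownloads.sourceforge.net/swig/swig-3.0.10.tar.gz
+    $ tar xzf swig-3.0.10.tar.gz && cd swig-3.0.10
+    $ ./configure
+    $ make && sudo make install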
+
+Similar to compiling the CPP code, PySINGA is compiled by
+
+    $ cmake -DUSE_PYTHON=ON ..
+    $ make
+    $ cd python
+    $ pip install .
+
+Developers can build the wheel file via
+
+    # under the build directory
+    $ cd python
+    $ python setup.py bdist_wheel
+
+The generated wheel file is under the "dist" directory.
+
+
+#### Compile SINGA with USE_CUDA=ON
+
+Users are encouraged to install CUDA and
+[cuDNN](https://developer.nvidia.com/cudnn) for running SINGA on GPUs to
+get better performance.
+
+SINGA has been tested over CUDA (7, 7.5, 8) and cuDNN (4 and 5). If cuDNN is
+decompressed into a non-system folder, e.g. /home/bob/local/cudnn/, the following
+commands should be executed so that cmake and the runtime can find it
+
+    $ export CMAKE_INCLUDE_PATH=/home/bob/local/cudnn/include:$CMAKE_INCLUDE_PATH
+    $ export CMAKE_LIBRARY_PATH=/home/bob/local/cudnn/lib64:$CMAKE_LIBRARY_PATH
+    $ export LD_LIBRARY_PATH=/home/bob/local/cudnn/lib64:$LD_LIBRARY_PATH
+
+The cmake options for CUDA and cuDNN should be switched on
+
+    # dependent libs are installed already
+    $ cmake -DUSE_CUDA=ON ..
+
+#### Compile SINGA with USE_OPENCL=ON
+
+SINGA uses opencl-headers and viennacl (version 1.7.1 or newer) for OpenCL support, which
+can be installed via
+
+    # On Ubuntu 16.04
+    $ sudo apt-get install opencl-headers libviennacl-dev
+    # On Fedora
+    $ sudo yum install opencl-headers viennacl
+
+Additionally, you will need the OpenCL Installable Client Driver (ICD) for the platforms that you want to run OpenCL on.
+
+* For AMD and NVIDIA GPUs, the driver package should also install the correct OpenCL ICD.
+* For Intel CPUs and/or GPUs, get the driver from the [Intel website](https://software.intel.com/en-us/articles/opencl-drivers). Note that the drivers provided on that website only support recent CPUs and Iris GPUs.
+* For older Intel CPUs, you can use the `beignet-opencl-icd` package.
+
+Note that running OpenCL on CPUs is not currently recommended because it is slow. Memory transfer is on the order of whole seconds (1000's of ms on CPUs as compared to 1's of ms on GPUs).
+
+More information on setting up a working OpenCL environment may be found [here](https://wiki.tiker.net/OpenCLHowTo).
+
+If the package version of ViennaCL is not at least 1.7.1, you will need to build it from source:
+
+Clone [the repository from here](https://github.com/viennacl/viennacl-dev), check out the `release-1.7.1` tag and build it.
+Remember to add its directory to `PATH` and the built libraries to `LD_LIBRARY_PATH`.
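+
+A minimal sketch of that process, assuming an install prefix of $HOME/local (adjust as needed):
+
+    $ git clone https://github.com/viennacl/viennacl-dev.git
+    $ cd viennacl-dev
+    $ git checkout release-1.7.1
+    $ mkdir build && cd build
+    $ cmake -DCMAKE_INSTALL_PREFIX=$HOME/local ..
+    $ make && make install
+    $ export PATH=$HOME/local/bin:$PATH
+    $ export LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH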
+
+To build SINGA with OpenCL support, you need to pass the flag during cmake:
+
+    cmake -DUSE_OPENCL=ON ..
+
+### Compile SINGA on Windows
+
+For the dependent library installation, please refer to [Dependencies](dependencies.md).
+After all the dependencies are successfully installed, run the following commands in cmd
+under the singa folder to generate the VS solution:
+
+    $ md build && cd build
+    $ cmake -G "Visual Studio 14" -DUSE_CUDA=OFF -DUSE_PYTHON=OFF ..
+
+The project generated by the command above is the 32-bit version. You can
+specify a 64-bit project by:
+
+    $ md build && cd build
+    $ cmake -G "Visual Studio 14 Win64" -DUSE_CUDA=OFF -DUSE_PYTHON=OFF ..
+
+If you get error outputs like "Could NOT find xxxxx" indicating that a dependent
+library is missing, configure the library file and include path for cmake or for the system.
+For example, if you get the error "Could NOT find CBLAS" and you installed the
+openblas header files at "d:\include" and the openblas library at "d:\lib", you should run the
+following command to specify your cblas parameters in cmake:
+
+    $ cmake -G "Visual Studio 14" -DUSE_CUDA=OFF -DUSE_PYTHON=OFF -DCBLAS_INCLUDE_DIR="d:\include" -DCBLAS_LIBRARIES="d:\lib\libopenblas.lib" -DProtobuf_INCLUDE_DIR=<include dir of protobuf> -DProtobuf_LIBRARIES=<path to libprotobuf.lib> -DProtobuf_PROTOC_EXECUTABLE=<path to protoc.exe> -DGLOG_INCLUDE_DIR=<include dir of glog> -DGLOG_LIBRARIES=<path to libglog.lib> ..
+
+To find out the parameters you need to specify for some special libraries, you
+can run the following command:
+
+    $ cmake -LAH
+
+If you use the cmake GUI tool on Windows, make sure you configure the right
+parameters for the singa solution by selecting the "Advanced" box. After generating the VS project,
+open the "singa.sln" project file under
+the "build" folder and compile it as a normal VS solution. You will find the
+unit test file named "test_singa" in the project binary folder.
+If you get errors when running test_singa.exe due to missing libglog.dll/libopenblas.dll,
+just copy the dll files into the same folder as test_singa.exe.
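+
+For example (both paths are placeholders),
+
+    > copy <path to libglog.dll> <folder of test_singa.exe>
+    > copy <path to libopenblas.dll> <folder of test_singa.exe>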
+
+## FAQ
+
+* Q: Error from 'import singa' using PySINGA installed from wheel.
+
+    A: Please check the detailed error from `python -c "from singa import _singa_wrap"`. Sometimes it is
+    caused by the dependent libraries, e.g., multiple versions of protobuf or a missing cuDNN. The following
+    steps show the solutions for different cases.
+    1. Check the cuDNN, CUDA and gcc versions; cuDNN 5, CUDA 7.5 and gcc 4.8/4.9 are preferred. If gcc is 5.0, downgrade it.
+       If cuDNN is missing or does not match the wheel version, you can download the correct version of cuDNN into ~/local/cudnn/ and
+
+            $ echo "export LD_LIBRARY_PATH=/home/<yourname>/local/cudnn/lib64:$LD_LIBRARY_PATH" >> ~/.bashrc
+
+    2. If the problem is related to protobuf, then download the newest whl files, which have [protobuf and openblas compiled into the whl](https://issues.apache.org/jira/browse/SINGA-255) file of PySINGA.
+       Alternatively, you can install protobuf from source into a local folder, say ~/local/:
+       decompress the tar file, and then
+
+            $ ./configure --prefix=/home/<yourname>/local
+            $ make && make install
+            $ echo "export LD_LIBRARY_PATH=/home/<yourname>/local/lib:$LD_LIBRARY_PATH" >> ~/.bashrc
+            $ source ~/.bashrc
+
+    3. If it cannot find other libs, including Python, then create a virtual env using pip or conda,
+       and then install SINGA via
+
+            $ pip install --upgrade <url of singa wheel>
+
+
+* Q: Error from running `cmake ..`, which cannot find the dependent libraries.
+
+    A: If you haven't installed the libraries, install them. If you installed
+    the libraries in a folder that is outside of the system default paths, e.g. /usr/local,
+    you need to export the following variables
+
+        $ export CMAKE_INCLUDE_PATH=<path to your header file folder>
+        $ export CMAKE_LIBRARY_PATH=<path to your lib file folder>
+
+
+* Q: Error from `make`, e.g. the linking phase
+
+    A: If your libraries are in folders other than the system default paths, you need
+    to export the following variables
+
+        $ export LIBRARY_PATH=<path to your lib file folder>
+        $ export LD_LIBRARY_PATH=<path to your lib file folder>
+
+
+* Q: Error from header files, e.g. 'cblas.h: no such file or directory'
+
+    A: You need to include the folder containing cblas.h in CPLUS_INCLUDE_PATH,
+    e.g.,
+
+        $ export CPLUS_INCLUDE_PATH=/opt/OpenBLAS/include:$CPLUS_INCLUDE_PATH
+
+* Q: While compiling SINGA, I get the error `SSE2 instruction set not enabled`
+
+    A: You can try the following command:
+
+        $ make CFLAGS='-msse2' CXXFLAGS='-msse2'
+
+* Q: I get `ImportError: cannot import name enum_type_wrapper` from google.protobuf.internal when I try to import .py files.
+
+    A: You need to install the Python binding of protobuf, which could be installed via
+
+        $ sudo apt-get install python-protobuf
+
+    or from source
+
+        $ cd /PROTOBUF/SOURCE/FOLDER
+        $ cd python
+        $ python setup.py build
+        $ python setup.py install
+
+* Q: When I build OpenBLAS from source, I am told that I need a Fortran compiler.
+
+    A: You can compile OpenBLAS by
+
+        $ make ONLY_CBLAS=1
+
+    or install it using
+
+        $ sudo apt-get install libopenblas-dev
+
+* Q: When I build protocol buffer, it reports that GLIBCXX_3.4.20 is not found in /usr/lib64/libstdc++.so.6.
+
+    A: This means the linker found libstdc++.so.6 but that library
+    belongs to an older version of GCC than was used to compile and link the
+    program. The program depends on code defined in
+    the newer libstdc++ that belongs to the newer version of GCC, so the linker
+    must be told how to find the newer libstdc++ shared library.
+    The simplest way to fix this is to find the correct libstdc++ and export it to
+    LD_LIBRARY_PATH. For example, if GLIBCXX_3.4.20 is listed in the output of the
+    following command,
+
+        $ strings /usr/local/lib64/libstdc++.so.6 | grep GLIBCXX
+
+    then you just set your environment variable as
+
+        $ export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
+
+* Q: When I build glog, it reports "src/logging_unittest.cc:83:20: error: ‘gflags’ is not a namespace-name"
+
+    A: It may be that you have installed gflags with a different namespace, such as "google", so glog cannot find the ‘gflags’ namespace.
+    Since gflags is not necessary for building glog, you can change the configure.ac file to ignore gflags.
+
+        1. cd to glog src directory
+        2. change line 125 of configure.ac  to "AC_CHECK_LIB(gflags, main, ac_cv_have_libgflags=0, ac_cv_have_libgflags=0)"
+        3. autoreconf
+
+    After this, you can build glog again.
+
+* Q: When using a virtual environment, every time I run pip install, it reinstalls numpy. However, that numpy is not used when I `import numpy`.
+
+    A: It could be caused by `PYTHONPATH`, which should be set to empty when you are using a virtual environment, to avoid
+    conflicts with the path of the virtual environment.
+
+* Q: When compiling PySINGA from source, there is a compilation error due to the missing <numpy/arrayobject.h>
+
+    A: Please install numpy and export the path of numpy header files as
+
+        $ export CPLUS_INCLUDE_PATH=`python -c "import numpy; print numpy.get_include()"`:$CPLUS_INCLUDE_PATH
+
+* Q: When I run PySINGA in Mac OS X, I got the error "Fatal Python error: PyThreadState_Get: no current thread  Abort trap: 6"
+
+    A: This error happens typically when you have multiple versions of Python on your system,
+    e.g., the one that comes with the OS and the one installed by Homebrew. The Python linked by PySINGA must be the same as the Python interpreter.
+    You can check your interpreter by `which python` and check the Python linked by PySINGA via `otool -L <path to _singa_wrap.so>`.
+    To fix this error, compile SINGA with the correct version of Python.
+    In particular, if you build PySINGA from source, you need to specify the paths when invoking [cmake](http://stackoverflow.com/questions/15291500/i-have-2-versions-of-python-installed-but-cmake-is-using-older-version-how-do)
+
+        $ cmake -DPYTHON_LIBRARY=`python-config --prefix`/lib/libpython2.7.dylib -DPYTHON_INCLUDE_DIR=`python-config --prefix`/include/python2.7/ ..
+
+    If you installed PySINGA from binary packages, e.g. debian or wheel, then you need to change the Python interpreter, e.g., reset $PATH to put the correct Python path at the front.

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/layer.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/layer.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/layer.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/layer.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,32 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+Layer
+======
+
+Python API
+-----------
+.. automodule:: singa.layer
+   :members:
+   :member-order: bysource
+   :show-inheritance:
+   :undoc-members:
+
+
+CPP API
+--------

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/loss.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/loss.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/loss.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/loss.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,25 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+Loss
+=========
+
+
+.. automodule:: singa.loss
+   :members:
+   :show-inheritance:

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/metric.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/metric.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/metric.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/metric.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,26 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+Metric
+=========
+
+
+.. automodule:: singa.metric
+   :members:
+   :show-inheritance:
+   :member-order: bysource

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/caffe/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/caffe/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/caffe/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/caffe/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,32 @@
+# Use parameters pre-trained from Caffe in SINGA
+
+In this example, we use SINGA to load the VGG parameters trained by Caffe to do image classification.
+
+## Run this example
+You can run this example by simply executing `run.sh vgg16` or `run.sh vgg19`.
+The script does the following work.
+
+### Obtain the Caffe model
+* Download the Caffe model prototxt and parameter binary file.
+* Currently we only support the latest Caffe format; if your model is in a
+    previous version of the Caffe format, please update it to the current format (this is
+    supported by Caffe).
+* After updating, we obtain two files, i.e., the prototxt and the parameter
+    binary file.
+
+### Prepare test images
+A few sample images are downloaded into the `test` folder.
+
+### Predict
+The `predict.py` script creates the VGG model and reads the parameters,
+
+    usage: predict.py [-h] model_txt model_bin imgclass
+
+where `imgclass` refers to the synsets of the ImageNet dataset for the VGG models.
+You can start the prediction program by executing the following command:
+
+    python predict.py vgg16.prototxt vgg16.caffemodel synset_words.txt
+
+Then type in the image path, and the program will output the top-5 labels.
+
+More Caffe models would be tested soon.

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/char-rnn/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/char-rnn/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/char-rnn/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/char-rnn/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,33 @@
+# Train Char-RNN over plain text
+
+Recurrent neural networks (RNN) are widely used for modelling sequential data,
+e.g., natural language sentences. This example describes how to implement an RNN
+application (or model) using SINGA's RNN layers.
+We will use the [char-rnn](https://github.com/karpathy/char-rnn) model as an
+example, which trains over sentences or
+source code, with each character as an input unit. In particular, we will train
+an RNN with GRU units over the Linux kernel source code. After training, we expect to
+generate meaningful code from the model.
+
+
+## Instructions
+
+* Compile and install SINGA. Currently the RNN implementation depends on cuDNN version >= 5.05.
+
+* Prepare the dataset. Download the [kernel source code](http://cs.stanford.edu/people/karpathy/char-rnn/).
+Other plain text files can also be used.
+
+* Start the training,
+
+        python train.py linux_input.txt
+
+  Some hyper-parameters can be set through the command line,
+
+        python train.py -h
+
+* Sample characters from the model by providing the number of characters to sample and the seed string.
+
+        python sample.py 'model.bin' 100 --seed '#include <std'
+
+  Please replace 'model.bin' with the path to one of the checkpoint files.
+

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/cifar10/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/cifar10/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/cifar10/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/cifar10/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,76 @@
+# Train CNN over Cifar-10
+
+
+Convolutional neural network (CNN) is a type of feed-forward artificial neural
+network widely used for image and video classification. In this example, we
+will train the following deep CNN models to do image classification over the CIFAR-10 dataset,
+
+1. [AlexNet](https://code.google.com/p/cuda-convnet/source/browse/trunk/example-layers/layers-18pct.cfg),
+the best validation accuracy (without data augmentation) we achieved was about 82%.
+
+2. [VGGNet](http://torch.ch/blog/2015/07/30/cifar.html), the best validation accuracy (without data augmentation) we achieved was about 89%.
+3. [ResNet](https://github.com/facebook/fb.resnet.torch), the best validation accuracy (without data augmentation) we achieved was about 83%.
+4. [Alexnet from Caffe](https://github.com/BVLC/caffe/tree/master/examples/cifar10), SINGA is able to convert the model from Caffe seamlessly.
+
+
+## Instructions
+
+
+### SINGA installation
+
+Users can compile and install SINGA from source or install the Python version.
+The code can run on both CPU and GPU. For GPU training, CUDA and cuDNN (V4 or V5)
+are required. Please refer to the installation page for detailed instructions.
+
+### Data preparation
+
+The binary Cifar-10 dataset can be downloaded by
+
+    python download_data.py bin
+
+The Python version can be downloaded by
+
+    python download_data.py py
+
+### Training
+
+There are four training programs
+
+1. train.py. The following command trains the VGG model using the Python
+version of the Cifar-10 dataset in the 'cifar-10-batches-py' folder.
+
+        python train.py vgg cifar-10-batches-py
+
+    To train other models, please replace 'vgg' with 'alexnet', 'resnet' or 'caffe',
+    where 'caffe' refers to the alexnet model converted from Caffe. By default
+    the training runs on a CudaGPU device; to run it on CppCPU, add the additional
+    argument
+
+        python train.py vgg cifar-10-batches-py  --use_cpu
+
+2. alexnet.cc. It trains the AlexNet model using the CPP APIs on a CudaGPU,
+
+        ./run.sh
+
+3. alexnet-parallel.cc. It trains the AlexNet model using the CPP APIs on two CudaGPU devices.
+The two devices run synchronously to compute the gradients of the model parameters, which are
+averaged on the host CPU device and then applied to update the parameters.
+
+        ./run-parallel.sh
+
+4. vgg-parallel.cc. It trains the VGG model using the CPP APIs on two CudaGPU devices similar to alexnet-parallel.cc.
+
+### Prediction
+
+predict.py includes the prediction function
+
+        def predict(net, images, dev, topk=5)
+
+The net is created by loading the previously trained model; images is
+a numpy array of images (one row per image); dev is the training device, e.g.,
+a CudaGPU device or the host CppCPU device. It returns the topk labels for each instance.
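+
+A usage sketch (the device helpers below are assumptions; `net` and `images` must be prepared as described above):
+
+        from singa import device
+        dev = device.create_cuda_gpu()   # or device.get_default_device() for CppCPU
+        labels = predict(net, images, dev, topk=5)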
+
+The predict.py file's main function provides an example of using the pre-trained alexnet model to do prediction for new images.
+The 'model.bin' file generated by the training program should be placed in the cifar10 folder to run
+
+        python predict.py

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/caffe/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/caffe/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/caffe/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/caffe/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,32 @@
+# Use parameters pre-trained from Caffe in SINGA
+
+In this example, we use SINGA to load the VGG parameters trained by Caffe to do image classification.
+
+## Run this example
+You can run this example by simply executing `run.sh vgg16` or `run.sh vgg19`.
+The script does the following work.
+
+### Obtain the Caffe model
+* Download the Caffe model prototxt and parameter binary file.
+* Currently we only support the latest Caffe format; if your model is in a
+    previous version of the Caffe format, please update it to the current format (this is
+    supported by Caffe).
+* After updating, we obtain two files, i.e., the prototxt and the parameter
+    binary file.
+
+### Prepare test images
+A few sample images are downloaded into the `test` folder.
+
+### Predict
+The `predict.py` script creates the VGG model and reads the parameters,
+
+    usage: predict.py [-h] model_txt model_bin imgclass
+
+where `imgclass` refers to the synsets of the ImageNet dataset for the VGG models.
+You can start the prediction program by executing the following command:
+
+    python predict.py vgg16.prototxt vgg16.caffemodel synset_words.txt
+
+Then type in the image path, and the program will output the top-5 labels.
+
+More Caffe models would be tested soon.

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/char-rnn/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/char-rnn/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/char-rnn/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/char-rnn/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,33 @@
+# Train Char-RNN over plain text
+
+Recurrent neural networks (RNN) are widely used for modelling sequential data,
+e.g., natural language sentences. This example describes how to implement an RNN
+application (or model) using SINGA's RNN layers.
+We will use the [char-rnn](https://github.com/karpathy/char-rnn) model as an
+example, which trains over sentences or
+source code, with each character as an input unit. In particular, we will train
+an RNN with GRU units over the Linux kernel source code. After training, we expect to
+generate meaningful code from the model.
+
+
+## Instructions
+
+* Compile and install SINGA. Currently the RNN implementation depends on cuDNN version >= 5.05.
+
+* Prepare the dataset. Download the [kernel source code](http://cs.stanford.edu/people/karpathy/char-rnn/).
+Other plain text files can also be used.
+
+* Start the training,
+
+        python train.py linux_input.txt
+
+  Some hyper-parameters can be set through the command line,
+
+        python train.py -h
+
+* Sample characters from the model by providing the number of characters to sample and the seed string.
+
+        python sample.py 'model.bin' 100 --seed '#include <std'
+
+  Please replace 'model.bin' with the path to one of the checkpoint files.
+

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/cifar10/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/cifar10/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/cifar10/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/cifar10/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,76 @@
+# Train CNN over Cifar-10
+
+
+Convolutional neural network (CNN) is a type of feed-forward artificial neural
+network widely used for image and video classification. In this example, we
+will train the following deep CNN models to do image classification over the CIFAR-10 dataset,
+
+1. [AlexNet](https://code.google.com/p/cuda-convnet/source/browse/trunk/example-layers/layers-18pct.cfg),
+the best validation accuracy (without data augmentation) we achieved was about 82%.
+
+2. [VGGNet](http://torch.ch/blog/2015/07/30/cifar.html), the best validation accuracy (without data augmentation) we achieved was about 89%.
+3. [ResNet](https://github.com/facebook/fb.resnet.torch), the best validation accuracy (without data augmentation) we achieved was about 83%.
+4. [Alexnet from Caffe](https://github.com/BVLC/caffe/tree/master/examples/cifar10), SINGA is able to convert the model from Caffe seamlessly.
+
+
+## Instructions
+
+
+### SINGA installation
+
+Users can compile and install SINGA from source or install the Python version.
+The code can run on both CPU and GPU. For GPU training, CUDA and cuDNN (V4 or V5)
+are required. Please refer to the installation page for detailed instructions.
+
+### Data preparation
+
+The binary Cifar-10 dataset can be downloaded by
+
+    python download_data.py bin
+
+The Python version can be downloaded by
+
+    python download_data.py py
+
+### Training
+
+There are four training programs
+
+1. train.py. The following command trains the VGG model using the Python
+version of the Cifar-10 dataset in the 'cifar-10-batches-py' folder.
+
+        python train.py vgg cifar-10-batches-py
+
+    To train other models, please replace 'vgg' with 'alexnet', 'resnet' or 'caffe',
+    where 'caffe' refers to the alexnet model converted from Caffe. By default
+    the training runs on a CudaGPU device; to run it on CppCPU, add the additional
+    argument
+
+        python train.py vgg cifar-10-batches-py  --use_cpu
+
+2. alexnet.cc. It trains the AlexNet model using the CPP APIs on a CudaGPU,
+
+        ./run.sh
+
+3. alexnet-parallel.cc. It trains the AlexNet model using the CPP APIs on two CudaGPU devices.
+The two devices run synchronously to compute the gradients of the model parameters, which are
+averaged on the host CPU device and then applied to update the parameters.
+
+        ./run-parallel.sh
+
+4. vgg-parallel.cc. It trains the VGG model using the CPP APIs on two CudaGPU devices similar to alexnet-parallel.cc.
+
+### Prediction
+
+predict.py includes the prediction function
+
+        def predict(net, images, dev, topk=5)
+
+The net is created by loading the previously trained model; images is
+a numpy array of images (one row per image); dev is the training device, e.g.,
+a CudaGPU device or the host CppCPU device. It returns the topk labels for each instance.
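+
+A usage sketch (the device helpers below are assumptions; `net` and `images` must be prepared as described above):
+
+        from singa import device
+        dev = device.create_cuda_gpu()   # or device.get_default_device() for CppCPU
+        labels = predict(net, images, dev, topk=5)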
+
+The predict.py file's main function provides an example of using the pre-trained alexnet model to do prediction for new images.
+The 'model.bin' file generated by the training program should be placed in the cifar10 folder to run
+
+        python predict.py

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/imagenet/alexnet/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/imagenet/alexnet/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/imagenet/alexnet/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/imagenet/alexnet/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,58 @@
+# Train AlexNet over ImageNet
+
+Convolutional neural network (CNN) is a type of feed-forward neural
+network widely used for image and video classification. In this example, we will
+use a [deep CNN model](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)
+to do image classification against the ImageNet dataset.
+
+## Instructions
+
+### Compile SINGA
+
+Please compile SINGA with CUDA, CUDNN and OpenCV. You can manually turn on the
+options in CMakeLists.txt or run `ccmake ..` in the build/ folder.
+
+We have tested cuDNN V4 and V5 (V5 requires CUDA 7.5).
+
+### Data download
+* Please refer to steps 1-3 of [Instructions to create ImageNet 2012 data](https://github.com/amd/OpenCL-caffe/wiki/Instructions-to-create-ImageNet-2012-data)
+  to download and decompress the data.
+* You can download the training and validation list by
+  [get_ilsvrc_aux.sh](https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh)
+  or from [Imagenet](http://www.image-net.org/download-images).
+
+### Data preprocessing
+* Assuming you have downloaded the data and the list,
+  we now transform the data into binary files. You can run:
+
+          sh create_data.sh
+
+  The script will generate a test file (`test.bin`), a mean file (`mean.bin`) and
+  several training files (`trainX.bin`) in the specified output folder.
+* You can also change the parameters in `create_data.sh`.
+  + `-trainlist <file>`: the file of training list;
+  + `-trainfolder <folder>`: the folder of training images;
+  + `-testlist <file>`: the file of test list;
+  + `-testfolder <folder>`: the folder of test images;
+  + `-outdata <folder>`: the folder to save output files, including mean, training and test files.
+    The script will generate these files in the specified folder;
+  + `-filesize <int>`: number of training images stored in each binary file.
+
+### Training
+* After preparing data, you can run the following command to train the Alexnet model.
+
+          sh run.sh
+
+* You may change the parameters in `run.sh`.
+  + `-epoch <int>`: number of epochs to train, default is 90;
+  + `-lr <float>`: base learning rate, the learning rate will decrease every 20 epochs,
+    more specifically, `lr = lr * exp(0.1 * (epoch / 20))`;
+  + `-batchsize <int>`: batch size, which should be changed according to your memory;
+  + `-filesize <int>`: number of training images stored in each binary file, the
+    same as the `filesize` in data preprocessing;
+  + `-ntrain <int>`: number of training images;
+  + `-ntest <int>`: number of test images;
+  + `-data <folder>`: the folder which stores the binary files, i.e., the output
+    folder of the data preprocessing step;
+  + `-pfreq <int>`: the frequency (in batches) of printing the current model status (loss and accuracy);
+  + `-nthreads <int>`: the number of threads that load data to feed the model.

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/imagenet/googlenet/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/imagenet/googlenet/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/imagenet/googlenet/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/imagenet/googlenet/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,66 @@
+---
+name: GoogleNet on ImageNet
+SINGA version: 1.0.1
+SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
+parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
+parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
+license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
+---
+
+# Image Classification using GoogleNet
+
+
+In this example, we convert GoogleNet trained on Caffe to SINGA for image classification.
+
+## Instructions
+
+* Download the parameter checkpoint file into this folder
+
+        $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
+        $ tar xvf bvlc_googlenet.tar.gz
+
+* Run the program
+
+        # use cpu
+        $ python serve.py -C &
+        # use gpu
+        $ python serve.py &
+
+* Submit images for classification
+
+        $ curl -i -F image=@image1.jpg http://localhost:9999/api
+        $ curl -i -F image=@image2.jpg http://localhost:9999/api
+        $ curl -i -F image=@image3.jpg http://localhost:9999/api
+
+image1.jpg, image2.jpg and image3.jpg should be downloaded before executing the above commands.
+
+## Details
+
+We first extract the parameter values from [Caffe's checkpoint file](http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel) into a pickle version.
+After downloading the checkpoint file into the `caffe_root/python` folder, run the following script
+
+    # to be executed within caffe_root/python folder
+    import caffe
+    import numpy as np
+    import cPickle as pickle
+
+    model_def = '../models/bvlc_googlenet/deploy.prototxt'
+    weight = 'bvlc_googlenet.caffemodel'  # must be downloaded at first
+    net = caffe.Net(model_def, weight, caffe.TEST)
+
+    params = {}
+    for layer_name in net.params.keys():
+        weights=np.copy(net.params[layer_name][0].data)
+        bias=np.copy(net.params[layer_name][1].data)
+        params[layer_name+'_weight']=weights
+        params[layer_name+'_bias']=bias
+        print layer_name, weights.shape, bias.shape
+
+    with open('bvlc_googlenet.pickle', 'wb') as fd:
+        pickle.dump(params, fd)
+
+Then we construct the GoogleNet using SINGA's FeedForwardNet structure.
+Note that we added an EndPadding layer to resolve the issue arising from the discrepancy
+between the rounding strategies of the pooling layer in Caffe (ceil) and cuDNN (floor).
+Only the MaxPooling layers outside inception blocks have this problem.
+Refer to [this](http://joelouismarino.github.io/blog_posts/blog_googlenet_keras.html) for more details.

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/index.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/index.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/index.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/index.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,29 @@
+..
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+..
+
+Model Zoo
+=========
+
+.. toctree::
+
+   cifar10/README
+   char-rnn/README
+   imagenet/alexnet/README
+   imagenet/googlenet/README
+
+

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/mnist/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/mnist/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/mnist/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/examples/mnist/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,18 @@
+# Train an RBM model on the MNIST dataset
+
+This example trains an RBM model using the
+MNIST dataset. The RBM model and its hyper-parameters are set following
+[Hinton's paper](http://www.cs.toronto.edu/~hinton/science.pdf)
+
+## Running instructions
+
+1. Download the pre-processed [MNIST dataset](https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz)
+
+2. Start the training
+
+        python train.py mnist.pkl.gz
+
+By default the training code runs on CPU. To run it on a GPU card, please start
+the program with the additional argument
+
+        python train.py mnist.pkl.gz --use_gpu

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,58 @@
+# Train AlexNet over ImageNet
+
+Convolutional neural network (CNN) is a type of feed-forward neural
+network widely used for image and video classification. In this example, we will
+use a [deep CNN model](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)
+to do image classification against the ImageNet dataset.
+
+## Instructions
+
+### Compile SINGA
+
+Please compile SINGA with CUDA, CUDNN and OpenCV. You can manually turn on the
+options in CMakeLists.txt or run `ccmake ..` in the build/ folder.
+
+We have tested cuDNN V4 and V5 (V5 requires CUDA 7.5).
+
+### Data download
+* Please refer to steps 1-3 of [Instructions to create ImageNet 2012 data](https://github.com/amd/OpenCL-caffe/wiki/Instructions-to-create-ImageNet-2012-data)
+  to download and decompress the data.
+* You can download the training and validation list by
+  [get_ilsvrc_aux.sh](https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh)
+  or from [Imagenet](http://www.image-net.org/download-images).
+
+### Data preprocessing
+* Assuming you have downloaded the data and the list,
+  we now transform the data into binary files. You can run:
+
+          sh create_data.sh
+
+  The script will generate a test file (`test.bin`), a mean file (`mean.bin`) and
+  several training files (`trainX.bin`) in the specified output folder.
+* You can also change the parameters in `create_data.sh`.
+  + `-trainlist <file>`: the file of training list;
+  + `-trainfolder <folder>`: the folder of training images;
+  + `-testlist <file>`: the file of test list;
+  + `-testfolder <folder>`: the folder of test images;
+  + `-outdata <folder>`: the folder to save output files, including mean, training and test files.
+    The script will generate these files in the specified folder;
+  + `-filesize <int>`: number of training images stored in each binary file.
+
+### Training
+* After preparing data, you can run the following command to train the Alexnet model.
+
+          sh run.sh
+
+* You may change the parameters in `run.sh`.
+  + `-epoch <int>`: number of epochs to train, default is 90;
+  + `-lr <float>`: base learning rate, the learning rate will decrease every 20 epochs,
+    more specifically, `lr = lr * exp(0.1 * (epoch / 20))`;
+  + `-batchsize <int>`: batch size, which should be changed according to your memory;
+  + `-filesize <int>`: number of training images stored in each binary file, the
+    same as the `filesize` in data preprocessing;
+  + `-ntrain <int>`: number of training images;
+  + `-ntest <int>`: number of test images;
+  + `-data <folder>`: the folder which stores the binary files, i.e., the output
+    folder of the data preprocessing step;
+  + `-pfreq <int>`: the frequency (in batches) of printing the current model status (loss and accuracy);
+  + `-nthreads <int>`: the number of threads that load data to feed the model.

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/alexnet/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/alexnet/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/alexnet/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/alexnet/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,58 @@
+# Train AlexNet over ImageNet
+
+Convolutional neural network (CNN) is a type of feed-forward neural
+network widely used for image and video classification. In this example, we will
+use a [deep CNN model](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)
+to do image classification against the ImageNet dataset.
+
+## Instructions
+
+### Compile SINGA
+
+Please compile SINGA with CUDA, CUDNN and OpenCV. You can manually turn on the
+options in CMakeLists.txt or run `ccmake ..` in the build/ folder.
+
+We have tested cuDNN V4 and V5 (V5 requires CUDA 7.5).
+
+### Data download
+* Please refer to steps 1-3 of [Instructions to create ImageNet 2012 data](https://github.com/amd/OpenCL-caffe/wiki/Instructions-to-create-ImageNet-2012-data)
+  to download and decompress the data.
+* You can download the training and validation list by
+  [get_ilsvrc_aux.sh](https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh)
+  or from [Imagenet](http://www.image-net.org/download-images).
+
+### Data preprocessing
+* Assuming you have downloaded the data and the list,
+  we now transform the data into binary files. You can run:
+
+          sh create_data.sh
+
+  The script will generate a test file (`test.bin`), a mean file (`mean.bin`) and
+  several training files (`trainX.bin`) in the specified output folder.
+* You can also change the parameters in `create_data.sh`.
+  + `-trainlist <file>`: the file of training list;
+  + `-trainfolder <folder>`: the folder of training images;
+  + `-testlist <file>`: the file of test list;
+  + `-testfolder <folder>`: the folder of test images;
+  + `-outdata <folder>`: the folder to save output files, including mean, training and test files.
+    The script will generate these files in the specified folder;
+  + `-filesize <int>`: number of training images stored in each binary file.
+
+### Training
+* After preparing data, you can run the following command to train the Alexnet model.
+
+          sh run.sh
+
+* You may change the parameters in `run.sh`.
+  + `-epoch <int>`: number of epochs to train, default is 90;
+  + `-lr <float>`: base learning rate, the learning rate will decrease every 20 epochs,
+    more specifically, `lr = lr * exp(0.1 * (epoch / 20))`;
+  + `-batchsize <int>`: batch size, which should be changed according to your memory;
+  + `-filesize <int>`: number of training images stored in each binary file, the
+    same as the `filesize` in data preprocessing;
+  + `-ntrain <int>`: number of training images;
+  + `-ntest <int>`: number of test images;
+  + `-data <folder>`: the folder which stores the binary files, i.e., the output
+    folder of the data preprocessing step;
+  + `-pfreq <int>`: the frequency (in batches) of printing the current model status (loss and accuracy);
+  + `-nthreads <int>`: the number of threads that load data to feed the model.

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/googlenet/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/googlenet/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/googlenet/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/imagenet/googlenet/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,66 @@
+---
+name: GoogleNet on ImageNet
+SINGA version: 1.0.1
+SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
+parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
+parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
+license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
+---
+
+# Image Classification using GoogleNet
+
+
+In this example, we convert GoogleNet trained with Caffe into SINGA for image classification.
+
+## Instructions
+
+* Download the parameter checkpoint file into this folder
+
+        $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
+        $ tar xvf bvlc_googlenet.tar.gz
+
+* Run the program
+
+        # use cpu
+        $ python serve.py -C &
+        # use gpu
+        $ python serve.py &
+
+* Submit images for classification
+
+        $ curl -i -F image=@image1.jpg http://localhost:9999/api
+        $ curl -i -F image=@image2.jpg http://localhost:9999/api
+        $ curl -i -F image=@image3.jpg http://localhost:9999/api
+
+image1.jpg, image2.jpg and image3.jpg should be downloaded before executing the above commands.
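+
+If you prefer Python over curl, the same request can be issued with the
+third-party `requests` library (assuming the server started above is listening
+on port 9999):
+
+    import requests
+
+    # post the image as a multipart form field named 'image',
+    # exactly as the curl commands above do
+    with open('image1.jpg', 'rb') as f:
+        resp = requests.post('http://localhost:9999/api', files={'image': f})
+    print resp.text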
+
+## Details
+
+We first extract the parameter values from [Caffe's checkpoint file](http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel) into a pickle file.
+After downloading the checkpoint file into the `caffe_root/python` folder, run the following script,
+
+    # to be executed within caffe_root/python folder
+    import caffe
+    import numpy as np
+    import cPickle as pickle
+
+    model_def = '../models/bvlc_googlenet/deploy.prototxt'
+    weight = 'bvlc_googlenet.caffemodel'  # must be downloaded first
+    net = caffe.Net(model_def, weight, caffe.TEST)
+
+    # copy each layer's weight and bias blobs into a flat dict
+    params = {}
+    for layer_name in net.params.keys():
+        weights = np.copy(net.params[layer_name][0].data)
+        bias = np.copy(net.params[layer_name][1].data)
+        params[layer_name + '_weight'] = weights
+        params[layer_name + '_bias'] = bias
+        print layer_name, weights.shape, bias.shape
+
+    with open('bvlc_googlenet.pickle', 'wb') as fd:
+        pickle.dump(params, fd)
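+
+To sanity-check the dump before converting, the pickle can be read back (a
+minimal sketch; the key naming follows the loop above):
+
+    import cPickle as pickle
+
+    with open('bvlc_googlenet.pickle', 'rb') as fd:
+        params = pickle.load(fd)
+    # keys are '<layer>_weight' and '<layer>_bias', values are numpy arrays
+    for name in sorted(params.keys()):
+        print name, params[name].shape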
+
+Then we construct the GoogleNet using SINGA's FeedForwardNet structure.
+Note that we added an EndPadding layer to resolve the discrepancy between the
+rounding strategies of the pooling layer in Caffe (ceil) and cuDNN (floor).
+Only the MaxPooling layers outside the inception blocks have this problem.
+Refer to [this blog post](http://joelouismarino.github.io/blog_posts/blog_googlenet_keras.html) for more details.
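+
+The rounding discrepancy can be reproduced with the generic pooling output-size
+formula (an illustration, not SINGA code; the example numbers are hypothetical):
+
+    import math
+
+    def pool_out(in_size, kernel, stride, pad, rounding):
+        # output length of one pooled dimension under a given rounding mode
+        return int(rounding((in_size + 2 * pad - kernel) / float(stride))) + 1
+
+    print pool_out(112, 3, 2, 0, math.ceil)   # 56, Caffe's ceil mode
+    print pool_out(112, 3, 2, 0, math.floor)  # 55, cuDNN's floor mode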

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/index.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/index.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/index.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/index.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,29 @@
+..
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+..
+
+Model Zoo
+=========
+
+.. toctree::
+
+   cifar10/README
+   char-rnn/README
+   imagenet/alexnet/README
+   imagenet/googlenet/README
+
+

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/mnist/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/mnist/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/mnist/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/model_zoo/mnist/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,18 @@
+# Train an RBM model on the MNIST dataset
+
+This example trains an RBM model on the
+MNIST dataset. The RBM model and its hyper-parameters are set following
+[Hinton's paper](http://www.cs.toronto.edu/~hinton/science.pdf).
+
+## Running instructions
+
+1. Download the pre-processed [MNIST dataset](https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz)
+
+2. Start the training
+
+        python train.py mnist.pkl.gz
+
+By default the training code runs on CPU. To run it on a GPU card, start
+the program with an additional argument
+
+        python train.py mnist.pkl.gz --use_gpu
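+
+For reference, the downloaded file is a gzipped pickle containing three
+(images, labels) pairs; a minimal sketch for inspecting it (assuming the
+standard layout of this file):
+
+    import gzip
+    import cPickle as pickle
+
+    with gzip.open('mnist.pkl.gz', 'rb') as fd:
+        train_set, valid_set, test_set = pickle.load(fd)
+    # each set is an (images, labels) pair; images are 784-d float vectors
+    print train_set[0].shape, valid_set[0].shape, test_set[0].shape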

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/net.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/net.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/net.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/net.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,26 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+FeedForward Net
+===============
+
+.. automodule:: singa.net
+   :members:
+   :member-order: bysource
+   :show-inheritance:
+   :undoc-members:

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/neural-net.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/neural-net.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/neural-net.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/neural-net.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,326 @@
+# Neural Net
+
+
+`NeuralNet` in SINGA represents an instance of a user's neural net model. As a
+neural net typically consists of a set of layers, `NeuralNet` comprises
+a set of unidirectionally connected [Layer](layer.html)s.
+This page describes how to convert a user's neural net into
+the configuration of `NeuralNet`.
+
+<img src="../_static/images/model-category.png" align="center" width="200px"/>
+<span><strong>Figure 1 - Categorization of popular deep learning models.</strong></span>
+
+## Net structure configuration
+
+Users configure the `NeuralNet` by listing all layers of the neural net and
+specifying each layer's source layer names. Popular deep learning models can be
+categorized as shown in Figure 1. The subsequent sections give details for each
+category.
+
+### Feed-forward models
+
+<div align = "left">
+<img src="../_static/images/mlp-net.png" align="center" width="200px"/>
+<span><strong>Figure 2 - Net structure of a MLP model.</strong></span>
+</div>
+
+Feed-forward models, e.g., CNN and MLP, can easily be configured as their layer
+connections are directed and contain no cycles. The
+configuration for the MLP model shown in Figure 2 is as follows,
+
+    net {
+      layer {
+        name : "data"
+        type : kData
+      }
+      layer {
+        name : "image"
+        type : kImage
+        srclayer: "data"
+      }
+      layer {
+        name : "label"
+        type : kLabel
+        srclayer: "data"
+      }
+      layer {
+        name : "hidden"
+        type : kHidden
+        srclayer: "image"
+      }
+      layer {
+        name : "softmax"
+        type : kSoftmaxLoss
+        srclayer: "hidden"
+        srclayer: "label"
+      }
+    }
+
+### Energy models
+
+<img src="../_static/images/rbm-rnn.png" align="center" width="500px"/>
+<span><strong>Figure 3 - Convert connections in RBM and RNN.</strong></span>
+
+
+For energy models including RBM, DBM,
+etc., their connections are undirected (i.e., Category B). To represent these models using
+`NeuralNet`, users can simply replace each connection with two directed
+connections, as shown in Figure 3a. In other words, for each pair of connected layers, their source
+layer fields should include each other's name.
+The full [RBM example](rbm.html) has
+the detailed neural net configuration for an RBM model, which looks like
+
+    net {
+      layer {
+        name : "vis"
+        type : kVisLayer
+        param {
+          name : "w1"
+        }
+        srclayer: "hid"
+      }
+      layer {
+        name : "hid"
+        type : kHidLayer
+        param {
+          name : "w2"
+          share_from: "w1"
+        }
+        srclayer: "vis"
+      }
+    }
+
+### RNN models
+
+For recurrent neural networks (RNN), users can remove the recurrent connections
+by unrolling the recurrent layer.  For example, in Figure 3b, the original
+layer is unrolled into a new layer with 4 internal layers. In this way, the
+model becomes a normal feed-forward model and thus can be configured similarly.
+The [RNN example](rnn.html) has a full neural net
+configuration for a RNN model.
+
+
+## Configuration for multiple nets
+
+Typically, a training job includes three neural nets for
+the training, validation and test phase respectively. The three neural nets share most
+layers except the data layer, loss layer or output layer, etc. To avoid
+redundant configurations for the shared layers, users can use the `exclude`
+field to filter a layer out of a neural net, e.g., the following layer will be
+filtered out when creating the test `NeuralNet`.
+
+
+    layer {
+      ...
+      exclude : kTest # filter this layer for creating test net
+    }
+
+
+
+## Neural net partitioning
+
+A neural net can be partitioned in different ways to distribute the training
+over multiple workers.
+
+### Batch and feature dimension
+
+<img src="../_static/images/partition_fc.png" align="center" width="400px"/>
+<span><strong>Figure 4 - Partitioning of a fully connected layer.</strong></span>
+
+
+Every layer's feature blob is considered a matrix whose rows are feature
+vectors. Thus, one layer can be split on two dimensions. Partitioning on
+dimension 0 (also called batch dimension) slices the feature matrix by rows.
+For instance, if the mini-batch size is 256 and the layer is partitioned into 2
+sub-layers, each sub-layer would have 128 feature vectors in its feature blob.
+Partitioning on this dimension has no effect on the parameters, as every
+[Param](param.html) object is replicated in the sub-layers. Partitioning on dimension
+1 (also called feature dimension) slices the feature matrix by columns. For
+example, suppose the original feature vector has 50 units; after partitioning
+into 2 sub-layers, each sub-layer would have 25 units. This partitioning may
+result in [Param](param.html) objects being split, as shown in
+Figure 4, where both the bias vector and the weight matrix are
+partitioned into two sub-layers.
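+
+A small numpy sketch of the two slicing schemes for one layer's feature blob
+(shapes follow the example above; an illustration, not SINGA code):
+
+    import numpy as np
+
+    features = np.random.rand(256, 50)  # mini-batch of 256, 50 units each
+    weight = np.random.rand(50, 50)     # hypothetical weight matrix
+
+    # dimension 0 (batch): rows split, parameters replicated
+    part0 = np.split(features, 2, axis=0)   # two (128, 50) blobs
+
+    # dimension 1 (feature): columns split, parameters split too
+    part1 = np.split(features, 2, axis=1)   # two (256, 25) blobs
+    w_part = np.split(weight, 2, axis=1)    # two (50, 25) weight slices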
+
+
+### Partitioning configuration
+
+There are 4 partitioning schemes, whose configurations are given below,
+
+  1. Partitioning each single layer into sub-layers on the batch dimension (see
+  below). It is enabled by configuring the partition dimension of the layer to
+  0, e.g.,
+
+          # with other fields omitted
+          layer {
+            partition_dim: 0
+          }
+
+  2. Partitioning each single layer into sub-layers on the feature dimension (see
+  below).  It is enabled by configuring the partition dimension of the layer to
+  1, e.g.,
+
+          # with other fields omitted
+          layer {
+            partition_dim: 1
+          }
+
+  3. Partitioning all layers into different subsets. It is enabled by
+  configuring the location ID of a layer, e.g.,
+
+          # with other fields omitted
+          layer {
+            location: 1
+          }
+          layer {
+            location: 0
+          }
+
+
+  4. Hybrid partitioning of strategy 1, 2 and 3. The hybrid partitioning is
+  useful for large models. An example application is to implement the
+  [idea proposed by Alex](http://arxiv.org/abs/1404.5997).
+  Hybrid partitioning is configured like,
+
+          # with other fields omitted
+          layer {
+            location: 1
+          }
+          layer {
+            location: 0
+          }
+          layer {
+            partition_dim: 0
+            location: 0
+          }
+          layer {
+            partition_dim: 1
+            location: 0
+          }
+
+Currently SINGA supports strategy-2 well. The other partitioning strategies
+are under test and will be released in a later version.
+
+## Parameter sharing
+
+Parameters can be shared in the following cases,
+
+  * sharing parameters among layers via user configuration. For example, the
+  visible layer and hidden layer of an RBM share the weight matrix, which is configured through
+  the `share_from` field as shown in the above RBM configuration. The
+  configurations must be the same (except the name) for shared parameters.
+
+  * due to neural net partitioning, some `Param` objects are replicated into
+  different workers, e.g., when partitioning one layer on the batch dimension. These
+  workers share parameter values. SINGA controls this kind of parameter
+  sharing automatically; users do not need to do any configuration.
+
+  * the `NeuralNet` instances for training and testing (and validation) share most
+  layers, and thus share `Param` values.
+
+If the shared `Param` instances reside in the same process (possibly in different
+threads), they use the same chunk of memory space for their values, but they
+have separate memory spaces for their gradients. In fact, their
+gradients will be averaged by the stub or server.
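+
+A toy sketch of this memory layout (illustrative only; real `Param` objects
+live in C++):
+
+    import numpy as np
+
+    value = np.zeros(10)    # one shared chunk for the parameter values
+    grad_a = np.zeros(10)   # thread A's private gradient buffer
+    grad_b = np.zeros(10)   # thread B's private gradient buffer
+
+    param_a = {'value': value, 'grad': grad_a}  # both instances reference
+    param_b = {'value': value, 'grad': grad_b}  # the same value array
+
+    # the stub or server averages the private gradients before the update
+    avg_grad = (param_a['grad'] + param_b['grad']) / 2.0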
+
+## Advanced user guide
+
+### Creation
+
+    static NeuralNet* NeuralNet::Create(const NetProto& np, Phase phase, int num);
+
+The above function creates a `NeuralNet` for a given phase, and returns a
+pointer to the `NeuralNet` instance. The phase is in {kTrain,
+kValidation, kTest}. `num` is used for net partitioning which indicates the
+number of partitions.  Typically, a training job includes three neural nets for
+the training, validation and test phase respectively. The three neural nets share most
+layers except the data layer, loss layer or output layer, etc. The `Create`
+function takes in the full net configuration including layers for training,
+validation and test.  It removes layers for phases other than the specified
+phase based on the `exclude` field in
+[layer configuration](layer.html):
+
+    layer {
+      ...
+      exclude : kTest # filter this layer for creating test net
+    }
+
+The filtered net configuration is passed to the constructor of `NeuralNet`:
+
+    NeuralNet::NeuralNet(NetProto netproto, int npartitions);
+
+The constructor first creates a graph representing the net structure in
+
+    Graph* NeuralNet::CreateGraph(const NetProto& netproto, int npartitions);
+
+Next, it creates a layer for each node and connects layers if their nodes are
+connected.
+
+    void NeuralNet::CreateNetFromGraph(Graph* graph, int npartitions);
+
+Since the `NeuralNet` instance may be shared among multiple workers, the
+`Create` function returns a pointer to the `NeuralNet` instance.
+
+### Parameter sharing
+
+`Param` sharing
+is enabled by first sharing the Param configuration (in `NeuralNet::Create`)
+to create two similar (e.g., the same shape) Param objects and then calling
+(in `NeuralNet::CreateNetFromGraph`),
+
+    void Param::ShareFrom(const Param& from);
+
+It is also possible to share `Param`s of two nets, e.g., sharing parameters of
+the training net and the test net,
+
+    void NeuralNet::ShareParamsFrom(NeuralNet* other);
+
+It will call `Param::ShareFrom` for each Param object.
+
+### Access functions
+`NeuralNet` provides several access functions to get the layers and params
+of the net:
+
+    const std::vector<Layer*>& layers() const;
+    const std::vector<Param*>& params() const;
+    Layer* name2layer(string name) const;
+    Param* paramid2param(int id) const;
+
+
+### Partitioning
+
+
+#### Implementation
+
+SINGA partitions the neural net in the `CreateGraph` function, which creates one
+node for each (partitioned) layer. For example, if one layer's partition
+dimension is 0 or 1, then it creates `npartitions` nodes for it; if the
+partition dimension is -1, a single node is created, i.e., no partitioning.
+Each node is assigned a partition (or location) ID. If the original layer is
+configured with a location ID, then that ID is assigned to each newly created node.
+These nodes are connected according to the connections of the original layers.
+Some connection layers will be added automatically.
+For instance, if two connected sub-layers are located at two
+different workers, then a pair of bridge layers is inserted to transfer the
+feature (and gradient) blob between them. When two layers are partitioned on
+different dimensions, a concatenation layer which concatenates feature rows (or
+columns) and a slice layer which slices feature rows (or columns) would be
+inserted. These connection layers help make the network communication and
+synchronization transparent to the users.
+
+#### Dispatching partitions to workers
+
+Each (partitioned) layer is assigned a location ID, based on which it is dispatched to one
+worker. Particularly, the pointer to the `NeuralNet` instance is passed
+to every worker within the same group, but each worker only computes over the
+layers that have the same partition (or location) ID as the worker's ID.  When
+every worker computes the gradients of the entire model parameters
+(strategy-2), we refer to this process as data parallelism.  When different
+workers compute the gradients of different parameters (strategy-3 or
+strategy-1), we call this process model parallelism.  The hybrid partitioning
+leads to hybrid parallelism where some workers compute the gradients of the
+same subset of model parameters while other workers compute on different model
+parameters.  For example, to implement the hybrid parallelism for the
+[DCNN model](http://arxiv.org/abs/1404.5997), we set `partition_dim = 0` for
+lower layers and `partition_dim = 1` for higher layers.
+

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/notebook/README.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/notebook/README.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/notebook/README.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/notebook/README.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,3 @@
+These are some examples in IPython notebooks.
+
+You can open them in [notebook viewer](http://nbviewer.jupyter.org/github/apache/incubator-singa/blob/master/doc/en/docs/notebook/index.ipynb).

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/optimizer.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/optimizer.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/optimizer.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/optimizer.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,29 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+Optimizer
+=========
+
+
+.. automodule:: singa.optimizer
+   :members:
+   :member-order: bysource
+   :show-inheritance:
+   :undoc-members:
+
+

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/snapshot.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/snapshot.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/snapshot.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/snapshot.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+Snapshot
+========
+
+
+.. automodule:: singa.snapshot
+   :members:

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/software_stack.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/software_stack.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/software_stack.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/software_stack.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,99 @@
+# Software Stack
+
+SINGA's software stack includes three major components, namely, core, IO and
+model. Figure 1 illustrates these components together with the hardware.
+The core component provides memory management and tensor operations;
+IO has classes for reading (and writing) data from (to) disk and network; the
+model component provides data structures and algorithms for machine learning models,
+e.g., layers for neural network models, and optimizers/initializers/metrics/losses for
+general machine learning models.
+
+
+<img src="../_static/images/singav1-sw.png" align="center" width="500px"/>
+<br/>
+<span><strong>Figure 1 - SINGA V1 software stack.</strong></span>
+
+## Core
+
+[Tensor](tensor.html) and [Device](device.html) are two core abstractions in SINGA. The Tensor class represents a
+multi-dimensional array, which stores model variables and provides linear algebra
+operations for machine learning
+algorithms, including matrix multiplication and random functions. Each tensor
+instance (i.e. a tensor) is allocated on a Device instance.
+Each Device instance (i.e. a device) is created against one hardware device,
+e.g. a GPU card or a CPU core. Devices manage the memory of tensors and execute
+tensor operations on their execution units, e.g. CPU threads or CUDA streams.
+
+Depending on the hardware and the programming language, SINGA has implemented
+the following specific device classes:
+
+* **CudaGPU** represents an Nvidia GPU card. The execution units are the CUDA streams.
+* **CppCPU** represents a normal CPU. The execution units are the CPU threads.
+* **OpenclGPU** represents a normal GPU card from either Nvidia or AMD.
+  The execution units are the CommandQueues. Given that OpenCL is compatible with
+  many hardware devices, e.g. FPGA and ARM, the OpenclGPU has the potential to be
+  extended to cover other devices.
+
+Different types of devices use different programming languages to write the kernel
+functions for tensor operations,
+
+* CppMath (tensor_math_cpp.h) implements the tensor operations using Cpp for CppCPU
+* CudaMath (tensor_math_cuda.h) implements the tensor operations using CUDA for CudaGPU
+* OpenclMath (tensor_math_opencl.h) implements the tensor operations using OpenCL for OpenclGPU
+
+In addition, different types of data, such as float32 and float16, could be supported by adding
+the corresponding tensor functions.
+
+Typically, users would create a device instance and pass it to create multiple
+tensor instances. When users call the Tensor functions, these functions would invoke
+the corresponding implementation (CppMath/CudaMath/OpenclMath) automatically. In
+other words, the implementation of Tensor operations is transparent to users.
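+
+A minimal PySINGA sketch of this workflow (the module and class names follow
+the Python API documented on this site, but treat the exact calls as
+assumptions):
+
+    from singa import device, tensor
+
+    dev = device.get_default_device()  # a CppCPU device; use
+                                       # device.create_cuda_gpu() for a CudaGPU
+
+    a = tensor.Tensor((2, 3), dev)     # allocated on dev
+    a.uniform(-1, 1)                   # filled by dev's random function
+    b = a + a                          # dispatched to dev's math implementation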
+
+Most machine learning algorithms could be expressed using (dense or sparse) tensors.
+Therefore, with the Tensor abstraction, SINGA would be able to run a wide range of models,
+including deep learning models and other traditional machine learning models.
+
+The Tensor and Device abstractions are extensible to support a wide range of hardware devices
+using different programming languages. A new hardware device would be supported by
+adding a new Device subclass and the corresponding implementation of the Tensor
+operations (xxxMath).
+
+Optimizations in terms of speed and memory could be implemented by Device, which
+manages both operation execution and memory malloc/free. More optimization details
+are described in the [Device page](device.html).
+
+
+## Model
+
+On top of the Tensor and Device abstractions, SINGA provides some higher level
+classes for machine learning modules.
+
+* [Layer](layer.html) and its subclasses are specific to neural networks. Every layer provides
+  functions for forward propagating features and backward propagating gradients w.r.t the training loss functions.
+  They wrap the complex layer operations so that users can easily create neural nets
+  by connecting a set of layers.
+
+* [Initializer](initializer.html) and its subclasses provide various methods for initializing
+  model parameters (stored in Tensor instances), e.g., following uniform or Gaussian distributions.
+
+* [Loss](loss.html) and its subclasses define the training objective loss functions.
+  Both the function for computing the loss value and the function for computing the gradient of
+  the prediction w.r.t the objective loss are implemented. Example loss functions include squared error and cross entropy.
+
+* [Metric](metric.html) and its subclasses provide the function to measure the
+  performance of the model, e.g., the accuracy.
+
+* [Optimizer](optimizer.html) and its subclasses implement the methods for updating
+  model parameter values using parameter gradients, including SGD, AdaGrad, RMSProp etc.
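+
+As an illustration of what such optimizers compute, a small numpy sketch of
+SGD with momentum (not the SINGA API; see the [Optimizer](optimizer.html) page
+for the real classes):
+
+    import numpy as np
+
+    def sgd_momentum_step(param, grad, velocity, lr=0.01, momentum=0.9):
+        # v = momentum * v - lr * grad;  param = param + v  (in place)
+        velocity *= momentum
+        velocity -= lr * grad
+        param += velocity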
+
+
+## IO
+
+The IO module consists of classes for data loading, data preprocessing and message passing.
+
+* Reader and its subclasses load string records from disk files
+* Writer and its subclasses write string records to disk files
+* Encoder and its subclasses encode Tensor instances into string records
+* Decoder and its subclasses decode string records into Tensor instances
+* Endpoint represents a communication endpoint and provides functions for passing messages between endpoints.
+* Message represents a communication message between Endpoint instances. It carries both meta data and payload.

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/tensor.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/tensor.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/tensor.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/tensor.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,48 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+Tensor
+========
+
+Each Tensor instance is a multi-dimensional array allocated on a specific
+Device instance. Tensor instances store variables and provide
+linear algebra operations over different types of hardware devices without user
+awareness. Note that users need to make sure the tensor operands are
+allocated on the same device, except for copy functions.
+
+
+Tensor implementation
+---------------------
+
+SINGA has three different sets of implementations of Tensor functions, one for each
+type of Device.
+
+* 'tensor_math_cpp.h' implements operations using Cpp (with CBLAS) for CppCPU devices.
+* 'tensor_math_cuda.h' implements operations using Cuda (with cuBLAS) for CudaGPU devices.
+* 'tensor_math_opencl.h' implements operations using OpenCL for OpenclGPU devices.
+
+Python API
+----------
+
+
+.. automodule:: singa.tensor
+   :members:
+
+
+CPP API
+---------

Added: incubator/singa/site/trunk/v1.1.0/_sources/docs/utils.txt
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/v1.1.0/_sources/docs/utils.txt?rev=1782721&view=auto
==============================================================================
--- incubator/singa/site/trunk/v1.1.0/_sources/docs/utils.txt (added)
+++ incubator/singa/site/trunk/v1.1.0/_sources/docs/utils.txt Mon Feb 13 05:13:19 2017
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+Utils
+=========
+
+
+.. automodule:: singa.utils
+   :members: