Posted to commits@singa.apache.org by wa...@apache.org on 2016/08/15 16:15:31 UTC

[18/22] incubator-singa git commit: SINGA-223 Use Sphinx to create the website.

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/community/mail-lists.rst
----------------------------------------------------------------------
diff --git a/doc/en/community/mail-lists.rst b/doc/en/community/mail-lists.rst
new file mode 100644
index 0000000..02b39de
--- /dev/null
+++ b/doc/en/community/mail-lists.rst
@@ -0,0 +1,10 @@
+Project Mailing Lists
+=====================
+
+These are the mailing lists that have been established for this project. For each list, there are subscribe, unsubscribe, and archive links.
+
+.. csv-table:: Mailing Lists
+    :header: "Name", "Post", "Subscribe", "Unsubscribe", "Archive"
+
+    "Development", "dev@singa.incubator.apache.org", "`Subscribe <ma...@singa.incubator.apache.org>`_", "`Unsubscribe <ma...@singa.incubator.apache.org>`_", "`mail-archives.apache.org <http://mail-archives.apache.org/mod_mbox/singa-dev/>`_"
+    "Commits", "commits@singa.incubator.apache.org", "`Subscribe <ma...@singa.incubator.apache.org>`_", "`Unsubscribe <ma...@singa.incubator.apache.org>`_", "`mail-archives.apache.org <http://mail-archives.apache.org/mod_mbox/singa-commits/>`_"

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/community/source-repository.md
----------------------------------------------------------------------
diff --git a/doc/en/community/source-repository.md b/doc/en/community/source-repository.md
new file mode 100644
index 0000000..8864629
--- /dev/null
+++ b/doc/en/community/source-repository.md
@@ -0,0 +1,22 @@
+# Source Repository
+
+___
+
+This project uses [Git](http://git-scm.com/) to manage its source code. Instructions on Git use can be found at [http://git-scm.com/documentation](http://git-scm.com/documentation).
+
+## Web Access
+
+The following is a link to the online source repository.
+
+* [https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;a=summary](https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;a=summary)
+
+
+## Upstream for committers
+
+Committers need to set the upstream endpoint to the Apache git (not GitHub) repo address, e.g.,
+
+    $ git remote add asf https://git-wip-us.apache.org/repos/asf/incubator-singa.git
+
+Then you (the committer) can push your code in this way,
+
+    $ git push asf <local-branch>:<remote-branch>

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/community/team-list.rst
----------------------------------------------------------------------
diff --git a/doc/en/community/team-list.rst b/doc/en/community/team-list.rst
new file mode 100644
index 0000000..a677aff
--- /dev/null
+++ b/doc/en/community/team-list.rst
@@ -0,0 +1,64 @@
+The SINGA Team
+==============
+
+A successful project requires many people to play many roles. Some members write code or documentation, while others are valuable as testers, submitting patches and suggestions.
+
+Mentors
+-------
+
+===================  ===================
+Name                 Email
+===================  ===================
+Daniel Dai           daijy@apache.org
+Ted Dunning          tdunning@apache.org
+Alan Gates           gates@apache.org
+Thejas Nair          thejas@apache.org
+===================  ===================
+
+Developers
+----------
+
++--------------------+--------------------------------+-----------------------------------------------+
+| Name               | Email                          | Organization                                  |
++====================+================================+===============================================+
+| Gang Chen          | cg@zju.edu.cn                  | Zhejiang University                           |
++--------------------+--------------------------------+-----------------------------------------------+
+| Haibo Chen         | hzchenhaibo@corp.netease.com   | NetEase                                       |
++--------------------+--------------------------------+-----------------------------------------------+
+| Anh Dinh           | dinhtta@apache.org             | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Jinyang Gao        | jinyang@apache.org             | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Xing Ji            | jixin@comp.nus.edu.sg          | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Chonho Lee         | chonho@gmail.com               | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Zhaojing Luo       | zhaojing@apache.org            | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Beng Chin Ooi      | ooibc@comp.nus.edu.sg          | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Kian-Lee Tan       | tankl@apache.org               | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Anthony K. H. Tung | atung@comp.nus.edu.sg          | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Ji Wang            | wangji@comp.nus.edu.sg         | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Sheng Wang         | wangsh@apache.org              | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Wei Wang           | wangwei@apache.org             | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Yuan Wang          | wangyuan@corp.netease.com      | NetEase                                       |
++--------------------+--------------------------------+-----------------------------------------------+
+| Wenfeng Wu         | wuwf@comp.nus.edu.sg           | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Zhongle Xie        | zhongle@apache.org             | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Meihui Zhang       | meihui_zhang@sutd.edu.sg       | Singapore University of Technology and Design |
++--------------------+--------------------------------+-----------------------------------------------+
+| Kaiping Zheng      | kaiping@apache.org             | National University of Singapore              |
++--------------------+--------------------------------+-----------------------------------------------+
+| Ming Zhong         | hzzhongming15@corp.netease.com | Zhejiang University                           |
++--------------------+--------------------------------+-----------------------------------------------+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/conf.py
----------------------------------------------------------------------
diff --git a/doc/en/conf.py b/doc/en/conf.py
new file mode 100755
index 0000000..332a0d1
--- /dev/null
+++ b/doc/en/conf.py
@@ -0,0 +1,339 @@
+# -*- coding: utf-8 -*-
+#
+# incubator-singa documentation build configuration file, created by
+# sphinx-quickstart on Sat Jul  9 20:36:57 2016.
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+sys.path.insert(0, os.path.abspath('.'))
+sys.path.insert(1, os.path.abspath('../build/python'))
+
+# -- General configuration ------------------------------------------------
+from recommonmark.parser import CommonMarkParser
+
+source_parsers = {
+    '.md': CommonMarkParser,
+}
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon']
+napoleon_google_docstring = True
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+# source_suffix = ['.rst', '.md']
+source_suffix = ['.rst', '.md']
+
+# The encoding of source files.
+#
+source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'incubator-singa'
+copyright = u'2016 The Apache Software Foundation. All rights reserved. Apache Singa, Apache, the Apache feather logo, and the Apache Singa project logos are trademarks of The Apache Software Foundation. All other marks mentioned may be trademarks or registered trademarks of their respective owners.'
+author = u'moaz'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = u'1.0.0'
+# The full version, including alpha/beta/rc tags.
+release = u'1.0.0'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#
+# today = ''
+#
+# Else, today_fmt is used as the format for a strftime call.
+#
+# today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This patterns also effect to html_static_path and html_extra_path
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+
+# The reST default role (used for this markup: `text`) to use for all
+# documents.
+#
+# default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#
+# add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#
+# add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#
+# show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+# modindex_common_prefix = []
+
+# If true, keep warnings as "system message" paragraphs in the built documents.
+# keep_warnings = False
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = False
+
+
+# -- Options for HTML output ----------------------------------------------
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further.  For a list of options available for each theme, see the
+# documentation.
+#
+# html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+# html_theme_path = []
+
+# The name for this set of Sphinx documents.
+# "<project> v<release> documentation" by default.
+#
+# html_title = u'Singa v1.0.0'
+
+# A shorter title for the navigation bar.  Default is the same as html_title.
+#
+# html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#
+html_logo = 'image/singa.png'
+
+# The name of an image file (relative to this directory) to use as a favicon of
+# the docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#
+# html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['../_static']
+
+# Add any extra paths that contain custom files (such as robots.txt or
+# .htaccess) here, relative to this directory. These files are copied
+# directly to the root of the documentation.
+#
+# html_extra_path = []
+
+# If not None, a 'Last updated on:' timestamp is inserted at every page
+# bottom, using the given strftime format.
+# The empty string is equivalent to '%b %d, %Y'.
+#
+# html_last_updated_fmt = None
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#
+# html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#
+# html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#
+# html_additional_pages = {}
+
+# If false, no module index is generated.
+#
+# html_domain_indices = True
+
+# If false, no index is generated.
+#
+# html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#
+# html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#
+html_show_sourcelink = False
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#
+# html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#
+# html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it.  The value of this option must be the
+# base URL from which the finished HTML is served.
+#
+# html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+# html_file_suffix = None
+
+# Language to be used for generating the HTML full-text search index.
+# Sphinx supports the following languages:
+#   'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
+#   'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
+#
+# html_search_language = 'en'
+
+# A dictionary with options for the search language support, empty by default.
+# 'ja' uses this config value.
+# 'zh' user can custom change `jieba` dictionary path.
+#
+# html_search_options = {'type': 'default'}
+
+# The name of a javascript file (relative to the configuration directory) that
+# implements a search results scorer. If empty, the default will be used.
+#
+# html_search_scorer = 'scorer.js'
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'Singadoc'
+
+# -- Options for LaTeX output ---------------------------------------------
+
+latex_elements = {
+     # The paper size ('letterpaper' or 'a4paper').
+     #
+     # 'papersize': 'letterpaper',
+
+     # The font size ('10pt', '11pt' or '12pt').
+     #
+     # 'pointsize': '10pt',
+
+     # Additional stuff for the LaTeX preamble.
+     #
+     # 'preamble': '',
+
+     # Latex figure (float) alignment
+     #
+     # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+#  author, documentclass [howto, manual, or own class]).
+latex_documents = [
+    (master_doc, 'incubator-singa.tex', u'incubator-singa Documentation',
+     u'moaz', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#
+# latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#
+# latex_use_parts = False
+
+# If true, show page references after internal links.
+#
+# latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#
+# latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#
+# latex_appendices = []
+
+# If false, no module index is generated.
+#
+# latex_domain_indices = True
+
+
+# -- Options for manual page output ---------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+    (master_doc, 'incubator-singa', u'incubator-singa Documentation',
+     [author], 1)
+]
+
+# If true, show URL addresses after external links.
+#
+# man_show_urls = False
+
+
+# -- Options for Texinfo output -------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+#  dir menu entry, description, category)
+texinfo_documents = [
+    (master_doc, 'incubator-singa', u'incubator-singa Documentation',
+     author, 'incubator-singa', 'One line description of project.',
+     'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#
+# texinfo_appendices = []
+
+# If false, no module index is generated.
+#
+# texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#
+# texinfo_show_urls = 'footnote'
+
+# If true, do not generate a @detailmenu in the "Top" node's menu.
+#
+# texinfo_no_detailmenu = False

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/develop/contribute-code.md
----------------------------------------------------------------------
diff --git a/doc/en/develop/contribute-code.md b/doc/en/develop/contribute-code.md
new file mode 100644
index 0000000..98e5aee
--- /dev/null
+++ b/doc/en/develop/contribute-code.md
@@ -0,0 +1,60 @@
+## How to Contribute Code
+
+_____
+
+### Coding Style
+
+The SINGA codebase follows the [Google C++ Style Guide](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml).
+
+To check if your code follows the style, you can use the provided cpplint tool:
+    
+    $ ./tool/cpplint.py YOUR_FILE
+
+
+### JIRA format
+
+Like other Apache projects, SINGA uses JIRA to track bugs, improvements and
+other high-level discussions (e.g., system design and features). GitHub pull requests are
+used for implementation discussions, e.g., code review and code merge.
+
+* Provide a descriptive Title.
+* Write a detailed Description. For bug reports, this should ideally include a
+  short reproduction of the problem. For new features, it may include a design
+  document.
+* Set [required fields](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-JIRA)
+
+### Pull Request
+
+The workflow is:
+
+* Fork the [SINGA GitHub repository](https://github.com/apache/incubator-singa) to
+your own GitHub account.
+
+* Clone your fork, create a new branch (e.g., feature-foo or fixbug-foo), and
+ work on it. After finishing your work,
+ [rebase](https://git-scm.com/book/en/v2/Git-Branching-Rebasing) it onto the
+ current latest master and push the commits to the new branch in your own
+ GitHub account.
+
+* Open a pull request against the master branch of apache/incubator-singa.
+The PR title should be of the form SINGA-xxxx Title, where
+SINGA-xxxx is the relevant JIRA number, and Title may be the JIRA's title or a
+more specific title describing the PR itself, for example, "SINGA-6 Implement thread-safe singleton". The detailed description can be copied from the JIRA.
+Consider identifying committers or other contributors who have worked on the
+code being changed. Find the file(s) in GitHub and click "Blame" to see a
+line-by-line annotation of who changed the code last. You can add @username in
+the PR description to ping them immediately.
+Please state that the contribution is your original work and that you license
+the work to the project under the project's open source license. Further commits (e.g., bug fixes)
+to your new branch will be added to this pull request automatically by GitHub.
+
+* Wait for one committer to review the patch. If there are no conflicts, the committer will merge it into
+the master branch. The merge should (a) not use rebase, (b) disable fast-forward merge, and (c) check the
+commit message format and test the code/feature.
+
+* If there are too many small commits, you will be asked to squash your commits into fewer meaningful
+commits. If your commit message does not follow the format (i.e., SINGA-xxxx), you will be asked to
+reword your commit message. Both changes can be made using interactive git rebase. Once you
+have corrected the commits, push them to your own GitHub account again. Your pull request
+will be updated automatically. For details, please refer to
+[Rebase Pull Requests](https://github.com/edx/edx-platform/wiki/How-to-Rebase-a-Pull-Request).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/develop/contribute-docs.md
----------------------------------------------------------------------
diff --git a/doc/en/develop/contribute-docs.md b/doc/en/develop/contribute-docs.md
new file mode 100644
index 0000000..5e21a0f
--- /dev/null
+++ b/doc/en/develop/contribute-docs.md
@@ -0,0 +1,28 @@
+# How to Contribute Documentation
+
+___
+
+
+## Website
+This document gives step-by-step instructions for deploying the [Singa website](http://singa.incubator.apache.org).
+
+The Singa website is built with [Sphinx](http://www.sphinx-doc.org) 1.4.4 from a source tree stored in git: https://github.com/apache/incubator-singa/tree/master/doc.
+
+To install Sphinx on Ubuntu:
+
+    $ apt-get install python-sphinx
+
+To install the markdown support for Sphinx:
+
+    $ pip install recommonmark
+
+You can build the website by executing the following command from the doc folder:
+
+    $ make html
+
+The procedure for contributing documentation is the same as [contributing code](contribute-code.html).
+
+
+## CPP API
+
+To generate the docs, run "doxygen" from the doc folder (Doxygen >= 1.8 is recommended).

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/develop/how-contribute.md
----------------------------------------------------------------------
diff --git a/doc/en/develop/how-contribute.md b/doc/en/develop/how-contribute.md
new file mode 100644
index 0000000..8687b5a
--- /dev/null
+++ b/doc/en/develop/how-contribute.md
@@ -0,0 +1,11 @@
+# How to Contribute to SINGA
+
+___
+
+As with any open source project, there are several ways you can help:
+
+* Join the [mailing list](../community/mail-lists.html) and answer other users' questions.
+* [Build Singa](../quick-start.html) for yourself, in order to fix bugs.
+* Report bugs, feature requests and other issues in the [issue tracking](../community/issue-tracking.html) application.
+* Check SINGA's [development schedule](schedule.html) and [contribute code](contribute-code.html) by providing patches.
+* [Help with the documentation](contribute-docs.html) by updating webpages that are lacking or unclear.

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/develop/schedule.rst
----------------------------------------------------------------------
diff --git a/doc/en/develop/schedule.rst b/doc/en/develop/schedule.rst
new file mode 100644
index 0000000..2afe54f
--- /dev/null
+++ b/doc/en/develop/schedule.rst
@@ -0,0 +1,40 @@
+Development Schedule
+====================
+
+.. csv-table::
+    :header: "Release", "Module", "Feature", "Status"
+
+    "0.1 Sep 2015", "Neural Network", "Feed forward neural network, including CNN, MLP", "done"
+    "", "", "RBM-like model, including RBM", "done"
+    "", "", "Recurrent neural network, including standard RNN", "done"
+    "", "Architecture", "One worker group on single node (with data partition)", "done"
+    "", "", "Multi worker groups on single node using `Hogwild <http://www.eecs.berkeley.edu/~brecht/papers/hogwildTR.pdf>`_", "done"
+    "", "", "Distributed Hogwild", "done"
+    "", "", "Multi groups across nodes, like `Downpour <http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks>`_", "done"
+    "", "", "All-Reduce training architecture like `DeepImage <http://arxiv.org/abs/1501.02876>`_", "done"
+    "", "", "Load-balance among servers", "done"
+    "", "Failure recovery", "Checkpoint and restore", "done"
+    "", "Tools", "Installation with GNU auto tools", "done"
+    "0.2 Jan 2016", "Neural Network", "Feed forward neural network, including AlexNet, cuDNN layers, etc.", "done"
+    "", "", "Recurrent neural network, including GRULayer and BPTT", "done"
+    "", "", "Model partition and hybrid partition", "done"
+    "", "Tools", "Integration with Mesos for resource management", "done"
+    "", "", "Prepare Docker images for deployment", "done"
+    "", "", "Visualization of neural net and debug information", "done"
+    "", "Binding", "Python binding for major components", "done"
+    "", "GPU", "Single node with multiple GPUs", "done"
+    "0.3 April 2016", "GPU", "Multiple nodes, each with multiple GPUs", "done"
+    "", "", "Heterogeneous training using both GPU and CPU `CcT <http://arxiv.org/abs/1504.04343>`_", "done"
+    "", "", "Support cuDNN v4", "done"
+    "", "Installation", "Remove dependency on ZeroMQ, CZMQ, Zookeeper for single node training", "done"
+    "", "Updater", "Add new SGD updaters including Adam, AdamMax and AdaDelta", "done"
+    "", "Binding", "Enhance Python binding for training", "done"
+    "1.0 July 2016", "Programming abstraction", "Tensor with linear algebra, neural net and random operations", ""
+    "", "", "Updater for distributed parameter updating", ""
+    "", "Optimization", "Execution and memory optimization", ""
+    "", "Hardware", "Use Cuda and Cudnn for Nvidia GPU", ""
+    "", "", "Use OpenCL for AMD GPU or other devices", ""
+    "", "Cross-platform", "To extend from Linux to MacOS and Windows", ""
+    "", "Examples", "Speech recognition example", ""
+    "", "", "Large image models, e.g., `GoogLeNet <http://arxiv.org/abs/1409.4842>`_, `VGG <https://arxiv.org/pdf/1409.1556.pdf>`_ and `Residual Net <http://arxiv.org/abs/1512.03385>`_", ""
+    "", "Rafiki", "Deep learning as a service", ""

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/docs.rst
----------------------------------------------------------------------
diff --git a/doc/en/docs.rst b/doc/en/docs.rst
new file mode 100644
index 0000000..400b12a
--- /dev/null
+++ b/doc/en/docs.rst
@@ -0,0 +1,6 @@
+Documentation
+=============
+
+.. toctree::
+   docs/index
+   docs/zh/index

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/docs/cnn.md
----------------------------------------------------------------------
diff --git a/doc/en/docs/cnn.md b/doc/en/docs/cnn.md
new file mode 100755
index 0000000..21ef1f7
--- /dev/null
+++ b/doc/en/docs/cnn.md
@@ -0,0 +1,141 @@
+# Quickstart - Cifar10 example
+Convolutional neural networks (CNN) are a type of feed-forward artificial neural network widely used for image classification. In this example, we will use a deep CNN model to do image classification on the [CIFAR10 dataset](http://www.cs.toronto.edu/~kriz/cifar.html).
+
+## Running instructions for CPP version
+Please refer to the [Installation](installation.html) page for how to install SINGA. Currently, the CNN example requires cuDNN, hence both CUDA and cuDNN should be installed and SINGA should be compiled with CUDA and cuDNN support.
+
+The Cifar10 dataset can be downloaded by running
+
+    # switch to cifar10 directory
+    $ cd ../examples/cifar10
+    # download data for CPP version
+    $ python download_data.py bin
+
+'bin' downloads the binary version of the Cifar10 data.
+
+During the download, you should see detailed output like
+
+     Downloading CIFAR10 from http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
+     The tar file does exist. Extracting it now..
+     Finished!
+
+Now that you have prepared the data for this Cifar10 example, the final step is to execute the `run.sh` script,
+
+    # in SINGA_ROOT/examples/cifar10/
+    $ ./run.sh
+
+You should see detailed output as follows: first the data files are read in order and the statistics of the training and testing data are shown, then the neural net structure is printed with some parameter information, and finally the performance during the training and validation process is reported. The number of epochs can be specified in the `run.sh` file.
+
+    Start training
+    Reading file cifar-10-batches-bin/data_batch_1.bin
+    Reading file cifar-10-batches-bin/data_batch_2.bin
+    Reading file cifar-10-batches-bin/data_batch_3.bin
+    Reading file cifar-10-batches-bin/data_batch_4.bin
+    Reading file cifar-10-batches-bin/data_batch_5.bin
+    Reading file cifar-10-batches-bin/test_batch.bin
+    Training samples = 50000, Test samples = 10000
+    conv1(32, 32, 32, )
+    pool1(32, 16, 16, )
+    relu1(32, 16, 16, )
+    lrn1(32, 16, 16, )
+    conv2(32, 16, 16, )
+    relu2(32, 16, 16, )
+    pool2(32, 8, 8, )
+    lrn2(32, 8, 8, )
+    conv3(64, 8, 8, )
+    relu3(64, 8, 8, )
+    pool3(64, 4, 4, )
+    flat(1024, )
+    ip(10, )
+    conv1_weight : 8.09309e-05
+    conv1_bias : 0
+    conv2_weight : 0.00797731
+    conv2_bias : 0
+    conv3_weight : 0.00795888
+    conv3_bias : 0
+    ip_weight : 0.00798683
+    ip_bias : 0
+    Messages will be appended to an existed file: train_perf
+    Messages will be appended to an existed file: val_perf
+    Epoch 0, training loss = 1.828369, accuracy = 0.329420, lr = 0.001000
+    Epoch 0, val loss = 1.561823, metric = 0.420600
+    Epoch 1, training loss = 1.465898, accuracy = 0.469940, lr = 0.001000
+    Epoch 1, val loss = 1.361778, metric = 0.513300
+    Epoch 2, training loss = 1.320708, accuracy = 0.529000, lr = 0.001000
+    Epoch 2, val loss = 1.242080, metric = 0.549100
+    Epoch 3, training loss = 1.213776, accuracy = 0.571620, lr = 0.001000
+    Epoch 3, val loss = 1.175346, metric = 0.582000
+
+The training details are stored in the `train_perf` file in the same directory and the validation details in the `val_perf` file.
+
+
+## Running instructions for Python version
+To run the Python version of the CNN example, we need to compile SINGA with the Python binding,
+
+    $ mkdir build && cd build
+    $ cmake -DUSE_PYTHON=ON ..
+    $ make
+
+Now download the Cifar10 dataset,
+
+    # switch to cifar10 directory
+    $ cd ../examples/cifar10
+    # download data for Python version
+    $ python download_data.py py
+
+During the download, you should see detailed output like
+
+     Downloading CIFAR10 from http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
+     The tar file does exist. Extracting it now..
+     Finished!
+
+Then execute the `train.py` script to build and train the model
+
+    $ python train.py
+
+You should see output as follows, including the details of the neural net structure with some parameter information, the reading of data files, and the performance details during the training and testing process.
+
+    (32L, 32L, 32L)
+    (32L, 16L, 16L)
+    (32L, 16L, 16L)
+    (32L, 16L, 16L)
+    (32L, 16L, 16L)
+    (32L, 16L, 16L)
+    (32L, 8L, 8L)
+    (32L, 8L, 8L)
+    (64L, 8L, 8L)
+    (64L, 8L, 8L)
+    (64L, 4L, 4L)
+    (1024L,)
+    Start intialization............
+    conv1_weight gaussian 7.938460476e-05
+    conv1_bias constant 0.0
+    conv2_weight gaussian 0.00793507322669
+    conv2_bias constant 0.0
+    conv3_weight gaussian 0.00799657031894
+    conv3_bias constant 0.0
+    dense_weight gaussian 0.00804364029318
+    dense_bias constant 0.0
+    Loading data ..................
+    Loading data file cifar-10-batches-py/data_batch_1
+    Loading data file cifar-10-batches-py/data_batch_2
+    Loading data file cifar-10-batches-py/data_batch_3
+    Loading data file cifar-10-batches-py/data_batch_4
+    Loading data file cifar-10-batches-py/data_batch_5
+    Loading data file cifar-10-batches-py/test_batch
+    Epoch 0
+    training loss = 1.881866, training accuracy = 0.306360 accuracy = 0.420000
+    test loss = 1.602577, test accuracy = 0.412200
+    Epoch 1
+    training loss = 1.536011, training accuracy = 0.441940 accuracy = 0.500000
+    test loss = 1.378170, test accuracy = 0.507600
+    Epoch 2
+    training loss = 1.333137, training accuracy = 0.519960 accuracy = 0.520000
+    test loss = 1.272205, test accuracy = 0.540600
+    Epoch 3
+    training loss = 1.185212, training accuracy = 0.574120 accuracy = 0.540000
+    test loss = 1.211573, test accuracy = 0.567600
+
+This script calls the `alexnet.py` file to build the AlexNet model. After the training is finished, SINGA saves the model parameters into a checkpoint file `model.bin` in the same directory. Then we can use this `model.bin` file for prediction.
+
+    $ python predict.py

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/docs/device.rst
----------------------------------------------------------------------
diff --git a/doc/en/docs/device.rst b/doc/en/docs/device.rst
new file mode 100644
index 0000000..e79d87a
--- /dev/null
+++ b/doc/en/docs/device.rst
@@ -0,0 +1,38 @@
+Device
+=======
+
+
+The Device abstraction represents any hardware device with memory and computation units.
+All `Tensor operations <tensor.html>`_ are scheduled by the resident device for execution.
+Tensor memory is also managed by the device's memory manager. Therefore, optimization
+of memory and execution is implemented in the Device class.
+
+Specific devices
+----------------
+Currently, SINGA has three Device implementations,
+
+1. CudaGPU for an Nvidia GPU card which runs Cuda code
+2. CppCPU for a CPU which runs Cpp code
+3. OpenclGPU for a GPU card which runs OpenCL code
+
+
+Python API
+----------
+
+.. automodule:: singa.device
+   :members: create_cuda_gpus, create_cuda_gpus_on, get_default_device
+
+
+The following code provides examples of creating devices,
+
+.. code:: python
+
+   from singa import device
+   cuda = device.create_cuda_gpu_on(0)  # use GPU card of ID 0
+   host = device.get_default_device()  # get the default host device (a CppCPU)
+   ary1 = device.create_cuda_gpus(2)  # create 2 devices, starting from ID 0
+   ary2 = device.create_cuda_gpus_on([0,2])  # create 2 devices on ID 0 and 2
+
+
+CPP API
+---------

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/docs/index.rst
----------------------------------------------------------------------
diff --git a/doc/en/docs/index.rst b/doc/en/docs/index.rst
new file mode 100644
index 0000000..93315de
--- /dev/null
+++ b/doc/en/docs/index.rst
@@ -0,0 +1,10 @@
+English
+=======
+
+.. toctree::
+
+   installation
+   software_stack
+   device
+   tensor
+   examples/index

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/docs/installation.md
----------------------------------------------------------------------
diff --git a/doc/en/docs/installation.md b/doc/en/docs/installation.md
new file mode 100755
index 0000000..8ab617f
--- /dev/null
+++ b/doc/en/docs/installation.md
@@ -0,0 +1,69 @@
+# Building SINGA from source
+
+## Dependencies
+
+### Required
+* Google Protobuf (>=2.5)
+* BLAS (tested with OpenBLAS >=0.2.10)
+* CUDA (tested with 6.5, 7.0 and 7.5)
+* CUDNN (v4 and v5)
+* cmake (>=2.6)
+
+Users must install the above mandatory libraries.
+Currently CUDA and CUDNN are also mandatory, but they will become optional later.
+
+### Optional
+* Glog
+* OpenCV (tested with 2.4.8)
+* LMDB (tested with 0.9)
+
+
+## Instructions
+
+Please clone the latest code from [Github](https://github.com/apache/incubator-singa) and execute the following commands,
+
+
+    $ git clone https://github.com/apache/incubator-singa.git
+    $ cd incubator-singa/
+    # switch to dev branch
+    $ git checkout dev
+
+
+If you use CUDA, then [CNMeM](https://github.com/NVIDIA/cnmem) is necessary,
+which can be downloaded via
+
+    $ git submodule init
+    $ git submodule update
+
+
+### Linux OS
+
+GCC (>=4.8.1) is required to compile SINGA on Linux.
+In SINGA_ROOT, execute the following commands to compile SINGA,
+
+    $ mkdir build && cd build
+    # generate Makefile for compilation
+    $ cmake ..
+    # compile SINGA
+    $ make
+
+Note that if you are using CUDNN, you need to let cmake know the paths to CUDNN,
+
+    $ export CMAKE_INCLUDE_PATH=<path to cudnn>/include:$CMAKE_INCLUDE_PATH
+    $ export CMAKE_LIBRARY_PATH=<path to cudnn>/lib64:$CMAKE_LIBRARY_PATH
+
+You can use `ccmake ..` to configure the compilation options including using
+LMDB, GLOG, etc.
+
+After compiling SINGA, you can run the unit tests by
+
+    $ ./bin/test_singa
+
+All the test cases and their results will be listed. If SINGA passes all
+tests, then you have successfully installed SINGA. Please proceed to try the examples!
+
+
+### MacOS
+
+
+### Windows

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/docs/neural-net.md
----------------------------------------------------------------------
diff --git a/doc/en/docs/neural-net.md b/doc/en/docs/neural-net.md
new file mode 100644
index 0000000..c10baf8
--- /dev/null
+++ b/doc/en/docs/neural-net.md
@@ -0,0 +1,327 @@
+# Neural Net
+
+---
+
+`NeuralNet` in SINGA represents an instance of a user's neural net model. As a
+neural net typically consists of a set of layers, `NeuralNet` comprises
+a set of unidirectionally connected [Layer](layer.html)s.
+This page describes how to convert a user's neural net into
+the configuration of `NeuralNet`.
+
+<img src="../_static/images/model-category.png" align="center" width="200px"/>
+<span><strong>Figure 1 - Categorization of popular deep learning models.</strong></span>
+
+## Net structure configuration
+
+Users configure the `NeuralNet` by listing all layers of the neural net and
+specifying each layer's source layer names. Popular deep learning models can be
+categorized as Figure 1. The subsequent sections give details for each
+category.
+
+### Feed-forward models
+
+<div align = "left">
+<img src="../_static/images/mlp-net.png" align="center" width="200px"/>
+<span><strong>Figure 2 - Net structure of a MLP model.</strong></span>
+</div>
+
+Feed-forward models, e.g., CNN and MLP, are easy to configure as their layer
+connections are directed without cycles. The
+configuration for the MLP model shown in Figure 2 is as follows,
+
+    net {
+      layer {
+        name : "data"
+        type : kData
+      }
+      layer {
+        name : "image"
+        type : kImage
+        srclayer: "data"
+      }
+      layer {
+        name : "label"
+        type : kLabel
+        srclayer: "data"
+      }
+      layer {
+        name : "hidden"
+        type : kHidden
+        srclayer: "image"
+      }
+      layer {
+        name : "softmax"
+        type : kSoftmaxLoss
+        srclayer: "hidden"
+        srclayer: "label"
+      }
+    }
+
+### Energy models
+
+<img src="../_static/images/rbm-rnn.png" align="center" width="500px"/>
+<span><strong>Figure 3 - Convert connections in RBM and RNN.</strong></span>
+
+
+For energy models including RBM, DBM,
+etc., their connections are undirected (i.e., Category B). To represent these models using
+`NeuralNet`, users can simply replace each connection with two directed
+connections, as shown in Figure 3a. In other words, for each pair of connected layers, their source
+layer field should include each other's name.
+The full [RBM example](rbm.html) has a
+detailed neural net configuration for an RBM model, which looks like
+
+    net {
+      layer {
+        name : "vis"
+        type : kVisLayer
+        param {
+          name : "w1"
+        }
+        srclayer: "hid"
+      }
+      layer {
+        name : "hid"
+        type : kHidLayer
+        param {
+          name : "w2"
+          share_from: "w1"
+        }
+        srclayer: "vis"
+      }
+    }
+
+### RNN models
+
+For recurrent neural networks (RNN), users can remove the recurrent connections
+by unrolling the recurrent layer.  For example, in Figure 3b, the original
+layer is unrolled into a new layer with 4 internal layers. In this way, the
+model becomes a normal feed-forward model and thus can be configured similarly.
+The [RNN example](rnn.html) has a full neural net
+configuration for a RNN model.
+
+
+## Configuration for multiple nets
+
+Typically, a training job includes three neural nets for
+training, validation and test phase respectively. The three neural nets share most
+layers except the data layer, loss layer or output layer, etc. To avoid
+redundant configurations for the shared layers, users can use the `exclude`
+field to filter a layer in the neural net, e.g., the following layer will be
+filtered when creating the testing `NeuralNet`.
+
+
+    layer {
+      ...
+      exclude : kTest # filter this layer for creating test net
+    }
+
+
+
+## Neural net partitioning
+
+A neural net can be partitioned in different ways to distribute the training
+over multiple workers.
+
+### Batch and feature dimension
+
+<img src="../_static/images/partition_fc.png" align="center" width="400px"/>
+<span><strong>Figure 4 - Partitioning of a fully connected layer.</strong></span>
+
+
+Every layer's feature blob is considered a matrix whose rows are feature
+vectors. Thus, one layer can be split on two dimensions. Partitioning on
+dimension 0 (also called batch dimension) slices the feature matrix by rows.
+For instance, if the mini-batch size is 256 and the layer is partitioned into 2
+sub-layers, each sub-layer would have 128 feature vectors in its feature blob.
+Partitioning on this dimension has no effect on the parameters, as every
+[Param](param.html) object is replicated in the sub-layers. Partitioning on dimension
+1 (also called feature dimension) slices the feature matrix by columns. For
+example, suppose the original feature vector has 50 units, after partitioning
+into 2 sub-layers, each sub-layer would have 25 units. This partitioning may
+result in [Param](param.html) object being split, as shown in
+Figure 4. Both the bias vector and weight matrix are
+partitioned into two sub-layers.
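+
+The effect of the two dimensions can be sketched with NumPy (an illustration
+only, not SINGA code; the shapes follow the example above):
+
+    import numpy as np
+
+    feats = np.zeros((256, 50))        # mini-batch of 256 feature vectors, 50 units each
+
+    # dimension 0 (batch): slice by rows; Param objects are replicated
+    sub0 = np.split(feats, 2, axis=0)  # two (128, 50) feature blobs
+
+    # dimension 1 (feature): slice by columns; Param objects may be split
+    sub1 = np.split(feats, 2, axis=1)  # two (256, 25) feature blobs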
+
+
+### Partitioning configuration
+
+There are 4 partitioning schemes, whose configurations are given below,
+
+  1. Partitioning each single layer into sub-layers on the batch dimension (see
+  below). It is enabled by configuring the partition dimension of the layer to
+  0, e.g.,
+
+          # with other fields omitted
+          layer {
+            partition_dim: 0
+          }
+
+  2. Partitioning each single layer into sub-layers on the feature dimension (see
+  below).  It is enabled by configuring the partition dimension of the layer to
+  1, e.g.,
+
+          # with other fields omitted
+          layer {
+            partition_dim: 1
+          }
+
+  3. Partitioning all layers into different subsets. It is enabled by
+  configuring the location ID of a layer, e.g.,
+
+          # with other fields omitted
+          layer {
+            location: 1
+          }
+          layer {
+            location: 0
+          }
+
+
+  4. Hybrid partitioning of strategy 1, 2 and 3. The hybrid partitioning is
+  useful for large models. An example application is to implement the
+  [idea proposed by Alex](http://arxiv.org/abs/1404.5997).
+  Hybrid partitioning is configured like,
+
+          # with other fields omitted
+          layer {
+            location: 1
+          }
+          layer {
+            location: 0
+          }
+          layer {
+            partition_dim: 0
+            location: 0
+          }
+          layer {
+            partition_dim: 1
+            location: 0
+          }
+
+Currently SINGA supports strategy-2 well. Other partitioning strategies are
+under testing and will be released in a later version.
+
+## Parameter sharing
+
+Parameters can be shared in two cases,
+
+  * sharing parameters among layers via user configuration. For example, the
+  visible layer and hidden layer of an RBM share the weight matrix, which is configured through
+  the `share_from` field as shown in the above RBM configuration. The
+  configurations must be the same (except the name) for shared parameters.
+
+  * due to neural net partitioning, some `Param` objects are replicated into
+  different workers, e.g., partitioning one layer on batch dimension. These
+  workers share parameter values. SINGA controls this kind of parameter
+  sharing automatically, users do not need to do any configuration.
+
+  * the `NeuralNet` instances for training and testing (and validation) share most
+  layers, and thus share `Param` values.
+
+If the shared `Param` instances reside in the same process (possibly in different
+threads), they use the same chunk of memory space for their values. But they
+have different memory spaces for their gradients. In fact, their
+gradients will be averaged by the stub or server.
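+
+This bookkeeping can be sketched with NumPy (an illustration only, not SINGA
+code): one shared value, per-replica gradients, averaged before the update,
+
+    import numpy as np
+
+    value = np.zeros(4)                    # one shared memory chunk for the value
+    grads = [np.ones(4), 3 * np.ones(4)]   # each replica keeps its own gradient
+    value -= 0.1 * np.mean(grads, axis=0)  # gradients averaged, then applied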
+
+## Advanced user guide
+
+### Creation
+
+    static NeuralNet* NeuralNet::Create(const NetProto& np, Phase phase, int num);
+
+The above function creates a `NeuralNet` for a given phase, and returns a
+pointer to the `NeuralNet` instance. The phase is in {kTrain,
+kValidation, kTest}. `num` is used for net partitioning which indicates the
+number of partitions.  Typically, a training job includes three neural nets for
+training, validation and test phase respectively. The three neural nets share most
+layers except the data layer, loss layer or output layer, etc. The `Create`
+function takes in the full net configuration including layers for training,
+validation and test.  It removes layers for phases other than the specified
+phase based on the `exclude` field in
+[layer configuration](layer.html):
+
+    layer {
+      ...
+      exclude : kTest # filter this layer for creating test net
+    }
+
+The filtered net configuration is passed to the constructor of `NeuralNet`:
+
+    NeuralNet::NeuralNet(NetProto netproto, int npartitions);
+
+The constructor first creates a graph representing the net structure in
+
+    Graph* NeuralNet::CreateGraph(const NetProto& netproto, int npartitions);
+
+Next, it creates a layer for each node and connects layers if their nodes are
+connected.
+
+    void NeuralNet::CreateNetFromGraph(Graph* graph, int npartitions);
+
+Since the `NeuralNet` instance may be shared among multiple workers, the
+`Create` function returns a pointer to the `NeuralNet` instance.
+
+### Parameter sharing
+
+`Param` sharing
+is enabled by first sharing the Param configuration (in `NeuralNet::Create`)
+to create two similar (e.g., the same shape) Param objects, and then calling
+(in `NeuralNet::CreateNetFromGraph`),
+
+    void Param::ShareFrom(const Param& from);
+
+It is also possible to share `Param`s of two nets, e.g., sharing parameters of
+the training net and the test net,
+
+    void NeuralNet::ShareParamsFrom(NeuralNet* other);
+
+It will call `Param::ShareFrom` for each Param object.
+
+### Access functions
+`NeuralNet` provides a couple of access functions to get the layers and params
+of the net:
+
+    const std::vector<Layer*>& layers() const;
+    const std::vector<Param*>& params() const;
+    Layer* name2layer(string name) const;
+    Param* paramid2param(int id) const;
+
+
+### Partitioning
+
+
+#### Implementation
+
+SINGA partitions the neural net in the `CreateGraph` function, which creates one
+node for each (partitioned) layer. For example, if one layer's partition
+dimension is 0 or 1, then it creates `npartition` nodes for it; if the
+partition dimension is -1, a single node is created, i.e., no partitioning.
+Each node is assigned a partition (or location) ID. If the original layer is
+configured with a location ID, then the ID is assigned to each newly created node.
+These nodes are connected according to the connections of the original layers.
+Some connection layers will be added automatically.
+For instance, if two connected sub-layers are located at two
+different workers, then a pair of bridge layers is inserted to transfer the
+feature (and gradient) blob between them. When two layers are partitioned on
+different dimensions, a concatenation layer which concatenates feature rows (or
+columns) and a slice layer which slices feature rows (or columns) would be
+inserted. These connection layers help make the network communication and
+synchronization transparent to the users.
+
+#### Dispatching partitions to workers
+
+Each (partitioned) layer is assigned a location ID, based on which it is dispatched to one
+worker. Particularly, the pointer to the `NeuralNet` instance is passed
+to every worker within the same group, but each worker only computes over the
+layers that have the same partition (or location) ID as the worker's ID.  When
+every worker computes the gradients of the entire model parameters
+(strategy-2), we refer to this process as data parallelism.  When different
+workers compute the gradients of different parameters (strategy-3 or
+strategy-1), we call this process model parallelism.  The hybrid partitioning
+leads to hybrid parallelism where some workers compute the gradients of the
+same subset of model parameters while other workers compute on different model
+parameters.  For example, to implement the hybrid parallelism for the
+[DCNN model](http://arxiv.org/abs/1404.5997), we set `partition_dim = 0` for
+lower layers and `partition_dim = 1` for higher layers.
+

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/docs/overview.rst
----------------------------------------------------------------------
diff --git a/doc/en/docs/overview.rst b/doc/en/docs/overview.rst
new file mode 100644
index 0000000..18ad62b
--- /dev/null
+++ b/doc/en/docs/overview.rst
@@ -0,0 +1,99 @@
+Introduction
+==============
+
+
+SINGA is a general distributed deep learning platform for training big deep
+learning models over large datasets. It is designed with an intuitive
+programming model based on the layer abstraction. A variety
+of popular deep learning models are supported, namely feed-forward models including
+convolutional neural networks (CNN), energy models like restricted Boltzmann
+machine (RBM), and recurrent neural networks (RNN). Many built-in layers are
+provided for users. SINGA architecture is
+sufficiently flexible to run synchronous, asynchronous and hybrid training
+frameworks.  SINGA
+also supports different neural net partitioning schemes to parallelize the
+training of large models, namely partitioning on batch dimension, feature
+dimension or hybrid partitioning.
+
+
+Goals
+-----
+
+As a distributed system, the first goal of SINGA is to have good scalability. In other
+words, SINGA is expected to reduce the total training time to achieve certain
+accuracy with more computing resources (i.e., machines).
+
+
+The second goal is to make SINGA easy to use.
+It is non-trivial for programmers to develop and train models with deep and
+complex model structures.  Distributed training further increases the burden of
+programmers, e.g., data and model partitioning, and network communication.  Hence it is essential to
+provide an easy to use programming model so that users can implement their deep
+learning models/algorithms without much awareness of the underlying distributed
+platform.
+
+Principles
+----------
+
+Scalability is a challenging research problem for distributed deep learning
+training. SINGA provides a general architecture to exploit the scalability of
+different training frameworks. Synchronous training frameworks improve the
+efficiency of one training iteration, and
+asynchronous training frameworks improve the convergence rate. Given a fixed budget
+(e.g., cluster size), users can run a hybrid framework that maximizes the
+scalability by trading off between efficiency and convergence rate.
+
+SINGA comes with a programming model designed based on the layer abstraction, which
+is intuitive for deep learning models.  A variety of
+popular deep learning models can be expressed and trained using this programming model.
+
+System overview
+---------------
+
+.. figure:: /image/sgd.png
+
+            Figure 1 - SGD flow
+
+Training a deep learning model means finding the optimal parameters involved in
+the transformation functions that generate good features for specific tasks.
+The goodness of a set of parameters is measured by a loss function, e.g.,
+`Cross-Entropy Loss <https://en.wikipedia.org/wiki/Cross_entropy>`_ . Since the
+loss functions are usually non-linear and non-convex, it is difficult to get a
+closed form solution. Typically, people use the stochastic gradient descent
+(SGD) algorithm, which randomly
+initializes the parameters and then iteratively updates them to reduce the loss
+as shown in Figure 1.
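+
+The following toy sketch illustrates this loop (framework-agnostic Python with
+NumPy; the quadratic loss and its gradient are stand-ins for a real model):
+
+.. code:: python
+
+   import numpy as np
+
+   def sgd(grad_fn, theta, lr=0.01, num_iters=100):
+       # iteratively update theta to reduce the loss
+       for _ in range(num_iters):
+           theta = theta - lr * grad_fn(theta)  # step against the gradient
+       return theta
+
+   # minimize f(theta) = ||theta||^2, whose gradient is 2 * theta
+   theta = sgd(lambda t: 2 * t, np.random.randn(10))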
+
+.. figure:: /image/overview.png
+
+           Figure 2 - SINGA overview
+
+SGD is used in SINGA to train
+parameters of deep learning models. The training workload is distributed over
+worker and server units as shown in Figure 2. In each
+iteration, every worker calls *TrainOneBatch* function to compute
+parameter gradients. *TrainOneBatch* takes a *NeuralNet* object
+representing the neural net, and visits layers of the *NeuralNet* in
+certain order. The resultant gradients are sent to the local stub that
+aggregates the requests and forwards them to corresponding servers for
+updating. Servers reply to workers with the updated parameters for the next
+iteration.
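+
+A toy, single-process sketch of this protocol (plain Python with NumPy; the
+function and variable names are illustrative stand-ins, not the SINGA API):
+
+.. code:: python
+
+   import numpy as np
+
+   def train_one_batch(params, batch):
+       # stand-in gradient of a dummy loss pulling params to the batch mean
+       return 2 * (params - batch.mean())
+
+   params = np.zeros(4)
+   for step in range(3):
+       # two workers compute gradients over their own mini-batches
+       grads = [train_one_batch(params, np.random.randn(8)) for _ in range(2)]
+       agg = np.mean(grads, axis=0)  # the stub aggregates the requests
+       params -= 0.1 * agg           # the server updates and replies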
+
+
+Job submission
+--------------
+
+To submit a job in SINGA (i.e., training a deep learning model),
+users pass the job configuration to the SINGA driver in the
+`main function <programming-guide.html>`_ . The job configuration
+specifies the four major components in Figure 2,
+
+  * a `NeuralNet <neural-net.html>`_ describing the neural net structure with the detailed layer setting and their connections;
+  * a `TrainOneBatch <train-one-batch.html>`_  algorithm which is tailored for different model categories;
+  * an `Updater <updater.html>`_  defining the protocol for updating parameters at the server side;
+  * a `Cluster Topology <distributed-training.html>`_ specifying the distributed architecture of workers and servers.
+
+This process is like the job submission in Hadoop, where users configure their
+jobs in the main function to set the mapper, reducer, etc.
+In Hadoop, users can configure their jobs with their own (or built-in) mapper and reducer; in SINGA, users
+can configure their jobs with their own (or built-in) layer, updater, etc.

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/docs/software_stack.md
----------------------------------------------------------------------
diff --git a/doc/en/docs/software_stack.md b/doc/en/docs/software_stack.md
new file mode 100644
index 0000000..c60b6a5
--- /dev/null
+++ b/doc/en/docs/software_stack.md
@@ -0,0 +1,99 @@
+# Software Stack
+
+SINGA's software stack includes three major components, namely, core, IO and
+model. Figure 1 illustrates these components together with the hardware.
+The core component provides memory management and tensor operations;
+IO has classes for reading (and writing) data from (to) disk and network; the
+model component provides data structures and algorithms for machine learning models,
+e.g., layers for neural network models, and optimizers/initializers/metrics/losses
+for general machine learning models.
+
+
+<img src="../_static/images/singav1-sw.png" align="center" width="500px"/>
+<br/>
+<span><strong>Figure 1 - SINGA V1 software stack.</strong></span>
+
+## Core
+
+[Tensor](tensor.html) and [Device](device.html) are the two core abstractions in SINGA. The Tensor class represents a
+multi-dimensional array, which stores model variables and provides linear algebra
+operations for machine learning
+algorithms, including matrix multiplication and random functions. Each tensor
+instance (i.e., a tensor) is allocated on a Device instance.
+Each Device instance (i.e., a device) is created against one hardware device,
+e.g., a GPU card or a CPU core. Devices manage the memory of tensors and execute
+tensor operations on their execution units, e.g., CPU threads or CUDA streams.
+
+Depending on the hardware and the programming language, SINGA implements
+the following specific device classes:
+
+* **CudaGPU** represents an Nvidia GPU card. The execution units are the CUDA streams.
+* **CppCPU** represents a normal CPU. The execution units are the CPU threads.
+* **OpenclGPU** represents a normal GPU card from either Nvidia or AMD.
+  The execution units are the CommandQueues. Given that OpenCL is compatible with
+  many hardware devices, e.g. FPGA and ARM, the OpenclGPU has the potential to be
+  extended for other devices.
+
+Different types of devices use different programming languages to write the kernel
+functions for the tensor operations:
+
+* CppMath (tensor_math_cpp.h) implements the tensor operations using Cpp for CppCPU
+* CudaMath (tensor_math_cuda.h) implements the tensor operations using CUDA for CudaGPU
+* OpenclMath (tensor_math_opencl.h) implements the tensor operations using OpenCL for OpenclGPU
+
+In addition, different types of data, such as float32 and float16, could be supported by adding
+the corresponding tensor functions.
+
+Typically, users create a device instance and use it to create multiple
+tensor instances. When users call the Tensor functions, these functions invoke
+the corresponding implementation (CppMath/CudaMath/OpenclMath) automatically. In
+other words, the implementation of Tensor operations is transparent to users.
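+
+For example, the following sketch creates two tensors on a GPU device and
+multiplies them; the CUDA implementation is selected automatically. The names
+(`device.create_cuda_gpu`, `tensor.mult`, etc.) follow the Python API described
+on the [Tensor](tensor.html) and [Device](device.html) pages and should be
+treated as indicative:
+
+```python
+# API names are indicative; see the Tensor and Device pages for the exact API
+from singa import device, tensor
+
+dev = device.create_cuda_gpu()   # a Device instance backed by one GPU card
+a = tensor.Tensor((2, 3), dev)   # tensors are allocated on that device
+b = tensor.Tensor((3, 2), dev)
+a.gaussian(0.0, 1.0)             # fill with random values
+b.gaussian(0.0, 1.0)
+c = tensor.mult(a, b)            # dispatched to the CUDA kernel functions
+```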
+
+Most machine learning algorithms can be expressed using (dense or sparse) tensors.
+Therefore, with the Tensor abstraction, SINGA is able to run a wide range of models,
+including deep learning models and other traditional machine learning models.
+
+The Tensor and Device abstractions are extensible to support a wide range of hardware devices
+using different programming languages. A new hardware device can be supported by
+adding a new Device subclass and the corresponding implementation of the Tensor
+operations (xxxMath).
+
+Optimizations in terms of speed and memory can be implemented by Device, which
+manages both operation execution and memory malloc/free. More optimization details
+are described in the [Device page](device.html).
+
+
+## Model
+
+On top of the Tensor and Device abstractions, SINGA provides some higher-level
+classes for machine learning modules.
+
+* [Layer](layer.html) and its subclasses are specific to neural networks. Every layer provides
+  functions for propagating features forward and propagating gradients backward w.r.t. the training loss functions.
+  Layers wrap complex operations so that users can easily create neural nets
+  by connecting a set of layers.
+
+* [Initializer](initializer.html) and its subclasses provide various methods for initializing
+  model parameters (stored in Tensor instances), e.g., following uniform or Gaussian distributions.
+
+* [Loss](loss.html) and its subclasses define the training objective loss functions.
+  Each subclass implements both the computation of the loss value and the computation of the
+  gradient of the prediction w.r.t. the objective loss. Example loss functions include squared error and cross entropy.
+
+* [Metric](metric.html) and its subclasses provide functions to measure the
+  performance of the model, e.g., the accuracy.
+
+* [Optimizer](optimizer.html) and its subclasses implement the methods for updating
+  model parameter values using parameter gradients, including SGD, AdaGrad, RMSProp, etc.
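+
+The sketch below shows how these modules fit together in one training step.
+The method signatures mirror the style of the linked class pages but should be
+treated as indicative rather than exact:
+
+```python
+# indicative signatures, not SINGA's exact API; see the linked class pages
+def train_step(step, layer, loss, opt, x, target, flag='train'):
+    # forward: the layer computes features, the loss measures the objective
+    y = layer.forward(flag, x)
+    l = loss.forward(flag, y, target)
+    # backward: gradients flow from the loss back through the layer
+    dy = loss.backward()
+    dx, dparams = layer.backward(flag, dy)
+    # the optimizer updates each parameter tensor using its gradient
+    for p, g in zip(layer.param_values(), dparams):
+        opt.apply(step, g, p)
+    return l
+```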
+
+
+## IO
+
+The IO module consists of classes for data loading, data preprocessing and message passing.
+
+* Reader and its subclasses load string records from disk files.
+* Writer and its subclasses write string records to disk files.
+* Encoder and its subclasses encode Tensor instances into string records.
+* Decoder and its subclasses decode string records into Tensor instances.
+* Endpoint represents a communication endpoint that provides functions for passing messages between processes.
+* Message represents a communication message between Endpoint instances. It carries both meta data and the payload.
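+
+A hypothetical loading pipeline that chains these classes (the method names
+`read` and `decode` follow the class descriptions above, not an exact API):
+
+```python
+# hypothetical pipeline: a Reader yields string records, a Decoder turns
+# each record into Tensor instances
+def load_dataset(reader, decoder):
+    samples = []
+    while True:
+        record = reader.read()
+        if not record:
+            break
+        samples.append(decoder.decode(record))
+    return samples
+```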

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/docs/tensor.rst
----------------------------------------------------------------------
diff --git a/doc/en/docs/tensor.rst b/doc/en/docs/tensor.rst
new file mode 100644
index 0000000..87d26ea
--- /dev/null
+++ b/doc/en/docs/tensor.rst
@@ -0,0 +1,54 @@
+Tensor
+========
+
+Each Tensor instance is a multi-dimensional array allocated on a specific
+Device instance. Tensor instances store variables and provide
+linear algebra operations over different types of hardware devices without user
+awareness. Note that, except for copy functions, users need to make sure the
+tensor operands are allocated on the same device.
+
+
+Tensor implementation
+---------------------
+
+SINGA has three different sets of implementations of Tensor functions, one for each
+type of Device.
+
+* 'tensor_math_cpp.h' implements the operations using Cpp (with CBLAS) for CppCPU devices.
+* 'tensor_math_cuda.h' implements the operations using Cuda (with cuBLAS) for CudaGPU devices.
+* 'tensor_math_opencl.h' implements the operations using OpenCL for OpenclGPU devices.
+
+Python API
+----------
+
+There are two sets of tensor functions:
+
+1. Tensor member functions, which change the internal state of the Tensor instance.
+2. Tensor module functions, which accept Tensor instances as arguments and
+   return new Tensor instances.
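+
+For example (a short sketch; ``set_value`` and ``sigmoid`` follow the Python
+API documented below and should be treated as indicative)::
+
+    # indicative API names; see the class and module references below
+    from singa import tensor
+
+    t = tensor.Tensor((3,))
+    t.set_value(0.0)       # member function: changes t's internal state
+    s = tensor.sigmoid(t)  # module function: returns a new Tensor; t unchanged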
+
+
+Create Tensor instances
+~~~~~~~~~~~~~~~~~~~~~~~
+
+.. autoclass:: singa.tensor.Tensor
+
+
+Tensor instances can be constructed from numpy arrays,
+
+.. automodule:: singa.tensor
+   :members: from_numpy
+
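+For example (a short sketch; ``to_numpy`` is assumed to be the inverse
+conversion provided by the same module)::
+
+    # to_numpy is assumed as the inverse of from_numpy
+    import numpy as np
+    from singa import tensor
+
+    x = tensor.from_numpy(np.array([[1, 2], [3, 4]], dtype=np.float32))
+    y = tensor.to_numpy(x)   # copy the tensor back into a numpy array
+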
+
+Set Tensor values
+~~~~~~~~~~~~~~~~~
+
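+Tensor values can be set from a scalar or copied from a numpy array; a minimal
+sketch, assuming the member functions ``set_value`` and ``copy_from_numpy``::
+
+    # set_value and copy_from_numpy are assumed member functions
+    import numpy as np
+    from singa import tensor
+
+    t = tensor.Tensor((2, 3))
+    t.set_value(0.1)    # fill every element with one scalar value
+    t.copy_from_numpy(np.ones((2, 3), dtype=np.float32))  # element-wise copy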

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/downloads.md
----------------------------------------------------------------------
diff --git a/doc/en/downloads.md b/doc/en/downloads.md
new file mode 100644
index 0000000..31e7274
--- /dev/null
+++ b/doc/en/downloads.md
@@ -0,0 +1,67 @@
+## Download SINGA
+---
+
+* Latest code: please clone the dev branch from [Github](https://github.com/apache/incubator-singa)
+
+* v0.3.0 (20 April 2016):
+    * [Apache SINGA 0.3.0](http://www.apache.org/dyn/closer.cgi/incubator/singa/0.3.0/apache-singa-incubating-0.3.0.tar.gz)
+      [\[MD5\]](https://dist.apache.org/repos/dist/release/incubator/singa/0.3.0/apache-singa-incubating-0.3.0.tar.gz.md5)
+      [\[KEYS\]](https://dist.apache.org/repos/dist/release/incubator/singa/0.3.0/KEYS)
+    * [Release Notes 0.3.0](releases/RELEASE_NOTES_0.3.0.html)
+    * New features and major updates,
+        * [Training on GPU cluster](v0.3.0/gpu.html) enables training of deep learning models over a GPU cluster.
+        * [Python wrapper improvement](v0.3.0/python.html) makes it easy to configure the job, including neural net and SGD algorithm.
+        * [New SGD updaters](v0.3.0/updater.html) are added, including Adam, AdaDelta and AdaMax.
+        * [Installation](v0.3.0/installation.html) has fewer dependent libraries for single node training.
+        * Heterogeneous training with CPU and GPU.
+        * Support cuDNN V4.
+        * Data prefetching.
+        * Various bug fixes.
+
+
+
+* v0.2.0 (14 January 2016):
+    * [Apache SINGA 0.2.0](http://www.apache.org/dyn/closer.cgi/incubator/singa/0.2.0/apache-singa-incubating-0.2.0.tar.gz)
+      [\[MD5\]](https://archive.apache.org/dist/incubator/singa/0.2.0/apache-singa-incubating-0.2.0.tar.gz.md5)
+      [\[KEYS\]](https://archive.apache.org/dist/incubator/singa/0.2.0/KEYS)
+    * [Release Notes 0.2.0](releases/RELEASE_NOTES_0.2.0.html)
+    * New features and major updates,
+        * [Training on GPU](v0.2.0/gpu.html) enables training of complex models on a single node with multiple GPU cards.
+        * [Hybrid neural net partitioning](v0.2.0/hybrid.html) supports data and model parallelism at the same time.
+        * [Python wrapper](v0.2.0/python.html) makes it easy to configure the job, including neural net and SGD algorithm.
+        * [RNN model and BPTT algorithm](v0.2.0/general-rnn.html) are implemented to support applications based on RNN models, e.g., GRU.
+        * [Cloud software integration](v0.2.0/distributed-training.html) includes Mesos, Docker and HDFS.
+        * Visualization of neural net structure and layer information, which is helpful for debugging.
+        * Linear algebra functions and random functions against Blobs and raw data pointers.
+        * New layers, including SoftmaxLayer, ArgSortLayer, DummyLayer, RNN layers and cuDNN layers.
+        * Update Layer class to carry multiple data/grad Blobs.
+        * Extract features and test performance for new data by loading previously trained model parameters.
+        * Add Store class for IO operations.
+
+
+* v0.1.0 (8 October 2015):
+    * [Apache SINGA 0.1.0](http://www.apache.org/dyn/closer.cgi/incubator/singa/apache-singa-incubating-0.1.0.tar.gz)
+      [\[MD5\]](https://archive.apache.org/dist/incubator/singa/apache-singa-incubating-0.1.0.tar.gz.md5)
+      [\[KEYS\]](https://archive.apache.org/dist/incubator/singa/KEYS)
+    * [Amazon EC2 image](https://console.aws.amazon.com/ec2/v2/home?region=ap-southeast-1#LaunchInstanceWizard:ami=ami-b41001e6)
+    * [Release Notes 0.1.0](releases/RELEASE_NOTES_0.1.0.html)
+    * Major features include,
+        * Installation using GNU build utility
+        * Scripts for job management with zookeeper
+        * Programming model based on NeuralNet and Layer abstractions.
+        * System architecture based on Worker, Server and Stub.
+        * Training models from three different model categories, namely, feed-forward models, energy models and RNN models.
+        * Synchronous and asynchronous distributed training frameworks using CPU
+        * Checkpoint and restore
+        * Unit test using gtest
+
+**Disclaimer**
+
+Apache SINGA is an effort undergoing incubation at The Apache Software
+Foundation (ASF), sponsored by the Apache Incubator PMC. Incubation is
+required of all newly accepted projects until a further review indicates that
+the infrastructure, communications, and decision making process have stabilized
+in a manner consistent with other successful ASF projects. While incubation
+status is not necessarily a reflection of the completeness or stability of the
+code, it does indicate that the project has yet to be fully endorsed by the
+ASF.

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/en/index.rst
----------------------------------------------------------------------
diff --git a/doc/en/index.rst b/doc/en/index.rst
new file mode 100755
index 0000000..50c65d7
--- /dev/null
+++ b/doc/en/index.rst
@@ -0,0 +1,109 @@
+.. Singa documentation master file, created by
+   sphinx-quickstart on Sat Jul  9 20:36:57 2016.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+Welcome to Apache Singa
+=======================
+
+Recent News
+-----------
+
+* The **third release** is now available, 20 April, 2016. `Download SINGA v0.3.0 <downloads.html>`_
+
+* The **second release** is now available, 14 Jan, 2016. `Download SINGA v0.2.0 <downloads.html>`_.
+
+* SINGA was presented at `Strata+Hadoop <http://strataconf.com/big-data-conference-sg-2015/public/schedule/detail/45123>`_ on 2 Dec, 2015
+
+* SINGA was presented at `ACM Multimedia <http://www.acmmm.org/2015/at-a-glance/>`_ Best Paper session and Open Source Software Competition session, 26-30 Oct, 2015 (`Slides <files/mm2015.ppt>`_)
+
+* The **first release** is now available, 8 Oct, 2015. `Download SINGA v0.1.0 <downloads.html>`_.
+
+* SINGA was presented at `workshop on deep learning <http://www.comp.nus.edu.sg/~dbsystem/singa/workshop>`_  held on 16 Sep, 2015
+
+* SINGA was presented at `BOSS <http://boss.dima.tu-berlin.de/>`_ of `VLDB 2015 <http://www.vldb.org/2015/>`_ at Hawaii, 4 Sep, 2015. (slides: `overview <files/singa-vldb-boss.pptx>`_, `basic <files/basic-user-guide.pptx>`_, `advanced <files/advanced-user-guide.pptx>`_)
+
+* SINGA was presented at `ADSC/I2R Deep Learning Workshop <http://adsc.illinois.edu/contact-us>`_, 25 Aug, 2015.
+
+* A tutorial on SINGA was given at VLDB summer school at Tsinghua University,  25-31 July, 2015.
+
+* A half day tutorial on SINGA was given at I2R, 29 June, 2015.
+
+* SINGA was presented at `DanaC <http://danac.org/>`_ of `SIGMOD 2015 <http://www.sigmod2015.org/index.shtml>`_ at Melbourne, 31 May - 4 June, 2015.
+
+* SINGA has been accepted by `Apache Incubator <http://incubator.apache.org/>`_, 17 March, 2015.
+
+Getting Started
+---------------
+* The `Introduction <docs/overview.html>`_ page gives an overview of SINGA.
+
+* The `Installation <docs/installation.html>`_ guide describes details on downloading and installing SINGA.
+
+* Please follow the `Quick Start <docs/quick-start.html>`_ guide to run simple applications on SINGA.
+
+Documentation
+-------------
+
+* The documentation pages are listed `here <docs.html>`_.
+
+* The code API can be found `here <api/index.html>`_.
+
+* The research publication list is available `here <http://www.comp.nus.edu.sg/~dbsystem/singa/research/publication/>`_.
+
+How to contribute
+----------------------
+
+* Please subscribe to our development mailing list dev-subscribe@singa.incubator.apache.org.
+
+* If you find any issues when using SINGA, please report them to the `Issue Tracker <https://issues.apache.org/jira/browse/singa>`_.
+
+* You can also contact `SINGA committers <community.html>`_ directly.
+
+More details on contributing to SINGA are described `here <develop/how-contribute.html>`_.
+
+Citing SINGA
+------------
+
+Please cite the following two papers if you use SINGA in your research:
+
+* B. C. Ooi, K.-L. Tan, S. Wang, W. Wang, Q. Cai, G. Chen, J. Gao, Z. Luo, A. K. H. Tung, Y. Wang, Z. Xie, M. Zhang, and K. Zheng. `SINGA: A distributed deep learning platform <http://www.comp.nus.edu.sg/~ooibc/singaopen-mm15.pdf>`_. ACM Multimedia (Open Source Software Competition) 2015 (`BibTex <http://www.comp.nus.edu.sg/~dbsystem/singa//assets/file/bib-oss.txt>`_).
+
+* W. Wang, G. Chen, T. T. A. Dinh, B. C. Ooi, K.-L.Tan, J. Gao, and S. Wang. `SINGA: putting deep learning in the hands of multimedia users <http://www.comp.nus.edu.sg/~ooibc/singa-mm15.pdf>`_. ACM Multimedia 2015 (`BibTex <http://www.comp.nus.edu.sg/~dbsystem/singa//assets/file/bib-singa.txt>`_, `Slides <files/mm2015.ppt>`_).
+
+.. toctree::
+   :hidden:
+
+   downloads
+   docs
+
+.. toctree::
+   :hidden:
+   :maxdepth: 2
+   :caption: Development
+
+   develop/schedule
+   develop/how-contribute
+   develop/contribute-code
+   develop/contribute-docs
+
+.. toctree::
+   :hidden:
+   :maxdepth: 2
+   :caption: Community
+
+   community/source-repository
+   community/mail-lists
+   community/issue-tracking
+   community/team-list
+
+
+
+License
+----------
+SINGA is released under `Apache License Version 2.0 <http://www.apache.org/licenses/LICENSE-2.0>`_.
+
+Disclaimers
+-----------
+
+Apache SINGA is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
+

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/index.rst
----------------------------------------------------------------------
diff --git a/doc/index.rst b/doc/index.rst
deleted file mode 100755
index 50c65d7..0000000
--- a/doc/index.rst
+++ /dev/null
@@ -1,109 +0,0 @@
-.. Singa documentation master file, created by
-   sphinx-quickstart on Sat Jul  9 20:36:57 2016.
-   You can adapt this file completely to your liking, but it should at least
-   contain the root `toctree` directive.
-
-Welcome to Apache Singa
-=======================
-
-Recent News
------------
-
-* The **third release** is now available, 20 April, 2016. `Download SINGA v0.3.0 <downloads.html>`_
-
-* The **second release** is now available, 14 Jan, 2016. `Download SINGA v0.2.0 <downloads.html>`_.
-
-* SINGA will be presented at `Strata+Hadoop <http://strataconf.com/big-data-conference-sg-2015/public/schedule/detail/45123>`_ on 2 Dec, 2015
-
-* SINGA was presented at `ACM Multimedia <http://www.acmmm.org/2015/at-a-glance/>`_ Best Paper session and Open Source Software Competition session, 26-30 Oct, 2015 (`Slides <files/mm2015.ppt>`_)
-
-* The **first release** is now available, 8 Oct, 2015. `Download SINGA v0.1.0 <downloads.html>`_.
-
-* SINGA was presented at `workshop on deep learning <http://www.comp.nus.edu.sg/~dbsystem/singa/workshop>`_  held on 16 Sep, 2015
-
-* SINGA was presented at `BOSS <http://boss.dima.tu-berlin.de/>`_ of `VLDB 2015 <http://www.vldb.org/2015/>`_ at Hawaii, 4 Sep, 2015. (slides: `overview <files/singa-vldb-boss.pptx>`_, `basic <files/basic-user-guide.pptx>`_, `advanced <files/advanced-user-guide.pptx>`_)
-
-* SINGA was presented at `ADSC/I2R Deep Learning Workshop <http://adsc.illinois.edu/contact-us>`_, 25 Aug, 2015.
-
-* A tutorial on SINGA was given at VLDB summer school at Tsinghua University,  25-31 July, 2015.
-
-* A half day tutorial on SINGA was given at I2R, 29 June, 2015.
-
-* SINGA was presented at `DanaC <http://danac.org/>`_ of `SIGMOD 2015 <http://www.sigmod2015.org/index.shtml>`_ at Melbourne, 31 May - 4 June, 2015.
-
-* SINGA has been accepted by `Apache Incubator <http://incubator.apache.org/>`_, 17 March, 2015.
-
-Getting Started
----------------
-* The `Introduction <docs/overview.html>`_ page gives an overview of SINGA.
-
-* The `Installation <docs/installation.html>`_ guide describes details on downloading and installing SINGA.
-
-* Please follow the `Quick Start <docs/quick-start.html>`_ guide to run simple applications on SINGA.
-
-Documentation
--------------
-
-* Documentations are listed `here <docs.html>`_.
-
-* Code API can be found `here <api/index.html>`_.
-
-* Research publication list is available `here <http://www.comp.nus.edu.sg/~dbsystem/singa/research/publication/>`_.
-
-How to contribute
-----------------------
-
-* Please subscribe to our development mailing list dev-subscribe@singa.incubator.apache.org.
-
-* If you find any issues using SINGA, please report it to the `Issue Tracker <https://issues.apache.org/jira/browse/singa>`_.
-
-* You can also contact with `SINGA committers <community.html>`_ directly.
-
-More details on contributing to SINGA is described `here <develop/how-contribute.html>`_ .
-
-Citing SINGA
-------------
-
-Please cite the following two papers if you use SINGA in your research:
-
-* B. C. Ooi, K.-L. Tan, S. Wang, W. Wang, Q. Cai, G. Chen, J. Gao, Z. Luo, A. K. H. Tung, Y. Wang, Z. Xie, M. Zhang, and K. Zheng. `SINGA: A distributed deep learning platform <http://www.comp.nus.edu.sg/~ooibc/singaopen-mm15.pdf>`_. ACM Multimedia (Open Source Software Competition) 2015 (`BibTex <http://www.comp.nus.edu.sg/~dbsystem/singa//assets/file/bib-oss.txt>`_).
-
-* W. Wang, G. Chen, T. T. A. Dinh, B. C. Ooi, K.-L.Tan, J. Gao, and S. Wang. `SINGA: putting deep learning in the hands of multimedia users <http://www.comp.nus.edu.sg/~ooibc/singa-mm15.pdf>`_. ACM Multimedia 2015 (`BibTex <http://www.comp.nus.edu.sg/~dbsystem/singa//assets/file/bib-singa.txt>`_, `Slides <files/mm2015.ppt>`_).
-
-.. toctree::
-   :hidden:
-
-   downloads
-   docs
-
-.. toctree::
-   :hidden:
-   :maxdepth: 2
-   :caption: Development
-
-   develop/schedule
-   develop/how-contribute
-   develop/contribute-code
-   develop/contribute-docs
-
-.. toctree::
-   :hidden:
-   :maxdepth: 2
-   :caption: Community
-
-   community/source-repository
-   community/mail-lists
-   community/issue-tracking
-   community/team-list
-
-
-
-License
-----------
-SINGA is released under `Apache License Version 2.0 <http://www.apache.org/licenses/LICENSE-2.0>`_.
-
-Disclaimers
------------
-
-Apache SINGA is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
-

http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/31ae6bd4/doc/make.bat
----------------------------------------------------------------------
diff --git a/doc/make.bat b/doc/make.bat
deleted file mode 100644
index 624a328..0000000
--- a/doc/make.bat
+++ /dev/null
@@ -1,281 +0,0 @@
-@ECHO OFF
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
-)
-set BUILDDIR=_build
-set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
-set I18NSPHINXOPTS=%SPHINXOPTS% .
-if NOT "%PAPER%" == "" (
-	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
-	set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
-)
-
-if "%1" == "" goto help
-
-if "%1" == "help" (
-	:help
-	echo.Please use `make ^<target^>` where ^<target^> is one of
-	echo.  html       to make standalone HTML files
-	echo.  dirhtml    to make HTML files named index.html in directories
-	echo.  singlehtml to make a single large HTML file
-	echo.  pickle     to make pickle files
-	echo.  json       to make JSON files
-	echo.  htmlhelp   to make HTML files and a HTML help project
-	echo.  qthelp     to make HTML files and a qthelp project
-	echo.  devhelp    to make HTML files and a Devhelp project
-	echo.  epub       to make an epub
-	echo.  epub3      to make an epub3
-	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
-	echo.  text       to make text files
-	echo.  man        to make manual pages
-	echo.  texinfo    to make Texinfo files
-	echo.  gettext    to make PO message catalogs
-	echo.  changes    to make an overview over all changed/added/deprecated items
-	echo.  xml        to make Docutils-native XML files
-	echo.  pseudoxml  to make pseudoxml-XML files for display purposes
-	echo.  linkcheck  to check all external links for integrity
-	echo.  doctest    to run all doctests embedded in the documentation if enabled
-	echo.  coverage   to run coverage check of the documentation if enabled
-	echo.  dummy      to check syntax errors of document sources
-	goto end
-)
-
-if "%1" == "clean" (
-	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
-	del /q /s %BUILDDIR%\*
-	goto end
-)
-
-
-REM Check if sphinx-build is available and fallback to Python version if any
-%SPHINXBUILD% 1>NUL 2>NUL
-if errorlevel 9009 goto sphinx_python
-goto sphinx_ok
-
-:sphinx_python
-
-set SPHINXBUILD=python -m sphinx.__init__
-%SPHINXBUILD% 2> nul
-if errorlevel 9009 (
-	echo.
-	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
-	echo.installed, then set the SPHINXBUILD environment variable to point
-	echo.to the full path of the 'sphinx-build' executable. Alternatively you
-	echo.may add the Sphinx directory to PATH.
-	echo.
-	echo.If you don't have Sphinx installed, grab it from
-	echo.http://sphinx-doc.org/
-	exit /b 1
-)
-
-:sphinx_ok
-
-
-if "%1" == "html" (
-	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
-	goto end
-)
-
-if "%1" == "dirhtml" (
-	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
-	goto end
-)
-
-if "%1" == "singlehtml" (
-	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
-	goto end
-)
-
-if "%1" == "pickle" (
-	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the pickle files.
-	goto end
-)
-
-if "%1" == "json" (
-	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the JSON files.
-	goto end
-)
-
-if "%1" == "htmlhelp" (
-	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run HTML Help Workshop with the ^
-.hhp project file in %BUILDDIR%/htmlhelp.
-	goto end
-)
-
-if "%1" == "qthelp" (
-	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run "qcollectiongenerator" with the ^
-.qhcp project file in %BUILDDIR%/qthelp, like this:
-	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Singa.qhcp
-	echo.To view the help file:
-	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Singa.ghc
-	goto end
-)
-
-if "%1" == "devhelp" (
-	%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished.
-	goto end
-)
-
-if "%1" == "epub" (
-	%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The epub file is in %BUILDDIR%/epub.
-	goto end
-)
-
-if "%1" == "epub3" (
-	%SPHINXBUILD% -b epub3 %ALLSPHINXOPTS% %BUILDDIR%/epub3
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The epub3 file is in %BUILDDIR%/epub3.
-	goto end
-)
-
-if "%1" == "latex" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "latexpdf" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	cd %BUILDDIR%/latex
-	make all-pdf
-	cd %~dp0
-	echo.
-	echo.Build finished; the PDF files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "latexpdfja" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	cd %BUILDDIR%/latex
-	make all-pdf-ja
-	cd %~dp0
-	echo.
-	echo.Build finished; the PDF files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "text" (
-	%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The text files are in %BUILDDIR%/text.
-	goto end
-)
-
-if "%1" == "man" (
-	%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The manual pages are in %BUILDDIR%/man.
-	goto end
-)
-
-if "%1" == "texinfo" (
-	%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
-	goto end
-)
-
-if "%1" == "gettext" (
-	%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
-	goto end
-)
-
-if "%1" == "changes" (
-	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.The overview file is in %BUILDDIR%/changes.
-	goto end
-)
-
-if "%1" == "linkcheck" (
-	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Link check complete; look for any errors in the above output ^
-or in %BUILDDIR%/linkcheck/output.txt.
-	goto end
-)
-
-if "%1" == "doctest" (
-	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Testing of doctests in the sources finished, look at the ^
-results in %BUILDDIR%/doctest/output.txt.
-	goto end
-)
-
-if "%1" == "coverage" (
-	%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Testing of coverage in the sources finished, look at the ^
-results in %BUILDDIR%/coverage/python.txt.
-	goto end
-)
-
-if "%1" == "xml" (
-	%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The XML files are in %BUILDDIR%/xml.
-	goto end
-)
-
-if "%1" == "pseudoxml" (
-	%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
-	goto end
-)
-
-if "%1" == "dummy" (
-	%SPHINXBUILD% -b dummy %ALLSPHINXOPTS% %BUILDDIR%/dummy
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. Dummy builder generates no files.
-	goto end
-)
-
-:end