Posted to commits@singa.apache.org by wa...@apache.org on 2015/09/21 04:29:10 UTC

svn commit: r1704209 - /incubator/singa/site/trunk/content/markdown/develop/schedule.md

Author: wangwei
Date: Mon Sep 21 02:29:02 2015
New Revision: 1704209

URL: http://svn.apache.org/viewvc?rev=1704209&view=rev
Log:
update schedule; finished all features for the first release

Modified:
    incubator/singa/site/trunk/content/markdown/develop/schedule.md

Modified: incubator/singa/site/trunk/content/markdown/develop/schedule.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/develop/schedule.md?rev=1704209&r1=1704208&r2=1704209&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/develop/schedule.md (original)
+++ incubator/singa/site/trunk/content/markdown/develop/schedule.md Mon Sep 21 02:29:02 2015
@@ -4,13 +4,14 @@
 | Release | Module| Feature | Status |
 |---------|---------|-------------|--------|
 | 0.1 September    | Neural Network |1.1. Feed forward neural network, including CNN, MLP | done|
-| |          |1.2. RBM-like model, including RBM | testing|
-|         |                |1.3. Recurrent neural network, including standard RNN | working|
+|         |          |1.2. RBM-like model, including RBM | testing|
+|         |                |1.3. Recurrent neural network, including standard RNN | done|
 |         | Architecture   |1.4. One worker group on single node (with data partition)| done|
 |         |                |1.5. Multi worker groups on single node using [Hogwild](http://www.eecs.berkeley.edu/~brecht/papers/hogwildTR.pdf)|done|
-|         |                |1.6. Distributed Hogwild|testing|
+|         |                |1.6. Distributed Hogwild|done|
 |         |                |1.7. Multi groups across nodes, like [Downpour](http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks)|done|
 |         |                |1.8. All-Reduce training architecture like [DeepImage](http://arxiv.org/abs/1501.02876)|done|
+|         |                |1.9. Load-balance among servers | done|
 |         | Failure recovery|1.10. Checkpoint and restore |done|
 |         | Tools|1.11. Installation with GNU auto tools| done|
 |0.2 October  | Neural Network |2.1. Feed forward neural network, including auto-encoders, hinge loss layers, HDFS data layers||
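
For readers unfamiliar with the Hogwild architecture referenced in schedule items 1.5 and 1.6, the following is a minimal, illustrative Python sketch of lock-free parallel SGD in the Hogwild style. It is not SINGA code; the toy problem, variable names, and update rule are all hypothetical and only show the idea of multiple workers updating shared parameters without locks.

    # Illustrative Hogwild-style lock-free SGD (not SINGA code; names are hypothetical).
    # Several threads update a shared parameter vector concurrently without locking,
    # which is the core idea behind the single-node and distributed Hogwild items above.
    import numpy as np
    import threading

    # Toy linear-regression data (assumed for illustration only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    true_w = rng.normal(size=10)
    y = X @ true_w + 0.01 * rng.normal(size=1000)

    w = np.zeros(10)   # shared parameters, updated by all workers without locks
    lr = 0.01          # learning rate

    def worker(rows):
        # Each worker sweeps its subset of samples and writes gradients
        # directly into the shared vector w (in-place, no synchronization).
        for i in rows:
            grad = (X[i] @ w - y[i]) * X[i]   # gradient of squared error on one sample
            w[:] -= lr * grad                 # lock-free in-place update

    # Four workers, each handling an interleaved slice of the data.
    threads = [threading.Thread(target=worker, args=(range(t, 1000, 4),))
               for t in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("parameter error:", np.linalg.norm(w - true_w))

Note that CPython's GIL means these threads do not run truly in parallel; the sketch only demonstrates the lock-free update pattern, in which occasional overwritten updates are tolerated rather than prevented.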