Posted to commits@singa.apache.org by bu...@apache.org on 2015/06/16 04:33:38 UTC

svn commit: r954974 - in /websites/staging/singa/trunk/content: ./ quick-start.html

Author: buildbot
Date: Tue Jun 16 02:33:38 2015
New Revision: 954974

Log:
Staging update by buildbot for singa

Modified:
    websites/staging/singa/trunk/content/   (props changed)
    websites/staging/singa/trunk/content/quick-start.html

Propchange: websites/staging/singa/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Tue Jun 16 02:33:38 2015
@@ -1 +1 @@
-1685692
+1685694

Modified: websites/staging/singa/trunk/content/quick-start.html
==============================================================================
--- websites/staging/singa/trunk/content/quick-start.html (original)
+++ websites/staging/singa/trunk/content/quick-start.html Tue Jun 16 02:33:38 2015
@@ -406,7 +406,10 @@ nworkers_per_procs: 2
 workspace: "examples/cifar10/"
 </pre></div></div>
 <p>The above cluster configuration file specifies two worker groups and one server group. The worker groups run asynchronously but share the memory space for parameter values; in other words, they run the Hogwild algorithm. Since training runs in a single node, we can avoid partitioning the dataset explicitly. Specifically, a random start offset is assigned to each worker group so that the groups do not work on the same mini-batch in every iteration. Consequently, they run as if on different data partitions. The running command is the same:</p>
-<p>./bin/singa-run.sh -model=examples/cifar10/model.conf -cluster=examples/cifar10/cluster.conf</p></div>
+
+<div class="source">
+<div class="source"><pre class="prettyprint">./bin/singa-run.sh -model=examples/cifar10/model.conf -cluster=examples/cifar10/cluster.conf
+</pre></div></div></div>
 <div class="section">
 <h5><a name="Training_with_model_Partitioning"></a>Training with model Partitioning</h5>
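For reference, the cluster configuration the changed page describes might look like the following sketch. Only `nworkers_per_procs: 2` and `workspace: "examples/cifar10/"` appear verbatim in the diff above; the group-count field names (`nworker_groups`, `nserver_groups`, `nworkers_per_group`) are assumptions based on the described setup of two worker groups and one server group.

```
# examples/cifar10/cluster.conf (sketch, not from the diff)
# Two worker groups running asynchronously (Hogwild) and one server group,
# all within a single node.
nworker_groups: 2       # assumed field name
nserver_groups: 1       # assumed field name
nworkers_per_group: 1   # assumed field name
nworkers_per_procs: 2   # verbatim from the diff context
workspace: "examples/cifar10/"
```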