Posted to commits@singa.apache.org by wa...@apache.org on 2019/06/29 14:42:26 UTC

svn commit: r1862313 [6/34] - in /incubator/singa/site/trunk: ./ _sources/ _sources/community/ _sources/docs/ _sources/docs/model_zoo/ _sources/docs/model_zoo/caffe/ _sources/docs/model_zoo/char-rnn/ _sources/docs/model_zoo/cifar10/ _sources/docs/model...

Modified: incubator/singa/site/trunk/docs/loss.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/loss.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/loss.html (original)
+++ incubator/singa/site/trunk/docs/loss.html Sat Jun 29 14:42:24 2019
@@ -104,6 +104,7 @@
 <li class="toctree-l1 current"><a class="reference internal" href="index.html">Documentation</a><ul class="current">
 <li class="toctree-l2"><a class="reference internal" href="installation.html">Installation</a></li>
 <li class="toctree-l2"><a class="reference internal" href="software_stack.html">Software Stack</a></li>
+<li class="toctree-l2"><a class="reference internal" href="benchmark.html">Benchmark for Distributed training</a></li>
 <li class="toctree-l2"><a class="reference internal" href="device.html">Device</a></li>
 <li class="toctree-l2"><a class="reference internal" href="tensor.html">Tensor</a></li>
 <li class="toctree-l2"><a class="reference internal" href="layer.html">Layer</a></li>
@@ -203,8 +204,193 @@
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <div class="section" id="loss">
-<h1>Loss<a class="headerlink" href="#loss" title="Permalink to this headline">¶</a></h1>
+  <div class="section" id="module-singa.loss">
+<span id="loss"></span><h1>Loss<a class="headerlink" href="#module-singa.loss" title="Permalink to this headline">¶</a></h1>
+<p>The loss module includes a set of training loss implementations. Some are
+converted from the C++ implementation, and the rest are implemented directly
+using Python Tensor operations.</p>
+<p>Example usage:</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">tensor</span>
+<span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">loss</span>
+
+<span class="n">x</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">Tensor</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span> <span class="mi">5</span><span class="p">))</span>
+<span class="n">x</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>  <span class="c1"># randomly genearte the prediction activation</span>
+<span class="n">y</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">from_numpy</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">int</span><span class="p">))</span>  <span class="c1"># set the truth</span>
+
+<span class="n">f</span> <span class="o">=</span> <span class="n">loss</span><span class="o">.</span><span class="n">SoftmaxCrossEntropy</span><span class="p">()</span>
+<span class="n">l</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">forward</span><span class="p">(</span><span class="kc">True</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>  <span class="c1"># l is tensor with 3 loss values</span>
+<span class="n">g</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">backward</span><span class="p">()</span>  <span class="c1"># g is a tensor containing all gradients of x w.r.t l</span>
+</pre></div>
+</div>
+<dl class="class">
+<dt id="singa.loss.Loss">
+<em class="property">class </em><code class="descclassname">singa.loss.</code><code class="descname">Loss</code><a class="headerlink" href="#singa.loss.Loss" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p>
+<p>Base loss class.</p>
+<p>Subclasses that wrap the C++ loss classes can use the inherited foward,
+backward, and evaluate functions of this base class. Other subclasses need
+to override these functions</p>
+<dl class="method">
+<dt id="singa.loss.Loss.backward">
+<code class="descname">backward</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.Loss.backward" title="Permalink to this definition">¶</a></dt>
+<dd><dl class="field-list simple">
+<dt class="field-odd">Returns</dt>
+<dd class="field-odd"><p>the grad of x w.r.t. the loss</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.loss.Loss.evaluate">
+<code class="descname">evaluate</code><span class="sig-paren">(</span><em>flag</em>, <em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.Loss.evaluate" title="Permalink to this definition">¶</a></dt>
+<dd><dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>flag</strong> (<em>int</em>) – must be kEval; this argument is to be removed</p></li>
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – the prediction Tensor</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – the ground truth Tensor</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>the averaged loss for all samples in x.</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.loss.Loss.forward">
+<code class="descname">forward</code><span class="sig-paren">(</span><em>flag</em>, <em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.Loss.forward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the loss values.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>flag</strong> – kTrain/kEval or bool. If it is kTrain/True, then the backward
+function must be called before calling forward again.</p></li>
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – the prediction Tensor</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – the ground truth Tensor; x.shape[0] must equal y.shape[0]</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>a tensor of floats for the loss values, one per sample</p>
+</dd>
+</dl>
+</dd></dl>
+
+</dd></dl>
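+<p>The kTrain/True flag implies a strict call order: every training-mode
+forward must be followed by a backward before forward is called again. A
+minimal sketch of one training step, reusing the module example above (the
+parameter-update step is a hypothetical placeholder):</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre>f = loss.SoftmaxCrossEntropy()
+
+def train_step(x, y):
+    l = f.forward(True, x, y)  # training mode; backward() must follow
+    g = f.backward()           # gradient of the loss w.r.t. x
+    # back-propagate g through the network and update parameters here
+    return l
+</pre></div>
+</div>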
+
+<dl class="class">
+<dt id="singa.loss.SigmoidCrossEntropy">
+<em class="property">class </em><code class="descclassname">singa.loss.</code><code class="descname">SigmoidCrossEntropy</code><span class="sig-paren">(</span><em>epsilon=1e-08</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.SigmoidCrossEntropy" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <a class="reference internal" href="#singa.loss.Loss" title="singa.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">singa.loss.Loss</span></code></a></p>
+<p>This loss evaluates the cross-entropy loss between the prediction and the
+truth values with the prediction probability generated from Sigmoid.</p>
+<dl class="method">
+<dt id="singa.loss.SigmoidCrossEntropy.backward">
+<code class="descname">backward</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.SigmoidCrossEntropy.backward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the gradient of the loss w.r.t. x.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Returns</dt>
+<dd class="field-odd"><p>dx = pi - yi.</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.loss.SigmoidCrossEntropy.evaluate">
+<code class="descname">evaluate</code><span class="sig-paren">(</span><em>flag</em>, <em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.SigmoidCrossEntropy.evaluate" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the averaged error.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Returns</dt>
+<dd class="field-odd"><p>a float value as the averaged error</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.loss.SigmoidCrossEntropy.forward">
+<code class="descname">forward</code><span class="sig-paren">(</span><em>flag</em>, <em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.SigmoidCrossEntropy.forward" title="Permalink to this definition">¶</a></dt>
+<dd><p>The loss is -yi * log(pi) - (1-yi) * log(1-pi), where pi = sigmoid(xi).</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>flag</strong> (<em>bool</em>) – true for training; false for evaluation</p></li>
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – the prediction Tensor</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – the truth Tensor, a binary array value per sample</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>a Tensor with one error value per sample</p>
+</dd>
+</dl>
+</dd></dl>
+
+</dd></dl>
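+<p>A minimal sketch for this loss; the binary target rows are made-up
+illustration values, and we assume the bool training flag described in the
+base class is accepted:</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre>import numpy as np
+from singa import tensor
+from singa import loss
+
+x = tensor.Tensor((3, 5))
+x.uniform(-1, 1)  # raw scores; sigmoid is applied inside the loss
+# one binary array per sample marking the active labels
+y = tensor.from_numpy(np.array([[1, 0, 0, 1, 0],
+                                [0, 1, 0, 0, 1],
+                                [0, 0, 1, 0, 0]], dtype=np.float32))
+
+f = loss.SigmoidCrossEntropy()
+l = f.forward(True, x, y)  # one loss value per sample
+g = f.backward()           # gradient dx = pi - yi
+</pre></div>
+</div>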
+
+<dl class="class">
+<dt id="singa.loss.SoftmaxCrossEntropy">
+<em class="property">class </em><code class="descclassname">singa.loss.</code><code class="descname">SoftmaxCrossEntropy</code><a class="headerlink" href="#singa.loss.SoftmaxCrossEntropy" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <a class="reference internal" href="#singa.loss.Loss" title="singa.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">singa.loss.Loss</span></code></a></p>
+<p>This loss function is a combination of SoftMax and Cross-Entropy loss.</p>
+<p>It converts the inputs via SoftMax function and then
+computes the cross-entropy loss against the ground truth values.</p>
+<p>For each sample, the ground truth could be an integer as the label index,
+or a binary array indicating the label distribution. The ground truth
+tensor thus could be a 1d or 2d tensor.
+The data/feature tensor could be 1d (for a single sample) or 2d for a batch of
+samples.</p>
+</dd></dl>
+
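+<p>The module-level example above uses integer labels; a minimal sketch with a
+2d ground-truth distribution instead (the probability rows are made-up
+illustration values):</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre>import numpy as np
+from singa import tensor
+from singa import loss
+
+x = tensor.Tensor((2, 4))
+x.uniform(0, 1)  # raw scores; SoftMax is applied inside the loss
+# one probability distribution over the 4 labels per sample
+y = tensor.from_numpy(np.array([[0.7, 0.1, 0.1, 0.1],
+                                [0.0, 0.5, 0.5, 0.0]], dtype=np.float32))
+
+f = loss.SoftmaxCrossEntropy()
+l = f.forward(True, x, y)  # one loss value per sample
+g = f.backward()           # gradient of the loss w.r.t. x
+</pre></div>
+</div>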
+<dl class="class">
+<dt id="singa.loss.SquaredError">
+<em class="property">class </em><code class="descclassname">singa.loss.</code><code class="descname">SquaredError</code><a class="headerlink" href="#singa.loss.SquaredError" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <a class="reference internal" href="#singa.loss.Loss" title="singa.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">singa.loss.Loss</span></code></a></p>
+<p>This loss evaluates the squared error between the prediction and the
+truth values.</p>
+<p>It is implemented using Python Tensor operations.</p>
+<dl class="method">
+<dt id="singa.loss.SquaredError.backward">
+<code class="descname">backward</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.SquaredError.backward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the gradient of the error w.r.t. x.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Returns</dt>
+<dd class="field-odd"><p>x - y</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.loss.SquaredError.evaluate">
+<code class="descname">evaluate</code><span class="sig-paren">(</span><em>flag</em>, <em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.SquaredError.evaluate" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the averaged error.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Returns</dt>
+<dd class="field-odd"><p>a float value as the averaged error</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.loss.SquaredError.forward">
+<code class="descname">forward</code><span class="sig-paren">(</span><em>flag</em>, <em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.loss.SquaredError.forward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the error as 0.5 * ||x-y||^2.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>flag</strong> (<em>int</em>) – kTrain or kEval; if kTrain, then backward must be
+called before calling forward again.</p></li>
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – the prediction Tensor</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – the ground truth Tensor, which must have the same
+shape as x</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>a Tensor with one error value per sample</p>
+</dd>
+</dl>
+</dd></dl>
+
+</dd></dl>
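+<p>A regression-style sketch for this loss; shapes are illustrative, and we
+assume the bool training flag form described in the base class is accepted in
+place of kTrain:</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre>from singa import tensor
+from singa import loss
+
+x = tensor.Tensor((3, 5))
+x.uniform(0, 1)            # predicted values
+y = tensor.Tensor((3, 5))
+y.uniform(0, 1)            # ground truth with the same shape as x
+
+f = loss.SquaredError()
+l = f.forward(True, x, y)  # 0.5 * ||x - y||^2 per sample
+g = f.backward()           # gradient x - y
+</pre></div>
+</div>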
+
 </div>
 
 

Modified: incubator/singa/site/trunk/docs/metric.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/metric.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/metric.html (original)
+++ incubator/singa/site/trunk/docs/metric.html Sat Jun 29 14:42:24 2019
@@ -104,6 +104,7 @@
 <li class="toctree-l1 current"><a class="reference internal" href="index.html">Documentation</a><ul class="current">
 <li class="toctree-l2"><a class="reference internal" href="installation.html">Installation</a></li>
 <li class="toctree-l2"><a class="reference internal" href="software_stack.html">Software Stack</a></li>
+<li class="toctree-l2"><a class="reference internal" href="benchmark.html">Benchmark for Distributed training</a></li>
 <li class="toctree-l2"><a class="reference internal" href="device.html">Device</a></li>
 <li class="toctree-l2"><a class="reference internal" href="tensor.html">Tensor</a></li>
 <li class="toctree-l2"><a class="reference internal" href="layer.html">Layer</a></li>
@@ -203,8 +204,163 @@
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <div class="section" id="metric">
-<h1>Metric<a class="headerlink" href="#metric" title="Permalink to this headline">¶</a></h1>
+  <div class="section" id="module-singa.metric">
+<span id="metric"></span><h1>Metric<a class="headerlink" href="#module-singa.metric" title="Permalink to this headline">¶</a></h1>
+<p>This module includes a set of metric classes for evaluating the model’s
+performance. The specific metric classes can be converted from the C++
+implementation or implemented directly in Python.</p>
+<p>Example usage:</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">tensor</span>
+<span class="kn">from</span> <span class="nn">singa</span> <span class="k">import</span> <span class="n">metric</span>
+
+<span class="n">x</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">Tensor</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span> <span class="mi">5</span><span class="p">))</span>
+<span class="n">x</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>  <span class="c1"># randomly genearte the prediction activation</span>
+<span class="n">x</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">SoftMax</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>  <span class="c1"># normalize the prediction into probabilities</span>
+<span class="n">y</span> <span class="o">=</span> <span class="n">tensor</span><span class="o">.</span><span class="n">from_numpy</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">int</span><span class="p">))</span>  <span class="c1"># set the truth</span>
+
+<span class="n">f</span> <span class="o">=</span> <span class="n">metric</span><span class="o">.</span><span class="n">Accuracy</span><span class="p">()</span>
+<span class="n">acc</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>  <span class="c1"># averaged accuracy over all 3 samples in x</span>
+</pre></div>
+</div>
+<dl class="class">
+<dt id="singa.metric.Metric">
+<em class="property">class </em><code class="descclassname">singa.metric.</code><code class="descname">Metric</code><a class="headerlink" href="#singa.metric.Metric" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p>
+<p>Base metric class.</p>
+<p>Subclasses that wrap the C++ metric classes can use the inherited forward
+and evaluate functions of this base class. Other subclasses need
+to override these functions. Users need to feed in the <strong>predictions</strong> and
+ground truth to get the metric values.</p>
+<dl class="method">
+<dt id="singa.metric.Metric.forward">
+<code class="descname">forward</code><span class="sig-paren">(</span><em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.metric.Metric.forward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the metric for each sample.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – predictions, one row per sample</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – ground truth values, one row per sample</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>a tensor of floats, one per sample</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.metric.Metric.evaluate">
+<code class="descname">evaluate</code><span class="sig-paren">(</span><em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.metric.Metric.evaluate" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the averaged metric over all samples.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – predictions, one row per sample</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – ground truth values, one row per sample</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>a float value for the averaged metric</p>
+</dd>
+</dl>
+</dd></dl>
+
+</dd></dl>
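+<p>Continuing the module example above: forward returns the per-sample values,
+while evaluate returns their average. A short sketch:</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre>f = metric.Accuracy()
+per_sample = f.forward(x, y)  # a tensor with one value per sample
+averaged = f.evaluate(x, y)   # a single float
+</pre></div>
+</div>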
+
+<dl class="class">
+<dt id="singa.metric.Accuracy">
+<em class="property">class </em><code class="descclassname">singa.metric.</code><code class="descname">Accuracy</code><a class="headerlink" href="#singa.metric.Accuracy" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <a class="reference internal" href="#singa.metric.Metric" title="singa.metric.Metric"><code class="xref py py-class docutils literal notranslate"><span class="pre">singa.metric.Metric</span></code></a></p>
+<p>Compute the top-1 accuracy for single-label prediction tasks.</p>
+<p>It calls the C++ functions to do the calculation.</p>
+</dd></dl>
+
+<dl class="class">
+<dt id="singa.metric.Precision">
+<em class="property">class </em><code class="descclassname">singa.metric.</code><code class="descname">Precision</code><span class="sig-paren">(</span><em>top_k</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.metric.Precision" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <a class="reference internal" href="#singa.metric.Metric" title="singa.metric.Metric"><code class="xref py py-class docutils literal notranslate"><span class="pre">singa.metric.Metric</span></code></a></p>
+<p>Take the top-k labels with the highest probabilities as the prediction.</p>
+<p>Compute the precision against the ground truth labels.</p>
+<dl class="method">
+<dt id="singa.metric.Precision.forward">
+<code class="descname">forward</code><span class="sig-paren">(</span><em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.metric.Precision.forward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the precision for each sample.</p>
+<p>The tensors are converted to numpy arrays for the computation.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – predictions, one row per sample</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – ground truth labels, one row per sample</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>a tensor of floats, one per sample</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.metric.Precision.evaluate">
+<code class="descname">evaluate</code><span class="sig-paren">(</span><em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.metric.Precision.evaluate" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the averaged precision over all samples.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – predictions, one row per sample</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – ground truth values, one row per sample</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>a float value for the averaged metric</p>
+</dd>
+</dl>
+</dd></dl>
+
+</dd></dl>
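+<p>A minimal sketch for top-2 precision; the ground truth rows are made-up
+illustration values, under the assumption that each row marks the relevant
+labels of a sample with 1:</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre>import numpy as np
+from singa import tensor
+from singa import metric
+
+x = tensor.Tensor((3, 5))
+x.uniform(0, 1)  # prediction scores, one row per sample
+y = tensor.from_numpy(np.array([[1, 0, 1, 0, 0],
+                                [0, 1, 0, 0, 1],
+                                [1, 0, 0, 1, 0]], dtype=np.int32))
+
+f = metric.Precision(top_k=2)
+prec = f.evaluate(x, y)  # averaged precision over the 3 samples
+</pre></div>
+</div>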
+
+<dl class="class">
+<dt id="singa.metric.Recall">
+<em class="property">class </em><code class="descclassname">singa.metric.</code><code class="descname">Recall</code><span class="sig-paren">(</span><em>top_k</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.metric.Recall" title="Permalink to this definition">¶</a></dt>
+<dd><p>Bases: <a class="reference internal" href="#singa.metric.Metric" title="singa.metric.Metric"><code class="xref py py-class docutils literal notranslate"><span class="pre">singa.metric.Metric</span></code></a></p>
+<p>Take the top-k labels with the highest probabilities as the prediction.</p>
+<p>Compute the recall against the ground truth labels.</p>
+<dl class="method">
+<dt id="singa.metric.Recall.forward">
+<code class="descname">forward</code><span class="sig-paren">(</span><em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.metric.Recall.forward" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the recall for each sample.</p>
+<p>The tensors are converted to numpy arrays for the computation.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – predictions, one row per sample</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – ground truth labels, one row per sample</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>a tensor of floats, one per sample</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="method">
+<dt id="singa.metric.Recall.evaluate">
+<code class="descname">evaluate</code><span class="sig-paren">(</span><em>x</em>, <em>y</em><span class="sig-paren">)</span><a class="headerlink" href="#singa.metric.Recall.evaluate" title="Permalink to this definition">¶</a></dt>
+<dd><p>Compute the averaged recall over all samples.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>x</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – predictions, one row per sample</p></li>
+<li><p><strong>y</strong> (<a class="reference internal" href="tensor.html#singa.tensor.Tensor" title="singa.tensor.Tensor"><em>Tensor</em></a>) – ground truth values, one row per sample</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>a float value for the averaged metric</p>
+</dd>
+</dl>
+</dd></dl>
+
+</dd></dl>
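+<p>Recall is used the same way; a minimal sketch under the same assumptions as
+the precision example above:</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre>f = metric.Recall(top_k=2)
+rec = f.evaluate(x, y)   # averaged recall over the 3 samples
+vals = f.forward(x, y)   # per-sample recall values
+</pre></div>
+</div>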
+
 </div>
 
 

Modified: incubator/singa/site/trunk/docs/model_zoo/caffe/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/caffe/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/caffe/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/caffe/README.html Sat Jun 29 14:42:24 2019
@@ -180,24 +180,7 @@
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
---><div class="section" id="use-parameters-pre-trained-from-caffe-in-singa">
+  <div class="section" id="use-parameters-pre-trained-from-caffe-in-singa">
 <h1>Use parameters pre-trained from Caffe in SINGA<a class="headerlink" href="#use-parameters-pre-trained-from-caffe-in-singa" title="Permalink to this headline">¶</a></h1>
 <p>In this example, we use SINGA to load the VGG parameters trained by Caffe to do image classification.</p>
 <div class="section" id="run-this-example">
@@ -207,12 +190,12 @@ The script does the following work.</p>
 <div class="section" id="obtain-the-caffe-model">
 <h3>Obtain the Caffe model<a class="headerlink" href="#obtain-the-caffe-model" title="Permalink to this headline">¶</a></h3>
 <ul class="simple">
-<li>Download caffe model prototxt and parameter binary file.</li>
-<li>Currently we only support the latest caffe format, if your model is in
+<li><p>Download the Caffe model prototxt and parameter binary file.</p></li>
+<li><p>Currently we only support the latest Caffe format; if your model is in a
 previous version of Caffe, please update it to the current format. (This is
-supported by caffe)</li>
-<li>After updating, we can obtain two files, i.e., the prototxt and parameter
-binary file.</li>
+supported by Caffe.)</p></li>
+<li><p>After updating, we can obtain two files, i.e., the prototxt and parameter
+binary file.</p></li>
 </ul>
 </div>
 <div class="section" id="prepare-test-images">

Modified: incubator/singa/site/trunk/docs/model_zoo/char-rnn/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/char-rnn/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/char-rnn/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/char-rnn/README.html Sat Jun 29 14:42:24 2019
@@ -36,7 +36,7 @@
   <link rel="stylesheet" href="../../../_static/pygments.css" type="text/css" />
     <link rel="index" title="Index" href="../../../genindex.html" />
     <link rel="search" title="Search" href="../../../search.html" />
-    <link rel="next" title="Train a RBM model against MNIST dataset" href="../mnist/README.html" />
+    <link rel="next" title="Train AlexNet over ImageNet" href="../imagenet/alexnet/README.html" />
     <link rel="prev" title="Train CNN over Cifar-10" href="../cifar10/README.html" />
     <link href="../../../_static/style.css" rel="stylesheet" type="text/css">
     <!--link href="../../../_static/fontawesome-all.min.css" rel="stylesheet" type="text/css"-->
@@ -104,6 +104,7 @@
 <li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Documentation</a><ul class="current">
 <li class="toctree-l2"><a class="reference internal" href="../../installation.html">Installation</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../software_stack.html">Software Stack</a></li>
+<li class="toctree-l2"><a class="reference internal" href="../../benchmark.html">Benchmark for Distributed training</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../device.html">Device</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../tensor.html">Tensor</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../layer.html">Layer</a></li>
@@ -124,13 +125,7 @@
 <li class="toctree-l4"><a class="reference internal" href="#instructions">Instructions</a></li>
 </ul>
 </li>
-<li class="toctree-l3"><a class="reference internal" href="../mnist/README.html">Train a RBM model against MNIST dataset</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../imagenet/alexnet/README.html">Train AlexNet over ImageNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/densenet/README.html">name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/densenet/README.html#image-classification-using-densenet">Image Classification using DenseNet</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../imagenet/googlenet/README.html">name: GoogleNet on ImageNet
 SINGA version: 1.0.1
 SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
@@ -138,24 +133,6 @@ parameter_url: https://s3-ap-southeast-1
 parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
 license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../imagenet/googlenet/README.html#image-classification-using-googlenet">Image Classification using GoogleNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/inception/README.html">name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/inception/README.html#image-classification-using-inception-v4">Image Classification using Inception V4</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/resnet/README.html">name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/resnet/README.html#image-classification-using-residual-networks">Image Classification using Residual Networks</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/vgg/README.html">name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/vgg/README.html#image-classification-using-vgg">Image Classification using VGG</a></li>
 </ul>
 </li>
 </ul>
@@ -244,24 +221,7 @@ license: https://github.com/pytorch/visi
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
---><div class="section" id="train-char-rnn-over-plain-text">
+  <div class="section" id="train-char-rnn-over-plain-text">
 <h1>Train Char-RNN over plain text<a class="headerlink" href="#train-char-rnn-over-plain-text" title="Permalink to this headline">¶</a></h1>
 <p>Recurrent neural networks (RNN) are widely used for modelling sequential data,
 e.g., natural language sentences. This example describes how to implement a RNN
@@ -274,12 +234,10 @@ generate meaningful code from the model.
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Compile and install SINGA. Currently the RNN implementation depends on Cudnn with version &gt;= 5.05.</p>
-</li>
-<li><p class="first">Prepare the dataset. Download the <a class="reference external" href="http://cs.stanford.edu/people/karpathy/char-rnn/">kernel source code</a>.
-Other plain text files can also be used.</p>
-</li>
-<li><p class="first">Start the training,</p>
+<li><p>Compile and install SINGA. Currently the RNN implementation depends on cuDNN version &gt;= 5.05.</p></li>
+<li><p>Prepare the dataset. Download the <a class="reference external" href="http://cs.stanford.edu/people/karpathy/char-rnn/">kernel source code</a>.
+Other plain text files can also be used.</p></li>
+<li><p>Start the training,</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  <span class="n">python</span> <span class="n">train</span><span class="o">.</span><span class="n">py</span> <span class="n">linux_input</span><span class="o">.</span><span class="n">txt</span>
 </pre></div>
 </div>
@@ -288,7 +246,7 @@ Other plain text files can also be used.
 </pre></div>
 </div>
 </li>
-<li><p class="first">Sample characters from the model by providing the number of characters to sample and the seed string.</p>
+<li><p>Sample characters from the model by providing the number of characters to sample and the seed string.</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  <span class="n">python</span> <span class="n">sample</span><span class="o">.</span><span class="n">py</span> <span class="s1">&#39;model.bin&#39;</span> <span class="mi">100</span> <span class="o">--</span><span class="n">seed</span> <span class="s1">&#39;#include &lt;std&#39;</span>
 </pre></div>
 </div>
@@ -306,7 +264,7 @@ Other plain text files can also be used.
   
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="../mnist/README.html" class="btn btn-neutral float-right" title="Train a RBM model against MNIST dataset" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="../imagenet/alexnet/README.html" class="btn btn-neutral float-right" title="Train AlexNet over ImageNet" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
         <a href="../cifar10/README.html" class="btn btn-neutral float-left" title="Train CNN over Cifar-10" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>

Modified: incubator/singa/site/trunk/docs/model_zoo/cifar10/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/cifar10/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/cifar10/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/cifar10/README.html Sat Jun 29 14:42:24 2019
@@ -104,6 +104,7 @@
 <li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Documentation</a><ul class="current">
 <li class="toctree-l2"><a class="reference internal" href="../../installation.html">Installation</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../software_stack.html">Software Stack</a></li>
+<li class="toctree-l2"><a class="reference internal" href="../../benchmark.html">Benchmark for Distributed training</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../device.html">Device</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../tensor.html">Tensor</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../layer.html">Layer</a></li>
@@ -124,13 +125,7 @@
 </ul>
 </li>
 <li class="toctree-l3"><a class="reference internal" href="../char-rnn/README.html">Train Char-RNN over plain text</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../mnist/README.html">Train a RBM model against MNIST dataset</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../imagenet/alexnet/README.html">Train AlexNet over ImageNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/densenet/README.html">name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/densenet/README.html#image-classification-using-densenet">Image Classification using DenseNet</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../imagenet/googlenet/README.html">name: GoogleNet on ImageNet
 SINGA version: 1.0.1
 SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
@@ -138,24 +133,6 @@ parameter_url: https://s3-ap-southeast-1
 parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
 license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../imagenet/googlenet/README.html#image-classification-using-googlenet">Image Classification using GoogleNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/inception/README.html">name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/inception/README.html#image-classification-using-inception-v4">Image Classification using Inception V4</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/resnet/README.html">name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/resnet/README.html#image-classification-using-residual-networks">Image Classification using Residual Networks</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/vgg/README.html">name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../imagenet/vgg/README.html#image-classification-using-vgg">Image Classification using VGG</a></li>
 </ul>
 </li>
 </ul>
@@ -244,34 +221,17 @@ license: https://github.com/pytorch/visi
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
---><div class="section" id="train-cnn-over-cifar-10">
+  <div class="section" id="train-cnn-over-cifar-10">
 <h1>Train CNN over Cifar-10<a class="headerlink" href="#train-cnn-over-cifar-10" title="Permalink to this headline">¶</a></h1>
 <p>Convolution neural network (CNN) is a type of feed-forward artificial neural
 network widely used for image and video classification. In this example, we
 will train three deep CNN models to do image classification for the CIFAR-10 dataset,</p>
 <ol class="simple">
-<li><a class="reference external" href="https://code.google.com/p/cuda-convnet/source/browse/trunk/example-layers/layers-18pct.cfg">AlexNet</a>
-the best validation accuracy (without data augmentation) we achieved was about 82%.</li>
-<li><a class="reference external" href="http://torch.ch/blog/2015/07/30/cifar.html">VGGNet</a>, the best validation accuracy (without data augmentation) we achieved was about 89%.</li>
-<li><a class="reference external" href="https://github.com/facebook/fb.resnet.torch">ResNet</a>, the best validation accuracy (without data augmentation) we achieved was about 83%.</li>
-<li><a class="reference external" href="https://github.com/BVLC/caffe/tree/master/examples/cifar10">Alexnet from Caffe</a>, SINGA is able to convert model from Caffe seamlessly.</li>
+<li><p><a class="reference external" href="https://code.google.com/p/cuda-convnet/source/browse/trunk/example-layers/layers-18pct.cfg">AlexNet</a>
+the best validation accuracy (without data augmentation) we achieved was about 82%.</p></li>
+<li><p><a class="reference external" href="http://torch.ch/blog/2015/07/30/cifar.html">VGGNet</a>, the best validation accuracy (without data augmentation) we achieved was about 89%.</p></li>
+<li><p><a class="reference external" href="https://github.com/facebook/fb.resnet.torch">ResNet</a>, the best validation accuracy (without data augmentation) we achieved was about 83%.</p></li>
+<li><p><a class="reference external" href="https://github.com/BVLC/caffe/tree/master/examples/cifar10">Alexnet from Caffe</a>, SINGA is able to convert model from Caffe seamlessly.</p></li>
 </ol>
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
@@ -296,7 +256,7 @@ are required. Please refer to the instal
 <h3>Training<a class="headerlink" href="#training" title="Permalink to this headline">¶</a></h3>
 <p>There are four training programs</p>
 <ol>
-<li><p class="first">train.py. The following command would train the VGG model using the python
+<li><p>train.py. The following command would train the VGG model using the Python
 version of the Cifar-10 dataset in ‘cifar-10-batches-py’ folder.</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span> <span class="n">python</span> <span class="n">train</span><span class="o">.</span><span class="n">py</span> <span class="n">vgg</span> <span class="n">cifar</span><span class="o">-</span><span class="mi">10</span><span class="o">-</span><span class="n">batches</span><span class="o">-</span><span class="n">py</span>
 </pre></div>
@@ -309,20 +269,19 @@ argument</p>
 </pre></div>
 </div>
 </li>
-<li><p class="first">alexnet.cc. It trains the AlexNet model using the CPP APIs on a CudaGPU,</p>
+<li><p>alexnet.cc. It trains the AlexNet model using the CPP APIs on a CudaGPU,</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span> <span class="o">./</span><span class="n">run</span><span class="o">.</span><span class="n">sh</span>
 </pre></div>
 </div>
 </li>
-<li><p class="first">alexnet-parallel.cc. It trains the AlexNet model using the CPP APIs on two CudaGPU devices.
+<li><p>alexnet-parallel.cc. It trains the AlexNet model using the CPP APIs on two CudaGPU devices.
 The two devices run synchronously to compute the gradients of the model parameters, which are
 averaged on the host CPU device and then applied to update the parameters.</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span> <span class="o">./</span><span class="n">run</span><span class="o">-</span><span class="n">parallel</span><span class="o">.</span><span class="n">sh</span>
 </pre></div>
 </div>
 </li>
-<li><p class="first">vgg-parallel.cc. It trains the VGG model using the CPP APIs on two CudaGPU devices similar to alexnet-parallel.cc.</p>
-</li>
+<li><p>vgg-parallel.cc. It trains the VGG model using the CPP APIs on two CudaGPU devices similar to alexnet-parallel.cc.</p></li>
 </ol>
 </div>
 <div class="section" id="prediction">

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/caffe/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/caffe/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/caffe/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/caffe/README.html Sat Jun 29 14:42:24 2019
@@ -207,12 +207,12 @@ The script does the following work.</p>
 <div class="section" id="obtain-the-caffe-model">
 <h3>Obtain the Caffe model<a class="headerlink" href="#obtain-the-caffe-model" title="Permalink to this headline">¶</a></h3>
 <ul class="simple">
-<li>Download caffe model prototxt and parameter binary file.</li>
-<li>Currently we only support the latest caffe format, if your model is in
+<li><p>Download the Caffe model prototxt and parameter binary file.</p></li>
+<li><p>Currently we only support the latest Caffe format; if your model is in a
 previous version of Caffe, please update it to the current format. (This is
-supported by caffe)</li>
-<li>After updating, we can obtain two files, i.e., the prototxt and parameter
-binary file.</li>
+supported by Caffe.)</p></li>
+<li><p>After updating, we can obtain two files, i.e., the prototxt and parameter
+binary file.</p></li>
 </ul>
 </div>
 <div class="section" id="prepare-test-images">

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/char-rnn/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/char-rnn/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/char-rnn/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/char-rnn/README.html Sat Jun 29 14:42:24 2019
@@ -210,12 +210,10 @@ generate meaningful code from the model.
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Compile and install SINGA. Currently the RNN implementation depends on Cudnn with version &gt;= 5.05.</p>
-</li>
-<li><p class="first">Prepare the dataset. Download the <a class="reference external" href="http://cs.stanford.edu/people/karpathy/char-rnn/">kernel source code</a>.
-Other plain text files can also be used.</p>
-</li>
-<li><p class="first">Start the training,</p>
+<li><p>Compile and install SINGA. Currently the RNN implementation depends on cuDNN version &gt;= 5.05.</p></li>
+<li><p>Prepare the dataset. Download the <a class="reference external" href="http://cs.stanford.edu/people/karpathy/char-rnn/">kernel source code</a>.
+Other plain text files can also be used.</p></li>
+<li><p>Start the training,</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  <span class="n">python</span> <span class="n">train</span><span class="o">.</span><span class="n">py</span> <span class="n">linux_input</span><span class="o">.</span><span class="n">txt</span>
 </pre></div>
 </div>
@@ -224,7 +222,7 @@ Other plain text files can also be used.
 </pre></div>
 </div>
 </li>
-<li><p class="first">Sample characters from the model by providing the number of characters to sample and the seed string.</p>
+<li><p>Sample characters from the model by providing the number of characters to sample and the seed string.</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  <span class="n">python</span> <span class="n">sample</span><span class="o">.</span><span class="n">py</span> <span class="s1">&#39;model.bin&#39;</span> <span class="mi">100</span> <span class="o">--</span><span class="n">seed</span> <span class="s1">&#39;#include &lt;std&#39;</span>
 </pre></div>
 </div>

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/cifar10/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/cifar10/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/cifar10/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/cifar10/README.html Sat Jun 29 14:42:24 2019
@@ -203,11 +203,11 @@
 network widely used for image and video classification. In this example, we
 will train three deep CNN models to do image classification for the CIFAR-10 dataset,</p>
 <ol class="simple">
-<li><a class="reference external" href="https://code.google.com/p/cuda-convnet/source/browse/trunk/example-layers/layers-18pct.cfg">AlexNet</a>
-the best validation accuracy (without data augmentation) we achieved was about 82%.</li>
-<li><a class="reference external" href="http://torch.ch/blog/2015/07/30/cifar.html">VGGNet</a>, the best validation accuracy (without data augmentation) we achieved was about 89%.</li>
-<li><a class="reference external" href="https://github.com/facebook/fb.resnet.torch">ResNet</a>, the best validation accuracy (without data augmentation) we achieved was about 83%.</li>
-<li><a class="reference external" href="https://github.com/BVLC/caffe/tree/master/examples/cifar10">Alexnet from Caffe</a>, SINGA is able to convert model from Caffe seamlessly.</li>
+<li><p><a class="reference external" href="https://code.google.com/p/cuda-convnet/source/browse/trunk/example-layers/layers-18pct.cfg">AlexNet</a>
+the best validation accuracy (without data augmentation) we achieved was about 82%.</p></li>
+<li><p><a class="reference external" href="http://torch.ch/blog/2015/07/30/cifar.html">VGGNet</a>, the best validation accuracy (without data augmentation) we achieved was about 89%.</p></li>
+<li><p><a class="reference external" href="https://github.com/facebook/fb.resnet.torch">ResNet</a>, the best validation accuracy (without data augmentation) we achieved was about 83%.</p></li>
+<li><p><a class="reference external" href="https://github.com/BVLC/caffe/tree/master/examples/cifar10">Alexnet from Caffe</a>, SINGA is able to convert model from Caffe seamlessly.</p></li>
 </ol>
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
@@ -232,7 +232,7 @@ are required. Please refer to the instal
 <h3>Training<a class="headerlink" href="#training" title="Permalink to this headline">¶</a></h3>
 <p>There are four training programs</p>
 <ol>
-<li><p class="first">train.py. The following command would train the VGG model using the python
+<li><p>train.py. The following command would train the VGG model using the Python
 version of the Cifar-10 dataset in ‘cifar-10-batches-py’ folder.</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span> <span class="n">python</span> <span class="n">train</span><span class="o">.</span><span class="n">py</span> <span class="n">vgg</span> <span class="n">cifar</span><span class="o">-</span><span class="mi">10</span><span class="o">-</span><span class="n">batches</span><span class="o">-</span><span class="n">py</span>
 </pre></div>
@@ -245,20 +245,19 @@ argument</p>
 </pre></div>
 </div>
 </li>
-<li><p class="first">alexnet.cc. It trains the AlexNet model using the CPP APIs on a CudaGPU,</p>
+<li><p>alexnet.cc. It trains the AlexNet model using the CPP APIs on a CudaGPU,</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span> <span class="o">./</span><span class="n">run</span><span class="o">.</span><span class="n">sh</span>
 </pre></div>
 </div>
 </li>
-<li><p class="first">alexnet-parallel.cc. It trains the AlexNet model using the CPP APIs on two CudaGPU devices.
+<li><p>alexnet-parallel.cc. It trains the AlexNet model using the CPP APIs on two CudaGPU devices.
 The two devices run synchronously to compute the gradients of the model parameters, which are
 averaged on the host CPU device and then applied to update the parameters.</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span> <span class="o">./</span><span class="n">run</span><span class="o">-</span><span class="n">parallel</span><span class="o">.</span><span class="n">sh</span>
 </pre></div>
 </div>
 </li>
-<li><p class="first">vgg-parallel.cc. It trains the VGG model using the CPP APIs on two CudaGPU devices similar to alexnet-parallel.cc.</p>
-</li>
+<li><p>vgg-parallel.cc. It trains the VGG model using the CPP APIs on two CudaGPU devices, similarly to alexnet-parallel.cc.</p></li>
 </ol>
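+<p>For reference, the snippet below is a minimal, hedged sketch of how one of the
+python-version batch files can be loaded. It assumes the standard CIFAR-10 pickle
+layout (a dict with <code class="docutils literal notranslate"><span class="pre">b'data'</span></code> and <code class="docutils literal notranslate"><span class="pre">b'labels'</span></code> entries); it is not the loader used by train.py.</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># hedged sketch: load one batch of the python version of CIFAR-10,
+# assuming the standard pickle layout; not the loader used by train.py
+import pickle
+import numpy as np
+
+def load_batch(path):
+    with open(path, 'rb') as f:
+        batch = pickle.load(f, encoding='bytes')
+    # 10000 x 3072 uint8 rows, stored as three 32x32 channel planes
+    images = batch[b'data'].reshape(-1, 3, 32, 32).astype(np.float32)
+    labels = np.asarray(batch[b'labels'], dtype=np.int32)
+    return images, labels
+
+images, labels = load_batch('cifar-10-batches-py/data_batch_1')
+print(images.shape, labels.shape)  # (10000, 3, 32, 32) (10000,)
+</pre></div>
+</div>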
 </div>
 <div class="section" id="prediction">

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/alexnet/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/alexnet/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/alexnet/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/alexnet/README.html Sat Jun 29 14:42:24 2019
@@ -214,17 +214,17 @@ options in CMakeLists.txt or run <code c
 <div class="section" id="data-download">
 <h3>Data download<a class="headerlink" href="#data-download" title="Permalink to this headline">¶</a></h3>
 <ul class="simple">
-<li>Please refer to step1-3 on <a class="reference external" href="https://github.com/amd/OpenCL-caffe/wiki/Instructions-to-create-ImageNet-2012-data">Instructions to create ImageNet 2012 data</a>
-to download and decompress the data.</li>
-<li>You can download the training and validation list by
+<li><p>Please refer to steps 1-3 of <a class="reference external" href="https://github.com/amd/OpenCL-caffe/wiki/Instructions-to-create-ImageNet-2012-data">Instructions to create ImageNet 2012 data</a>
+to download and decompress the data.</p></li>
+<li><p>You can download the training and validation lists via
 <a class="reference external" href="https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh">get_ilsvrc_aux.sh</a>
-or from <a class="reference external" href="http://www.image-net.org/download-images">Imagenet</a>.</li>
+or from <a class="reference external" href="http://www.image-net.org/download-images">ImageNet</a>.</p></li>
 </ul>
 </div>
 <div class="section" id="data-preprocessing">
 <h3>Data preprocessing<a class="headerlink" href="#data-preprocessing" title="Permalink to this headline">¶</a></h3>
 <ul>
-<li><p class="first">Assuming you have downloaded the data and the list.
+<li><p>Assuming you have downloaded the data and the lists,
 we can now transform the data into binary files. You can run:</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>    <span class="n">sh</span> <span class="n">create_data</span><span class="o">.</span><span class="n">sh</span>
 </pre></div>
@@ -232,15 +232,15 @@ Now we should transform the data into bi
 <p>The script will generate a test file (<code class="docutils literal notranslate"><span class="pre">test.bin</span></code>), a mean file (<code class="docutils literal notranslate"><span class="pre">mean.bin</span></code>), and
 several training files (<code class="docutils literal notranslate"><span class="pre">trainX.bin</span></code>) in the specified output folder.</p>
 </li>
-<li><p class="first">You can also change the parameters in <code class="docutils literal notranslate"><span class="pre">create_data.sh</span></code>.</p>
+<li><p>You can also change the parameters in <code class="docutils literal notranslate"><span class="pre">create_data.sh</span></code>.</p>
 <ul class="simple">
-<li><code class="docutils literal notranslate"><span class="pre">-trainlist</span> <span class="pre">&lt;file&gt;</span></code>: the file of training list;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-trainfolder</span> <span class="pre">&lt;folder&gt;</span></code>: the folder of training images;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-testlist</span> <span class="pre">&lt;file&gt;</span></code>: the file of test list;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-testfolder</span> <span class="pre">&lt;floder&gt;</span></code>: the folder of test images;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-outdata</span> <span class="pre">&lt;folder&gt;</span></code>: the folder to save output files, including mean, training and test files.
-The script will generate these files in the specified folder;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-filesize</span> <span class="pre">&lt;int&gt;</span></code>: number of training images that stores in each binary file.</li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-trainlist</span> <span class="pre">&lt;file&gt;</span></code>: the training list file;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-trainfolder</span> <span class="pre">&lt;folder&gt;</span></code>: the folder of training images;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-testlist</span> <span class="pre">&lt;file&gt;</span></code>: the test list file;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-testfolder</span> <span class="pre">&lt;folder&gt;</span></code>: the folder of test images;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-outdata</span> <span class="pre">&lt;folder&gt;</span></code>: the folder to save the output files, i.e. the mean, training, and test files.
+The script will generate these files in the specified folder;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-filesize</span> <span class="pre">&lt;int&gt;</span></code>: the number of training images stored in each binary file (see the sketch after this list).</p></li>
 </ul>
 </li>
 </ul>
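+<p>As a small illustration of the <code class="docutils literal notranslate"><span class="pre">-filesize</span></code> flag, the hedged sketch below
+computes how many training shards the preprocessing step would write; the numbers are placeholders, not defaults.</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># hedged sketch: number of trainX.bin shards written for a given
+# -filesize; ntrain and filesize below are placeholder values
+import math
+
+ntrain = 1281167    # placeholder: total number of training images
+filesize = 12800    # placeholder: images stored per binary file
+nshards = math.ceil(ntrain / filesize)
+print(nshards)      # number of trainX.bin files written to -outdata
+</pre></div>
+</div>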
@@ -248,25 +248,25 @@ The script will generate these files in
 <div class="section" id="training">
 <h3>Training<a class="headerlink" href="#training" title="Permalink to this headline">¶</a></h3>
 <ul>
-<li><p class="first">After preparing data, you can run the following command to train the Alexnet model.</p>
+<li><p>After preparing data, you can run the following command to train the Alexnet model.</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>    <span class="n">sh</span> <span class="n">run</span><span class="o">.</span><span class="n">sh</span>
 </pre></div>
 </div>
 </li>
-<li><p class="first">You may change the parameters in <code class="docutils literal notranslate"><span class="pre">run.sh</span></code>.</p>
+<li><p>You may change the parameters in <code class="docutils literal notranslate"><span class="pre">run.sh</span></code>.</p>
 <ul class="simple">
-<li><code class="docutils literal notranslate"><span class="pre">-epoch</span> <span class="pre">&lt;int&gt;</span></code>: number of epoch to be trained, default is 90;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-lr</span> <span class="pre">&lt;float&gt;</span></code>: base learning rate, the learning rate will decrease each 20 epochs,
-more specifically, <code class="docutils literal notranslate"><span class="pre">lr</span> <span class="pre">=</span> <span class="pre">lr</span> <span class="pre">*</span> <span class="pre">exp(0.1</span> <span class="pre">*</span> <span class="pre">(epoch</span> <span class="pre">/</span> <span class="pre">20))</span></code>;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-batchsize</span> <span class="pre">&lt;int&gt;</span></code>: batchsize, it should be changed regarding to your memory;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-filesize</span> <span class="pre">&lt;int&gt;</span></code>: number of training images that stores in each binary file, it is the
-same as the <code class="docutils literal notranslate"><span class="pre">filesize</span></code> in data preprocessing;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-ntrain</span> <span class="pre">&lt;int&gt;</span></code>: number of training images;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-ntest</span> <span class="pre">&lt;int&gt;</span></code>: number of test images;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-data</span> <span class="pre">&lt;folder&gt;</span></code>: the folder which stores the binary files, it is exactly the output
-folder in data preprocessing step;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-pfreq</span> <span class="pre">&lt;int&gt;</span></code>: the frequency(in batch) of printing current model status(loss and accuracy);</li>
-<li><code class="docutils literal notranslate"><span class="pre">-nthreads</span> <span class="pre">&lt;int&gt;</span></code>: the number of threads to load data which feed to the model.</li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-epoch</span> <span class="pre">&lt;int&gt;</span></code>: the number of epochs to train; the default is 90;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-lr</span> <span class="pre">&lt;float&gt;</span></code>: the base learning rate; the learning rate decreases every 20 epochs,
+more specifically, <code class="docutils literal notranslate"><span class="pre">lr</span> <span class="pre">=</span> <span class="pre">lr</span> <span class="pre">*</span> <span class="pre">exp(0.1</span> <span class="pre">*</span> <span class="pre">(epoch</span> <span class="pre">/</span> <span class="pre">20))</span></code> (see the sketch after this list);</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-batchsize</span> <span class="pre">&lt;int&gt;</span></code>: the batch size; it should be adjusted according to your memory;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-filesize</span> <span class="pre">&lt;int&gt;</span></code>: the number of training images stored in each binary file; it is the
+same as the <code class="docutils literal notranslate"><span class="pre">filesize</span></code> used in data preprocessing;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-ntrain</span> <span class="pre">&lt;int&gt;</span></code>: the number of training images;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-ntest</span> <span class="pre">&lt;int&gt;</span></code>: the number of test images;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-data</span> <span class="pre">&lt;folder&gt;</span></code>: the folder that stores the binary files; it is exactly the output
+folder used in the data preprocessing step;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-pfreq</span> <span class="pre">&lt;int&gt;</span></code>: the frequency (in batches) of printing the current model status (loss and accuracy);</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-nthreads</span> <span class="pre">&lt;int&gt;</span></code>: the number of threads used to load the data fed to the model.</p></li>
 </ul>
 </li>
 </ul>
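+<p>A note on the <code class="docutils literal notranslate"><span class="pre">-lr</span></code> schedule: read literally, the formula above multiplies
+the rate by a factor greater than one as the epoch grows, while the text says the rate decreases every 20
+epochs. The hedged sketch below therefore assumes a step decay by a factor of 0.1 per 20 epochs; it is our
+interpretation, not verified SINGA code.</p>
+<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># hedged sketch: the -lr schedule interpreted as a step decay by 0.1
+# every 20 epochs (an assumption; the printed formula, read as e**x,
+# would grow with the epoch instead of decreasing)
+base_lr = 0.01                   # placeholder base learning rate
+for epoch in range(0, 90, 10):   # -epoch default is 90
+    lr = base_lr * 0.1 ** (epoch // 20)
+    print(epoch, lr)
+</pre></div>
+</div>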

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/densenet/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/densenet/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/densenet/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/densenet/README.html Sat Jun 29 14:42:24 2019
@@ -9,7 +9,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>name: DenseNet models on ImageNet SINGA version: 1.1.1 SINGA commit: license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py &mdash; incubator-singa 2.0.0 documentation</title>
+  <title>Image Classification using DenseNet &mdash; incubator-singa 2.0.0 documentation</title>
   
 
   
@@ -163,10 +163,7 @@
     
       <li><a href="../../../../../index.html">Docs</a> &raquo;</li>
         
-      <li>name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py</li>
+      <li>Image Classification using DenseNet</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -200,33 +197,26 @@ license: https://github.com/pytorch/visi
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
---><hr class="docutils" />
-<div class="section" id="name-densenet-models-on-imagenet-singa-version-1-1-1-singa-commit-license-https-github-com-pytorch-vision-blob-master-torchvision-models-densenet-py">
-<h1>name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py<a class="headerlink" href="#name-densenet-models-on-imagenet-singa-version-1-1-1-singa-commit-license-https-github-com-pytorch-vision-blob-master-torchvision-models-densenet-py" title="Permalink to this headline">¶</a></h1>
-</div>
-<div class="section" id="image-classification-using-densenet">
+--><div class="section" id="image-classification-using-densenet">
 <h1>Image Classification using DenseNet<a class="headerlink" href="#image-classification-using-densenet" title="Permalink to this headline">¶</a></h1>
 <p>In this example, we convert DenseNet from <a class="reference external" href="https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py">PyTorch</a>
 to SINGA for image classification.</p>
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
+<li><p>Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/densenet/densenet-121.tar.gz
   $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/synset_words.txt
   $ tar xvf densenet-121.tar.gz
 </pre></div>
 </div>
 </li>
-<li><p class="first">Usage</p>
+<li><p>Usage</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ python serve.py -h
 </pre></div>
 </div>
 </li>
-<li><p class="first">Example</p>
+<li><p>Example</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  # use cpu
   $ python serve.py --use_cpu --parameter_file densenet-121.pickle --depth 121 &amp;
   # use gpu
@@ -236,7 +226,7 @@ to SINGA for image classification.</p>
 <p>The parameter files for the following model and depth configuration pairs are provided:
 <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/densenet/densenet-121.tar.gz">121</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/densenet/densenet-169.tar.gz">169</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/densenet/densenet-201.tar.gz">201</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/densenet/densenet-161.tar.gz">161</a></p>
 </li>
-<li><p class="first">Submit images for classification</p>
+<li><p>Submit images for classification</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ curl -i -F image=@image1.jpg http://localhost:9999/api
   $ curl -i -F image=@image2.jpg http://localhost:9999/api
   $ curl -i -F image=@image3.jpg http://localhost:9999/api

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/googlenet/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/googlenet/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/googlenet/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/googlenet/README.html Sat Jun 29 14:42:24 2019
@@ -9,7 +9,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>name: GoogleNet on ImageNet SINGA version: 1.0.1 SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744 parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet &mdash; incubator-singa 2.0.0 documentation</title>
+  <title>Image Classification using GoogleNet &mdash; incubator-singa 2.0.0 documentation</title>
   
 
   
@@ -163,12 +163,7 @@
     
       <li><a href="../../../../../index.html">Docs</a> &raquo;</li>
         
-      <li>name: GoogleNet on ImageNet
-SINGA version: 1.0.1
-SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
-parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
-license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet</li>
+      <li>Image Classification using GoogleNet</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -202,28 +197,19 @@ license: unrestricted https://github.com
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
---><hr class="docutils" />
-<div class="section" id="name-googlenet-on-imagenet-singa-version-1-0-1-singa-commit-8c990f7da2de220e8a012c6a8ecc897dc7532744-parameter-url-https-s3-ap-southeast-1-amazonaws-com-dlfile-bvlc-googlenet-tar-gz-parameter-sha1-0a88e8948b1abca3badfd8d090d6be03f8d7655d-license-unrestricted-https-github-com-bvlc-caffe-tree-master-models-bvlc-googlenet">
-<h1>name: GoogleNet on ImageNet
-SINGA version: 1.0.1
-SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
-parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
-license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet<a class="headerlink" href="#name-googlenet-on-imagenet-singa-version-1-0-1-singa-commit-8c990f7da2de220e8a012c6a8ecc897dc7532744-parameter-url-https-s3-ap-southeast-1-amazonaws-com-dlfile-bvlc-googlenet-tar-gz-parameter-sha1-0a88e8948b1abca3badfd8d090d6be03f8d7655d-license-unrestricted-https-github-com-bvlc-caffe-tree-master-models-bvlc-googlenet" title="Permalink to this headline">¶</a></h1>
-</div>
-<div class="section" id="image-classification-using-googlenet">
+--><div class="section" id="image-classification-using-googlenet">
 <h1>Image Classification using GoogleNet<a class="headerlink" href="#image-classification-using-googlenet" title="Permalink to this headline">¶</a></h1>
-<p>In this example, we convert GoogleNet trained on Caffe to SINGA for image classification.</p>
+<p>In this example, we convert GoogleNet trained on Caffe to SINGA for image classification. Tested on SINGA commit <code class="docutils literal notranslate"><span class="pre">8c990f7da2de220e8a012c6a8ecc897dc7532744</span></code> with <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz">the pretrained parameters</a>.</p>
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Download the parameter checkpoint file into this folder</p>
+<li><p>Download the parameter checkpoint file into this folder</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
   $ tar xvf bvlc_googlenet.tar.gz
 </pre></div>
 </div>
 </li>
-<li><p class="first">Run the program</p>
+<li><p>Run the program</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  # use cpu
   $ python serve.py -C &amp;
   # use gpu
@@ -231,7 +217,7 @@ license: unrestricted https://github.com
 </pre></div>
 </div>
 </li>
-<li><p class="first">Submit images for classification</p>
+<li><p>Submit images for classification</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ curl -i -F image=@image1.jpg http://localhost:9999/api
   $ curl -i -F image=@image2.jpg http://localhost:9999/api
   $ curl -i -F image=@image3.jpg http://localhost:9999/api

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/inception/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/inception/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/inception/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/inception/README.html Sat Jun 29 14:42:24 2019
@@ -9,7 +9,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>name: Inception V4 on ImageNet SINGA version: 1.1.1 SINGA commit: parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56 license: https://github.com/tensorflow/models/tree/master/slim &mdash; incubator-singa 2.0.0 documentation</title>
+  <title>Image Classification using Inception V4 &mdash; incubator-singa 2.0.0 documentation</title>
   
 
   
@@ -163,12 +163,7 @@
     
       <li><a href="../../../../../index.html">Docs</a> &raquo;</li>
         
-      <li>name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim</li>
+      <li>Image Classification using Inception V4</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -202,30 +197,20 @@ license: https://github.com/tensorflow/m
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
---><hr class="docutils" />
-<div class="section" id="name-inception-v4-on-imagenet-singa-version-1-1-1-singa-commit-parameter-url-https-s3-ap-southeast-1-amazonaws-com-dlfile-inception-v4-tar-gz-parameter-sha1-5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56-license-https-github-com-tensorflow-models-tree-master-slim">
-<h1>name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim<a class="headerlink" href="#name-inception-v4-on-imagenet-singa-version-1-1-1-singa-commit-parameter-url-https-s3-ap-southeast-1-amazonaws-com-dlfile-inception-v4-tar-gz-parameter-sha1-5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56-license-https-github-com-tensorflow-models-tree-master-slim" title="Permalink to this headline">¶</a></h1>
-</div>
-<div class="section" id="image-classification-using-inception-v4">
+--><div class="section" id="image-classification-using-inception-v4">
 <h1>Image Classification using Inception V4<a class="headerlink" href="#image-classification-using-inception-v4" title="Permalink to this headline">¶</a></h1>
-<p>In this example, we convert Inception V4 trained on Tensorflow to SINGA for image classification.</p>
+<p>In this example, we convert Inception V4 trained on TensorFlow to SINGA for image classification. Tested on SINGA version 1.1.1 with <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz">parameters pretrained by TensorFlow</a>.</p>
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Download the parameter checkpoint file</p>
+<li><p>Download the parameter checkpoint file</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ wget
   $ tar xvf inception_v4.tar.gz
 </pre></div>
 </div>
 </li>
-<li><p class="first">Download <a class="reference external" href="https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh">synset_word.txt</a> file.</p>
-</li>
-<li><p class="first">Run the program</p>
+<li><p>Download <a class="reference external" href="https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh">synset_word.txt</a> file.</p></li>
+<li><p>Run the program</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  # use cpu
   $ python serve.py -C &amp;
   # use gpu
@@ -233,7 +218,7 @@ license: https://github.com/tensorflow/m
 </pre></div>
 </div>
 </li>
-<li><p class="first">Submit images for classification</p>
+<li><p>Submit images for classification</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ curl -i -F image=@image1.jpg http://localhost:9999/api
   $ curl -i -F image=@image2.jpg http://localhost:9999/api
   $ curl -i -F image=@image3.jpg http://localhost:9999/api