Posted to commits@singa.apache.org by wa...@apache.org on 2019/06/29 14:42:26 UTC

svn commit: r1862313 [7/34] - in /incubator/singa/site/trunk: ./ _sources/ _sources/community/ _sources/docs/ _sources/docs/model_zoo/ _sources/docs/model_zoo/caffe/ _sources/docs/model_zoo/char-rnn/ _sources/docs/model_zoo/cifar10/ _sources/docs/model...

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/resnet/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/resnet/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/resnet/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/resnet/README.html Sat Jun 29 14:42:24 2019
@@ -9,7 +9,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>name: Resnets on ImageNet SINGA version: 1.1 SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE &mdash; incubator-singa 2.0.0 documentation</title>
+  <title>Image Classification using Residual Networks &mdash; incubator-singa 2.0.0 documentation</title>
   
 
   
@@ -163,11 +163,7 @@
     
       <li><a href="../../../../../index.html">Docs</a> &raquo;</li>
         
-      <li>name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE</li>
+      <li>Image Classification using Residual Networks</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -201,33 +197,25 @@ license: Apache V2, https://github.com/f
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
---><hr class="docutils" />
-<div class="section" id="name-resnets-on-imagenet-singa-version-1-1-singa-commit-45ec92d8ffc1fa1385a9307fdf07e21da939ee2f-parameter-url-https-s3-ap-southeast-1-amazonaws-com-dlfile-resnet-resnet-18-tar-gz-license-apache-v2-https-github-com-facebook-fb-resnet-torch-blob-master-license">
-<h1>name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE<a class="headerlink" href="#name-resnets-on-imagenet-singa-version-1-1-singa-commit-45ec92d8ffc1fa1385a9307fdf07e21da939ee2f-parameter-url-https-s3-ap-southeast-1-amazonaws-com-dlfile-resnet-resnet-18-tar-gz-license-apache-v2-https-github-com-facebook-fb-resnet-torch-blob-master-license" title="Permalink to this headline">¶</a></h1>
-</div>
-<div class="section" id="image-classification-using-residual-networks">
+--><div class="section" id="image-classification-using-residual-networks">
 <h1>Image Classification using Residual Networks<a class="headerlink" href="#image-classification-using-residual-networks" title="Permalink to this headline">¶</a></h1>
-<p>In this example, we convert Residual Networks trained on <a class="reference external" href="https://github.com/facebook/fb.resnet.torch">Torch</a> to SINGA for image classification.</p>
+<p>In this example, we convert Residual Networks trained on <a class="reference external" href="https://github.com/facebook/fb.resnet.torch">Torch</a> to SINGA for image classification. It was tested on [SINGA commit] with the <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz">parameters pretrained by Torch</a>.</p>
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
+<li><p>Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
   $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/synset_words.txt
   $ tar xvf resnet-18.tar.gz
 </pre></div>
 </div>
 </li>
-<li><p class="first">Usage</p>
+<li><p>Usage</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ python serve.py -h
 </pre></div>
 </div>
 </li>
-<li><p class="first">Example</p>
+<li><p>Example</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  # use cpu
   $ python serve.py --use_cpu --parameter_file resnet-18.pickle --model resnet --depth 18 &amp;
   # use gpu
@@ -236,13 +224,13 @@ license: Apache V2, https://github.com/f
 </div>
 <p>The parameter files for the following model and depth configuration pairs are provided:</p>
 <ul class="simple">
-<li>resnet (original resnet), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz">18</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-34.tar.gz">34</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-101.tar.gz">101</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-152.tar.gz">152</a></li>
-<li>addbn (resnet with a batch normalization layer after the addition), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-50.tar.gz">50</a></li>
-<li>wrn (wide resnet), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/wrn-50-2.tar.gz">50</a></li>
-<li>preact (resnet with pre-activation) <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-200.tar.gz">200</a></li>
+<li><p>resnet (original resnet), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz">18</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-34.tar.gz">34</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-101.tar.gz">101</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-152.tar.gz">152</a></p></li>
+<li><p>addbn (resnet with a batch normalization layer after the addition), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-50.tar.gz">50</a></p></li>
+<li><p>wrn (wide resnet), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/wrn-50-2.tar.gz">50</a></p></li>
+<li><p>preact (resnet with pre-activation) <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-200.tar.gz">200</a></p></li>
 </ul>
 </li>
-<li><p class="first">Submit images for classification</p>
+<li><p>Submit images for classification</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ curl -i -F image=@image1.jpg http://localhost:9999/api
   $ curl -i -F image=@image2.jpg http://localhost:9999/api
   $ curl -i -F image=@image3.jpg http://localhost:9999/api
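The endpoint above can also be queried from Python; a minimal client sketch, assuming serve.py is listening on port 9999 and accepts a multipart form field named "image" exactly as the curl examples indicate (requires the third-party requests package):

  # Post one image to the classification endpoint started by serve.py.
  # Port 9999 and the "image" field name are taken from the curl examples.
  import requests

  with open('image1.jpg', 'rb') as f:
      resp = requests.post('http://localhost:9999/api', files={'image': f})

  print(resp.status_code)
  print(resp.text)  # classification result returned by the server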

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/vgg/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/vgg/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/vgg/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/imagenet/vgg/README.html Sat Jun 29 14:42:24 2019
@@ -9,7 +9,7 @@
   
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   
-  <title>name: VGG models on ImageNet SINGA version: 1.1.1 SINGA commit: license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py &mdash; incubator-singa 2.0.0 documentation</title>
+  <title>Image Classification using VGG &mdash; incubator-singa 2.0.0 documentation</title>
   
 
   
@@ -163,10 +163,7 @@
     
       <li><a href="../../../../../index.html">Docs</a> &raquo;</li>
         
-      <li>name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py</li>
+      <li>Image Classification using VGG</li>
     
     
       <li class="wy-breadcrumbs-aside">
@@ -200,33 +197,26 @@ license: https://github.com/pytorch/visi
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
---><hr class="docutils" />
-<div class="section" id="name-vgg-models-on-imagenet-singa-version-1-1-1-singa-commit-license-https-github-com-pytorch-vision-blob-master-torchvision-models-vgg-py">
-<h1>name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py<a class="headerlink" href="#name-vgg-models-on-imagenet-singa-version-1-1-1-singa-commit-license-https-github-com-pytorch-vision-blob-master-torchvision-models-vgg-py" title="Permalink to this headline">¶</a></h1>
-</div>
-<div class="section" id="image-classification-using-vgg">
+--><div class="section" id="image-classification-using-vgg">
 <h1>Image Classification using VGG<a class="headerlink" href="#image-classification-using-vgg" title="Permalink to this headline">¶</a></h1>
 <p>In this example, we convert VGG on <a class="reference external" href="https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py">PyTorch</a>
 to SINGA for image classification.</p>
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
+<li><p>Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11.tar.gz
   $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/synset_words.txt
   $ tar xvf vgg11.tar.gz
 </pre></div>
 </div>
 </li>
-<li><p class="first">Usage</p>
+<li><p>Usage</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ python serve.py -h
 </pre></div>
 </div>
 </li>
-<li><p class="first">Example</p>
+<li><p>Example</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  # use cpu
   $ python serve.py --use_cpu --parameter_file vgg11.pickle --depth 11 &amp;
   # use gpu
@@ -235,11 +225,11 @@ to SINGA for image classification.</p>
 </div>
 <p>The parameter files for the following model and depth configuration pairs are provided:</p>
 <ul class="simple">
-<li>Without batch-normalization, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11.tar.gz">11</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg13.tar.gz">13</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg16.tar.gz">16</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg19.tar.gz">19</a></li>
-<li>With batch-normalization, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11_bn.tar.gz">11</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg13_bn.tar.gz">13</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg16_bn.tar.gz">16</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg19_bn.tar.gz">19</a></li>
+<li><p>Without batch-normalization, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11.tar.gz">11</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg13.tar.gz">13</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg16.tar.gz">16</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg19.tar.gz">19</a></p></li>
+<li><p>With batch-normalization, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11_bn.tar.gz">11</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg13_bn.tar.gz">13</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg16_bn.tar.gz">16</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg19_bn.tar.gz">19</a></p></li>
 </ul>
 </li>
-<li><p class="first">Submit images for classification</p>
+<li><p>Submit images for classification</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ curl -i -F image=@image1.jpg http://localhost:9999/api
   $ curl -i -F image=@image2.jpg http://localhost:9999/api
   $ curl -i -F image=@image3.jpg http://localhost:9999/api
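The download-and-extract step above can also be scripted with the Python standard library; a sketch, assuming the tarball unpacks to the vgg11.pickle file named in the serve.py example (the member name is inferred, not verified):

  # Fetch and unpack a VGG checkpoint without wget/tar.
  # The URL is the vgg11 link listed above; the extracted file name
  # (vgg11.pickle) is assumed from the serve.py example.
  import tarfile
  import urllib.request

  url = 'https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11.tar.gz'
  urllib.request.urlretrieve(url, 'vgg11.tar.gz')
  with tarfile.open('vgg11.tar.gz', 'r:gz') as tar:
      tar.extractall('.')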

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/index.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/index.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/index.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/index.html Sat Jun 29 14:42:24 2019
@@ -212,52 +212,27 @@
 </li>
 </ul>
 </li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/densenet/README.html">name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py</a></li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/densenet/README.html#image-classification-using-densenet">Image Classification using DenseNet</a><ul>
+<li class="toctree-l1"><a class="reference internal" href="imagenet/densenet/README.html">Image Classification using DenseNet</a><ul>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/densenet/README.html#instructions">Instructions</a></li>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/densenet/README.html#details">Details</a></li>
 </ul>
 </li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/googlenet/README.html">name: GoogleNet on ImageNet
-SINGA version: 1.0.1
-SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
-parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
-license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet</a></li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/googlenet/README.html#image-classification-using-googlenet">Image Classification using GoogleNet</a><ul>
+<li class="toctree-l1"><a class="reference internal" href="imagenet/googlenet/README.html">Image Classification using GoogleNet</a><ul>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/googlenet/README.html#instructions">Instructions</a></li>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/googlenet/README.html#details">Details</a></li>
 </ul>
 </li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/inception/README.html">name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim</a></li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/inception/README.html#image-classification-using-inception-v4">Image Classification using Inception V4</a><ul>
+<li class="toctree-l1"><a class="reference internal" href="imagenet/inception/README.html">Image Classification using Inception V4</a><ul>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/inception/README.html#instructions">Instructions</a></li>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/inception/README.html#details">Details</a></li>
 </ul>
 </li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/resnet/README.html">name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE</a></li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/resnet/README.html#image-classification-using-residual-networks">Image Classification using Residual Networks</a><ul>
+<li class="toctree-l1"><a class="reference internal" href="imagenet/resnet/README.html">Image Classification using Residual Networks</a><ul>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/resnet/README.html#instructions">Instructions</a></li>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/resnet/README.html#details">Details</a></li>
 </ul>
 </li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/vgg/README.html">name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py</a></li>
-<li class="toctree-l1"><a class="reference internal" href="imagenet/vgg/README.html#image-classification-using-vgg">Image Classification using VGG</a><ul>
+<li class="toctree-l1"><a class="reference internal" href="imagenet/vgg/README.html">Image Classification using VGG</a><ul>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/vgg/README.html#instructions">Instructions</a></li>
 <li class="toctree-l2"><a class="reference internal" href="imagenet/vgg/README.html#details">Details</a></li>
 </ul>

Modified: incubator/singa/site/trunk/docs/model_zoo/examples/mnist/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/examples/mnist/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/examples/mnist/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/examples/mnist/README.html Sat Jun 29 14:42:24 2019
@@ -205,9 +205,8 @@ MNIST dataset. The RBM model and its hyp
 <div class="section" id="running-instructions">
 <h2>Running instructions<a class="headerlink" href="#running-instructions" title="Permalink to this headline">¶</a></h2>
 <ol>
-<li><p class="first">Download the pre-processed <a class="reference external" href="https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz">MNIST dataset</a></p>
-</li>
-<li><p class="first">Start the training</p>
+<li><p>Download the pre-processed <a class="reference external" href="https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz">MNIST dataset</a>.</p></li>
+<li><p>Start the training</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span> <span class="n">python</span> <span class="n">train</span><span class="o">.</span><span class="n">py</span> <span class="n">mnist</span><span class="o">.</span><span class="n">pkl</span><span class="o">.</span><span class="n">gz</span>
 </pre></div>
 </div>
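Before starting the training, the downloaded file can be inspected from Python; a loading sketch, assuming the archive holds the conventional (train, validation, test) tuple of (images, labels) pairs used by this mnist.pkl.gz (an assumption worth checking against the actual archive):

  # Peek at the pre-processed MNIST pickle downloaded in step 1.
  # The (train_set, valid_set, test_set) layout is assumed, not verified.
  import gzip
  import pickle

  with gzip.open('mnist.pkl.gz', 'rb') as f:
      train_set, valid_set, test_set = pickle.load(f, encoding='latin1')

  print(train_set[0].shape)  # e.g. (50000, 784) if the layout matches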

Modified: incubator/singa/site/trunk/docs/model_zoo/imagenet/alexnet/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/imagenet/alexnet/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/imagenet/alexnet/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/imagenet/alexnet/README.html Sat Jun 29 14:42:24 2019
@@ -36,8 +36,8 @@
   <link rel="stylesheet" href="../../../../_static/pygments.css" type="text/css" />
     <link rel="index" title="Index" href="../../../../genindex.html" />
     <link rel="search" title="Search" href="../../../../search.html" />
-    <link rel="next" title="name: DenseNet models on ImageNet SINGA version: 1.1.1 SINGA commit: license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py" href="../densenet/README.html" />
-    <link rel="prev" title="Train a RBM model against MNIST dataset" href="../../mnist/README.html" />
+    <link rel="next" title="name: GoogleNet on ImageNet SINGA version: 1.0.1 SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744 parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet" href="../googlenet/README.html" />
+    <link rel="prev" title="Train Char-RNN over plain text" href="../../char-rnn/README.html" />
     <link href="../../../../_static/style.css" rel="stylesheet" type="text/css">
     <!--link href="../../../../_static/fontawesome-all.min.css" rel="stylesheet" type="text/css"-->
 	<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css" integrity="sha384-DNOHZ68U8hZfKXOrtjWvjxusGo9WQnrNx2sqG0tfsghAvtVlRW3tvkXWZh58N9jp" crossorigin="anonymous">
@@ -104,6 +104,7 @@
 <li class="toctree-l1 current"><a class="reference internal" href="../../../index.html">Documentation</a><ul class="current">
 <li class="toctree-l2"><a class="reference internal" href="../../../installation.html">Installation</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../../software_stack.html">Software Stack</a></li>
+<li class="toctree-l2"><a class="reference internal" href="../../../benchmark.html">Benchmark for Distributed training</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../../device.html">Device</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../../tensor.html">Tensor</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../../layer.html">Layer</a></li>
@@ -121,16 +122,10 @@
 <li class="toctree-l2 current"><a class="reference internal" href="../../index.html">Model Zoo</a><ul class="current">
 <li class="toctree-l3"><a class="reference internal" href="../../cifar10/README.html">Train CNN over Cifar-10</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../../char-rnn/README.html">Train Char-RNN over plain text</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../../mnist/README.html">Train a RBM model against MNIST dataset</a></li>
 <li class="toctree-l3 current"><a class="current reference internal" href="#">Train AlexNet over ImageNet</a><ul>
 <li class="toctree-l4"><a class="reference internal" href="#instructions">Instructions</a></li>
 </ul>
 </li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html">name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html#image-classification-using-densenet">Image Classification using DenseNet</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../googlenet/README.html">name: GoogleNet on ImageNet
 SINGA version: 1.0.1
 SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
@@ -138,24 +133,6 @@ parameter_url: https://s3-ap-southeast-1
 parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
 license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../googlenet/README.html#image-classification-using-googlenet">Image Classification using GoogleNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../inception/README.html">name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../inception/README.html#image-classification-using-inception-v4">Image Classification using Inception V4</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../resnet/README.html">name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../resnet/README.html#image-classification-using-residual-networks">Image Classification using Residual Networks</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../vgg/README.html">name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../vgg/README.html#image-classification-using-vgg">Image Classification using VGG</a></li>
 </ul>
 </li>
 </ul>
@@ -244,24 +221,7 @@ license: https://github.com/pytorch/visi
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
---><div class="section" id="train-alexnet-over-imagenet">
+  <div class="section" id="train-alexnet-over-imagenet">
 <h1>Train AlexNet over ImageNet<a class="headerlink" href="#train-alexnet-over-imagenet" title="Permalink to this headline">¶</a></h1>
<p>Convolutional neural network (CNN) is a type of feed-forward neural
 network widely used for image and video classification. In this example, we will
@@ -278,17 +238,17 @@ options in CMakeLists.txt or run <code c
 <div class="section" id="data-download">
 <h3>Data download<a class="headerlink" href="#data-download" title="Permalink to this headline">¶</a></h3>
 <ul class="simple">
-<li>Please refer to step1-3 on <a class="reference external" href="https://github.com/amd/OpenCL-caffe/wiki/Instructions-to-create-ImageNet-2012-data">Instructions to create ImageNet 2012 data</a>
-to download and decompress the data.</li>
-<li>You can download the training and validation list by
+<li><p>Please refer to steps 1-3 of <a class="reference external" href="https://github.com/amd/OpenCL-caffe/wiki/Instructions-to-create-ImageNet-2012-data">Instructions to create ImageNet 2012 data</a>
+to download and decompress the data.</p></li>
+<li><p>You can download the training and validation lists using
 <a class="reference external" href="https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh">get_ilsvrc_aux.sh</a>
-or from <a class="reference external" href="http://www.image-net.org/download-images">Imagenet</a>.</li>
+or from <a class="reference external" href="http://www.image-net.org/download-images">ImageNet</a>.</p></li>
 </ul>
 </div>
 <div class="section" id="data-preprocessing">
 <h3>Data preprocessing<a class="headerlink" href="#data-preprocessing" title="Permalink to this headline">¶</a></h3>
 <ul>
-<li><p class="first">Assuming you have downloaded the data and the list.
+<li><p>Assuming you have downloaded the data and the list,
you can now transform the data into binary files by running:</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>    <span class="n">sh</span> <span class="n">create_data</span><span class="o">.</span><span class="n">sh</span>
 </pre></div>
@@ -296,15 +256,15 @@ Now we should transform the data into bi
<p>The script will generate a test file (<code class="docutils literal notranslate"><span class="pre">test.bin</span></code>), a mean file (<code class="docutils literal notranslate"><span class="pre">mean.bin</span></code>), and
several training files (<code class="docutils literal notranslate"><span class="pre">trainX.bin</span></code>) in the specified output folder.</p>
 </li>
-<li><p class="first">You can also change the parameters in <code class="docutils literal notranslate"><span class="pre">create_data.sh</span></code>.</p>
+<li><p>You can also change the parameters in <code class="docutils literal notranslate"><span class="pre">create_data.sh</span></code>.</p>
 <ul class="simple">
-<li><code class="docutils literal notranslate"><span class="pre">-trainlist</span> <span class="pre">&lt;file&gt;</span></code>: the file of training list;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-trainfolder</span> <span class="pre">&lt;folder&gt;</span></code>: the folder of training images;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-testlist</span> <span class="pre">&lt;file&gt;</span></code>: the file of test list;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-testfolder</span> <span class="pre">&lt;floder&gt;</span></code>: the folder of test images;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-outdata</span> <span class="pre">&lt;folder&gt;</span></code>: the folder to save output files, including mean, training and test files.
-The script will generate these files in the specified folder;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-filesize</span> <span class="pre">&lt;int&gt;</span></code>: number of training images that stores in each binary file.</li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-trainlist</span> <span class="pre">&lt;file&gt;</span></code>: the file of training list;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-trainfolder</span> <span class="pre">&lt;folder&gt;</span></code>: the folder of training images;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-testlist</span> <span class="pre">&lt;file&gt;</span></code>: the file of test list;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-testfolder</span> <span class="pre">&lt;floder&gt;</span></code>: the folder of test images;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-outdata</span> <span class="pre">&lt;folder&gt;</span></code>: the folder to save output files, including mean, training and test files.
+The script will generate these files in the specified folder;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-filesize</span> <span class="pre">&lt;int&gt;</span></code>: number of training images that stores in each binary file.</p></li>
 </ul>
 </li>
 </ul>
@@ -312,25 +272,25 @@ The script will generate these files in
 <div class="section" id="training">
 <h3>Training<a class="headerlink" href="#training" title="Permalink to this headline">¶</a></h3>
 <ul>
-<li><p class="first">After preparing data, you can run the following command to train the Alexnet model.</p>
+<li><p>After preparing the data, you can run the following command to train the AlexNet model.</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>    <span class="n">sh</span> <span class="n">run</span><span class="o">.</span><span class="n">sh</span>
 </pre></div>
 </div>
 </li>
-<li><p class="first">You may change the parameters in <code class="docutils literal notranslate"><span class="pre">run.sh</span></code>.</p>
+<li><p>You may change the parameters in <code class="docutils literal notranslate"><span class="pre">run.sh</span></code>.</p>
 <ul class="simple">
-<li><code class="docutils literal notranslate"><span class="pre">-epoch</span> <span class="pre">&lt;int&gt;</span></code>: number of epoch to be trained, default is 90;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-lr</span> <span class="pre">&lt;float&gt;</span></code>: base learning rate, the learning rate will decrease each 20 epochs,
-more specifically, <code class="docutils literal notranslate"><span class="pre">lr</span> <span class="pre">=</span> <span class="pre">lr</span> <span class="pre">*</span> <span class="pre">exp(0.1</span> <span class="pre">*</span> <span class="pre">(epoch</span> <span class="pre">/</span> <span class="pre">20))</span></code>;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-batchsize</span> <span class="pre">&lt;int&gt;</span></code>: batchsize, it should be changed regarding to your memory;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-filesize</span> <span class="pre">&lt;int&gt;</span></code>: number of training images that stores in each binary file, it is the
-same as the <code class="docutils literal notranslate"><span class="pre">filesize</span></code> in data preprocessing;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-ntrain</span> <span class="pre">&lt;int&gt;</span></code>: number of training images;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-ntest</span> <span class="pre">&lt;int&gt;</span></code>: number of test images;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-data</span> <span class="pre">&lt;folder&gt;</span></code>: the folder which stores the binary files, it is exactly the output
-folder in data preprocessing step;</li>
-<li><code class="docutils literal notranslate"><span class="pre">-pfreq</span> <span class="pre">&lt;int&gt;</span></code>: the frequency(in batch) of printing current model status(loss and accuracy);</li>
-<li><code class="docutils literal notranslate"><span class="pre">-nthreads</span> <span class="pre">&lt;int&gt;</span></code>: the number of threads to load data which feed to the model.</li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-epoch</span> <span class="pre">&lt;int&gt;</span></code>: number of epoch to be trained, default is 90;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-lr</span> <span class="pre">&lt;float&gt;</span></code>: base learning rate, the learning rate will decrease each 20 epochs,
+more specifically, <code class="docutils literal notranslate"><span class="pre">lr</span> <span class="pre">=</span> <span class="pre">lr</span> <span class="pre">*</span> <span class="pre">exp(0.1</span> <span class="pre">*</span> <span class="pre">(epoch</span> <span class="pre">/</span> <span class="pre">20))</span></code>;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-batchsize</span> <span class="pre">&lt;int&gt;</span></code>: batchsize, it should be changed regarding to your memory;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-filesize</span> <span class="pre">&lt;int&gt;</span></code>: number of training images that stores in each binary file, it is the
+same as the <code class="docutils literal notranslate"><span class="pre">filesize</span></code> in data preprocessing;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-ntrain</span> <span class="pre">&lt;int&gt;</span></code>: number of training images;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-ntest</span> <span class="pre">&lt;int&gt;</span></code>: number of test images;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-data</span> <span class="pre">&lt;folder&gt;</span></code>: the folder which stores the binary files, it is exactly the output
+folder in data preprocessing step;</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-pfreq</span> <span class="pre">&lt;int&gt;</span></code>: the frequency(in batch) of printing current model status(loss and accuracy);</p></li>
+<li><p><code class="docutils literal notranslate"><span class="pre">-nthreads</span> <span class="pre">&lt;int&gt;</span></code>: the number of threads to load data which feed to the model.</p></li>
 </ul>
 </li>
 </ul>
@@ -346,10 +306,10 @@ folder in data preprocessing step;</li>
   
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="../densenet/README.html" class="btn btn-neutral float-right" title="name: DenseNet models on ImageNet SINGA version: 1.1.1 SINGA commit: license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="../googlenet/README.html" class="btn btn-neutral float-right" title="name: GoogleNet on ImageNet SINGA version: 1.0.1 SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744 parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="../../mnist/README.html" class="btn btn-neutral float-left" title="Train a RBM model against MNIST dataset" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="../../char-rnn/README.html" class="btn btn-neutral float-left" title="Train Char-RNN over plain text" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
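One practical consequence of the -filesize and -ntrain options documented above: the number of trainX.bin files that create_data.sh emits should be ntrain divided by filesize, rounded up. A quick check, where ntrain is the standard ILSVRC-2012 training-set size and filesize 1280 is purely illustrative:

  # Expected number of binary training files from create_data.sh.
  # ntrain = 1,281,167 is the ILSVRC-2012 training-set size;
  # filesize = 1280 is an illustrative value, not a recommendation.
  import math

  ntrain, filesize = 1281167, 1280
  print(math.ceil(ntrain / filesize))  # -> 1001 trainX.bin files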
   

Modified: incubator/singa/site/trunk/docs/model_zoo/imagenet/googlenet/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/imagenet/googlenet/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/imagenet/googlenet/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/imagenet/googlenet/README.html Sat Jun 29 14:42:24 2019
@@ -36,8 +36,8 @@
   <link rel="stylesheet" href="../../../../_static/pygments.css" type="text/css" />
     <link rel="index" title="Index" href="../../../../genindex.html" />
     <link rel="search" title="Search" href="../../../../search.html" />
-    <link rel="next" title="name: Inception V4 on ImageNet SINGA version: 1.1.1 SINGA commit: parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56 license: https://github.com/tensorflow/models/tree/master/slim" href="../inception/README.html" />
-    <link rel="prev" title="name: DenseNet models on ImageNet SINGA version: 1.1.1 SINGA commit: license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py" href="../densenet/README.html" />
+    <link rel="next" title="Download SINGA" href="../../../../downloads.html" />
+    <link rel="prev" title="Train AlexNet over ImageNet" href="../alexnet/README.html" />
     <link href="../../../../_static/style.css" rel="stylesheet" type="text/css">
     <!--link href="../../../../_static/fontawesome-all.min.css" rel="stylesheet" type="text/css"-->
 	<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css" integrity="sha384-DNOHZ68U8hZfKXOrtjWvjxusGo9WQnrNx2sqG0tfsghAvtVlRW3tvkXWZh58N9jp" crossorigin="anonymous">
@@ -104,6 +104,7 @@
 <li class="toctree-l1 current"><a class="reference internal" href="../../../index.html">Documentation</a><ul class="current">
 <li class="toctree-l2"><a class="reference internal" href="../../../installation.html">Installation</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../../software_stack.html">Software Stack</a></li>
+<li class="toctree-l2"><a class="reference internal" href="../../../benchmark.html">Benchmark for Distributed training</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../../device.html">Device</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../../tensor.html">Tensor</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../../layer.html">Layer</a></li>
@@ -121,13 +122,7 @@
 <li class="toctree-l2 current"><a class="reference internal" href="../../index.html">Model Zoo</a><ul class="current">
 <li class="toctree-l3"><a class="reference internal" href="../../cifar10/README.html">Train CNN over Cifar-10</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../../char-rnn/README.html">Train Char-RNN over plain text</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../../mnist/README.html">Train a RBM model against MNIST dataset</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../alexnet/README.html">Train AlexNet over ImageNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html">name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html#image-classification-using-densenet">Image Classification using DenseNet</a></li>
 <li class="toctree-l3 current"><a class="current reference internal" href="#">name: GoogleNet on ImageNet
 SINGA version: 1.0.1
 SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
@@ -139,24 +134,6 @@ license: unrestricted https://github.com
 <li class="toctree-l4"><a class="reference internal" href="#details">Details</a></li>
 </ul>
 </li>
-<li class="toctree-l3"><a class="reference internal" href="../inception/README.html">name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../inception/README.html#image-classification-using-inception-v4">Image Classification using Inception V4</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../resnet/README.html">name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../resnet/README.html#image-classification-using-residual-networks">Image Classification using Residual Networks</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../vgg/README.html">name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../vgg/README.html#image-classification-using-vgg">Image Classification using VGG</a></li>
 </ul>
 </li>
 </ul>
@@ -250,24 +227,7 @@ license: unrestricted https://github.com
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
---><hr class="docutils" />
+  <hr class="docutils" />
 <div class="section" id="name-googlenet-on-imagenet-singa-version-1-0-1-singa-commit-8c990f7da2de220e8a012c6a8ecc897dc7532744-parameter-url-https-s3-ap-southeast-1-amazonaws-com-dlfile-bvlc-googlenet-tar-gz-parameter-sha1-0a88e8948b1abca3badfd8d090d6be03f8d7655d-license-unrestricted-https-github-com-bvlc-caffe-tree-master-models-bvlc-googlenet">
 <h1>name: GoogleNet on ImageNet
 SINGA version: 1.0.1
@@ -282,13 +242,13 @@ license: unrestricted https://github.com
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Download the parameter checkpoint file into this folder</p>
+<li><p>Download the parameter checkpoint file into this folder</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
   $ tar xvf bvlc_googlenet.tar.gz
 </pre></div>
 </div>
 </li>
-<li><p class="first">Run the program</p>
+<li><p>Run the program</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  # use cpu
   $ python serve.py -C &amp;
   # use gpu
@@ -296,7 +256,7 @@ license: unrestricted https://github.com
 </pre></div>
 </div>
 </li>
-<li><p class="first">Submit images for classification</p>
+<li><p>Submit images for classification</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ curl -i -F image=@image1.jpg http://localhost:9999/api
   $ curl -i -F image=@image2.jpg http://localhost:9999/api
   $ curl -i -F image=@image3.jpg http://localhost:9999/api
@@ -347,10 +307,10 @@ Refer to <a class="reference external" h
   
     <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
       
-        <a href="../inception/README.html" class="btn btn-neutral float-right" title="name: Inception V4 on ImageNet SINGA version: 1.1.1 SINGA commit: parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56 license: https://github.com/tensorflow/models/tree/master/slim" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
+        <a href="../../../../downloads.html" class="btn btn-neutral float-right" title="Download SINGA" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
       
       
-        <a href="../densenet/README.html" class="btn btn-neutral float-left" title="name: DenseNet models on ImageNet SINGA version: 1.1.1 SINGA commit: license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
+        <a href="../alexnet/README.html" class="btn btn-neutral float-left" title="Train AlexNet over ImageNet" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
       
     </div>
   

Modified: incubator/singa/site/trunk/docs/model_zoo/imagenet/inception/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/imagenet/inception/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/imagenet/inception/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/imagenet/inception/README.html Sat Jun 29 14:42:24 2019
@@ -36,8 +36,6 @@
   <link rel="stylesheet" href="../../../../_static/pygments.css" type="text/css" />
     <link rel="index" title="Index" href="../../../../genindex.html" />
     <link rel="search" title="Search" href="../../../../search.html" />
-    <link rel="next" title="name: Resnets on ImageNet SINGA version: 1.1 SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE" href="../resnet/README.html" />
-    <link rel="prev" title="name: GoogleNet on ImageNet SINGA version: 1.0.1 SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744 parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet" href="../googlenet/README.html" />
     <link href="../../../../_static/style.css" rel="stylesheet" type="text/css">
     <!--link href="../../../../_static/fontawesome-all.min.css" rel="stylesheet" type="text/css"-->
 	<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css" integrity="sha384-DNOHZ68U8hZfKXOrtjWvjxusGo9WQnrNx2sqG0tfsghAvtVlRW3tvkXWZh58N9jp" crossorigin="anonymous">
@@ -100,67 +98,8 @@
               
             
             
-              <ul class="current">
-<li class="toctree-l1 current"><a class="reference internal" href="../../../index.html">Documentation</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="../../../installation.html">Installation</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../software_stack.html">Software Stack</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../device.html">Device</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../tensor.html">Tensor</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../layer.html">Layer</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../net.html">FeedForward Net</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../initializer.html">Initializer</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../loss.html">Loss</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../metric.html">Metric</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../optimizer.html">Optimizer</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../autograd.html">Autograd in Singa</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../data.html">Data</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../image_tool.html">Image Tool</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../snapshot.html">Snapshot</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../converter.html">Caffe Converter</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../utils.html">Utils</a></li>
-<li class="toctree-l2 current"><a class="reference internal" href="../../index.html">Model Zoo</a><ul class="current">
-<li class="toctree-l3"><a class="reference internal" href="../../cifar10/README.html">Train CNN over Cifar-10</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../../char-rnn/README.html">Train Char-RNN over plain text</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../../mnist/README.html">Train a RBM model against MNIST dataset</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../alexnet/README.html">Train AlexNet over ImageNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html">name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html#image-classification-using-densenet">Image Classification using DenseNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../googlenet/README.html">name: GoogleNet on ImageNet
-SINGA version: 1.0.1
-SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
-parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
-license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../googlenet/README.html#image-classification-using-googlenet">Image Classification using GoogleNet</a></li>
-<li class="toctree-l3 current"><a class="current reference internal" href="#">name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#image-classification-using-inception-v4">Image Classification using Inception V4</a><ul>
-<li class="toctree-l4"><a class="reference internal" href="#instructions">Instructions</a></li>
-<li class="toctree-l4"><a class="reference internal" href="#details">Details</a></li>
-</ul>
-</li>
-<li class="toctree-l3"><a class="reference internal" href="../resnet/README.html">name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../resnet/README.html#image-classification-using-residual-networks">Image Classification using Residual Networks</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../vgg/README.html">name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../vgg/README.html#image-classification-using-vgg">Image Classification using VGG</a></li>
-</ul>
-</li>
-</ul>
-</li>
+              <ul>
+<li class="toctree-l1"><a class="reference internal" href="../../../index.html">Documentation</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../../../../downloads.html">Download SINGA</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../../../../security.html">Security</a></li>
 </ul>
@@ -224,10 +163,6 @@ license: https://github.com/pytorch/visi
     
       <li><a href="../../../../index.html">Docs</a> &raquo;</li>
         
-          <li><a href="../../../index.html">Documentation</a> &raquo;</li>
-        
-          <li><a href="../../index.html">Model Zoo</a> &raquo;</li>
-        
       <li>name: Inception V4 on ImageNet
 SINGA version: 1.1.1
 SINGA commit:
@@ -250,24 +185,7 @@ license: https://github.com/tensorflow/m
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
---><hr class="docutils" />
+  <hr class="docutils" />
 <div class="section" id="name-inception-v4-on-imagenet-singa-version-1-1-1-singa-commit-parameter-url-https-s3-ap-southeast-1-amazonaws-com-dlfile-inception-v4-tar-gz-parameter-sha1-5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56-license-https-github-com-tensorflow-models-tree-master-slim">
 <h1>name: Inception V4 on ImageNet
 SINGA version: 1.1.1
@@ -282,15 +200,14 @@ license: https://github.com/tensorflow/m
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Download the parameter checkpoint file</p>
+<li><p>Download the parameter checkpoint file</p>
  <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
   $ tar xvf inception_v4.tar.gz
 </pre></div>
 </div>
 </li>
-<li><p class="first">Download <a class="reference external" href="https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh">synset_word.txt</a> file.</p>
-</li>
-<li><p class="first">Run the program</p>
+<li><p>Download the <a class="reference external" href="https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh">synset_words.txt</a> file.</p></li>
+<li><p>Run the program</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  # use cpu
   $ python serve.py -C &amp;
   # use gpu
@@ -298,7 +215,7 @@ license: https://github.com/tensorflow/m
 </pre></div>
 </div>
 </li>
-<li><p class="first">Submit images for classification</p>
+<li><p>Submit images for classification</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ curl -i -F image=@image1.jpg http://localhost:9999/api
   $ curl -i -F image=@image2.jpg http://localhost:9999/api
   $ curl -i -F image=@image3.jpg http://localhost:9999/api
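
For scripted use, the same requests can be issued from Python. The sketch below mirrors the curl commands above and assumes only what they show: serve.py listening on http://localhost:9999/api and a multipart form field named image. The requests library is a third-party dependency, not part of SINGA.

    # Minimal Python client for the serving endpoint; equivalent to the
    # curl examples above. Assumes serve.py is already running locally.
    import requests  # third-party: pip install requests

    def classify(path):
        # The server expects a multipart upload under the field name 'image',
        # matching: curl -i -F image=@image1.jpg http://localhost:9999/api
        with open(path, 'rb') as f:
            resp = requests.post('http://localhost:9999/api', files={'image': f})
        resp.raise_for_status()
        return resp.text  # the response body carries the predicted labels

    for img in ('image1.jpg', 'image2.jpg', 'image3.jpg'):
        print(img, '->', classify(img))
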
@@ -324,15 +241,6 @@ After downloading and decompressing the
           </div>
           <footer>
   
-    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
-      
-        <a href="../resnet/README.html" class="btn btn-neutral float-right" title="name: Resnets on ImageNet SINGA version: 1.1 SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
-      
-      
-        <a href="../googlenet/README.html" class="btn btn-neutral float-left" title="name: GoogleNet on ImageNet SINGA version: 1.0.1 SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744 parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
-      
-    </div>
-  
 
   <hr/>
 

Modified: incubator/singa/site/trunk/docs/model_zoo/imagenet/resnet/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/imagenet/resnet/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/imagenet/resnet/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/imagenet/resnet/README.html Sat Jun 29 14:42:24 2019
@@ -36,8 +36,6 @@
   <link rel="stylesheet" href="../../../../_static/pygments.css" type="text/css" />
     <link rel="index" title="Index" href="../../../../genindex.html" />
     <link rel="search" title="Search" href="../../../../search.html" />
-    <link rel="next" title="name: VGG models on ImageNet SINGA version: 1.1.1 SINGA commit: license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py" href="../vgg/README.html" />
-    <link rel="prev" title="name: Inception V4 on ImageNet SINGA version: 1.1.1 SINGA commit: parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56 license: https://github.com/tensorflow/models/tree/master/slim" href="../inception/README.html" />
     <link href="../../../../_static/style.css" rel="stylesheet" type="text/css">
     <!--link href="../../../../_static/fontawesome-all.min.css" rel="stylesheet" type="text/css"-->
 	<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css" integrity="sha384-DNOHZ68U8hZfKXOrtjWvjxusGo9WQnrNx2sqG0tfsghAvtVlRW3tvkXWZh58N9jp" crossorigin="anonymous">
@@ -100,67 +98,8 @@
               
             
             
-              <ul class="current">
-<li class="toctree-l1 current"><a class="reference internal" href="../../../index.html">Documentation</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="../../../installation.html">Installation</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../software_stack.html">Software Stack</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../device.html">Device</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../tensor.html">Tensor</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../layer.html">Layer</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../net.html">FeedForward Net</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../initializer.html">Initializer</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../loss.html">Loss</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../metric.html">Metric</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../optimizer.html">Optimizer</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../autograd.html">Autograd in Singa</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../data.html">Data</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../image_tool.html">Image Tool</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../snapshot.html">Snapshot</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../converter.html">Caffe Converter</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../utils.html">Utils</a></li>
-<li class="toctree-l2 current"><a class="reference internal" href="../../index.html">Model Zoo</a><ul class="current">
-<li class="toctree-l3"><a class="reference internal" href="../../cifar10/README.html">Train CNN over Cifar-10</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../../char-rnn/README.html">Train Char-RNN over plain text</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../../mnist/README.html">Train a RBM model against MNIST dataset</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../alexnet/README.html">Train AlexNet over ImageNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html">name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html#image-classification-using-densenet">Image Classification using DenseNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../googlenet/README.html">name: GoogleNet on ImageNet
-SINGA version: 1.0.1
-SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
-parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
-license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../googlenet/README.html#image-classification-using-googlenet">Image Classification using GoogleNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../inception/README.html">name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../inception/README.html#image-classification-using-inception-v4">Image Classification using Inception V4</a></li>
-<li class="toctree-l3 current"><a class="current reference internal" href="#">name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#image-classification-using-residual-networks">Image Classification using Residual Networks</a><ul>
-<li class="toctree-l4"><a class="reference internal" href="#instructions">Instructions</a></li>
-<li class="toctree-l4"><a class="reference internal" href="#details">Details</a></li>
-</ul>
-</li>
-<li class="toctree-l3"><a class="reference internal" href="../vgg/README.html">name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../vgg/README.html#image-classification-using-vgg">Image Classification using VGG</a></li>
-</ul>
-</li>
-</ul>
-</li>
+              <ul>
+<li class="toctree-l1"><a class="reference internal" href="../../../index.html">Documentation</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../../../../downloads.html">Download SINGA</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../../../../security.html">Security</a></li>
 </ul>
@@ -224,10 +163,6 @@ license: https://github.com/pytorch/visi
     
       <li><a href="../../../../index.html">Docs</a> &raquo;</li>
         
-          <li><a href="../../../index.html">Documentation</a> &raquo;</li>
-        
-          <li><a href="../../index.html">Model Zoo</a> &raquo;</li>
-        
       <li>name: Resnets on ImageNet
 SINGA version: 1.1
 SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
@@ -249,24 +184,7 @@ license: Apache V2, https://github.com/f
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
---><hr class="docutils" />
+  <hr class="docutils" />
 <div class="section" id="name-resnets-on-imagenet-singa-version-1-1-singa-commit-45ec92d8ffc1fa1385a9307fdf07e21da939ee2f-parameter-url-https-s3-ap-southeast-1-amazonaws-com-dlfile-resnet-resnet-18-tar-gz-license-apache-v2-https-github-com-facebook-fb-resnet-torch-blob-master-license">
 <h1>name: Resnets on ImageNet
 SINGA version: 1.1
@@ -280,19 +198,19 @@ license: Apache V2, https://github.com/f
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
+<li><p>Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
   $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/synset_words.txt
   $ tar xvf resnet-18.tar.gz
 </pre></div>
 </div>
 </li>
-<li><p class="first">Usage</p>
+<li><p>Usage</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ python serve.py -h
 </pre></div>
 </div>
 </li>
-<li><p class="first">Example</p>
+<li><p>Example</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  # use cpu
   $ python serve.py --use_cpu --parameter_file resnet-18.pickle --model resnet --depth 18 &amp;
   # use gpu
@@ -301,13 +219,13 @@ license: Apache V2, https://github.com/f
 </div>
 <p>The parameter files for the following model and depth configuration pairs are provided:</p>
 <ul class="simple">
-<li>resnet (original resnet), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz">18</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-34.tar.gz">34</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-101.tar.gz">101</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-152.tar.gz">152</a></li>
-<li>addbn (resnet with a batch normalization layer after the addition), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-50.tar.gz">50</a></li>
-<li>wrn (wide resnet), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/wrn-50-2.tar.gz">50</a></li>
-<li>preact (resnet with pre-activation) <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-200.tar.gz">200</a></li>
+<li><p>resnet (original resnet), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz">18</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-34.tar.gz">34</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-101.tar.gz">101</a>|<a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-152.tar.gz">152</a></p></li>
+<li><p>addbn (resnet with a batch normalization layer after the addition), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-50.tar.gz">50</a></p></li>
+<li><p>wrn (wide resnet), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/wrn-50-2.tar.gz">50</a></p></li>
+<li><p>preact (resnet with pre-activation), <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-200.tar.gz">200</a></p></li>
 </ul>
 </li>
-<li><p class="first">Submit images for classification</p>
+<li><p>Submit images for classification</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ curl -i -F image=@image1.jpg http://localhost:9999/api
   $ curl -i -F image=@image2.jpg http://localhost:9999/api
   $ curl -i -F image=@image3.jpg http://localhost:9999/api
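
Each model and depth pair above corresponds to one checkpoint archive. As a convenience, the standard-library sketch below resolves a (model, depth) pair to its documented URL and unpacks it; the table contains only the links listed above, and the parameter file name used afterwards (e.g. resnet-18.pickle) follows the serve.py example.

    # Fetch and unpack one of the documented resnet checkpoints using only
    # the Python standard library (equivalent to wget + tar xvf).
    import tarfile
    import urllib.request

    BASE = 'https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/'
    CHECKPOINTS = {  # (model, depth) -> archive name, from the list above
        ('resnet', 18): 'resnet-18.tar.gz',
        ('resnet', 34): 'resnet-34.tar.gz',
        ('resnet', 101): 'resnet-101.tar.gz',
        ('resnet', 152): 'resnet-152.tar.gz',
        ('addbn', 50): 'resnet-50.tar.gz',
        ('wrn', 50): 'wrn-50-2.tar.gz',
        ('preact', 200): 'resnet-200.tar.gz',
    }

    def fetch(model, depth):
        name = CHECKPOINTS[(model, depth)]
        urllib.request.urlretrieve(BASE + name, name)
        with tarfile.open(name) as tar:  # auto-detects the gzip compression
            tar.extractall()

    fetch('resnet', 18)  # then: python serve.py --parameter_file resnet-18.pickle --model resnet --depth 18
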
@@ -334,15 +252,6 @@ the convert.py program.</p>
           </div>
           <footer>
   
-    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
-      
-        <a href="../vgg/README.html" class="btn btn-neutral float-right" title="name: VGG models on ImageNet SINGA version: 1.1.1 SINGA commit: license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
-      
-      
-        <a href="../inception/README.html" class="btn btn-neutral float-left" title="name: Inception V4 on ImageNet SINGA version: 1.1.1 SINGA commit: parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56 license: https://github.com/tensorflow/models/tree/master/slim" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
-      
-    </div>
-  
 
   <hr/>
 

Modified: incubator/singa/site/trunk/docs/model_zoo/imagenet/vgg/README.html
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/docs/model_zoo/imagenet/vgg/README.html?rev=1862313&r1=1862312&r2=1862313&view=diff
==============================================================================
--- incubator/singa/site/trunk/docs/model_zoo/imagenet/vgg/README.html (original)
+++ incubator/singa/site/trunk/docs/model_zoo/imagenet/vgg/README.html Sat Jun 29 14:42:24 2019
@@ -36,8 +36,6 @@
   <link rel="stylesheet" href="../../../../_static/pygments.css" type="text/css" />
     <link rel="index" title="Index" href="../../../../genindex.html" />
     <link rel="search" title="Search" href="../../../../search.html" />
-    <link rel="next" title="Download SINGA" href="../../../../downloads.html" />
-    <link rel="prev" title="name: Resnets on ImageNet SINGA version: 1.1 SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE" href="../resnet/README.html" />
     <link href="../../../../_static/style.css" rel="stylesheet" type="text/css">
     <!--link href="../../../../_static/fontawesome-all.min.css" rel="stylesheet" type="text/css"-->
 	<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css" integrity="sha384-DNOHZ68U8hZfKXOrtjWvjxusGo9WQnrNx2sqG0tfsghAvtVlRW3tvkXWZh58N9jp" crossorigin="anonymous">
@@ -100,67 +98,8 @@
               
             
             
-              <ul class="current">
-<li class="toctree-l1 current"><a class="reference internal" href="../../../index.html">Documentation</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="../../../installation.html">Installation</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../software_stack.html">Software Stack</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../device.html">Device</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../tensor.html">Tensor</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../layer.html">Layer</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../net.html">FeedForward Net</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../initializer.html">Initializer</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../loss.html">Loss</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../metric.html">Metric</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../optimizer.html">Optimizer</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../autograd.html">Autograd in Singa</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../data.html">Data</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../image_tool.html">Image Tool</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../snapshot.html">Snapshot</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../converter.html">Caffe Converter</a></li>
-<li class="toctree-l2"><a class="reference internal" href="../../../utils.html">Utils</a></li>
-<li class="toctree-l2 current"><a class="reference internal" href="../../index.html">Model Zoo</a><ul class="current">
-<li class="toctree-l3"><a class="reference internal" href="../../cifar10/README.html">Train CNN over Cifar-10</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../../char-rnn/README.html">Train Char-RNN over plain text</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../../mnist/README.html">Train a RBM model against MNIST dataset</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../alexnet/README.html">Train AlexNet over ImageNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html">name: DenseNet models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../densenet/README.html#image-classification-using-densenet">Image Classification using DenseNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../googlenet/README.html">name: GoogleNet on ImageNet
-SINGA version: 1.0.1
-SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
-parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
-license: unrestricted https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../googlenet/README.html#image-classification-using-googlenet">Image Classification using GoogleNet</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../inception/README.html">name: Inception V4 on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/inception_v4.tar.gz
-parameter_sha1: 5fdd6f5d8af8fd10e7321d9b38bb87ef14e80d56
-license: https://github.com/tensorflow/models/tree/master/slim</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../inception/README.html#image-classification-using-inception-v4">Image Classification using Inception V4</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../resnet/README.html">name: Resnets on ImageNet
-SINGA version: 1.1
-SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
-parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
-license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE</a></li>
-<li class="toctree-l3"><a class="reference internal" href="../resnet/README.html#image-classification-using-residual-networks">Image Classification using Residual Networks</a></li>
-<li class="toctree-l3 current"><a class="current reference internal" href="#">name: VGG models on ImageNet
-SINGA version: 1.1.1
-SINGA commit:
-license: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#image-classification-using-vgg">Image Classification using VGG</a><ul>
-<li class="toctree-l4"><a class="reference internal" href="#instructions">Instructions</a></li>
-<li class="toctree-l4"><a class="reference internal" href="#details">Details</a></li>
-</ul>
-</li>
-</ul>
-</li>
-</ul>
-</li>
+              <ul>
+<li class="toctree-l1"><a class="reference internal" href="../../../index.html">Documentation</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../../../../downloads.html">Download SINGA</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../../../../security.html">Security</a></li>
 </ul>
@@ -224,10 +163,6 @@ license: https://github.com/pytorch/visi
     
       <li><a href="../../../../index.html">Docs</a> &raquo;</li>
         
-          <li><a href="../../../index.html">Documentation</a> &raquo;</li>
-        
-          <li><a href="../../index.html">Model Zoo</a> &raquo;</li>
-        
       <li>name: VGG models on ImageNet
 SINGA version: 1.1.1
 SINGA commit:
@@ -248,24 +183,7 @@ license: https://github.com/pytorch/visi
           <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
            <div itemprop="articleBody">
             
-  <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
---><hr class="docutils" />
+  <hr class="docutils" />
 <div class="section" id="name-vgg-models-on-imagenet-singa-version-1-1-1-singa-commit-license-https-github-com-pytorch-vision-blob-master-torchvision-models-vgg-py">
 <h1>name: VGG models on ImageNet
 SINGA version: 1.1.1
@@ -279,32 +197,32 @@ to SINGA for image classification.</p>
 <div class="section" id="instructions">
 <h2>Instructions<a class="headerlink" href="#instructions" title="Permalink to this headline">¶</a></h2>
 <ul>
-<li><p class="first">Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
+<li><p>Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11.tar.gz
   $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/synset_words.txt
   $ tar xvf vgg11.tar.gz
 </pre></div>
 </div>
 </li>
-<li><p class="first">Usage</p>
+<li><p>Usage</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ python serve.py -h
 </pre></div>
 </div>
 </li>
-<li><p class="first">Example</p>
+<li><p>Example</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  # use cpu
   $ python serve.py --use_cpu --parameter_file vgg11.pickle --depth 11 &amp;
   # use gpu
   $ python serve.py --parameter_file vgg11.pickle --depth 11 &amp;
 </pre></div>
 </div>
 <p>The parameter files for the following model and depth configuration pairs are provided:</p>
 <ul class="simple">
-<li>Without batch-normalization, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11.tar.gz">11</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg13.tar.gz">13</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg16.tar.gz">16</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg19.tar.gz">19</a></li>
-<li>With batch-normalization, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11_bn.tar.gz">11</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg13_bn.tar.gz">13</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg16_bn.tar.gz">16</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg19_bn.tar.gz">19</a></li>
+<li><p>Without batch-normalization, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11.tar.gz">11</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg13.tar.gz">13</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg16.tar.gz">16</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg19.tar.gz">19</a></p></li>
+<li><p>With batch-normalization, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg11_bn.tar.gz">11</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg13_bn.tar.gz">13</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg16_bn.tar.gz">16</a>, <a class="reference external" href="https://s3-ap-southeast-1.amazonaws.com/dlfile/vgg/vgg19_bn.tar.gz">19</a></p></li>
 </ul>
 </li>
-<li><p class="first">Submit images for classification</p>
+<li><p>Submit images for classification</p>
 <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>  $ curl -i -F image=@image1.jpg http://localhost:9999/api
   $ curl -i -F image=@image2.jpg http://localhost:9999/api
   $ curl -i -F image=@image3.jpg http://localhost:9999/api
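
Note that in the example above, --use_cpu appears only in the CPU launch; omitting it selects the GPU. Purely as an illustration of that convention (a hypothetical sketch, not serve.py's actual source), such a flag is typically wired to device selection along these lines:

    # Hypothetical sketch of how a --use_cpu flag can drive device selection.
    # serve.py's real implementation may differ; shown only to explain the flag.
    import argparse
    from singa import device  # assumes a SINGA build with the Python package

    parser = argparse.ArgumentParser()
    parser.add_argument('--use_cpu', action='store_true',
                        help='run on the host CPU instead of a CUDA GPU')
    args, _ = parser.parse_known_args()

    # With --use_cpu the default (host) device is used; without it a CUDA
    # device is created, which is why the flag is absent from the GPU command.
    dev = device.get_default_device() if args.use_cpu else device.create_cuda_gpu()
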
@@ -330,15 +248,6 @@ to SINGA for image classification.</p>
           </div>
           <footer>
   
-    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
-      
-        <a href="../../../../downloads.html" class="btn btn-neutral float-right" title="Download SINGA" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
-      
-      
-        <a href="../resnet/README.html" class="btn btn-neutral float-left" title="name: Resnets on ImageNet SINGA version: 1.1 SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f parameter_url: https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz license: Apache V2, https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
-      
-    </div>
-  
 
   <hr/>