Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/06/29 03:25:16 UTC

[GitHub] marcoabreu closed pull request #11465: [MXNET-607] Fix the broken links reported by the new BLC

marcoabreu closed pull request #11465: [MXNET-607] Fix the broken links reported by the new BLC
URL: https://github.com/apache/incubator-mxnet/pull/11465

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/docs/tutorials/gluon/mnist.md b/docs/tutorials/gluon/mnist.md
index 3a2a2cbe01b..5b8a98a3d66 100644
--- a/docs/tutorials/gluon/mnist.md
+++ b/docs/tutorials/gluon/mnist.md
@@ -77,7 +77,7 @@ In an MLP, the outputs of most FC layers are fed into an activation function, wh
 The following code declares three fully connected layers with 128, 64, and 10 neurons, respectively.
 The last fully connected layer often has its hidden size equal to the number of output classes in the dataset. Furthermore, these FC layers use ReLU activation to perform an element-wise ReLU transformation on the FC layer output.
 
-To do this, we will use [Sequential layer](http://mxnet.io/api/python/gluon.html#mxnet.gluon.nn.Sequential) type. This is simply a linear stack of neural network layers. `nn.Dense` layers are nothing but the fully connected layers we discussed above.
+To do this, we will use [Sequential layer](http://mxnet.io/api/python/gluon/gluon.html#mxnet.gluon.nn.Sequential) type. This is simply a linear stack of neural network layers. `nn.Dense` layers are nothing but the fully connected layers we discussed above.
 
 ```python
 # define network
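
The hunk above cuts off at the opening of the tutorial's code block. A minimal sketch of the network the prose describes (three Dense layers of 128, 64, and 10 units, with ReLU on the hidden layers), assuming the MXNet 1.x Gluon API, might look like:

```python
import mxnet as mx
from mxnet import gluon

# A Sequential block is a linear stack of layers; nn.Dense is the
# fully connected layer discussed in the prose above.
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(128, activation='relu'))  # hidden layer 1
    net.add(gluon.nn.Dense(64, activation='relu'))   # hidden layer 2
    net.add(gluon.nn.Dense(10))                      # one output per class
```
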
@@ -90,13 +90,13 @@ with net.name_scope():
 
 #### Initialize parameters and optimizer
 
-The following source code initializes all parameters received from parameter dict using [Xavier](http://mxnet.io/api/python/optimization.html#mxnet.initializer.Xavier) initializer
+The following source code initializes all parameters received from parameter dict using [Xavier](http://mxnet.io/api/python/optimization/optimization.html#mxnet.initializer.Xavier) initializer
 to train the MLP network we defined above.
 
 For our training, we will make use of the stochastic gradient descent (SGD) optimizer. In particular, we'll be using mini-batch SGD. Standard SGD processes training data one example at a time. In practice, this is very slow, and one can speed up the process by processing examples in small batches. In this case, our batch size will be 100, which is a reasonable choice. Another parameter we select here is the learning rate, which controls the step size the optimizer takes in search of a solution. We'll pick a learning rate of 0.02, again a reasonable choice. Settings such as batch size and learning rate are usually referred to as hyper-parameters. The values we give them can have a great impact on training performance.
 
-We will use [Trainer](http://mxnet.io/api/python/gluon.html#trainer) class to apply the
-[SGD optimizer](http://mxnet.io/api/python/optimization.html#mxnet.optimizer.SGD) on the
+We will use [Trainer](http://mxnet.io/api/python/gluon/gluon.html#trainer) class to apply the
+[SGD optimizer](http://mxnet.io/api/python/optimization/optimization.html#mxnet.optimizer.SGD) on the
 initialized parameters.
 
 ```python
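
Again the tutorial's code block is truncated by the diff context. Continuing the sketch above (imports as before; `ctx` is an assumed CPU context, though the tutorial may use a GPU), the Xavier initialization and mini-batch SGD trainer described in the prose might look like:

```python
# Initialize all parameters with the Xavier initializer, then create a
# Trainer that applies mini-batch SGD with the learning rate from the prose.
ctx = mx.cpu()  # assumption: CPU context
net.collect_params().initialize(mx.init.Xavier(), ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.02})
```
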
@@ -112,7 +112,7 @@ Typically, one runs the training until convergence, which means that we have lea
 
 We will take the following steps for training:
 
-- Define [Accuracy evaluation metric](http://mxnet.io/api/python/metric.html#mxnet.metric.Accuracy) over training data.
+- Define [Accuracy evaluation metric](http://mxnet.io/api/python/metric/metric.html#mxnet.metric.Accuracy) over training data.
 - Loop over inputs for every epoch.
 - Forward input through network to get output.
 - Compute loss with output and label inside record scope.
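
The first step above defines the evaluation metric. A small self-contained illustration of how `mx.metric.Accuracy` accumulates results over mini-batches (the toy arrays are invented for this example):

```python
import mxnet as mx

metric = mx.metric.Accuracy()
# Toy batch: class scores for two samples plus their ground-truth labels.
preds = mx.nd.array([[0.1, 0.9], [0.8, 0.2]])
labels = mx.nd.array([1, 0])
metric.update(labels=[labels], preds=[preds])  # argmax of preds is compared to labels
print(metric.get())  # ('accuracy', 1.0)
```
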
@@ -121,7 +121,7 @@ We will take following steps for training:
 
 The loss function takes (output, label) pairs and computes a scalar loss for each sample in the mini-batch. The scalars measure how far each output is from the label.
 There are many predefined loss functions in gluon.loss. Here we use
-[softmax_cross_entropy_loss](http://mxnet.io/api/python/gluon.html#mxnet.gluon.loss.softmax_cross_entropy_loss) for digit classification. We will compute loss and do backward propagation inside
+[softmax_cross_entropy_loss](http://mxnet.io/api/python/gluon/gluon.html#mxnet.gluon.loss.softmax_cross_entropy_loss) for digit classification. We will compute loss and do backward propagation inside
 the training scope, which is defined by `autograd.record()`.
 
 ```python
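
Putting the listed steps together: a hedged sketch of the training scope described above, where `net`, `trainer`, and `metric` come from the earlier sketches and `train_data`, `batch_size`, and `epochs` are assumed to be set up elsewhere in the tutorial. Note that the Gluon class behind the `softmax_cross_entropy_loss` link is `SoftmaxCrossEntropyLoss`:

```python
from mxnet import autograd, gluon

softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()

for epoch in range(epochs):
    for data, label in train_data:
        with autograd.record():                  # training scope
            output = net(data)                   # forward pass
            loss = softmax_cross_entropy(output, label)
        loss.backward()                          # backward propagation
        trainer.step(batch_size)                 # apply the SGD update
        metric.update([label], [output])         # track training accuracy
```
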
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala
index 8a57527f355..e886b908ba2 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala
@@ -31,9 +31,9 @@ import scala.collection.mutable.ListBuffer
 /**
   * <p>
   * Example inference showing usage of the Infer package on a resnet-152 model.
-  * @see <a href="https://github.com/apache/incubator-mxnet/tree/m\
-  * aster/scala-package/examples/src/main/scala/org/apache/mxnetexamples/in\
-  * fer/imageclassifier" target="_blank">Instructions to run this example</a>
+  * @see <pre><a href="https://github.com/apache/incubator-mxnet/tree/master/s
+    cala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/im
+    ageclassifier" target="_blank">Instructions to run this example</a></pre>
   */
 object ImageClassifierExample {
 
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala
index b5222e662df..c9707cb3ff6 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala
@@ -33,9 +33,9 @@ import scala.collection.mutable.ListBuffer
   * <p>
   * Example single shot detector (SSD) using the Infer package
   * on a ssd_resnet50_512 model.
-  * @see <a href="https://github.com/apache/incubator-mxnet/tree/master/sca\
-  * la-package/examples/src/main/scala/org/apache/mxnetexamples/infer/object\
-  * detector" target="_blank">Instructions to run this example</a>
+  * @see <pre><a href="https://github.com/apache/incubator-mxnet/tree/master/s
+    cala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/object
+    detector" target="_blank">Instructions to run this example</a></pre>
   */
 class SSDClassifierExample {
   @Option(name = "--model-path-prefix", usage = "the input model directory and prefix of the model")

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services