Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/01/02 21:48:20 UTC

[GitHub] szha closed pull request #9123: Update module.md to include results of the code snippet

szha closed pull request #9123: Update module.md to include results of the code snippet
URL: https://github.com/apache/incubator-mxnet/pull/9123

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/docs/tutorials/basic/module.md b/docs/tutorials/basic/module.md
index e0618ca65e..48f9086034 100644
--- a/docs/tutorials/basic/module.md
+++ b/docs/tutorials/basic/module.md
@@ -1,3 +1,4 @@
+
 # Module - Neural network training and inference
 
 Training a neural network involves quite a few steps. One needs to specify how
@@ -35,6 +36,7 @@ The following code downloads the dataset and creates an 80:20 train:test
 split. It also initializes a training data iterator to return a batch of 32
 training examples each time. A separate iterator is also created for test data.
 
+
 ```python
 import logging
 logging.getLogger().setLevel(logging.INFO)
@@ -51,8 +53,10 @@ train_iter = mx.io.NDArrayIter(data[:ntrain, :], label[:ntrain], batch_size, shu
 val_iter = mx.io.NDArrayIter(data[ntrain:, :], label[ntrain:], batch_size)
 ```
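For context, `NDArrayIter` wraps in-memory arrays and yields `DataBatch` objects with `data` and `label` fields. A minimal sketch with random stand-in data (not the actual letter-recognition set):

```python
import numpy as np
import mxnet as mx

# Stand-in data: 100 rows of 16 features, labels in [0, 26).
X = np.random.uniform(size=(100, 16))
labels = np.random.randint(0, 26, size=(100,))

it = mx.io.NDArrayIter(X, labels, batch_size=32, shuffle=True)
batch = next(iter(it))
print(batch.data[0].shape)   # (32, 16)
print(batch.label[0].shape)  # (32,)
```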
 
+
 Next, we define the network.
 
+
 ```python
 net = mx.sym.Variable('data')
 net = mx.sym.FullyConnected(net, name='fc1', num_hidden=64)
@@ -62,6 +66,13 @@ net = mx.sym.SoftmaxOutput(net, name='softmax')
 mx.viz.plot_network(net)
 ```
 
+
+
+
+![svg](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/basic/module/output_3_0.svg?sanitize=true)
+
+
+
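As a quick, illustrative check (not part of the PR), the symbol's inputs and inferred output shape can be inspected before binding; this assumes the layers elided from the hunk end in a 26-way `SoftmaxOutput`, consistent with the assertions later in the tutorial:

```python
# Learnable arguments plus the data and label inputs of the symbol.
print(net.list_arguments())
# e.g. ['data', 'fc1_weight', 'fc1_bias', 'fc2_weight', 'fc2_bias', 'softmax_label']

# Infer shapes for a batch of 32 examples with 16 features each.
arg_shapes, out_shapes, aux_shapes = net.infer_shape(data=(32, 16))
print(out_shapes)  # [(32, 26)]: one 26-class probability row per example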
 ## Creating a Module
 
 Now we are ready to introduce the module. The commonly used module class is
@@ -75,6 +86,7 @@ Now we are ready to introduce module. The commonly used module class is
 For `net`, we have only one data input, named `data`, and one label, named `softmax_label`,
 which is automatically named for us following the name `softmax` we specified for the `SoftmaxOutput` operator.
 
+
 ```python
 mod = mx.mod.Module(symbol=net,
                     context=mx.cpu(),
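The hunk truncates the constructor call; in the full tutorial source it continues with the input names discussed above. Roughly (a sketch matching the stated defaults):

```python
mod = mx.mod.Module(symbol=net,
                    context=mx.cpu(),
                    data_names=['data'],
                    label_names=['softmax_label'])
```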
@@ -100,6 +112,7 @@ To train a module, we need to perform following steps:
 
 This can be used as follows:
 
+
 ```python
 # allocate memory given the input data and label shapes
 mod.bind(data_shapes=train_iter.provide_data, label_shapes=train_iter.provide_label)
@@ -121,6 +134,13 @@ for epoch in range(5):
     print('Epoch %d, Training %s' % (epoch, metric.get()))
 ```
 
+    Epoch 0, Training ('accuracy', 0.4554375)
+    Epoch 1, Training ('accuracy', 0.6485625)
+    Epoch 2, Training ('accuracy', 0.7055625)
+    Epoch 3, Training ('accuracy', 0.7396875)
+    Epoch 4, Training ('accuracy', 0.764375)
+
+
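The training loop is split across the two hunks above; for readability, here is a sketch of the complete intermediate-level sequence the tutorial walks through (bind, initialize, then manual forward/backward/update per batch). Exact hyperparameters are illustrative:

```python
# Allocate memory, initialize parameters and the optimizer.
mod.bind(data_shapes=train_iter.provide_data,
         label_shapes=train_iter.provide_label)
mod.init_params(initializer=mx.init.Uniform(scale=.1))
mod.init_optimizer(optimizer='sgd', optimizer_params=(('learning_rate', 0.1),))
metric = mx.metric.create('acc')

for epoch in range(5):
    train_iter.reset()
    metric.reset()
    for batch in train_iter:
        mod.forward(batch, is_train=True)       # compute outputs
        mod.update_metric(metric, batch.label)  # accumulate accuracy
        mod.backward()                          # compute gradients
        mod.update()                            # apply the SGD step
    print('Epoch %d, Training %s' % (epoch, metric.get()))
```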
 To learn more about these APIs, visit [Module API](http://mxnet.io/api/python/module.html).
 
 ## High-level Interface
@@ -134,6 +154,7 @@ and it internally executes the same steps.
 
 To fit a module, call the `fit` function as follows:
 
+
 ```python
 # reset train_iter to the beginning
 train_iter.reset()
@@ -153,6 +174,32 @@ mod.fit(train_iter,
         num_epoch=8)
 ```
 
+    INFO:root:Epoch[0] Train-accuracy=0.364625
+    INFO:root:Epoch[0] Time cost=0.388
+    INFO:root:Epoch[0] Validation-accuracy=0.557250
+    INFO:root:Epoch[1] Train-accuracy=0.633625
+    INFO:root:Epoch[1] Time cost=0.470
+    INFO:root:Epoch[1] Validation-accuracy=0.634750
+    INFO:root:Epoch[2] Train-accuracy=0.697187
+    INFO:root:Epoch[2] Time cost=0.402
+    INFO:root:Epoch[2] Validation-accuracy=0.665500
+    INFO:root:Epoch[3] Train-accuracy=0.735062
+    INFO:root:Epoch[3] Time cost=0.402
+    INFO:root:Epoch[3] Validation-accuracy=0.713000
+    INFO:root:Epoch[4] Train-accuracy=0.762563
+    INFO:root:Epoch[4] Time cost=0.408
+    INFO:root:Epoch[4] Validation-accuracy=0.742000
+    INFO:root:Epoch[5] Train-accuracy=0.782312
+    INFO:root:Epoch[5] Time cost=0.400
+    INFO:root:Epoch[5] Validation-accuracy=0.778500
+    INFO:root:Epoch[6] Train-accuracy=0.797188
+    INFO:root:Epoch[6] Time cost=0.392
+    INFO:root:Epoch[6] Validation-accuracy=0.798250
+    INFO:root:Epoch[7] Train-accuracy=0.807750
+    INFO:root:Epoch[7] Time cost=0.401
+    INFO:root:Epoch[7] Validation-accuracy=0.789250
+
+
 By default, the `fit` function has `eval_metric` set to `accuracy`, `optimizer` to `sgd`,
 and `optimizer_params` to `(('learning_rate', 0.01),)`.
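These defaults can be overridden through the same keyword arguments; an illustrative sketch (the optimizer choice and values are examples, not from the PR):

```python
mod.fit(train_iter,
        eval_data=val_iter,
        optimizer='adam',
        optimizer_params=(('learning_rate', 0.001),),
        eval_metric='ce',   # cross-entropy instead of accuracy
        num_epoch=8)
```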
 
@@ -161,6 +208,7 @@ and optimizer_params to `(('learning_rate', 0.01),)`.
 To predict with a module, we can call `predict()`. It collects and
 returns all the prediction results.
 
+
 ```python
 y = mod.predict(val_iter)
 assert y.shape == (4000, 26)
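Each row of `y` holds the 26 class probabilities for one validation example; as a small follow-on sketch, the predicted class indices are the per-row argmax:

```python
predicted = y.argmax(axis=1)     # NDArray of shape (4000,)
print(predicted[:10].asnumpy())  # first ten predicted letter indices
```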
@@ -172,11 +220,15 @@ dataset and evaluates the performance according to the given input metric.
 
 It can be used as follows:
 
+
 ```python
 score = mod.score(val_iter, ['acc'])
 print("Accuracy score is %f" % (score[0][1]))
 ```
 
+    Accuracy score is 0.789250
+
+
 Some of the other metrics that can be used are `top_k_acc` (top-k accuracy),
 `F1`, `RMSE`, `MSE`, `MAE`, and `ce` (cross-entropy). To learn more about the metrics,
 visit [Evaluation metric](http://mxnet.io/api/python/metric.html).
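`score` accepts any of these metric names, singly or as a list; for instance (an illustrative sketch):

```python
# Evaluate several metrics in one pass; score returns (name, value) pairs.
for name, value in mod.score(val_iter, ['acc', 'ce']):
    print('%s = %f' % (name, value))
```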
@@ -188,6 +240,7 @@ and tune these parameters to get best score.
 
 We can save the module parameters after each training epoch by using a checkpoint callback.
 
+
 ```python
 # construct a callback function to save checkpoints
 model_prefix = 'mx_mlp'
@@ -197,10 +250,28 @@ mod = mx.mod.Module(symbol=net)
 mod.fit(train_iter, num_epoch=5, epoch_end_callback=checkpoint)
 ```
 
+    INFO:root:Epoch[0] Train-accuracy=0.101062
+    INFO:root:Epoch[0] Time cost=0.422
+    INFO:root:Saved checkpoint to "mx_mlp-0001.params"
+    INFO:root:Epoch[1] Train-accuracy=0.263313
+    INFO:root:Epoch[1] Time cost=0.785
+    INFO:root:Saved checkpoint to "mx_mlp-0002.params"
+    INFO:root:Epoch[2] Train-accuracy=0.452188
+    INFO:root:Epoch[2] Time cost=0.624
+    INFO:root:Saved checkpoint to "mx_mlp-0003.params"
+    INFO:root:Epoch[3] Train-accuracy=0.544125
+    INFO:root:Epoch[3] Time cost=0.427
+    INFO:root:Saved checkpoint to "mx_mlp-0004.params"
+    INFO:root:Epoch[4] Train-accuracy=0.605250
+    INFO:root:Epoch[4] Time cost=0.399
+    INFO:root:Saved checkpoint to "mx_mlp-0005.params"
+
+
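The construction of `checkpoint` falls between the two hunks above; it is the standard callback factory, roughly (a sketch consistent with the log output):

```python
# Returns an epoch_end_callback that saves "<prefix>-%04d.params" each epoch.
checkpoint = mx.callback.do_checkpoint(model_prefix)
```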
 To load the saved module parameters, call the `load_checkpoint` function. It
 loads the Symbol and the associated parameters. We can then set the loaded
 parameters into the module.
 
+
 ```python
 sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, 3)
 assert sym.tojson() == net.tojson()
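The assignment step mentioned in the prose ("set the loaded parameters into the module") is elided by the hunk; a sketch of the usual call:

```python
# Assign the loaded weights to an already-bound module.
mod.set_params(arg_params, aux_params)
```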
@@ -215,6 +286,7 @@ parameters, so that `fit()` knows to start from those parameters instead of
 initializing randomly from scratch. We also set the `begin_epoch` parameter so that
 `fit()` knows we are resuming from a previously saved epoch.
 
+
 ```python
 mod = mx.mod.Module(symbol=sym)
 mod.fit(train_iter,
@@ -224,4 +296,19 @@ mod.fit(train_iter,
         begin_epoch=3)
 ```
 
+    INFO:root:Epoch[3] Train-accuracy=0.544125
+    INFO:root:Epoch[3] Time cost=0.398
+    INFO:root:Epoch[4] Train-accuracy=0.605250
+    INFO:root:Epoch[4] Time cost=0.545
+    INFO:root:Epoch[5] Train-accuracy=0.644312
+    INFO:root:Epoch[5] Time cost=0.592
+    INFO:root:Epoch[6] Train-accuracy=0.675000
+    INFO:root:Epoch[6] Time cost=0.491
+    INFO:root:Epoch[7] Train-accuracy=0.695812
+    INFO:root:Epoch[7] Time cost=0.363
+
+
+
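For reference, the truncated `fit` call above passes the loaded parameters and the starting epoch; per the surrounding prose it looks roughly like:

```python
mod = mx.mod.Module(symbol=sym)
mod.fit(train_iter,
        num_epoch=8,
        arg_params=arg_params,   # loaded from the epoch-3 checkpoint
        aux_params=aux_params,
        begin_epoch=3)           # resume counting from the saved epoch
```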
 <!-- INSERT SOURCE DOWNLOAD BUTTONS -->
+
+

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services