Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/07/13 22:27:14 UTC

[GitHub] [incubator-mxnet] ys2843 commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

ys2843 commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r453979739



##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/2-nn.md
##########
@@ -15,47 +15,50 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Create a neural network
+# Step 2: Create a neural network
 
-Now let's look how to create neural networks in Gluon. In addition the NDArray package (`nd`) that we just covered, we now will also import the neural network `nn` package from `gluon`.
+In this step, you learn how to use NP on MXNet to create neural networks in Gluon. In addition to the `np` package that you learned about in the previous step [Step 1: Manipulate data with NP on MXNet](1-ndarray.md), you also import the neural network `nn` package from `gluon`.
+
+Use the following commands to import the packages required for this step.
 
 ```{.python .input  n=2}
-from mxnet import nd
+from mxnet import np, npx
 from mxnet.gluon import nn
+npx.set_np()  # Change MXNet to the numpy-like mode.
 ```
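+
+As a quick optional check (a minimal sketch, not part of the original tutorial; it assumes only the imports above), you can confirm that the numpy-like mode is active:
+
+```{.python .input}
+a = np.array([[1, 2], [3, 4]])
+# With npx.set_np() active, this is an mxnet.numpy ndarray.
+print(type(a), a.shape)
+```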
 
 ## Create your neural network's first layer
 
-Let's start with a dense layer with 2 output units.
+Use the following code example to start with a dense layer that has two output units.
 <!-- mention what the none and the linear parts mean? -->
 
 ```{.python .input  n=31}
 layer = nn.Dense(2)
 layer
 ```
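+
+If you already know the input size, you can declare it at construction time instead of relying on shape inference (a minimal sketch; `layer_explicit` is a hypothetical name, and `in_units=4` matches the argument mentioned later in this tutorial):
+
+```{.python .input}
+# Declaring in_units up front fixes the weight shape at construction time.
+layer_explicit = nn.Dense(2, in_units=4)
+layer_explicit
+```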
 
-Then initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$.
+Initialize its weights with the default initialization method, which draws random values uniformly from $[-0.07, 0.07]$. You can see this in the following example.
 
 ```{.python .input  n=32}
 layer.initialize()
 ```
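+
+If you want something other than the default initializer, you can pass one explicitly (a minimal sketch, not part of the original tutorial; `layer2` is a hypothetical second layer used so the example above keeps its default weights):
+
+```{.python .input}
+from mxnet import init
+
+# Initialize from a zero-mean normal distribution instead of the default uniform.
+layer2 = nn.Dense(2)
+layer2.initialize(init=init.Normal(sigma=0.01))
+```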
 
-Then we do a forward pass with random data. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
+Do a forward pass with random data, shown in the following example. Create a $(3,4)$ shape random input `x` and feed it into the layer to compute the output.
 
 ```{.python .input  n=34}
-x = nd.random.uniform(-1,1,(3,4))
+x = np.random.uniform(-1,1,(3,4))
 layer(x)
 ```
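+
+As a small optional check (a sketch, not in the original tutorial), you can confirm the shapes involved without reading the weight values themselves:
+
+```{.python .input}
+# One output row per input row, one column per output unit.
+print(layer(x).shape)      # (3, 2)
+# The weight shape was inferred on the first forward pass: (units, in_units).
+print(layer.weight.shape)  # (2, 4)
+```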
 
-As can be seen, the layer's input limit of 2 produced a $(3,2)$ shape output from our $(3,4)$ input. Note that we didn't specify the input size of `layer` before (though we can specify it with the argument `in_units=4` here), the system will automatically infer it during the first time we feed in data, create and initialize the weights. So we can access the weight after the first forward pass:
+As can be seen, the layer's output size of two produced a $(3,2)$ shape output from the $(3,4)$ input. You didn't specify the input size of `layer` beforehand, though you can specify it with the argument `in_units=4` here. The system automatically infers the input size the first time you feed in data, then creates and initializes the weights. You can access the weight after the first forward pass, as shown in this example.
 
 ```{.python .input  n=35}
-layer.weight.data()
+# layer.weight.data() # FIXME

Review comment:
       Sorry for these errors. The contents were copied directly from the NumPy site; I think someone who is familiar with the docs needs to review them.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org