Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/07/10 21:49:38 UTC

[GitHub] [incubator-mxnet] ys2843 opened a new pull request #18691: Merge numpy.mxnet.io into mxnet official website

ys2843 opened a new pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691


   ## Description ##
   Implementation for issue #18566.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   
   ### Changes ###
   - [ ] Merge contents from numpy.mxnet.io to main site
   
   ## Comments ##
   - Preview: pending
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#issuecomment-656907008


   Hey @ys2843, thanks for submitting the PR.
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: 
   - To trigger all jobs: @mxnet-bot run ci [all] 
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2] 
   *** 
   **CI supported jobs**: [windows-cpu, clang, edge, centos-cpu, unix-gpu, website, unix-cpu, miscellaneous, windows-gpu, sanity, centos-gpu]
   *** 
   _Note_: 
    Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin. 
   All CI tests must pass before the PR can be merged. 
   





[GitHub] [incubator-mxnet] ys2843 commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
ys2843 commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r453979739



##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/2-nn.md
##########
@@ -15,47 +15,50 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Create a neural network
+# Step 2: Create a neural network
 
-Now let's look how to create neural networks in Gluon. In addition the NDArray package (`nd`) that we just covered, we now will also import the neural network `nn` package from `gluon`.
+In this step, you learn how to use NP on MXNet to create neural networks in Gluon. In addition to the `np` package that you learned about in the previous step [Step 1: Manipulate data with NP on MXNet](1-ndarray.md), you also import the neural network `nn` package from `gluon`.
+
+Use the following commands to import the packages required for this step.
 
 ```{.python .input  n=2}
-from mxnet import nd
+from mxnet import np, npx
 from mxnet.gluon import nn
+npx.set_np()  # Change MXNet to the numpy-like mode.
 ```
 
 ## Create your neural network's first layer
 
-Let's start with a dense layer with 2 output units.
+Use the following code example to start with a dense layer with two output units.
 <!-- mention what the none and the linear parts mean? -->
 
 ```{.python .input  n=31}
 layer = nn.Dense(2)
 layer
 ```
 
-Then initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$.
+Initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$. You can see this in the following example.
 
 ```{.python .input  n=32}
 layer.initialize()
 ```
 
-Then we do a forward pass with random data. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
+Do a forward pass with random data, shown in the following example. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
 
 ```{.python .input  n=34}
-x = nd.random.uniform(-1,1,(3,4))
+x = np.random.uniform(-1,1,(3,4))
 layer(x)
 ```
 
-As can be seen, the layer's input limit of 2 produced a $(3,2)$ shape output from our $(3,4)$ input. Note that we didn't specify the input size of `layer` before (though we can specify it with the argument `in_units=4` here), the system will automatically infer it during the first time we feed in data, create and initialize the weights. So we can access the weight after the first forward pass:
+As can be seen, the layer's input limit of two produced a $(3,2)$ shape output from our $(3,4)$ input. You didn't specify the input size of `layer` before, though you can specify it with the argument `in_units=4` here. The system  automatically infers it during the first time you feed in data, create, and initialize the weights. You can access the weight after the first forward pass, as shown in this example.
 
 ```{.python .input  n=35}
-layer.weight.data()
+# layer.weight.data() # FIXME

Review comment:
       Sorry for these errors. The contents were copied directly from the NumPy site; I think someone who is familiar with the docs needs to review them.
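   For reference, the dense-layer walkthrough quoted in the hunk above can be sketched with plain NumPy, which the `np` front end mirrors; this is an illustrative stand-in (weight shape and the uniform $[-0.7, 0.7]$ init follow the quoted text, the rest is assumed), not the PR's code:

   ```python
   import numpy as np  # plain NumPy as a stand-in for mxnet.np

   rng = np.random.default_rng(0)

   x = rng.uniform(-1, 1, (3, 4))        # (3, 4) random input, as in the tutorial
   W = rng.uniform(-0.7, 0.7, (2, 4))    # weights: (units, in_units) = (2, 4),
                                         # with in_units=4 inferred from x
   b = np.zeros(2)

   y = x @ W.T + b                       # dense (fully connected) forward pass
   print(y.shape)                        # (3, 2): two output units per input row
   ```

   This mirrors how `nn.Dense(2)` produces a $(3,2)$ output from a $(3,4)$ input once the input size is known.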







[GitHub] [incubator-mxnet] szha commented on pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
szha commented on pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#issuecomment-657269593


   @leezu it would be great if you could take a look at the content as part of reviewing this PR.





[GitHub] [incubator-mxnet] ys2843 commented on pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
ys2843 commented on pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#issuecomment-657707272


   @mxnet-label-bot add [Website, pr-awaiting-review]





[GitHub] [incubator-mxnet] leezu commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
leezu commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r453968732



##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/1-ndarray.md
##########
@@ -15,113 +15,108 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Manipulate data with `ndarray`
+# Step 1: Manipulate data with NP on MXNet
 
-We'll start by introducing the `NDArray`, MXNet’s primary tool for storing and transforming data. If you’ve worked with `NumPy` before, you’ll notice that an NDArray is, by design, similar to NumPy’s multi-dimensional array.
+This getting started exercise introduces the `np` package, which is the primary tool for storing and
+transforming data on MXNet. If you’ve worked with NumPy before, you’ll notice `np` is, by design, similar to NumPy.
 
-## Get started
+## Import packages and create an array
 
-To get started, let's import the `ndarray` package (`nd` is a shorter alias) from MXNet.
 
-```{.python .input  n=1}
-# If you haven't installed MXNet yet, you can uncomment the following line to
-# install the latest stable release
-# !pip install -U mxnet
+To get started, run the following commands to import the `np` package together with the NumPy extensions package `npx`. Together, `np` with `npx` make up the NP on MXNet front end.
 
-from mxnet import nd
+```{.python .input  n=1}
+from mxnet import np, npx
+npx.set_np()  # Activate NumPy-like mode.

Review comment:
       Linking https://github.com/apache/incubator-mxnet/pull/18631 as the line needs removal after https://github.com/apache/incubator-mxnet/pull/18631 is merged (No need to change the code in this PR)







[GitHub] [incubator-mxnet] ys2843 commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
ys2843 commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r453979739



##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/2-nn.md
##########
@@ -15,47 +15,50 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Create a neural network
+# Step 2: Create a neural network
 
-Now let's look how to create neural networks in Gluon. In addition the NDArray package (`nd`) that we just covered, we now will also import the neural network `nn` package from `gluon`.
+In this step, you learn how to use NP on MXNet to create neural networks in Gluon. In addition to the `np` package that you learned about in the previous step [Step 1: Manipulate data with NP on MXNet](1-ndarray.md), you also import the neural network `nn` package from `gluon`.
+
+Use the following commands to import the packages required for this step.
 
 ```{.python .input  n=2}
-from mxnet import nd
+from mxnet import np, npx
 from mxnet.gluon import nn
+npx.set_np()  # Change MXNet to the numpy-like mode.
 ```
 
 ## Create your neural network's first layer
 
-Let's start with a dense layer with 2 output units.
+Use the following code example to start with a dense layer with two output units.
 <!-- mention what the none and the linear parts mean? -->
 
 ```{.python .input  n=31}
 layer = nn.Dense(2)
 layer
 ```
 
-Then initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$.
+Initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$. You can see this in the following example.
 
 ```{.python .input  n=32}
 layer.initialize()
 ```
 
-Then we do a forward pass with random data. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
+Do a forward pass with random data, shown in the following example. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
 
 ```{.python .input  n=34}
-x = nd.random.uniform(-1,1,(3,4))
+x = np.random.uniform(-1,1,(3,4))
 layer(x)
 ```
 
-As can be seen, the layer's input limit of 2 produced a $(3,2)$ shape output from our $(3,4)$ input. Note that we didn't specify the input size of `layer` before (though we can specify it with the argument `in_units=4` here), the system will automatically infer it during the first time we feed in data, create and initialize the weights. So we can access the weight after the first forward pass:
+As can be seen, the layer's input limit of two produced a $(3,2)$ shape output from our $(3,4)$ input. You didn't specify the input size of `layer` before, though you can specify it with the argument `in_units=4` here. The system  automatically infers it during the first time you feed in data, create, and initialize the weights. You can access the weight after the first forward pass, as shown in this example.
 
 ```{.python .input  n=35}
-layer.weight.data()
+# layer.weight.data() # FIXME

Review comment:
       Sorry for these errors. The contents were copied directly from the NumPy site; I think someone who is familiar with the docs needs to review them. 
   Instead of reviewing the contents here, I was wondering if you would prefer to make changes based on this PR and push them to this branch? 







[GitHub] [incubator-mxnet] ys2843 commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
ys2843 commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r454036935



##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/2-nn.md
##########
@@ -15,47 +15,50 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Create a neural network
+# Step 2: Create a neural network
 
-Now let's look how to create neural networks in Gluon. In addition the NDArray package (`nd`) that we just covered, we now will also import the neural network `nn` package from `gluon`.
+In this step, you learn how to use NP on MXNet to create neural networks in Gluon. In addition to the `np` package that you learned about in the previous step [Step 1: Manipulate data with NP on MXNet](1-ndarray.md), you also import the neural network `nn` package from `gluon`.
+
+Use the following commands to import the packages required for this step.
 
 ```{.python .input  n=2}
-from mxnet import nd
+from mxnet import np, npx
 from mxnet.gluon import nn
+npx.set_np()  # Change MXNet to the numpy-like mode.
 ```
 
 ## Create your neural network's first layer
 
-Let's start with a dense layer with 2 output units.
+Use the following code example to start with a dense layer with two output units.
 <!-- mention what the none and the linear parts mean? -->
 
 ```{.python .input  n=31}
 layer = nn.Dense(2)
 layer
 ```
 
-Then initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$.
+Initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$. You can see this in the following example.
 
 ```{.python .input  n=32}
 layer.initialize()
 ```
 
-Then we do a forward pass with random data. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
+Do a forward pass with random data, shown in the following example. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
 
 ```{.python .input  n=34}
-x = nd.random.uniform(-1,1,(3,4))
+x = np.random.uniform(-1,1,(3,4))
 layer(x)
 ```
 
-As can be seen, the layer's input limit of 2 produced a $(3,2)$ shape output from our $(3,4)$ input. Note that we didn't specify the input size of `layer` before (though we can specify it with the argument `in_units=4` here), the system will automatically infer it during the first time we feed in data, create and initialize the weights. So we can access the weight after the first forward pass:
+As can be seen, the layer's input limit of two produced a $(3,2)$ shape output from our $(3,4)$ input. You didn't specify the input size of `layer` before, though you can specify it with the argument `in_units=4` here. The system  automatically infers it during the first time you feed in data, create, and initialize the weights. You can access the weight after the first forward pass, as shown in this example.
 
 ```{.python .input  n=35}
-layer.weight.data()
+# layer.weight.data() # FIXME

Review comment:
       Done

##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/1-ndarray.md
##########
@@ -15,113 +15,108 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Manipulate data with `ndarray`
+# Step 1: Manipulate data with NP on MXNet
 
-We'll start by introducing the `NDArray`, MXNet’s primary tool for storing and transforming data. If you’ve worked with `NumPy` before, you’ll notice that an NDArray is, by design, similar to NumPy’s multi-dimensional array.
+This getting started exercise introduces the `np` package, which is the primary tool for storing and

Review comment:
       Done







[GitHub] [incubator-mxnet] leezu commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
leezu commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r453969147



##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/2-nn.md
##########
@@ -15,47 +15,50 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Create a neural network
+# Step 2: Create a neural network
 
-Now let's look how to create neural networks in Gluon. In addition the NDArray package (`nd`) that we just covered, we now will also import the neural network `nn` package from `gluon`.
+In this step, you learn how to use NP on MXNet to create neural networks in Gluon. In addition to the `np` package that you learned about in the previous step [Step 1: Manipulate data with NP on MXNet](1-ndarray.md), you also import the neural network `nn` package from `gluon`.
+
+Use the following commands to import the packages required for this step.
 
 ```{.python .input  n=2}
-from mxnet import nd
+from mxnet import np, npx
 from mxnet.gluon import nn
+npx.set_np()  # Change MXNet to the numpy-like mode.
 ```
 
 ## Create your neural network's first layer
 
-Let's start with a dense layer with 2 output units.
+Use the following code example to start with a dense layer with two output units.
 <!-- mention what the none and the linear parts mean? -->
 
 ```{.python .input  n=31}
 layer = nn.Dense(2)
 layer
 ```
 
-Then initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$.
+Initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$. You can see this in the following example.
 
 ```{.python .input  n=32}
 layer.initialize()
 ```
 
-Then we do a forward pass with random data. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
+Do a forward pass with random data, shown in the following example. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
 
 ```{.python .input  n=34}
-x = nd.random.uniform(-1,1,(3,4))
+x = np.random.uniform(-1,1,(3,4))
 layer(x)
 ```
 
-As can be seen, the layer's input limit of 2 produced a $(3,2)$ shape output from our $(3,4)$ input. Note that we didn't specify the input size of `layer` before (though we can specify it with the argument `in_units=4` here), the system will automatically infer it during the first time we feed in data, create and initialize the weights. So we can access the weight after the first forward pass:
+As can be seen, the layer's input limit of two produced a $(3,2)$ shape output from our $(3,4)$ input. You didn't specify the input size of `layer` before, though you can specify it with the argument `in_units=4` here. The system  automatically infers it during the first time you feed in data, create, and initialize the weights. You can access the weight after the first forward pass, as shown in this example.
 
 ```{.python .input  n=35}
-layer.weight.data()
+# layer.weight.data() # FIXME

Review comment:
       FIXME?

##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/1-ndarray.md
##########
@@ -15,113 +15,108 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Manipulate data with `ndarray`
+# Step 1: Manipulate data with NP on MXNet
 
-We'll start by introducing the `NDArray`, MXNet’s primary tool for storing and transforming data. If you’ve worked with `NumPy` before, you’ll notice that an NDArray is, by design, similar to NumPy’s multi-dimensional array.
+This getting started exercise introduces the `np` package, which is the primary tool for storing and
+transforming data on MXNet. If you’ve worked with NumPy before, you’ll notice `np` is, by design, similar to NumPy.
 
-## Get started
+## Import packages and create an array
 
-To get started, let's import the `ndarray` package (`nd` is a shorter alias) from MXNet.
 
-```{.python .input  n=1}
-# If you haven't installed MXNet yet, you can uncomment the following line to
-# install the latest stable release
-# !pip install -U mxnet
+To get started, run the following commands to import the `np` package together with the NumPy extensions package `npx`. Together, `np` with `npx` make up the NP on MXNet front end.
 
-from mxnet import nd
+```{.python .input  n=1}
+from mxnet import np, npx
+npx.set_np()  # Activate NumPy-like mode.

Review comment:
       Linking https://github.com/apache/incubator-mxnet/pull/18631 as the line needs removal after https://github.com/apache/incubator-mxnet/pull/18631 is merged

##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/1-ndarray.md
##########
@@ -15,113 +15,108 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Manipulate data with `ndarray`
+# Step 1: Manipulate data with NP on MXNet
 
-We'll start by introducing the `NDArray`, MXNet’s primary tool for storing and transforming data. If you’ve worked with `NumPy` before, you’ll notice that an NDArray is, by design, similar to NumPy’s multi-dimensional array.
+This getting started exercise introduces the `np` package, which is the primary tool for storing and

Review comment:
       Let's summarize the extent to which `np` is similar and link to a document containing details ` docs/python_docs/python/tutorials/getting-started/deepnumpy/deepnumpy-vs-numpy.md`
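       As a rough illustration of the extent of that similarity: common array idioms are written identically in both front ends. Plain NumPy is used here as a stand-in for `mxnet.np` (the snippet is illustrative, not taken from the PR):

   ```python
   import numpy as np  # the same lines work with `from mxnet import np`

   a = np.ones((2, 3))                 # array creation
   b = np.arange(6).reshape(2, 3)      # reshaping
   c = a + b                           # elementwise ops with NumPy semantics
   print(c.shape, c.sum())             # (2, 3) 21.0
   ```

   The differences (lazy execution, GPU contexts, unsupported corner-case operators) are what the linked `deepnumpy-vs-numpy` document would cover.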







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#issuecomment-657817714


   Jenkins CI successfully triggered : [centos-cpu]








[GitHub] [incubator-mxnet] leezu commented on pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
leezu commented on pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#issuecomment-657817655


   @mxnet-bot run ci [centos-cpu]





[GitHub] [incubator-mxnet] leezu commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
leezu commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r453999062



##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/2-nn.md
##########
@@ -15,47 +15,50 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Create a neural network
+# Step 2: Create a neural network
 
-Now let's look how to create neural networks in Gluon. In addition the NDArray package (`nd`) that we just covered, we now will also import the neural network `nn` package from `gluon`.
+In this step, you learn how to use NP on MXNet to create neural networks in Gluon. In addition to the `np` package that you learned about in the previous step [Step 1: Manipulate data with NP on MXNet](1-ndarray.md), you also import the neural network `nn` package from `gluon`.
+
+Use the following commands to import the packages required for this step.
 
 ```{.python .input  n=2}
-from mxnet import nd
+from mxnet import np, npx
 from mxnet.gluon import nn
+npx.set_np()  # Change MXNet to the numpy-like mode.
 ```
 
 ## Create your neural network's first layer
 
-Let's start with a dense layer with 2 output units.
+Use the following code example to start with a dense layer with two output units.
 <!-- mention what the none and the linear parts mean? -->
 
 ```{.python .input  n=31}
 layer = nn.Dense(2)
 layer
 ```
 
-Then initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$.
+Initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$. You can see this in the following example.
 
 ```{.python .input  n=32}
 layer.initialize()
 ```
 
-Then we do a forward pass with random data. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
+Do a forward pass with random data, shown in the following example. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
 
 ```{.python .input  n=34}
-x = nd.random.uniform(-1,1,(3,4))
+x = np.random.uniform(-1,1,(3,4))
 layer(x)
 ```
 
-As can be seen, the layer's input limit of 2 produced a $(3,2)$ shape output from our $(3,4)$ input. Note that we didn't specify the input size of `layer` before (though we can specify it with the argument `in_units=4` here), the system will automatically infer it during the first time we feed in data, create and initialize the weights. So we can access the weight after the first forward pass:
+As can be seen, the layer's input limit of two produced a $(3,2)$ shape output from our $(3,4)$ input. You didn't specify the input size of `layer` before, though you can specify it with the argument `in_units=4` here. The system  automatically infers it during the first time you feed in data, create, and initialize the weights. You can access the weight after the first forward pass, as shown in this example.
 
 ```{.python .input  n=35}
-layer.weight.data()
+# layer.weight.data() # FIXME

Review comment:
       For the current case, you can just delete the `#` and `FIXME`. Besides this FIXME and above two comments, I think it's fine to go ahead with merging this PR and have further content improvements in separate PRs
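To illustrate the deferred shape inference the tutorial describes (the weight is only created and initialized, uniformly in $[-0.7, 0.7]$, on the first forward pass), here is a framework-free NumPy sketch. `DenseSketch` is a toy stand-in written for this illustration, not MXNet's `nn.Dense` API:

```python
import numpy as np

class DenseSketch:
    """Toy dense layer illustrating deferred shape inference."""
    def __init__(self, units):
        self.units = units
        self.weight = None  # created lazily on the first forward pass

    def __call__(self, x):
        if self.weight is None:
            # Infer the input size from the first batch, then initialize
            # weights uniformly in [-0.7, 0.7], mirroring the tutorial.
            in_units = x.shape[1]
            self.weight = np.random.uniform(-0.7, 0.7, (self.units, in_units))
        return x @ self.weight.T

layer = DenseSketch(2)
x = np.random.uniform(-1, 1, (3, 4))
out = layer(x)
print(out.shape)           # (3, 2)
print(layer.weight.shape)  # (2, 4) -- accessible only after the first forward pass
```

Accessing `layer.weight` before the first call would return `None` here, which is the analogue of why `layer.weight.data()` in the real tutorial must come after the forward pass.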




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] leezu merged pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
leezu merged pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691


   





[GitHub] [incubator-mxnet] szha commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
szha commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r460479156



##########
File path: docs/python_docs/python/tutorials/packages/ndarray/index.rst
##########
@@ -45,10 +45,16 @@ NDArray
 
       For Sparse NDArray tutorials
 
+   .. card::
+      :title: NP on MXNet reference
+      :link: deepnumpy/index.html
+
+      This section contains the mxnet.np API reference documentation
 
 .. toctree::
    :hidden:
    :glob:
 
    *
-   sparse/index
\ No newline at end of file
+   sparse/index
+   deepnumpy/index

Review comment:
       this should be at top level, not under ndarray.
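A sketch of what the suggested top-level placement might look like; the file path and sibling entries below are hypothetical placeholders, not taken from the actual tree:

```rst
.. Hypothetical top-level index (e.g. docs/python_docs/python/index.rst);
   sibling entries are placeholders.

.. toctree::
   :hidden:

   tutorials/index
   api/index
   deepnumpy/index
```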







[GitHub] [incubator-mxnet] ys2843 commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
ys2843 commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r454000097



##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/2-nn.md
##########
@@ -15,47 +15,50 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Create a neural network
+# Step 2: Create a neural network
 
-Now let's look how to create neural networks in Gluon. In addition the NDArray package (`nd`) that we just covered, we now will also import the neural network `nn` package from `gluon`.
+In this step, you learn how to use NP on MXNet to create neural networks in Gluon. In addition to the `np` package that you learned about in the previous step [Step 1: Manipulate data with NP on MXNet](1-ndarray.md), you also import the neural network `nn` package from `gluon`.
+
+Use the following commands to import the packages required for this step.
 
 ```{.python .input  n=2}
-from mxnet import nd
+from mxnet import np, npx
 from mxnet.gluon import nn
+npx.set_np()  # Change MXNet to the numpy-like mode.
 ```
 
 ## Create your neural network's first layer
 
-Let's start with a dense layer with 2 output units.
+Use the following code example to start with a dense layer with two output units.
 <!-- mention what the none and the linear parts mean? -->
 
 ```{.python .input  n=31}
 layer = nn.Dense(2)
 layer
 ```
 
-Then initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$.
+Initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$. You can see this in the following example.
 
 ```{.python .input  n=32}
 layer.initialize()
 ```
 
-Then we do a forward pass with random data. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
+Do a forward pass with random data, as shown in the following example. Create a $(3,4)$ shape random input `x` and feed it into the layer to compute the output.
 
 ```{.python .input  n=34}
-x = nd.random.uniform(-1,1,(3,4))
+x = np.random.uniform(-1,1,(3,4))
 layer(x)
 ```
 
-As can be seen, the layer's input limit of 2 produced a $(3,2)$ shape output from our $(3,4)$ input. Note that we didn't specify the input size of `layer` before (though we can specify it with the argument `in_units=4` here), the system will automatically infer it during the first time we feed in data, create and initialize the weights. So we can access the weight after the first forward pass:
+As can be seen, the layer with two output units produced a $(3,2)$ shape output from our $(3,4)$ input. You didn't specify the input size of `layer` before, though you can specify it with the argument `in_units=4` here. The system automatically infers it the first time you feed in data, creating and initializing the weights. You can access the weight after the first forward pass, as shown in this example.
 
 ```{.python .input  n=35}
-layer.weight.data()
+# layer.weight.data() # FIXME

Review comment:
       Got it, will do. Thank you for reviewing. 







[GitHub] [incubator-mxnet] ys2843 commented on a change in pull request #18691: Merge numpy.mxnet.io into mxnet official website

Posted by GitBox <gi...@apache.org>.
ys2843 commented on a change in pull request #18691:
URL: https://github.com/apache/incubator-mxnet/pull/18691#discussion_r453979739



##########
File path: docs/python_docs/python/tutorials/getting-started/crash-course/2-nn.md
##########
@@ -15,47 +15,50 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# Create a neural network
+# Step 2: Create a neural network
 
-Now let's look how to create neural networks in Gluon. In addition the NDArray package (`nd`) that we just covered, we now will also import the neural network `nn` package from `gluon`.
+In this step, you learn how to use NP on MXNet to create neural networks in Gluon. In addition to the `np` package that you learned about in the previous step [Step 1: Manipulate data with NP on MXNet](1-ndarray.md), you also import the neural network `nn` package from `gluon`.
+
+Use the following commands to import the packages required for this step.
 
 ```{.python .input  n=2}
-from mxnet import nd
+from mxnet import np, npx
 from mxnet.gluon import nn
+npx.set_np()  # Change MXNet to the numpy-like mode.
 ```
 
 ## Create your neural network's first layer
 
-Let's start with a dense layer with 2 output units.
+Use the following code example to start with a dense layer with two output units.
 <!-- mention what the none and the linear parts mean? -->
 
 ```{.python .input  n=31}
 layer = nn.Dense(2)
 layer
 ```
 
-Then initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$.
+Initialize its weights with the default initialization method, which draws random values uniformly from $[-0.7, 0.7]$. You can see this in the following example.
 
 ```{.python .input  n=32}
 layer.initialize()
 ```
 
-Then we do a forward pass with random data. We create a $(3,4)$ shape random input `x` and feed into the layer to compute the output.
+Do a forward pass with random data, as shown in the following example. Create a $(3,4)$ shape random input `x` and feed it into the layer to compute the output.
 
 ```{.python .input  n=34}
-x = nd.random.uniform(-1,1,(3,4))
+x = np.random.uniform(-1,1,(3,4))
 layer(x)
 ```
 
-As can be seen, the layer's input limit of 2 produced a $(3,2)$ shape output from our $(3,4)$ input. Note that we didn't specify the input size of `layer` before (though we can specify it with the argument `in_units=4` here), the system will automatically infer it during the first time we feed in data, create and initialize the weights. So we can access the weight after the first forward pass:
+As can be seen, the layer with two output units produced a $(3,2)$ shape output from our $(3,4)$ input. You didn't specify the input size of `layer` before, though you can specify it with the argument `in_units=4` here. The system automatically infers it the first time you feed in data, creating and initializing the weights. You can access the weight after the first forward pass, as shown in this example.
 
 ```{.python .input  n=35}
-layer.weight.data()
+# layer.weight.data() # FIXME

Review comment:
      Sorry for these errors. The contents were copied directly from the numpy.mxnet.io site; I think someone who is familiar with the docs needs to review them.
   Instead of reviewing the contents here, would you prefer to make changes based on this PR and push them to this branch, like we did when adding the developer guide, #18474?



