Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2021/10/19 10:29:47 UTC

[GitHub] [incubator-mxnet] anko-intel commented on a change in pull request #20670: [master] Bring dnnl_readme.md on master up-to-date

anko-intel commented on a change in pull request #20670:
URL: https://github.com/apache/incubator-mxnet/pull/20670#discussion_r731722277



##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_readme.md
##########
@@ -183,29 +185,24 @@ Expected Output:
 [[ 2.  2.  2.]
  [ 2.  2.  2.]]
 ```
-### Verify whether ONEDNN works
+### Verify whether oneDNN works
 
-After MXNet is installed, you can verify if ONEDNN backend works well with a single Convolution layer.
+After MXNet is installed, you can verify if oneDNN backend works well with a single Convolution layer.
 ```
-import mxnet as mx
-import numpy as np
+from mxnet import np
+from mxnet.gluon import nn
 
 num_filter = 32
 kernel = (3, 3)
 pad = (1, 1)
 shape = (32, 32, 256, 256)
 
-x = mx.sym.Variable('x')
-w = mx.sym.Variable('w')
-y = mx.sym.Convolution(data=x, weight=w, num_filter=num_filter, kernel=kernel, no_bias=True, pad=pad)
-exe = y.simple_bind(mx.cpu(), x=shape)
+conv_layer = nn.Conv2D(channels=num_filter, kernel_size=kernel, padding=pad)
+conv_layer.initialize()
 
-exe.arg_arrays[0][:] = np.random.normal(size=exe.arg_arrays[0].shape)
-exe.arg_arrays[1][:] = np.random.normal(size=exe.arg_arrays[1].shape)
-
-exe.forward(is_train=False)
-o = exe.outputs[0]
-t = o.asnumpy()
+data = np.random.normal(size=shape)
+o = conv_layer(data)
+o.wait_to_read()

Review comment:
       It could suggest that wait_to_read is required in normal usage, but as far as I understand it is not.
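       For illustration only (not part of the PR diff): a minimal sketch of how the output could be consumed without an explicit wait_to_read(), assuming the same Conv2D setup as in the snippet above and MXNet's NumPy-compatible mode (npx.set_np()). Consuming the result (e.g. via asnumpy()) already blocks until the computation has finished, so wait_to_read() is mainly useful when timing the asynchronous call itself.

       ```
       # Sketch (assumes MXNet 2.x Gluon API with NumPy-compatible mode enabled).
       from mxnet import np, npx
       from mxnet.gluon import nn

       npx.set_np()  # allow mxnet.np arrays as Gluon inputs/outputs

       conv_layer = nn.Conv2D(channels=32, kernel_size=(3, 3), padding=(1, 1))
       conv_layer.initialize()

       data = np.random.normal(size=(32, 32, 256, 256))
       out = conv_layer(data)  # enqueued asynchronously on the backend engine

       # Consuming the result synchronizes implicitly; no explicit
       # wait_to_read() is needed in normal usage.
       print(out.shape)        # shape is known without waiting for the data
       result = out.asnumpy()  # blocks until the convolution has finished
       ```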




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@mxnet.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org