Posted to issues@mxnet.apache.org by "Ashok Emani (JIRA)" <ji...@apache.org> on 2018/04/26 23:12:00 UTC
[jira] [Created] (MXNET-362) ensure same mkldnn engine is used for consistency
Ashok Emani created MXNET-362:
---------------------------------
Summary: ensure same mkldnn engine is used for consistency
Key: MXNET-362
URL: https://issues.apache.org/jira/browse/MXNET-362
Project: Apache MXNet
Issue Type: Bug
Reporter: Ashok Emani
Gluon data iterators may cause execution to happen on a different thread than the one that set up the execution context, which makes the MKL-DNN engine inconsistent. The following snippet reproduces the issue.
{code:python}
import numpy as np
import mxnet as mx
from mxnet import gluon, nd

# Build a minimal one-layer network on CPU.
net = gluon.nn.HybridSequential()
with net.name_scope():
    net.add(gluon.nn.Conv2D(channels=32, kernel_size=3, activation=None))
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=mx.cpu())

# DataLoader with num_workers=1 spawns a worker for data loading.
val_data = gluon.data.DataLoader(
    gluon.data.vision.CIFAR10(train=False),
    batch_size=32, shuffle=False, num_workers=1)

# output should be 0.57521844
X = (32, 3, 32, 32)
y = net(nd.array(np.ones(X))).asnumpy()
print(y[0][0][0][0])

# Iterating over range(1) instead of val_data works:
# for _ in range(1):
# Iterating over val_data triggers the bug:
for _ in val_data:
    y = net(nd.array(np.ones(X))).asnumpy()
    print(y[0][0][0][0])
    break
{code}
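The report attributes the inconsistency to the forward pass running under a different thread than the one that created the MKL-DNN engine. The MXNet-specific details aside, the underlying mechanism can be illustrated with a stdlib-only sketch (no MXNet required, and purely hypothetical as a stand-in for the DataLoader's worker): code driven from a worker observes a different thread identity than the main thread, which is exactly the situation per-thread engine state cannot survive.

{code:python}
import threading

# Record the identity of the "main" thread, where the engine
# would have been created in the MXNet scenario.
main_id = threading.get_ident()
worker_id = None

def worker():
    # This runs on a separate thread, analogous to work triggered
    # while iterating a DataLoader with num_workers > 0.
    global worker_id
    worker_id = threading.get_ident()

t = threading.Thread(target=worker)
t.start()
t.join()

# The two identities differ, so any state keyed on the creating
# thread would not be found from the worker.
print(main_id != worker_id)
{code}

This only demonstrates the thread-identity mismatch; the fix proposed in the issue title is to ensure the same MKL-DNN engine is used regardless of which thread drives execution.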
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)