Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/03/25 16:22:21 UTC
[GitHub] [incubator-mxnet] chinakook commented on issue #17907: Depthwise in windows is 10 times slower than linux on gpu
URL: https://github.com/apache/incubator-mxnet/issues/17907#issuecomment-603938898
Here is another comparison on Windows: PyTorch takes 2.0 s while MXNet takes 10.2 s. I think this has been a bug for a long time.
MXNet version:
```python
import os
os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'
import time
import mxnet as mx
from mxnet import gluon
from mxnet.gluon import nn
from gluoncv.model_zoo import get_model
ctx = mx.gpu()
net = get_model('mobilenetv2_1.0', norm_layer=gluon.nn.BatchNorm)
net.initialize()
net.collect_params().reset_ctx(ctx)
s = time.time()
for i in range(50):
    x = mx.nd.random.uniform(shape=(1,3,512,512), ctx=ctx)
    t = time.time()
    y = net(x)
    mx.nd.waitall()
    print(time.time() - t)
print('TOTAL TIME: ', time.time() - s)
```
```
0.2889983654022217
0.15599989891052246
0.14120268821716309
0.1549980640411377
0.14800024032592773
0.1549973487854004
0.1419973373413086
0.16100192070007324
0.15399909019470215
0.14299798011779785
0.1490001678466797
0.17000079154968262
0.1530005931854248
0.14499974250793457
0.1569969654083252
0.15002942085266113
0.14699625968933105
0.14600133895874023
0.143998384475708
0.15400242805480957
0.1439976692199707
0.14451003074645996
0.16103625297546387
0.15851068496704102
0.15300440788269043
0.15399932861328125
0.15399956703186035
0.14400243759155273
0.15401935577392578
0.14500117301940918
0.14951753616333008
0.14799976348876953
0.14800000190734863
0.15600085258483887
0.1529989242553711
0.14699888229370117
0.14899921417236328
0.1512279510498047
0.1525120735168457
0.1549992561340332
0.16200017929077148
0.1529998779296875
0.1510009765625
0.14804387092590332
0.14800000190734863
0.15600061416625977
0.15230464935302734
0.15199899673461914
0.14699792861938477
0.1289997100830078
TOTAL TIME: 10.248228788375854
```
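In both traces the first iteration is noticeably slower than the steady state (0.29 s vs. roughly 0.15 s for MXNet, 0.90 s vs. roughly 0.02 s for PyTorch), since it includes one-off costs such as cuDNN algorithm selection and memory-pool growth. A small helper (`summarize` is a hypothetical name, not part of either script) sketches how to report the warm-up separately so it does not skew the average:

```python
def summarize(times):
    # The first iteration typically carries one-off costs (cuDNN algorithm
    # selection, kernel compilation, memory-pool growth), so report it
    # separately instead of folding it into the steady-state average.
    warmup, steady = times[0], times[1:]
    avg = sum(steady) / len(steady)
    return warmup, avg
```

Even with the warm-up excluded, the gap reported above (~0.15 s vs. ~0.02 s per iteration) remains roughly an order of magnitude.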
PyTorch version:
```python
import time
import torch
import torchvision
torch.backends.cudnn.benchmark=False
net = torchvision.models.mobilenet_v2()
net.cuda()
net.eval()
s = time.time()
for i in range(50):
    t = time.time()
    x = torch.rand([1,3,512,512]).cuda()
    y = net(x)
    print(time.time() - t)
print('TOTAL TIME: ', time.time() - s)
```
```
0.9051487445831299
0.04097485542297363
0.019997835159301758
0.018999099731445312
0.023026704788208008
0.021998167037963867
0.020003795623779297
0.0209958553314209
0.020031213760375977
0.020966291427612305
0.019999980926513672
0.022031784057617188
0.019968032836914062
0.023028850555419922
0.020004987716674805
0.01996612548828125
0.022998332977294922
0.020999431610107422
0.02000117301940918
0.019997119903564453
0.02300119400024414
0.02200031280517578
0.01899886131286621
0.01999974250793457
0.021999835968017578
0.02000284194946289
0.02000141143798828
0.02000117301940918
0.02099919319152832
0.020000457763671875
0.021001338958740234
0.020998477935791016
0.020000219345092773
0.020998477935791016
0.022002458572387695
0.02502727508544922
0.02000284194946289
0.021997690200805664
0.021001100540161133
0.024999141693115234
0.0299990177154541
0.02599787712097168
0.029999256134033203
0.029999256134033203
0.02700185775756836
0.02520275115966797
0.02800154685974121
0.032999277114868164
0.02400040626525879
0.02900218963623047
TOTAL TIME: 2.065340518951416
```
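One caveat when comparing the two scripts: the MXNet loop calls `mx.nd.waitall()` before reading the clock, while the PyTorch loop never calls `torch.cuda.synchronize()`, so its per-iteration numbers may largely reflect asynchronous kernel launches rather than completed GPU work. A generic timing helper (`timed` is a hypothetical name, not from either script) sketches one way to make the two measurements comparable by passing each framework's synchronization call explicitly:

```python
import time

def timed(fn, iters=50, sync=None):
    """Time fn() per iteration; call sync() before reading the clock so
    asynchronous (e.g. CUDA) work is included in the measurement."""
    times = []
    for _ in range(iters):
        t = time.perf_counter()
        fn()
        if sync is not None:
            # e.g. sync=mx.nd.waitall or sync=torch.cuda.synchronize
            sync()
        times.append(time.perf_counter() - t)
    return times
```

With `sync=torch.cuda.synchronize` the PyTorch per-iteration numbers would include actual kernel execution, making the depthwise-convolution comparison apples-to-apples.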
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
With regards,
Apache Git Services