Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/07/04 19:00:38 UTC

[GitHub] [incubator-mxnet] matteosal opened a new issue #15464: MKL-DNN gives wrong bias gradient if weights gradient is not requested

URL: https://github.com/apache/incubator-mxnet/issues/15464
 
 
   When using MKL-DNN and asking the gradient of a convolution with respect to its biases, the result is wrong unless the gradient with respect to the weights is also requested. 
   
   ## Environment info (Required)
   
   ```
   ----------Python Info----------
   Version      : 3.7.2
   Compiler     : GCC 7.3.0
   Build        : ('default', 'Dec 29 2018 06:19:36')
   Arch         : ('64bit', '')
   ------------Pip Info-----------
   Version      : 19.0.1
   Directory    : /opt/Anaconda/lib/python3.7/site-packages/pip
   ----------MXNet Info-----------
   Version      : 1.5.0
   Directory    : /home/matteo/Git/mxnet/python/mxnet
   Hashtag not found. Not installed from pre-built package.
   ----------System Info----------
   Platform     : Linux-4.15.0-54-generic-x86_64-with-debian-buster-sid
   system       : Linux
   node         : mongolius
   release      : 4.15.0-54-generic
   version      : #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019
   ----------Hardware Info----------
   machine      : x86_64
   processor    : x86_64
   Architecture:        x86_64
   CPU op-mode(s):      32-bit, 64-bit
   Byte Order:          Little Endian
   CPU(s):              8
   On-line CPU(s) list: 0-7
   Thread(s) per core:  2
   Core(s) per socket:  4
   Socket(s):           1
   NUMA node(s):        1
   Vendor ID:           GenuineIntel
   CPU family:          6
   Model:               94
   Model name:          Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
   Stepping:            3
   CPU MHz:             2700.094
   CPU max MHz:         3500,0000
   CPU min MHz:         800,0000
   BogoMIPS:            5184.00
   Virtualization:      VT-x
   L1d cache:           32K
   L1i cache:           32K
   L2 cache:            256K
   L3 cache:            6144K
   NUMA node0 CPU(s):   0-7
   Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0010 sec, LOAD: 1.0852 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1153 sec, LOAD: 0.9477 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1108 sec, LOAD: 0.8710 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0825 sec, LOAD: 1.2461 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0351 sec, LOAD: 1.1176 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0400 sec, LOAD: 0.5449 sec.
   ```
   
   I am using the Python interface.
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio): gcc
   
   MXNet commit hash: 6a8d9eb5fd4f7133c094149dc80a3a236534f223
   
   Build config: unchanged `config.mk`, except for `USE_OPENCV = 0`
   
   ## Minimum reproducible example
   ```
   import mxnet as mx

   sym = mx.sym.Convolution(
       mx.sym.Variable('in'),
       mx.sym.Variable('w'),
       mx.sym.Variable('b'),
       kernel=(1, 1),
       num_filter=1
   )
   args = {
       'in': mx.nd.ones([1, 1, 3, 3]),
       'w': mx.nd.ones([1, 1, 1, 1]),
       'b': mx.nd.ones([1]),
   }
   grad1 = {
       'in': mx.nd.zeros([1, 1, 3, 3]),
       'w': mx.nd.zeros([1, 1, 1, 1]),
       'b': mx.nd.zeros([1]),
   }
   grad2 = {
       'in': mx.nd.zeros([1, 1, 3, 3]),
       'w': mx.nd.zeros([1, 1, 1, 1]),
       'b': mx.nd.zeros([1]),
   }
   # req1 requests both the weights and bias gradients; req2 requests only the bias gradient
   req1 = {'in': 'null', 'w': 'write', 'b': 'write'}
   req2 = {'in': 'null', 'w': 'null', 'b': 'write'}
   outgrad = mx.nd.ones([1, 1, 3, 3])

   ex1 = sym.bind(mx.cpu(), args, args_grad=grad1, grad_req=req1)
   ex2 = sym.bind(mx.cpu(), args, args_grad=grad2, grad_req=req2)

   ex1.forward(True)
   ex1.backward(out_grads=outgrad)
   ex2.forward(True)
   ex2.backward(out_grads=outgrad)

   # Both bias gradients should be 9, but the second comes out as 0 with MKL-DNN enabled
   print(grad1['b'])
   print(grad2['b'])
   ```
   The above script prints a wrong value (0) for `grad2['b']`, while `grad1['b']` is correct (9):
   ```
   [9.]
   <NDArray 1 @cpu(0)>
   
   [0.]
   <NDArray 1 @cpu(0)>
   ```
   Running with `MXNET_MKLDNN_ENABLED=0` produces the correct result (9) for both gradients.
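   As a temporary workaround (based on the observation above, not a confirmed fix), the variable can also be set from Python, as long as it is set before `mxnet` is imported:
   ```
   import os

   # Disable the MKL-DNN backend; set this before importing mxnet so the flag is
   # visible when the library initializes
   os.environ['MXNET_MKLDNN_ENABLED'] = '0'

   import mxnet as mx
   ```
   Alternatively, requesting the weights gradient as well (as with `req1` in the example) sidesteps the problem.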
