Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/09/17 15:01:46 UTC

[GitHub] [incubator-mxnet] igolan opened a new issue #16187: symbol.contrib.cond does not support custom operator execution

URL: https://github.com/apache/incubator-mxnet/issues/16187
 
 
   ## Description
    The ``symbol.contrib.cond`` operator does not support executing custom operators in its branches.
   
   ## Environment info (Required)
   
   
   ```
   ----------Python Info----------
   Version      : 3.7.4
   Compiler     : Clang 10.0.1 (clang-1001.0.46.4)
   Build        : ('default', 'Jul  9 2019 18:13:23')
   Arch         : ('64bit', '')
   ------------Pip Info-----------
   Version      : 19.0.3
   Directory    : /Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/pip-19.0.3-py3.7.egg/pip
   ----------MXNet Info-----------
   Version      : 1.5.0
   Directory    : /Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet
   Commit Hash   : 75a9e187d00a8b7ebc71412a02ed0e3ae489d91f
   Library      : ['/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/libmxnet.so']
   Build features:
   ✖ CUDA
   ✖ CUDNN
   ✖ NCCL
   ✖ CUDA_RTC
   ✖ TENSORRT
   ✔ CPU_SSE
   ✔ CPU_SSE2
   ✔ CPU_SSE3
   ✔ CPU_SSE4_1
   ✔ CPU_SSE4_2
   ✖ CPU_SSE4A
   ✔ CPU_AVX
   ✖ CPU_AVX2
   ✖ OPENMP
   ✖ SSE
   ✖ F16C
   ✖ JEMALLOC
   ✖ BLAS_OPEN
   ✖ BLAS_ATLAS
   ✖ BLAS_MKL
   ✖ BLAS_APPLE
   ✔ LAPACK
   ✖ MKLDNN
   ✔ OPENCV
   ✖ CAFFE
   ✖ PROFILER
   ✔ DIST_KVSTORE
   ✖ CXX14
   ✖ INT64_TENSOR_SIZE
   ✔ SIGNAL_HANDLER
   ✖ DEBUG
   ----------System Info----------
   Platform     : Darwin-18.7.0-x86_64-i386-64bit
   system       : Darwin
   node         : XXX
   release      : 18.7.0
   version      : Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64
   ----------Hardware Info----------
   machine      : x86_64
   processor    : i386
   b'machdep.cpu.brand_string: Intel(R) Core(TM) i7-7660U CPU @ 2.50GHz'
   b'machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C'
   b'machdep.cpu.leaf7_features: RDWRFSGS TSC_THREAD_OFFSET SGX BMI1 HLE AVX2 SMEP BMI2 ERMS INVPCID RTM FPU_CSDS MPX RDSEED ADX SMAP CLFSOPT IPT MDCLEAR TSXFA IBRS STIBP L1DF SSBD'
   b'machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW RDTSCP TSCI'
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0137 sec, LOAD: 0.5112 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0180 sec, LOAD: 0.4525 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0198 sec, LOAD: 0.8612 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0233 sec, LOAD: 0.1894 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0120 sec, LOAD: 0.3173 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0105 sec, LOAD: 0.0961 sec.
   ----------Environment----------
   
   ```
   
    I'm using Python.
   
   ## Build info (Required if built from source)
   N/A
   
   ## Error Message:
   ```
   Traceback (most recent call last):
     File "_ctypes/callbacks.c", line 232, in 'calling callback function'
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/operator.py", line 718, in creator
       op_prop = prop_cls(**kwargs)
   TypeError: __init__() got an unexpected keyword argument '__subgraph_name__'
   
   Segmentation fault: 11
   
   Stack trace:
     [bt] (0) 1   libmxnet.so                         0x000000011705c2b0 mxnet::Storage::Get() + 4880
     [bt] (1) 2   libsystem_platform.dylib            0x00007fff57f9eb5d _sigtramp + 29
     [bt] (2) 3   Python                              0x000000010dcd7194 _PyMethodDef_RawFastCallDict + 591
     [bt] (3) 4   libmxnet.so                         0x0000000115698206 mxnet::NDArray::set_aux_shape(unsigned long, mxnet::TShape const&) const + 177878
     [bt] (4) 5   libmxnet.so                         0x00000001174c6fee NNSymbolCompose + 89646
     [bt] (5) 6   libmxnet.so                         0x00000001168eb3d6 MXSymbolCreateAtomicSymbol + 4086
     [bt] (6) 7   _ctypes.cpython-37m-darwin.so       0x000000010e1c636f ffi_call_unix64 + 79
     [bt] (7) 8   ???                                 0x00007ffee1f469d0 0x0 + 140732689312208
   ```
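    The traceback points at ``creator`` in ``mxnet/operator.py``, which forwards every keyword argument it receives to the registered ``CustomOpProp`` subclass. When the custom op is instantiated inside a ``contrib.cond`` branch, the subgraph machinery appears to inject an extra ``__subgraph_name__`` keyword, which a prop class whose ``__init__`` takes no keyword arguments cannot accept. A minimal pure-Python sketch of that failure mode, independent of MXNet (the ``__subgraph_name__`` key is taken from the traceback; accepting ``**kwargs`` in the prop is an assumed workaround, not a confirmed fix):
    
    ```python
    # Sketch: creator() forwards kwargs straight into the prop's __init__.
    
    class StrictProp:
        """Mirrors IdentityOPProp: __init__ accepts no keyword arguments."""
        def __init__(self):
            pass
    
    class LenientProp:
        """Hypothetical workaround: swallow injected kwargs like __subgraph_name__."""
        def __init__(self, **kwargs):
            self.subgraph_name = kwargs.get('__subgraph_name__')
    
    injected = {'__subgraph_name__': 'cond_then0'}  # value is illustrative
    
    try:
        StrictProp(**injected)       # reproduces the reported TypeError
        strict_raises = False
    except TypeError:
        strict_raises = True
    
    prop = LenientProp(**injected)   # swallowing the kwarg avoids the TypeError
    print(strict_raises, prop.subgraph_name)
    ```
    
    Note that even if ``IdentityOPProp.__init__`` is changed to accept ``**kwargs``, the segmentation fault further down the stack may still occur; the sketch only explains the Python-level ``TypeError``.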
   
   ## Minimum reproducible example
    ```python
   import mxnet as mx
   from mxnet import nd, autograd, gluon
   
   
   class IdentityOP(mx.operator.CustomOp):
       def forward(self, is_train, req, in_data, out_data, aux):
           self.assign(out_data[0], req[0], in_data[0])
   
       def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
           self.assign(in_grad[0], req[0], out_grad[0])
   
   
   @mx.operator.register("identityop")
   class IdentityOPProp(mx.operator.CustomOpProp):
       def __init__(self):
           super(IdentityOPProp, self).__init__(True)
   
       def create_operator(self, ctx, in_shapes, in_dtypes):
           return IdentityOP()
   
   
   class MLP(gluon.HybridBlock):
       def __init__(self, **kwargs):
           super(MLP, self).__init__(**kwargs)
           with self.name_scope():
               self.dense1 = gluon.nn.Dense(1, in_units=1)
   
       def hybrid_forward(self, F, x):
           # Not working:
           cond_out = F.contrib.cond(F.ones(1) == F.ones(1), lambda: self.dense1(x), lambda: mx.symbol.Custom(data=x, name='identityop', op_type='identityop'))
           # Working:
           # cond_out = F.contrib.cond(F.ones(1) == F.ones(1), lambda: self.dense1(x), lambda: x)
           return cond_out
   
   model_ctx = mx.cpu()
   net = MLP()
   net.hybridize()
   net.collect_params().initialize(mx.init.Constant([1]), ctx=model_ctx)
   data = nd.ones((3,1))
   with mx.autograd.record():
       out = net(data.as_in_context(model_ctx))
   out.backward()
   print(net.dense1.weight.grad())
   with mx.autograd.record():
       out = net(data.as_in_context(model_ctx))
   out.backward()
   print(net.dense1.weight.grad())
   ```
   
   ## Steps to reproduce
    Run the code above.
   
   ## What have you tried to solve it?
   
    1. Replacing the custom operator with a pass-through (or with a built-in operator) works (see the commented-out line in ``hybrid_forward``).
   
    *Note: I can't rule out that my custom operator implementation is missing something, so the example above uses the simplest possible identity custom operator, and it still fails.
