Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/09/16 21:49:35 UTC

[GitHub] [incubator-mxnet] igolan opened a new issue #16182: contrib.cond operator does not support parameterized block execution
URL: https://github.com/apache/incubator-mxnet/issues/16182
 
 
   ## Description
   ``contrib.cond`` operator does not support parameterized block execution (cannot infer shape).
   
   ## Environment info (Required)
   
   ```
   ----------Python Info----------
   Version      : 3.7.4
   Compiler     : Clang 10.0.1 (clang-1001.0.46.4)
   Build        : ('default', 'Jul  9 2019 18:13:23')
   Arch         : ('64bit', '')
   ------------Pip Info-----------
   Version      : 19.0.3
   Directory    : /Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/pip-19.0.3-py3.7.egg/pip
   ----------MXNet Info-----------
   Version      : 1.5.0
   Directory    : /Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet
   Commit Hash   : 75a9e187d00a8b7ebc71412a02ed0e3ae489d91f
   Library      : ['/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/libmxnet.so']
   Build features:
   ✖ CUDA
   ✖ CUDNN
   ✖ NCCL
   ✖ CUDA_RTC
   ✖ TENSORRT
   ✔ CPU_SSE
   ✔ CPU_SSE2
   ✔ CPU_SSE3
   ✔ CPU_SSE4_1
   ✔ CPU_SSE4_2
   ✖ CPU_SSE4A
   ✔ CPU_AVX
   ✖ CPU_AVX2
   ✖ OPENMP
   ✖ SSE
   ✖ F16C
   ✖ JEMALLOC
   ✖ BLAS_OPEN
   ✖ BLAS_ATLAS
   ✖ BLAS_MKL
   ✖ BLAS_APPLE
   ✔ LAPACK
   ✖ MKLDNN
   ✔ OPENCV
   ✖ CAFFE
   ✖ PROFILER
   ✔ DIST_KVSTORE
   ✖ CXX14
   ✖ INT64_TENSOR_SIZE
   ✔ SIGNAL_HANDLER
   ✖ DEBUG
   ----------System Info----------
   Platform     : Darwin-18.7.0-x86_64-i386-64bit
   system       : Darwin
   node         : XXX
   release      : 18.7.0
   version      : Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64
   ----------Hardware Info----------
   machine      : x86_64
   processor    : i386
   b'machdep.cpu.brand_string: Intel(R) Core(TM) i7-7660U CPU @ 2.50GHz'
   b'machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C'
   b'machdep.cpu.leaf7_features: RDWRFSGS TSC_THREAD_OFFSET SGX BMI1 HLE AVX2 SMEP BMI2 ERMS INVPCID RTM FPU_CSDS MPX RDSEED ADX SMAP CLFSOPT IPT MDCLEAR TSXFA IBRS STIBP L1DF SSBD'
   b'machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW RDTSCP TSCI'
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0137 sec, LOAD: 0.5112 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0180 sec, LOAD: 0.4525 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0198 sec, LOAD: 0.8612 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0233 sec, LOAD: 0.1894 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0120 sec, LOAD: 0.3173 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0105 sec, LOAD: 0.0961 sec.
   ----------Environment----------
   
   ```
   
    I'm using Python.
   
   ## Build info (Required if built from source)
   N/A
   
   ## Error Message:
   ```
   Traceback (most recent call last):
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/block.py", line 811, in _call_cached_op
       for is_arg, i in self._cached_op_args]
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/block.py", line 811, in <listcomp>
       for is_arg, i in self._cached_op_args]
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/parameter.py", line 543, in data
       return self._check_and_get(self._data, ctx)
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/parameter.py", line 234, in _check_and_get
       "num_features, etc., for network layers."%(self.name))
   mxnet.gluon.parameter.DeferredInitializationError: Parameter 'mlp0_dense0_weight' has not been initialized yet because initialization was deferred. Actual initialization happens during the first forward pass. Please pass one batch of data through the network before accessing Parameters. You can also avoid deferred initialization by specifying in_units, num_features, etc., for network layers.
   
   During handling of the above exception, another exception occurred:
   
   Traceback (most recent call last):
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/block.py", line 797, in _deferred_infer_shape
       self.infer_shape(*args)
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/block.py", line 870, in infer_shape
       self._infer_attrs('infer_shape', 'shape', *args)
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/block.py", line 861, in _infer_attrs
       raise ValueError(w[0].message)
   ValueError: Cannot decide shape for the following arguments (0s in shape means unknown dimensions). Consider providing them as input:
   	mlp0_dense0_weight: (1, 0)
   
   During handling of the above exception, another exception occurred:
   
   Traceback (most recent call last):
     File "/Users/XX/PycharmProjects/XX/playground.py", line 24, in <module>
       out = net(data.as_in_context(model_ctx))
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/block.py", line 548, in __call__
       out = self.forward(*args)
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/block.py", line 915, in forward
       return self._call_cached_op(x, *args)
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/block.py", line 813, in _call_cached_op
       self._deferred_infer_shape(*args)
     File "/Users/XX/PycharmProjects/XX/venv/lib/python3.7/site-packages/mxnet/gluon/block.py", line 801, in _deferred_infer_shape
       raise ValueError(error_msg)
   ValueError: Deferred initialization failed because shape cannot be inferred. Cannot decide shape for the following arguments (0s in shape means unknown dimensions). Consider providing them as input:
   	mlp0_dense0_weight: (1, 0)
   
   ```
   
   ## Minimum reproducible example
   ```
    import mxnet as mx
    from mxnet import nd, gluon
   
   class MLP(gluon.HybridBlock):
       def __init__(self, **kwargs):
           super(MLP, self).__init__(**kwargs)
           with self.name_scope():
               self.dense1 = gluon.nn.Dense(1)
                # Uncomment the following lines and the code will work
               # self.dense1.initialize()
               # self.dense1.hybridize()
               # self.dense1(nd.ones((3,1)))
   
       def hybrid_forward(self, F, x):
           l1 = lambda: self.dense1(x)
           o3 = F.contrib.cond(x.sum() == F.ones(1), l1, l1)
           return o3
   
   model_ctx = mx.cpu()
   net = MLP()
   net.hybridize()
   net.collect_params().initialize(mx.init.Normal(sigma=.01), ctx=model_ctx)
   data = nd.ones((3,1))
   out = net(data.as_in_context(model_ctx))
   print(out)
   out = net(data.as_in_context(model_ctx))
   print(out)
   
   ```
    Uncomment the lines in ``MLP.__init__`` and the code will run (it initializes ``self.dense1`` before it is used inside ``contrib.cond``).
   
   ## Steps to reproduce
    Run the code above.
   
   ## What have you tried to solve it?
   
    1. Initializing the block before using cond solves the problem, but this workaround doesn't scale.
   
    Might be related to #12154 and #11641.
   
   ## Questions
   1. Any ideas for a scalable workaround (specifying the shape is not scalable)?
    2. General question: when I use symbol.contrib.cond, are both the "then" and "else" graphs executed in the background? (I know it only returns the value of one of them, but I'm asking about the mechanism: does it run both branches, or just the relevant one?)
