Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/05/19 10:14:18 UTC

[GitHub] [tvm] sacalo edited a comment on issue #8057: deformable_conv2 error when converting torch traced model to relay

sacalo edited a comment on issue #8057:
URL: https://github.com/apache/tvm/issues/8057#issuecomment-843956703


   Thanks @comaniac, I was able to extract the subgraph of the traced model after serializing and deserializing it.
   @masahi: I have also tried the traced model before serializing it, and it doesn't work either.
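   
   For reference, this is roughly how the two dumps below were obtained (a minimal sketch; `traced_module` and the file name are just placeholders for the actual traced submodule and path):
   
   ```
   import torch
   
   # Round-trip the traced module through serialization, then dump the
   # TorchScript code and graph of its forward method.
   torch.jit.save(traced_module, "deform_conv_traced.pt")
   loaded = torch.jit.load("deform_conv_traced.pt")
   
   print(loaded.code)   # produces the Python-like listing below
   print(loaded.graph)  # produces the graph IR listing below
   ```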
   
   ```
   def forward(self,
       x: Tensor,
       argument_2: Tensor) -> Tensor:
     _0 = self.norm
     _1 = self.weight
     out_channels = ops.prim.NumToTensor(torch.size(_1, 0))
     _2 = int(out_channels)
     _3 = ops.prim.NumToTensor(torch.size(x, 0))
     mask = torch.zeros([int(_3), 0], dtype=6, layout=None, device=torch.device("cpu"), pin_memory=False)
     bias = torch.zeros([_2], dtype=6, layout=None, device=torch.device("cpu"), pin_memory=False)
     input = ops.torchvision.deform_conv2d(x, _1, argument_2, mask, bias, 2, 2, 1, 1, 1, 1, 32, 1, False)
     return (_0).forward(input, )
   ```
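   
   If I read the constants right, the traced call above corresponds roughly to this eager-mode call (dtype=6 in the listing is torch.float32; `x`, `argument_2` (the offset), `weight`, and `norm` here just stand for the tensors/module in the listing, so treat this as a sketch):
   
   ```
   import torch
   from torchvision.ops import deform_conv2d
   
   # Rough eager-mode equivalent of the traced forward above:
   # stride=(2, 2), padding=(1, 1), dilation=(1, 1); the weight groups (32)
   # and offset groups (1) are inferred from the tensor shapes, and
   # use_mask=False means mask=None at the Python level, so torchvision
   # fills in the zero mask/bias tensors seen in the trace.
   out = deform_conv2d(x, argument_2, weight, bias=None,
                       stride=(2, 2), padding=(1, 1), dilation=(1, 1),
                       mask=None)
   out = norm(out)
   ```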
   
   ```
   graph(%self.1 : __torch__.detectron2.layers.deform_conv.DeformConv,
         %x.1 : Tensor,
         %argument_2.1 : Tensor):
     %25 : bool = prim::Constant[value=0]() # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:71:0
     %56 : Device = prim::Constant[value="cpu"]()
     %22 : None = prim::Constant() # :0:0
     %8 : int = prim::Constant[value=0]() # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:66:0
     %21 : int = prim::Constant[value=6]() # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:71:0
     %39 : int = prim::Constant[value=2]() # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:92:0
     %40 : int = prim::Constant[value=1]() # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:92:0
     %41 : int = prim::Constant[value=32]() # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:92:0
     %4 : __torch__.detectron2.layers.batch_norm.___torch_mangle_35.FrozenBatchNorm2d = prim::GetAttr[name="norm"](%self.1)
     %6 : Tensor = prim::GetAttr[name="weight"](%self.1)
     %9 : int = aten::size(%6, %8) # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:66:0
     %out_channels.1 : Tensor = prim::NumToTensor(%9) # :0:0
     %13 : int = aten::Int(%out_channels.1)
     %15 : int = aten::size(%x.1, %8) # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:71:0
     %16 : Tensor = prim::NumToTensor(%15) # :0:0
     %19 : int = aten::Int(%16)
     %20 : int[] = prim::ListConstruct(%19, %8)
     %mask.1 : Tensor = aten::zeros(%20, %21, %22, %56, %25) # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:71:0
     %28 : int[] = prim::ListConstruct(%13)
     %bias.1 : Tensor = aten::zeros(%28, %21, %22, %56, %25) # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:74:0
     %input.1 : Tensor = torchvision::deform_conv2d(%x.1, %6, %argument_2.1, %mask.1, %bias.1, %39, %39, %40, %40, %40, %40, %41, %40, %25) # ./venv3/lib/python3.8/site-packages/torchvision/ops/deform_conv.py:92:0
     %46 : Tensor = prim::CallMethod[name="forward"](%4, %input.1) # :0:0
     return (%46)
   ```
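   
   For completeness, the Relay conversion where the error shows up looks roughly like this (a sketch; `traced_module` and the input shapes are just placeholders, not the real sizes from the model):
   
   ```
   import tvm
   from tvm import relay
   
   # Placeholder input shapes for the data and offset tensors.
   shape_list = [("x", (1, 64, 56, 56)), ("argument_2", (1, 18, 28, 28))]
   mod, params = relay.frontend.from_pytorch(traced_module, shape_list)
   ```
   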
   Hope it helps.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org