Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2019/12/29 16:52:07 UTC

[GitHub] [incubator-tvm] kice edited a comment on issue #4523: Optimization for subpixel layer on Tensor core

URL: https://github.com/apache/incubator-tvm/issues/4523#issuecomment-569522065
 
 
   After testing it, I am happy to report that there is no significant difference at all. I even found a bug along the way. xD
   
   TVM built on Windows 10 with MSVC, CUDA 10.1, tested on an RTX 2060 Super.
   
   ```
   v0.0.4
   fn (%data: Tensor[(1, 3, 720, 1280), float16], %head.0.weight: Tensor[(64, 3, 3, 3), float16], %head.0.bias: Tensor[(64), float16], %body.0.body.0.weight: Tensor[(64, 64, 3, 3), float16], %body.0.body.0.bias: Tensor[(64), float16], %body.0.body.2.weight: Tensor[(64, 64, 3, 3), float16], %body.0.body.2.bias: Tensor[(64), float16], %body.1.body.0.weight: Tensor[(64, 64, 3, 3), float16], %body.1.body.0.bias: Tensor[(64), float16], %body.1.body.2.weight: Tensor[(64, 64, 3, 3), float16], %body.1.body.2.bias: Tensor[(64), float16], %body.2.weight: Tensor[(64, 64, 3, 3), float16], %body.2.bias: Tensor[(64), float16], %tail.0.0.weight: Tensor[(128, 64, 3, 3), float16], %tail.0.0.bias: Tensor[(128), float16], %tail.0.2.weight: Tensor[(64, 32, 3, 3), float16], %tail.0.2.bias: Tensor[(64), float16], %tail.1.weight: Tensor[(3, 16, 3, 3), float16], %tail.1.bias: Tensor[(3), float16]) -> Tensor[(1, 3, 2880, 5120), float16] {
     %0 = nn.conv2d(%data, %head.0.weight, padding=[1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %1 = nn.bias_add(%0, %head.0.bias) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %2 = nn.conv2d(%1, %body.0.body.0.weight, padding=[1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %3 = nn.bias_add(%2, %body.0.body.0.bias) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %4 = nn.leaky_relu(%3, alpha=0.1f) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %5 = nn.conv2d(%4, %body.0.body.2.weight, padding=[1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %6 = nn.bias_add(%5, %body.0.body.2.bias) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %7 = add(%6, %1) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %8 = nn.conv2d(%7, %body.1.body.0.weight, padding=[1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %9 = nn.bias_add(%8, %body.1.body.0.bias) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %10 = nn.leaky_relu(%9, alpha=0.1f) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %11 = nn.conv2d(%10, %body.1.body.2.weight, padding=[1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %12 = nn.bias_add(%11, %body.1.body.2.bias) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %13 = add(%12, %7) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %14 = nn.conv2d(%13, %body.2.weight, padding=[1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %15 = nn.bias_add(%14, %body.2.bias) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %16 = add(%15, %1) /* ty=Tensor[(1, 64, 720, 1280), float16] */;
     %17 = nn.conv2d(%16, %tail.0.0.weight, padding=[1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 720, 1280), float16] */;
     %18 = nn.bias_add(%17, %tail.0.0.bias) /* ty=Tensor[(1, 128, 720, 1280), float16] */;
     %19 = nn.depth_to_space(%18, block_size=2, mode="CRD") /* ty=Tensor[(1, 32, 1440, 2560), float16] */;
     %20 = nn.conv2d(%19, %tail.0.2.weight, padding=[1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 1440, 2560), float16] */;
     %21 = nn.bias_add(%20, %tail.0.2.bias) /* ty=Tensor[(1, 64, 1440, 2560), float16] */;
     %22 = nn.depth_to_space(%21, block_size=2, mode="CRD") /* ty=Tensor[(1, 16, 2880, 5120), float16] */;
     %23 = nn.conv2d(%22, %tail.1.weight, padding=[1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 3, 2880, 5120), float16] */;
     nn.bias_add(%23, %tail.1.bias) /* ty=Tensor[(1, 3, 2880, 5120), float16] */
   }
   ```
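   
   For reference, below is a minimal sketch of how such a timing run could look with the TVM Python API of that era; it is not the exact script used here, and `mod`/`params` are hypothetical placeholders for the Relay module printed above (e.g. as returned by a frontend importer).
   
   ```python
   # Minimal benchmarking sketch (assumption: `mod` and `params` hold the Relay
   # function printed above, obtained from a Relay frontend importer).
   import numpy as np
   import tvm
   from tvm import relay
   from tvm.contrib import graph_runtime
   
   target = "cuda"
   ctx = tvm.gpu(0)
   
   # Compile the Relay module for the GPU.
   with relay.build_config(opt_level=3):
       graph, lib, params = relay.build(mod, target, params=params)
   
   # Create the graph runtime and feed a dummy fp16 input of the model's shape.
   m = graph_runtime.create(graph, lib, ctx)
   m.set_input(**params)
   m.set_input("data", np.random.uniform(size=(1, 3, 720, 1280)).astype("float16"))
   
   # Average the end-to-end run time over several repetitions.
   ftimer = m.module.time_evaluator("run", ctx, number=10, repeat=3)
   print("mean inference time: %.2f ms" % (np.mean(ftimer().results) * 1000.0))
   ```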
