Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/01/15 00:14:47 UTC

[GitHub] [incubator-tvm] icemelon9 edited a comment on issue #4318: [Relay][TOPI]Fix meaning of conv2d_transpose output_padding parameter

URL: https://github.com/apache/incubator-tvm/pull/4318#issuecomment-574434976
 
 
   @abergeron @vinx13 @tmoreau89 
   I found two problems in this PR. 
   1. In this [line](https://github.com/apache/incubator-tvm/pull/4318/files#diff-8be7003a84f663f3f6b0dbb3bf1f5ba6R105), `h+dh` can potentially go out of bounds: `max(h) = out_h - 1 = in_h - filter_h + output_padding[0]` and `max(dh) = filter_h - 1`, so `max(h+dh) = in_h + output_padding[0] - 1`. When `output_padding[0] >= 1`, `max(h+dh) >= in_h`, which exceeds the height of `data_pad`. The same applies to `w+dw`.
   2. The x86 conv2d_transpose implementation differs from the generic conv2d_transpose. In the x86 version, after calling `conv2d_transpose_nchw_preprocess`, you call the normal conv2d directly without using `output_padding`.
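   To make point 1 concrete, here is a small arithmetic sketch of the index bound. The concrete sizes (`in_h = 8`, `filter_h = 3`) are made-up illustration values, and stride/dilation are assumed to be 1:

```python
# Illustration of point 1: with output_padding >= 1, the read index h + dh
# can exceed the height of the (dilated, padded) input tensor data_pad.
in_h = 8             # assumed height of data_pad (hypothetical value)
filter_h = 3         # assumed filter height (hypothetical value)
output_padding = 1   # any value >= 1 triggers the overflow

out_h = in_h - filter_h + 1 + output_padding  # output height (stride = 1)
max_h = out_h - 1          # = in_h - filter_h + output_padding
max_dh = filter_h - 1
max_index = max_h + max_dh  # = in_h + output_padding - 1

# max_index == 8 but valid height indices are 0..7, so this reads
# one row past the end of data_pad.
print(max_index, in_h)  # prints: 8 8
```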
   
   I'll revert this PR for now. Could you fix these two bugs and double-check whether the cuda and arm_cpu implementations are correct? Also, could you investigate why CI didn't catch these errors?
   
