Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2023/01/10 10:19:47 UTC

[GitHub] [tvm] arina-grovety commented on pull request #13732: [microNPU] Add support for TFLite PAD

arina-grovety commented on PR #13732:
URL: https://github.com/apache/tvm/pull/13732#issuecomment-1377029980

   Hello @lhutton1, thanks for the review!
   > * With the current implementation `nn.pad` does not get offloaded if the provided padding exceeds [31, 31, 32, 32]. If these dimensions are exceeded, we might be able to use multiple average pooling operations similar to https://git.mlplatform.org/ml/ethos-u/ethos-u-vela.git/tree/ethosu/vela/tflite_graph_optimiser.py#n1500
   
   Yes, this was the first option that we tried to implement. But in the Vela implementation this is achieved by an operation that **"copies IFM to the right place inside the OFM"**, using the **write_offset** attribute of the created AvgPool operation.
   
   In TVM, the Vela API operations are derived from the NpuOperation class, which does not have a write_offset attribute, so we cannot replicate Vela's convert_pad() function.
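
   For illustration, here is a pure-NumPy sketch (my paraphrase of the semantics, not Vela code) of what that write_offset effectively does:

   ```python
   import numpy as np

   # Illustrative only: the helper AvgPool writes the IFM into a zero-initialised
   # OFM at an offset given by the top/left padding, which is what "copies IFM to
   # the right place inside the OFM" means in practice.
   def pad_via_write_offset(ifm, pad_top, pad_bottom, pad_left, pad_right):
       n, h, w, c = ifm.shape
       ofm = np.zeros((n, h + pad_top + pad_bottom, w + pad_left + pad_right, c), ifm.dtype)
       ofm[:, pad_top:pad_top + h, pad_left:pad_left + w, :] = ifm  # the "write offset"
       return ofm

   out = pad_via_write_offset(np.ones((1, 4, 4, 3), "int8"), 2, 2, 1, 1)
   assert out.shape == (1, 8, 6, 3)
   ```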
   
   We tried to implement the PAD legalization using the Concatenate operation but encountered an error. It seems the cascader must be turned off for Concatenate to work: for example, the cascader is disabled in test_tflite_concat(), and if it is enabled we get the same error that we see with the Concatenate-based legalization.
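
   For reference, a minimal sketch of how the cascader would be turned off when building, assuming the "relay.ext.ethos-u.options" pass-config key and its "enable_cascader" field (as I understand the microNPU integration uses them):

   ```python
   import tvm

   # Disable the cascader for the microNPU codegen via the pass config
   # (key and field names are my assumption of the current integration).
   pass_config = {"relay.ext.ethos-u.options": {"enable_cascader": False}}
   with tvm.transform.PassContext(opt_level=3, config=pass_config):
       # ... build the partitioned module as usual inside this context ...
       pass
   ```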
   
   So far, the most feasible option seems to be to use several depthwise_conv2d operators when the padding exceeds [31, 31, 32, 32]; see the sketch below.
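
   A minimal sketch of the splitting idea (plain Python with a hypothetical helper, not the actual legalization code): each pad dimension that exceeds the hardware limit would be broken into chunks, with one depthwise_conv2d handling each chunk.

   ```python
   # Hypothetical helper: split a single pad amount into chunks that each fit
   # the hardware limit, e.g. a top pad of 70 with limit 31 becomes [31, 31, 8].
   def split_pad(pad: int, limit: int) -> list:
       chunks = []
       while pad > 0:
           chunks.append(min(pad, limit))
           pad -= chunks[-1]
       return chunks

   assert split_pad(70, 31) == [31, 31, 8]
   assert split_pad(20, 31) == [20]
   ```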
   
   But of course, I may be missing something here; perhaps there are other options?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org