Posted to discuss-archive@tvm.apache.org by nkami via Apache TVM Discuss <no...@discuss.tvm.ai> on 2021/10/27 14:49:08 UTC

[Apache TVM Discuss] [Questions] Support for Mali Valhalla


Hi! From my own testing, it seems that TVM does not currently support compiling models for a Mali GPU with the third-generation Valhall architecture. Do you know when support for it is planned?

Thanks





---
[Visit Topic](https://discuss.tvm.apache.org/t/support-for-mali-valhalla/11326/1) to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click here](https://discuss.tvm.apache.org/email/unsubscribe/079010580f2c2f07cdffa78bba3218303dfda7ec0cbbeef8baf32ee0501221ae).


Posted by nkami via Apache TVM Discuss <no...@discuss.tvm.ai>.

I'm not sure. I tested various models such as mobilenet_v1, and it failed on all of them (see [this](https://discuss.tvm.apache.org/t/fine-tuned-opencl-gives-incorrect-outputs/11227) post I opened a few weeks ago). However, when I use a device with a Bifrost GPU, it works for all the models.

From additional testing, the problem occurs even with PyTorch models containing only a single small 2D convolutional layer. I also tested a network with one fully connected layer and don't recall it producing errors (though I may have forgotten...).

In addition, for the simple single-layer networks the error was fairly rare (roughly 1 in 10 runs), but for the larger models the tuning almost always converged to an optimization that gave wrong outputs.





---
[Visit Topic](https://discuss.tvm.apache.org/t/support-for-mali-valhalla/11326/6) to respond.



Posted by Anastasia Stulova via Apache TVM Discuss <no...@discuss.tvm.ai>.

Do you get wrong output with a particular model/operator, or with all of them? Do you use tvmc or a custom script to run the auto-scheduler? I think I have observed something similar for some conv2d workloads, but on Bifrost (the 2nd generation of Mali). It seemed like a generic auto-scheduler problem, though, rather than something specific to Mali or GPUs in general. These cases are pretty hard to debug, because the same workloads can work fine on other targets simply because Ansor applies different mutations.
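For reference, a custom auto-scheduler script of the kind mentioned above typically looks something like the following sketch. This is not from the thread: the `mod`/`params` inputs are assumed to come from a Relay frontend importer, and the host triple, log file name, and trial count are placeholders.

```python
# Sketch: driving Ansor (tvm.auto_scheduler) from a custom script.
# Assumes `mod` and `params` come from a Relay frontend importer
# (e.g. relay.frontend.from_tflite); log name and trial count are placeholders.
import tvm
from tvm import relay, auto_scheduler

target = tvm.target.Target("opencl -device=mali",
                           host="llvm -mtriple=aarch64-linux-gnu")

# Extract tunable tasks from the model and tune them jointly.
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
tuner.tune(auto_scheduler.TuningOptions(
    num_measure_trials=200,
    measure_callbacks=[auto_scheduler.RecordToFile("mali_tuning.json")],
))

# Apply the best tuning records when building the model.
with auto_scheduler.ApplyHistoryBest("mali_tuning.json"):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target=target, params=params)
```

Replaying the resulting log against individual workloads can help isolate which task produces the wrong schedule.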





---
[Visit Topic](https://discuss.tvm.apache.org/t/support-for-mali-valhalla/11326/5) to respond.



Posted by nkami via Apache TVM Discuss <no...@discuss.tvm.ai>.

Hello, right now it seems that when you auto-schedule for Valhall, the compiled model often produces wrong outputs (significantly different from the original model's). If possible, I would like TVM to support auto-scheduling on devices with Valhall.

The model does run correctly if I just compile it without a tuning log and with -device=mali.
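The untuned compilation path described above can be sketched roughly as follows (not from the thread; `mod` and `params` are assumed to come from a frontend importer, and the host triple is an assumption for a 64-bit Arm board):

```python
# Sketch: building for Mali without any auto-scheduler log, so TVM
# falls back to the default TOPI schedules for the mali device.
import tvm
from tvm import relay

target = tvm.target.Target("opencl -device=mali",
                           host="llvm -mtriple=aarch64-linux-gnu")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
```

Since this path produces correct outputs, the miscompilation appears to come from the tuned schedules rather than the Mali codegen itself.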

Thank you for your help.





---
[Visit Topic](https://discuss.tvm.apache.org/t/support-for-mali-valhalla/11326/4) to respond.



Posted by Anastasia Stulova via Apache TVM Discuss <no...@discuss.tvm.ai>.

Hi, is there any particular feature of Valhall you are interested in? Otherwise, you can use the Mali target, which should generally work at least on the functionality side, though it might not be very performant.





---
[Visit Topic](https://discuss.tvm.apache.org/t/support-for-mali-valhalla/11326/3) to respond.



Posted by nkami via Apache TVM Discuss <no...@discuss.tvm.ai>.

Bumping for further visibility





---
[Visit Topic](https://discuss.tvm.apache.org/t/support-for-mali-valhalla/11326/2) to respond.



Posted by Anastasia Stulova via Apache TVM Discuss <no...@discuss.tvm.ai>.

Interesting, thanks for sharing. I was only able to run a TFLite model with Ansor on "-device=mali" on Valhall or Bifrost GPUs. I think it was something fairly simple like ResNet-18. It might be useful to examine the outputs of the operators individually.

I did notice that some depthwise_conv2d workloads returned the wrong result, and the outputs were all zeros. So it felt as though some boundary condition was not transformed correctly somewhere, and therefore nothing was ever written into the output buffers. However, I have not yet been able to narrow down the problem.
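One cheap way to catch the all-zero failure mode described above when examining operators individually is to compare each operator's output against a reference run. A minimal numpy-only sketch (the shapes and tolerances here are arbitrary illustrations, not from the thread):

```python
import numpy as np

def check_output(ref, out, rtol=1e-3, atol=1e-3):
    """Classify a compiled-operator output against a reference tensor,
    flagging the suspicious all-zero case separately from generic mismatches."""
    if not np.any(out) and np.any(ref):
        return "all-zero output"
    if not np.allclose(ref, out, rtol=rtol, atol=atol):
        return "mismatch"
    return "ok"

# Example: a reference depthwise-conv-like tensor vs. a zeroed buffer.
ref = np.ones((1, 8, 4, 4), dtype="float32")
print(check_output(ref, np.zeros_like(ref)))  # all-zero output
print(check_output(ref, ref.copy()))          # ok
```

Distinguishing "all zeros" from an ordinary numeric mismatch helps separate a never-written output buffer (the boundary-condition hypothesis above) from a subtly wrong schedule.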





---
[Visit Topic](https://discuss.tvm.apache.org/t/support-for-mali-valhalla/11326/7) to respond.
