Posted to commits@tvm.apache.org by "tqchen (via GitHub)" <gi...@apache.org> on 2023/04/16 19:22:20 UTC

[GitHub] [tvm] tqchen opened a new pull request, #14640: [Unity] Improve error message in webgpu request

tqchen opened a new pull request, #14640:
URL: https://github.com/apache/tvm/pull/14640

   Sometimes users get confused when running an old browser version; this adds a more precise error message.
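A minimal sketch of the kind of check this PR improves (hypothetical helper and message text, not the actual code in the TVM web runtime): distinguish a browser with no WebGPU entry point at all from one that exposes `navigator.gpu` but in an outdated form.

```typescript
// Hypothetical sketch of a more precise WebGPU availability message.
// The parameter stands in for `navigator.gpu`, which may be undefined
// in older browsers or when WebGPU is disabled.
function describeWebGPUError(
  gpu: { requestAdapter?: unknown } | undefined
): string | null {
  if (gpu === undefined) {
    // navigator.gpu missing entirely: browser too old or WebGPU disabled.
    return (
      "WebGPU is not available: navigator.gpu is undefined. " +
      "Please upgrade to a recent browser version with WebGPU enabled."
    );
  }
  if (typeof gpu.requestAdapter !== "function") {
    // Entry point exists but lacks the modern API surface.
    return (
      "navigator.gpu exists but requestAdapter is missing; " +
      "the browser likely ships an outdated WebGPU draft."
    );
  }
  return null; // WebGPU entry point looks usable.
}
```

In a page this would be called as `describeWebGPUError(navigator.gpu)` before attempting `requestAdapter()`, surfacing the upgrade hint instead of an opaque undefined-property error.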


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] junrushao merged pull request #14640: [Unity] Improve error message in webgpu request

Posted by "junrushao (via GitHub)" <gi...@apache.org>.
junrushao merged PR #14640:
URL: https://github.com/apache/tvm/pull/14640




[GitHub] [tvm] tvm-bot commented on pull request #14640: [Unity] Improve error message in webgpu request

Posted by "tvm-bot (via GitHub)" <gi...@apache.org>.
tvm-bot commented on PR #14640:
URL: https://github.com/apache/tvm/pull/14640#issuecomment-1510465897

   <!---bot-comment-->
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.
   
   <!--bot-comment-ccs-start-->
    * cc @quic-sanirudh <sub>See [#10317](https://github.com/apache/tvm/issues/10317) for details</sub><!--bot-comment-ccs-end-->
   
   <sub>Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)</sub>




Re: [PR] [Unity] Improve error message in webgpu request [tvm]

Posted by "CharlieFRuan (via GitHub)" <gi...@apache.org>.
CharlieFRuan commented on PR #14640:
URL: https://github.com/apache/tvm/pull/14640#issuecomment-1807299471

   Hi @DustinBrett, the demo now has three models (those with the `-1k` suffix) that should be able to run with 128MB: https://webllm.mlc.ai/




[GitHub] [tvm] DustinBrett commented on pull request #14640: [Unity] Improve error message in webgpu request

Posted by "DustinBrett (via GitHub)" <gi...@apache.org>.
DustinBrett commented on PR #14640:
URL: https://github.com/apache/tvm/pull/14640#issuecomment-1529069199

   Are these hard requirements? I am trying to get WebLLM going on Android, and the only requirement it can't meet is `maxStorageBufferBindingSize`.




Re: [PR] [Unity] Improve error message in webgpu request [tvm]

Posted by "beaufortfrancois (via GitHub)" <gi...@apache.org>.
beaufortfrancois commented on PR #14640:
URL: https://github.com/apache/tvm/pull/14640#issuecomment-1801794899

   Any updates on this issue?




[GitHub] [tvm] DustinBrett commented on pull request #14640: [Unity] Improve error message in webgpu request

Posted by "DustinBrett (via GitHub)" <gi...@apache.org>.
DustinBrett commented on PR #14640:
URL: https://github.com/apache/tvm/pull/14640#issuecomment-1529072020

   > The `maxStorageBufferBindingSize` requirement is likely due to the size of the models we are dealing with. I remember trying to halve it and getting an error, but you can try reducing it a bit.
   > 
   > Also, I think these constraints will be lifted in future versions; Dawn's Matrix channel would be a good place to ask.
   
   OK, thanks for the quick reply! I am using Chrome Canary v115 on my Samsung Galaxy S20, but the limit there seems to be only 128MB while 1024MB is required. I will keep an eye on the progress and am happy to hear this could be lifted in the future. After seeing people run [mlc-llm](https://github.com/mlc-ai/mlc-llm) on mobile, I am eager to do the same with WebLLM.




[GitHub] [tvm] tqchen commented on pull request #14640: [Unity] Improve error message in webgpu request

Posted by "tqchen (via GitHub)" <gi...@apache.org>.
tqchen commented on PR #14640:
URL: https://github.com/apache/tvm/pull/14640#issuecomment-1529069548

   The `maxStorageBufferBindingSize` requirement is likely due to the size of the models we are dealing with. I remember trying to halve it and getting an error, but you can try reducing it a bit.
   
   Also, I think these constraints will be lifted in future versions; Dawn's Matrix channel would be a good place to ask.
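The workaround discussed here (asking for a smaller limit rather than failing outright) can be sketched as a small pure helper (hypothetical names; the actual runtime may request limits differently) that clamps the storage-buffer limit we request to what the adapter reports it supports:

```typescript
// Hypothetical sketch: clamp the storage-buffer binding size we request
// to the adapter's reported maximum, instead of failing outright on
// devices (e.g. many Android GPUs) that cap it at 128MB.
function clampStorageLimit(
  wantedBytes: number,
  adapterMaxBytes: number
): { maxStorageBufferBindingSize: number; reduced: boolean } {
  const granted = Math.min(wantedBytes, adapterMaxBytes);
  return {
    maxStorageBufferBindingSize: granted,
    // Signal to the caller that the model may not fit as-is.
    reduced: granted < wantedBytes,
  };
}
```

In a browser, the result would feed into the device request, e.g. `adapter.requestDevice({ requiredLimits: { maxStorageBufferBindingSize: clamped.maxStorageBufferBindingSize } })`, with a warning shown when `reduced` is true.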

