Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/07/20 17:02:55 UTC

[GitHub] [incubator-tvm-vta] pasqoc commented on pull request #9: [Hardware][OpenCL] Intelfocl support

pasqoc commented on pull request #9:
URL: https://github.com/apache/incubator-tvm-vta/pull/9#issuecomment-661193715


   > > Thanks for the changes. Please apply the 0.0.2 and rename the vta target to something more specific, e.g. "arria10". Also there are some CI errors related to linting that could be addressed. Thanks!
   > 
   > Thank you very much! Sure. We will apply the 0.0.2 and address the linting errors.
   > However, I believe "arria10" is too restrictive here. The code should work for all devices supported by Intel OpenCL for FPGA, namely Intel Arria 10, Stratix V/10 and Cyclone V/10. So far we have tested it on both Arria 10 and Stratix 10 boards, and it worked.
   
   I agree with @remotego: arria10 seems too restrictive and would not do justice to what the code actually supports, i.e. Intel OpenCL-supported boards.
   
   That said, if VTA's implicit rule for adding new supported (and working) target devices is to name them after boards, like "pynq", "de10nano", "ultra96", etc., then we should continue doing so.
   
   But in this case I would suggest adding target entries for all devices that are known to work, that is arria10, stratix10, etc., and documenting the fact that these only work with OpenCL, not with VHLS or Chisel as pynq and de10nano do.
   
   To avoid an explosion and duplication of configurations, we should probably also think about separating the target parameters (target, hw_ver) in vta_config.json into a different file, vta_target.json, so that we can more flexibly specify different architecture configurations and target devices.
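
   As an illustration only (the vta_target.json file name is just the suggestion above, and the helper below is an assumption, not the actual VTA config API), a minimal Python sketch of how such a split could be consumed might look like this, with vta_config.json keeping the architecture parameters and vta_target.json keeping the device-specific ones:

       # Minimal sketch, assuming a split into two JSON files, e.g.
       #   vta_config.json : architecture parameters (GEMM shape, buffer sizes, ...)
       #   vta_target.json : {"TARGET": "intelfocl", "HW_VER": "0.0.2"}
       # The helper name and file layout are illustrative only.
       import json

       def load_vta_config(config_path="vta_config.json",
                           target_path="vta_target.json"):
           """Merge architecture parameters with device/target parameters."""
           with open(config_path) as f:
               cfg = json.load(f)
           with open(target_path) as f:
               target = json.load(f)
           # Target fields (e.g. TARGET, HW_VER) extend or override the base config.
           cfg.update(target)
           return cfg

       if __name__ == "__main__":
           print(load_vta_config())

   With something along these lines, adding a new Intel OpenCL board would only mean providing another small target file instead of duplicating the whole architecture configuration.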

