Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/12/08 16:26:13 UTC

[GitHub] [tvm] yongwww opened a new pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

yongwww opened a new pull request #7060:
URL: https://github.com/apache/tvm/pull/7060


   This PR implements the basic infrastructure for the RFC (https://discuss.tvm.apache.org/t/rfc-mlir-frontend/6473); we will add more operators after this PR is accepted. Thanks to Yanan for prototyping the Relay dialect and the conversion from MLIR Relay to Relay.
   
   
   @tqchen @zhiics MLIR and mlir-hlo are required on the CI machines for the build; could you please help install them (or give me access to the machines)? 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] yongwww commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
yongwww commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741706021


   @tqchen Thanks for the comment. 
   
   We chose to go with MLIR/HLO instead of XLA HLO because HLO operations work on tensors with static shapes (dynamic ops like [NMS](https://github.com/tensorflow/tensorflow/issues/42879) are not supported by XLA HLO), which limits op/model coverage. MLIR/HLO can represent dynamism, for example in the mhlo lowered from MLIR/TensorFlow. 
   
   Introducing a Relay dialect does bring some extra effort for adding Relay operators, and translation from MLIR/HLO to Relay is pretty similar to translation from MLIR/Relay to Relay. 
   
   Several dialects are already defined, like [MLIR/TensorFlow](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/mlir/tensorflow/ir) (NMS is in it now), [MLIR/TFLite](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/mlir/lite), [MLIR/HLO](https://github.com/tensorflow/mlir-hlo), [MLIR/ONNX](https://github.com/onnx/onnx-mlir/), etc. Having a Relay dialect would make the conversion from other potential MLIR dialects to Relay (via mlir/relay) easy: writing some MLIR passes/patterns would make the conversion between dialects happen under the MLIR conversion infra. Besides, folks might translate Relay to mlir/relay, even though personally I don't think we need to do this.
   
   If we expect there will be translations from other MLIR dialects to Relay in the future, I think having a Relay dialect is not a bad thing. I am also okay with going from mlir/hlo to Relay directly. Please let me know what you guys think. 
   
   cc: @yangjunpro @jroesch 
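The pattern-based dialect conversion described above can be illustrated roughly as follows. This is a hedged, hypothetical sketch in plain Python, not TVM or MLIR code: the op names, the `PATTERNS` table, and the `convert` driver are all invented for illustration, standing in for MLIR's DialectConversion rewrite patterns.

```python
# Hypothetical sketch (not a real TVM/MLIR API): dialect-to-dialect conversion
# expressed as a set of rewrite patterns, in the spirit of the MLIR conversion
# infra described in the comment. Each pattern matches an op from the source
# dialect and rewrites it into the target dialect.

# A pattern maps a source op name to a function producing the converted op.
# The op names and attribute handling here are illustrative only.
PATTERNS = {
    "mhlo.add":      lambda op: {"name": "relay.add",      "args": op["args"]},
    "mhlo.multiply": lambda op: {"name": "relay.multiply", "args": op["args"]},
    "mhlo.dot":      lambda op: {"name": "relay.nn.dense", "args": op["args"]},
}

def convert(ops):
    """Apply conversion patterns to a list of ops; ops without a matching
    pattern are kept unchanged (a real driver would mark them illegal)."""
    out = []
    for op in ops:
        pat = PATTERNS.get(op["name"])
        out.append(pat(op) if pat else dict(op))
    return out

module = [
    {"name": "mhlo.dot", "args": ["%x", "%w"]},
    {"name": "mhlo.add", "args": ["%0", "%b"]},
]
print([op["name"] for op in convert(module)])  # ['relay.nn.dense', 'relay.add']
```

In the real MLIR conversion infra these patterns would be C++ `ConversionPattern` subclasses applied by `applyPartialConversion`; the point of the sketch is only that adding a new dialect pair means adding patterns, not a new translator.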





[GitHub] [tvm] yongwww edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
yongwww edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741706021


   @tqchen Thanks for the comment. 
   
   We chose to go with MLIR/HLO instead of XLA HLO because HLO operations work on tensors with static shapes (dynamic ops like [NMS](https://github.com/tensorflow/tensorflow/issues/42879) are not supported by XLA HLO), which limits op/model coverage. MLIR/HLO can represent dynamism, for example in the mhlo lowered from MLIR/TensorFlow. 
   
   Introducing a Relay dialect does bring some extra effort for adding Relay operators, and translation from MLIR/HLO to Relay is pretty similar to translation from MLIR/Relay to Relay. 
   
   Several dialects are already defined, like [MLIR/TensorFlow](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/mlir/tensorflow/ir) (NMS is in it now), [MLIR/TFLite](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/mlir/lite), [MLIR/HLO](https://github.com/tensorflow/mlir-hlo), [MLIR/ONNX](https://github.com/onnx/onnx-mlir/), etc. Having a Relay dialect would make the conversion from other potential MLIR dialects to Relay (via mlir/relay) easy: writing some MLIR passes/patterns would make the conversion between dialects happen under the MLIR conversion infra. Besides, folks might translate Relay to mlir/relay, even though personally I don't think we need to do this.
   
   If we expect there will be translations from other MLIR dialects to Relay in the future, I think having a Relay dialect is not a bad thing. I am also okay with going from mlir/hlo to Relay directly. Please let me know what you guys think. 
   
   cc: @yangjunpro @jroesch @yidawang 





[GitHub] [tvm] xqdan commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
xqdan commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-785652300


   @yongwww @tqchen @zhiics I suggest we support a kind of graph IR with boundaries in Relay. Let me explain a little. XLA expands big ops into fine-grained primitive ops but loses high-level information, namely the boundaries of the big ops, and this brings lots of trouble to fusion, since it is very difficult to do fusion on such a huge graph with thousands of fine-grained primitive ops. But if we keep the boundaries when we expand/convert big ops into the Relay graph IR, we can avoid the problem above; besides, we can do whatever XLA can do with this graph IR with boundary info. With this design, TVM can be a more powerful deep learning compiler, and also a more powerful plugin optimizer for other AI frameworks.
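The boundary idea above can be sketched minimally. This is an illustrative Python toy, not actual Relay or XLA structures: the lowering table, the `boundary` tag, and both helper names are invented for the example.

```python
# Illustrative sketch of a "graph IR with boundary": when a big op is expanded
# into fine-grained primitive ops, each primitive keeps a tag naming the big op
# it came from, so a fusion pass can recover the original regions instead of
# searching a flat graph of thousands of primitives.
# All names here are hypothetical, not actual Relay/XLA structures.

def expand(big_op):
    """Expand one coarse op into primitives tagged with its boundary id."""
    lowering = {
        "softmax": ["exp", "sum", "divide"],
        "layer_norm": ["mean", "subtract", "multiply", "add"],
    }
    return [{"op": p, "boundary": big_op} for p in lowering[big_op]]

def fusion_regions(graph):
    """Group primitives back into fusible regions by their boundary tag."""
    regions = {}
    for node in graph:
        regions.setdefault(node["boundary"], []).append(node["op"])
    return regions

graph = expand("softmax") + expand("layer_norm")
print(fusion_regions(graph))
# {'softmax': ['exp', 'sum', 'divide'],
#  'layer_norm': ['mean', 'subtract', 'multiply', 'add']}
```

A fusion pass can then treat each recovered region as a unit instead of re-discovering fusible groups in the flat primitive graph.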





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-740757459


   Thanks @yongwww . I think it is worth discussing the approach a bit further. Given that the relay operators are already accessible as part of the API, and HLO and relay are pretty close, perhaps we want to start by directly translating HLO to relay in C++ without introducing the additional relay dialect step. The translation from MLIR/HLO to relay is going to be quite close to translation from a relay dialect to relay.
   
   This will help to reduce the number of places that are required to define a new operator. It will also likely make the control flow and recursion translation more streamlined. A transition to a relay dialect (if we decide to) should not be too hard later. We should also start to think more about a formal operator spec description, so we can generate the tablegen def later if necessary.
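The "formal operator spec" idea mentioned above could look something like the following. This is a hedged sketch only: the spec fields, the `OP_SPECS` table, and the emitted TableGen-like syntax are invented for illustration and are not TVM's actual operator schema.

```python
# Hypothetical sketch of a declarative operator spec: a single description of
# an operator from which per-binding definitions (Python stub, C++
# registration, TableGen def) could be generated, so adding an operator only
# touches one place. Attribute handling is omitted for brevity.

OP_SPECS = [
    {"name": "nn.conv2d", "inputs": ["data", "weight"], "attrs": ["strides", "padding"]},
    {"name": "add",       "inputs": ["lhs", "rhs"],     "attrs": []},
]

def emit_tablegen(spec):
    """Emit a TableGen-like op record from the spec (illustrative syntax)."""
    ins = ", ".join(f"Tensor:${i}" for i in spec["inputs"])
    record = spec["name"].replace(".", "_")
    return (f'def Relay_{record} : RelayOp<"{spec["name"]}"> '
            f'{{ let arguments = (ins {ins}); }}')

for spec in OP_SPECS:
    print(emit_tablegen(spec))
```

The same spec could drive other generators (e.g. a C++ registration stub), which is why streamlining the schema first lowers the cost of adding an MLIR dialect later.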
   





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-740757459


   Thanks @yongwww . I think it is worth discussing the approach a bit further. In particular, given that HLO and relay are pretty close, it perhaps makes sense to directly translate HLO to relay in C++ without introducing the additional relay dialect step (to reduce the number of places that are required to define a new operator) as a first step, so new operator coverage can be added quickly.
   





[GitHub] [tvm] yongwww commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
yongwww commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-1003870432


   I am going to close this PR for now; I am not working on it since it was marked as a low-priority one by the team.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] zhiics edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
zhiics edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741866746


   Thanks @yongwww for the great effort :) Both paths make sense to me. Adding Relay as a dialect would benefit other dialects, including other projects internally. But it does introduce another layer of transition for TVM. 
   
   Since converting through MLIR/HLO to Relay requires almost the same amount of effort, we can probably go this route first. We could either add Relay as a dialect later after the op schema is done, as @tqchen mentioned, or go through a dialect to MLIR/HLO and then to Relay first if needed. As of now, we probably don't need many dialects to Relay, e.g. we have direct parsers from ONNX/TFLite to Relay.
   
   If you agree, let's discuss offline to set up the CI so that you can move forward.





[GitHub] [tvm] yongwww commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
yongwww commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-892141566


   @u99127 Thanks for trying it out. I will hopefully work on this in the next week or two; sorry, I was working on some other things before. CMake and llvm-mlir need to be upgraded for this project. I have an independent private repo for this project; I can add you to it if you are interested. 





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-740757459


   Thanks @yongwww . I think it is worth discussing the approach a bit further. In particular, given that HLO and relay are pretty close, it perhaps makes sense to directly translate HLO to relay in C++ without introducing the additional relay dialect step (to reduce the number of places that are required to define a new operator) as a first step, so new operator coverage can be added quickly. It will also likely make the control flow and recursion translation more streamlined.
   





[GitHub] [tvm] u99127 edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
u99127 edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-892130966


   Are there any instructions on how to build mlir-hlo and llvm to experiment with this PR? I'm personally curious to see how this works, but I'm not seeing any instructions on how to configure the mlir-hlo project alongside this work. 
   
   I've tried the build instructions from here https://github.com/tensorflow/mlir-hlo 
   
   > build_tools/build_mlir.sh ${PWD}/llvm-project/ ${PWD}/llvm-build
   
   Then to build this PR, I rebased the config.cmake with trivial rebasing issues and then set
   
   set(USE_MLIR /home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build)
   # Whether to use MLIR HLO
   set(USE_MHLO ON)
   
   My config.cmake has : 
   
   set(USE_MLIR /home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build)
   
   # Whether to use MLIR HLO
   set(USE_MHLO ON)
   
   I realised I also needed to set LLVM_PROJ_SRC and LLVM_PROJ_BUILD in my environment to try and build this which I've set to 
   
   LLVM_PROJ_SRC=/home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/
   LLVM_PROJ_BUILD=/home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build/
   
   
   And then doing a cmake .. gives me the following error.
   
   
   CMake Error at cmake/modules/contrib/MLIR.cmake:73 (message):
     MLIREDSC not found, did you forget to build llvm-project?
   Call Stack (most recent call first):
     cmake/modules/contrib/MLIR.cmake:87 (find_mlir_lib)
     CMakeLists.txt:413 (include)
   
   
   regards
   Ramana
   
   
   





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741822077


   First of all, I do not disagree with the general goal of having a relay dialect in MLIR. The main discussion point is how we can get to that point.
   
   Right now we need to expose relay operators in a few places: (1) python; (2) c++ registration; (3) rust; (4) tablegen (if we go with the mlir route). Adding a new point of exposure would indeed increase the complexity of adding a new relay operator.
   
   If we can first streamline the operator schema itself (perhaps related to the object def schema https://discuss.tvm.apache.org/t/rfc-tvm-object-schema-dsl/7930), then there will be less resistance to adding an MLIR relay dialect later. 
   
   Right now, I feel that direct translation might be a path of less resistance, given that: (1) we can always switch to a dialect later and most of the code is similar; (2) it would be easier to quickly expand the support; (3) we might need some effort to figure out the control flow support/effect translation first, then think about how a complete dialect can be built.
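The control-flow concern in point (3) can be made concrete with a toy example. This is hypothetical Python, not actual HLO or Relay IR: XLA/HLO expresses loops as a structured `while(cond, body, init)` op, while Relay expresses iteration as a recursive function, and a translator has to bridge the two shapes.

```python
# Illustrative sketch of translating a structured while-op into recursion, the
# shape a Relay translator might produce for HLO's `while`. The function names
# and state encoding are invented for this example.

def translate_while(cond, body, init):
    """Run a structured while(cond, body, init) as a recursive function."""
    def loop(state):
        # Recursive form: call loop again on the updated state while the
        # condition holds, otherwise return the final state.
        return loop(body(state)) if cond(state) else state
    return loop(init)

# A toy HLO-style while: sum the integers 1..5 with loop state (i, acc).
result = translate_while(
    cond=lambda s: s[0] <= 5,
    body=lambda s: (s[0] + 1, s[1] + s[0]),
    init=(1, 0),
)
print(result)  # (6, 15)
```

Effects and types add further wrinkles on top of this, which is why the comment suggests streamlining these parts before committing to a full dialect.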
   
   





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-740757459


   Thanks @yongwww . I think it is worth discussing the approach a bit further. Given that the relay operators are already accessible as part of the API, and HLO and relay are pretty close, perhaps we want to start by directly translating HLO to relay in C++ without introducing the additional relay dialect step. The translation from MLIR/HLO to relay is going to be quite close to translation from a relay dialect to relay.
   
   This will help to reduce the number of places that are required to define a new operator. It will also likely make the control flow and recursion translation more streamlined. We should also start to think more about a formal operator spec description, so we can generate the tablegen def later if necessary.
   





[GitHub] [tvm] zhiics edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
zhiics edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741866746


   Thanks @yongwww for the great effort :) Both paths make sense to me. Adding Relay as a dialect would benefit other dialects, including other projects internally. But it does introduce another layer of transition for TVM. Since converting through MLIR/HLO to Relay requires almost the same amount of effort, we can probably go this route first. We could either add Relay as a dialect later after the op schema is done, as @tqchen mentioned, or go through a dialect to MLIR/HLO and then to Relay first if needed. As of now, we probably don't need many dialects to Relay, e.g. we have direct parsers from ONNX/TFLite to Relay. If you agree, let's discuss offline to set up the CI so that you can move forward.





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-740757459


   Thanks @yongwww . I think it is worth discussing the approach a bit further. Given that the relay operators are already accessible as part of the API, and HLO and relay are pretty close, perhaps we want to start by directly translating HLO to relay in C++ without introducing the additional relay dialect step. This will help to reduce the number of places that are required to define a new operator. It will also likely make the control flow and recursion translation more streamlined.
   





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741822077


   First of all, I do not disagree with the general goal of having a relay dialect in MLIR. The main discussion point is how we can get to that point.
   
   Right now we need to expose relay operators in a few places: (1) python; (2) c++ registration; (3) rust; (4) tablegen (if we go with the mlir route).
   
   If we can first streamline the operator schema itself (perhaps related to the object def schema https://discuss.tvm.apache.org/t/rfc-tvm-object-schema-dsl/7930), then there will be less resistance to adding an MLIR relay dialect later. 
   
   Right now, I feel that direct translation might be a path of less resistance, given that: (1) we can always switch to a dialect later and most of the code is similar; (2) it would be easier to quickly expand the support; (3) we might need some effort to figure out the control flow support/effect translation first, then think about how a complete dialect can be built.
   
   





[GitHub] [tvm] yongwww edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
yongwww edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741706021


   @tqchen Thanks for the comment. 
   
   We chose to go with MLIR/HLO instead of XLA HLO because HLO operations work on tensors with static shapes (dynamic ops like [NMS](https://github.com/tensorflow/tensorflow/issues/42879) are not supported by XLA HLO), which limits op/model coverage. MLIR/HLO can represent dynamism, for example in the mhlo lowered from MLIR/TensorFlow. 
   
   Introducing a Relay dialect does bring some extra effort for adding Relay operators, and translation from MLIR/HLO to Relay is pretty similar to translation from MLIR/Relay to Relay. 
   
   Several dialects are already defined, like [MLIR/TensorFlow](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/mlir/tensorflow/ir) (NMS is in it now), [MLIR/TFLite](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/mlir/lite), [MLIR/HLO](https://github.com/tensorflow/mlir-hlo), [MLIR/ONNX](https://github.com/onnx/onnx-mlir/), etc. Having a Relay dialect would make the conversion from other potential MLIR dialects to Relay (via mlir/relay) easy: writing some MLIR passes/patterns would make the conversion between dialects happen under the MLIR conversion infra. Besides, folks might translate Relay to mlir/relay, even though personally I don't think we need to do this.
   
   If we expect there will be translations from other MLIR dialects to Relay in the future, I think having a Relay dialect is not a bad thing. I am also okay with going from mlir/hlo to Relay directly. Please let me know what you guys think. 
   
   cc: @yangjunpro @jroesch @yidawang @zhiics 





[GitHub] [tvm] u99127 edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
u99127 edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-892130966


   Are there any instructions on how to build mlir-hlo and llvm to experiment with this PR? I'm personally curious to see how this works, but I'm not seeing any instructions on how to configure the mlir-hlo project alongside this work. 
   
   I've tried the build instructions from the mlir-hlo GitHub project. 
   
   build_tools/build_mlir.sh ${PWD}/llvm-project/ ${PWD}/llvm-build
   
   Then to build this PR, I rebased the config.cmake with trivial rebasing issues and then set
   
   set(USE_MLIR /home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build)
   # Whether to use MLIR HLO
   set(USE_MHLO ON)
   
   My config.cmake has : 
   
   set(USE_MLIR /home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build)
   set(USE_MHLO ON)
   
   I realised I also needed to set LLVM_PROJ_SRC and LLVM_PROJ_BUILD in my environment to try and build this which I've set to 
   
   LLVM_PROJ_SRC=/home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/
   LLVM_PROJ_BUILD=/home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build/
   
   
   And then doing a cmake .. gives me the following error.
   
   
   CMake Error at cmake/modules/contrib/MLIR.cmake:73 (message):
     MLIREDSC not found, did you forget to build llvm-project?
   Call Stack (most recent call first):
     cmake/modules/contrib/MLIR.cmake:87 (find_mlir_lib)
     CMakeLists.txt:413 (include)
   
   
   regards
   Ramana
   
   
   





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741822077


   Thanks @yongwww . First of all, I do not disagree with the general goal of having a relay dialect in MLIR. The main discussion point is how we can get to that point.
   
   Right now we need to expose relay operators in a few places: (1) python; (2) c++ registration; (3) rust; (4) tablegen (if we go with the mlir route). Adding a new point of exposure would indeed increase the complexity of adding a new relay operator.
   
   If we can first streamline the operator schema itself (perhaps related to the object def schema https://discuss.tvm.apache.org/t/rfc-tvm-object-schema-dsl/7930), then there will be less resistance to adding an MLIR relay dialect later. 
   
   Right now, a direct translation might be a path of less resistance, given that: (1) we can always switch to a dialect later and most of the code is similar; (2) it would be easier to quickly expand the support; (3) we might need some effort to figure out the control flow support/effect translation first, then think about how a complete dialect can be built. 
   
   Adding support for translation from other MLIR dialects is not hard, as most of them share the same graph structure, but it would indeed benefit from a dialect. The main complexity, though, may not be in the op translation part but in the translation of types, control flow, and effects. So it would be nice to streamline these parts, starting from a path of less resistance.
   
   
   
   





[GitHub] [tvm] u99127 edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
u99127 edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-892130966


   Are there any instructions on how to build mlir-hlo and llvm to experiment with this PR? I'm personally curious to see how this works, but I'm not seeing any instructions on how to configure the mlir-hlo project alongside this work. 
   
   I've tried the build instructions from the mlir-hlo GitHub project. 
   
   build_tools/build_mlir.sh ${PWD}/llvm-project/ ${PWD}/llvm-build
   
   Then to build this PR, I rebased the config.cmake with trivial rebasing issues and then set
   
   set(USE_MLIR /home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build)
   set(USE_MHLO ON)
   
   I realised I also needed to set LLVM_PROJ_SRC and LLVM_PROJ_BUILD in my environment to try and build this which I've set to 
   
   LLVM_PROJ_SRC=/home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/
   LLVM_PROJ_BUILD=/home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build/
   
   
   And then doing a cmake .. gives me the following error.
   
   
   CMake Error at cmake/modules/contrib/MLIR.cmake:73 (message):
     MLIREDSC not found, did you forget to build llvm-project?
   Call Stack (most recent call first):
     cmake/modules/contrib/MLIR.cmake:87 (find_mlir_lib)
     CMakeLists.txt:413 (include)
   
   
   regards
   Ramana
   
   
   





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-740757459


   Thanks @yongwww . I think it is worth discussing the approach a bit further. In particular, given that HLO and relay are pretty close, does it make sense to directly translate HLO to relay in C++ without introducing the additional relay dialect step (to reduce the number of places that are required to define a new operator)?
   





[GitHub] [tvm] zhiics commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
zhiics commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741866746


   Thanks @yongwww for the great effort :) Both paths make sense to me. Adding Relay as a dialect would benefit other dialects, including other projects internally. But it does introduce another layer of transition for TVM. Since converting through MLIR/HLO to Relay requires almost the same amount of effort, we can probably go this route first. We could either add Relay as a dialect later after the op schema is done, as @tqchen mentioned, or go through a dialect to HLO and then to Relay first if needed. As of now, we probably don't need many dialects to Relay, e.g. we have direct parsers from ONNX/TFLite to Relay. If you agree, let's discuss offline to set up the CI so that you can move forward.





[GitHub] [tvm] FrozenGene commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741841901


   I agree with TQ's point. The main goal of this PR is to cover more models (including dynamic ones). However, introducing TableGen will add overhead for us. We don't have an urgent requirement to interact with other MLIR dialects, and doing so is beyond the goal of introducing an MLIR frontend, which shouldn't add more development complexity to Relay.





[GitHub] [tvm] FrozenGene edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
FrozenGene edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741841901


   I agree with @tqchen's point. The main goal of this PR is to cover more models (including dynamic ones). However, introducing TableGen will add overhead for us. We don't have an urgent requirement to interact with other MLIR dialects, and doing so is beyond the goal of introducing an MLIR frontend, which shouldn't add more development complexity to Relay.





[GitHub] [tvm] zhiics edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
zhiics edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741866746


   Thanks @yongwww for the great effort :) Both paths make sense to me. Adding Relay as a dialect would benefit other dialects, including other projects internally, but it does introduce another layer of transition for TVM. Since converting directly from MLIR/HLO to Relay takes almost the same amount of effort, we can probably go this route first. We could either add Relay as a dialect later, after the op schema is done as @tqchen mentioned, or go from a dialect to MLIR/HLO and then to Relay first if it is needed. As of now, we probably don't need many dialects lowered to Relay, e.g. we already have direct parsers from ONNX/TFLite to Relay. If you agree, let's discuss offline to set up the CI so that you can move forward.





[GitHub] [tvm] yongwww commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
yongwww commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-742194668


   Thank you all! @tqchen @FrozenGene @zhiics That makes sense to me. I am going to update the code to go with the direct MLIR/HLO-to-Relay path accordingly, and will ask @zhiics for help setting up CI. Perhaps we can discuss the Relay dialect later, after the op schema is done.





[GitHub] [tvm] tqchen commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-741822077


   First of all, I do not disagree with the general goal of having a Relay dialect in MLIR. The main discussion point is how we get to that point.
   
   Right now we need to expose Relay operators in a few places: (1) Python; (2) C++ registration; (3) Rust; (4) TableGen (if we go with the MLIR route).
   
   If we can first streamline the operator schema itself (perhaps related to the object def schema https://discuss.tvm.apache.org/t/rfc-tvm-object-schema-dsl/7930), then there will be less resistance to adding an MLIR Relay dialect later. 
   
   Right now, I feel that direct translation might be the path of less resistance, given that: (1) we can always switch to a dialect later and most of the code is similar; (2) it would be easier to quickly expand the support; (3) we might need some effort to figure out control flow support/effects first, and then think about how a complete dialect can be built.
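   To make the schema idea concrete, here is a purely hypothetical sketch of what a single declarative operator record might look like, with a generator for one of the four targets (a TableGen stub). None of the names below come from TVM or the PR; this is not TVM's actual schema DSL, just an illustration of why one shared record would reduce the per-operator cost.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OpSchema:
    """Hypothetical single source of truth for one Relay operator."""
    name: str
    inputs: List[str]
    attrs: List[str] = field(default_factory=list)

def to_tablegen(op: OpSchema) -> str:
    """Emit a (simplified, invented) MLIR TableGen stub from the schema."""
    args = ", ".join(f"Tensor:${i}" for i in op.inputs)
    return (
        f'def Relay_{op.name.capitalize()}Op : Relay_Op<"{op.name}"> {{\n'
        f"  let arguments = (ins {args});\n"
        "  let results = (outs Tensor:$output);\n"
        "}"
    )

conv = OpSchema(name="conv2d", inputs=["data", "weight"], attrs=["strides", "padding"])
print(to_tablegen(conv))
```

   The same record could in principle drive the Python, C++, and Rust registrations as well, which is the "less resistance" point: the dialect definitions become generated artifacts instead of a fourth hand-written surface.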
   
   





[GitHub] [tvm] tqchen commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-740757459


   Thanks @yongwww. I think it is worth discussing the approach a bit further. In particular, given that HLO and Relay are pretty close, does it make sense to directly translate HLO to Relay in C++, without introducing the additional Relay dialect, to remove one level of indirection?





[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
tqchen edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-740757459


   Thanks @yongwww. I think it is worth discussing the approach a bit further. Given that the Relay operators are already accessible as part of the API, and that HLO and Relay are pretty close, perhaps we want to start by directly translating HLO to Relay in C++ without introducing the additional Relay dialect step. The translation from MLIR/HLO to Relay is going to be quite close to a translation from a Relay dialect to Relay.
   
   This will help reduce the number of places where a new operator must be defined, given that Relay's operator set is growing. It will also likely make the control flow and recursion translation more streamlined. 
   
   A transition to a Relay dialect (if we decide to make one) should not be too hard later; in that case, we should also start thinking more about a formal operator spec description, so we can generate the TableGen definitions later if necessary.
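   As a toy illustration of the direct path (sketched in Python rather than the C++ the converter would actually use, and with an invented mapping table — none of these entries are taken from the PR), direct translation is essentially a per-op lookup plus attribute conversion:

```python
# Hypothetical mhlo -> Relay op-name table. The real converter works on the
# MLIR and TVM C++ APIs and must also map attributes, types, and regions.
MHLO_TO_RELAY = {
    "mhlo.add": "add",
    "mhlo.multiply": "multiply",
    "mhlo.dot": "nn.dense",
}

def convert_op(mhlo_op: str) -> str:
    """Look up the Relay op corresponding to an mhlo op name."""
    try:
        return MHLO_TO_RELAY[mhlo_op]
    except KeyError:
        # Unsupported ops fail loudly instead of silently passing through.
        raise NotImplementedError(f"no Relay lowering registered for {mhlo_op}")

print(convert_op("mhlo.dot"))  # -> nn.dense
```

   Switching to a Relay dialect later would keep much the same table-like structure, just expressed as MLIR conversion patterns instead of a hand-rolled lookup.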
   





[GitHub] [tvm] u99127 commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
u99127 commented on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-892130966


   Are there any instructions on how to build mlir-hlo and LLVM to experiment with this PR? I'm personally curious to see how this works, but I don't see any instructions on how to configure the mlir-hlo project alongside this work. 
   
   I've tried the build instructions from here https://github.com/tensorflow/mlir-hlo 
   
   > build_tools/build_mlir.sh ${PWD}/llvm-project/ ${PWD}/llvm-build
   
   and then setting LLVM_PROJ_SRC and LLVM_PROJ_BUILD in my environment to try and build this. 
   
   
   
   regards
   Ramana
   
   
   





[GitHub] [tvm] u99127 edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
u99127 edited a comment on pull request #7060:
URL: https://github.com/apache/tvm/pull/7060#issuecomment-892130966


   Are there any instructions on how to build mlir-hlo and LLVM to experiment with this PR? I'm personally curious to see how this works, but I don't see any instructions on how to configure the mlir-hlo project alongside this work. 
   
   I've tried the build instructions from the mlir-hlo GitHub project. 
   
   build_tools/build_mlir.sh ${PWD}/llvm-project/ ${PWD}/llvm-build
   
   Then, to build this PR, I rebased config.cmake (with only trivial rebasing issues), and my config.cmake has:
   
   set(USE_MLIR /home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build)
   # Whether to use MLIR HLO
   set(USE_MHLO ON)
   
   I realised I also needed to set LLVM_PROJ_SRC and LLVM_PROJ_BUILD in my environment to build this, which I've set to: 
   
   LLVM_PROJ_SRC=/home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/
   LLVM_PROJ_BUILD=/home/ramana/work/tvm/tvm-mlir-trial-with-hlo/mlir-hlo/mlir-hlo/llvm-build/
   
   
   And then running cmake .. gives me the following error:
   
   
   CMake Error at cmake/modules/contrib/MLIR.cmake:73 (message):
     MLIREDSC not found, did you forget to build llvm-project?
   Call Stack (most recent call first):
     cmake/modules/contrib/MLIR.cmake:87 (find_mlir_lib)
     CMakeLists.txt:413 (include)
   
   
   regards
   Ramana
   
   
   





[GitHub] [tvm] yongwww closed pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

Posted by GitBox <gi...@apache.org>.
yongwww closed pull request #7060:
URL: https://github.com/apache/tvm/pull/7060


   

