Posted to discuss-archive@tvm.apache.org by Yong Huang via TVM Discuss <no...@discuss.tvm.ai> on 2020/05/20 03:45:03 UTC

[TVM Discuss] [Questions] Why is LSTM implemented repeatedly for every front-end?


Hi, I notice that LSTM is implemented repeatedly for every front-end (Keras, ONNX, etc.), and the implementations look similar to each other. I'm wondering whether we could make LSTM an op in Relay, or at least share a unified implementation across the different front-ends.
I understand that different front-ends may have slightly different schema definitions for LSTM, but ONNX has managed to provide an LSTM op, so I think it's reasonable to do so.
The same goes for other high-level but popular ops like GRU, Attention, etc.
I'm relatively new to TVM, so I hope I'm not misreading the code...
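
To make it concrete, here is roughly what I have in mind: a single helper built out of existing Relay ops that every frontend converter could call instead of re-deriving the gate math. This is just a rough sketch, not actual TVM code; the name `lstm_cell`, the (i, f, c, o) gate order, and the weight/bias layout are things I made up for illustration, and the real front-ends differ in exactly those details.

```python
from tvm import relay


def lstm_cell(x_t, h_prev, c_prev, w_ih, w_hh, b_ih, b_hh):
    """One LSTM step composed from existing Relay ops (sketch only).

    x_t:    (batch, input_size)    current input
    h_prev: (batch, hidden_size)   previous hidden state
    c_prev: (batch, hidden_size)   previous cell state
    w_ih:   (4*hidden, input_size) input-to-hidden weights
    w_hh:   (4*hidden, hidden)     hidden-to-hidden weights
    b_ih, b_hh: (4*hidden,)        biases
    """
    # Fused gate pre-activations: (batch, 4*hidden)
    gates = relay.add(relay.nn.dense(x_t, w_ih), relay.nn.dense(h_prev, w_hh))
    gates = relay.add(gates, relay.add(b_ih, b_hh))

    # Split into the four gates; the (i, f, c, o) order is an assumption here.
    g = relay.split(gates, indices_or_sections=4, axis=1)
    i = relay.sigmoid(g[0])
    f = relay.sigmoid(g[1])
    c_tilde = relay.tanh(g[2])
    o = relay.sigmoid(g[3])

    # Standard LSTM state update.
    c_next = relay.add(relay.multiply(f, c_prev), relay.multiply(i, c_tilde))
    h_next = relay.multiply(o, relay.tanh(c_next))
    return h_next, c_next


# How a frontend converter might call it (shapes are made up):
batch, in_size, hidden = 1, 64, 128
x = relay.var("x", shape=(batch, in_size))
h = relay.var("h", shape=(batch, hidden))
c = relay.var("c", shape=(batch, hidden))
w_ih = relay.var("w_ih", shape=(4 * hidden, in_size))
w_hh = relay.var("w_hh", shape=(4 * hidden, hidden))
b_ih = relay.var("b_ih", shape=(4 * hidden,))
b_hh = relay.var("b_hh", shape=(4 * hidden,))
h_next, c_next = lstm_cell(x, h, c, w_ih, w_hh, b_ih, b_hh)
func = relay.Function([x, h, c, w_ih, w_hh, b_ih, b_hh],
                      relay.Tuple([h_next, c_next]))
```

Each frontend would then only need to map its own gate ordering and weight layout onto this one helper, rather than rebuilding the whole cell.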

Thanks





---
[Visit Topic](https://discuss.tvm.ai/t/why-lstm-is-implemented-repeatedly-for-every-front-end/6737/1) to respond.
