Posted to commits@tvm.apache.org by "jiangjiajun (via GitHub)" <gi...@apache.org> on 2023/04/12 02:59:01 UTC

[GitHub] [tvm] jiangjiajun commented on a diff in pull request #14575: [Frontend][Paddle] [PaddlePaddle Hackathon 4] add attribute support for dropout/gelu/hard_sigmoid/pixel_shuffle

jiangjiajun commented on code in PR #14575:
URL: https://github.com/apache/tvm/pull/14575#discussion_r1163528537


##########
python/tvm/relay/frontend/paddlepaddle.py:
##########
@@ -502,7 +502,9 @@ def convert_dropout(g, op, block):
     """Operator converter for dropout."""
 
     x = g.get_node(op.input("X")[0])
-    g.add_node(op.output("Out")[0], x)
+    dropout_prob = op.attr("dropout_prob")
+    out = _op.nn.dropout(x, dropout_prob)
+    g.add_node(op.output("Out")[0], out)

Review Comment:
   There are two modes in dropout; refer to https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Dropout_cn.html#dropout for more details.
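   For reference, here is a minimal sketch in plain Python (outside TVM; the function name is hypothetical, not part of the PR) of the inference-time behavior of the two modes described in the linked Paddle docs, assuming the documented semantics of `upscale_in_train` and `downscale_in_infer`:

   ```python
   def paddle_dropout_infer(x, dropout_prob, mode="upscale_in_train"):
       """Inference-time behavior of Paddle's two dropout modes (sketch).

       - "upscale_in_train": activations were scaled by 1/(1-p) during
         training, so inference is an identity op.
       - "downscale_in_infer": no scaling during training, so inference
         multiplies the output by (1 - p).
       """
       if mode == "upscale_in_train":
           return x
       elif mode == "downscale_in_infer":
           return x * (1.0 - dropout_prob)
       raise ValueError(f"unknown dropout mode: {mode}")
   ```

   Under this reading, a converter targeting inference would only emit a multiply for `downscale_in_infer` and pass the input through otherwise.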
   
   



##########
python/tvm/relay/frontend/paddlepaddle.py:
##########
@@ -827,10 +829,26 @@ def convert_gelu(g, op, block):
     """Operator converter for gelu."""
 
     x = g.get_node(op.input("X")[0])
-    out = x * (
-        _expr.const(0.5, dtype="float32")
-        + _op.erf(x * _expr.const(0.5**0.5, dtype="float32")) * _expr.const(0.5, dtype="float32")
-    )
+    approximate = op.attr("approximate")

Review Comment:
   There's no need to implement the `approximate` strategy; it is just a strategy to speed up computation in Paddle.
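   For context, the two GELU formulations can be sketched in plain Python (function names hypothetical; the tanh variant is the standard approximation Paddle's `approximate=True` refers to, assumed here for illustration):

   ```python
   import math

   def gelu_exact(x):
       # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))
       return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

   def gelu_tanh_approx(x):
       # Tanh-based approximation:
       # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
       c = math.sqrt(2.0 / math.pi)
       return 0.5 * x * (1.0 + math.tanh(c * (x + 0.044715 * x ** 3)))
   ```

   The two agree closely over typical activation ranges, which is why lowering both to the exact erf-based form in the frontend is a reasonable choice.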


