Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/03/05 08:02:44 UTC

[GitHub] [incubator-tvm] lfengad opened a new pull request #4990: [Relay][Topi] BatchNorm support with run-time mean and variance calculation

lfengad opened a new pull request #4990: [Relay][Topi] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990
 
 
   We observe that a great number of TensorFlow models used in our production environment invoke the FusedBatchNorm operator, and many of them use it in "is_training" mode for model inference. In "is_training" mode, the mean and variance are calculated dynamically from the run-time data rather than being pre-defined. However, the current BatchNorm in TVM requires that the mean and variance be given as non-empty tensors.
   
   We add support for BatchNorm in "is_training" mode so that it can dynamically calculate the mean and variance when they are not given. We first check the mean and variance nodes for fused_batch_norm in the TensorFlow frontend and annotate them if they are empty. Then, according to the annotation, we add the necessary nodes for the mean and variance calculation in the BatchNormToInferUnpack function, which arranges the BatchNorm inference.
   
   In our current implementation, the annotations of the empty mean and variance are added into the name_hint of the corresponding variable nodes. This solution is simple and does not require modifying the attributes of the relay operator batch_norm. Alternatively, we could add a bool attribute "is_training" to the relay operator batch_norm and set it to true when the mean and variance are empty. Then, according to the operator's attributes, we would decide in BatchNormToInferUnpack whether or not to add the nodes for calculating the mean and variance. This solution requires modifying the relay operator batch_norm.
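   For reference, the training-mode computation that has to be inserted when the mean and variance are missing can be sketched in plain NumPy. This is only a minimal illustration of FusedBatchNorm's training-mode semantics for NHWC input; the function name and the epsilon default are ours for the example, not TVM's:

   ```python
   import numpy as np

   def fused_batch_norm_training(x, scale, offset, epsilon=1e-3):
       """Training-mode batch norm on NHWC input: mean and variance are
       computed from the batch itself (reduced over N, H, W), not supplied."""
       mean = x.mean(axis=(0, 1, 2))   # per-channel batch mean
       var = x.var(axis=(0, 1, 2))     # per-channel batch variance
       return (x - mean) / np.sqrt(var + epsilon) * scale + offset

   x = np.random.rand(1, 12, 12, 32).astype("float32")
   scale = np.random.rand(32).astype("float32")
   offset = np.random.rand(32).astype("float32")
   y = fused_batch_norm_training(x, scale, offset)
   # After normalization each output channel is centered at its offset value.
   ```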
   
   Any suggestions are welcome! @tqchen @FrozenGene

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388918957
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,64 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm testcases without given mean and variance
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        layout = None
+        mod, params = relay.frontend.from_tensorflow(constant_graph, layout=layout, outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             target_host=None,
 
 Review comment:
   `target_host=None` is unnecessary, because its default value is already `None`.


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r389227393
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm testcases without given mean and variance
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def verify_fused_batch_norm(shape):
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=shape, name='input')
+        alpha = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(*shape)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-1)
 
 Review comment:
   Why change to 1e-1? Would atol=1e-3, rtol=1e-3 not suffice?


[GitHub] [incubator-tvm] FrozenGene merged pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene merged pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990
 
 
   


[GitHub] [incubator-tvm] lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595611551
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   > 
   > I think our pr could remove `name_hint` too.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has the possibility to hold the given pre-defined constant values and thus cannot be replaced with Mean \ Variance.
   > 
   > Could you give us an example of this condition? I can only imagine models having either empty or fully pre-defined values. So we should only calculate it by calling `Mean` / `Variance` fed by `data`, or keep our current implementation of `BatchNormToInferUnpack`.
   
   Thanks for the discussion! Following it, I have rewritten the code in the newest commit. This time the function `BatchNormToInferUnpack` is not modified; we only modify the TensorFlow frontend for `_fused_batch_norm`. If `mean` and `variance` are empty, we directly add `Mean` and `Variance` relay operators to the frontend graph before the `batch_norm` operator, without touching the `batch_norm` operator at all.
   Thank you for the suggestions!


[GitHub] [incubator-tvm] lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r389230801
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm testcases without given mean and variance
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def verify_fused_batch_norm(shape):
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=shape, name='input')
+        alpha = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(*shape)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-1)
 
 Review comment:
   Yeah, it works now! I had overlooked the atol argument; its default setting is too small. Sorry about that!
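   For context, NumPy's `assert_allclose` (whose semantics TVM's testing helper follows, as far as we can tell) passes when `|actual - desired| <= atol + rtol * |desired|`. For near-zero desired values the relative term vanishes, so only a nonzero atol provides any slack. A small illustration (the numbers are made up for the example):

   ```python
   import numpy as np

   # assert_allclose passes when |actual - desired| <= atol + rtol * |desired|.
   desired = np.array([1e-6])
   actual = desired + 5e-4   # tiny absolute error, but a huge *relative* error

   # rtol alone fails: tolerance is only 1e-3 * 1e-6 (atol defaults to 0).
   try:
       np.testing.assert_allclose(actual, desired, rtol=1e-3)
       print("rtol alone passed")
   except AssertionError:
       print("rtol alone failed")

   # A nonzero atol supplies the needed slack: 5e-4 <= 1e-3 + 1e-3 * 1e-6.
   np.testing.assert_allclose(actual, desired, rtol=1e-3, atol=1e-3)
   print("rtol + atol passed")
   ```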


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388952812
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,63 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm testcases without given mean and variance
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-3)
+
+if __name__ == "__main__":
+    test_fused_batch_norm()
 
 Review comment:
   Sorry, I have one more comment. How about adding some more testing data?
   
   For example:
   
   ```python
   def verify_fused_batch_norm(shape):
         ....
   
   def test_fused_batch_norm():
       verify_fused_batch_norm(shape=(1, 12, 12, 32))
       verify_fused_batch_norm(shape=(1, 24, 24, 64))
       ...
   
   if __name__ == "__main__":
        test_fused_batch_norm()
    ```


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734493
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##########
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm testcases without given mean and variance
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
+    with g.as_default(): 
+        input_tensor = tf.placeholder(tf.float32,shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+    
+
+    layout = None
+    target = 'llvm'
+    ctx=tvm.cpu(0)
+    mod, params = relay.frontend.from_tensorflow(constant_graph, layout=layout, outputs=['sum'])
+    with relay.build_config(opt_level=3):
+        graph, lib, params = relay.build(mod,
+                                     target=target,
+                                     target_host = target,
+                                     params=params)
 
 Review comment:
   Align. Make sure `target=target` keeps the same alignment as the arguments of `relay.build(mod,`.


[GitHub] [incubator-tvm] lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388939613
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,64 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm testcases without given mean and variance
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        layout = None
+        mod, params = relay.frontend.from_tensorflow(constant_graph, layout=layout, outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             target_host=None,
 
 Review comment:
   I have already modified the code as your suggestions. 
   Thank you so much for the review!


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388842174
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm testcases without given mean and variance
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+
+    layout = None
+    target = 'llvm'
+    ctx = tvm.cpu(0)
 
 Review comment:
   Change the code like this so that the test can still pass when `llvm` is not enabled:
   ```python
           for device in ["llvm"]:
               ctx = tvm.context(device, 0)
               if not ctx.exist:
                   print("Skip because %s is not enabled" % device)
                   continue
   ```


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388952812
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,63 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-3)
+
+if __name__ == "__main__":
+    test_fused_batch_norm()
 
 Review comment:
   Sorry, I think I have one more comment. How about adding some more test data?
   
   for example:
   
   ```python
   def verify_fused_batch_norm(shape):
       ...
   
   def test_fused_batch_norm():
       verify_fused_batch_norm(shape=(1, 12, 12, 32))
       verify_fused_batch_norm(shape=(1, 24, 24, 64))
       ...
   
   if __name__ == "__main__":
       test_fused_batch_norm()
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-tvm] lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388831831
 
 

 ##########
 File path: python/tvm/relay/frontend/tensorflow.py
 ##########
 @@ -887,7 +887,14 @@ def _impl(inputs, attr, params):
         if 'U' in attr:
             need_cast = True
             inputs[0] = _op.cast(inputs[0], dtype=attr['U'].name)
-
+        # Check if mean and variance are empty
+        # If so, replace them with Mean and Variance Ops
+        # For run-time calculation
+        moving_mean_shape = [int(n) for n in inputs[3].type_annotation.shape]
 
 Review comment:
   Thank you for the detailed review comments! I have corrected the related code in the newest commit. But for the `add CHECK of len(inputs)` issue, does it mean checking the number of nodes in `inputs` to see whether it is 5? Also, it seems Python has no `CHECK` method?
   Thank you so much for the help!
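   (For reference, Python has no `CHECK` macro; the closest idiom is an explicit exception, or `assert` in test code. Below is a minimal, hypothetical sketch of the length check under discussion — the helper name and the converter's surrounding context are assumptions, not the PR's actual code:)

```python
def check_fused_batch_norm_inputs(inputs):
    """Hypothetical helper: tf.nn.fused_batch_norm carries exactly 5 inputs:
    data, gamma (scale), beta (offset), moving_mean, moving_variance."""
    if len(inputs) != 5:
        raise ValueError(
            "FusedBatchNorm expects 5 inputs, got %d" % len(inputs))

# passes silently for a well-formed input list
check_fused_batch_norm_inputs(["data", "gamma", "beta", "mean", "variance"])
```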


[GitHub] [incubator-tvm] lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r389227589
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def verify_fused_batch_norm(shape):
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=shape, name='input')
+        alpha = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(*shape)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-1)
 
 Review comment:
   Yeah, 1e-3 cannot be satisfied in the CI. The max relative difference from the previous CI run is about 0.007.


[GitHub] [incubator-tvm] lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595147201
 
 
   > I think we needn't add `_empty_for_training_mode_inference`. If we find `mean` / `variance` is `VarNode`, we should call `Mean` and `Variance`.
   > 
   > I don't think we should add an `is_training` flag to relay `BatchNorm`. This should be done by users, to make sure the TF model `BatchNorm`'s `is_training` flag is false. However, we still have user cases like you mention, so we could support them as in the current implementation without adding an attribute to `BatchNorm`.
   
   Thank you so much for the quick reply!  
   Yeah, our current implementation just checks whether `mean` / `variance` is an empty `VarNode` (with zero dimension), and then calls `Mean` and `Variance` in BatchNormToInferUnpack. Also, as I understand it, if `mean` / `variance` is a `VarNode` with non-zero dimension, it may still hold given pre-defined constant values and thus cannot be replaced with `Mean` / `Variance`.
   Thank you for the discussion!
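   (As a concrete illustration of the distinction above — a hypothetical NumPy sketch, NHWC layout assumed: an empty statistics tensor has zero elements and triggers run-time computation of the per-channel moments, while a non-empty one is passed through as pre-defined values:)

```python
import numpy as np

def stats_for_batch_norm(x, mean, var):
    # Empty mean/variance (zero elements) mark "is_training" mode:
    # compute per-channel statistics from the run-time data instead.
    if mean.size == 0 or var.size == 0:
        mean = x.mean(axis=(0, 1, 2))  # reduce over N, H, W of NHWC
        var = x.var(axis=(0, 1, 2))    # biased (population) variance
    return mean, var

x = np.random.rand(1, 12, 12, 32).astype("float32")
mean, var = stats_for_batch_norm(x, np.empty(0), np.empty(0))
assert mean.shape == (32,) and var.shape == (32,)
```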


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388735082
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##########
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
 
 Review comment:
   remove `article`


[GitHub] [incubator-tvm] lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-596166737
 
 
   > Thanks @lfengad This is merged now.
   
   Thank you so much for your help! 😄


[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   > 
   >> I think our pr could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to layout checking. If we go this way, we also need to modify `BatchNormRel` for data shape assignment, since the current `batch_norm` relay operator only accepts `mean` and `variance` with non-empty dimension; we need to make it accept empty ones as well.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has the possibility to hold the given pre-defined constant values and thus cannot be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine models have empty or full pre-defined values. So we should only to calculate it by calling `Mean` / `Variance` feed by `data` or our current implementation of `BatchNormToInferUnpack `.
   
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. In one case the `VarNode` is empty without pre-defined values, while in the other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
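   (For context, the lowering discussed here — `BatchNormToInferUnpack` — rewrites `batch_norm` into plain elementwise ops; below is a hypothetical NumPy sketch of the unpacked inference formula, with run-time statistics substituted when none are given. The `eps` value and NHWC broadcasting are assumptions for illustration:)

```python
import numpy as np

def batch_norm_infer(x, gamma, beta, mean, var, eps=1e-3):
    # batch_norm unpacked into elementwise ops, broadcasting
    # over the channel (last) axis of an NHWC tensor:
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

x = np.random.rand(2, 12, 12, 32).astype("float32")
gamma = np.random.rand(32).astype("float32")
beta = np.random.rand(32).astype("float32")
# "is_training" mode: statistics come from the data itself
out = batch_norm_infer(x, gamma, beta,
                       x.mean(axis=(0, 1, 2)), x.var(axis=(0, 1, 2)))
assert out.shape == x.shape
```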
   
   
   


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388735248
 
 

 ##########
 File path: python/tvm/relay/frontend/tensorflow.py
 ##########
 @@ -887,7 +887,14 @@ def _impl(inputs, attr, params):
         if 'U' in attr:
             need_cast = True
             inputs[0] = _op.cast(inputs[0], dtype=attr['U'].name)
-
+        # Check if mean and variance are empty
+        # If so, replace them with Mean and Variance Ops
+        # For run-time calculation
+        moving_mean_shape = [int(n) for n in inputs[3].type_annotation.shape]
 
 Review comment:
   It is time to add a `CHECK` of `len(inputs)`.


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734523
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##########
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
+    with g.as_default(): 
+        input_tensor = tf.placeholder(tf.float32,shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+    
+
+    layout = None
+    target = 'llvm'
+    ctx=tvm.cpu(0)
 
 Review comment:
   space between `=`


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r389230480
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def verify_fused_batch_norm(shape):
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=shape, name='input')
+        alpha = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(*shape)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-1)
 
 Review comment:
   I doubt that. Could you double check it? You could refer to our existing test cases. `atol=1e-3 rtol=1e-3` should work; your code omits `atol`.
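   (For reference, `numpy.testing.assert_allclose` — which the TVM helper effectively delegates to — passes when `|actual - desired| <= atol + rtol * |desired|` elementwise, so `atol` is what absorbs absolute error near zero; tightening `rtol` alone cannot. A small illustration:)

```python
import numpy as np

desired = np.array([0.0, 1.0, 100.0])
actual = desired + 5e-4  # small absolute error everywhere

# rtol alone fails at the zero entry: 5e-4 > 1e-3 * |0.0|
try:
    np.testing.assert_allclose(actual, desired, rtol=1e-3, atol=0)
    rtol_only_passed = True
except AssertionError:
    rtol_only_passed = False
assert not rtol_only_passed

# a small atol absorbs the near-zero error: 5e-4 <= 1e-3 + 1e-3 * |0.0|
np.testing.assert_allclose(actual, desired, rtol=1e-3, atol=1e-3)
```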


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734661
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##########
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
+    with g.as_default(): 
+        input_tensor = tf.placeholder(tf.float32,shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+    
+
+    layout = None
+    target = 'llvm'
+    ctx=tvm.cpu(0)
+    mod, params = relay.frontend.from_tensorflow(constant_graph, layout=layout, outputs=['sum'])
+    with relay.build_config(opt_level=3):
+        graph, lib, params = relay.build(mod,
+                                     target=target,
+                                     target_host = target,
+                                     params=params)
+    from tvm.contrib import graph_runtime
+    m = graph_runtime.create(graph, lib, ctx)
+    m.set_input(**params)
+    m.set_input('input', data)
+    m.run()
+    tvm_out=m.get_output(0)
 
 Review comment:
   space between `=`


[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   > 
   > >I think our pr could remove `name_hint` too.
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to layout checking. If we go this way, we need to modify the layout checking of the `batch_norm` operator too.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has the possibility to hold the given pre-defined constant values and thus cannot be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine models have empty or full pre-defined values. So we should only to calculate it by calling `Mean` / `Variance` feed by `data` or our current implementation of `BatchNormToInferUnpack `.
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. In one case the `VarNode` is empty without pre-defined values, while in the other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   


[GitHub] [incubator-tvm] lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595856735
 
 
   > Let us wait CI green.
   > 
   > As GitHub has issue: https://discuss.tvm.ai/t/github-issue-the-commit-author-is-wrong-since-today/5880/15 I will merge it after it is solved.
   
   Okay, thank you so much for the efforts! 


[GitHub] [incubator-tvm] lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388912080
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+
+    layout = None
+    target = 'llvm'
+    ctx = tvm.cpu(0)
 
 Review comment:
   I have already modified the related code according to your suggestions, as in the newest commit. Thank you so much for your help!


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734566
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##########
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
 
 Review comment:
   space between `=`


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388840561
 
 

 ##########
 File path: python/tvm/relay/frontend/tensorflow.py
 ##########
 @@ -887,7 +887,14 @@ def _impl(inputs, attr, params):
         if 'U' in attr:
             need_cast = True
             inputs[0] = _op.cast(inputs[0], dtype=attr['U'].name)
-
+        # Check if mean and variance are empty
+        # If so, replace them with Mean and Variance Ops
+        # For run-time calculation
+        moving_mean_shape = [int(n) for n in inputs[3].type_annotation.shape]
 
 Review comment:
   The `CHECK` means `assert`. 
   
   Yes, `CHECK` the number of nodes, so that we could visit `inputs[3]` / `inputs[4]` and so on safely.
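   The guard being discussed can be illustrated without TVM. A minimal sketch, assuming the converter receives five inputs in the order (data, gamma, beta, moving_mean, moving_variance); the helper name `needs_runtime_moments` is hypothetical:

   ```python
   def needs_runtime_moments(input_shapes):
       """Return True when mean/variance must be computed at run time.

       Hypothetical helper: input_shapes lists the shapes of the five
       fused_batch_norm inputs (data, gamma, beta, moving_mean,
       moving_variance). An empty placeholder tensor has shape (0,).
       """
       # CHECK (assert) the input count first, so that inputs[3] /
       # inputs[4] can be visited safely -- the point made above.
       assert len(input_shapes) == 5, "fused_batch_norm expects 5 inputs"
       mean_shape, variance_shape = input_shapes[3], input_shapes[4]
       return tuple(mean_shape) == (0,) and tuple(variance_shape) == (0,)
   ```

   With this check in place, an empty `(0,)`-shaped mean/variance pair signals that the frontend should insert run-time `Mean` / `Variance` ops, while a `(C,)`-shaped pair holds pre-defined statistics and is left alone.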


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734798
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##########
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
+    with g.as_default(): 
+        input_tensor = tf.placeholder(tf.float32,shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+    
+
+    layout = None
+    target = 'llvm'
+    ctx=tvm.cpu(0)
+    mod, params = relay.frontend.from_tensorflow(constant_graph, layout=layout, outputs=['sum'])
+    with relay.build_config(opt_level=3):
+        graph, lib, params = relay.build(mod,
+                                     target=target,
+                                     target_host = target,
+                                     params=params)
+    from tvm.contrib import graph_runtime
+    m = graph_runtime.create(graph, lib, ctx)
+    m.set_input(**params)
+    m.set_input('input', data)
+    m.run()
+    tvm_out=m.get_output(0)
+    tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-3)
+
+if __name__ == "__main__":
+    test_fusedbatchnorm()
 
 Review comment:
   test_fused_batch_norm


[GitHub] [incubator-tvm] lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388997607
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,63 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-3)
+
+if __name__ == "__main__":
+    test_fused_batch_norm()
 
 Review comment:
   > Sorry, I think I have another one comment. How about adding some more testing data?
   > 
   > for example:
   > 
   > ```python
   > def verify_fused_batch_norm(shape):
   >     ...
   > 
   > def test_fused_batch_norm():
   >     verify_fused_batch_norm(shape=(1, 12, 12, 32))
   >     verify_fused_batch_norm(shape=(1, 24, 24, 64))
   >     ...
   > 
   > if __name__ == "__main__":
   >     test_fused_batch_norm()
   > ```
   
   I have modified the code as your suggestions to add more testing cases as in the newest commit.  
   Thank you for the comments!  


[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   > 
   >> I think our pr could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to shape checking. If we go this way, we also need to modify `BatchNormRel` for data shape assignment, since the current `batch_norm` relay operator only accepts `mean` and `variance` with a non-empty dimension. We need to make it accept `mean` and `variance` with an empty dimension.  
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has the possibility to hold the given pre-defined constant values and thus cannot be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine models have empty or full pre-defined values. So we should only to calculate it by calling `Mean` / `Variance` feed by `data` or our current implementation of `BatchNormToInferUnpack `.
   
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. In one case the `VarNode` is empty without pre-defined values, while in the other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   


[GitHub] [incubator-tvm] FrozenGene commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-596166401
 
 
   Thanks @lfengad This is merged now.


[GitHub] [incubator-tvm] FrozenGene commented on issue #4990: [Relay][Topi] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #4990: [Relay][Topi] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595107872
 
 
   I think we needn't add `_empty_for_training_mode_inference`. If we find `mean` / `variance` is `VarNode`, we should call `Mean` and `Variance`.
   
   I don't think we should add an `is_training` flag to relay `BatchNorm`. Users should ensure the TF model's `BatchNorm` has its `is_training` flag set to false. However, we still have use cases like the one you mention, so we can support them as in the current implementation without adding an attribute to `BatchNorm`.


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388914223
 
 

 ##########
 File path: python/tvm/relay/frontend/tensorflow.py
 ##########
 @@ -887,7 +887,15 @@ def _impl(inputs, attr, params):
         if 'U' in attr:
             need_cast = True
             inputs[0] = _op.cast(inputs[0], dtype=attr['U'].name)
-
+        # Check if mean and variance are empty
+        # If so, replace them with Mean and Variance Ops
+        # For run-time calculation
+        assert len(inputs) == 5
 
 Review comment:
   Let us move the `CHECK` to the beginning of this function (line 880), because line 889 also accesses `inputs[0]`.  


[GitHub] [incubator-tvm] lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r389230534
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def verify_fused_batch_norm(shape):
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=shape, name='input')
+        alpha = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(*shape)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-1)
 
 Review comment:
   Okay, I can try it. Thank you for the suggestions! 


[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595611551
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   > 
   > I think our pr could remove `name_hint` too.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has the possibility to hold the given pre-defined constant values and thus cannot be replaced with Mean \ Variance.
   > 
   > Could you give us an example of this condition? I could only imagine models have empty or full pre-defined values. So we should only to calculate it by calling `Mean` / `Variance` feed by `data` or our current implementation of `BatchNormToInferUnpack `.
   
   Thanks for the discussion! Following it, I have rewritten the code as in the newest commit. This time, the function `BatchNormToInferUnpack` is not modified; we only modify the TensorFlow frontend for `_fused_batch_norm`. If `mean` and `variance` are empty, we directly add `Mean` and `Variance` relay operators before the `batch_norm` relay operator in the frontend graph, without modifying the `batch_norm` relay operator at all.
   Thank you for the suggestions!
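   What the inserted `Mean` / `Variance` ops compute can be sketched in plain NumPy (an illustration under an assumed NHWC layout, not the PR's code): batch statistics are reduced over every axis except channels and then fed into the usual normalization, matching TensorFlow's training-mode `fused_batch_norm`, whose `epsilon` defaults to 1e-3.

   ```python
   import numpy as np

   def fused_batch_norm_training(x, scale, offset, epsilon=1e-3):
       # x is NHWC; reduce over batch and spatial axes, keeping channels,
       # which mirrors the Mean/Variance ops inserted in the frontend graph.
       mean = x.mean(axis=(0, 1, 2))
       var = x.var(axis=(0, 1, 2))
       return scale * (x - mean) / np.sqrt(var + epsilon) + offset

   x = np.random.rand(1, 12, 12, 32).astype("float32")
   out = fused_batch_norm_training(x, np.ones(32, "float32"), np.zeros(32, "float32"))
   ```

   With unit scale and zero offset, each output channel ends up with near-zero mean and variance close to `var / (var + epsilon)`, i.e. roughly 1.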


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388917594
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,64 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        layout = None
+        mod, params = relay.frontend.from_tensorflow(constant_graph, layout=layout, outputs=['output'])
 
 Review comment:
   I think `layout = None` is unnecessary, because for CPU the default `NHWC` layout is enough. The `layout` argument matters when we have `convolution` ops, for which `NCHW` is better if the target is `GPU`. 


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388952812
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,63 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-3)
+
+if __name__ == "__main__":
+    test_fused_batch_norm()
 
 Review comment:
   Sorry, I have one more comment. How about adding some more test cases?
   
   for example:
   
   ```python
   def verify_fused_batch_norm(shape):
         ....
   
   def test_fused_batch_norm():
       verify_fused_batch_norm(shape=(1, 12, 12, 32))
       verify_fused_batch_norm(shape=(1, 24, 24, 64))
       ...
   
   if __name__ == "__main__":
    test_fused_batch_norm()
   ```


[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   > 
   >> I think our pr could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to data shape checking. If we go this way, we need to modify `BatchNormRel` for data shape assignment, since the current `batch_norm` relay operator only accepts `mean` and `variance` with the same shape as the `channel` dimension. We need to make this relay operator accept `mean` and `variance` with an empty shape, which requires further modifications.  
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has the possibility to hold the given pre-defined constant values and thus cannot be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine models have empty or full pre-defined values. So we should only to calculate it by calling `Mean` / `Variance` feed by `data` or our current implementation of `BatchNormToInferUnpack `.
   
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. In one case the `VarNode` is empty without pre-defined values, while in the other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   


[GitHub] [incubator-tvm] FrozenGene commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595156271
 
 
   > Yeah, our current implementation is just to check whether `mean` / `variance` is an empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   
   I think our PR could remove `name_hint` too.
   
   > if mean / variance is a VarNode but with non-zero dimension, it still has the possibility to hold given pre-defined constant values and thus cannot be replaced with Mean / Variance.
   
   Could you give us an example of this condition? I could only imagine models having empty or fully pre-defined values. So we should only calculate it by calling `Mean` / `Variance` fed by `data`, as our current implementation of `BatchNormToInferUnpack` does.


[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is an empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   >
   > I think our PR could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the tensorflow frontend. I previously tried this approach but got some compilation errors related to data shape checking. If we go this way, we need to modify `BatchNormRel` for data shape assignment too, since the current `batch_norm` relay operator only accepts `mean` and `variance` with a non-empty dimension. We need to make it accept `mean` and `variance` with an empty dimension.
   
   > > if mean / variance is a VarNode but with non-zero dimension, it still has the possibility to hold given pre-defined constant values and thus cannot be replaced with Mean / Variance.
   >
   > Could you give us an example of this condition? I could only imagine models having empty or fully pre-defined values. So we should only calculate it by calling `Mean` / `Variance` fed by `data`, as our current implementation of `BatchNormToInferUnpack` does.
   
   What I mean is that in both cases `mean` and `variance` are `VarNode`s. In one case the `VarNode` is empty, without pre-defined values, while in the other it is non-empty, with pre-defined values.
   Thank you for the discussion!
   
   
   


[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is an empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   >
   > I think our PR could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the tensorflow frontend. I previously tried this approach but got some compilation errors related to data shape checking. If we go this way, we need to modify `BatchNormRel` for data shape assignment, since the current `batch_norm` relay operator only accepts `mean` and `variance` with a non-empty dimension. We need to make it accept `mean` and `variance` with an empty dimension.
   
   > > if mean / variance is a VarNode but with non-zero dimension, it still has the possibility to hold given pre-defined constant values and thus cannot be replaced with Mean / Variance.
   >
   > Could you give us an example of this condition? I could only imagine models having empty or fully pre-defined values. So we should only calculate it by calling `Mean` / `Variance` fed by `data`, as our current implementation of `BatchNormToInferUnpack` does.
   
   What I mean is that in both cases `mean` and `variance` are `VarNode`s. In one case the `VarNode` is empty, without pre-defined values, while in the other it is non-empty, with pre-defined values.
   Thank you for the discussion!
   
   
   


[GitHub] [incubator-tvm] lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is an empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   >
   > I think our PR could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the tensorflow frontend. I previously tried this approach but got some compilation errors related to layout checking. If we go this way, we need to modify the layout checking of the `batch_norm` operator too.
   
   > > if mean / variance is a VarNode but with non-zero dimension, it still has the possibility to hold given pre-defined constant values and thus cannot be replaced with Mean / Variance.
   >
   > Could you give us an example of this condition? I could only imagine models having empty or fully pre-defined values. So we should only calculate it by calling `Mean` / `Variance` fed by `data`, as our current implementation of `BatchNormToInferUnpack` does.
   
   What I mean is that in both cases `mean` and `variance` are `VarNode`s. In one case the `VarNode` is empty, without pre-defined values, while in the other it is non-empty, with pre-defined values.
   Thank you for the discussion!
   
   
   


[GitHub] [incubator-tvm] lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388959058
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,63 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Testcases for BatchNorm without given mean and variance
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fused_batch_norm():
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(1, 12, 12, 32)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-3)
+
+if __name__ == "__main__":
+    test_fused_batch_norm()
 
 Review comment:
   Sure, I think such a test could be more convincing.


[GitHub] [incubator-tvm] lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388895728
 
 

 ##########
 File path: python/tvm/relay/frontend/tensorflow.py
 ##########
 @@ -887,7 +887,14 @@ def _impl(inputs, attr, params):
         if 'U' in attr:
             need_cast = True
             inputs[0] = _op.cast(inputs[0], dtype=attr['U'].name)
-
+        # Check if mean and variance are empty
+        # If so, replace them with Mean and Variance Ops
+        # For run-time calculation
+        moving_mean_shape = [int(n) for n in inputs[3].type_annotation.shape]
 
 Review comment:
   I see. Thank you so much!
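   As an editorial aside: the emptiness check that the diff above sets up can be reduced to a tiny sketch. The function name below is hypothetical; the real frontend reads the shape from `inputs[3].type_annotation`:

```python
# A zero-element moving_mean / moving_variance tensor (a shape containing 0)
# is how the TensorFlow graph signals "is_training" mode, i.e. that the
# statistics must be computed from the data at run time.
def needs_runtime_stats(moving_mean_shape, moving_variance_shape):
    return 0 in moving_mean_shape or 0 in moving_variance_shape

print(needs_runtime_stats([0], [0]))    # True: insert Mean/Variance ops
print(needs_runtime_stats([32], [32]))  # False: use the provided tensors
```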


[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / `variance` is an empty `VarNode` (with zero dimension), and then call `Mean` and `Variance` in BatchNormToInferUnpack.
   >
   > I think our PR could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the tensorflow frontend. I previously tried this approach but got some compilation errors related to layout checking. If we go this way, we need to modify the layout checking of the `batch_norm` operator too.
   
   > > if mean / variance is a VarNode but with non-zero dimension, it still has the possibility to hold given pre-defined constant values and thus cannot be replaced with Mean / Variance.
   >
   > Could you give us an example of this condition? I could only imagine models having empty or fully pre-defined values. So we should only calculate it by calling `Mean` / `Variance` fed by `data`, as our current implementation of `BatchNormToInferUnpack` does.
   
   What I mean is that in both cases `mean` and `variance` are `VarNode`s. In one case the `VarNode` is empty, without pre-defined values, while in the other it is non-empty, with pre-defined values.
   Thank you for the discussion!
   
   
   


[GitHub] [incubator-tvm] FrozenGene commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595855406
 
 
   Let us wait for CI to turn green.
   
   As GitHub has an issue (https://discuss.tvm.ai/t/github-issue-the-commit-author-is-wrong-since-today/5880/15), I will merge this after it is solved.


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r389230480
 
 

 ##########
 File path: tests/python/frontend/tensorflow/test_bn_dynamic.py
 ##########
 @@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Testcases for BatchNorm without given mean and variance
+====================
+This is a test script to test fused_batch_norm operators
+in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def verify_fused_batch_norm(shape):
+    g = tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32, shape=shape, name='input')
+        alpha = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(shape[-1],), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='output')
+    data = np.random.rand(*shape)
+    with tf.Session(graph=out.graph) as sess:
+        sess.run([tf.global_variables_initializer()])
+        tf_out = sess.run(out, feed_dict={input_tensor:data})
+        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['output'])
+
+    for device in ["llvm"]:
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("Skip because %s is not enabled" % device)
+            continue
+        mod, params = relay.frontend.from_tensorflow(constant_graph,
+                                                     outputs=['output'])
+        with relay.build_config(opt_level=3):
+            graph, lib, params = relay.build(mod,
+                                             target=device,
+                                             params=params)
+        from tvm.contrib import graph_runtime
+        m = graph_runtime.create(graph, lib, ctx)
+        m.set_input(**params)
+        m.set_input('input', data)
+        m.run()
+        tvm_out = m.get_output(0)
+        tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-1)
 
 Review comment:
   I doubt this is right. Could you double-check it? You could refer to our existing test cases; atol=1e-3 and rtol=1e-3 should work. Your code omits `atol`.
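   As an editorial aside, a small numpy illustration of why omitting `atol` matters (the same semantics as `np.testing.assert_allclose`, whose default `atol` is 0):

```python
import numpy as np

# With only rtol, the allowed error is rtol * |desired|, which collapses to
# ~0 for near-zero outputs, so tiny absolute noise fails the comparison.
actual = np.array([1.0, 2e-7], dtype="float32")
desired = np.array([1.001, 1e-7], dtype="float32")

np.testing.assert_allclose(actual[:1], desired[:1], rtol=1e-2)  # passes

failed_without_atol = False
try:
    np.testing.assert_allclose(actual, desired, rtol=1e-2)
except AssertionError:
    failed_without_atol = True  # the near-zero entry exceeds rtol * 1e-7

# Adding atol sets an absolute-tolerance floor for near-zero values.
np.testing.assert_allclose(actual, desired, rtol=1e-2, atol=1e-6)
print(failed_without_atol)  # True
```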
