Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/12/09 20:26:43 UTC

[GitHub] [tvm] michalpiszczek opened a new pull request #9694: [TOPI] Add generic batch norm

michalpiszczek opened a new pull request #9694:
URL: https://github.com/apache/tvm/pull/9694


   - Adds a TOPI implementation of batch norm so that it can be constant-folded.
   - Adds the associated tests.
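For context, inference-mode batch norm is a purely element-wise computation over its five inputs, so when all of them are constants the whole expression can be evaluated ahead of time. A minimal NumPy sketch of the computation being folded (function and variable names here are illustrative, not taken from the PR):

```python
import numpy as np

def batch_norm_inference(data, gamma, beta, moving_mean, moving_var, axis=1, eps=1e-5):
    """Inference-mode batch norm: normalize with the stored moving
    statistics, then scale by gamma and shift by beta."""
    # Reshape the per-channel parameters so they broadcast along `axis`.
    bshape = [1] * data.ndim
    bshape[axis] = data.shape[axis]
    mean = moving_mean.reshape(bshape)
    var = moving_var.reshape(bshape)
    g = gamma.reshape(bshape)
    b = beta.reshape(bshape)
    return (data - mean) / np.sqrt(var + eps) * g + b

x = np.random.random((4, 5, 6)).astype("float32")
out = batch_norm_inference(
    x,
    gamma=np.ones(5, "float32"),
    beta=np.zeros(5, "float32"),
    moving_mean=np.zeros(5, "float32"),
    moving_var=np.ones(5, "float32"),
)
# With unit gamma/variance and zero beta/mean the op is (near-)identity.
```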


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] michalpiszczek commented on pull request #9694: [TOPI] Add generic batch norm

Posted by GitBox <gi...@apache.org>.
michalpiszczek commented on pull request #9694:
URL: https://github.com/apache/tvm/pull/9694#issuecomment-991250994


   @AndrewZhaoLuo @jwfromm PTAL






[GitHub] [tvm] michalpiszczek commented on a change in pull request #9694: [TOPI] Add generic batch norm

Posted by GitBox <gi...@apache.org>.
michalpiszczek commented on a change in pull request #9694:
URL: https://github.com/apache/tvm/pull/9694#discussion_r767021788



##########
File path: tests/python/relay/test_op_level1.py
##########
@@ -427,6 +428,52 @@ def test_batch_norm():
         )
 
 
+def test_batch_norm_fold_const():
+    axis = 1
+    dtype = "float32"
+    shape = [4, 5, 6]
+
+    data_np = np.random.random(shape).astype(dtype)
+    beta_np = np.random.random(shape[axis]).astype(dtype)
+    gamma_np = np.random.random(shape[axis]).astype(dtype)
+    moving_mean_np = np.random.random(shape[axis]).astype(dtype)
+    moving_var_np = np.random.random(shape[axis]).astype(dtype)
+
+    data = relay.var("data", relay.TensorType(shape, dtype))
+    beta = relay.var("beta", relay.TensorType((shape[1],), dtype))
+    gamma = relay.var("gamma", relay.TensorType((shape[1],), dtype))
+    moving_mean = relay.var("moving_mean", relay.TensorType((shape[1],), dtype))
+    moving_var = relay.var("moving_var", relay.TensorType((shape[1],), dtype))
+    out = relay.nn.batch_norm(data, gamma, beta, moving_mean, moving_var, axis=axis).astuple()
+    func = relay.Function([data, gamma, beta, moving_mean, moving_var], out)
+
+    out_const = relay.nn.batch_norm(
+        relay.const(data_np),
+        relay.const(gamma_np),
+        relay.const(beta_np),
+        relay.const(moving_mean_np),
+        relay.const(moving_var_np),
+        axis=axis,
+    ).astuple()
+    func_const = relay.Function([], out_const)
+
+    # Build the module with constants to have FoldConstant transform batch_norm.
+    mod_const = tvm.IRModule.from_expr(func_const)
+    lib_const = relay.build(mod_const, tvm.target.create("llvm"))

Review comment:
       Good catch, fixed! 







[GitHub] [tvm] mbrookhart commented on a change in pull request #9694: [TOPI] Add generic batch norm

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #9694:
URL: https://github.com/apache/tvm/pull/9694#discussion_r766988112



##########
File path: tests/python/relay/test_op_level1.py
##########
@@ -427,6 +428,52 @@ def test_batch_norm():
         )
 
 
+def test_batch_norm_fold_const():
+    axis = 1
+    dtype = "float32"
+    shape = [4, 5, 6]
+
+    data_np = np.random.random(shape).astype(dtype)
+    beta_np = np.random.random(shape[axis]).astype(dtype)
+    gamma_np = np.random.random(shape[axis]).astype(dtype)
+    moving_mean_np = np.random.random(shape[axis]).astype(dtype)
+    moving_var_np = np.random.random(shape[axis]).astype(dtype)
+
+    data = relay.var("data", relay.TensorType(shape, dtype))
+    beta = relay.var("beta", relay.TensorType((shape[1],), dtype))
+    gamma = relay.var("gamma", relay.TensorType((shape[1],), dtype))
+    moving_mean = relay.var("moving_mean", relay.TensorType((shape[1],), dtype))
+    moving_var = relay.var("moving_var", relay.TensorType((shape[1],), dtype))
+    out = relay.nn.batch_norm(data, gamma, beta, moving_mean, moving_var, axis=axis).astuple()
+    func = relay.Function([data, gamma, beta, moving_mean, moving_var], out)
+
+    out_const = relay.nn.batch_norm(
+        relay.const(data_np),
+        relay.const(gamma_np),
+        relay.const(beta_np),
+        relay.const(moving_mean_np),
+        relay.const(moving_var_np),
+        axis=axis,
+    ).astuple()
+    func_const = relay.Function([], out_const)
+
+    # Build the module with constants to have FoldConstant transform batch_norm.
+    mod_const = tvm.IRModule.from_expr(func_const)
+    lib_const = relay.build(mod_const, tvm.target.create("llvm"))

Review comment:
       When compiling, SimplifyInference runs before fold constant. Can you manually FoldConstant here instead, and compare that output to the VM run below?
   
   https://github.com/apache/tvm/blob/main/src/relay/backend/utils.cc#L196-L226
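The pass-ordering point can be checked numerically: SimplifyInference rewrites batch_norm into a per-channel affine form (a multiply and an add), and with constant statistics that affine form is exactly what constant folding can precompute. A small NumPy sketch of the equivalence (all names are illustrative):

```python
import numpy as np

eps = 1e-5
x = np.random.random((4, 5, 6)).astype("float32")
gamma = np.random.random(5).astype("float32")
beta = np.random.random(5).astype("float32")
mean = np.random.random(5).astype("float32")
var = np.random.random(5).astype("float32")

# Direct inference-mode batch norm along axis=1.
bshape = (1, 5, 1)
direct = (x - mean.reshape(bshape)) / np.sqrt(var.reshape(bshape) + eps) \
    * gamma.reshape(bshape) + beta.reshape(bshape)

# Affine form: batch_norm(x) == x * scale + shift, where scale and
# shift depend only on the constant parameters and statistics, so they
# can be computed once at compile time.
scale = gamma / np.sqrt(var + eps)
shift = beta - mean * scale
affine = x * scale.reshape(bshape) + shift.reshape(bshape)

assert np.allclose(direct, affine, rtol=1e-4, atol=1e-6)
```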










[GitHub] [tvm] mbrookhart merged pull request #9694: [TOPI] Add generic batch norm

Posted by GitBox <gi...@apache.org>.
mbrookhart merged pull request #9694:
URL: https://github.com/apache/tvm/pull/9694


   








[GitHub] [tvm] michalpiszczek commented on pull request #9694: [TOPI] Add generic batch norm

Posted by GitBox <gi...@apache.org>.
michalpiszczek commented on pull request #9694:
URL: https://github.com/apache/tvm/pull/9694#issuecomment-990220938


   @mbrookhart PTAL

