Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/11/02 00:52:25 UTC

[GitHub] [tvm] mbs-octoml opened a new pull request #9421: BUG: alloc_tensor offset and reshape shape should be on the CPU

mbs-octoml opened a new pull request #9421:
URL: https://github.com/apache/tvm/pull/9421


   The VM ManifestAlloc pass was allocating constants in a few places I
   forgot to tag with on_device for the host/CPU. As a result, the runtime
   would (silently) perform a cross-device copy, which destroys performance.
   
   To make this easier to spot in the future, I added a 'constants' property
   to the VM Executable that dumps the shape & device of every VM constant.
   
   This is tracked as CORE-102 in OctoML's JIRA.
   
   (Special thanks to @mbrookhart & @tkonolige for finding this.)
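
   The new 'constants' dump makes misplaced constants greppable. As an
   illustrative sketch only (not TVM API code: the helper name and the sample
   dump text below are made up, loosely mirroring the "on device of type N"
   strings the PR's tests check for):

   ```python
   import re

   def constants_on_device(constants_dump: str, device_type: int) -> list:
       """Return the dump lines describing constants placed on device_type."""
       pattern = re.compile(rf"on device of type {device_type}\b")
       return [line for line in constants_dump.splitlines() if pattern.search(line)]

   # Hypothetical dump: two scalar constants (a storage size and a tensor
   # offset), both correctly pinned to the host.
   sample_dump = """\
   const[0]: shape=[] int64 on device of type 1
   const[1]: shape=[] int64 on device of type 1
   """

   # Mirrors the new tests: sizes/offsets live on the CPU (device type 1)
   # and nothing should have landed on the GPU (device type 2).
   assert constants_on_device(sample_dump, 2) == []
   assert len(constants_on_device(sample_dump, 1)) == 2
   ```

   Here device type 1 is the CPU host and type 2 the GPU, matching the
   comments in the new tests.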


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] junrushao1994 commented on pull request #9421: BUG: alloc_tensor offset and reshape shape should be on the CPU

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on pull request #9421:
URL: https://github.com/apache/tvm/pull/9421#issuecomment-958009770


   Thanks for the fix!!






[GitHub] [tvm] mbs-octoml commented on a change in pull request #9421: BUG: alloc_tensor offset and reshape shape should be on the CPU

Posted by GitBox <gi...@apache.org>.
mbs-octoml commented on a change in pull request #9421:
URL: https://github.com/apache/tvm/pull/9421#discussion_r741210407



##########
File path: tests/python/relay/test_vm.py
##########
@@ -999,6 +999,71 @@ def test_shape_func_nested_function():
     compiler.lower(mod, "llvm")
 
 
+def test_storage_size_and_offset_on_cpu():
+    """Tests allocations place sizes and offsets on the CPU host even if the rest
+    of the computation is on a different device type."""
+
+    # CPU = device type 1
+    # GPU = device type 2
+    def input():
+        return tvm.parser.fromtext(
+            """
+            #[version = "0.0.5"]
+            def @main(%a: Tensor[(5, 7), float32],
+                      param_device_types=[2], result_device_type=2) {
+              add(%a, %a)
+            }
+        """
+        )
+
+    exe = relay.vm.compile(
+        input(),
+        tvm.target.Target("cuda"),
+    )
+
+    print(exe.constants)

Review comment:
       done

##########
File path: tests/python/relay/test_vm.py
##########
@@ -999,6 +999,71 @@ def test_shape_func_nested_function():
     compiler.lower(mod, "llvm")
 
 
+def test_storage_size_and_offset_on_cpu():
+    """Tests allocations place sizes and offsets on the CPU host even if the rest
+    of the computation is on a different device type."""
+
+    # CPU = device type 1
+    # GPU = device type 2
+    def input():
+        return tvm.parser.fromtext(
+            """
+            #[version = "0.0.5"]
+            def @main(%a: Tensor[(5, 7), float32],
+                      param_device_types=[2], result_device_type=2) {
+              add(%a, %a)
+            }
+        """
+        )
+
+    exe = relay.vm.compile(
+        input(),
+        tvm.target.Target("cuda"),
+    )
+
+    print(exe.constants)
+    print(exe.bytecode)
+
+    # This program needs two constants:
+    # - The size of the tensor's storage (first arg) to alloc_storage
+    # - The offset of the tensor within the storage (second arg) to alloc_tensor
+    # Both should be on the CPU
+    assert "on device of type 2" not in exe.constants
+    assert "on device of type 1" in exe.constants
+
+
+def test_reshape_shape_on_cpu():
+    """Tests the argument to a reshape places the shape on the CPU host even if the rest
+    of the computation is on a different device type."""
+
+    # CPU = device type 1
+    # GPU = device type 2
+    def input():
+        newshape = [2, 4, 2]
+        metatable = {"relay.Constant": [relay.const(newshape, dtype="int64")]}
+        return tvm.parser.fromtext(
+            """
+            #[version = "0.0.5"]
+            def @main(%x: Tensor[(2, 8), float32],
+                      param_device_types=[2], result_device_type=2) {
+              reshape(%x, newshape=[2, 4, 2])
+            }
+        """
+        )
+
+    exe = relay.vm.compile(
+        input(),
+        tvm.target.Target("cuda"),
+    )
+
+    print(exe.constants)

Review comment:
       done







[GitHub] [tvm] mbrookhart commented on a change in pull request #9421: BUG: alloc_tensor offset and reshape shape should be on the CPU

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #9421:
URL: https://github.com/apache/tvm/pull/9421#discussion_r740667148



##########
File path: tests/python/relay/test_vm.py
##########
@@ -999,6 +999,71 @@ def test_shape_func_nested_function():
     compiler.lower(mod, "llvm")
 
 
+def test_storage_size_and_offset_on_cpu():
+    """Tests allocations place sizes and offsets on the CPU host even if the rest
+    of the computation is on a different device type."""
+
+    # CPU = device type 1
+    # GPU = device type 2
+    def input():
+        return tvm.parser.fromtext(
+            """
+            #[version = "0.0.5"]
+            def @main(%a: Tensor[(5, 7), float32],
+                      param_device_types=[2], result_device_type=2) {
+              add(%a, %a)
+            }
+        """
+        )
+
+    exe = relay.vm.compile(
+        input(),
+        tvm.target.Target("cuda"),
+    )
+
+    print(exe.constants)
+    print(exe.bytecode)
+
+    # This program needs two constants:
+    # - The size of the tensor's storage (first arg) to alloc_storage
+    # - The offset of the tensor within the storage (second arg) to alloc_tensor
+    # Both should be on the CPU
+    assert "on device of type 2" not in exe.constants
+    assert "on device of type 1" in exe.constants
+
+
+def test_reshape_shape_on_cpu():
+    """Tests the argument to a reshape places the shape on the CPU host even if the rest
+    of the computation is on a different device type."""
+
+    # CPU = device type 1
+    # GPU = device type 2
+    def input():
+        newshape = [2, 4, 2]
+        metatable = {"relay.Constant": [relay.const(newshape, dtype="int64")]}
+        return tvm.parser.fromtext(
+            """
+            #[version = "0.0.5"]
+            def @main(%x: Tensor[(2, 8), float32],
+                      param_device_types=[2], result_device_type=2) {
+              reshape(%x, newshape=[2, 4, 2])
+            }
+        """
+        )
+
+    exe = relay.vm.compile(
+        input(),
+        tvm.target.Target("cuda"),
+    )
+
+    print(exe.constants)

Review comment:
       Nit, remove prints in tests

##########
File path: tests/python/relay/test_vm.py
##########
@@ -999,6 +999,71 @@ def test_shape_func_nested_function():
     compiler.lower(mod, "llvm")
 
 
+def test_storage_size_and_offset_on_cpu():
+    """Tests allocations place sizes and offsets on the CPU host even if the rest
+    of the computation is on a different device type."""
+
+    # CPU = device type 1
+    # GPU = device type 2
+    def input():
+        return tvm.parser.fromtext(
+            """
+            #[version = "0.0.5"]
+            def @main(%a: Tensor[(5, 7), float32],
+                      param_device_types=[2], result_device_type=2) {
+              add(%a, %a)
+            }
+        """
+        )
+
+    exe = relay.vm.compile(
+        input(),
+        tvm.target.Target("cuda"),
+    )
+
+    print(exe.constants)

Review comment:
       Nit, remove prints in tests







[GitHub] [tvm] mbs-octoml commented on pull request #9421: BUG: alloc_tensor offset and reshape shape should be on the CPU

Posted by GitBox <gi...@apache.org>.
mbs-octoml commented on pull request #9421:
URL: https://github.com/apache/tvm/pull/9421#issuecomment-957007813


   With this fix, the ONNX perf test suite agrees in timing to two decimal places, and the VM executables agree exactly.





[GitHub] [tvm] masahi merged pull request #9421: BUG: alloc_tensor offset and reshape shape should be on the CPU

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #9421:
URL: https://github.com/apache/tvm/pull/9421


   





[GitHub] [tvm] mbs-octoml commented on a change in pull request #9421: BUG: alloc_tensor offset and reshape shape should be on the CPU

Posted by GitBox <gi...@apache.org>.
mbs-octoml commented on a change in pull request #9421:
URL: https://github.com/apache/tvm/pull/9421#discussion_r741210582



##########
File path: tests/python/relay/test_vm.py
##########
@@ -999,6 +999,71 @@ def test_shape_func_nested_function():
     compiler.lower(mod, "llvm")
 
 
+def test_storage_size_and_offset_on_cpu():
+    """Tests allocations place sizes and offsets on the CPU host even if the rest
+    of the computation is on a different device type."""
+
+    # CPU = device type 1
+    # GPU = device type 2
+    def input():
+        return tvm.parser.fromtext(
+            """
+            #[version = "0.0.5"]
+            def @main(%a: Tensor[(5, 7), float32],
+                      param_device_types=[2], result_device_type=2) {
+              add(%a, %a)
+            }
+        """
+        )
+
+    exe = relay.vm.compile(
+        input(),
+        tvm.target.Target("cuda"),
+    )
+
+    print(exe.constants)
+    print(exe.bytecode)
+
+    # This program needs two constants:
+    # - The size of the tensor's storage (first arg) to alloc_storage
+    # - The offset of the tensor within the storage (second arg) to alloc_tensor
+    # Both should be on the CPU
+    assert "on device of type 2" not in exe.constants
+    assert "on device of type 1" in exe.constants
+
+
+def test_reshape_shape_on_cpu():
+    """Tests the argument to a reshape places the shape on the CPU host even if the rest
+    of the computation is on a different device type."""
+
+    # CPU = device type 1
+    # GPU = device type 2
+    def input():
+        newshape = [2, 4, 2]
+        metatable = {"relay.Constant": [relay.const(newshape, dtype="int64")]}
+        return tvm.parser.fromtext(
+            """
+            #[version = "0.0.5"]
+            def @main(%x: Tensor[(2, 8), float32],
+                      param_device_types=[2], result_device_type=2) {
+              reshape(%x, newshape=[2, 4, 2])
+            }
+        """
+        )
+
+    exe = relay.vm.compile(
+        input(),
+        tvm.target.Target("cuda"),
+    )
+
+    print(exe.constants)

Review comment:
       done



