Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/11/08 17:24:32 UTC

[GitHub] [tvm] mehrdadh opened a new pull request, #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

mehrdadh opened a new pull request, #13324:
URL: https://github.com/apache/tvm/pull/13324

   This PR adds a tutorial to compile/run a PyTorch model using microTVM CRT.
   
   Do not merge before https://github.com/apache/tvm/pull/13313


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] gromero commented on a diff in pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
gromero commented on code in PR #13324:
URL: https://github.com/apache/tvm/pull/13324#discussion_r1017216300


##########
src/runtime/crt/host/microtvm_api_server.py:
##########
@@ -35,6 +37,18 @@
 
 IS_TEMPLATE = not os.path.exists(os.path.join(PROJECT_DIR, MODEL_LIBRARY_FORMAT_RELPATH))
 
+MEMORY_SIZE_BYTES = 2 * 1024 * 1024

Review Comment:
   @mehrdadh I think that's actually interesting / important info: the constant is determined experimentally, so it's really good to have a comment about it, as @alanmacd requested. I don't even consider it a nit :-) 





[GitHub] [tvm] alanmacd commented on a diff in pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
alanmacd commented on code in PR #13324:
URL: https://github.com/apache/tvm/pull/13324#discussion_r1016952380


##########
src/runtime/crt/host/microtvm_api_server.py:
##########
@@ -35,6 +37,18 @@
 
 IS_TEMPLATE = not os.path.exists(os.path.join(PROJECT_DIR, MODEL_LIBRARY_FORMAT_RELPATH))
 
+MEMORY_SIZE_BYTES = 2 * 1024 * 1024

Review Comment:
   nit: maybe add a comment explaining why this is the default memory size





[GitHub] [tvm] mehrdadh commented on pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on PR #13324:
URL: https://github.com/apache/tvm/pull/13324#issuecomment-1307992585

   @gromero that's a good point. I added that compiler flag.




[GitHub] [tvm] mehrdadh commented on a diff in pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on code in PR #13324:
URL: https://github.com/apache/tvm/pull/13324#discussion_r1016956677


##########
src/runtime/crt/host/microtvm_api_server.py:
##########
@@ -35,6 +37,18 @@
 
 IS_TEMPLATE = not os.path.exists(os.path.join(PROJECT_DIR, MODEL_LIBRARY_FORMAT_RELPATH))
 
+MEMORY_SIZE_BYTES = 2 * 1024 * 1024

Review Comment:
   Added. The reason is not very interesting: the value was basically chosen to pass the CRT tests in TVM.
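   For illustration, the commented constant might read along these lines (a sketch only; the exact wording in the merged PR may differ):

   ```python
   # Default workspace memory for the emulated CRT device. The value is not
   # derived from any hardware constraint; it was chosen experimentally so
   # that TVM's CRT tests (and the microTVM tutorials) fit and pass.
   MEMORY_SIZE_BYTES = 2 * 1024 * 1024  # 2 MiB
   ```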





[GitHub] [tvm] mehrdadh commented on a diff in pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on code in PR #13324:
URL: https://github.com/apache/tvm/pull/13324#discussion_r1017222970


##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-Pytorch:
+
+microTVM PyTorch Tutorial
+===========================
+**Authors**:
+`Mehrdad Hessar <https://github.com/mehrdadh>`_
+
+This tutorial showcases microTVM host-driven AoT compilation with
+a PyTorch model. It can be executed on an x86 CPU using the C runtime (CRT).
+
+**Note:** This tutorial only runs on an x86 CPU using CRT; it does not run on Zephyr,
+since the model would not fit on our currently supported Zephyr boards.
+"""
+import pathlib
+
+import torch
+import torchvision
+from torchvision import transforms
+import numpy as np
+from PIL import Image
+
+import tvm
+from tvm import relay
+from tvm.contrib.download import download_testdata
+from tvm.relay.backend import Executor
+
+#################################
+# Load a pre-trained PyTorch model
+# -------------------------------
+#
+# To begin with, load pre-trained MobileNetV2 from torchvision. Then,
+# download a cat image and preprocess it to use as the model input.
+#
+
+model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
+model = model.eval()
+
+input_shape = [1, 3, 224, 224]
+input_data = torch.randn(input_shape)
+scripted_model = torch.jit.trace(model, input_data).eval()
+
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
+img_path = download_testdata(img_url, "cat.png", module="data")
+img = Image.open(img_path).resize((224, 224))
+
+# Preprocess the image and convert to tensor
+my_preprocess = transforms.Compose(
+    [
+        transforms.Resize(256),
+        transforms.CenterCrop(224),
+        transforms.ToTensor(),
+        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+    ]
+)
+img = my_preprocess(img)
+img = np.expand_dims(img, 0)
+
+input_name = "input0"
+shape_list = [(input_name, input_shape)]
+relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+#################################
+# Define Target, Runtime and Executor
+# -----------------------------------
+#
+# In this tutorial we use AOT host driven executor. To compile the model

Review Comment:
   done
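   As an aside, the `transforms.Normalize` step in the quoted preprocessing pipeline is plain per-channel arithmetic; a minimal NumPy sketch using the same mean/std values (the `normalize` helper and dummy image are illustrative, not part of the PR):

   ```python
   import numpy as np

   # Per-channel ImageNet statistics, as passed to transforms.Normalize above
   MEAN = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
   STD = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

   def normalize(img_chw):
       """Channel-wise normalization of a CHW float image scaled to [0, 1]."""
       return (img_chw - MEAN) / STD

   dummy = np.full((3, 224, 224), 0.5)  # stand-in for the preprocessed cat image
   out = normalize(dummy)
   ```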



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+#################################
+# Load a pre-trained PyTorch model
+# -------------------------------

Review Comment:
   done



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,205 @@
+
+# sphinx_gallery_start_ignore
+from tvm import testing
+
+testing.utils.install_request_hook(depth=3)
+# sphinx_gallery_end_ignore
+
+
+#################################
+# Define Target, Runtime and Executor
+# -----------------------------------
+#
+# In this tutorial we use the AoT host-driven executor. To compile the model
+# for an emulated embedded environment on an x86 machine, we use the C runtime
+# (CRT) and the `host` micro target. With this setup, TVM compiles the model
+# for the C runtime, which can run on an x86 CPU with the same flow that
+# would run on a physical microcontroller.
+#
+
+
+# Simulate a microcontroller on the host machine. Uses the main() from `src/runtime/crt/host/main.cc`
+# To use physical hardware, replace "host" with something matching your hardware.

Review Comment:
   done



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+#################################
+# Define Target, Runtime and Executor
+# -------------------------------

Review Comment:
   done



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+#################################

Review Comment:
   done



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+#################################

Review Comment:
   done



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+# In this tutorial we use AOT host driven executor. To compile the model
+# for an emulated embedded environment on an X86 machine we use C runtime (CRT)

Review Comment:
   done



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+#################################
+# Define Target, Runtime and Executor
+# -------------------------------
+#
+# In this tutorial we use AOT host driven executor. To compile the model
+# for an emulated embedded environment on an X86 machine we use C runtime (CRT)
+# and we use `host` micro target. Using this setup, TVM compiles the model
+# for C runtime which can run on a X86 CPU machine with the same flow that

Review Comment:
   done





[GitHub] [tvm] gromero merged pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
gromero merged PR #13324:
URL: https://github.com/apache/tvm/pull/13324




[GitHub] [tvm] tvm-bot commented on pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
tvm-bot commented on PR #13324:
URL: https://github.com/apache/tvm/pull/13324#issuecomment-1307570041

   <!---bot-comment-->
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.
   
   <!--bot-comment-ccs-start-->
    * cc @alanmacd, @gromero, @yelite <sub>See [#10317](https://github.com/apache/tvm/issues/10317) for details</sub><!--bot-comment-ccs-end-->
   
   <sub>Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)</sub>




[GitHub] [tvm] gromero commented on a diff in pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
gromero commented on code in PR #13324:
URL: https://github.com/apache/tvm/pull/13324#discussion_r1017158971


##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+input_name = "input0"
+shape_list = [(input_name, input_shape)]
+relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+#################################

Review Comment:
   nit: add more `#` chars to "cover" end of line below? 
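
   These banner/underline-length nits (here and in the similar comments elsewhere in this review) boil down to one convention, sketched below. Only `title` comes from the patch; the helper variables are illustrative:

   ```python
   # Sketch of the section-header convention these nits ask for: the "#"
   # banner and the reST-style "-" underline should cover the title line.
   title = "Load a pre-trained PyTorch model"
   banner = "#" * (len(title) + 2)   # +2 accounts for the "# " prefix on the title line
   underline = "-" * len(title)
   header = "{}\n# {}\n# {}".format(banner, title, underline)
   print(header)
   ```

   With this, the underline exactly matches the 32-character title instead of falling short as in the patch.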



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-Pytorch:
+
+microTVM PyTorch Tutorial
+===========================
+**Authors**:
+`Mehrdad Hessar <https://github.com/mehrdadh>`_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a PyTorch model. This tutorial can be executed on a x86 CPU using C runtime (CRT).
+
+**Note:** This tutorial only runs on x86 CPU using CRT and does not run on Zephyr
+since the model would not fit on our current supported Zephyr boards.
+"""
+import pathlib
+
+import torch
+import torchvision
+from torchvision import transforms
+import numpy as np
+from PIL import Image
+
+import tvm
+from tvm import relay
+from tvm.contrib.download import download_testdata
+from tvm.relay.backend import Executor
+
+#################################
+# Load a pre-trained PyTorch model
+# -------------------------------
+#
+# To begin with, load pre-trained MobileNetV2 from torchvision. Then,
+# download a cat image and preprocess it to use as the model input.
+#
+
+model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
+model = model.eval()
+
+input_shape = [1, 3, 224, 224]
+input_data = torch.randn(input_shape)
+scripted_model = torch.jit.trace(model, input_data).eval()
+
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
+img_path = download_testdata(img_url, "cat.png", module="data")
+img = Image.open(img_path).resize((224, 224))
+
+# Preprocess the image and convert to tensor
+my_preprocess = transforms.Compose(
+    [
+        transforms.Resize(256),
+        transforms.CenterCrop(224),
+        transforms.ToTensor(),
+        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+    ]
+)
+img = my_preprocess(img)
+img = np.expand_dims(img, 0)
+
+input_name = "input0"
+shape_list = [(input_name, input_shape)]
+relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+#################################
+# Define Target, Runtime and Executor
+# -------------------------------
+#
+# In this tutorial we use AOT host driven executor. To compile the model
+# for an emulated embedded environment on an X86 machine we use C runtime (CRT)

Review Comment:
   s/X86/x86/
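
   For anyone unfamiliar with the notation, `s/X86/x86/` is a sed substitution. Applied to a scratch copy of the offending line it would look like this (the scratch file path is illustrative, and GNU sed's `-i` flag is assumed):

   ```shell
   # Demonstrate the reviewer's s/X86/x86/ fix on a scratch file.
   printf '%s\n' '# for an emulated embedded environment on an X86 machine' > /tmp/x86_fix_demo.txt
   sed -i 's/X86/x86/g' /tmp/x86_fix_demo.txt
   cat /tmp/x86_fix_demo.txt
   ```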
   



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-Pytorch:
+
+microTVM PyTorch Tutorial
+===========================
+**Authors**:
+`Mehrdad Hessar <https://github.com/mehrdadh>`_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a PyTorch model. This tutorial can be executed on a x86 CPU using C runtime (CRT).
+
+**Note:** This tutorial only runs on x86 CPU using CRT and does not run on Zephyr
+since the model would not fit on our current supported Zephyr boards.
+"""
+import pathlib
+
+import torch
+import torchvision
+from torchvision import transforms
+import numpy as np
+from PIL import Image
+
+import tvm
+from tvm import relay
+from tvm.contrib.download import download_testdata
+from tvm.relay.backend import Executor
+
+#################################
+# Load a pre-trained PyTorch model
+# -------------------------------

Review Comment:
   nit: add one more `-` to match end of line above?



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-Pytorch:
+
+microTVM PyTorch Tutorial
+===========================
+**Authors**:
+`Mehrdad Hessar <https://github.com/mehrdadh>`_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a PyTorch model. This tutorial can be executed on a x86 CPU using C runtime (CRT).
+
+**Note:** This tutorial only runs on x86 CPU using CRT and does not run on Zephyr
+since the model would not fit on our current supported Zephyr boards.
+"""
+import pathlib
+
+import torch
+import torchvision
+from torchvision import transforms
+import numpy as np
+from PIL import Image
+
+import tvm
+from tvm import relay
+from tvm.contrib.download import download_testdata
+from tvm.relay.backend import Executor
+
+#################################
+# Load a pre-trained PyTorch model
+# -------------------------------
+#
+# To begin with, load pre-trained MobileNetV2 from torchvision. Then,
+# download a cat image and preprocess it to use as the model input.
+#
+
+model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
+model = model.eval()
+
+input_shape = [1, 3, 224, 224]
+input_data = torch.randn(input_shape)
+scripted_model = torch.jit.trace(model, input_data).eval()
+
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
+img_path = download_testdata(img_url, "cat.png", module="data")
+img = Image.open(img_path).resize((224, 224))
+
+# Preprocess the image and convert to tensor
+my_preprocess = transforms.Compose(
+    [
+        transforms.Resize(256),
+        transforms.CenterCrop(224),
+        transforms.ToTensor(),
+        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+    ]
+)
+img = my_preprocess(img)
+img = np.expand_dims(img, 0)
+
+input_name = "input0"
+shape_list = [(input_name, input_shape)]
+relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+#################################
+# Define Target, Runtime and Executor
+# -------------------------------
+#
+# In this tutorial we use AOT host driven executor. To compile the model
+# for an emulated embedded environment on an X86 machine we use C runtime (CRT)
+# and we use `host` micro target. Using this setup, TVM compiles the model
+# for C runtime which can run on a X86 CPU machine with the same flow that

Review Comment:
   same here: x86 instead of X86



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-Pytorch:
+
+microTVM PyTorch Tutorial
+===========================
+**Authors**:
+`Mehrdad Hessar <https://github.com/mehrdadh>`_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a PyTorch model. This tutorial can be executed on a x86 CPU using C runtime (CRT).
+
+**Note:** This tutorial only runs on x86 CPU using CRT and does not run on Zephyr
+since the model would not fit on our current supported Zephyr boards.
+"""
+import pathlib
+
+import torch
+import torchvision
+from torchvision import transforms
+import numpy as np
+from PIL import Image
+
+import tvm
+from tvm import relay
+from tvm.contrib.download import download_testdata
+from tvm.relay.backend import Executor
+
+#################################
+# Load a pre-trained PyTorch model
+# -------------------------------
+#
+# To begin with, load pre-trained MobileNetV2 from torchvision. Then,
+# download a cat image and preprocess it to use as the model input.
+#
+
+model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
+model = model.eval()
+
+input_shape = [1, 3, 224, 224]
+input_data = torch.randn(input_shape)
+scripted_model = torch.jit.trace(model, input_data).eval()
+
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
+img_path = download_testdata(img_url, "cat.png", module="data")
+img = Image.open(img_path).resize((224, 224))
+
+# Preprocess the image and convert to tensor
+my_preprocess = transforms.Compose(
+    [
+        transforms.Resize(256),
+        transforms.CenterCrop(224),
+        transforms.ToTensor(),
+        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+    ]
+)
+img = my_preprocess(img)
+img = np.expand_dims(img, 0)
+
+input_name = "input0"
+shape_list = [(input_name, input_shape)]
+relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+#################################
+# Define Target, Runtime and Executor
+# -------------------------------

Review Comment:
   nit: add enough `-` chars to align this to the end of line above?



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-Pytorch:
+
+microTVM PyTorch Tutorial
+===========================
+**Authors**:
+`Mehrdad Hessar <https://github.com/mehrdadh>`_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a PyTorch model. This tutorial can be executed on a x86 CPU using C runtime (CRT).
+
+**Note:** This tutorial only runs on x86 CPU using CRT and does not run on Zephyr
+since the model would not fit on our current supported Zephyr boards.
+"""
+import pathlib
+
+import torch
+import torchvision
+from torchvision import transforms
+import numpy as np
+from PIL import Image
+
+import tvm
+from tvm import relay
+from tvm.contrib.download import download_testdata
+from tvm.relay.backend import Executor
+
+#################################

Review Comment:
   nit: add one more `#` to "cover" the line below? 
   



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,198 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-Pytorch:
+
+microTVM PyTorch Tutorial
+===========================
+**Authors**:
+`Mehrdad Hessar <https://github.com/mehrdadh>`_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a PyTorch model. This tutorial can be executed on a x86 CPU using C runtime (CRT).
+
+**Note:** This tutorial only runs on x86 CPU using CRT and does not run on Zephyr
+since the model would not fit on our current supported Zephyr boards.
+"""
+import pathlib
+
+import torch
+import torchvision
+from torchvision import transforms
+import numpy as np
+from PIL import Image
+
+import tvm
+from tvm import relay
+from tvm.contrib.download import download_testdata
+from tvm.relay.backend import Executor
+
+#################################
+# Load a pre-trained PyTorch model
+# -------------------------------
+#
+# To begin with, load pre-trained MobileNetV2 from torchvision. Then,
+# download a cat image and preprocess it to use as the model input.
+#
+
+model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
+model = model.eval()
+
+input_shape = [1, 3, 224, 224]
+input_data = torch.randn(input_shape)
+scripted_model = torch.jit.trace(model, input_data).eval()
+
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
+img_path = download_testdata(img_url, "cat.png", module="data")
+img = Image.open(img_path).resize((224, 224))
+
+# Preprocess the image and convert to tensor
+my_preprocess = transforms.Compose(
+    [
+        transforms.Resize(256),
+        transforms.CenterCrop(224),
+        transforms.ToTensor(),
+        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+    ]
+)
+img = my_preprocess(img)
+img = np.expand_dims(img, 0)
+
+input_name = "input0"
+shape_list = [(input_name, input_shape)]
+relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+#################################
+# Define Target, Runtime and Executor
+# -------------------------------
+#
+# In this tutorial we use AOT host driven executor. To compile the model

Review Comment:
   nit: host-driven? 



##########
gallery/how_to/work_with_microtvm/micro_pytorch.py:
##########
@@ -0,0 +1,205 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-Pytorch:
+
+microTVM PyTorch Tutorial
+===========================
+**Authors**:
+`Mehrdad Hessar <https://github.com/mehrdadh>`_
+
+This tutorial is showcasing microTVM host-driven AoT compilation with
+a PyTorch model. This tutorial can be executed on a x86 CPU using C runtime (CRT).
+
+**Note:** This tutorial only runs on x86 CPU using CRT and does not run on Zephyr
+since the model would not fit on our current supported Zephyr boards.
+"""
+
+# sphinx_gallery_start_ignore
+from tvm import testing
+
+testing.utils.install_request_hook(depth=3)
+# sphinx_gallery_end_ignore
+
+import pathlib
+
+import torch
+import torchvision
+from torchvision import transforms
+import numpy as np
+from PIL import Image
+
+import tvm
+from tvm import relay
+from tvm.contrib.download import download_testdata
+from tvm.relay.backend import Executor
+
+#################################
+# Load a pre-trained PyTorch model
+# -------------------------------
+#
+# To begin with, load pre-trained MobileNetV2 from torchvision. Then,
+# download a cat image and preprocess it to use as the model input.
+#
+
+model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
+model = model.eval()
+
+input_shape = [1, 3, 224, 224]
+input_data = torch.randn(input_shape)
+scripted_model = torch.jit.trace(model, input_data).eval()
+
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
+img_path = download_testdata(img_url, "cat.png", module="data")
+img = Image.open(img_path).resize((224, 224))
+
+# Preprocess the image and convert to tensor
+my_preprocess = transforms.Compose(
+    [
+        transforms.Resize(256),
+        transforms.CenterCrop(224),
+        transforms.ToTensor(),
+        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+    ]
+)
+img = my_preprocess(img)
+img = np.expand_dims(img, 0)
+
+input_name = "input0"
+shape_list = [(input_name, input_shape)]
+relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+#################################
+# Define Target, Runtime and Executor
+# -------------------------------
+#
+# In this tutorial we use AOT host driven executor. To compile the model
+# for an emulated embedded environment on an X86 machine we use C runtime (CRT)
+# and we use `host` micro target. Using this setup, TVM compiles the model
+# for C runtime which can run on a X86 CPU machine with the same flow that
+# would run on a physical microcontroller.
+#
+
+
+# Simulate a microcontroller on the host machine. Uses the main() from `src/runtime/crt/host/main.cc`
+# To use physical hardware, replace "host" with something matching your hardware.

Review Comment:
   How about instead of using "something", say "replace 'host' with another physical micro target, e.g. 'nrf52840' or 'mps2_an521' -- see more target examples in the micro_train.py and micro_tflite.py tutorials"?
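
   That suggestion could be sketched as follows; the helper name is hypothetical, and the board names are just the examples mentioned in the comment:

   ```python
   # Illustrative only: "host" simulates a microcontroller on the host
   # machine, while a board name such as "nrf52840" or "mps2_an521"
   # selects physical hardware instead.
   def pick_micro_target(use_physical_hardware: bool) -> str:
       return "nrf52840" if use_physical_hardware else "host"

   board = pick_micro_target(use_physical_hardware=False)
   print(board)  # -> host
   ```

   In the tutorial, the chosen name would then be passed to `tvm.target.target.micro(board)`, as the other microTVM tutorials do.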



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] tvm-bot commented on pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
tvm-bot commented on PR #13324:
URL: https://github.com/apache/tvm/pull/13324#issuecomment-1307570038

   <!---bot-comment-->
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.
   
   <!--bot-comment-ccs-start-->
    * cc @alanmacd, @gromero, @yelite <sub>See [#10317](https://github.com/apache/tvm/issues/10317) for details</sub><!--bot-comment-ccs-end-->
   
   <sub>Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)</sub>




[GitHub] [tvm] gromero commented on a diff in pull request #13324: [microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT

Posted by GitBox <gi...@apache.org>.
gromero commented on code in PR #13324:
URL: https://github.com/apache/tvm/pull/13324#discussion_r1017216300


##########
src/runtime/crt/host/microtvm_api_server.py:
##########
@@ -35,6 +37,18 @@
 
 IS_TEMPLATE = not os.path.exists(os.path.join(PROJECT_DIR, MODEL_LIBRARY_FORMAT_RELPATH))
 
+MEMORY_SIZE_BYTES = 2 * 1024 * 1024

Review Comment:
   @mehrdadh I think that's actually interesting / important info: the constant is determined experimentally, so it's really good to have a comment about it as @alanmacd requested. I don't even consider it a nit :-) 
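
   A sketch of what that requested comment might look like; the wording is hypothetical, and only the 2 MB value comes from the patch:

   ```python
   # Workspace memory reserved for the emulated CRT device. The 2 MB
   # value was determined experimentally: it is large enough to hold the
   # quantized MobileNetV2 model used in the microTVM PyTorch tutorial.
   MEMORY_SIZE_BYTES = 2 * 1024 * 1024

   print(MEMORY_SIZE_BYTES)  # -> 2097152
   ```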


