Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/09/22 15:32:40 UTC

[GitHub] [tvm] joshherr-quic opened a new pull request, #12873: [Hexagon] Float and quantized dense operators with schedules

joshherr-quic opened a new pull request, #12873:
URL: https://github.com/apache/tvm/pull/12873

   This PR implements dense operators for float and quantized types. The quantized implementation uses floating point for its intermediate compute type; a fixed-point implementation will be investigated in the future.
   
   Additionally, there are some overall accuracy issues. These might be due to the test cases not being realistic (i.e., not conforming to real-world dense operator activations/weights).
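
   For context, here is a minimal NumPy sketch of a quantized dense computed with a float32 intermediate (dequantize, matmul in float, then requantize). It is illustrative only; the function name and the scale/zero-point handling are assumptions, not the TIR implementation in this PR.

   ```python
   import numpy as np

   def qdense_float_intermediate(x_q, w_q, x_scale, x_zp, w_scale, w_zp,
                                 out_scale, out_zp, out_dtype="uint8"):
       """Reference quantized dense using a float32 intermediate compute type."""
       # Dequantize activations (batch, in) and weights (out, in) to float32.
       x_f = (x_q.astype("float32") - x_zp) * x_scale
       w_f = (w_q.astype("float32") - w_zp) * w_scale
       # Dense in float32: (batch, in) @ (in, out) -> (batch, out).
       y_f = x_f @ w_f.T
       # Requantize the float32 result to the output dtype.
       y_q = np.round(y_f / out_scale) + out_zp
       info = np.iinfo(np.dtype(out_dtype))
       return np.clip(y_q, info.min, info.max).astype(out_dtype)
   ```

   A fixed-point variant would replace the float32 arithmetic with integer multiply-accumulate plus requantization shifts.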


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] kparzysz-quic commented on a diff in pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by "kparzysz-quic (via GitHub)" <gi...@apache.org>.
kparzysz-quic commented on code in PR #12873:
URL: https://github.com/apache/tvm/pull/12873#discussion_r1085670530


##########
tests/python/contrib/test_hexagon/topi/slice_op/test_dense_slice.py:
##########
@@ -0,0 +1,297 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import pytest
+import numpy as np
+
+from tvm import te, topi
+
+import tvm.testing
+from tvm.topi import testing
+from tvm.contrib.hexagon.build import HexagonLauncher
+from tvm.contrib.hexagon.session import Session
+import tvm.topi.hexagon.qnn as qnn
+import tvm.topi.hexagon.slice_ops as sl
+from ...infrastructure import transform_numpy, quantize_np
+from tvm.contrib.hexagon import allocate_hexagon_array
+
+
+@tvm.testing.fixture
+def input_np(input_shape, dtype):
+    if "int" in dtype:
+        data = np.random.random(input_shape).astype("float32")
+    elif "float" in dtype:
+        data = np.random.random(input_shape).astype(dtype)
+    return data
+
+
+@tvm.testing.fixture
+def weight_np(weight_shape, dtype):
+    if "int" in dtype:
+        weight = np.random.random(weight_shape).astype("float32")
+    elif "float" in dtype:
+        weight = np.random.random(weight_shape).astype(dtype)
+    return weight
+
+
+@tvm.testing.fixture
+def input_quant(input_np, dtype):
+    if "float" in dtype:
+        return None
+    quant, scale, zp = quantize_np(input_np, dtype)
+    return {"zero": zp, "scale": scale, "data": quant}
+
+
+@tvm.testing.fixture
+def weight_quant(weight_np, dtype):
+    if "float" in dtype:
+        return None
+    quant, scale, zp = quantize_np(weight_np, "int8")
+    return {"zero": zp, "scale": scale, "data": quant}
+
+
+@tvm.testing.fixture
+def bias_np(bias_shape, bias, dtype):
+    if bias:
+        if "int" in dtype:
+            data = np.random.randint(-128, 127, size=bias_shape).astype("int32")
+        elif "float" in dtype:
+            data = np.random.random(bias_shape).astype(dtype)
+        return data
+    else:
+        return None
+
+
+@tvm.testing.fixture
+def quant_arr(input_quant, weight_quant):
+    if input_quant is None:
+        return None
+    arr = np.empty((6,), dtype="float32")
+    arr[0] = input_quant["zero"]
+    arr[1] = input_quant["scale"]
+    arr[2] = weight_quant["zero"]
+    arr[3] = weight_quant["scale"]
+    return arr
+
+
+@tvm.testing.fixture
+def transformed_expected_output_np(expected_output_np, layout):
+    return transform_numpy(expected_output_np, "nc", layout)
+
+
+@tvm.testing.fixture
+def transformed_input_np(input_np, layout):
+    return transform_numpy(input_np, "nc", layout)
+
+
+# TODO(joshherr-quic): transforming weight forces us to put it in vtcm. Crashes at runtime in vtcm

Review Comment:
   Removed.



##########
tests/python/contrib/test_hexagon/topi/slice_op/test_dense_slice.py:
##########
@@ -0,0 +1,297 @@
[diff context omitted: identical to the first excerpt above (license header, imports, and fixtures through transformed_input_np)]
+# TODO(joshherr-quic): transforming weight forces us to put it in vtcm. Crashes at runtime in vtcm
+# @tvm.testing.fixture
+# def transformed_weight_np(weight_np, layout):
+#     return transform_numpy(weight_np, "nc", layout)
+
+
+@tvm.testing.fixture
+def transformed_input_quant(input_quant, layout):
+    if input_quant is None:
+        return None
+    input_quant["data"] = transform_numpy(input_quant["data"], "nc", layout)
+    return input_quant
+
+
+# @tvm.testing.fixture

Review Comment:
   Removed.



##########
tests/python/contrib/test_hexagon/topi/slice_op/test_dense_slice.py:
##########
@@ -0,0 +1,297 @@
[diff context omitted: identical to the excerpts above, through the transformed_input_quant fixture]
+# @tvm.testing.fixture
+# def transformed_weight_quant(weight_quant, layout):
+#     weight_quant["data"] = transform_numpy(weight_quant["data"], "nc", layout)
+#     return weight_quant
+
+# Test combinations of the following:

Review Comment:
   Removed.





[GitHub] [tvm] kparzysz-quic commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by GitBox <gi...@apache.org>.
kparzysz-quic commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1343020927

   @tvm-bot rerun




[GitHub] [tvm] joshherr-quic commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by GitBox <gi...@apache.org>.
joshherr-quic commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1310525244

   The PR is failing for other targets; I'm not sure what is going on.
   
   CC: @mehrdadh 




[GitHub] [tvm] mehrdadh commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1301198107

   @tvm-bot rerun




[GitHub] [tvm] kparzysz-quic commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by "kparzysz-quic (via GitHub)" <gi...@apache.org>.
kparzysz-quic commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1401048093

   @mehrdadh 




[GitHub] [tvm] kparzysz-quic commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by GitBox <gi...@apache.org>.
kparzysz-quic commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1351947837

   Ping.  The lint issues have been addressed and the PR has been rebased.




[GitHub] [tvm] github-actions[bot] commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1343021235

   Failed to re-run CI in https://github.com/apache/tvm/actions/runs/3650397411
   
   <details>
   
   ```
   Traceback (most recent call last):
     File "ci/scripts/github/github_tvmbot.py", line 593, in comment_failure
       raise item
     File "ci/scripts/github/github_tvmbot.py", line 699, in run
       pr.rerun_jenkins_ci()
     File "ci/scripts/github/github_tvmbot.py", line 552, in rerun_jenkins_ci
       post(url, auth=("tvm-bot", TVM_BOT_JENKINS_TOKEN))
     File "/home/runner/work/tvm/tvm/ci/scripts/jenkins/git_utils.py", line 53, in post
       with request.urlopen(req, data) as response:
     File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
       return opener.open(url, data, timeout)
     File "/usr/lib/python3.8/urllib/request.py", line 531, in open
       response = meth(req, response)
     File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
       response = self.parent.error(
     File "/usr/lib/python3.8/urllib/request.py", line 569, in error
       return self._call_chain(*args)
     File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
       result = func(*args)
     File "/usr/lib/python3.8/urllib/request.py", line 649, in http_error_default
       raise HTTPError(req.full_url, code, msg, hdrs, fp)
   urllib.error.HTTPError: HTTP Error 500: Server Error
   
   ```
   
   with response
   
   ```
   
     
      [Jenkins HTML 500 error page omitted. Recoverable content: "Oops! A problem occurred while processing the request." Logging ID=9261c60e-b83c-49c8-a76c-45b2010f4d00, Jenkins 2.361.2]
   ```
   
   </details>
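
   For reference, a minimal sketch of the kind of call the traceback above fails in: the bot POSTs to a Jenkins buildWithParameters endpoint with basic auth, and a Jenkins-side 500 surfaces as urllib.error.HTTPError. The helper name, URL, and credentials below are hypothetical, not the actual tvmbot code.

   ```python
   import base64
   from urllib import error, request

   def rerun_jenkins_job(job_url: str, user: str, token: str) -> None:
       """POST to Jenkins' buildWithParameters to retrigger a job (sketch)."""
       req = request.Request(f"{job_url}/buildWithParameters", method="POST")
       auth = base64.b64encode(f"{user}:{token}".encode()).decode()
       req.add_header("Authorization", f"Basic {auth}")
       try:
           with request.urlopen(req, data=b"") as response:
               print("Jenkins responded with HTTP", response.status)
       except error.HTTPError as err:
           # An HTTP 500 here is a Jenkins-side failure, as in the log above.
           print("Rerun request failed:", err.code)

   # Hypothetical usage:
   # rerun_jenkins_job("https://ci.tlcpack.ai/job/tvm-arm/job/PR-12873", "tvm-bot", "<token>")
   ```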




[GitHub] [tvm] mehrdadh commented on a diff in pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by "mehrdadh (via GitHub)" <gi...@apache.org>.
mehrdadh commented on code in PR #12873:
URL: https://github.com/apache/tvm/pull/12873#discussion_r1084641685


##########
tests/python/contrib/test_hexagon/topi/slice_op/test_dense_slice.py:
##########
@@ -0,0 +1,297 @@
[diff context omitted: identical to the excerpts above (license header, imports, and fixtures through transformed_input_np)]
+# TODO(joshherr-quic): transforming weight forces us to put it in vtcm. Crashes at runtime in vtcm
+# @tvm.testing.fixture
+# def transformed_weight_np(weight_np, layout):
+#     return transform_numpy(weight_np, "nc", layout)
+
+
+@tvm.testing.fixture
+def transformed_input_quant(input_quant, layout):
+    if input_quant is None:
+        return None
+    input_quant["data"] = transform_numpy(input_quant["data"], "nc", layout)
+    return input_quant
+
+
+# @tvm.testing.fixture
+# def transformed_weight_quant(weight_quant, layout):
+#     weight_quant["data"] = transform_numpy(weight_quant["data"], "nc", layout)
+#     return weight_quant
+
+# Test combinations of the following:

Review Comment:
   These comments are also not needed; the testing parameters show the combinations
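
   For reference, a minimal sketch of how tvm.testing.parameters spells out such combinations (the shapes and dtypes here are illustrative, not the PR's actual values):

   ```python
   import tvm.testing

   # Each call defines pytest fixtures that parametrize the tests below them.
   dtype = tvm.testing.parameter("float16", "uint8")
   bias = tvm.testing.parameter(True, False)

   # Paired values: each tuple is one tested (input_shape, weight_shape) combination.
   input_shape, weight_shape = tvm.testing.parameters(
       ((1, 1024), (1000, 1024)),
       ((1, 2048), (1000, 2048)),
   )
   ```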



##########
tests/python/contrib/test_hexagon/topi/slice_op/test_dense_slice.py:
##########
@@ -0,0 +1,297 @@
[diff context omitted: identical to the excerpts above (license header, imports, and fixtures through transformed_input_np)]
+# TODO(joshherr-quic): transforming weight forces us to put it in vtcm. Crashes at runtime in vtcm
+# @tvm.testing.fixture
+# def transformed_weight_np(weight_np, layout):
+#     return transform_numpy(weight_np, "nc", layout)
+
+
+@tvm.testing.fixture
+def transformed_input_quant(input_quant, layout):
+    if input_quant is None:
+        return None
+    input_quant["data"] = transform_numpy(input_quant["data"], "nc", layout)
+    return input_quant
+
+
+# @tvm.testing.fixture

Review Comment:
   same here



##########
tests/python/contrib/test_hexagon/topi/slice_op/test_dense_slice.py:
##########
@@ -0,0 +1,297 @@
[diff context omitted: identical to the excerpts above, through the transformed_input_np fixture]
+# TODO(joshherr-quic): transforming weight forces us to put it in vtcm. Crashes at runtime in vtcm

Review Comment:
   Please remove these comments.





[GitHub] [tvm] tvm-bot commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by GitBox <gi...@apache.org>.
tvm-bot commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1296382381

   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.
   
    * Built docs for commit 11d1cb824905b9e38c45db11953683c0938c3e83 can be found [here](https://pr-docs.tlcpack.ai/PR-12873/13/docs/index.html).
   
   <sub>Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)</sub>




[GitHub] [tvm] mehrdadh commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1310642516

   `allocate_hexagon_array` was moved in this PR: https://github.com/apache/tvm/pull/13336
   Please rebase and fix the imports in your PR.
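
   For illustration, the assumed import after that move (this matches the path already used in the diff excerpts above; verify against the current tree after rebasing):

   ```python
   # Assumed new location of allocate_hexagon_array after apache/tvm#13336.
   from tvm.contrib.hexagon import allocate_hexagon_array
   ```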




[GitHub] [tvm] joshherr-quic commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by GitBox <gi...@apache.org>.
joshherr-quic commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1309706691

   @tvm-bot rerun




[GitHub] [tvm] kparzysz-quic commented on pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by GitBox <gi...@apache.org>.
kparzysz-quic commented on PR #12873:
URL: https://github.com/apache/tvm/pull/12873#issuecomment-1377976182

   Ping @mehrdadh 




[GitHub] [tvm] mehrdadh merged pull request #12873: [Hexagon] Float and quantized dense operators with schedules

Posted by "mehrdadh (via GitHub)" <gi...@apache.org>.
mehrdadh merged PR #12873:
URL: https://github.com/apache/tvm/pull/12873

