You are viewing a plain text version of this content. The canonical link for it is here.
Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/06/10 01:24:43 UTC

[GitHub] [tvm] AndrewZhaoLuo opened a new pull request, #11663: [AutoTVM] Fix flaky test

AndrewZhaoLuo opened a new pull request, #11663:
URL: https://github.com/apache/tvm/pull/11663

   Closes https://github.com/apache/tvm/issues/10489.
   
   The issue is that autotvm can generate uncompiliable code for gpu. There is a built in pass that filters these out. Unfortunately, it is impossible to guarantee during tuning, especially for small number of trials we will grab a schedule that doesn't get filtered out. 
   
   Therefore, we will accept records which are filtered out by the gpu verification pass as proof the system is working.
   
   The reason it was flaky before is because once in a while when tuning, it could not find a valid schedule due to low number of kernel trials. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] AndrewZhaoLuo commented on pull request #11663: [AutoTVM] Fix flaky test

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on PR #11663:
URL: https://github.com/apache/tvm/pull/11663#issuecomment-1151797889

   cc @driazati @jwfromm 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] AndrewZhaoLuo commented on pull request #11663: [AutoTVM] Fix flaky test

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on PR #11663:
URL: https://github.com/apache/tvm/pull/11663#issuecomment-1164932231

   cc @driazati any opinions?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] AndrewZhaoLuo merged pull request #11663: [AutoTVM] Fix flaky test

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo merged PR #11663:
URL: https://github.com/apache/tvm/pull/11663


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] driazati commented on a diff in pull request #11663: [AutoTVM] Fix flaky test

Posted by GitBox <gi...@apache.org>.
driazati commented on code in PR #11663:
URL: https://github.com/apache/tvm/pull/11663#discussion_r894765517


##########
tests/python/integration/test_tuning.py:
##########
@@ -174,7 +173,14 @@ def runner(target, dev):
 
         assert len(results) == 20
 
-        successful_results = [r for r in results if r.error_no == autotvm.MeasureErrorNo.NO_ERROR]
+        successful_results = [
+            r
+            for r in results
+            if r.error_no == autotvm.MeasureErrorNo.NO_ERROR
+            # Autotvm can filter some records before building if we know they won't work ahead of time.
+            # We can't guarantee we sample at least one good record so we count these as success too
+            or r.error_no == autotvm.MeasureErrorNo.INSTANTIATION_ERROR

Review Comment:
   no idea what I'm talking about here but wouldn't this just hide the flakiness the same as the `xfail` (i.e. if this line happens the test is bogus)? Is there a way to make it deterministic on a known good rng seed?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] AndrewZhaoLuo commented on a diff in pull request #11663: [AutoTVM] Fix flaky test

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on code in PR #11663:
URL: https://github.com/apache/tvm/pull/11663#discussion_r894777445


##########
tests/python/integration/test_tuning.py:
##########
@@ -174,7 +173,14 @@ def runner(target, dev):
 
         assert len(results) == 20
 
-        successful_results = [r for r in results if r.error_no == autotvm.MeasureErrorNo.NO_ERROR]
+        successful_results = [
+            r
+            for r in results
+            if r.error_no == autotvm.MeasureErrorNo.NO_ERROR
+            # Autotvm can filter some records before building if we know they won't work ahead of time.
+            # We can't guarantee we sample at least one good record so we count these as success too
+            or r.error_no == autotvm.MeasureErrorNo.INSTANTIATION_ERROR

Review Comment:
   Nah it's expected to sometimes fail during the tuning process so we would expect each result to have no error or an instantiation error (which indicates we caught it and didn't build). I don't know enough about how to make things deterministic, perhaps by replacing the workload with something simpler where it can never fail.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org