Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/08/22 14:03:22 UTC

[GitHub] [tvm] guberti opened a new pull request, #12539: [microTVM] Return median of model runtimes by default, instead of mean

guberti opened a new pull request, #12539:
URL: https://github.com/apache/tvm/pull/12539

   Changes `evaluate_model_accuracy` in `python/tvm/micro/testing/evaluation.py` to return the median model runtime rather than the mean. This is intended as a workaround for #12538, not a fix.
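   For illustration only, a minimal sketch of why the median is a reasonable workaround when a run contains timing anomalies; the runtime values below are made up:

       import numpy as np

       # Hypothetical per-run timings in seconds; one anomalous run skews the mean.
       aot_runtimes = [0.012, 0.011, 0.013, 0.012, 0.250]

       mean_time = sum(aot_runtimes) / len(aot_runtimes)  # ~0.060 s, dominated by the outlier
       median_time = np.median(aot_runtimes)              # 0.012 s, robust to the outlier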


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] guberti commented on a diff in pull request #12539: [microTVM] Return median of model runtimes by default, instead of mean

Posted by GitBox <gi...@apache.org>.
guberti commented on code in PR #12539:
URL: https://github.com/apache/tvm/pull/12539#discussion_r951760286


##########
python/tvm/micro/testing/evaluation.py:
##########
@@ -154,6 +154,6 @@ def evaluate_model_accuracy(session, aot_executor, input_data, true_labels, runs
         aot_runtimes.append(runtime)
 
     num_correct = sum(u == v for u, v in zip(true_labels, predicted_labels))
-    average_time = sum(aot_runtimes) / len(aot_runtimes)
+    average_time = np.median(aot_runtimes)

Review Comment:
   I think this is a great idea! To go one step further, I've expanded the scope of this PR a little and reworked `evaluate_model_accuracy` into `predict_labels_aot` - the description has been updated.
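   A hypothetical usage sketch of the reworked helper, assuming, per this comment and the diff context, that it returns per-sample predicted labels alongside the raw runtimes and leaves aggregation to the caller; the exact signature is not shown in this thread:

       import numpy as np

       # Assumed shape of the reworked API: labels plus raw runtimes, no aggregation.
       predicted_labels, aot_runtimes = predict_labels_aot(session, aot_executor, input_data)

       num_correct = sum(u == v for u, v in zip(true_labels, predicted_labels))
       accuracy = num_correct / len(true_labels)
       typical_runtime = np.median(aot_runtimes)  # or any aggregation the caller prefers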





[GitHub] [tvm] mehrdadh commented on a diff in pull request #12539: [microTVM] Return median of model runtimes by default, instead of mean

Posted by GitBox <gi...@apache.org>.
mehrdadh commented on code in PR #12539:
URL: https://github.com/apache/tvm/pull/12539#discussion_r951656042


##########
python/tvm/micro/testing/evaluation.py:
##########
@@ -154,6 +154,6 @@ def evaluate_model_accuracy(session, aot_executor, input_data, true_labels, runs
         aot_runtimes.append(runtime)
 
     num_correct = sum(u == v for u, v in zip(true_labels, predicted_labels))
-    average_time = sum(aot_runtimes) / len(aot_runtimes)
+    average_time = np.median(aot_runtimes)

Review Comment:
   As a helper function, I don't think we want to make the decision here about how to use the data, especially when the data contains anomalies. To fix this issue, I suggest we report the full list of runtimes and let users handle it based on their use case.
   For example, someone might sort the data, drop everything below the 10th and above the 90th percentile, and then take the average.
   wdyt?
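   A minimal sketch of the trimming described above, assuming the helper returns the raw list of runtimes; the function name and the 10th/90th percentile bounds are illustrative:

       import numpy as np

       def trimmed_mean(runtimes, lower_pct=10, upper_pct=90):
           """Average only the runtimes between the given percentiles."""
           lo, hi = np.percentile(runtimes, [lower_pct, upper_pct])
           kept = [t for t in runtimes if lo <= t <= hi]
           return sum(kept) / len(kept)

   (scipy.stats.trim_mean implements the same idea as a one-liner.)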





[GitHub] [tvm] mehrdadh merged pull request #12539: [microTVM] Rework evaluate_model_accuracy into a more generic helper function

Posted by GitBox <gi...@apache.org>.
mehrdadh merged PR #12539:
URL: https://github.com/apache/tvm/pull/12539

