Posted to issues@beam.apache.org by "Kyle Weaver (Jira)" <ji...@apache.org> on 2020/11/09 21:18:00 UTC
[jira] [Updated] (BEAM-10689) Unskip test_metrics (py) in Spark runner
[ https://issues.apache.org/jira/browse/BEAM-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Kyle Weaver updated BEAM-10689:
-------------------------------
Description:
For the test_metrics failure, I found that metrics are being passed to Python. The test breaks because no metrics match the filter [1]: the runner transforms the step names (e.g. count1 is reported as ref_AppliedPTransform_count1_17), and the filter's matching logic [2] is too strict to recognize the transformed names:
Spark Runner:
MetricKey(step=ref_AppliedPTransform_count1_17, metric=MetricName(namespace=ns, name=counter), labels={}): 2
MetricKey(step=ref_AppliedPTransform_count2_18, metric=MetricName(namespace=ns, name=counter), labels={}): 4
...
Fn API Runner:
MetricKey(step=count1, metric=MetricName(namespace=ns, name=counter), labels={}): 2,
MetricKey(step=count2, metric=MetricName(namespace=ns, name=counter), labels={}): 4
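The mismatch can be sketched in plain Python. This is an illustrative reimplementation, not the actual apache_beam filter code; step_matches is a hypothetical helper standing in for the strict matching logic referenced in [2]:

```python
def step_matches(filter_step, actual_step):
    """Hypothetical strict matcher: the filter step must appear as a
    whole '/'-delimited component of the reported step name."""
    return filter_step in actual_step.split('/')

# The Fn API Runner reports the user-visible step name, so the filter matches.
assert step_matches('count1', 'count1')

# The Spark runner reports a transformed name. The strict matcher fails
# even though the name contains 'count1' as a substring.
assert not step_matches('count1', 'ref_AppliedPTransform_count1_17')
```

Under this reading, either the filter would need to match on substrings/components of the transformed name, or the runner would need to report the original user step names.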
Also note that Flink has its own, completely different implementation of test_metrics [3].
[1] https://github.com/apache/beam/blob/2ef7b9db8af015dcba544b93df00a4e54cd8caf2/sdks/python/apache_beam/runners/portability/fn_api_runner/fn_runner_test.py#L744
[2] https://github.com/apache/beam/blob/2ef7b9db8af015dcba544b93df00a4e54cd8caf2/sdks/python/apache_beam/metrics/metric.py#L151-L155
[3] https://github.com/apache/beam/blob/2ef7b9db8af015dcba544b93df00a4e54cd8caf2/sdks/python/apache_beam/runners/portability/flink_runner_test.py#L251
was: https://issues.apache.org/jira/browse/BEAM-7219?focusedCommentId=17106702&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17106702
> Unskip test_metrics (py) in Spark runner
> ----------------------------------------
>
> Key: BEAM-10689
> URL: https://issues.apache.org/jira/browse/BEAM-10689
> Project: Beam
> Issue Type: Improvement
> Components: runner-spark, testing
> Reporter: Kyle Weaver
> Priority: P3
> Labels: portability-spark
> Time Spent: 1h
> Remaining Estimate: 0h
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)