Posted to issues@beam.apache.org by "Beam JIRA Bot (Jira)" <ji...@apache.org> on 2022/04/07 16:59:00 UTC

[jira] [Commented] (BEAM-14068) RunInference Benchmarking tests

    [ https://issues.apache.org/jira/browse/BEAM-14068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17519020#comment-17519020 ] 

Beam JIRA Bot commented on BEAM-14068:
--------------------------------------

This issue is assigned but has not received an update in 30 days so it has been labeled "stale-assigned". If you are still working on the issue, please give an update and remove the label. If you are no longer working on the issue, please unassign so someone else may work on it. In 7 days the issue will be automatically unassigned.

> RunInference Benchmarking tests
> -------------------------------
>
>                 Key: BEAM-14068
>                 URL: https://issues.apache.org/jira/browse/BEAM-14068
>             Project: Beam
>          Issue Type: Sub-task
>          Components: sdk-py-core
>            Reporter: Anand Inguva
>            Assignee: Anand Inguva
>            Priority: P2
>              Labels: stale-assigned
>
> RunInference benchmarks will evaluate the performance of pipelines that represent common Beam + Dataflow use cases with PyTorch, scikit-learn, and possibly TFX (see the pipeline sketch below). These benchmarks would serve as integration tests that exercise several software components across Beam, PyTorch, scikit-learn, and TensorFlow Extended.
> We would use publicly available datasets (e.g., from Kaggle).
> Size: small / 10 GB / 1 TB, etc.
> The default execution runner would be Dataflow unless specified otherwise.
> These tests would run infrequently (once per release cycle).
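For illustration, here is a minimal sketch of the kind of pipeline such a benchmark would exercise, assuming the RunInference API as it later shipped in apache_beam.ml.inference; the model URI and bucket name are hypothetical placeholders, and the pipeline runs on the DirectRunner unless Dataflow options are supplied:

    import apache_beam as beam
    import numpy as np
    from apache_beam.ml.inference.base import RunInference
    from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

    # Hypothetical model location; any pickled scikit-learn model would work here.
    model_handler = SklearnModelHandlerNumpy(model_uri='gs://example-bucket/model.pkl')

    # Defaults to the DirectRunner; a benchmark run would pass Dataflow pipeline options.
    with beam.Pipeline() as pipeline:
        _ = (
            pipeline
            | 'CreateExamples' >> beam.Create([np.array([1.0, 2.0]), np.array([3.0, 4.0])])
            | 'RunInference' >> RunInference(model_handler)  # emits PredictionResult elements
            | 'Print' >> beam.Map(print)
        )

A benchmark variant of this sketch would swap in a large public dataset and a PyTorch or TFX model handler, then measure throughput and resource usage on Dataflow.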



--
This message was sent by Atlassian Jira
(v8.20.1#820001)