Posted to issues@beam.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/04/26 05:59:00 UTC

[jira] [Work logged] (BEAM-14068) RunInference Benchmarking tests

     [ https://issues.apache.org/jira/browse/BEAM-14068?focusedWorklogId=762129&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762129 ]

ASF GitHub Bot logged work on BEAM-14068:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 26/Apr/22 05:58
            Start Date: 26/Apr/22 05:58
    Worklog Time Spent: 10m 
      Work Description: asf-ci commented on PR #17462:
URL: https://github.com/apache/beam/pull/17462#issuecomment-1109377414

   Can one of the admins verify this patch?




Issue Time Tracking
-------------------

    Worklog Id:     (was: 762129)
    Time Spent: 20m  (was: 10m)

> RunInference Benchmarking tests
> -------------------------------
>
>                 Key: BEAM-14068
>                 URL: https://issues.apache.org/jira/browse/BEAM-14068
>             Project: Beam
>          Issue Type: Sub-task
>          Components: sdk-py-core
>            Reporter: Anand Inguva
>            Assignee: Anand Inguva
>            Priority: P2
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> RunInference benchmarks will evaluate the performance of pipelines that represent common Beam + Dataflow use cases in PyTorch, scikit-learn, and possibly TFX. These benchmarks would serve as integration tests that exercise several software components across Beam, PyTorch, scikit-learn, and TensorFlow Extended.
> We would use publicly available datasets (e.g., from Kaggle).
> Size: small / 10 GB / 1 TB etc.
> The default execution runner would be Dataflow unless specified otherwise.
> These tests would run infrequently (once per release cycle).
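As a rough illustration of the kind of measurement such a benchmark would make, here is a minimal local sketch using scikit-learn only. It is an assumption-laden stand-in: the model, dataset sizes, and pickle round-trip are all illustrative, and the real suite would load models inside a Beam RunInference pipeline running on Dataflow rather than calling `predict` directly.

```python
import pickle
import time

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative stand-in for a RunInference benchmark: train a small model,
# then measure batch-inference throughput over a synthetic dataset.
X_train, y_train = make_classification(
    n_samples=1_000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=200).fit(X_train, y_train)

# The real suite would load a serialized model from storage inside a Beam
# pipeline; here we just round-trip it through pickle to mimic that step.
model = pickle.loads(pickle.dumps(model))

X_bench, _ = make_classification(
    n_samples=10_000, n_features=20, random_state=1)
start = time.perf_counter()
preds = model.predict(X_bench)
elapsed = time.perf_counter() - start

throughput = len(X_bench) / elapsed  # examples per second
print(f"{len(preds)} predictions in {elapsed:.3f}s ({throughput:,.0f} ex/s)")
```

A Dataflow run would additionally capture pipeline-level metrics (wall time, vCPU-hours, autoscaling behavior) rather than a single process-local timing.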



--
This message was sent by Atlassian Jira
(v8.20.7#820007)