Posted to issues@beam.apache.org by "Chamikara Madhusanka Jayalath (Jira)" <ji...@apache.org> on 2022/05/10 16:40:00 UTC

[jira] [Assigned] (BEAM-14429) SyntheticUnboundedSource(with SDF) produce duplicate records when split with DEFAULT_DESIRED_NUM_SPLITS

     [ https://issues.apache.org/jira/browse/BEAM-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chamikara Madhusanka Jayalath reassigned BEAM-14429:
----------------------------------------------------

    Assignee: John Casey

> SyntheticUnboundedSource(with SDF) produce duplicate records when split with DEFAULT_DESIRED_NUM_SPLITS
> -------------------------------------------------------------------------------------------------------
>
>                 Key: BEAM-14429
>                 URL: https://issues.apache.org/jira/browse/BEAM-14429
>             Project: Beam
>          Issue Type: Bug
>          Components: io-common
>            Reporter: Yichi Zhang
>            Assignee: John Casey
>            Priority: P2
>             Fix For: 2.39.0
>
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> With the default of 20 splits, the number of records produced by Read.from(SyntheticUnboundedSource) is always larger than the numRecords specified, and the discrepancy grows as the number of splits increases. The Read step also tends to take longer with more splits.
> [https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/io/Read.java#L512]
> The issue manifests in the Java LoadTests on Dataflow runner v2 (a minimal reproduction sketch follows below).
> The initial suspicion is that duplicate source readers for the same restriction and checkpoint are created by multiple UnboundedSourceAsSDFWrapperFns.
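
A minimal reproduction sketch, assuming the pipeline shape used by the Java load tests (Read.from wrapping a SyntheticUnboundedSource); the class name, the options JSON values, and the metric names below are illustrative, not taken from the report:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.Read;
import org.apache.beam.sdk.io.synthetic.SyntheticOptions;
import org.apache.beam.sdk.io.synthetic.SyntheticSourceOptions;
import org.apache.beam.sdk.io.synthetic.SyntheticUnboundedSource;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;

public class SyntheticSourceDuplicateRepro {

  public static void main(String[] args) throws Exception {
    // Ask the synthetic source for exactly 1,000,000 records. No split count is
    // forced here, so the SDF wrapper in Read.java falls back to
    // DEFAULT_DESIRED_NUM_SPLITS (20) when splitting the wrapped source.
    SyntheticSourceOptions sourceOptions =
        SyntheticOptions.fromJsonString(
            "{\"numRecords\": 1000000, \"keySizeBytes\": 10, \"valueSizeBytes\": 90}",
            SyntheticSourceOptions.class);

    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply("Read", Read.from(new SyntheticUnboundedSource(sourceOptions)))
        .apply(
            "CountElements",
            ParDo.of(
                new DoFn<KV<byte[], byte[]>, Void>() {
                  // Counts every element that comes out of the Read step.
                  private final Counter elements = Metrics.counter("repro", "elements");

                  @ProcessElement
                  public void processElement(ProcessContext c) {
                    elements.inc();
                  }
                }));

    p.run();
  }
}

If the report's description holds, querying the "repro/elements" counter after the job finishes on Dataflow runner v2 should show a value larger than numRecords, with the gap widening as the number of initial splits grows.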



--
This message was sent by Atlassian Jira
(v8.20.7#820007)