Posted to issues@beam.apache.org by "Kenneth Knowles (Jira)" <ji...@apache.org> on 2020/04/23 23:03:00 UTC

[jira] [Updated] (BEAM-9752) Too many shards in GCS

     [ https://issues.apache.org/jira/browse/BEAM-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-9752:
----------------------------------
    Status: Open  (was: Triage Needed)

> Too many shards in GCS
> ----------------------
>
>                 Key: BEAM-9752
>                 URL: https://issues.apache.org/jira/browse/BEAM-9752
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-py-core, sdk-py-harness
>            Reporter: Ankur Goenka
>            Priority: Major
>
> We have observed a case where data was spread very thinly over an automatically computed number of shards.
> As a result, each shard had to wait for its buffer to fill before sending any data to GCS, and the upload connection timed out because nothing was uploaded during the wait.
> Setting an explicit number of shards (1000 in my case) solved the problem, likely because every shard then had enough data to fill its write buffer before the timeout.
>  
> We can improve the sharding logic so that we don't create too many shards.
> Alternatively, we can improve connection handling so that the connection does not timeout.
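The failure mode described above can be sketched with some simple arithmetic: when a fixed aggregate write rate is split across too many shards, each shard's per-connection rate drops, and the time to fill one upload buffer can exceed the connection's idle timeout. The throughput, buffer size, and timeout values below are hypothetical, chosen only to illustrate the effect; they are not Beam or GCS defaults.

```python
# Illustrative model of the "too many shards" timeout, using made-up numbers.
TOTAL_THROUGHPUT_MB_S = 200.0  # hypothetical aggregate write rate across all shards
BUFFER_MB = 8.0                # hypothetical per-shard upload buffer size
TIMEOUT_S = 60.0               # hypothetical connection idle timeout


def seconds_to_fill_buffer(num_shards: int) -> float:
    """Time for one shard's buffer to fill when data is spread evenly."""
    per_shard_rate_mb_s = TOTAL_THROUGHPUT_MB_S / num_shards
    return BUFFER_MB / per_shard_rate_mb_s


# With an over-large auto-computed shard count, each buffer fills so
# slowly that the idle connection times out before a flush happens.
print(seconds_to_fill_buffer(10000))  # 400.0 s -- exceeds the 60 s timeout
# With an explicit, smaller shard count, the buffer fills in time.
print(seconds_to_fill_buffer(1000))   # 40.0 s -- within the 60 s timeout
```

This is consistent with the reporter's observation: fixing the shard count at 1000 concentrated enough data per shard to keep each connection active, which is why either smarter shard computation or more forgiving connection handling would address the issue.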



--
This message was sent by Atlassian Jira
(v8.3.4#803005)