Posted to commits@beam.apache.org by "Joshua Fox (JIRA)" <ji...@apache.org> on 2016/12/01 11:23:58 UTC

[jira] [Comment Edited] (BEAM-991) DatastoreIO Write should flush early for large batches

    [ https://issues.apache.org/jira/browse/BEAM-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15711705#comment-15711705 ] 

Joshua Fox edited comment on BEAM-991 at 12/1/16 11:23 AM:
-----------------------------------------------------------

[~dhalperi@google.com], unfortunately I cannot participate much in the development and full testing, both for the usual reason of time commitments and because I am not close enough to the infrastructure to fully understand what is happening, particularly in edge cases. However, I will be glad to serve as an early user.

My situation: I wrote a Datastore backup in Dataflow after Google's Backup and Managed Backup tools failed with various bugs.

However, this new tool cannot copy any Kind with Item size >20 KB (the max in Datastore is 1 MB). So, I can only use it to back up Kinds with an average Item size of about 10 KB (since there may be variation). I wrote a second, simple multithreaded but nondistributed backup tool using "select-insert" loops. It works, but is of course less scalable and more expensive than the Dataflow tool. For now I use a combination of the Dataflow tool and the nondistributed tool for different Kinds, but I could easily switch to using just Dataflow.
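
For readers unfamiliar with the pattern, a "select-insert" loop simply queries a Kind and re-puts the entities elsewhere in small batches. A minimal sketch, assuming the com.google.cloud.datastore client; the project IDs and Kind name are placeholders, and rebuilding each entity's Key for the target project is omitted for brevity:

    import com.google.cloud.datastore.Datastore;
    import com.google.cloud.datastore.DatastoreOptions;
    import com.google.cloud.datastore.Entity;
    import com.google.cloud.datastore.Query;
    import com.google.cloud.datastore.QueryResults;
    import java.util.ArrayList;
    import java.util.List;

    public class CopyKind {
      public static void main(String[] args) {
        // "source-project", "target-project", and "MyKind" are placeholders.
        Datastore source = DatastoreOptions.newBuilder()
            .setProjectId("source-project").build().getService();
        Datastore target = DatastoreOptions.newBuilder()
            .setProjectId("target-project").build().getService();

        QueryResults<Entity> results =
            source.run(Query.newEntityQueryBuilder().setKind("MyKind").build());

        List<Entity> buffer = new ArrayList<>();
        while (results.hasNext()) {
          // NOTE: a real copy must rebuild each entity's Key to point at the
          // target project; that step is elided here.
          buffer.add(results.next());
          if (buffer.size() == 100) { // small batches stay well under request limits
            target.put(buffer.toArray(new Entity[0]));
            buffer.clear();
          }
        }
        if (!buffer.isEmpty()) {
          target.put(buffer.toArray(new Entity[0]));
        }
      }
    }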


> DatastoreIO Write should flush early for large batches
> ------------------------------------------------------
>
>                 Key: BEAM-991
>                 URL: https://issues.apache.org/jira/browse/BEAM-991
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-java-gcp
>            Reporter: Vikas Kedigehalli
>            Assignee: Vikas Kedigehalli
>
> If entities are large (average size > 20 KB), then a single batched write (500 entities) would exceed Datastore's 10 MB limit on a single request (500 x 20 KB = 10 MB); see https://cloud.google.com/datastore/docs/concepts/limits.
> First reported in: http://stackoverflow.com/questions/40156400/why-does-dataflow-erratically-fail-in-datastore-access
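
The fix under discussion amounts to size-aware batching: track the serialized size of the buffered entities and commit before either the 500-entity count or the 10 MB request size is reached. A minimal sketch, not Beam's actual implementation, assuming the protobuf com.google.datastore.v1.Entity; the flush() body is a hypothetical stand-in for issuing the Commit RPC:

    import com.google.datastore.v1.Entity;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch of size-aware batching for a Datastore writer.
    class SizeAwareBatcher {
      private static final int MAX_BATCH_ENTITIES = 500;      // Datastore per-commit entity limit
      private static final long MAX_BATCH_BYTES = 9_000_000L; // margin under the 10 MB request cap

      private final List<Entity> batch = new ArrayList<>();
      private long batchBytes = 0;

      void add(Entity entity) {
        long entityBytes = entity.getSerializedSize(); // protobuf wire-size estimate
        // Flush early if adding this entity would breach either limit.
        if (!batch.isEmpty()
            && (batch.size() >= MAX_BATCH_ENTITIES
                || batchBytes + entityBytes > MAX_BATCH_BYTES)) {
          flush();
        }
        batch.add(entity);
        batchBytes += entityBytes;
      }

      void flush() {
        // Hypothetical: issue the Datastore Commit RPC for `batch` here.
        batch.clear();
        batchBytes = 0;
      }
    }

Keeping a byte budget below the hard 10 MB cap leaves headroom for per-request overhead beyond the entities themselves.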


