Posted to issues@beam.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2019/11/01 21:44:00 UTC

[jira] [Work logged] (BEAM-7917) Python datastore v1new fails on retry

     [ https://issues.apache.org/jira/browse/BEAM-7917?focusedWorklogId=337569&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-337569 ]

ASF GitHub Bot logged work on BEAM-7917:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 01/Nov/19 21:43
            Start Date: 01/Nov/19 21:43
    Worklog Time Spent: 10m 
      Work Description: udim commented on pull request #9294: [BEAM-7917] Fix datastore writes failing on retry
URL: https://github.com/apache/beam/pull/9294#discussion_r341762012
 
 

 ##########
 File path: sdks/python/apache_beam/io/gcp/datastore/v1new/datastoreio.py
 ##########
 @@ -340,12 +340,13 @@ def finish_bundle(self):
     def _init_batch(self):
       self._batch_bytes_size = 0
       self._batch = self._client.batch()
-      self._batch.begin()
+      self._batch_mutations = []
 
     def _flush_batch(self):
       # Flush the current batch of mutations to Cloud Datastore.
       latency_ms = helper.write_mutations(
 
 Review comment:
   I think your choice is a valid compromise for handling the results of `element_to_client_batch_item`. I see two options here:
   1. Save the batch items.
   2. Discard the batch items after calling `ByteSize()` on them.
   
   The first option is more CPU-efficient, while the second is more memory-efficient. I don't know which option is faster, but in my limited experience, saving CPU at the expense of RAM is usually a good tradeoff.
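The first option above (keep the converted batch items in a plain list, and only build a fresh client `Batch` at commit time) can be sketched as follows. This is a hedged illustration, not the actual PR code: `FakeBatch` only mimics the state machine of `google.cloud.datastore.Batch`, and the names `RetrySafeWriter`, `write_entity`, and `_flush_batch`'s `fail_first_attempt` flag are hypothetical.

```python
class FakeBatch:
    """Minimal stand-in for google.cloud.datastore.Batch's state machine."""

    def __init__(self):
        self._in_progress = False
        self.mutations = []

    def begin(self):
        self._in_progress = True

    def put(self, entity):
        if not self._in_progress:
            raise ValueError("Batch must be in progress to put()")
        self.mutations.append(entity)

    def commit(self):
        if not self._in_progress:
            raise ValueError("Batch must be in progress to commit()")
        self._in_progress = False  # a Batch can only be committed once


class RetrySafeWriter:
    """Illustrative writer following option 1: buffer items, rebuild per attempt."""

    def __init__(self, client_batch_factory):
        self._batch_factory = client_batch_factory
        self._init_batch()

    def _init_batch(self):
        # As in the diff: no eager begin(); just accumulate mutations.
        self._batch_mutations = []

    def write_entity(self, entity):
        self._batch_mutations.append(entity)

    def _flush_batch(self, fail_first_attempt=False):
        attempts = 0
        while True:
            attempts += 1
            # Build a *fresh* Batch on every attempt, so a retry never
            # touches an already-finished (not "in progress") Batch.
            batch = self._batch_factory()
            batch.begin()
            for entity in self._batch_mutations:
                batch.put(entity)
            try:
                if fail_first_attempt and attempts == 1:
                    raise ConnectionError("simulated transient RPC failure")
                batch.commit()
                break
            except ConnectionError:
                continue  # retry with a rebuilt batch
        self._init_batch()
        return attempts


writer = RetrySafeWriter(FakeBatch)
writer.write_entity({"key": 1})
attempts = writer._flush_batch(fail_first_attempt=True)  # succeeds on retry
```

Because each commit attempt gets a brand-new `Batch`, the retry path never hits the "Batch must be in progress" check that the traceback in this issue shows.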
 


Issue Time Tracking
-------------------

    Worklog Id:     (was: 337569)
    Time Spent: 3.5h  (was: 3h 20m)

> Python datastore v1new fails on retry
> -------------------------------------
>
>                 Key: BEAM-7917
>                 URL: https://issues.apache.org/jira/browse/BEAM-7917
>             Project: Beam
>          Issue Type: Bug
>          Components: io-py-gcp, runner-dataflow
>    Affects Versions: 2.14.0
>         Environment: Python 3.7 on Dataflow
>            Reporter: Dmytro Sadovnychyi
>            Assignee: Dmytro Sadovnychyi
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Traceback (most recent call last):
>   File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.DoFnRunner.process
>   File "apache_beam/runners/common.py", line 454, in apache_beam.runners.common.SimpleInvoker.invoke_process
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/datastore/v1new/datastoreio.py", line 334, in process
>     self._flush_batch()
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/datastore/v1new/datastoreio.py", line 349, in _flush_batch
>     throttle_delay=util.WRITE_BATCH_TARGET_LATENCY_MS // 1000)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/utils/retry.py", line 197, in wrapper
>     return fun(*args, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/datastore/v1new/helper.py", line 99, in write_mutations
>     batch.commit()
>   File "/usr/local/lib/python3.7/site-packages/google/cloud/datastore/batch.py", line 271, in commit
>     raise ValueError("Batch must be in progress to commit()")
> ValueError: Batch must be in progress to commit()
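The state check behind this ValueError can be reproduced with a small stand-in for the client batch. A hedged sketch: `TinyBatch` only mimics the status handling in `google.cloud.datastore` (where, as I understand it, `commit()` marks the batch finished even when the underlying RPC fails), not the real RPC layer; the two-iteration loop is a crude stand-in for Beam's retry decorator.

```python
class TinyBatch:
    """Mimics the 'in progress' status check of a Datastore Batch."""

    _INITIAL, _IN_PROGRESS, _FINISHED = range(3)

    def __init__(self, fail_once=False):
        self._status = self._INITIAL
        self._fail_once = fail_once

    def begin(self):
        self._status = self._IN_PROGRESS

    def commit(self):
        if self._status != self._IN_PROGRESS:
            raise ValueError("Batch must be in progress to commit()")
        try:
            if self._fail_once:
                self._fail_once = False
                raise ConnectionError("simulated transient RPC failure")
        finally:
            # The batch leaves the in-progress state even when the RPC
            # fails, which is what breaks a naive retry of commit().
            self._status = self._FINISHED


batch = TinyBatch(fail_once=True)
batch.begin()
errors = []
for _ in range(2):  # crude stand-in for the retry wrapper
    try:
        batch.commit()
        break
    except Exception as exc:
        errors.append(type(exc).__name__)

# First attempt fails transiently; the retry then reuses the same
# now-finished batch and hits the ValueError from the traceback above.
```

This is why the fix in the diff stops calling `begin()` eagerly and instead buffers mutations, so that each retry can operate on a freshly constructed batch.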


