Posted to issues@beam.apache.org by "Daniel Thevessen (Jira)" <ji...@apache.org> on 2022/01/05 00:01:00 UTC

[jira] [Created] (BEAM-13599) Overflow in Python Datastore RampupThrottlingFn

Daniel Thevessen created BEAM-13599:
---------------------------------------

             Summary: Overflow in Python Datastore RampupThrottlingFn
                 Key: BEAM-13599
                 URL: https://issues.apache.org/jira/browse/BEAM-13599
             Project: Beam
          Issue Type: Bug
          Components: io-py-gcp
    Affects Versions: 2.35.0, 2.34.0, 2.33.0, 2.32.0
            Reporter: Daniel Thevessen


```

File "/usr/local/lib/python3.8/site-packages/apache_beam/io/gcp/datastore/v1new/rampup_throttling_fn.py", line 74, in _calc_max_ops_budget

max_ops_budget = int(self._BASE_BUDGET / self._num_workers * (1.5**growth))

RuntimeError: OverflowError: (34, 'Numerical result out of range') [while running 'Write to Datastore/Enforce throttling during ramp-up-ptransform-483']

```


The intermediate value `1.5**growth` is a float whose exponent grows with the time elapsed since pipeline start, so it can overflow the float range in long-running pipelines (usually around the 6th day).
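
A minimal sketch of the failure mode (the base budget and the 5-minute ramp-up interval below are placeholders, not the SDK's actual constants):

```

# Illustrative reproduction: a float exponentiation with a time-based
# exponent eventually exceeds the double range and raises OverflowError.
BASE_BUDGET = 500   # hypothetical base budget
num_workers = 1

for minutes_elapsed in (60, 1_440, 10_080):  # 1 hour, 1 day, 7 days
    growth = minutes_elapsed / 5             # hypothetical ramp-up interval
    try:
        max_ops_budget = int(BASE_BUDGET / num_workers * (1.5 ** growth))
        print(minutes_elapsed, max_ops_budget)
    except OverflowError as e:
        print(minutes_elapsed, "OverflowError:", e)

```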

`max_ops_budget` should either be clipped to `float('inf')` or INT_MAX, or the throttling decision should be short-circuited [here](https://github.com/apache/beam/blob/ea65a054f2fcb6349478d19609a773f66bbfa20e/sdks/python/apache_beam/io/gcp/datastore/v1new/rampup_throttling_fn.py#L87), since the budget is long past relevant by that point.
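
One possible shape of the clipping fix, sketched as a standalone function (names and constants are placeholders, not the actual SDK code):

```

import sys

# Hypothetical sketch: clip the budget instead of letting the overflow
# fail the bundle. base_budget, num_workers, and growth stand in for the
# values computed inside RampupThrottlingFn.
def calc_max_ops_budget(base_budget: int, num_workers: int, growth: float) -> int:
    try:
        return int(base_budget / num_workers * (1.5 ** growth))
    except OverflowError:
        # By the time growth is this large, ramp-up is long over, so any
        # effectively unlimited budget will do.
        return sys.maxsize

print(calc_max_ops_budget(500, 1, 12))    # normal ramp-up
print(calc_max_ops_budget(500, 1, 5000))  # would overflow; clipped to sys.maxsize

```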



--
This message was sent by Atlassian Jira
(v8.20.1#820001)