Posted to issues@beam.apache.org by "Heejong Lee (Jira)" <ji...@apache.org> on 2022/01/07 23:34:00 UTC
[jira] [Assigned] (BEAM-13599) Overflow in Python Datastore RampupThrottlingFn
[ https://issues.apache.org/jira/browse/BEAM-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Heejong Lee reassigned BEAM-13599:
----------------------------------
Assignee: Daniel Thevessen
> Overflow in Python Datastore RampupThrottlingFn
> -----------------------------------------------
>
> Key: BEAM-13599
> URL: https://issues.apache.org/jira/browse/BEAM-13599
> Project: Beam
> Issue Type: Bug
> Components: io-py-gcp
> Affects Versions: 2.32.0, 2.33.0, 2.34.0, 2.35.0
> Reporter: Daniel Thevessen
> Assignee: Daniel Thevessen
> Priority: P2
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> {code:java}
> File "/usr/local/lib/python3.8/site-packages/apache_beam/io/gcp/datastore/v1new/rampup_throttling_fn.py", line 74, in _calc_max_ops_budget
> max_ops_budget = int(self._BASE_BUDGET / self._num_workers * (1.5**growth))
> RuntimeError: OverflowError: (34, 'Numerical result out of range') [while running 'Write to Datastore/Enforce throttling during ramp-up-ptransform-483']
> {code}
> An intermediate value (the `1.5**growth` term) is a float whose exponent depends on the pipeline start time, so long-running pipelines eventually overflow it (usually around the ~6th day).
> `max_ops_budget` should either clip to float('inf') or INT_MAX, or short-circuit the throttling decision [here|#L87], since throttling will long since be irrelevant by that point.
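The failure mode and the proposed clipping fix can be sketched as follows. This is an illustrative standalone snippet, not the actual Beam code: `BASE_BUDGET`, the function name, and the sentinel value are assumptions; in Python, `float ** float` raises `OverflowError` once the result exceeds the double-precision range, which is what the traceback above shows.

```python
import math

# Hypothetical constant mirroring RampupThrottlingFn._BASE_BUDGET.
BASE_BUDGET = 500


def calc_max_ops_budget_clipped(num_workers: int, growth: float) -> int:
    """Overflow-safe variant of the budget formula.

    The unclipped expression int(BASE_BUDGET / num_workers * (1.5 ** growth))
    raises OverflowError once growth is large enough (e.g. growth=9000),
    because 1.5 ** growth exceeds the maximum double. Here we clip to a
    large sentinel instead, since by that point throttling is irrelevant.
    """
    max_budget = 2**63 - 1  # illustrative "effectively unlimited" sentinel
    try:
        scale = 1.5 ** growth
    except OverflowError:
        return max_budget
    budget = BASE_BUDGET / num_workers * scale
    if math.isinf(budget) or budget > max_budget:
        return max_budget
    return int(budget)
```

A growth value in the low thousands is already enough to trip the unclipped formula, while the clipped version degrades gracefully to the sentinel.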
--
This message was sent by Atlassian Jira
(v8.20.1#820001)