Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2020/10/22 00:20:51 UTC

[GitHub] [beam] robertwb commented on a change in pull request #13144: [BEAM-10475] Add max buffering duration option for GroupIntoBatches transform in Python

robertwb commented on a change in pull request #13144:
URL: https://github.com/apache/beam/pull/13144#discussion_r509806698



##########
File path: sdks/python/apache_beam/transforms/util.py
##########
@@ -777,33 +795,56 @@ def process(
         window=DoFn.WindowParam,
         element_state=DoFn.StateParam(ELEMENT_STATE),
         count_state=DoFn.StateParam(COUNT_STATE),
-        expiry_timer=DoFn.TimerParam(EXPIRY_TIMER)):
+        window_timer=DoFn.TimerParam(WINDOW_TIMER),
+        buffering_timer=DoFn.TimerParam(BUFFERING_TIMER)):
       # Allowed lateness not supported in Python SDK
       # https://beam.apache.org/documentation/programming-guide/#watermarks-and-late-data
-      expiry_timer.set(window.end)
+      window_timer.set(window.end)
       element_state.add(element)
       count_state.add(1)
       count = count_state.read()
+      if count == 1 and max_buffering_duration_secs is not None:
+        # This is the first element in batch. Start counting buffering time if a
+        # limit was set.
+        buffering_timer.set(clock() + max_buffering_duration_secs)
       if count >= batch_size:
         batch = [element for element in element_state.read()]
         key, _ = batch[0]
         batch_values = [v for (k, v) in batch]
-        yield (key, batch_values)
+        yield key, batch_values
         element_state.clear()
         count_state.clear()
+        buffering_timer.clear()
 
-    @on_timer(EXPIRY_TIMER)
-    def expiry(
+    @on_timer(WINDOW_TIMER)
+    def on_window_timer(
         self,
         element_state=DoFn.StateParam(ELEMENT_STATE),
-        count_state=DoFn.StateParam(COUNT_STATE)):
+        count_state=DoFn.StateParam(COUNT_STATE),
+        buffering_timer=DoFn.TimerParam(BUFFERING_TIMER)):
+      batch = [element for element in element_state.read()]
+      if batch:
+        key, _ = batch[0]

Review comment:
       Pull this (and the exact same logic above and below) into a common helper method.
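   One shape such a helper could take, sketched here outside the Beam state machinery so it is self-contained (`flush_batch` is a hypothetical name; in the PR it would read from `element_state` and the caller would clear the state and timers after yielding):

```python
from typing import Iterable, List, Optional, Tuple, TypeVar

K = TypeVar('K')
V = TypeVar('V')


def flush_batch(elements: Iterable[Tuple[K, V]]) -> Optional[Tuple[K, List[V]]]:
  """Collapse buffered (key, value) pairs into one (key, [values]) pair.

  Returns None for an empty buffer, mirroring the `if batch:` guard in the
  timer callbacks above.
  """
  batch = list(elements)
  if not batch:
    return None
  key, _ = batch[0]
  return key, [v for _, v in batch]
```

   With a helper like this, `process`, `on_window_timer`, and the buffering-timer callback would each reduce to reading state, calling the helper, and clearing state when a batch is emitted.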

##########
File path: sdks/python/apache_beam/transforms/util.py
##########
@@ -1009,7 +1050,7 @@ def _process(element):
 
   @staticmethod
   @typehints.with_input_types(str)
-  @typehints.with_output_types(Union[List[str], List[Tuple[str, str]]])
+  @typehints.with_output_types(Union[List[str], Tuple[str, str]])

Review comment:
       Was this wrong?
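   For context on the question, the two annotations accept different shapes. A minimal illustration (plain Python, no Beam; `matches_new_annotation` is a hypothetical runtime approximation of the new hint, since `typing` annotations are not checked at runtime):

```python
# Old: Union[List[str], List[Tuple[str, str]]]
#   ["a", "b"]    -> List[str]             (accepted)
#   [("a", "b")]  -> List[Tuple[str, str]] (accepted)
#   ("a", "b")    -> bare tuple            (rejected)
#
# New: Union[List[str], Tuple[str, str]]
#   ["a", "b"]    -> List[str]             (accepted)
#   ("a", "b")    -> Tuple[str, str]       (accepted)
#   [("a", "b")]  -> list of tuples        (rejected)


def matches_new_annotation(value) -> bool:
  """Runtime check approximating Union[List[str], Tuple[str, str]]."""
  if isinstance(value, list):
    return all(isinstance(x, str) for x in value)
  if isinstance(value, tuple):
    return len(value) == 2 and all(isinstance(x, str) for x in value)
  return False
```

   Which annotation is correct depends on what `_process` actually returns, which is what the reviewer is asking.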

##########
File path: sdks/python/apache_beam/transforms/util_test.py
##########
@@ -644,6 +657,12 @@ def _create_test_data():
       data.append(("key", scientists[index]))
     return data
 
+  class _ExpandIterable(DoFn):

Review comment:
       _ExpandValuesIterable? 
   
   This could also be `FlatMapTuple(lambda k, vs: vs)`
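   The suggested `FlatMapTuple(lambda k, vs: vs)` unpacks each (key, values) element into the lambda's arguments and emits the returned values individually, which is exactly what the `_ExpandIterable` DoFn does. A self-contained sketch of that behavior without Beam (`flat_map_tuple` is a hypothetical in-memory stand-in, for illustration only):

```python
from typing import Callable, Iterable, List, Tuple


def flat_map_tuple(fn: Callable[..., Iterable], pairs: Iterable[Tuple]) -> List:
  """Approximate Beam's FlatMapTuple on an in-memory list: unpack each
  tuple into fn's arguments and flatten the iterables fn returns."""
  out = []
  for pair in pairs:
    out.extend(fn(*pair))
  return out


# Expanding grouped values back into individual elements:
expanded = flat_map_tuple(lambda k, vs: vs, [("key", [1, 2]), ("key", [3])])
# -> [1, 2, 3]
```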

##########
File path: sdks/python/apache_beam/transforms/util_test.py
##########
@@ -110,10 +117,16 @@ def test_windowed_batches(self):
           | util.BatchElements(
               min_batch_size=5, max_batch_size=10, clock=FakeClock())
           | beam.Map(len))
-      assert_that(res, equal_to([
-          5, 5, 10, 10,  # elements in [0, 30)
-          10, 7,         # elements in [30, 47)
-      ]))
+      assert_that(
+          res,
+          equal_to([
+              5,
+              5,
+              10,
+              10,  # elements in [0, 30)

Review comment:
       Nit: I prefer the previous formatting (as the comment applies to the whole set of elements). 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org