Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2020/09/03 16:54:50 UTC

[GitHub] [beam] ajamato commented on a change in pull request #12754: [BEAM-10791] Identify and log additional information needed to debug …

ajamato commented on a change in pull request #12754:
URL: https://github.com/apache/beam/pull/12754#discussion_r483100431



##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -1198,8 +1209,34 @@ def process(self, element, *schema_side_inputs):
       return self._flush_all_batches()
 
   def finish_bundle(self):
+    current_millis = int(time.time() * 1000)

Review comment:
      Are we sure finish_bundle is called frequently enough? I.e., are there bundles that process for a long time? And are we sure this code will get invoked when there are failures to write to BQ?

##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -1198,8 +1209,34 @@ def process(self, element, *schema_side_inputs):
       return self._flush_all_batches()
 
   def finish_bundle(self):
+    current_millis = int(time.time() * 1000)
+    if BigQueryWriteFn.LATENCY_LOGGING_LOCK.acquire(False):
+      try:
+        if (BigQueryWriteFn.LATENCY_LOGGING_HISTOGRAM.total_count() > 0 and
+            (current_millis -
+             BigQueryWriteFn.LATENCY_LOGGING_LAST_REPORTED_MILLIS) >
+            self._latency_logging_frequency * 1000):
+          self._log_percentiles()
+          BigQueryWriteFn.LATENCY_LOGGING_HISTOGRAM.clear()
+          BigQueryWriteFn.LATENCY_LOGGING_LAST_REPORTED_MILLIS = current_millis
+      finally:
+        BigQueryWriteFn.LATENCY_LOGGING_LOCK.release()
     return self._flush_all_batches()
 
+  @classmethod
+  def _log_percentiles(cls):
+    # Note that the total count and each percentile value may not be correlated

Review comment:
       " Note that the total count and each percentile value may not be correlated each other"
   
   I don't fully grasp the meaning. Would you please rephrase? :)

##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -1198,8 +1209,34 @@ def process(self, element, *schema_side_inputs):
       return self._flush_all_batches()
 
   def finish_bundle(self):
+    current_millis = int(time.time() * 1000)
+    if BigQueryWriteFn.LATENCY_LOGGING_LOCK.acquire(False):

Review comment:
      acquire with the False parameter is non-blocking, so it can skip this section entirely if another thread holds the lock. If that happens we won't record current_millis into the histogram.
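For readers following along, the non-blocking semantics of acquire(False) can be seen in a minimal standalone sketch (plain threading, independent of Beam):

```python
import threading

lock = threading.Lock()

# Non-blocking acquire: returns immediately with True/False instead of
# waiting for the lock to become free.
got_it = lock.acquire(False)
assert got_it  # the lock was free, so we hold it now

# While the lock is held, another non-blocking acquire fails -- which is
# exactly what lets a thread skip the whole logging section.
assert not lock.acquire(False)

lock.release()
assert lock.acquire(False)  # succeeds again once released
lock.release()
```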

##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -1096,6 +1106,7 @@ def __init__(
     self.failed_rows_metric = Metrics.distribution(
         self.__class__, "rows_failed_per_batch")
     self.bigquery_wrapper = None
+    self._latency_logging_frequency = latency_logging_frequency or 180

Review comment:
      Add units to the variable name, e.g.:
   
   _latency_logging_frequency_msec

##########
File path: sdks/python/apache_beam/options/pipeline_options.py
##########
@@ -675,6 +675,27 @@ def validate(self, validator):
     return errors
 
 
+class BigQueryOptions(PipelineOptions):
+  """BigQueryIO configuration options."""
+  @classmethod
+  def _add_argparse_args(cls, parser):
+    parser.add_argument(
+        '--latency_logging_frequency',

Review comment:
      Add units to the flag name: _secs, _msecs, etc.
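For illustration, a flag renamed with units could look like the following standalone argparse sketch (the name and default here are hypothetical, not the merged Beam option):

```python
import argparse

# Sketch of the suggested rename: units spelled out in the flag name so
# callers never have to guess whether the value is seconds or msecs.
parser = argparse.ArgumentParser()
parser.add_argument(
    '--latency_logging_frequency_secs',
    type=int,
    default=180,
    help='How often (in seconds) to log BigQuery write latency '
    'percentiles.')

args = parser.parse_args(['--latency_logging_frequency_secs', '60'])
```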

##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -1198,8 +1209,34 @@ def process(self, element, *schema_side_inputs):
       return self._flush_all_batches()
 
   def finish_bundle(self):
+    current_millis = int(time.time() * 1000)

Review comment:
      I noticed that you have three calls to time.time(). This is an expensive system call. Is there any way to eliminate this additional usage here? E.g., you could just update current_millis in the other places where you call time.time().
   
   Though since this is in finish_bundle, maybe this is fine. That said, see my other comment; I'm not sure it's wise to do this in finish_bundle. We still need to be careful that we don't starve out the logging. The solution to that may be to log both in finish_bundle (always) and after each request (if we have exceeded the logging interval).
   @chamikaramj WDYT?
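A rough standalone sketch of that dual trigger (all names here are illustrative, not Beam's actual code):

```python
import logging
import threading
import time

_LOG = logging.getLogger(__name__)


class LatencyLogger(object):
  """Sketch of the dual trigger suggested above: flush from
  finish_bundle unconditionally, and after each request only once the
  logging interval has elapsed."""

  def __init__(self, frequency_secs):
    self._lock = threading.Lock()
    self._frequency_msecs = frequency_secs * 1000
    self._last_reported_msecs = int(time.time() * 1000)
    self._latencies = []

  def record(self, latency_msecs):
    with self._lock:
      self._latencies.append(latency_msecs)
    # After each request: flush only if the interval has elapsed, so a
    # long-running bundle cannot starve the logging.
    self._maybe_flush(force=False)

  def on_finish_bundle(self):
    # At bundle boundaries: always flush whatever has accumulated.
    self._maybe_flush(force=True)

  def _maybe_flush(self, force):
    now_msecs = int(time.time() * 1000)
    if not self._lock.acquire(False):
      return  # another thread is flushing; skip rather than block
    try:
      interval_elapsed = (
          now_msecs - self._last_reported_msecs > self._frequency_msecs)
      if self._latencies and (force or interval_elapsed):
        _LOG.info('flushing %d latency records', len(self._latencies))
        self._latencies = []
        self._last_reported_msecs = now_msecs
    finally:
      self._lock.release()
```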

##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -1198,8 +1209,34 @@ def process(self, element, *schema_side_inputs):
       return self._flush_all_batches()
 
   def finish_bundle(self):
+    current_millis = int(time.time() * 1000)
+    if BigQueryWriteFn.LATENCY_LOGGING_LOCK.acquire(False):
+      try:
+        if (BigQueryWriteFn.LATENCY_LOGGING_HISTOGRAM.total_count() > 0 and
+            (current_millis -
+             BigQueryWriteFn.LATENCY_LOGGING_LAST_REPORTED_MILLIS) >
+            self._latency_logging_frequency * 1000):
+          self._log_percentiles()
+          BigQueryWriteFn.LATENCY_LOGGING_HISTOGRAM.clear()
+          BigQueryWriteFn.LATENCY_LOGGING_LAST_REPORTED_MILLIS = current_millis
+      finally:
+        BigQueryWriteFn.LATENCY_LOGGING_LOCK.release()
     return self._flush_all_batches()
 
+  @classmethod
+  def _log_percentiles(cls):
+    # Note that the total count and each percentile value may not be correlated
+    # each other. Histogram releases lock between each percentile calculation
+    # so additional latencies could be added anytime.
+    # pylint: disable=round-builtin
+    _LOGGER.info(

Review comment:
      It seems that since you don't use the same lock here as you do to record into the histogram, there can be threads writing to it while you are printing, changing the values in the histogram as you are logging them.
   
   You could fix this by adding one call on LATENCY_LOGGING_HISTOGRAM that acquires the histogram's internal lock and returns all the values you're logging.
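A standalone sketch of that fix, using a single locked snapshot call (the percentile math is simplified and the names are illustrative, not Beam's Histogram):

```python
import threading


class SnapshotHistogram(object):
  """Sketch: all values to be logged are read under the histogram's own
  lock in one call, so concurrent record() calls cannot change the data
  mid-log -- the count and the percentiles describe the same data."""

  def __init__(self):
    self._lock = threading.Lock()
    self._records = []

  def record(self, value):
    with self._lock:
      self._records.append(value)

  def snapshot(self):
    # One locked read returns every value the caller wants to log.
    with self._lock:
      if not self._records:
        return {'count': 0, 'p50': None, 'p90': None, 'p99': None}
      ordered = sorted(self._records)

      def pct(p):
        return ordered[min(len(ordered) - 1, int(p * len(ordered)))]

      return {
          'count': len(ordered),
          'p50': pct(0.50),
          'p90': pct(0.90),
          'p99': pct(0.99),
      }
```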

##########
File path: sdks/python/apache_beam/utils/histogram.py
##########
@@ -0,0 +1,176 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from __future__ import absolute_import
+from __future__ import division
+
+import logging
+import math
+import threading
+
+_LOGGER = logging.getLogger(__name__)
+
+
+class Histogram(object):
+  """A histogram that supports estimated percentile with linear interpolation.
+
+  This class is considered experimental and may break or receive backwards-
+  incompatible changes in future versions of the Apache Beam SDK.
+  """
+  def __init__(self, bucket_type):
+    self._lock = threading.Lock()
+    self._bucket_type = bucket_type
+    self._buckets = {}
+    self._num_records = 0
+    self._num_top_records = 0
+    self._num_bot_records = 0
+
+  def clear(self):
+    with self._lock:
+      self._buckets = {}
+      self._num_records = 0
+      self._num_top_records = 0
+      self._num_bot_records = 0
+
+  def record(self, *args):
+    for arg in args:
+      self._record(arg)
+
+  def _record(self, value):
+    range_from = self._bucket_type.range_from()
+    range_to = self._bucket_type.range_to()
+    with self._lock:
+      if value >= range_to:
+        _LOGGER.warning('record is out of upper bound %s: %s', range_to, value)
+        self._num_top_records += 1
+      elif value < range_from:
+        _LOGGER.warning(
+            'record is out of lower bound %s: %s', range_from, value)
+        self._num_bot_records += 1
+      else:
+        index = self._bucket_type.bucket_index(value)
+        self._buckets[index] = self._buckets.get(index, 0) + 1
+        self._num_records += 1
+
+  def total_count(self):
+    return self._num_records + self._num_top_records + self._num_bot_records
+
+  def p99(self):
+    return self.get_linear_interpolation(0.99)
+
+  def p90(self):
+    return self.get_linear_interpolation(0.90)
+
+  def p50(self):
+    return self.get_linear_interpolation(0.50)
+
+  def get_linear_interpolation(self, percentile):
+    """Calculate percentile estimation based on linear interpolation.
+
+    It first finds the bucket which includes the target percentile and
+    projects the estimated point in the bucket by assuming all the elements
+    in the bucket are uniformly distributed.
+    """
+    with self._lock:
+      total_num_records = self.total_count()
+      if total_num_records == 0:
+        raise RuntimeError('histogram has no record.')
+
+      index = 0
+      record_sum = self._num_bot_records
+      if record_sum / total_num_records >= percentile:
+        return float('-inf')
+      while index < self._bucket_type.num_buckets():
+        record_sum += self._buckets.get(index, 0)
+        if record_sum / total_num_records >= percentile:
+          break
+        index += 1
+      if index == self._bucket_type.num_buckets():
+        return float('inf')
+
+      frac_percentile = percentile - (
+          record_sum - self._buckets[index]) / total_num_records
+      bucket_percentile = self._buckets[index] / total_num_records
+    frac_bucket_size = frac_percentile * self._bucket_type.bucket_size(
+        index) / bucket_percentile
+    return self._bucket_type.range_from(
+    ) + self._bucket_type.accumulated_bucket_size(index) + frac_bucket_size

Review comment:
      Indent this line by 4 spaces.

##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -1198,8 +1209,34 @@ def process(self, element, *schema_side_inputs):
       return self._flush_all_batches()
 
   def finish_bundle(self):
+    current_millis = int(time.time() * 1000)
+    if BigQueryWriteFn.LATENCY_LOGGING_LOCK.acquire(False):

Review comment:
      Help me understand a bit here. Am I correct about this:
   - There are multiple threads.
   - But due to the Global Interpreter Lock, only one thread runs at a time.
   - But threads can possibly block in these sections, get preempted, and another thread will start running.
   - So we need the locking to ensure only one thread is running in this block at a time.
   
   In addition:
   - Multiple Python processes also execute, and there are multiple Python SDK harnesses running in order to get more parallelism (they don't have the same type of threading concurrency issues, but may communicate with the multiprocessing module).
   - Though this is not why the lock is introduced.
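A standalone demo of the threading point: even under the GIL, a read-modify-write like `counter += 1` spans several bytecodes and threads can be preempted between them, so the lock is what makes the result deterministic:

```python
import threading

counter = 0
counter_lock = threading.Lock()


def bump(n):
  global counter
  for _ in range(n):
    # Even under the GIL, `counter += 1` compiles to a separate load and
    # store, so threads can interleave between them. Holding the lock
    # makes the whole read-modify-write atomic.
    with counter_lock:
      counter += 1


threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
  t.start()
for t in threads:
  t.join()

assert counter == 40000  # deterministic only because of the lock
```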
   

##########
File path: sdks/python/apache_beam/utils/histogram.py
##########
@@ -0,0 +1,176 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from __future__ import absolute_import
+from __future__ import division
+
+import logging
+import math
+import threading
+
+_LOGGER = logging.getLogger(__name__)
+
+
+class Histogram(object):
+  """A histogram that supports estimated percentile with linear interpolation.
+
+  This class is considered experimental and may break or receive backwards-
+  incompatible changes in future versions of the Apache Beam SDK.
+  """
+  def __init__(self, bucket_type):
+    self._lock = threading.Lock()
+    self._bucket_type = bucket_type
+    self._buckets = {}
+    self._num_records = 0
+    self._num_top_records = 0
+    self._num_bot_records = 0
+
+  def clear(self):
+    with self._lock:
+      self._buckets = {}
+      self._num_records = 0
+      self._num_top_records = 0
+      self._num_bot_records = 0
+
+  def record(self, *args):
+    for arg in args:
+      self._record(arg)
+
+  def _record(self, value):
+    range_from = self._bucket_type.range_from()
+    range_to = self._bucket_type.range_to()
+    with self._lock:
+      if value >= range_to:
+        _LOGGER.warning('record is out of upper bound %s: %s', range_to, value)
+        self._num_top_records += 1
+      elif value < range_from:
+        _LOGGER.warning(
+            'record is out of lower bound %s: %s', range_from, value)
+        self._num_bot_records += 1
+      else:

Review comment:
      This shouldn't be in an else block; please always record the latency. Place out-of-range values in buckets representing the -INF and +INF boundaries, as we discussed in the Java PR.
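A standalone sketch of that scheme, with dedicated underflow/overflow buckets (the fixed-width bucketing here is illustrative, not Beam's actual BucketType):

```python
class OverflowBuckets(object):
  """Sketch: out-of-range values still land in dedicated buckets keyed
  by -inf/+inf instead of being diverted by an else branch, so every
  latency is recorded somewhere."""

  def __init__(self, range_from, range_to, num_buckets):
    self._range_from = range_from
    self._range_to = range_to
    self._width = (range_to - range_from) / num_buckets
    self._buckets = {}

  def record(self, value):
    if value < self._range_from:
      index = float('-inf')  # underflow bucket
    elif value >= self._range_to:
      index = float('inf')   # overflow bucket
    else:
      index = int((value - self._range_from) // self._width)
    self._buckets[index] = self._buckets.get(index, 0) + 1

  def total_count(self):
    # Over/underflow records count toward the total like any other.
    return sum(self._buckets.values())
```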

##########
File path: sdks/python/apache_beam/utils/histogram.py
##########
@@ -0,0 +1,176 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from __future__ import absolute_import
+from __future__ import division
+
+import logging
+import math
+import threading
+
+_LOGGER = logging.getLogger(__name__)
+
+
+class Histogram(object):
+  """A histogram that supports estimated percentile with linear interpolation.
+
+  This class is considered experimental and may break or receive backwards-
+  incompatible changes in future versions of the Apache Beam SDK.
+  """
+  def __init__(self, bucket_type):
+    self._lock = threading.Lock()
+    self._bucket_type = bucket_type
+    self._buckets = {}
+    self._num_records = 0
+    self._num_top_records = 0
+    self._num_bot_records = 0
+
+  def clear(self):
+    with self._lock:
+      self._buckets = {}
+      self._num_records = 0
+      self._num_top_records = 0
+      self._num_bot_records = 0
+
+  def record(self, *args):
+    for arg in args:
+      self._record(arg)
+
+  def _record(self, value):
+    range_from = self._bucket_type.range_from()
+    range_to = self._bucket_type.range_to()
+    with self._lock:
+      if value >= range_to:
+        _LOGGER.warning('record is out of upper bound %s: %s', range_to, value)
+        self._num_top_records += 1
+      elif value < range_from:
+        _LOGGER.warning(
+            'record is out of lower bound %s: %s', range_from, value)
+        self._num_bot_records += 1
+      else:
+        index = self._bucket_type.bucket_index(value)
+        self._buckets[index] = self._buckets.get(index, 0) + 1
+        self._num_records += 1
+
+  def total_count(self):
+    return self._num_records + self._num_top_records + self._num_bot_records
+
+  def p99(self):
+    return self.get_linear_interpolation(0.99)
+
+  def p90(self):
+    return self.get_linear_interpolation(0.90)
+
+  def p50(self):
+    return self.get_linear_interpolation(0.50)
+
+  def get_linear_interpolation(self, percentile):
+    """Calculate percentile estimation based on linear interpolation.
+
+    It first finds the bucket which includes the target percentile and
+    projects the estimated point in the bucket by assuming all the elements
+    in the bucket are uniformly distributed.
+    """
+    with self._lock:
+      total_num_records = self.total_count()
+      if total_num_records == 0:
+        raise RuntimeError('histogram has no record.')

Review comment:
      This doesn't seem like an error. Instead of raising a RuntimeError, consider returning an empty/unpopulated result.
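A minimal stand-in showing the suggested behavior, returning None for an empty histogram instead of raising (the names and percentile math are illustrative, not the quoted class):

```python
class EmptyableHistogram(object):
  """Sketch: an empty histogram yields an unpopulated result (None)
  rather than a RuntimeError, leaving the caller to decide how to
  report it."""

  def __init__(self):
    self._records = []

  def record(self, value):
    self._records.append(value)

  def total_count(self):
    return len(self._records)

  def p50(self):
    if self.total_count() == 0:
      return None  # unpopulated result instead of an exception
    ordered = sorted(self._records)
    return ordered[len(ordered) // 2]
```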

##########
File path: sdks/python/apache_beam/utils/histogram.py
##########
@@ -0,0 +1,176 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from __future__ import absolute_import
+from __future__ import division
+
+import logging
+import math
+import threading
+
+_LOGGER = logging.getLogger(__name__)
+
+
+class Histogram(object):
+  """A histogram that supports estimated percentile with linear interpolation.
+
+  This class is considered experimental and may break or receive backwards-
+  incompatible changes in future versions of the Apache Beam SDK.
+  """
+  def __init__(self, bucket_type):
+    self._lock = threading.Lock()
+    self._bucket_type = bucket_type
+    self._buckets = {}
+    self._num_records = 0
+    self._num_top_records = 0
+    self._num_bot_records = 0
+
+  def clear(self):
+    with self._lock:
+      self._buckets = {}
+      self._num_records = 0
+      self._num_top_records = 0
+      self._num_bot_records = 0
+
+  def record(self, *args):
+    for arg in args:
+      self._record(arg)
+
+  def _record(self, value):
+    range_from = self._bucket_type.range_from()
+    range_to = self._bucket_type.range_to()
+    with self._lock:
+      if value >= range_to:
+        _LOGGER.warning('record is out of upper bound %s: %s', range_to, value)
+        self._num_top_records += 1
+      elif value < range_from:
+        _LOGGER.warning(
+            'record is out of lower bound %s: %s', range_from, value)
+        self._num_bot_records += 1
+      else:
+        index = self._bucket_type.bucket_index(value)
+        self._buckets[index] = self._buckets.get(index, 0) + 1
+        self._num_records += 1
+
+  def total_count(self):
+    return self._num_records + self._num_top_records + self._num_bot_records
+
+  def p99(self):
+    return self.get_linear_interpolation(0.99)
+
+  def p90(self):
+    return self.get_linear_interpolation(0.90)
+
+  def p50(self):
+    return self.get_linear_interpolation(0.50)
+
+  def get_linear_interpolation(self, percentile):

Review comment:
      Add a pydoc for the percentile parameter, stating that it should be in the range (0, 1).
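A stub showing what the requested pydoc could look like (illustrative only, not the merged docstring):

```python
class HistogramDoc(object):
  """Stub carrying only the docstring the review asks for."""

  def get_linear_interpolation(self, percentile):
    """Calculate percentile estimation based on linear interpolation.

    It first finds the bucket which includes the target percentile and
    projects the estimated point in the bucket by assuming all the
    elements in the bucket are uniformly distributed.

    Args:
      percentile: the target percentile, as a fraction in the open
        range (0, 1) -- e.g. 0.99 for p99.
    """
    raise NotImplementedError
```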

##########
File path: sdks/python/apache_beam/utils/histogram.py
##########
@@ -0,0 +1,176 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from __future__ import absolute_import
+from __future__ import division
+
+import logging
+import math
+import threading
+
+_LOGGER = logging.getLogger(__name__)
+
+
+class Histogram(object):
+  """A histogram that supports estimated percentile with linear interpolation.
+
+  This class is considered experimental and may break or receive backwards-
+  incompatible changes in future versions of the Apache Beam SDK.
+  """
+  def __init__(self, bucket_type):
+    self._lock = threading.Lock()
+    self._bucket_type = bucket_type
+    self._buckets = {}
+    self._num_records = 0
+    self._num_top_records = 0
+    self._num_bot_records = 0
+
+  def clear(self):
+    with self._lock:
+      self._buckets = {}
+      self._num_records = 0
+      self._num_top_records = 0
+      self._num_bot_records = 0
+
+  def record(self, *args):
+    for arg in args:
+      self._record(arg)
+
+  def _record(self, value):
+    range_from = self._bucket_type.range_from()
+    range_to = self._bucket_type.range_to()
+    with self._lock:
+      if value >= range_to:
+        _LOGGER.warning('record is out of upper bound %s: %s', range_to, value)
+        self._num_top_records += 1
+      elif value < range_from:
+        _LOGGER.warning(
+            'record is out of lower bound %s: %s', range_from, value)
+        self._num_bot_records += 1
+      else:
+        index = self._bucket_type.bucket_index(value)
+        self._buckets[index] = self._buckets.get(index, 0) + 1
+        self._num_records += 1
+
+  def total_count(self):
+    return self._num_records + self._num_top_records + self._num_bot_records
+
+  def p99(self):
+    return self.get_linear_interpolation(0.99)
+
+  def p90(self):
+    return self.get_linear_interpolation(0.90)
+
+  def p50(self):
+    return self.get_linear_interpolation(0.50)
+
+  def get_linear_interpolation(self, percentile):
+    """Calculate percentile estimation based on linear interpolation.
+
+    It first finds the bucket which includes the target percentile and
+    projects the estimated point in the bucket by assuming all the elements
+    in the bucket are uniformly distributed.
+    """
+    with self._lock:
+      total_num_records = self.total_count()
+      if total_num_records == 0:
+        raise RuntimeError('histogram has no record.')
+
+      index = 0
+      record_sum = self._num_bot_records
+      if record_sum / total_num_records >= percentile:
+        return float('-inf')

Review comment:
      Rather than stating inf or -inf, state that the percentile falls below the lowest bucket or above the highest bucket, which is a more meaningful message to a user reading it.
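For reference, the linear-interpolation scheme the quoted code implements can be sketched standalone for equal-width buckets starting at 0 (simplified; no under/overflow handling):

```python
def interpolated_percentile(bucket_counts, bucket_width, percentile):
  """Sketch of the interpolation in the quoted code: find the bucket
  containing the target percentile, then assume records inside it are
  uniformly distributed across the bucket's width."""
  total = sum(bucket_counts)
  target = percentile * total  # rank of the record we are estimating
  cumulative = 0
  for index, count in enumerate(bucket_counts):
    if count > 0 and cumulative + count >= target:
      # Fraction of this bucket needed to reach the target rank.
      frac = (target - cumulative) / count
      return (index + frac) * bucket_width
    cumulative += count
  return len(bucket_counts) * bucket_width
```

E.g., with two buckets of 10 records each over [0, 10) and [10, 20), the p50 estimate sits at the boundary between them.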




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org