Posted to issues@spark.apache.org by "Dongjoon Hyun (Jira)" <ji...@apache.org> on 2019/09/20 01:50:00 UTC

[jira] [Commented] (SPARK-28358) Fix Flaky Tests - test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests)

    [ https://issues.apache.org/jira/browse/SPARK-28358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933923#comment-16933923 ] 

Dongjoon Hyun commented on SPARK-28358:
---------------------------------------

It turns out that this has recently started to make the `master` SBT Hadoop-2.7 jobs fail consecutively:
- https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/

> Fix Flaky Tests - test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests)
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-28358
>                 URL: https://issues.apache.org/jira/browse/SPARK-28358
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, Tests
>    Affects Versions: 3.0.0
>            Reporter: Dongjoon Hyun
>            Priority: Major
>
> Some Python tests fail with `error: release unlocked lock`:
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/6127/console
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/6123/console
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/6128/console
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/6506/console
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/6507/console
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/6508/console
> {code}
> test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests) ... ERROR
> ======================================================================
> ERROR: test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/tests/test_serde.py", line 92, in test_time_with_timezone
>     df = self.spark.createDataFrame([(day, now, utcnow)])
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/session.py", line 788, in createDataFrame
>     jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 2369, in _to_java_object_rdd
>     rdd = self._pickled()
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 264, in _pickled
>     return self._reserialize(AutoBatchedSerializer(PickleSerializer()))
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 666, in _reserialize
>     self = self.map(lambda x: x, preservesPartitioning=True)
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 397, in map
>     return self.mapPartitionsWithIndex(func, preservesPartitioning)
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 437, in mapPartitionsWithIndex
>     return PipelinedRDD(self, f, preservesPartitioning)
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 2586, in __init__
>     self.is_barrier = isFromBarrier or prev._is_barrier()
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 2486, in _is_barrier
>     return self._jrdd.rdd().isBarrier()
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1286, in __call__
>     answer, self.gateway_client, self.target_id, self.name)
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/utils.py", line 89, in deco
>     return f(*a, **kw)
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/protocol.py", line 342, in get_return_value
>     return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 2492, in <lambda>
>     lambda target_id, gateway_client: JavaObject(target_id, gateway_client))
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1324, in __init__
>     ThreadSafeFinalizer.add_finalizer(key, value)
>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/finalizer.py", line 43, in add_finalizer
>     cls.finalizers[id] = weak_ref
>   File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 216, in __exit__
>     self.release()
>   File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 208, in release
>     self.__block.release()
> error: release unlocked lock
> {code}
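> For reference, here is a minimal sketch of the code path this traceback goes through. Only the `createDataFrame` call is taken verbatim from the traceback; the SparkSession setup and the `day`/`now`/`utcnow` values are assumed stand-ins for what `test_time_with_timezone` actually builds.
> {code}
> import datetime
>
> from pyspark.sql import SparkSession
>
> spark = SparkSession.builder.master("local[1]").getOrCreate()
>
> # Assumed inputs: a date, a naive local datetime, and a UTC datetime,
> # as the test name suggests.
> day = datetime.date.today()
> now = datetime.datetime.now()
> utcnow = datetime.datetime.utcnow()
>
> # The call from the traceback. On PyPy 2.5.1 it intermittently fails inside
> # py4j's ThreadSafeFinalizer with "error: release unlocked lock".
> df = spark.createDataFrame([(day, now, utcnow)])
> df.collect()
> {code}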
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/6132/console
> {code}
> File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/group.py", line 56, in pyspark.sql.group.GroupedData.mean
> Failed example:
>     df.groupBy().mean('age').collect()
> Exception raised:
>     Traceback (most recent call last):
>       File "/usr/lib64/pypy-2.5.1/lib-python/2.7/doctest.py", line 1315, in __run
>         compileflags, 1) in test.globs
>       File "<doctest pyspark.sql.group.GroupedData.mean[0]>", line 1, in <module>
>         df.groupBy().mean('age').collect()
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/group.py", line 42, in _api
>         jdf = getattr(self._jgd, name)(_to_seq(self.sql_ctx._sc, cols))
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/column.py", line 66, in _to_seq
>         return sc._jvm.PythonUtils.toSeq(cols)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1277, in __call__
>         args_command, temp_args = self._build_args(*args)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1241, in _build_args
>         (new_args, temp_args) = self._get_args(args)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1228, in _get_args
>         temp_arg = converter.convert(arg, self.gateway_client)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_collections.py", line 499, in convert
>         java_list = ArrayList()
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1554, in __call__
>         answer, self._gateway_client, None, self._fqn)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/utils.py", line 89, in deco
>         return f(*a, **kw)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/protocol.py", line 342, in get_return_value
>         return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_collections.py", line 525, in <lambda>
>         JavaList(target_id, gateway_client))
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_collections.py", line 252, in __init__
>         JavaObject.__init__(self, target_id, gateway_client)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1324, in __init__
>         ThreadSafeFinalizer.add_finalizer(key, value)
>       File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/finalizer.py", line 43, in add_finalizer
>         cls.finalizers[id] = weak_ref
>       File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 216, in __exit__
>         self.release()
>       File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 208, in release
>         self.__block.release()
>     error: release unlocked lock
> **********************************************************************
>    1 of   2 in pyspark.sql.group.GroupedData.mean
> ***Test Failed*** 1 failures.
> {code}
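> Both tracebacks bottom out in the same place: py4j's `ThreadSafeFinalizer.add_finalizer` registering a finalizer while holding a lock, whose release on exit raises `error: release unlocked lock` under PyPy 2.5.1. Below is a sketch of the doctest call that trips the second failure; the DataFrame contents are assumed, and only `df.groupBy().mean('age').collect()` is taken from the log.
> {code}
> from pyspark.sql import SparkSession
>
> spark = SparkSession.builder.master("local[1]").getOrCreate()
>
> # Assumed fixture: the GroupedData.mean doctest uses a small DataFrame
> # with an 'age' column.
> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
>
> # The failing doctest line. Converting the column arguments for the JVM call
> # is where the py4j finalizer registration (and the spurious lock release)
> # happens on PyPy.
> df.groupBy().mean("age").collect()
> {code}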



