Posted to issues@spark.apache.org by "Dongjoon Hyun (JIRA)" <ji...@apache.org> on 2019/07/12 00:43:00 UTC
[jira] [Created] (SPARK-28358) Fix Flaky Tests - test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests)
Dongjoon Hyun created SPARK-28358:
-------------------------------------
Summary: Fix Flaky Tests - test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests)
Key: SPARK-28358
URL: https://issues.apache.org/jira/browse/SPARK-28358
Project: Spark
Issue Type: Bug
Components: PySpark, Tests
Affects Versions: 3.0.0
Reporter: Dongjoon Hyun
- https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/6127/console
{code}
test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests) ... ERROR
======================================================================
ERROR: test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/tests/test_serde.py", line 92, in test_time_with_timezone
    df = self.spark.createDataFrame([(day, now, utcnow)])
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/session.py", line 788, in createDataFrame
    jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 2369, in _to_java_object_rdd
    rdd = self._pickled()
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 264, in _pickled
    return self._reserialize(AutoBatchedSerializer(PickleSerializer()))
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 666, in _reserialize
    self = self.map(lambda x: x, preservesPartitioning=True)
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 397, in map
    return self.mapPartitionsWithIndex(func, preservesPartitioning)
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 437, in mapPartitionsWithIndex
    return PipelinedRDD(self, f, preservesPartitioning)
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 2586, in __init__
    self.is_barrier = isFromBarrier or prev._is_barrier()
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/rdd.py", line 2486, in _is_barrier
    return self._jrdd.rdd().isBarrier()
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1286, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/pyspark/sql/utils.py", line 89, in deco
    return f(*a, **kw)
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/protocol.py", line 342, in get_return_value
    return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 2492, in <lambda>
    lambda target_id, gateway_client: JavaObject(target_id, gateway_client))
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1324, in __init__
    ThreadSafeFinalizer.add_finalizer(key, value)
  File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/finalizer.py", line 43, in add_finalizer
    cls.finalizers[id] = weak_ref
  File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 216, in __exit__
    self.release()
  File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 208, in release
    self.__block.release()
error: release unlocked lock
{code}
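The failure bottoms out in PyPy's threading module: a lock's context-manager exit calls release() on a lock that is no longer held, which the interpreter reports as "release unlocked lock". A minimal sketch of that error class in isolation (pure Python, no Spark or py4j involved; on modern CPython the same misuse raises RuntimeError rather than PyPy 2's thread.error):

{code}
import threading

# Releasing a lock that was never acquired is the same class of misuse
# the traceback ends with inside py4j's ThreadSafeFinalizer path.
lock = threading.Lock()
try:
    lock.release()
except RuntimeError as e:
    print(e)  # e.g. "release unlocked lock"
{code}

In the flaky run this happens under py4j's finalizer registration, so the double release is a concurrency issue in the locking around py4j object finalization rather than anything in the test body itself.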
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org