Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2020/05/18 00:29:00 UTC

[jira] [Created] (SPARK-31741) Flaky test: pyspark.sql.dataframe.DataFrame.intersectAll

Hyukjin Kwon created SPARK-31741:
------------------------------------

             Summary: Flaky test: pyspark.sql.dataframe.DataFrame.intersectAll
                 Key: SPARK-31741
                 URL: https://issues.apache.org/jira/browse/SPARK-31741
             Project: Spark
          Issue Type: Test
          Components: PySpark
    Affects Versions: 3.0.0
            Reporter: Hyukjin Kwon


https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7-hive-2.3/695/console

```
File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/dataframe.py", line 1600, in pyspark.sql.dataframe.DataFrame.intersectAll
Failed example:
    df1.intersectAll(df2).sort("C1", "C2").show()
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib64/pypy-2.5.1/lib-python/2.7/doctest.py", line 1315, in __run
        compileflags, 1) in test.globs
      File "<doctest pyspark.sql.dataframe.DataFrame.intersectAll[2]>", line 1, in <module>
        df1.intersectAll(df2).sort("C1", "C2").show()
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/dataframe.py", line 1182, in sort
        jdf = self._jdf.sort(self._sort_cols(cols, kwargs))
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/dataframe.py", line 1221, in _sort_cols
        return self._jseq(jcols)
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/dataframe.py", line 1189, in _jseq
        return _to_seq(self.sql_ctx._sc, cols, converter)
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/column.py", line 69, in _to_seq
        return sc._jvm.PythonUtils.toSeq(cols)
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
        answer, self.gateway_client, self.target_id, self.name)
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/pyspark/sql/utils.py", line 98, in deco
        return f(*a, **kw)
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 342, in get_return_value
        return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 2525, in <lambda>
        lambda target_id, gateway_client: JavaObject(target_id, gateway_client))
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1343, in __init__
        ThreadSafeFinalizer.add_finalizer(key, value)
      File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7-hive-2.3/python/lib/py4j-0.10.9-src.zip/py4j/finalizer.py", line 43, in add_finalizer
        cls.finalizers[id] = weak_ref
      File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 216, in __exit__
        self.release()
      File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 204, in release
        raise RuntimeError("cannot release un-acquired lock")
    RuntimeError: cannot release un-acquired lock
**********************************************************************
   1 of   3 in pyspark.sql.dataframe.DataFrame.intersectAll
***Test Failed*** 1 failures.
```
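
For reference, a minimal standalone sketch of what the failing doctest exercises. The data matches the intersectAll example in pyspark/sql/dataframe.py as far as I can tell; the local[2] master and app name below are only for this sketch and are not part of the doctest itself.

```python
# Approximate local reproduction of the intersectAll doctest that flakes.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[2]")
         .appName("SPARK-31741-repro")
         .getOrCreate())

df1 = spark.createDataFrame([("a", 1), ("a", 1), ("b", 3), ("c", 4)], ["C1", "C2"])
df2 = spark.createDataFrame([("a", 1), ("a", 1), ("b", 3)], ["C1", "C2"])

# Expected rows: ("a", 1), ("a", 1), ("b", 3). The flake occurs inside the
# Py4J call triggered by sort()/show(), not in the query result itself.
df1.intersectAll(df2).sort("C1", "C2").show()

spark.stop()
```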



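For context, the final RuntimeError is raised by the Python threading module rather than by Spark: releasing an RLock that the current thread does not hold produces exactly this message, so the acquire that should pair with this release appears to have been lost somewhere around py4j's ThreadSafeFinalizer under PyPy 2.5.1. A tiny illustration:

```python
import threading

# Releasing an RLock the current thread has not acquired reproduces the
# exact error message seen at the bottom of the traceback above.
lock = threading.RLock()
try:
    lock.release()
except RuntimeError as e:
    print(e)  # -> cannot release un-acquired lock
```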
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
