Posted to issues@spark.apache.org by "Linbo (JIRA)" <ji...@apache.org> on 2016/10/16 02:18:20 UTC
[jira] [Created] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows
Linbo created SPARK-17957:
-----------------------------
Summary: Calling outer join and na.fill(0) and then inner join will miss rows
Key: SPARK-17957
URL: https://issues.apache.org/jira/browse/SPARK-17957
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 2.0.1
Environment: Spark 2.0.1, Mac, Local
Reporter: Linbo
I reported a similar bug two months ago and it was fixed in Spark 2.0.1: https://issues.apache.org/jira/browse/SPARK-17060 But I have found a new bug: when I insert a na.fill(0) call between the outer join and the inner join in the same workflow as SPARK-17060, I get a wrong result.
{code:title=spark-shell|borderStyle=solid}
scala> val a = Seq((1, 2), (2, 3)).toDF("a", "b")
a: org.apache.spark.sql.DataFrame = [a: int, b: int]
scala> val b = Seq((2, 5), (3, 4)).toDF("a", "c")
b: org.apache.spark.sql.DataFrame = [a: int, c: int]
scala> val ab = a.join(b, Seq("a"), "fullouter").na.fill(0)
ab: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]
scala> ab.show
+---+---+---+
| a| b| c|
+---+---+---+
| 1| 2| 0|
| 3| 0| 4|
| 2| 3| 5|
+---+---+---+
scala> val c = Seq((3, 1)).toDF("a", "d")
c: org.apache.spark.sql.DataFrame = [a: int, d: int]
scala> c.show
+---+---+
| a| d|
+---+---+
| 3| 1|
+---+---+
scala> ab.join(c, "a").show
+---+---+---+---+
| a| b| c| d|
+---+---+---+---+
+---+---+---+---+
scala> val ab = a.join(b, Seq("a"), "outer").na.fill(0)
ab: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]
scala> ab.join(c, "a").show
+---+---+---+---+
| a| b| c| d|
+---+---+---+---+
+---+---+---+---+
{code}
And again, if I use persist, the result is correct. I think the problem is in the join optimizer, similar to this PR: https://github.com/apache/spark/pull/14661
{code:title=spark-shell|borderStyle=solid}
scala> val ab = a.join(b, Seq("a"), "outer").na.fill(0).persist
ab: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [a: int, b: int ... 1 more field]
scala> ab.show
+---+---+---+
| a| b| c|
+---+---+---+
| 1| 2| 0|
| 3| 0| 4|
| 2| 3| 5|
+---+---+---+
scala> ab.join(c, "a").show
+---+---+---+---+
| a| b| c| d|
+---+---+---+---+
| 3| 0| 4| 1|
+---+---+---+---+
{code}
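For reference, the row the inner join should produce can be checked with a plain-Python simulation of the same pipeline (a sketch of the intended semantics only, not Spark code; the dict-based join below is my own illustration):

{code:title=expected-semantics.py|borderStyle=solid}
# Simulate: a FULL OUTER JOIN b ON "a", then na.fill(0), then INNER JOIN c ON "a".
a = {1: 2, 2: 3}   # key "a" -> column "b"
b = {2: 5, 3: 4}   # key "a" -> column "c"
c = {3: 1}         # key "a" -> column "d"

# Full outer join on the key, with missing values filled by 0 (like na.fill(0)).
ab = {k: (a.get(k, 0), b.get(k, 0)) for k in set(a) | set(b)}

# Inner join with c: only keys present on both sides survive.
result = sorted((k, bv, cv, c[k]) for k, (bv, cv) in ab.items() if k in c)

print(result)  # [(3, 0, 4, 1)] -- the row that the unpersisted Spark plan drops
{code}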
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org