Posted to issues@spark.apache.org by "Linbo (JIRA)" <ji...@apache.org> on 2016/08/15 13:43:20 UTC
[jira] [Updated] (SPARK-17060) Call inner join after outer join will miss rows with null values
[ https://issues.apache.org/jira/browse/SPARK-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Linbo updated SPARK-17060:
--------------------------
Description:
{code:title=test.scala|borderStyle=solid}
scala> val df1 = sc.parallelize(Seq((1, 2, 3), (3, 3, 3))).toDF("a", "b", "c")
df1: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]
scala> val df2 = sc.parallelize(Seq((1, 2, 4), (4, 4, 4))).toDF("a", "b", "d")
df2: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]
scala> val df3 = df1.join(df2, Seq("a", "b"), "outer")
df3: org.apache.spark.sql.DataFrame = [a: int, b: int ... 2 more fields]
scala> df3.show()
+---+---+----+----+
|  a|  b|   c|   d|
+---+---+----+----+
|  1|  2|   3|   4|
|  3|  3|   3|null|
|  4|  4|null|   4|
+---+---+----+----+
scala> val df4 = sc.parallelize(Seq((1, 2, 5), (3, 3, 5), (4, 4, 5))).toDF("a", "b", "e")
df4: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]
scala> df4.show()
+---+---+---+
|  a|  b|  e|
+---+---+---+
|  1|  2|  5|
|  3|  3|  5|
|  4|  4|  5|
+---+---+---+
scala> df3.join(df4, Seq("a", "b"), "inner").show()
+---+---+---+---+---+
|  a|  b|  c|  d|  e|
+---+---+---+---+---+
|  1|  2|  3|  4|  5|
+---+---+---+---+---+
{code}
If persist is called on df3 first, the inner join returns the correct result:
{code:title=test2.scala|borderStyle=solid}
scala> df3.persist
res32: df3.type = [a: int, b: int ... 2 more fields]
scala> df3.join(df4, Seq("a", "b"), "inner").show()
+---+---+----+----+---+
|  a|  b|   c|   d|  e|
+---+---+----+----+---+
|  1|  2|   3|   4|  5|
|  3|  3|   3|null|  5|
|  4|  4|null|   4|  5|
+---+---+----+----+---+
{code}
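The expected semantics can be sketched without Spark, using plain Scala collections (the object JoinSketch and its fields below are illustrative stand-ins, not Spark API): an inner join matches only on the key columns (a, b), so df3 rows carrying nulls in the non-key columns c and d must still survive the join with df4.

```scala
object JoinSketch {
  // df3 models the outer-join result keyed on (a, b); None stands in
  // for SQL null in the non-key columns c and d.
  val df3: Map[(Int, Int), (Option[Int], Option[Int])] = Map(
    ((1, 2), (Some(3), Some(4))),
    ((3, 3), (Some(3), None)),  // d is null
    ((4, 4), (None, Some(4)))   // c is null
  )

  // df4 keyed on (a, b), with one non-key column e.
  val df4: Map[(Int, Int), Int] =
    Map(((1, 2), 5), ((3, 3), 5), ((4, 4), 5))

  // An inner join matches on the key columns only, so every df3 row
  // whose key also appears in df4 survives, nulls included.
  val joined: Map[(Int, Int), (Option[Int], Option[Int], Int)] =
    for {
      (k, (c, d)) <- df3
      e           <- df4.get(k)
    } yield (k, (c, d, e))

  def main(args: Array[String]): Unit =
    joined.toSeq.sortBy(_._1).foreach(println)
}
```

Under these semantics all three rows of df3 appear in the result, which matches the (correct) output produced after persist above.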
> Call inner join after outer join will miss rows with null values
> ----------------------------------------------------------------
>
> Key: SPARK-17060
> URL: https://issues.apache.org/jira/browse/SPARK-17060
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.0
> Environment: Spark 2.0.0, Mac, Local
> Reporter: Linbo
> Labels: join
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)