Posted to issues@spark.apache.org by "Kevin Chen (JIRA)" <ji...@apache.org> on 2016/06/24 00:39:16 UTC
[jira] [Created] (SPARK-16181) Incorrect behavior for isNull filter
Kevin Chen created SPARK-16181:
----------------------------------
Summary: Incorrect behavior for isNull filter
Key: SPARK-16181
URL: https://issues.apache.org/jira/browse/SPARK-16181
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 2.0.0
Reporter: Kevin Chen
Repro:
JavaRDD<Row> leftRdd = javaSparkContext.parallelize(ImmutableList.of(RowFactory.create("x")));
JavaRDD<Row> rightRdd = javaSparkContext.parallelize(ImmutableList.of(RowFactory.create("y")));
StructType schema = DataTypes.createStructType(ImmutableList.of(
        DataTypes.createStructField("col", DataTypes.StringType, true)));
Dataset<Row> left = sparkSession.createDataFrame(leftRdd, schema);
Dataset<Row> right = sparkSession.createDataFrame(rightRdd, schema);
// add a column to the right
Dataset<Row> withConstantColumn = right.withColumn("new", functions.lit(true));
// do a left outer join. Nothing matches; expect the joined Dataset to have a single row ['x', null, null]
Column joinCondition = left.col("col").equalTo(right.col("col"));
Dataset<Row> joined = left.join(withConstantColumn, joinCondition, LeftOuter.toString());
// filter for nulls, still expect the single row ['x', null, null]
Dataset<Row> filtered = joined.filter(functions.col("new").isNull());
// This fails with 1 != 0
Assert.assertEquals(1, filtered.count());
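To make the expected behavior concrete, here is a minimal plain-Java sketch (no Spark) of the SQL left-outer-join semantics the repro above relies on. The class and method names are illustrative stand-ins, not Spark APIs: an unmatched left row gets NULLs for all right-side columns, including the constant "new" column, so a "new IS NULL" filter should keep it.

```java
import java.util.ArrayList;
import java.util.List;

public class LeftJoinNullFilterSketch {
    // A row is just an Object[]; null cells model SQL NULL.
    // Joins left rows {col} to right rows {col, new} on column 0.
    static List<Object[]> leftOuterJoin(List<Object[]> left, List<Object[]> right) {
        List<Object[]> out = new ArrayList<>();
        for (Object[] l : left) {
            boolean matched = false;
            for (Object[] r : right) {
                if (l[0].equals(r[0])) {
                    out.add(new Object[] {l[0], r[0], r[1]});
                    matched = true;
                }
            }
            if (!matched) {
                // No match: right-side columns, including the constant
                // "new" column, come out as NULL in the joined row.
                out.add(new Object[] {l[0], null, null});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Object[]> left = new ArrayList<>();
        left.add(new Object[] {"x"});
        List<Object[]> right = new ArrayList<>();
        right.add(new Object[] {"y", Boolean.TRUE}); // right row carries new = true

        List<Object[]> joined = leftOuterJoin(left, right);
        // Filter for rows where the "new" column (index 2) is NULL.
        long count = joined.stream().filter(row -> row[2] == null).count();
        // Expected semantics: the unmatched left row ['x', null, null]
        // survives the filter, so this prints 1 -- the count Spark returns as 0.
        System.out.println(count);
    }
}
```

Under these semantics the assertion in the repro should pass; the bug is that Spark returns a count of 0 instead.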
[~rxin]
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org