Posted to issues@spark.apache.org by "Xiao Li (JIRA)" <ji...@apache.org> on 2019/01/23 22:57:00 UTC
[jira] [Created] (SPARK-26709) OptimizeMetadataOnlyQuery does not correctly handle the empty files
Xiao Li created SPARK-26709:
-------------------------------
Summary: OptimizeMetadataOnlyQuery does not correctly handle the empty files
Key: SPARK-26709
URL: https://issues.apache.org/jira/browse/SPARK-26709
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 2.4.0, 2.3.2, 2.2.3, 2.1.3
Reporter: Xiao Li
{code:scala}
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.internal.SQLConf

withSQLConf(SQLConf.OPTIMIZER_METADATA_ONLY.key -> "true") {
  withTempPath { path =>
    val tabLocation = path.getAbsolutePath
    val partLocation = new Path(path.getAbsolutePath, "partCol1=3")
    // Write an empty DataFrame, creating a partition directory whose
    // files contain no records.
    val df = spark.emptyDataFrame.select(lit(1).as("col1"))
    df.write.parquet(partLocation.toString)
    val readDF = spark.read.parquet(tabLocation)
    // The table has no rows, so both aggregates should return null.
    checkAnswer(readDF.selectExpr("max(partCol1)"), Row(null))
    checkAnswer(readDF.selectExpr("max(col1)"), Row(null))
  }
}
{code}
OptimizeMetadataOnlyQuery has a correctness bug when a partitioned table contains files with no records: the rule answers partition-column aggregates from catalog metadata, so a query like max(partCol1) returns the partition value (3 above) instead of null.
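Until the rule is fixed, a possible workaround (a sketch, not a fix confirmed by this report) is to disable the metadata-only optimization so the aggregate scans the actual data files rather than the catalog:

```scala
// Workaround sketch: turn off OptimizeMetadataOnlyQuery for the session.
// Assumes "spark" is an active SparkSession; spark.sql.optimizer.metadataOnly
// is the config key behind SQLConf.OPTIMIZER_METADATA_ONLY.
spark.conf.set("spark.sql.optimizer.metadataOnly", "false")
```

With the optimization disabled, max(partCol1) over the empty partition goes through a normal scan and returns null as expected.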
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)