Posted to dev@spark.apache.org by Fatal Lin <fa...@gmail.com> on 2021/04/17 07:30:10 UTC

[Spark SQL] quick enhancement for SPARK-28098

Hello spark-devs,
we hit a case similar to SPARK-28098 when we tried to read a Parquet-format
table generated by a Hive UNION operation, and I put together a quick fix
for it.
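
For context, here is a minimal sketch of the kind of scenario I mean. The
database/table names are made up, and the runtime workaround at the end is
only the commonly suggested one for this class of problem, not necessarily
what my patch does:

    // Hive side (HiveQL): an INSERT ... UNION ALL like the one below writes
    // its output into HIVE_UNION_SUBDIR_1/, HIVE_UNION_SUBDIR_2/, ...
    //   INSERT OVERWRITE TABLE db.union_parquet
    //   SELECT * FROM db.src_a
    //   UNION ALL
    //   SELECT * FROM db.src_b;

    // Spark side: the native Parquet reader does not descend into those
    // subdirectories, so the table looks empty (or partially read).
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("hive-union-subdir-repro")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("SELECT COUNT(*) FROM db.union_parquet").show()

    // Commonly suggested workaround today: fall back to the Hive SerDe
    // reader and turn on recursive input directories in the Hadoop conf.
    spark.conf.set("spark.sql.hive.convertMetastoreParquet", "false")
    spark.sparkContext.hadoopConfiguration
      .set("mapreduce.input.fileinputformat.input.dir.recursive", "true")
    spark.sql("SELECT COUNT(*) FROM db.union_parquet").show()
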

I'm not sure whether we should reuse the existing Hive configuration for this
or add a new Spark-specific one.
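
To make that question a bit more concrete, a rough sketch of the two options
as I understand them. Only the Hadoop/Hive keys below are real settings; the
Spark SQL conf name in option 2 is a placeholder I made up for illustration:

    import org.apache.hadoop.conf.Configuration

    // Option 1: reuse the settings Hive itself honors for recursive input dirs.
    def readSubdirsEnabled(hadoopConf: Configuration): Boolean =
      hadoopConf.getBoolean("mapreduce.input.fileinputformat.input.dir.recursive", false) ||
        hadoopConf.getBoolean("hive.mapred.supports.subdirectories", false)

    // Option 2: add a dedicated Spark SQL conf instead (name is hypothetical):
    //   spark.sql.hive.readHiveUnionSubdirectories = true
    // which the native file listing would consult before deciding whether to
    // descend into HIVE_UNION_SUBDIR_* directories.
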

Also, this is my first time contributing code to Spark, and it looks like I
need a committer to authorize the test workflow run.
Any feedback is appreciated!