Posted to issues@spark.apache.org by "Chenxiao Mao (JIRA)" <ji...@apache.org> on 2018/08/16 13:49:00 UTC

[jira] [Created] (SPARK-25132) Spark returns NULL for a column whose Hive metastore schema and Parquet schema are in different letter cases

Chenxiao Mao created SPARK-25132:
------------------------------------

             Summary: Spark returns NULL for a column whose Hive metastore schema and Parquet schema are in different letter cases
                 Key: SPARK-25132
                 URL: https://issues.apache.org/jira/browse/SPARK-25132
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.3.1
            Reporter: Chenxiao Mao


Spark SQL returns NULL for a column whose Hive metastore schema and Parquet schema are in different letter cases, regardless of whether spark.sql.caseSensitive is set to true or false.

Here is a simple example to reproduce this issue:

scala> spark.range(5).toDF.write.mode("overwrite").saveAsTable("t1")

spark-sql> show create table t1;
CREATE TABLE `t1` (`id` BIGINT)
USING parquet
OPTIONS (
 `serialization.format` '1'
)

spark-sql> CREATE TABLE `t2` (`ID` BIGINT)
         > USING parquet
         > LOCATION 'hdfs://localhost/user/hive/warehouse/t1';

spark-sql> select * from t1;
0
1
2
3
4

spark-sql> select * from t2;
NULL
NULL
NULL
NULL
NULL
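
A minimal sketch of the same symptom without the metastore, assuming the warehouse path from the example above and the 2.3.x behavior described here: reading the Parquet files directly with an upper-case requested schema also appears to yield NULLs, presumably because the requested column name "ID" is not matched case-insensitively against the physical Parquet column "id".

scala> import org.apache.spark.sql.types._
scala> // Requested schema uses upper-case "ID", while the files written for t1 contain "id"
scala> val upperSchema = StructType(Seq(StructField("ID", LongType)))
scala> spark.read.schema(upperSchema)
     |   .parquet("hdfs://localhost/user/hive/warehouse/t1")
     |   .show()
// On affected versions this is expected to print NULL for every row,
// mirroring the result of `select * from t2` above.

Until this is addressed, a practical workaround is to keep the letter case of the table schema identical to the Parquet file schema, as with table t1 above.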
