Posted to issues@spark.apache.org by "liuxian (JIRA)" <ji...@apache.org> on 2019/03/22 03:20:00 UTC

[jira] [Created] (SPARK-27238) In the same APP, maybe some hive parquet tables can't use the built-in Parquet reader and writer

liuxian created SPARK-27238:
-------------------------------

             Summary: In the same APP, maybe some hive parquet tables can't use the built-in Parquet reader and writer
                 Key: SPARK-27238
                 URL: https://issues.apache.org/jira/browse/SPARK-27238
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 3.0.0
            Reporter: liuxian


In the same application, TableA and TableB are both Hive Parquet tables, but TableA can't use the built-in Parquet reader and writer.

In this situation, spark.sql.hive.convertMetastoreParquet can't control this well, so I think we can add a fine-grained configuration to handle this case.
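For illustration, the existing setting is session-wide, so it cannot distinguish between the two tables (a sketch; tableA and tableB are hypothetical names, and the per-table behavior described in the comments is the problem being reported, not an existing feature):

{code:sql}
-- The switch applies to every Hive Parquet table read in the session:
SET spark.sql.hive.convertMetastoreParquet=true;

SELECT * FROM tableB;  -- fine: converted to the built-in Parquet reader
SELECT * FROM tableA;  -- problem: also converted, even though tableA
                       -- can't use the built-in reader and writer

-- The only workaround today is to flip the flag globally, which then
-- disables the faster built-in path for tableB as well:
SET spark.sql.hive.convertMetastoreParquet=false;
{code}

A fine-grained (e.g. per-table) configuration would let tableA fall back to the Hive SerDe while tableB keeps the built-in Parquet path.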



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org