Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2018/06/22 03:47:00 UTC
[jira] [Commented] (SPARK-23710) Upgrade Hive to 2.3.2
[ https://issues.apache.org/jira/browse/SPARK-23710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519979#comment-16519979 ]
Hyukjin Kwon commented on SPARK-23710:
--------------------------------------
[~q79969786], per Xiao's comment, I think the investigation of the potential downsides and the gains we get should be done here. I think we should also consider this option for the long term, to get rid of the fork.
Also, I saw a few comments that [~dongjoon] left on your experimental try (https://github.com/apache/spark/pull/20659). Hive's ORC shouldn't be referenced and should really be removed. Do you think it's possible to make a safer fix?
cc [~vanzin] and [~srowen] too
> Upgrade Hive to 2.3.2
> ---------------------
>
> Key: SPARK-23710
> URL: https://issues.apache.org/jira/browse/SPARK-23710
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 2.4.0
> Reporter: Yuming Wang
> Priority: Major
>
> h1. Main changes
> * Maven dependency:
> hive.version from {{1.2.1.spark2}} to {{2.3.2}} and change {{hive.classifier}} to {{core}}
> calcite.version from {{1.2.0-incubating}} to {{1.10.0}}
> datanucleus-core.version from {{3.2.10}} to {{4.1.17}}
> remove {{orc.classifier}}, which means ORC uses {{hive.storage.api}}; see ORC-174
> add new dependencies {{avatica}} and {{hive.storage.api}}
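> For reference, the version bumps above would look roughly like this in the root {{pom.xml}} properties (a sketch only; the actual property names and layout in Spark's pom may differ):

```xml
<!-- Sketch of the property changes described above; property names
     are illustrative and may not match Spark's actual pom.xml. -->
<properties>
  <!-- hive.version: 1.2.1.spark2 -> 2.3.2, with classifier "core" -->
  <hive.version>2.3.2</hive.version>
  <hive.classifier>core</hive.classifier>
  <!-- calcite.version: 1.2.0-incubating -> 1.10.0 -->
  <calcite.version>1.10.0</calcite.version>
  <!-- datanucleus-core.version: 3.2.10 -> 4.1.17 -->
  <datanucleus-core.version>4.1.17</datanucleus-core.version>
</properties>
```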
> * ORC compatibility changes:
> OrcColumnVector.java, OrcColumnarBatchReader.java, OrcDeserializer.scala, OrcFilters.scala, OrcSerializer.scala, OrcFilterSuite.scala
> * hive-thriftserver Java file updates:
> update {{sql/hive-thriftserver/if/TCLIService.thrift}} to hive 2.3.2
> update {{sql/hive-thriftserver/src/main/java/org/apache/hive/service/*}} to hive 2.3.2
> * Test suites to update:
> ||TestSuite||Reason||
> |StatisticsSuite|HIVE-16098|
> |SessionCatalogSuite|Similar to VersionsSuite.scala#L427|
> |CliSuite, HiveThriftServer2Suites, HiveSparkSubmitSuite, HiveQuerySuite, SQLQuerySuite|Update hive-hcatalog-core-0.13.1.jar to hive-hcatalog-core-2.3.2.jar|
> |SparkExecuteStatementOperationSuite|Interface changed from org.apache.hive.service.cli.Type.NULL_TYPE to org.apache.hadoop.hive.serde2.thrift.Type.NULL_TYPE|
> |ClasspathDependenciesSuite|org.apache.hive.com.esotericsoftware.kryo.Kryo change to com.esotericsoftware.kryo.Kryo|
> |HiveMetastoreCatalogSuite|Result format changed from Seq("1.1\t1", "2.1\t2") to Seq("1.100\t1", "2.100\t2")|
> |HiveOrcFilterSuite|Result format changed|
> |HiveDDLSuite|Remove $ (This change needs to be reconsidered)|
> |HiveExternalCatalogVersionsSuite| java.lang.ClassCastException: org.datanucleus.identity.DatastoreIdImpl cannot be cast to org.datanucleus.identity.OID|
> * Other changes:
> Disable Hive schema verification: [HiveClientImpl.scala#L251|https://github.com/wangyum/spark/blob/75e4cc9e80f85517889e87a35da117bc361f2ff3/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala#L251] and [HiveExternalCatalog.scala#L58|https://github.com/wangyum/spark/blob/75e4cc9e80f85517889e87a35da117bc361f2ff3/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala#L58]
> Update [IsolatedClientLoader.scala#L189-L192|https://github.com/wangyum/spark/blob/75e4cc9e80f85517889e87a35da117bc361f2ff3/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala#L189-L192]
> Because Hive 2.3.2's {{org.apache.hadoop.hive.ql.metadata.Hive}} can't connect to a Hive 1.x metastore, we should use {{HiveMetaStoreClient.getDelegationToken}} instead of {{Hive.getDelegationToken}} and update {{HiveClientImpl.toHiveTable}}
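> A rough sketch of the delegation-token workaround above (the method and variable names here are illustrative, not the actual {{HiveClientImpl}} change):

```scala
// Sketch only: the idea is to go through the thrift-level
// HiveMetaStoreClient rather than org.apache.hadoop.hive.ql.metadata.Hive,
// whose 2.3.2 implementation cannot talk to a 1.x metastore.
import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient

def fetchDelegationToken(conf: HiveConf, owner: String, renewer: String): String = {
  val msClient = new HiveMetaStoreClient(conf)
  try {
    // HiveMetaStoreClient.getDelegationToken(owner, renewer) talks to the
    // metastore over thrift, so it works across metastore versions.
    msClient.getDelegationToken(owner, renewer)
  } finally {
    msClient.close()
  }
}
```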
> All changes can be found at [PR-20659|https://github.com/apache/spark/pull/20659].
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org