Posted to issues@spark.apache.org by "Marcelo Vanzin (JIRA)" <ji...@apache.org> on 2016/08/10 18:43:20 UTC

[jira] [Resolved] (SPARK-17000) Spark cannot connect to secure metastore when using custom metastore jars

     [ https://issues.apache.org/jira/browse/SPARK-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcelo Vanzin resolved SPARK-17000.
------------------------------------
    Resolution: Duplicate

My bad, duplicate.

> Spark cannot connect to secure metastore when using custom metastore jars
> -------------------------------------------------------------------------
>
>                 Key: SPARK-17000
>                 URL: https://issues.apache.org/jira/browse/SPARK-17000
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Marcelo Vanzin
>
> When you set {{spark.sql.hive.metastore.jars}} and try to connect to a secured metastore server, the connection fails with errors like this:
> {noformat}
> 16/08/10 10:19:25 WARN hive.metastore: set_ugi() not successful, Likely cause: new client talking to old server. Continuing without it.
> org.apache.thrift.transport.TTransportException
>         at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>         at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>         at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
>         at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_set_ugi(ThriftHiveMetastore.java:3788)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.set_ugi(ThriftHiveMetastore.java:3774)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:447)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:240)
>         at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1501)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
>         at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3037)
>         at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3056)
>         at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3281)
>         at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
>         at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:201)
>         at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:324)
>         at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:285)
>         at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:260)
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:514)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:187)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:266)
> {noformat}
> This happens because the Hive settings stored in the main class loader's {{hadoopConf}} instance are not propagated to the {{HiveConf}} instance created by {{HiveClientImpl}}; the constructor used to create that {{HiveConf}} instance does not properly copy Hive configs from a plain {{Configuration}} object.
> This works fine in non-secure mode because the metastore URIs seem to end up in the final {{HiveConf}} instance.
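To illustrate the kind of copy step the description says is missing, here is a minimal, self-contained sketch: when deriving the Hive client's configuration from the plain Hadoop configuration, every {{hive.*}} entry (the metastore URIs, SASL/Kerberos settings, etc.) has to be carried over. Plain maps stand in for {{Configuration}} and {{HiveConf}} here; the class name, method name, and config values are illustrative only, not Spark or Hive code.

```java
import java.util.HashMap;
import java.util.Map;

public class HiveConfCopySketch {
    // Hypothetical stand-in for the missing propagation step: copy every
    // "hive."-prefixed entry from the base (Hadoop-style) config into the
    // derived (HiveConf-style) config. If entries like
    // hive.metastore.sasl.enabled are dropped here, the client would talk
    // to a secured metastore without SASL and fail as in the stack trace.
    static Map<String, String> copyHiveSettings(Map<String, String> hadoopConf) {
        Map<String, String> hiveConf = new HashMap<>();
        for (Map.Entry<String, String> e : hadoopConf.entrySet()) {
            if (e.getKey().startsWith("hive.")) {
                hiveConf.put(e.getKey(), e.getValue());
            }
        }
        return hiveConf;
    }

    public static void main(String[] args) {
        Map<String, String> hadoopConf = new HashMap<>();
        hadoopConf.put("hive.metastore.uris", "thrift://metastore:9083");
        hadoopConf.put("hive.metastore.sasl.enabled", "true");
        hadoopConf.put("fs.defaultFS", "hdfs://nn:8020");

        Map<String, String> hiveConf = copyHiveSettings(hadoopConf);
        // Only the two hive.* entries are propagated; fs.defaultFS is not.
        System.out.println(hiveConf.size()); // prints 2
    }
}
```

In non-secure mode the failure is masked because only the metastore URIs need to survive the copy; once Kerberos/SASL settings are also required, any entry lost in this step breaks the handshake.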



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org