Posted to issues@spark.apache.org by "Rishi S Balajis (Jira)" <ji...@apache.org> on 2020/10/19 15:20:00 UTC
[jira] [Updated] (SPARK-33182) Using Delegation Tokens to access HBase through Spark (Java)
[ https://issues.apache.org/jira/browse/SPARK-33182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rishi S Balajis updated SPARK-33182:
------------------------------------
Affects Version/s: 2.3.0 (was: 2.4.0)
> Using Delegation Tokens to access HBase through Spark (Java)
> ------------------------------------------------------------
>
> Key: SPARK-33182
> URL: https://issues.apache.org/jira/browse/SPARK-33182
> Project: Spark
> Issue Type: Question
> Components: Spark Core
> Affects Versions: 2.3.0
> Environment: Spark 2.3, RHEL 7.5, HDP 3
> Reporter: Rishi S Balajis
> Priority: Major
> Labels: hbase, kerberos
>
> I have a requirement to access a kerberized HBase cluster through delegation tokens instead of a keytab. I have generated the token using the TokenUtil API and loaded it back into the UserGroupInformation. However, hasKerberosCredentials() returns false. What is the right way to use a saved delegation token to access HBase? My current code is shown below; it fails with the error {{client cannot authenticate via:[token, kerberos]}}.
>
> {code:java}
> import java.security.PrivilegedExceptionAction;
>
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hbase.spark.JavaHBaseContext;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.security.Credentials;
> import org.apache.hadoop.security.UserGroupInformation;
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaSparkContext;
> import org.apache.spark.sql.Dataset;
> import org.apache.spark.sql.Row;
> import org.apache.spark.sql.SQLContext;
>
> // conf is an HBase-aware org.apache.hadoop.conf.Configuration built elsewhere
> UserGroupInformation ugi = UserGroupInformation.getLoginUser();
> Credentials creds = Credentials.readTokenStorageFile(
>         new Path("file:///zen-volume-home/tokenFile"), conf);
>
> System.out.println("TOKEN *********" + creds.getToken(new Text("hbase")));
> ugi.addToken(creds.getToken(new Text("hbase")));
> ugi.addCredentials(creds);
>
> /* The token is printed as expected. The question is how to use this ugi,
>    which now carries the token, to read data from HBase. Current attempt: */
> SQLContext sqlC = ugi.doAs(new PrivilegedExceptionAction<SQLContext>() {
>     public SQLContext run() throws Exception {
>         SparkConf sparkconf = new SparkConf().setAppName("HBase With Spark");
>         JavaSparkContext jsc = new JavaSparkContext(sparkconf);
>         JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
>         SQLContext sqlContext = new SQLContext(jsc);
>
>         String sqlMapping = "name STRING all:name";
>         Dataset<Row> dfEmp = sqlContext.read().format("org.apache.hadoop.hbase.spark")
>                 .option("hbase.columns.mapping", sqlMapping)
>                 .option("hbase.table", "employee")
>                 .load();
>
>         dfEmp.registerTempTable("empdata");
>         dfEmp.show();
>         return sqlContext;
>     }
> });
> {code}
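> One possible missing piece (an assumption on my side, not something confirmed for this cluster): adding the token to the driver-side UGI alone is not enough, since the executors also need the credentials, and {{hasKerberosCredentials()}} is expected to return false for a token-only login. Hadoop's {{UserGroupInformation}} loads a saved token file automatically at login time when the {{HADOOP_TOKEN_FILE_LOCATION}} environment variable points at it, so one sketch (class name and jar are placeholders) would be:
>
> {code}
> # Hypothetical submission: let UGI pick up the saved delegation token at
> # login instead of adding it manually. On YARN, credentials held at submit
> # time are shipped to the containers with the launch context.
> export HADOOP_TOKEN_FILE_LOCATION=/zen-volume-home/tokenFile
> spark-submit \
>   --master yarn \
>   --deploy-mode client \
>   --class com.example.HBaseWithSpark \
>   hbase-with-spark.jar
> {code}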
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org