Posted to user@ranger.apache.org by Anandha L Ranganathan <an...@gmail.com> on 2016/12/19 22:30:01 UTC

Unable to connect to S3 after enabling Ranger with Hive

Hi,


We are unable to create a table pointing to S3 after enabling Ranger.

This is the database we created before enabling Ranger:


   1. SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
   2. SET fs.s3a.access.key=xxxxxxx;
   3. SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;
   4.
   5.
   6. CREATE DATABASE IF NOT EXISTS backup_s3a1
   7. COMMENT "s3a schema test"
   8. LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";

After Ranger was enabled, we tried to create another database, but it
throws an error:


   1. 0: jdbc:hive2://usw2dxdpmn01.local:> SET
fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
   2. Error: Error while processing statement: Cannot modify
fs.s3a.impl at runtime. It is not in list of params that are allowed
to be modified at runtime (state=42000,code=1)
   3.



I configured the credentials in core-site.xml, but the commands below
always return "undefined" when I try to see the values. This is in our
dev environment, where Ranger is enabled. In other environments where
Ranger is not installed, we do not face this problem.


   1. 0: jdbc:hive2://usw2dxdpmn01:10010> set  fs.s3a.impl;
   2. +-----------------------------------------------------+--+
   3. |                         set                         |
   4. +-----------------------------------------------------+--+
   5. | fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  |
   6. +-----------------------------------------------------+--+
   7. 1 row selected (0.006 seconds)
   8. 0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key;
   9. +---------------------------------+--+
   10. |               set               |
   11. +---------------------------------+--+
   12. | fs.s3a.access.key is undefined  |
   13. +---------------------------------+--+
   14. 1 row selected (0.005 seconds)
   15. 0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.secret.key;
   16. +---------------------------------+--+
   17. |               set               |
   18. +---------------------------------+--+
   19. | fs.s3a.secret.key is undefined  |
   20. +---------------------------------+--+
   21. 1 row selected (0.005 seconds)


Any help or pointers would be appreciated.

Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Selvamohan Neethiraj <sn...@apache.org>.
Are these variable names added to hive.security.authorization.sqlstd.confwhitelist.append?
I believe you need to set the Hive configuration variable hive.security.authorization.sqlstd.confwhitelist.append with all the variables that end users may need to set.

Please see: https://community.hortonworks.com/content/supportkb/49437/how-to-setup-multiple-properties-in-hivesecurityau.html
Please note that this value is a regex: a dot matches any single character (so literal dots should be escaped as \.), and a pipe separates alternatives (regex OR).
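The regex semantics can be sketched quickly. This uses Python's re module purely for illustration (Hive evaluates the whitelist with Java regexes, and the full-name match is an assumption here, so treat this as an approximation):

```python
import re

# Value appended to hive.security.authorization.sqlstd.confwhitelist.append.
# Literal dots are escaped; "|" separates alternatives.
APPEND_PATTERN = r"fs\.s3a\..*|fs\.s3n\..*"

def is_whitelisted(param: str) -> bool:
    # Assumes the whole parameter name is matched against the whitelist regex.
    return re.fullmatch(APPEND_PATTERN, param) is not None

print(is_whitelisted("fs.s3a.access.key"))    # True
print(is_whitelisted("fs.s3n.impl"))          # True
print(is_whitelisted("hive.exec.scratchdir")) # False
```

An unescaped dot would also match names like "fsXs3aXimpl", which is why the escapes matter.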

Thanks,
Selva-

From:  Don Bosco Durai <bo...@apache.org>
Reply-To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date:  Thursday, January 5, 2017 at 3:58 PM
To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject:  Re: Unable to connect to S3 after enabling Ranger with Hive

Is this for Ranger or Hive?

 

Bosco

 

 

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: <us...@ranger.incubator.apache.org>
Date: Thursday, January 5, 2017 at 12:16 PM
To: <us...@ranger.incubator.apache.org>
Subject: Re: Unable to connect to S3 after enabling Ranger with Hive

 

Can anyone help with how to set custom parameters after the Ranger setup?

 

On Thu, Jan 5, 2017 at 12:43 AM, Anandha L Ranganathan <an...@gmail.com> wrote:

 

We are now facing another problem: setting custom parameters. How do we set these parameters in beeline at runtime? These are our custom parameters.

SET airflow_cluster=${env:CLUSTER}; 
SET default_date=unix_timestamp('1970-01-01 00:00:00');
SET default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
SET default_future_date=unix_timestamp('2099-12-31 00:00:00');

We get these errors when we set these parameters.


0: jdbc:hive2://usw2dbdpmn01:10000/> SET default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
Error: Error while processing statement: Cannot modify default_timestamp at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

Thanks

Anand

 

 

On Mon, Dec 19, 2016 at 5:34 PM, Anandha L Ranganathan <an...@gmail.com> wrote:

Cool. After adding  the configuration it is working fine. 

0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
+------------------------------------------------------------------------------------+--+
|                                        set                                         |
+------------------------------------------------------------------------------------+--+
| hive.security.authorization.sqlstd.confwhitelist.append=|fs\.s3a\..*|fs\.s3n\..* |  |
+------------------------------------------------------------------------------------+--+


Thanks Selva for the quick help.

 

 

On Mon, Dec 19, 2016 at 5:29 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:

Hi,

 

Can you try appending the following string to the existing value of hive.security.authorization.sqlstd.confwhitelist:

 

|fs\.s3a\..*

 

And then restart HiveServer2 to see if this fixes the issue?

 

Thanks,

Selva-

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Monday, December 19, 2016 at 6:27 PM


To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Re: Unable to connect to S3 after enabling Ranger with Hive

 

Selva,

Please find the results.


set hive.security.authorization.sqlstd.confwhitelist;

| hive.security.authorization.sqlstd.confwhitelist=hive\.auto\..*|hive\.cbo\..*|hive\.convert\..*|hive\.exec\.dynamic\.partition.*|hive\.exec\..*\.dynamic\.partitions\..*|hive\.exec\.compress\..*|hive\.exec\.infer\..*|hive\.exec\.mode.local\..*|hive\.exec\.orc\..*|hive\.exec\.parallel.*|hive\.explain\..*|hive\.fetch.task\..*|hive\.groupby\..*|hive\.hbase\..*|hive\.index\..*|hive\.index\..*|hive\.intermediate\..*|hive\.join\..*|hive\.limit\..*|hive\.log\..*|hive\.mapjoin\..*|hive\.merge\..*|hive\.optimize\..*|hive\.orc\..*|hive\.outerjoin\..*|hive\.parquet\..*|hive\.ppd\..*|hive\.prewarm\..*|hive\.server2\.proxy\.user|hive\.skewjoin\..*|hive\.smbjoin\..*|hive\.stats\..*|hive\.tez\..*|hive\.vectorized\..*|mapred\.map\..*|mapred\.reduce\..*|mapred\.output\.compression\.codec|mapred\.job\.queuename|mapred\.output\.compression\.type|mapred\.min\.split\.size|mapreduce\.job\.reduce\.slowstart\.completedmaps|mapreduce\.job\.queuename|mapreduce\.job\.tags|mapreduce\.input\.fileinputformat\.split\.minsize|mapreduce\.map\..*|mapreduce\.reduce\..*|mapreduce\.output\.fileoutputformat\.compress\.codec|mapreduce\.output\.fileoutputformat\.compress\.type|tez\.am\..*|tez\.task\..*|tez\.runtime\..*|tez.queue.name|hive\.exec\.reducers\.bytes\.per\.reducer|hive\.client\.stats\.counters|hive\.exec\.default\.partition\.name|hive\.exec\.drop\.ignorenonexistent|hive\.counters\.group\.name|hive\.default\.fileformat\.managed|hive\.enforce\.bucketing|hive\.enforce\.bucketmapjoin|hive\.enforce\.sorting|hive\.enforce\.sortmergebucketmapjoin|hive\.cache\.expr\.evaluation|hive\.hashtable\.loadfactor|hive\.hashtable\.initialCapacity|hive\.ignore\.mapjoin\.hint|hive\.limit\.row\.max\.size|hive\.mapred\.mode|hive\.map\.aggr|hive\.compute\.query\.using\.stats|hive\.exec\.rowoffset|hive\.variable\.substitute|hive\.variable\.substitute\.depth|hive\.autogen\.columnalias\.prefix\.includefuncname|hive\.autogen\.columnalias\.prefix\.label|hive\.exec\.check\.crossproducts|hive\.compat|hive\.exec\.concatenat
e\.check\.index|hive\.display\.partition\.cols\.separately|hive\.error\.on\.empty\.partition|hive\.execution\.engine|hive\.exim\.uri\.scheme\.whitelist|hive\.file\.max\.footer|hive\.mapred\.supports\.subdirectories|hive\.insert\.into\.multilevel\.dirs|hive\.localize\.resource\.num\.wait\.attempts|hive\.multi\.insert\.move\.tasks\.share\.dependencies|hive\.support\.quoted\.identifiers|hive\.resultset\.use\.unique\.column\.names|hive\.analyze\.stmt\.collect\.partlevel\.stats|hive\.server2\.logging\.operation\.level|hive\.support\.sql11\.reserved\.keywords|hive\.exec\.job\.debug\.capture\.stacktraces|hive\.exec\.job\.debug\.timeout|hive\.exec\.max\.created\.files|hive\.exec\.reducers\.max|hive\.reorder\.nway\.joins|hive\.output\.file\.extension|hive\.exec\.show\.job\.failure\.debug\.info|hive\.exec\.tasklog\.debug\.timeout  |



0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
+-----------------------------------------------------------------------+--+
|                                  set                                  |
+-----------------------------------------------------------------------+--+
| hive.security.authorization.sqlstd.confwhitelist.append is undefined  |
+-----------------------------------------------------------------------+--+

 

On Mon, Dec 19, 2016 at 3:12 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:

Hi,

 

Can you also post here the value for the following two parameters:

 

hive.security.authorization.sqlstd.confwhitelist

hive.security.authorization.sqlstd.confwhitelist.append







Thanks,

Selva-

 

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Monday, December 19, 2016 at 5:54 PM
To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Re: Unable to connect to S3 after enabling Ranger with Hive

 

Selva,

We are using HDP and here are versions and results.

Hive :  1.2.1.2.4   

Ranger: 0.5.0.2.4

 



0: jdbc:hive2://usw2dxdpmn01:10010> set  hive.conf.restricted.list;
+----------------------------------------------------------------------------------------------------------------------------------------+--+
|                                                                  set                                                                   |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
| hive.conf.restricted.list=hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager  |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.command.whitelist;
+-------------------------------------------------------------------------------+--+
|                                      set                                      |
+-------------------------------------------------------------------------------+--+
| hive.security.command.whitelist=set,reset,dfs,add,list,delete,reload,compile  |
+-------------------------------------------------------------------------------+--+
1 row selected (0.008 seconds)




0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key=xxxxxxxxxxxxxxx;
Error: Error while processing statement: Cannot modify fs.s3a.access.key at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

 

On Mon, Dec 19, 2016 at 2:47 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:

Hi,

 

Which versions of Hive and Ranger are you using? Can you check whether Ranger has added the HiveServer2 parameters hive.conf.restricted.list and hive.security.command.whitelist in the Hive configuration file(s)?

Can you please list these parameter values here?

 

Thanks,

Selva-

 


Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Anandha L Ranganathan <an...@gmail.com>.
Thanks, Selva.

I will follow up with the Hive user group.


On Thu, Jan 5, 2017 at 2:20 PM, Selvamohan Neethiraj <sn...@apache.org>
wrote:

> Anand,
>
> As it seems to be a Hive specific issue,  can you reach out Hive user
> group to see if this issue has an alternative solution ?
>
> Thanks,
> Selva-
>

Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Selvamohan Neethiraj <sn...@apache.org>.
Anand,

As it seems to be a Hive-specific issue, can you reach out to the Hive user group to see if there is an alternative solution?

Thanks,
Selva-

From:  Anandha L Ranganathan <an...@gmail.com>
Reply-To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date:  Thursday, January 5, 2017 at 5:08 PM
To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject:  Re: Unable to connect to S3 after enabling Ranger with Hive

Don Bosco / Selvamohan,

Even though this provides a solution for handling the custom properties in Hive, whitelisting them in the HiveServer2 configuration is overhead, since it requires restarting HiveServer2.

 

Data engineers should have the freedom to assign job-specific values to variables at runtime. For example, some of these are default values we want to define in hiveconf.sql so they get loaded automatically when we launch beeline.

SET airflow_cluster=${env:CLUSTER};
SET default_id=0;
SET default_string='UNKNOWN';
SET default_double=0.0;
SET default_float=0.0;
SET default_bool=False;

A data engineer shouldn't have to rely on updating these parameters in the HiveServer2 configuration. Do you have something like "negation", where everything is accepted other than the system properties (hadoop.*, hive.*, fs.*, tez.*, etc.)?
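Short of true negation, one hedged workaround is to extend the append regex to name the custom variables explicitly. A rough sketch (Python's re used for illustration only; Hive uses Java regexes, and the variable names below are simply the ones from this thread, not a confirmed resolution):

```python
import re

# Hypothetical append value covering the S3A keys plus the job's custom
# variables; "airflow_cluster" and "default_*" are the names used earlier
# in this thread.
APPEND_PATTERN = r"fs\.s3a\..*|fs\.s3n\..*|airflow_cluster|default_.*"

def allowed(param: str) -> bool:
    # Assumes the whitelist check is a full match on the parameter name.
    return re.fullmatch(APPEND_PATTERN, param) is not None

print(allowed("airflow_cluster"))      # True
print(allowed("default_timestamp"))    # True
print(allowed("hive.metastore.uris"))  # False
```

With such a value appended and HiveServer2 restarted, the failing SET statements should pass the whitelist check, though each new family of variables still means a config change and restart.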




Thanks
Anand

 

On Thu, Jan 5, 2017 at 1:09 PM, Don Bosco Durai <bo...@apache.org> wrote:
In each component's config folder, there will be properties files prefixed with “ranger-”. You can update them directly if needed.

ls ranger-*

ranger-hive-audit.xml     ranger-policymgr-ssl.xml

ranger-hive-security.xml  ranger-security.xml

 

For any component configuration, you need to update the component's corresponding configuration files.

ls hive*.xml

hivemetastore-site.xml  hiveserver2-site.xml  hive-site.xml

 

If you are using Ambari, note that Ambari will reset the properties on each restart, so you will have to set the properties from the Ambari admin UI.
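As a concrete illustration, the append property reported working earlier in this thread would live in hiveserver2-site.xml as something like the fragment below. The choice of file and the leading pipe mirror what was shown in this thread; verify both against your own deployment:

```xml
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <!-- Regex alternation; escaped dots match literal dots. -->
  <value>|fs\.s3a\..*|fs\.s3n\..*</value>
</property>
```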

 

Bosco

 

 

 

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: <us...@ranger.incubator.apache.org>
Date: Thursday, January 5, 2017 at 1:05 PM


To: <us...@ranger.incubator.apache.org>
Subject: Re: Unable to connect to S3 after enabling Ranger with Hive
 

Yes, it is Ranger for Hive.

 

On Thu, Jan 5, 2017 at 12:58 PM, Don Bosco Durai <bo...@apache.org> wrote:

Is this for Ranger or Hive?

 

Bosco

 

 

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: <us...@ranger.incubator.apache.org>
Date: Thursday, January 5, 2017 at 12:16 PM
To: <us...@ranger.incubator.apache.org>


Subject: Re: Unable to connect to S3 after enabling Ranger with Hive

 

Can anyone help with how to set custom parameters after Ranger setup?

 

On Thu, Jan 5, 2017 at 12:43 AM, Anandha L Ranganathan <an...@gmail.com> wrote:

 

We are now facing another problem setting custom parameters. How do we set these parameters in beeline at runtime? These are our custom parameters.

SET airflow_cluster=${env:CLUSTER}; 
SET default_date=unix_timestamp('1970-01-01 00:00:00');
SET default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
SET default_future_date=unix_timestamp('2099-12-31 00:00:00');

We get these errors when we set these parameters.


0: jdbc:hive2://usw2dbdpmn01:10000/> SET default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
Error: Error while processing statement: Cannot modify default_timestamp at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

Thanks

Anand

 

 

On Mon, Dec 19, 2016 at 5:34 PM, Anandha L Ranganathan <an...@gmail.com> wrote:

Cool. After adding  the configuration it is working fine. 

0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
+------------------------------------------------------------------------------------+--+
|                                        set                                         |
+------------------------------------------------------------------------------------+--+
| hive.security.authorization.sqlstd.confwhitelist.append=|fs\.s3a\..*|fs\.s3n\..* |  |
+------------------------------------------------------------------------------------+--+

Thanks Selva for the quick help.
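[Editor's note] For context, the whitelist entries are regular expressions that must match the entire parameter name, which is why fs\.s3a\..* covers fs.s3a.access.key but not fs.s3a itself. A rough sketch of that full-match behavior, approximated here in Python (Hive applies Java regexes server-side):

```python
import re

# Pipe-separated patterns, as in
# hive.security.authorization.sqlstd.confwhitelist.append
WHITELIST = r"fs\.s3a\..*|fs\.s3n\..*"

def is_modifiable(param: str) -> bool:
    # The whole parameter name must match one pattern; a prefix or
    # substring match is not enough.
    return re.fullmatch(WHITELIST, param) is not None

print(is_modifiable("fs.s3a.access.key"))  # True
print(is_modifiable("fs.s3a"))             # False: needs a ".something" suffix
print(is_modifiable("fs.s3.buffer.dir"))   # False: only s3a/s3n are listed
```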

 

 

On Mon, Dec 19, 2016 at 5:29 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:

Hi,

 

Can you try appending the following string to the  existing value of  hive.security.authorization.sqlstd.confwhitelist 

 

|fs\.s3a\..*

 

And restart the HiveServer2 to see if this fixes this issue ?   
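[Editor's note] Alternatively, the append property extends the long built-in whitelist instead of replacing it. A sketch of the entry in hiveserver2-site.xml (patterns are regexes joined by "|"; verify which site file your distribution reads):

```xml
<!-- hiveserver2-site.xml: extend the SQL-standard-auth whitelist so
     that fs.s3a.* / fs.s3n.* parameters may be SET at runtime. -->
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>fs\.s3a\..*|fs\.s3n\..*</value>
</property>
```

HiveServer2 must be restarted for either property change to take effect.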

 

Thanks,

Selva-

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Monday, December 19, 2016 at 6:27 PM


To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Re: Unable to connect to S3 after enabling Ranger with Hive

 

Selva,

Please find the results.


set hive.security.authorization.sqlstd.confwhitelist;

| hive.security.authorization.sqlstd.confwhitelist=hive\.auto\..*|hive\.cbo\..*|hive\.convert\..*|hive\.exec\.dynamic\.partition.*|hive\.exec\..*\.dynamic\.partitions\..*|hive\.exec\.compress\..*|hive\.exec\.infer\..*|hive\.exec\.mode.local\..*|hive\.exec\.orc\..*|hive\.exec\.parallel.*|hive\.explain\..*|hive\.fetch.task\..*|hive\.groupby\..*|hive\.hbase\..*|hive\.index\..*|hive\.index\..*|hive\.intermediate\..*|hive\.join\..*|hive\.limit\..*|hive\.log\..*|hive\.mapjoin\..*|hive\.merge\..*|hive\.optimize\..*|hive\.orc\..*|hive\.outerjoin\..*|hive\.parquet\..*|hive\.ppd\..*|hive\.prewarm\..*|hive\.server2\.proxy\.user|hive\.skewjoin\..*|hive\.smbjoin\..*|hive\.stats\..*|hive\.tez\..*|hive\.vectorized\..*|mapred\.map\..*|mapred\.reduce\..*|mapred\.output\.compression\.codec|mapred\.job\.queuename|mapred\.output\.compression\.type|mapred\.min\.split\.size|mapreduce\.job\.reduce\.slowstart\.completedmaps|mapreduce\.job\.queuename|mapreduce\.job\.tags|mapreduce\.input\.fileinputformat\.split\.minsize|mapreduce\.map\..*|mapreduce\.reduce\..*|mapreduce\.output\.fileoutputformat\.compress\.codec|mapreduce\.output\.fileoutputformat\.compress\.type|tez\.am\..*|tez\.task\..*|tez\.runtime\..*|tez.queue.name|hive\.exec\.reducers\.bytes\.per\.reducer|hive\.client\.stats\.counters|hive\.exec\.default\.partition\.name|hive\.exec\.drop\.ignorenonexistent|hive\.counters\.group\.name|hive\.default\.fileformat\.managed|hive\.enforce\.bucketing|hive\.enforce\.bucketmapjoin|hive\.enforce\.sorting|hive\.enforce\.sortmergebucketmapjoin|hive\.cache\.expr\.evaluation|hive\.hashtable\.loadfactor|hive\.hashtable\.initialCapacity|hive\.ignore\.mapjoin\.hint|hive\.limit\.row\.max\.size|hive\.mapred\.mode|hive\.map\.aggr|hive\.compute\.query\.using\.stats|hive\.exec\.rowoffset|hive\.variable\.substitute|hive\.variable\.substitute\.depth|hive\.autogen\.columnalias\.prefix\.includefuncname|hive\.autogen\.columnalias\.prefix\.label|hive\.exec\.check\.crossproducts|hive\.compat|hive\.exec\.concatenat
e\.check\.index|hive\.display\.partition\.cols\.separately|hive\.error\.on\.empty\.partition|hive\.execution\.engine|hive\.exim\.uri\.scheme\.whitelist|hive\.file\.max\.footer|hive\.mapred\.supports\.subdirectories|hive\.insert\.into\.multilevel\.dirs|hive\.localize\.resource\.num\.wait\.attempts|hive\.multi\.insert\.move\.tasks\.share\.dependencies|hive\.support\.quoted\.identifiers|hive\.resultset\.use\.unique\.column\.names|hive\.analyze\.stmt\.collect\.partlevel\.stats|hive\.server2\.logging\.operation\.level|hive\.support\.sql11\.reserved\.keywords|hive\.exec\.job\.debug\.capture\.stacktraces|hive\.exec\.job\.debug\.timeout|hive\.exec\.max\.created\.files|hive\.exec\.reducers\.max|hive\.reorder\.nway\.joins|hive\.output\.file\.extension|hive\.exec\.show\.job\.failure\.debug\.info|hive\.exec\.tasklog\.debug\.timeout  |



0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
+-----------------------------------------------------------------------+--+
|                                  set                                  |
+-----------------------------------------------------------------------+--+
| hive.security.authorization.sqlstd.confwhitelist.append is undefined  |
+-----------------------------------------------------------------------+--+

 

On Mon, Dec 19, 2016 at 3:12 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:

Hi,

 

Can you also post here the value for the following two parameters:

 

hive.security.authorization.sqlstd.confwhitelist

hive.security.authorization.sqlstd.confwhitelist.append

 

 

Thanks,

Selva-

 

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Monday, December 19, 2016 at 5:54 PM
To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Re: Unable to connect to S3 after enabling Ranger with Hive

 

Selva,

We are using HDP and here are versions and results.

Hive :  1.2.1.2.4   

Ranger: 0.5.0.2.4

 



0: jdbc:hive2://usw2dxdpmn01:10010> set  hive.conf.restricted.list;
+----------------------------------------------------------------------------------------------------------------------------------------+--+
|                                                                  set                                                                   |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
| hive.conf.restricted.list=hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager  |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.command.whitelist;
+-------------------------------------------------------------------------------+--+
|                                      set                                      |
+-------------------------------------------------------------------------------+--+
| hive.security.command.whitelist=set,reset,dfs,add,list,delete,reload,compile  |
+-------------------------------------------------------------------------------+--+
1 row selected (0.008 seconds)




0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key=xxxxxxxxxxxxxxx;
Error: Error while processing statement: Cannot modify fs.s3a.access.key at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

 

On Mon, Dec 19, 2016 at 2:47 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:

Hi,

 

Which version of Hive and Ranger are you using ? Can you check if Ranger has added  hiveserver2 parameters  hive.conf.restricted.list,hive.security.command.whitelist  in the hive configuration file(s) ?

Can you please list out these parameter values here ?

 

Thanks,

Selva-

 

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Monday, December 19, 2016 at 5:30 PM
To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Unable to connect to S3 after enabling Ranger with Hive

 

Hi,
 

We are unable to create a table pointing to S3 after enabling Ranger.

This is a database we created before enabling Ranger:

SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
SET fs.s3a.access.key=xxxxxxx;
SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;

CREATE DATABASE IF NOT EXISTS backup_s3a1
COMMENT "s3a schema test"
LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";
After Ranger was enabled, we tried to create another database, but it throws an error:

0: jdbc:hive2://usw2dxdpmn01.local:> SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
Error: Error while processing statement: Cannot modify fs.s3a.impl at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)
 

I configured the credentials in core-site.xml, but the commands below always return "undefined". This is in our dev environment, where Ranger is enabled. In other environments where Ranger is not installed, we do not face this problem.

0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.impl;
+-----------------------------------------------------+--+
|                         set                         |
+-----------------------------------------------------+--+
| fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  |
+-----------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.access.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.secret.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.secret.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
 

Any help or pointers are appreciated.

 

 

 

 

 

 

 





 

On Mon, Dec 19, 2016 at 2:47 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:

Hi,

 

Which version of Hive and Ranger are you using? Can you check whether Ranger has added the HiveServer2 parameters hive.conf.restricted.list and hive.security.command.whitelist in the Hive configuration file(s)?

Can you please list these parameter values here?

 

Thanks,

Selva-

 

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Monday, December 19, 2016 at 5:30 PM
To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Unable to connect to S3 after enabling Ranger with Hive

 

Hi,
 

We are unable to create a table pointing to S3 after enabling Ranger.

This is the database we created before enabling Ranger:

SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
SET fs.s3a.access.key=xxxxxxx;
SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;

CREATE DATABASE IF NOT EXISTS backup_s3a1
COMMENT "s3a schema test"
LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";
After Ranger was enabled, we tried to create another database, but it throws an error:

0: jdbc:hive2://usw2dxdpmn01.local:> SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
Error: Error while processing statement: Cannot modify fs.s3a.impl at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)
  

I configured the credentials in core-site.xml, but the commands below always return "undefined" when I try to view the values. This is in our dev environment, where Ranger is enabled. In other environments where Ranger is not installed, we do not face this problem.

0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.impl;
+-----------------------------------------------------+--+
|                         set                         |
+-----------------------------------------------------+--+
| fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  |
+-----------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.access.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.secret.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.secret.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
 

Any help or pointers are appreciated.
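For readers following along, the static configuration the poster describes corresponds to the standard Hadoop S3A properties in core-site.xml. A minimal sketch (property names are the stock Hadoop ones; the values are placeholders, not from the thread):

```xml
<!-- Sketch: standard Hadoop S3A settings in core-site.xml.
     Access/secret key values here are placeholders. -->
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```

Note that HiveServer2 may report credential properties as "undefined" in a `set` query even when they are picked up from core-site.xml, so the beeline output alone does not prove the values are missing.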

 

 

 

 

 

 

 


Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Anandha L Ranganathan <an...@gmail.com>.
Yes, it is Ranger for Hive.

On Thu, Jan 5, 2017 at 12:58 PM, Don Bosco Durai <bo...@apache.org> wrote:

> Is this for Ranger or Hive?
>
>
>
> Bosco
>
>
>
>
>
> *From: *Anandha L Ranganathan <an...@gmail.com>
> *Reply-To: *<us...@ranger.incubator.apache.org>
> *Date: *Thursday, January 5, 2017 at 12:16 PM
> *To: *<us...@ranger.incubator.apache.org>
>
> *Subject: *Re: Unable to connect to S3 after enabling Ranger with Hive
>
>
>
> Can anyone help with how to set custom parameters after Ranger setup?
>
>
>
> On Thu, Jan 5, 2017 at 12:43 AM, Anandha L Ranganathan <
> analog.sony@gmail.com> wrote:
>
>
>
> We are now facing another problem setting custom parameters. How do we
> set these parameters in beeline at runtime? These are our custom
> parameters:
>
> SET airflow_cluster=${env:CLUSTER};
> SET default_date=unix_timestamp('1970-01-01 00:00:00');
> SET default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
> SET default_future_date=unix_timestamp('2099-12-31 00:00:00');
>
> We get these errors when we set these parameters.
>
>
> 0: jdbc:hive2://usw2dbdpmn01:10000/> SET default_timestamp=CAST('1970-01-01
> 00:00:00' AS TIMESTAMP);
> Error: Error while processing statement: Cannot modify default_timestamp
> at runtime. It is not in list of params that are allowed to be modified at
> runtime (state=42000,code=1)
>
> Thanks
>
> Anand
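An aside on the question above: the confwhitelist check applies to Hive *configuration* properties, since a bare `SET name=value` is treated as a config override. Session values like these can typically be set as Hive variables in the `hivevar` namespace instead, which is not subject to hive.security.authorization.sqlstd.confwhitelist. A sketch (not from the thread; the table name is hypothetical, and variable substitution is textual):

```sql
-- Sketch: set these as Hive variables rather than configuration
-- properties, so the runtime whitelist check does not apply.
SET hivevar:default_date=unix_timestamp('1970-01-01 00:00:00');
SET hivevar:default_future_date=unix_timestamp('2099-12-31 00:00:00');

-- Variables are substituted textually with ${hivevar:name} in queries:
SELECT *
FROM events                                  -- hypothetical table
WHERE event_ts >= ${hivevar:default_date};
</imports>
```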
>
>
>
>
>
> On Mon, Dec 19, 2016 at 5:34 PM, Anandha L Ranganathan <
> analog.sony@gmail.com> wrote:
>
> Cool. After adding the configuration, it is working fine.
>
> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.
> sqlstd.confwhitelist.append;
> +-----------------------------------------------------------
> -------------------------+--+
> |                                        set
> |
> +-----------------------------------------------------------
> -------------------------+--+
> | hive.security.authorization.sqlstd.confwhitelist.append=|fs\.s3a\..*|fs\.s3n\..*
> |  |
> +-----------------------------------------------------------
> -------------------------+--+
>
> Thanks Selva for the quick help.
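The working setting above corresponds to adding the append property to the HiveServer2 configuration (hive-site.xml, or a custom hiveserver2-site section on Ambari-managed HDP) and restarting HiveServer2. A sketch of the property, assuming the value shown in the thread; Hive joins the appended value onto the base whitelist as extra regex alternatives, so a leading `|` is not needed:

```xml
<!-- Sketch: extra whitelist entries are regex alternatives appended
     to hive.security.authorization.sqlstd.confwhitelist. -->
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>fs\.s3a\..*|fs\.s3n\..*</value>
</property>
```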
>
>
>
>
>
> On Mon, Dec 19, 2016 at 5:29 PM, Selvamohan Neethiraj <sn...@apache.org>
> wrote:
>
> Hi,
>
>
>
> Can you try appending the following string to the  existing value of
>  hive.security.authorization.sqlstd.confwhitelist
>
>
>
> |fs\.s3a\..*
>
>
>
> And restart HiveServer2 to see if this fixes the issue?
>
>
>
> Thanks,
>
> Selva-
>

Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Don Bosco Durai <bo...@apache.org>.
Is this for Ranger or Hive?

 

Bosco

 

 


Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Anandha L Ranganathan <an...@gmail.com>.
Can anyone help with how to set custom parameters after Ranger setup?



On Thu, Jan 5, 2017 at 12:43 AM, Anandha L Ranganathan <
analog.sony@gmail.com> wrote:

>
> We are just facing another problem to set  custom parameters.  How do we
> set these parameters in beeline at runtime ?    These are out custom
> parameters.
>
> SET airflow_cluster=${env:CLUSTER};
> SET default_date=unix_timestamp('1970-01-01 00:00:00');
> SET default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
> SET default_future_date=unix_timestamp('2099-12-31 00:00:00');
>
> We get these errors when we set these parameters.
>
> 0: jdbc:hive2://usw2dbdpmn01:10000/> SET default_timestamp=CAST('1970-01-01
> 00:00:00' AS TIMESTAMP);
> Error: Error while processing statement: Cannot modify default_timestamp
> at runtime. It is not in list of params that are allowed to be modified at
> runtime (state=42000,code=1)
>
>
> Thanks
> Anand
>
>
> On Mon, Dec 19, 2016 at 5:34 PM, Anandha L Ranganathan <
> analog.sony@gmail.com> wrote:
>
>> Cool. After adding  the configuration it is working fine.
>>
>> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sq
>> lstd.confwhitelist.append;
>> +-----------------------------------------------------------
>> -------------------------+--+
>> |                                        set
>> |
>> +-----------------------------------------------------------
>> -------------------------+--+
>> | hive.security.authorization.sqlstd.confwhitelist.append=|fs\.s3a\..*|fs\.s3n\..*
>> |  |
>> +-----------------------------------------------------------
>> -------------------------+--+
>>
>>
>> Thanks Selva for the quick help.
>>
>>
>>
>> On Mon, Dec 19, 2016 at 5:29 PM, Selvamohan Neethiraj <
>> sneethir@apache.org> wrote:
>>
>>> Hi,
>>>
>>> Can you try appending the following string to the  existing value of
>>>  hive.security.authorization.sqlstd.confwhitelist
>>>
>>> |fs\.s3a\..*
>>>
>>> And restart the HiveServer2 to see if this fixes this issue ?
>>>
>>> Thanks,
>>> Selva-
>>> From: Anandha L Ranganathan <an...@gmail.com>
>>> Reply-To: "user@ranger.incubator.apache.org" <
>>> user@ranger.incubator.apache.org>
>>> Date: Monday, December 19, 2016 at 6:27 PM
>>>
>>> To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org
>>> >
>>> Subject: Re: Unable to connect to S3 after enabling Ranger with Hive
>>>
>>> Selva,
>>>
>>> Please find the results.
>>>
>>> set hive.security.authorization.sqlstd.confwhitelist;
>>>
>>> | hive.security.authorization.sqlstd.confwhitelist=hive\.auto\..*|hive\.cbo\..*|hive\.convert\..*|
>>> hive\.exec\.dynamic\.partition.*|hive\.exec\..*\.dynamic\.partitions\..*|hive\.exec\.compress\..*|
>>> hive\.exec\.infer\..*|hive\.exec\.mode.local\..*|hive\.exec\.orc\..*|hive\.exec\.parallel.*|
>>> hive\.explain\..*|hive\.fetch.task\..*|hive\.groupby\..*|hive\.hbase\..*|hive\.index\..*|hive\.index\..*|
>>> hive\.intermediate\..*|hive\.join\..*|hive\.limit\..*|hive\.log\..*|hive\.mapjoin\..*|hive\.merge\..*|
>>> hive\.optimize\..*|hive\.orc\..*|hive\.outerjoin\..*|hive\.parquet\..*|hive\.ppd\..*|hive\.prewarm\..*|
>>> hive\.server2\.proxy\.user|hive\.skewjoin\..*|hive\.smbjoin\..*|hive\.stats\..*|hive\.tez\..*|
>>> hive\.vectorized\..*|mapred\.map\..*|mapred\.reduce\..*|mapred\.output\.compression\.codec|
>>> mapred\.job\.queuename|mapred\.output\.compression\.type|mapred\.min\.split\.size|
>>> mapreduce\.job\.reduce\.slowstart\.completedmaps|mapreduce\.job\.queuename|mapreduce\.job\.tags|
>>> mapreduce\.input\.fileinputformat\.split\.minsize|mapreduce\.map\..*|mapreduce\.reduce\..*|
>>> mapreduce\.output\.fileoutputformat\.compress\.codec|mapreduce\.output\.fileoutputformat\.compress\.type|
>>> tez\.am\..*|tez\.task\..*|tez\.runtime\..*|tez.queue.name|hive\.exec\.reducers\.bytes\.per\.reducer|
>>> hive\.client\.stats\.counters|hive\.exec\.default\.partition\.name|hive\.exec\.drop\.ignorenonexistent|
>>> hive\.counters\.group\.name|hive\.default\.fileformat\.managed|hive\.enforce\.bucketing|
>>> hive\.enforce\.bucketmapjoin|hive\.enforce\.sorting|hive\.enforce\.sortmergebucketmapjoin|
>>> hive\.cache\.expr\.evaluation|hive\.hashtable\.loadfactor|hive\.hashtable\.initialCapacity|
>>> hive\.ignore\.mapjoin\.hint|hive\.limit\.row\.max\.size|hive\.mapred\.mode|hive\.map\.aggr|
>>> hive\.compute\.query\.using\.stats|hive\.exec\.rowoffset|hive\.variable\.substitute|
>>> hive\.variable\.substitute\.depth|hive\.autogen\.columnalias\.prefix\.includefuncname|
>>> hive\.autogen\.columnalias\.prefix\.label|hive\.exec\.check\.crossproducts|hive\.compat|
>>> hive\.exec\.concatenate\.check\.index|hive\.display\.partition\.cols\.separately|
>>> hive\.error\.on\.empty\.partition|hive\.execution\.engine|hive\.exim\.uri\.scheme\.whitelist|
>>> hive\.file\.max\.footer|hive\.mapred\.supports\.subdirectories|hive\.insert\.into\.multilevel\.dirs|
>>> hive\.localize\.resource\.num\.wait\.attempts|hive\.multi\.insert\.move\.tasks\.share\.dependencies|
>>> hive\.support\.quoted\.identifiers|hive\.resultset\.use\.unique\.column\.names|
>>> hive\.analyze\.stmt\.collect\.partlevel\.stats|hive\.server2\.logging\.operation\.level|
>>> hive\.support\.sql11\.reserved\.keywords|hive\.exec\.job\.debug\.capture\.stacktraces|
>>> hive\.exec\.job\.debug\.timeout|hive\.exec\.max\.created\.files|hive\.exec\.reducers\.max|
>>> hive\.reorder\.nway\.joins|hive\.output\.file\.extension|hive\.exec\.show\.job\.failure\.debug\.info|
>>> hive\.exec\.tasklog\.debug\.timeout  |
>>>
>>>
>>>
>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
>>> +-----------------------------------------------------------------------+--+
>>> |                                  set                                  |
>>> +-----------------------------------------------------------------------+--+
>>> | hive.security.authorization.sqlstd.confwhitelist.append is undefined  |
>>> +-----------------------------------------------------------------------+--+
>>>
>>>
>>> On Mon, Dec 19, 2016 at 3:12 PM, Selvamohan Neethiraj <
>>> sneethir@apache.org> wrote:
>>>
>>>> Hi,
>>>>
>>>> Can you also post here the value for the following two parameters:
>>>>
>>>> hive.security.authorization.sqlstd.confwhitelist
>>>>
>>>> hive.security.authorization.sqlstd.confwhitelist.append
>>>>
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Selva-
>>>>
>>>> From: Anandha L Ranganathan <an...@gmail.com>
>>>> Reply-To: "user@ranger.incubator.apache.org" <
>>>> user@ranger.incubator.apache.org>
>>>> Date: Monday, December 19, 2016 at 5:54 PM
>>>> To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.
>>>> org>
>>>> Subject: Re: Unable to connect to S3 after enabling Ranger with Hive
>>>>
>>>> Selva,
>>>>
>>>> We are using HDP and here are versions and results.
>>>>
>>>> Hive :  1.2.1.2.4
>>>> Ranger: 0.5.0.2.4
>>>>
>>>>
>>>>
>>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.conf.restricted.list;
>>>> +--------------------------------------------------------------------------------------------------------------------------------------+--+
>>>> |                                                                  set                                                                 |
>>>> +--------------------------------------------------------------------------------------------------------------------------------------+--+
>>>> | hive.conf.restricted.list=hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager |
>>>> +--------------------------------------------------------------------------------------------------------------------------------------+--+
>>>> 1 row selected (0.006 seconds)
>>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.command.whitelist;
>>>> +-------------------------------------------------------------------------------+--+
>>>> |                                      set                                      |
>>>> +-------------------------------------------------------------------------------+--+
>>>> | hive.security.command.whitelist=set,reset,dfs,add,list,delete,reload,compile  |
>>>> +-------------------------------------------------------------------------------+--+
>>>> 1 row selected (0.008 seconds)
>>>>
>>>>
>>>>
>>>>
>>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key=xxxxxxxxxxxxxxx;
>>>> Error: Error while processing statement: Cannot modify fs.s3a.access.key at runtime.
>>>> It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)
>>>>
>>>> On Mon, Dec 19, 2016 at 2:47 PM, Selvamohan Neethiraj <
>>>> sneethir@apache.org> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Which version of Hive and Ranger are you using ? Can you check if
>>>>> Ranger has added  hiveserver2 parameters  hive.conf.restricted.list,
>>>>> hive.security.command.whitelist  in the hive configuration file(s) ?
>>>>> Can you please list out these parameter values here ?
>>>>>
>>>>> Thanks,
>>>>> Selva-
>>>>>
>>>>
>>>
>>
>

Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Anandha L Ranganathan <an...@gmail.com>.
We are now facing another problem with setting custom parameters. How do we
set these parameters in beeline at runtime? These are our custom parameters.

SET airflow_cluster=${env:CLUSTER};
SET default_date=unix_timestamp('1970-01-01 00:00:00');
SET default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
SET default_future_date=unix_timestamp('2099-12-31 00:00:00');

We get these errors when we set these parameters.

0: jdbc:hive2://usw2dbdpmn01:10000/> SET default_timestamp=CAST('1970-01-01
00:00:00' AS TIMESTAMP);
Error: Error while processing statement: Cannot modify default_timestamp at
runtime. It is not in list of params that are allowed to be modified at
runtime (state=42000,code=1)
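
One possibility worth checking (an assumption on my part, not something confirmed in this thread): these look like session variables rather than Hadoop/Hive configuration properties, and values set in the hivevar: namespace are substituted client-side, so they are generally not subject to the confwhitelist check that blocks plain SET. A sketch:

```sql
-- Sketch (unverified on this cluster): set the values as Hive variables
-- via the hivevar: namespace instead of plain SET. Variable substitution
-- happens textually before the statement is compiled, so expressions like
-- unix_timestamp(...) are carried through as text.
SET hivevar:airflow_cluster=${env:CLUSTER};
SET hivevar:default_date=unix_timestamp('1970-01-01 00:00:00');
SET hivevar:default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
SET hivevar:default_future_date=unix_timestamp('2099-12-31 00:00:00');

-- Later queries reference them as ${airflow_cluster}, ${default_date}, etc.
```

The same variables can also be passed on the beeline command line with repeated --hivevar name=value options.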


Thanks
Anand


On Mon, Dec 19, 2016 at 5:34 PM, Anandha L Ranganathan <
analog.sony@gmail.com> wrote:

> Cool. After adding  the configuration it is working fine.
>
> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
> +------------------------------------------------------------------------------------+--+
> |                                         set                                        |
> +------------------------------------------------------------------------------------+--+
> | hive.security.authorization.sqlstd.confwhitelist.append=|fs\.s3a\..*|fs\.s3n\..*   |
> +------------------------------------------------------------------------------------+--+
>
>
> Thanks Selva for the quick help.
>
>
>
> On Mon, Dec 19, 2016 at 5:29 PM, Selvamohan Neethiraj <sneethir@apache.org
> > wrote:
>
>> Hi,
>>
>> Can you try appending the following string to the  existing value of
>>  hive.security.authorization.sqlstd.confwhitelist
>>
>> |fs\.s3a\..*
>>
>> And restart the HiveServer2 to see if this fixes this issue ?
>>
>> Thanks,
>> Selva-

Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Anandha L Ranganathan <an...@gmail.com>.
Cool. After adding the configuration, it is working fine.

0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
+------------------------------------------------------------------------------------+--+
|                                         set                                        |
+------------------------------------------------------------------------------------+--+
| hive.security.authorization.sqlstd.confwhitelist.append=|fs\.s3a\..*|fs\.s3n\..*   |
+------------------------------------------------------------------------------------+--+


Thanks Selva for the quick help.
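
As an aside for anyone tuning these patterns: Hive compiles the whitelist entries as Java regexes and requires the full parameter name to match. A quick offline sanity check of an append pattern (a sketch; Python's re module is used here only to mirror the Java matching behaviour for these simple patterns):

```python
import re

# Sketch: offline sanity check of a confwhitelist.append pattern.
# Hive matches the whole parameter name against the regex; fullmatch()
# mirrors that behaviour for simple patterns like these.
append_pattern = re.compile(r"fs\.s3a\..*|fs\.s3n\..*")

def allowed(param: str) -> bool:
    """Return True if the parameter name is covered by the append pattern."""
    return append_pattern.fullmatch(param) is not None

assert allowed("fs.s3a.access.key")
assert allowed("fs.s3n.awsSecretAccessKey")
assert not allowed("fs.s3a")          # no trailing segment after "fs.s3a."
assert not allowed("hive.exec.foo")   # not covered by this append pattern
```

Checking a pattern this way before restarting HiveServer2 can save a restart cycle when the escape characters are off.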



On Mon, Dec 19, 2016 at 5:29 PM, Selvamohan Neethiraj <sn...@apache.org>
wrote:

> Hi,
>
> Can you try appending the following string to the  existing value of
>  hive.security.authorization.sqlstd.confwhitelist
>
> |fs\.s3a\..*
>
> And restart the HiveServer2 to see if this fixes this issue ?
>
> Thanks,
> Selva-

Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Selvamohan Neethiraj <sn...@apache.org>.
Hi,

Can you try appending the following string to the existing value of hive.security.authorization.sqlstd.confwhitelist:

|fs\.s3a\..*

And restart HiveServer2 to see if this fixes the issue?
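
For reference, a sketch of how this typically lands in hive-site.xml (or the equivalent HiveServer2 config section in Ambari); the property name is from this thread, while the exact value and the fs.s3n addition are assumptions to adapt to your setup:

```xml
<!-- Sketch: append fs.s3a.* (and optionally fs.s3n.*) to the SQL-standard
     authorization whitelist so these parameters can be SET at runtime.
     A HiveServer2 restart is required for the change to take effect. -->
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>fs\.s3a\..*|fs\.s3n\..*</value>
</property>
```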

Thanks,
Selva-
From:  Anandha L Ranganathan <an...@gmail.com>
Reply-To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date:  Monday, December 19, 2016 at 6:27 PM
To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject:  Re: Unable to connect to S3 after enabling Ranger with Hive

Selva,

Please find the results.

set hive.security.authorization.sqlstd.confwhitelist;

| hive.security.authorization.sqlstd.confwhitelist=hive\.auto\..*|hive\.cbo\..*|hive\.convert\..*|hive\.exec\.dynamic\.partition.*|hive\.exec\..*\.dynamic\.partitions\..*|hive\.exec\.compress\..*|hive\.exec\.infer\..*|hive\.exec\.mode.local\..*|hive\.exec\.orc\..*|hive\.exec\.parallel.*|hive\.explain\..*|hive\.fetch.task\..*|hive\.groupby\..*|hive\.hbase\..*|hive\.index\..*|hive\.index\..*|hive\.intermediate\..*|hive\.join\..*|hive\.limit\..*|hive\.log\..*|hive\.mapjoin\..*|hive\.merge\..*|hive\.optimize\..*|hive\.orc\..*|hive\.outerjoin\..*|hive\.parquet\..*|hive\.ppd\..*|hive\.prewarm\..*|hive\.server2\.proxy\.user|hive\.skewjoin\..*|hive\.smbjoin\..*|hive\.stats\..*|hive\.tez\..*|hive\.vectorized\..*|mapred\.map\..*|mapred\.reduce\..*|mapred\.output\.compression\.codec|mapred\.job\.queuename|mapred\.output\.compression\.type|mapred\.min\.split\.size|mapreduce\.job\.reduce\.slowstart\.completedmaps|mapreduce\.job\.queuename|mapreduce\.job\.tags|mapreduce\.input\.fileinputformat\.split\.minsize|mapreduce\.map\..*|mapreduce\.reduce\..*|mapreduce\.output\.fileoutputformat\.compress\.codec|mapreduce\.output\.fileoutputformat\.compress\.type|tez\.am\..*|tez\.task\..*|tez\.runtime\..*|tez.queue.name|hive\.exec\.reducers\.bytes\.per\.reducer|hive\.client\.stats\.counters|hive\.exec\.default\.partition\.name|hive\.exec\.drop\.ignorenonexistent|hive\.counters\.group\.name|hive\.default\.fileformat\.managed|hive\.enforce\.bucketing|hive\.enforce\.bucketmapjoin|hive\.enforce\.sorting|hive\.enforce\.sortmergebucketmapjoin|hive\.cache\.expr\.evaluation|hive\.hashtable\.loadfactor|hive\.hashtable\.initialCapacity|hive\.ignore\.mapjoin\.hint|hive\.limit\.row\.max\.size|hive\.mapred\.mode|hive\.map\.aggr|hive\.compute\.query\.using\.stats|hive\.exec\.rowoffset|hive\.variable\.substitute|hive\.variable\.substitute\.depth|hive\.autogen\.columnalias\.prefix\.includefuncname|hive\.autogen\.columnalias\.prefix\.label|hive\.exec\.check\.crossproducts|hive\.compat|hive\.exec\.concatenate\.check\.index|hive\.display\.partition\.cols\.separately|hive\.error\.on\.empty\.partition|hive\.execution\.engine|hive\.exim\.uri\.scheme\.whitelist|hive\.file\.max\.footer|hive\.mapred\.supports\.subdirectories|hive\.insert\.into\.multilevel\.dirs|hive\.localize\.resource\.num\.wait\.attempts|hive\.multi\.insert\.move\.tasks\.share\.dependencies|hive\.support\.quoted\.identifiers|hive\.resultset\.use\.unique\.column\.names|hive\.analyze\.stmt\.collect\.partlevel\.stats|hive\.server2\.logging\.operation\.level|hive\.support\.sql11\.reserved\.keywords|hive\.exec\.job\.debug\.capture\.stacktraces|hive\.exec\.job\.debug\.timeout|hive\.exec\.max\.created\.files|hive\.exec\.reducers\.max|hive\.reorder\.nway\.joins|hive\.output\.file\.extension|hive\.exec\.show\.job\.failure\.debug\.info|hive\.exec\.tasklog\.debug\.timeout  |



0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
+-----------------------------------------------------------------------+--+
|                                  set                                  |
+-----------------------------------------------------------------------+--+
| hive.security.authorization.sqlstd.confwhitelist.append is undefined  |
+-----------------------------------------------------------------------+--+
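For reference, no pattern in the whitelist above matches fs.s3a.* properties, which is why HiveServer2 rejects the SET commands. A plausible fix (a sketch, not verified on this cluster; the property name is from the Hive configuration docs) is to extend the whitelist via the append property in hive-site.xml and restart HiveServer2:

```xml
<!-- hive-site.xml: extend the SQL-standard-authorization whitelist.
     The value is a Java regex; dots must be escaped. The fs\.s3a\..*
     pattern here is an illustrative choice, not from the thread. -->
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>fs\.s3a\..*</value>
</property>
```

Note that this append property itself cannot be set at runtime, so it has to be placed in the configuration file (or the equivalent Ambari custom hive-site section) before HiveServer2 starts.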


On Mon, Dec 19, 2016 at 3:12 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:
Hi,

Can you also post the values of the following two parameters here:

hive.security.authorization.sqlstd.confwhitelist

hive.security.authorization.sqlstd.confwhitelist.append





Thanks,

Selva-


From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Monday, December 19, 2016 at 5:54 PM
To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Re: Unable to connect to S3 after enabling Ranger with Hive

Selva,

We are using HDP; here are the versions and results.

Hive :  1.2.1.2.4    
Ranger: 0.5.0.2.4



0: jdbc:hive2://usw2dxdpmn01:10010> set  hive.conf.restricted.list;
+----------------------------------------------------------------------------------------------------------------------------------------+--+
|                                                                  set                                                                   |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
| hive.conf.restricted.list=hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager  |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.command.whitelist;
+-------------------------------------------------------------------------------+--+
|                                      set                                      |
+-------------------------------------------------------------------------------+--+
| hive.security.command.whitelist=set,reset,dfs,add,list,delete,reload,compile  |
+-------------------------------------------------------------------------------+--+
1 row selected (0.008 seconds)




0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key=xxxxxxxxxxxxxxx;
Error: Error while processing statement: Cannot modify fs.s3a.access.key at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

On Mon, Dec 19, 2016 at 2:47 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:
Hi,

Which versions of Hive and Ranger are you using? Can you check whether Ranger has added the HiveServer2 parameters hive.conf.restricted.list and hive.security.command.whitelist in the Hive configuration file(s)?
Can you please list those parameter values here?

Thanks,
Selva-

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Monday, December 19, 2016 at 5:30 PM
To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Unable to connect to S3 after enabling Ranger with Hive

Hi,
 
We are unable to create a table pointing to S3 after enabling Ranger.

This is the database we created before enabling Ranger:
SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
SET fs.s3a.access.key=xxxxxxx;
SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;
 
 
CREATE DATABASE IF NOT EXISTS backup_s3a1
COMMENT "s3a schema test"
LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";
After Ranger was enabled, we tried to create another database, but it throws an error:
0: jdbc:hive2://usw2dxdpmn01.local:> SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
Error: Error while processing statement: Cannot modify fs.s3a.impl at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)
 
 

I configured the credentials in core-site.xml, but the commands below always return "undefined" when I try to view the values. This is in our dev environment, where Ranger is enabled. In other environments where Ranger is not installed, we do not face this problem.

0: jdbc:hive2://usw2dxdpmn01:10010> set  fs.s3a.impl;
+-----------------------------------------------------+--+
|                         set                         |
+-----------------------------------------------------+--+
| fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  |
+-----------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.access.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.secret.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.secret.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
Any help or pointers are appreciated.






Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Selvamohan Neethiraj <sn...@apache.org>.
Hi,

Can you also post the values of the following two parameters here:

hive.security.authorization.sqlstd.confwhitelist

hive.security.authorization.sqlstd.confwhitelist.append





Thanks,

Selva-


From:  Anandha L Ranganathan <an...@gmail.com>
Reply-To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date:  Monday, December 19, 2016 at 5:54 PM
To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject:  Re: Unable to connect to S3 after enabling Ranger with Hive

Selva,

We are using HDP; here are the versions and results.

Hive :  1.2.1.2.4    
Ranger: 0.5.0.2.4



0: jdbc:hive2://usw2dxdpmn01:10010> set  hive.conf.restricted.list;
+----------------------------------------------------------------------------------------------------------------------------------------+--+
|                                                                  set                                                                   |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
| hive.conf.restricted.list=hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager  |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.command.whitelist;
+-------------------------------------------------------------------------------+--+
|                                      set                                      |
+-------------------------------------------------------------------------------+--+
| hive.security.command.whitelist=set,reset,dfs,add,list,delete,reload,compile  |
+-------------------------------------------------------------------------------+--+
1 row selected (0.008 seconds)




0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key=xxxxxxxxxxxxxxx;
Error: Error while processing statement: Cannot modify fs.s3a.access.key at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

On Mon, Dec 19, 2016 at 2:47 PM, Selvamohan Neethiraj <sn...@apache.org> wrote:
Hi,

Which versions of Hive and Ranger are you using? Can you check whether Ranger has added the HiveServer2 parameters hive.conf.restricted.list and hive.security.command.whitelist in the Hive configuration file(s)?
Can you please list those parameter values here?

Thanks,
Selva-

From: Anandha L Ranganathan <an...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Monday, December 19, 2016 at 5:30 PM
To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Unable to connect to S3 after enabling Ranger with Hive

Hi,
 
We are unable to create a table pointing to S3 after enabling Ranger.

This is the database we created before enabling Ranger:
SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
SET fs.s3a.access.key=xxxxxxx;
SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;
 
 
CREATE DATABASE IF NOT EXISTS backup_s3a1
COMMENT "s3a schema test"
LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";
After Ranger was enabled, we tried to create another database, but it throws an error:
0: jdbc:hive2://usw2dxdpmn01.local:> SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
Error: Error while processing statement: Cannot modify fs.s3a.impl at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)
 
 

I configured the credentials in core-site.xml, but the commands below always return "undefined" when I try to view the values. This is in our dev environment, where Ranger is enabled. In other environments where Ranger is not installed, we do not face this problem.

0: jdbc:hive2://usw2dxdpmn01:10010> set  fs.s3a.impl;
+-----------------------------------------------------+--+
|                         set                         |
+-----------------------------------------------------+--+
| fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  |
+-----------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.access.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.secret.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.secret.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
Any help or pointers are appreciated.




Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Anandha L Ranganathan <an...@gmail.com>.
Selva,

We are using HDP; here are the versions and results.

Hive :  1.2.1.2.4
Ranger: 0.5.0.2.4



0: jdbc:hive2://usw2dxdpmn01:10010> set  hive.conf.restricted.list;
+----------------------------------------------------------------------------------------------------------------------------------------+--+
|                                                                  set                                                                   |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
| hive.conf.restricted.list=hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager  |
+----------------------------------------------------------------------------------------------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.command.whitelist;
+-------------------------------------------------------------------------------+--+
|                                      set                                      |
+-------------------------------------------------------------------------------+--+
| hive.security.command.whitelist=set,reset,dfs,add,list,delete,reload,compile  |
+-------------------------------------------------------------------------------+--+
1 row selected (0.008 seconds)




0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key=xxxxxxxxxxxxxxx;
Error: Error while processing statement: Cannot modify fs.s3a.access.key at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

On Mon, Dec 19, 2016 at 2:47 PM, Selvamohan Neethiraj <sn...@apache.org>
wrote:

> Hi,
>
> Which versions of Hive and Ranger are you using? Can you check whether Ranger
> has added the HiveServer2 parameters hive.conf.restricted.list and
> hive.security.command.whitelist in the Hive configuration file(s)?
> Can you please list those parameter values here?
>
> Thanks,
> Selva-
>
> From: Anandha L Ranganathan <an...@gmail.com>
> Reply-To: "user@ranger.incubator.apache.org" <
> user@ranger.incubator.apache.org>
> Date: Monday, December 19, 2016 at 5:30 PM
> To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
> Subject: Unable to connect to S3 after enabling Ranger with Hive
>
> Hi,
>
>
> We are unable to create a table pointing to S3 after enabling Ranger.
>
> This is the database we created before enabling Ranger:
>
>
>    1. SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
>    2. SET fs.s3a.access.key=xxxxxxx;
>    3. SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;
>    4.
>    5.
>    6. CREATE DATABASE IF NOT EXISTS backup_s3a1
>    7. COMMENT "s3a schema test"
>    8. LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";
>
> After Ranger was enabled, we tried to create another database, but it
> throws an error:
>
>
>    1. 0: jdbc:hive2://usw2dxdpmn01.local:> SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
>    2. Error: Error while processing statement: Cannot modify fs.s3a.impl at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)
>    3.
>
>
>
> I configured the credentials in core-site.xml, but the commands below
> always return "undefined" when I try to view the values. This is in our
> dev environment, where Ranger is enabled. In other environments where
> Ranger is not installed, we do not face this problem.
>
>
>    1. 0: jdbc:hive2://usw2dxdpmn01:10010> set  fs.s3a.impl;
>    2. +-----------------------------------------------------+--+
>    3. |                         set                         |
>    4. +-----------------------------------------------------+--+
>    5. | fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  |
>    6. +-----------------------------------------------------+--+
>    7. 1 row selected (0.006 seconds)
>    8. 0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key;
>    9. +---------------------------------+--+
>    10. |               set               |
>    11. +---------------------------------+--+
>    12. | fs.s3a.access.key is undefined  |
>    13. +---------------------------------+--+
>    14. 1 row selected (0.005 seconds)
>    15. 0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.secret.key;
>    16. +---------------------------------+--+
>    17. |               set               |
>    18. +---------------------------------+--+
>    19. | fs.s3a.secret.key is undefined  |
>    20. +---------------------------------+--+
>    21. 1 row selected (0.005 seconds)
>
>
> Any help or pointers are appreciated.
>
>

Re: Unable to connect to S3 after enabling Ranger with Hive

Posted by Selvamohan Neethiraj <sn...@apache.org>.
Hi,

Which versions of Hive and Ranger are you using? Can you check whether Ranger has added the HiveServer2 parameters hive.conf.restricted.list and hive.security.command.whitelist in the Hive configuration file(s)?
Can you please list those parameter values here?

Thanks,
Selva-

From:  Anandha L Ranganathan <an...@gmail.com>
Reply-To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date:  Monday, December 19, 2016 at 5:30 PM
To:  "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject:  Unable to connect to S3 after enabling Ranger with Hive

Hi,
 
We are unable to create a table pointing to S3 after enabling Ranger.

This is the database we created before enabling Ranger:
SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
SET fs.s3a.access.key=xxxxxxx;
SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;
 
 
CREATE DATABASE IF NOT EXISTS backup_s3a1
COMMENT "s3a schema test"
LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";
After Ranger was enabled, we tried to create another database, but it throws an error:
0: jdbc:hive2://usw2dxdpmn01.local:> SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
Error: Error while processing statement: Cannot modify fs.s3a.impl at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)
 
 

I configured the credentials in core-site.xml, but the commands below always return "undefined" when I try to view the values. This is in our dev environment, where Ranger is enabled. In other environments where Ranger is not installed, we do not face this problem.

0: jdbc:hive2://usw2dxdpmn01:10010> set  fs.s3a.impl;
+-----------------------------------------------------+--+
|                         set                         |
+-----------------------------------------------------+--+
| fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  |
+-----------------------------------------------------+--+
1 row selected (0.006 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.access.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.secret.key;
+---------------------------------+--+
|               set               |
+---------------------------------+--+
| fs.s3a.secret.key is undefined  |
+---------------------------------+--+
1 row selected (0.005 seconds)
Any help or pointers are appreciated.
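As a related aside, the per-session SET of access keys can be avoided entirely by resolving the S3A credentials from a Hadoop credential provider. A minimal core-site.xml sketch follows; the jceks path is a hypothetical example, not from this thread:

```xml
<!-- core-site.xml: have the S3A connector read fs.s3a.access.key and
     fs.s3a.secret.key from a JCEKS credential store instead of plaintext
     configuration or SET commands. The store path is illustrative. -->
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>jceks://hdfs/user/hive/s3.jceks</value>
</property>
```

The store can be populated with `hadoop credential create fs.s3a.access.key -provider jceks://hdfs/user/hive/s3.jceks` (and likewise for fs.s3a.secret.key); with this in place, a `CREATE DATABASE ... LOCATION "s3a://..."` needs no SET commands in the session, so the confwhitelist restriction never comes into play.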