Posted to user@ranger.apache.org by Hadoop Solutions <mu...@gmail.com> on 2015/03/06 07:30:54 UTC

How to enable HDFS plugin on HA NameNode Cluster

Hi,

I have installed Ranger from the Git repo and started the Ranger console.

I am trying to deploy the ranger-hdfs plugin on the active NN, but the
plugin agent is unable to contact Ranger.

Can you please let me know the right procedure for ranger-hdfs plugin
deployment on an HA NN cluster?


Regards,
Shaik

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Amith sha <am...@gmail.com>.
Hi,

From your log I can see that Ranger is searching for the xasecure.xml file,
but it can't be located. That might be due to a wrong installation or a
misconfiguration; reinstall Ranger. The Ranger installation procedure in the
Hortonworks 2.2 manual installation guide is very clear.

Thanks & Regards
Amithsha

On Fri, Mar 6, 2015 at 4:01 PM, Muthu Pandi <mu...@gmail.com> wrote:

> From your logs it looks like you are using HDP, and the audit.xml file is
> not in the CLASSPATH. What version of HDP are you using?
>
> This link covers Ranger installation on HDP 2.2:
> http://hortonworks.com/blog/apache-ranger-audit-framework/
> Make sure you have followed everything; below is the snippet from that link
> which deals with placing the XML file on the correct path.
>
> [inline image: configuration snippet from the linked post showing where to place the XML file]
>
>
>
> Regards,
> Muthupandi.K
>
>  Think before you print.
>
>
>
> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <mu...@gmail.com>
> wrote:
>
>> Hi Muthu,
>>
>> Please find the attached NN log.
>>
>> I have copied all the JARs to the /usr/hdp/current/hadoop-hdfs-namenode/lib
>> location.
>>
>> Please provide the right solution for this issue.
>>
>> Thanks,
>> Shaik
>>
>> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
>>
>>> Could you post the logs of your active NN, or of the NN where you
>>> deployed Ranger?
>>>
>>> Also, make sure you have copied your JARs to the respective folders and
>>> restarted the cluster.
>>>
>>>
>>>
>>> Regards,
>>> Muthupandi.K
>>>
>>>  Think before you print.
>>>
>>>
>>>
>>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <munna.hadoop@gmail.com
>>> > wrote:
>>>
>>>> Hi Amithsha,
>>>>
>>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>>
>>>> But the agents are still not listed under Ranger Agents. I am using HDP 2.2.
>>>>
>>>> Please advise on how to resolve this issue.
>>>>
>>>> Thanks,
>>>> Shaik
>>>>
>>>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>>>
>>>>> Hi Shaik,
>>>>>
>>>>> The steps below, from the Ranger Guide, describe how to enable the Ranger
>>>>> plugin in a Hadoop HA cluster:
>>>>>
>>>>>
>>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>>>> set up in each NameNode, and then pointed to the same HDFS repository
>>>>> set up in the Security Manager. Any policies created within that HDFS
>>>>> repository are automatically synchronized to the primary and secondary
>>>>> NameNodes through the installed Apache Ranger plugin. That way, if the
>>>>> primary NameNode fails, the secondary NameNode takes over and the
>>>>> Ranger plugin at that NameNode begins to enforce the same policies for
>>>>> access control.
>>>>> When creating the repository, you must include the fs.default.name for
>>>>> the primary NameNode. If the primary NameNode fails during policy
>>>>> creation, you can then temporarily use the fs.default.name of the
>>>>> secondary NameNode in the repository details to enable directory
>>>>> lookup for policy creation.
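>>>>>
>>>>> (For illustration: the fs.default.name entered in the repository details
>>>>> is the NameNode's RPC address, something like
>>>>>
>>>>> fs.default.name = hdfs://<primary-nn-host>:8020
>>>>>
>>>>> where the host and port are placeholders; use your primary NameNode's
>>>>> actual address.)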
>>>>>
>>>>> Thanks & Regards
>>>>> Amithsha
>>>>>
>>>>>
>>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>>> <mu...@gmail.com> wrote:
>>>>> > Hi,
>>>>> >
>>>>> > I have installed Ranger from the Git repo and started the Ranger
>>>>> > console.
>>>>> >
>>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN, but the
>>>>> > plugin agent is unable to contact Ranger.
>>>>> >
>>>>> > Can you please let me know the right procedure for ranger-hdfs plugin
>>>>> > deployment on an HA NN cluster?
>>>>> >
>>>>> >
>>>>> > Regards,
>>>>> > Shaik
>>>>>
>>>>
>>>>
>>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Gautam Borad <gb...@gmail.com>.
>
> 2015-03-08 05:28:12,179 INFO  config.ConfigWatcher
> (ConfigWatcher.java:fetchPolicyfromCahce(507)) - Policy Manager not
> available, using the last stored Policy Filenull


It seems the plugin is not able to communicate with the Policy Manager. Can
you please try to 'curl' that url:port from the host where the NN is running
and see if it's able to connect?
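
For example, a quick check from the NameNode host (the URL below is the
Policy Manager URL from your log; adjust it if your Ranger admin runs
elsewhere):

    curl -v http://sv2lxdpdsedi01.corp.equinix.com:6080/

If the connection is refused or times out, the problem is network or service
reachability rather than the plugin configuration.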

Also, can you provide the xasecure-hdfs-security.xml file? Another suspect
is the <name>xasecure.hdfs.policymgr.url.laststoredfile</name>
property.
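
For illustration, a populated configuration would look something like the
snippet below; the URL is taken from your log, while the cache file path is
a hypothetical example (substitute a real, writable local path):

<property>
  <name>xasecure.hdfs.policymgr.url</name>
  <value>http://sv2lxdpdsedi01.corp.equinix.com:6080</value>
</property>
<property>
  <name>xasecure.hdfs.policymgr.url.laststoredfile</name>
  <value>/tmp/hdfs_policy_cache.json</value>
</property>

The NullPointerException from java.io.FileInputStream in your stack trace is
consistent with lastStoredFileName being null, which is what an unset
laststoredfile property would produce.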

Thanks.

On Sun, Mar 8, 2015 at 11:23 AM, Hadoop Solutions <mu...@gmail.com>
wrote:

> I have manually added xasecure.hdfs.policymgr.url in
> xasecure-hdfs-security.xml in /etc/hadoop/conf, but I am still facing the
> following error:
>
> 2015-03-08 05:28:07,914 INFO  provider.AuditProviderFactory
> (AuditProviderFactory.java:<init>(60)) - AuditProviderFactory: creating..
> 2015-03-08 05:28:07,915 INFO  provider.AuditProviderFactory
> (AuditProviderFactory.java:init(90)) - AuditProviderFactory: initializing..
> 2015-03-08 05:28:08,005 INFO  namenode.FSNamesystem
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
> blocks.
> 2015-03-08 05:28:08,111 INFO  provider.DbAuditProvider
> (DbAuditProvider.java:<init>(68)) - DbAuditProvider: creating..
> 2015-03-08 05:28:08,115 INFO  provider.MultiDestAuditProvider
> (MultiDestAuditProvider.java:<init>(43)) - MultiDestAuditProvider:
> creating..
> 2015-03-08 05:28:08,116 INFO  provider.AsyncAuditProvider
> (AsyncAuditProvider.java:<init>(58)) - AsyncAuditProvider(DbAuditProvider):
> creating..
> 2015-03-08 05:28:08,117 INFO  provider.MultiDestAuditProvider
> (MultiDestAuditProvider.java:addAuditProvider(67)) -
> MultiDestAuditProvider.addAuditProvider(providerType=com.xasecure.audit.provider.DbAuditProvider)
> 2015-03-08 05:28:08,118 INFO  provider.AsyncAuditProvider
> (AsyncAuditProvider.java:init(81)) -
> AsyncAuditProvider(DbAuditProvider).init()
> 2015-03-08 05:28:08,118 INFO  provider.MultiDestAuditProvider
> (MultiDestAuditProvider.java:init(52)) - MultiDestAuditProvider.init()
> 2015-03-08 05:28:08,118 INFO  provider.BaseAuditProvider
> (BaseAuditProvider.java:init(47)) - BaseAuditProvider.init()
> 2015-03-08 05:28:08,119 INFO  provider.DbAuditProvider
> (DbAuditProvider.java:init(73)) - DbAuditProvider.init()
> 2015-03-08 05:28:08,119 INFO  provider.BaseAuditProvider
> (BaseAuditProvider.java:init(47)) - BaseAuditProvider.init()
> 2015-03-08 05:28:08,367 INFO  provider.DbAuditProvider
> (DbAuditProvider.java:start(121)) - DbAuditProvider.start()
> 2015-03-08 05:28:08,368 INFO  provider.DbAuditProvider
> (DbAuditProvider.java:init(170)) - DbAuditProvider: init()
> 2015-03-08 05:28:08,367 INFO  provider.AsyncAuditProvider
> (AsyncAuditProvider.java:run(134)) - ==> AsyncAuditProvider.run()
> 2015-03-08 05:28:10,919 INFO  config.PolicyRefresher
> (PolicyRefresher.java:<init>(60)) - Creating PolicyRefreshser with url:
> http://sv2lxdpdsedi01.corp.equinix.com:6080, refreshInterval: 60000,
> sslConfigFileName: null, lastStoredFileName: null
> 2015-03-08 05:28:10,936 INFO  config.ConfigWatcher
> (ConfigWatcher.java:<init>(127)) - Creating PolicyRefreshser with url:
> http://sv2lxdpdsedi01.corp.equinix.com:6080,
> refreshInterval(milliSeconds): 60000, sslConfigFileName: null,
> lastStoredFileName: null
> 2015-03-08 05:28:12,178 INFO  config.ConfigWatcher
> (ConfigWatcher.java:fetchPolicyfromCahce(507)) - Policy Manager not
> available, using the last stored Policy Filenull
> 2015-03-08 05:28:12,179 INFO  config.ConfigWatcher
> (ConfigWatcher.java:fetchPolicyfromCahce(507)) - Policy Manager not
> available, using the last stored Policy Filenull
> 2015-03-08 05:28:12,180 ERROR config.PolicyRefresher
> (PolicyRefresher.java:checkFileWatchDogThread(138)) - Unable to start the
> FileWatchDog for path [http://sv2lxdpdsedi01.corp.equinix.com:6080]
> java.lang.NullPointerException
>         at java.io.FileInputStream.<init>(FileInputStream.java:138)
>         at java.io.FileInputStream.<init>(FileInputStream.java:101)
>         at java.io.FileReader.<init>(FileReader.java:58)
>         at
> com.xasecure.pdp.config.ConfigWatcher.fetchPolicyfromCahce(ConfigWatcher.java:510)
>         at
> com.xasecure.pdp.config.ConfigWatcher.isFileChanged(ConfigWatcher.java:330)
>         at
> com.xasecure.pdp.config.ConfigWatcher.validateAndRun(ConfigWatcher.java:222)
>         at
> com.xasecure.pdp.config.ConfigWatcher.<init>(ConfigWatcher.java:133)
>         at
> com.xasecure.pdp.config.PolicyRefresher$1.<init>(PolicyRefresher.java:124)
>         at
> com.xasecure.pdp.config.PolicyRefresher.checkFileWatchDogThread(PolicyRefresher.java:124)
>         at
> com.xasecure.pdp.config.PolicyRefresher.<init>(PolicyRefresher.java:69)
>         at
> com.xasecure.pdp.hdfs.URLBasedAuthDB.<init>(URLBasedAuthDB.java:84)
>         at
> com.xasecure.pdp.hdfs.URLBasedAuthDB.getInstance(URLBasedAuthDB.java:67)
>         at
> com.xasecure.pdp.hdfs.XASecureAuthorizer.<clinit>(XASecureAuthorizer.java:28)
>
>
> On 7 March 2015 at 15:41, Ramesh Mani <rm...@hortonworks.com> wrote:
>
>>
>> From the error, what I see is that your Policy Manager URL is null.
>>
>> Can you check the following parameter value in xasecure-hdfs-security.xml
>> in /etc/hadoop/conf and let me know?
>>
>> <name>xasecure.hdfs.policymgr.url</name>
>>
>> It looks like when you enabled the plugin you hadn't filled in the correct
>> URL for the policymgr in the install.properties file.
>>
>> Thanks
>> Ramesh
>>
>>
>> On Mar 6, 2015, at 5:45 AM, Hadoop Solutions <mu...@gmail.com>
>> wrote:
>>
>> I saw the following exception related to Ranger:
>>
>> 2015-03-06 13:21:36,414 INFO  ipc.Server (Server.java:saslProcess(1306))
>> - Auth successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG
>> (auth:KERBEROS)
>> 2015-03-06 13:21:36,422 INFO  authorize.ServiceAuthorizationManager
>> (ServiceAuthorizationManager.java:authorize(118)) - Authorization
>> successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG
>> (auth:KERBEROS) for protocol=interface
>> org.apache.hadoop.hdfs.protocol.ClientProtocol
>> 2015-03-06 13:21:36,528 INFO  provider.AuditProviderFactory
>> (AuditProviderFactory.java:<init>(60)) - AuditProviderFactory: creating..
>> 2015-03-06 13:21:36,529 INFO  provider.AuditProviderFactory
>> (AuditProviderFactory.java:init(90)) - AuditProviderFactory: initializing..
>> 2015-03-06 13:21:36,645 INFO  provider.AuditProviderFactory
>> (AuditProviderFactory.java:init(107)) - AuditProviderFactory: Audit not
>> enabled..
>> 2015-03-06 13:21:36,660 INFO  config.PolicyRefresher
>> (PolicyRefresher.java:<init>(60)) - Creating PolicyRefreshser with url:
>> null, refreshInterval: 60000, sslConfigFileName: null, lastStoredFileName:
>> null
>> 2015-03-06 13:21:36,668 ERROR config.PolicyRefresher
>> (PolicyRefresher.java:checkFileWatchDogThread(138)) - Unable to start the
>> FileWatchDog for path [null]
>> java.lang.NullPointerException
>>         at
>> com.xasecure.pdp.config.ConfigWatcher.getAgentName(ConfigWatcher.java:474)
>>         at
>> com.xasecure.pdp.config.ConfigWatcher.<init>(ConfigWatcher.java:124)
>>         at
>> com.xasecure.pdp.config.PolicyRefresher$1.<init>(PolicyRefresher.java:124)
>>         at
>> com.xasecure.pdp.config.PolicyRefresher.checkFileWatchDogThread(PolicyRefresher.java:124)
>>         at
>> com.xasecure.pdp.config.PolicyRefresher.<init>(PolicyRefresher.java:69)
>>         at
>> com.xasecure.pdp.hdfs.URLBasedAuthDB.<init>(URLBasedAuthDB.java:84)
>>         at
>> com.xasecure.pdp.hdfs.URLBasedAuthDB.getInstance(URLBasedAuthDB.java:67)
>>         at
>> com.xasecure.pdp.hdfs.XASecureAuthorizer.<clinit>(XASecureAuthorizer.java:28)
>>         at java.lang.Class.forName0(Native Method)
>>         at java.lang.Class.forName(Class.java:190)
>>         at
>> com.xasecure.authorization.hadoop.HDFSAccessVerifierFactory.getInstance(HDFSAccessVerifierFactory.java:43)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.AuthorizeAccessForUser(XaSecureFSPermissionChecker.java:137)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:108)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>>         at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>>         at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>         at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>         at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>> 2015-03-06 13:21:36,670 INFO  hadoop.HDFSAccessVerifierFactory
>> (HDFSAccessVerifierFactory.java:getInstance(44)) - Created a new instance
>> of class: [com.xasecure.pdp.hdfs.XASecureAuthorizer] for HDFS Access
>> verification.
>> 2015-03-06 13:21:37,212 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 13:21:37,718 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 13:21:38,974 INFO  ipc.Server (Server.java:saslProcess(1306))
>> - Auth successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG
>> (auth:KERBEROS)
>> 2015-03-06 13:21:38,984 INFO  authorize.ServiceAuthorizationManager
>> (ServiceAuthorizationManager.java:authorize(118)) - Authorization
>> successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG
>> (auth:KERBEROS) for protocol=interface
>> org.apache.hadoop.hdfs.protocol.ClientProtocol
>> 2015-03-06 13:21:44,515 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 13:21:45,000 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 13:21:50,709 INFO  blockmanagement.CacheReplicationMonitor
>> (CacheReplicationMonitor.java:run(178)) - Rescanning after 30000
>> milliseconds
>> 2015-03-06 13:21:50,710 INFO  blockmanagement.CacheReplicationMonitor
>> (CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0
>> block(s) in 1 millisecond(s).
>>
>>
>> On 6 March 2015 at 21:38, Hadoop Solutions <mu...@gmail.com>
>> wrote:
>>
>>> After setting xasecure.add-hadoop-authorization to true, I am able to
>>> access the Hadoop file system.
>>>
>>> I have restarted HDFS and Ranger Admin, but I am still not able to see the
>>> agents in the Ranger console.
>>>
>>> On 6 March 2015 at 21:07, Amith sha <am...@gmail.com> wrote:
>>>
>>>> Set xasecure.add-hadoop-authorization to true. After editing the
>>>> configuration files, first restart Hadoop, then restart Ranger, and then
>>>> try to access it.
>>>>
>>>> Thanks & Regards
>>>> Amithsha
>>>>
>>>> On Fri, Mar 6, 2015 at 6:29 PM, Muthu Pandi <mu...@gmail.com>
>>>> wrote:
>>>>
>>>>> Did you get the plugin working? Are you able to see the agent in the
>>>>> Ranger console?
>>>>>
>>>>> It seems you have disabled Hadoop authorization in the audit file, so
>>>>> change
>>>>>
>>>>> xasecure.add-hadoop-authorization to true in the audit file.
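>>>>>
>>>>> (For illustration, assuming the audit file uses the standard Hadoop
>>>>> configuration format, the entry would look like:
>>>>>
>>>>> <property>
>>>>>   <name>xasecure.add-hadoop-authorization</name>
>>>>>   <value>true</value>
>>>>> </property>
>>>>>
>>>>> With this set to true, the plugin is expected to fall back to the normal
>>>>> HDFS permission checks when no Ranger policy matches.)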
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Regards,
>>>>> Muthupandi.K
>>>>>
>>>>>  Think before you print.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <
>>>>> munna.hadoop@gmail.com> wrote:
>>>>>
>>>>>> Thank you for your help, Muthu.
>>>>>>
>>>>>> I am using HDP 2.2, and I have added the audit.xml file. After that I am
>>>>>> seeing the following error messages.
>>>>>>
>>>>>> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem
>>>>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>>>>> blocks.
>>>>>> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem
>>>>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>>>>> blocks.
>>>>>> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) -
>>>>>> IPC Server handler 16 on 8020, call
>>>>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from
>>>>>> 10.193.153.220:50271 Call#5020 Retry#0
>>>>>> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException:
>>>>>> Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE,
>>>>>> directory="/"
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>>>>         at
>>>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>>>>>         at
>>>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>>>>>         at
>>>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>>>         at
>>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>>>>>
>>>>>>
>>>>>> Can you please let me know what this error relates to?
>>>>>>
>>>>>> Thanks,
>>>>>> Shaik
>>>>>>


-- 
Regards,
Gautam.

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Hadoop Solutions <mu...@gmail.com>.
I have manually added xasecure.hdfs.policymgr.url in
xasecure-hdfs-security.xml
in /etc/hadoop/conf, but I am still facing the following error:

2015-03-08 05:28:07,914 INFO  provider.AuditProviderFactory
(AuditProviderFactory.java:<init>(60)) - AuditProviderFactory: creating..
2015-03-08 05:28:07,915 INFO  provider.AuditProviderFactory
(AuditProviderFactory.java:init(90)) - AuditProviderFactory: initializing..
2015-03-08 05:28:08,005 INFO  namenode.FSNamesystem
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
blocks.
2015-03-08 05:28:08,111 INFO  provider.DbAuditProvider
(DbAuditProvider.java:<init>(68)) - DbAuditProvider: creating..
2015-03-08 05:28:08,115 INFO  provider.MultiDestAuditProvider
(MultiDestAuditProvider.java:<init>(43)) - MultiDestAuditProvider:
creating..
2015-03-08 05:28:08,116 INFO  provider.AsyncAuditProvider
(AsyncAuditProvider.java:<init>(58)) - AsyncAuditProvider(DbAuditProvider):
creating..
2015-03-08 05:28:08,117 INFO  provider.MultiDestAuditProvider
(MultiDestAuditProvider.java:addAuditProvider(67)) -
MultiDestAuditProvider.addAuditProvider(providerType=com.xasecure.audit.provider.DbAuditProvider)
2015-03-08 05:28:08,118 INFO  provider.AsyncAuditProvider
(AsyncAuditProvider.java:init(81)) -
AsyncAuditProvider(DbAuditProvider).init()
2015-03-08 05:28:08,118 INFO  provider.MultiDestAuditProvider
(MultiDestAuditProvider.java:init(52)) - MultiDestAuditProvider.init()
2015-03-08 05:28:08,118 INFO  provider.BaseAuditProvider
(BaseAuditProvider.java:init(47)) - BaseAuditProvider.init()
2015-03-08 05:28:08,119 INFO  provider.DbAuditProvider
(DbAuditProvider.java:init(73)) - DbAuditProvider.init()
2015-03-08 05:28:08,119 INFO  provider.BaseAuditProvider
(BaseAuditProvider.java:init(47)) - BaseAuditProvider.init()
2015-03-08 05:28:08,367 INFO  provider.DbAuditProvider
(DbAuditProvider.java:start(121)) - DbAuditProvider.start()
2015-03-08 05:28:08,368 INFO  provider.DbAuditProvider
(DbAuditProvider.java:init(170)) - DbAuditProvider: init()
2015-03-08 05:28:08,367 INFO  provider.AsyncAuditProvider
(AsyncAuditProvider.java:run(134)) - ==> AsyncAuditProvider.run()
2015-03-08 05:28:10,919 INFO  config.PolicyRefresher
(PolicyRefresher.java:<init>(60)) - Creating PolicyRefreshser with url:
http://sv2lxdpdsedi01.corp.equinix.com:6080, refreshInterval: 60000,
sslConfigFileName: null, lastStoredFileName: null
2015-03-08 05:28:10,936 INFO  config.ConfigWatcher
(ConfigWatcher.java:<init>(127)) - Creating PolicyRefreshser with url:
http://sv2lxdpdsedi01.corp.equinix.com:6080, refreshInterval(milliSeconds):
60000, sslConfigFileName: null, lastStoredFileName: null
2015-03-08 05:28:12,178 INFO  config.ConfigWatcher
(ConfigWatcher.java:fetchPolicyfromCahce(507)) - Policy Manager not
available, using the last stored Policy Filenull
2015-03-08 05:28:12,179 INFO  config.ConfigWatcher
(ConfigWatcher.java:fetchPolicyfromCahce(507)) - Policy Manager not
available, using the last stored Policy Filenull
2015-03-08 05:28:12,180 ERROR config.PolicyRefresher
(PolicyRefresher.java:checkFileWatchDogThread(138)) - Unable to start the
FileWatchDog for path [http://sv2lxdpdsedi01.corp.equinix.com:6080]
java.lang.NullPointerException
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at java.io.FileInputStream.<init>(FileInputStream.java:101)
        at java.io.FileReader.<init>(FileReader.java:58)
        at
com.xasecure.pdp.config.ConfigWatcher.fetchPolicyfromCahce(ConfigWatcher.java:510)
        at
com.xasecure.pdp.config.ConfigWatcher.isFileChanged(ConfigWatcher.java:330)
        at
com.xasecure.pdp.config.ConfigWatcher.validateAndRun(ConfigWatcher.java:222)
        at
com.xasecure.pdp.config.ConfigWatcher.<init>(ConfigWatcher.java:133)
        at
com.xasecure.pdp.config.PolicyRefresher$1.<init>(PolicyRefresher.java:124)
        at
com.xasecure.pdp.config.PolicyRefresher.checkFileWatchDogThread(PolicyRefresher.java:124)
        at
com.xasecure.pdp.config.PolicyRefresher.<init>(PolicyRefresher.java:69)
        at
com.xasecure.pdp.hdfs.URLBasedAuthDB.<init>(URLBasedAuthDB.java:84)
        at
com.xasecure.pdp.hdfs.URLBasedAuthDB.getInstance(URLBasedAuthDB.java:67)
        at
com.xasecure.pdp.hdfs.XASecureAuthorizer.<clinit>(XASecureAuthorizer.java:28)



Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Ramesh Mani <rm...@hortonworks.com>.
From the error, what I see is that your Policy Manager URL is null.

Can you check the following parameter value in xasecure-hdfs-security.xml in /etc/hadoop/conf and let me know?

<name>xasecure.hdfs.policymgr.url</name>

It looks like when you enabled the plugin you hadn't filled in the correct URL for the policymgr in the install.properties file.
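
For reference, a sketch of the relevant install.properties entry, assuming
the standard plugin installer layout; the host/port is taken from the log
earlier in this thread, so adjust it to your Ranger admin:

POLICY_MGR_URL=http://sv2lxdpdsedi01.corp.equinix.com:6080

Re-running the plugin enable script after fixing this should propagate the
value into xasecure.hdfs.policymgr.url in xasecure-hdfs-security.xml.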

Thanks
Ramesh


On Mar 6, 2015, at 5:45 AM, Hadoop Solutions <mu...@gmail.com> wrote:

> I saw following exception related to Ranger:
> 
> 2015-03-06 13:21:36,414 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG (auth:KERBEROS)
> 2015-03-06 13:21:36,422 INFO  authorize.ServiceAuthorizationManager (ServiceAuthorizationManager.java:authorize(118)) - Authorization successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
> 2015-03-06 13:21:36,528 INFO  provider.AuditProviderFactory (AuditProviderFactory.java:<init>(60)) - AuditProviderFactory: creating..
> 2015-03-06 13:21:36,529 INFO  provider.AuditProviderFactory (AuditProviderFactory.java:init(90)) - AuditProviderFactory: initializing..
> 2015-03-06 13:21:36,645 INFO  provider.AuditProviderFactory (AuditProviderFactory.java:init(107)) - AuditProviderFactory: Audit not enabled..
> 2015-03-06 13:21:36,660 INFO  config.PolicyRefresher (PolicyRefresher.java:<init>(60)) - Creating PolicyRefreshser with url: null, refreshInterval: 60000, sslConfigFileName: null, lastStoredFileName: null
> 2015-03-06 13:21:36,668 ERROR config.PolicyRefresher (PolicyRefresher.java:checkFileWatchDogThread(138)) - Unable to start the FileWatchDog for path [null]
> java.lang.NullPointerException
>         at com.xasecure.pdp.config.ConfigWatcher.getAgentName(ConfigWatcher.java:474)
>         at com.xasecure.pdp.config.ConfigWatcher.<init>(ConfigWatcher.java:124)
>         at com.xasecure.pdp.config.PolicyRefresher$1.<init>(PolicyRefresher.java:124)
>         at com.xasecure.pdp.config.PolicyRefresher.checkFileWatchDogThread(PolicyRefresher.java:124)
>         at com.xasecure.pdp.config.PolicyRefresher.<init>(PolicyRefresher.java:69)
>         at com.xasecure.pdp.hdfs.URLBasedAuthDB.<init>(URLBasedAuthDB.java:84)
>         at com.xasecure.pdp.hdfs.URLBasedAuthDB.getInstance(URLBasedAuthDB.java:67)
>         at com.xasecure.pdp.hdfs.XASecureAuthorizer.<clinit>(XASecureAuthorizer.java:28)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:190)
>         at com.xasecure.authorization.hadoop.HDFSAccessVerifierFactory.getInstance(HDFSAccessVerifierFactory.java:43)
>         at org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.AuthorizeAccessForUser(XaSecureFSPermissionChecker.java:137)
>         at org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:108)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 2015-03-06 13:21:36,670 INFO  hadoop.HDFSAccessVerifierFactory (HDFSAccessVerifierFactory.java:getInstance(44)) - Created a new instance of class: [com.xasecure.pdp.hdfs.XASecureAuthorizer] for HDFS Access verification.
> 2015-03-06 13:21:37,212 INFO  namenode.FSNamesystem (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file blocks.
> 2015-03-06 13:21:37,718 INFO  namenode.FSNamesystem (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file blocks.
> 2015-03-06 13:21:38,974 INFO  ipc.Server (Server.java:saslProcess(1306)) - Auth successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG (auth:KERBEROS)
> 2015-03-06 13:21:38,984 INFO  authorize.ServiceAuthorizationManager (ServiceAuthorizationManager.java:authorize(118)) - Authorization successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
> 2015-03-06 13:21:44,515 INFO  namenode.FSNamesystem (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file blocks.
> 2015-03-06 13:21:45,000 INFO  namenode.FSNamesystem (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file blocks.
> 2015-03-06 13:21:50,709 INFO  blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(178)) - Rescanning after 30000 milliseconds
> 2015-03-06 13:21:50,710 INFO  blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
> 
> 
> On 6 March 2015 at 21:38, Hadoop Solutions <mu...@gmail.com> wrote:
> After adding xasecure.add-hadoop-authorization as true, i can able to access hadoop file system.
> 
> I have restarted HDFS and Ranger Admin, but still i am not able to see agents in Ranger console.
> 
> On 6 March 2015 at 21:07, Amith sha <am...@gmail.com> wrote:
> make the xasecure.add-hadoop-authorization as true and after editing the configuration files first restart Hadoop then restart Ranger and then try to access
> 
> Thanks & Regards
> Amithsha
> 
> On Fri, Mar 6, 2015 at 6:29 PM, Muthu Pandi <mu...@gmail.com> wrote:
> Did you got the plugin working?? are u able to see the agent in ranger console?
> 
> You have disabled the Hadoop authorization in the audit file it seems so change 
> 
> xasecure.add-hadoop-authorization to true in the audit file
> 
> 
> 
> Regards
> Muthupandi.K
> 
>  Think before you print.
> 
> 
> 
> 
> 
> 
> 
> On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <mu...@gmail.com> wrote:
> Thank you for your help, Muthu.
> 
> I am using HDP 2.2 and i have added audit.xml file. After that i am seeing following error messages.
> 
> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file blocks.
> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file blocks.
> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) - IPC Server handler 16 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 10.193.153.220:50271 Call#5020 Retry#0
> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException: Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE, directory="/"
>         at org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 
> 
> Can you please let me know wht it belongs to.
> 
> Thanks,
> Shaik
> 
> 
> On 6 March 2015 at 18:31, Muthu Pandi <mu...@gmail.com> wrote:
> From your logs it looks like you are using HDP. and the audit.xml file is not in CLASSPATH what version of HDP you r using
> 
> this link is for ranger installation on HDP2.2 http://hortonworks.com/blog/apache-ranger-audit-framework/  make sure you have followed everything, below is the snippet from the earlier link which deals with the placing xml file on correct path.
> 
> <image.png>
> 
> Regards
> Muthupandi.K
> 
>  Think before you print.
> 
> 
> 
> 
> 
> 
> 
> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <mu...@gmail.com> wrote:
> Hi Mathu,
> 
> Please find the attached NN log.
> 
> i have copied all jar to /usr/hdp/current/hadoop-hdfs-namenode/lib location.
> 
> please provide me the right solution for this issue.
> 
> Thanks,
> Shaik
> 
> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
> Could you post the logs of your Active NN or the NN where you deployed your Ranger
> 
> Also Make sure you have copied your JARS to respective folders and restarted the cluster.
> 
> Regards
> Muthupandi.K
> 
>  Think before you print.
> 
> 
> 
> 
> 
> 
> 
> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <mu...@gmail.com> wrote:
> Hi Amithsha,
> 
> I have deployed ranger-hdfs-plugin again with HA NN url.
> 
> But, i am agents are not listed in Ranger Agents. I am using HDP 2.2.
> 
> Please advise to resolve this issue.
> 
> Thanks,
> Shaik
> 
> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
> Hi Shaik,
>
> The below steps, from the Ranger Guide, describe how to enable the Ranger
> plugin in a Hadoop HA cluster:
> 
> 
> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
> set up in each NameNode, and then pointed to the same HDFS repository
> set up in the Security Manager. Any policies created within that HDFS
> repository are automatically synchronized to the primary and secondary
> NameNodes through the installed Apache Ranger plugin. That way, if the
> primary NameNode fails, the secondary namenode takes over and the
> Ranger plugin at that NameNode begins to enforce the same policies for
> access control.
> When creating the repository, you must include the fs.default.name for
> the primary NameNode. If the primary NameNode fails during policy
> creation, you can then temporarily use the fs.default.name of the
> secondary NameNode in the repository details to enable directory
> lookup for policy creation.
> 
> Thanks & Regards
> Amithsha
> 
> 
> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
> <mu...@gmail.com> wrote:
> > Hi,
> >
> > I have installed Ranger from the Git repo and I have started the Ranger console.
> >
> > I am trying to deploy the ranger-hdfs plugin on the active NN. But the plugin
> > agent is unable to contact Ranger.
> >
> > Can you please let me know the right procedure for ranger-hdfs plugin
> > deployment on an HA NN cluster.
> >
> >
> > Regards,
> > Shaik
> 
> 
> 
> 
> 
> 
> 
> 
> 



Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Amith sha <am...@gmail.com>.
Have you enabled XAAUDIT.DB.IS_ENABLED=true?
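
For reference, that flag comes from the plugin's install.properties, which the
setup script reads when it generates the audit configuration. A minimal sketch,
assuming the standard enable-hdfs-plugin.sh flow on HDP 2.2; the host, database,
user, and password values below are placeholders, not values from this thread:

    # ranger-hdfs-plugin install.properties (database audit settings)
    XAAUDIT.DB.IS_ENABLED=true
    XAAUDIT.DB.FLAVOUR=MYSQL
    XAAUDIT.DB.HOSTNAME=<ranger-db-host>
    XAAUDIT.DB.DATABASE_NAME=ranger_audit
    XAAUDIT.DB.USER_NAME=rangerlogger
    XAAUDIT.DB.PASSWORD=<password>

Re-run the setup script after changing these so the generated xasecure audit
configuration picks them up.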


Thanks & Regards
Amithsha

On Sat, Mar 7, 2015 at 11:42 AM, Amith sha <am...@gmail.com> wrote:

> Check your database. Have you found any entry for audit?
>
> Thanks & Regards
> Amithsha
>
> On Fri, Mar 6, 2015 at 7:15 PM, Hadoop Solutions <mu...@gmail.com>
> wrote:
>
>> I saw the following exception related to Ranger:
>>
>> 2015-03-06 13:21:36,414 INFO  ipc.Server (Server.java:saslProcess(1306))
>> - Auth successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG
>> (auth:KERBEROS)
>> 2015-03-06 13:21:36,422 INFO  authorize.ServiceAuthorizationManager
>> (ServiceAuthorizationManager.java:authorize(118)) - Authorization
>> successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG
>> (auth:KERBEROS) for protocol=interface
>> org.apache.hadoop.hdfs.protocol.ClientProtocol
>> 2015-03-06 13:21:36,528 INFO  provider.AuditProviderFactory
>> (AuditProviderFactory.java:<init>(60)) - AuditProviderFactory: creating..
>> 2015-03-06 13:21:36,529 INFO  provider.AuditProviderFactory
>> (AuditProviderFactory.java:init(90)) - AuditProviderFactory: initializing..
>> 2015-03-06 13:21:36,645 INFO  provider.AuditProviderFactory
>> (AuditProviderFactory.java:init(107)) - AuditProviderFactory: Audit not
>> enabled..
>> 2015-03-06 13:21:36,660 INFO  config.PolicyRefresher
>> (PolicyRefresher.java:<init>(60)) - Creating PolicyRefreshser with url:
>> null, refreshInterval: 60000, sslConfigFileName: null, lastStoredFileName:
>> null
>> 2015-03-06 13:21:36,668 ERROR config.PolicyRefresher
>> (PolicyRefresher.java:checkFileWatchDogThread(138)) - Unable to start the
>> FileWatchDog for path [null]
>> java.lang.NullPointerException
>>         at
>> com.xasecure.pdp.config.ConfigWatcher.getAgentName(ConfigWatcher.java:474)
>>         at
>> com.xasecure.pdp.config.ConfigWatcher.<init>(ConfigWatcher.java:124)
>>         at
>> com.xasecure.pdp.config.PolicyRefresher$1.<init>(PolicyRefresher.java:124)
>>         at
>> com.xasecure.pdp.config.PolicyRefresher.checkFileWatchDogThread(PolicyRefresher.java:124)
>>         at
>> com.xasecure.pdp.config.PolicyRefresher.<init>(PolicyRefresher.java:69)
>>         at
>> com.xasecure.pdp.hdfs.URLBasedAuthDB.<init>(URLBasedAuthDB.java:84)
>>         at
>> com.xasecure.pdp.hdfs.URLBasedAuthDB.getInstance(URLBasedAuthDB.java:67)
>>         at
>> com.xasecure.pdp.hdfs.XASecureAuthorizer.<clinit>(XASecureAuthorizer.java:28)
>>         at java.lang.Class.forName0(Native Method)
>>         at java.lang.Class.forName(Class.java:190)
>>         at
>> com.xasecure.authorization.hadoop.HDFSAccessVerifierFactory.getInstance(HDFSAccessVerifierFactory.java:43)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.AuthorizeAccessForUser(XaSecureFSPermissionChecker.java:137)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:108)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>>         at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>>         at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>         at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>         at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>> 2015-03-06 13:21:36,670 INFO  hadoop.HDFSAccessVerifierFactory
>> (HDFSAccessVerifierFactory.java:getInstance(44)) - Created a new instance
>> of class: [com.xasecure.pdp.hdfs.XASecureAuthorizer] for HDFS Access
>> verification.
>> 2015-03-06 13:21:37,212 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 13:21:37,718 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 13:21:38,974 INFO  ipc.Server (Server.java:saslProcess(1306))
>> - Auth successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG
>> (auth:KERBEROS)
>> 2015-03-06 13:21:38,984 INFO  authorize.ServiceAuthorizationManager
>> (ServiceAuthorizationManager.java:authorize(118)) - Authorization
>> successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG
>> (auth:KERBEROS) for protocol=interface
>> org.apache.hadoop.hdfs.protocol.ClientProtocol
>> 2015-03-06 13:21:44,515 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 13:21:45,000 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 13:21:50,709 INFO  blockmanagement.CacheReplicationMonitor
>> (CacheReplicationMonitor.java:run(178)) - Rescanning after 30000
>> milliseconds
>> 2015-03-06 13:21:50,710 INFO  blockmanagement.CacheReplicationMonitor
>> (CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0
>> block(s) in 1 millisecond(s).
>>
>>
>> On 6 March 2015 at 21:38, Hadoop Solutions <mu...@gmail.com>
>> wrote:
>>
>>> After setting xasecure.add-hadoop-authorization to true, I am able to
>>> access the Hadoop file system.
>>>
>>> I have restarted HDFS and Ranger Admin, but I am still not able to see
>>> agents in the Ranger console.
>>>
>>> On 6 March 2015 at 21:07, Amith sha <am...@gmail.com> wrote:
>>>
>>>> Set xasecure.add-hadoop-authorization to true. After editing the
>>>> configuration files, first restart Hadoop, then restart Ranger, and then
>>>> try to access HDFS again.
>>>>
>>>> Thanks & Regards
>>>> Amithsha
>>>>
>>>> On Fri, Mar 6, 2015 at 6:29 PM, Muthu Pandi <mu...@gmail.com>
>>>> wrote:
>>>>
>>>>> Did you get the plugin working? Are you able to see the agent in the
>>>>> Ranger console?
>>>>>
>>>>> It seems you have disabled Hadoop authorization in the audit file, so
>>>>> change
>>>>>
>>>>> xasecure.add-hadoop-authorization to true in the audit file.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Regards
>>>>> Muthupandi.K
>>>>>
>>>>>  Think before you print.
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <
>>>>> munna.hadoop@gmail.com> wrote:
>>>>>
>>>>>> Thank you for your help, Muthu.
>>>>>>
>>>>>> I am using HDP 2.2 and I have added the audit.xml file. After that I am
>>>>>> seeing the following error messages.
>>>>>>
>>>>>> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem
>>>>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>>>>> blocks.
>>>>>> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem
>>>>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>>>>> blocks.
>>>>>> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) -
>>>>>> IPC Server handler 16 on 8020, call
>>>>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from
>>>>>> 10.193.153.220:50271 Call#5020 Retry#0
>>>>>> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException:
>>>>>> Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE,
>>>>>> directory="/"
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>>>>         at
>>>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>>>>>         at
>>>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>>>>>         at
>>>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>>>         at
>>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>>>>>
>>>>>>
>>>>>> Can you please let me know what this error relates to.
>>>>>>
>>>>>> Thanks,
>>>>>> Shaik
>>>>>>
>>>>>>
>>>>>> On 6 March 2015 at 18:31, Muthu Pandi <mu...@gmail.com> wrote:
>>>>>>
>>>>>>> From your logs it looks like you are using HDP, and the audit.xml
>>>>>>> file is not on the CLASSPATH. What version of HDP are you using?
>>>>>>>
>>>>>>> This link covers Ranger installation on HDP 2.2:
>>>>>>> http://hortonworks.com/blog/apache-ranger-audit-framework/  Make
>>>>>>> sure you have followed everything; below is the snippet from that
>>>>>>> link which deals with placing the xml file on the correct path.
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> Muthupandi.K
>>>>>>>
>>>>>>>  Think before you print.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <
>>>>>>> munna.hadoop@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Muthu,
>>>>>>>>
>>>>>>>> Please find the attached NN log.
>>>>>>>>
>>>>>>>> I have copied all the jars to the /usr/hdp/current/hadoop-hdfs-namenode/lib
>>>>>>>> location.
>>>>>>>>
>>>>>>>> Please advise on the right solution for this issue.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Shaik
>>>>>>>>
>>>>>>>> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Could you post the logs of your Active NN or the NN where you
>>>>>>>>> deployed Ranger?
>>>>>>>>>
>>>>>>>>> Also make sure you have copied your JARs to the respective folders
>>>>>>>>> and restarted the cluster.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Regards
>>>>>>>>> Muthupandi.K
>>>>>>>>>
>>>>>>>>>  Think before you print.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <
>>>>>>>>> munna.hadoop@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Amithsha,
>>>>>>>>>>
>>>>>>>>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>>>>>>>>
>>>>>>>>>> But my agents are not listed under Ranger Agents. I am using HDP
>>>>>>>>>> 2.2.
>>>>>>>>>>
>>>>>>>>>> Please advise to resolve this issue.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Shaik
>>>>>>>>>>
>>>>>>>>>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Shaik,
>>>>>>>>>>>
>>>>>>>>>>> The below steps, from the Ranger Guide, describe how to enable
>>>>>>>>>>> the Ranger plugin in a Hadoop HA cluster:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must
>>>>>>>>>>> be
>>>>>>>>>>> set up in each NameNode, and then pointed to the same HDFS
>>>>>>>>>>> repository
>>>>>>>>>>> set up in the Security Manager. Any policies created within that
>>>>>>>>>>> HDFS
>>>>>>>>>>> repository are automatically synchronized to the primary and
>>>>>>>>>>> secondary
>>>>>>>>>>> NameNodes through the installed Apache Ranger plugin. That way,
>>>>>>>>>>> if the
>>>>>>>>>>> primary NameNode fails, the secondary namenode takes over and the
>>>>>>>>>>> Ranger plugin at that NameNode begins to enforce the same
>>>>>>>>>>> policies for
>>>>>>>>>>> access control.
>>>>>>>>>>> When creating the repository, you must include the
>>>>>>>>>>> fs.default.name for
>>>>>>>>>>> the primary NameNode. If the primary NameNode fails during policy
>>>>>>>>>>> creation, you can then temporarily use the fs.default.name of
>>>>>>>>>>> the
>>>>>>>>>>> secondary NameNode in the repository details to enable directory
>>>>>>>>>>> lookup for policy creation.
>>>>>>>>>>>
>>>>>>>>>>> Thanks & Regards
>>>>>>>>>>> Amithsha
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>>>>>>>>> <mu...@gmail.com> wrote:
>>>>>>>>>>> > Hi,
>>>>>>>>>>> >
>>>>>>>>>>> > I have installed Ranger from the Git repo and I have started
>>>>>>>>>>> > the Ranger console.
>>>>>>>>>>> >
>>>>>>>>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN.
>>>>>>>>>>> > But the plugin agent is unable to contact Ranger.
>>>>>>>>>>> >
>>>>>>>>>>> > Can you please let me know the right procedure for ranger-hdfs
>>>>>>>>>>> > plugin deployment on an HA NN cluster.
>>>>>>>>>>> >
>>>>>>>>>>> >
>>>>>>>>>>> > Regards,
>>>>>>>>>>> > Shaik
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Amith sha <am...@gmail.com>.
Check your database. Have you found any entry for audit?
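
A quick way to look, as a sketch; the table and column names below follow the
Ranger 0.4 DB schema, so adjust them if your build differs:

    -- run against the database used by the Ranger policy admin
    SELECT COUNT(*) FROM xa_access_audit;

    SELECT event_time, request_user, resource_path, access_result
      FROM xa_access_audit
     ORDER BY event_time DESC
     LIMIT 10;

If the table stays empty, audit-to-DB is either not enabled or the plugin
cannot reach the database.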

Thanks & Regards
Amithsha

On Fri, Mar 6, 2015 at 7:15 PM, Hadoop Solutions <mu...@gmail.com>
wrote:

> I saw the following exception related to Ranger:
>
> 2015-03-06 13:21:36,414 INFO  ipc.Server (Server.java:saslProcess(1306)) -
> Auth successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG
> (auth:KERBEROS)
> 2015-03-06 13:21:36,422 INFO  authorize.ServiceAuthorizationManager
> (ServiceAuthorizationManager.java:authorize(118)) - Authorization
> successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG
> (auth:KERBEROS) for protocol=interface
> org.apache.hadoop.hdfs.protocol.ClientProtocol
> 2015-03-06 13:21:36,528 INFO  provider.AuditProviderFactory
> (AuditProviderFactory.java:<init>(60)) - AuditProviderFactory: creating..
> 2015-03-06 13:21:36,529 INFO  provider.AuditProviderFactory
> (AuditProviderFactory.java:init(90)) - AuditProviderFactory: initializing..
> 2015-03-06 13:21:36,645 INFO  provider.AuditProviderFactory
> (AuditProviderFactory.java:init(107)) - AuditProviderFactory: Audit not
> enabled..
> 2015-03-06 13:21:36,660 INFO  config.PolicyRefresher
> (PolicyRefresher.java:<init>(60)) - Creating PolicyRefreshser with url:
> null, refreshInterval: 60000, sslConfigFileName: null, lastStoredFileName:
> null
> 2015-03-06 13:21:36,668 ERROR config.PolicyRefresher
> (PolicyRefresher.java:checkFileWatchDogThread(138)) - Unable to start the
> FileWatchDog for path [null]
> java.lang.NullPointerException
>         at
> com.xasecure.pdp.config.ConfigWatcher.getAgentName(ConfigWatcher.java:474)
>         at
> com.xasecure.pdp.config.ConfigWatcher.<init>(ConfigWatcher.java:124)
>         at
> com.xasecure.pdp.config.PolicyRefresher$1.<init>(PolicyRefresher.java:124)
>         at
> com.xasecure.pdp.config.PolicyRefresher.checkFileWatchDogThread(PolicyRefresher.java:124)
>         at
> com.xasecure.pdp.config.PolicyRefresher.<init>(PolicyRefresher.java:69)
>         at
> com.xasecure.pdp.hdfs.URLBasedAuthDB.<init>(URLBasedAuthDB.java:84)
>         at
> com.xasecure.pdp.hdfs.URLBasedAuthDB.getInstance(URLBasedAuthDB.java:67)
>         at
> com.xasecure.pdp.hdfs.XASecureAuthorizer.<clinit>(XASecureAuthorizer.java:28)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:190)
>         at
> com.xasecure.authorization.hadoop.HDFSAccessVerifierFactory.getInstance(HDFSAccessVerifierFactory.java:43)
>         at
> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.AuthorizeAccessForUser(XaSecureFSPermissionChecker.java:137)
>         at
> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:108)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 2015-03-06 13:21:36,670 INFO  hadoop.HDFSAccessVerifierFactory
> (HDFSAccessVerifierFactory.java:getInstance(44)) - Created a new instance
> of class: [com.xasecure.pdp.hdfs.XASecureAuthorizer] for HDFS Access
> verification.
> 2015-03-06 13:21:37,212 INFO  namenode.FSNamesystem
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
> blocks.
> 2015-03-06 13:21:37,718 INFO  namenode.FSNamesystem
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
> blocks.
> 2015-03-06 13:21:38,974 INFO  ipc.Server (Server.java:saslProcess(1306)) -
> Auth successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG
> (auth:KERBEROS)
> 2015-03-06 13:21:38,984 INFO  authorize.ServiceAuthorizationManager
> (ServiceAuthorizationManager.java:authorize(118)) - Authorization
> successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG
> (auth:KERBEROS) for protocol=interface
> org.apache.hadoop.hdfs.protocol.ClientProtocol
> 2015-03-06 13:21:44,515 INFO  namenode.FSNamesystem
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
> blocks.
> 2015-03-06 13:21:45,000 INFO  namenode.FSNamesystem
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
> blocks.
> 2015-03-06 13:21:50,709 INFO  blockmanagement.CacheReplicationMonitor
> (CacheReplicationMonitor.java:run(178)) - Rescanning after 30000
> milliseconds
> 2015-03-06 13:21:50,710 INFO  blockmanagement.CacheReplicationMonitor
> (CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0
> block(s) in 1 millisecond(s).
>
>
> On 6 March 2015 at 21:38, Hadoop Solutions <mu...@gmail.com> wrote:
>
>> After setting xasecure.add-hadoop-authorization to true, I am able to
>> access the Hadoop file system.
>>
>> I have restarted HDFS and Ranger Admin, but I am still not able to see
>> agents in the Ranger console.
>>
>> On 6 March 2015 at 21:07, Amith sha <am...@gmail.com> wrote:
>>
>>> Set xasecure.add-hadoop-authorization to true. After editing the
>>> configuration files, first restart Hadoop, then restart Ranger, and then
>>> try to access HDFS again.
>>>
>>> Thanks & Regards
>>> Amithsha
>>>
>>> On Fri, Mar 6, 2015 at 6:29 PM, Muthu Pandi <mu...@gmail.com> wrote:
>>>
>>>> Did you get the plugin working? Are you able to see the agent in the
>>>> Ranger console?
>>>>
>>>> It seems you have disabled Hadoop authorization in the audit file, so
>>>> change
>>>>
>>>> xasecure.add-hadoop-authorization to true in the audit file.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Regards
>>>> Muthupandi.K
>>>>
>>>>  Think before you print.
>>>>
>>>>
>>>>
>>>> On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <
>>>> munna.hadoop@gmail.com> wrote:
>>>>
>>>>> Thank you for your help, Muthu.
>>>>>
>>>>> I am using HDP 2.2 and I have added the audit.xml file. After that I am
>>>>> seeing the following error messages.
>>>>>
>>>>> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem
>>>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>>>> blocks.
>>>>> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem
>>>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>>>> blocks.
>>>>> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) - IPC
>>>>> Server handler 16 on 8020, call
>>>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from
>>>>> 10.193.153.220:50271 Call#5020 Retry#0
>>>>> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException:
>>>>> Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE,
>>>>> directory="/"
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>>>         at
>>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>>         at
>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>>>>
>>>>>
>>>>> Can you please let me know what this error relates to.
>>>>>
>>>>> Thanks,
>>>>> Shaik
>>>>>
>>>>>
>>>>> On 6 March 2015 at 18:31, Muthu Pandi <mu...@gmail.com> wrote:
>>>>>
>>>>>> From your logs it looks like you are using HDP, and the audit.xml
>>>>>> file is not on the CLASSPATH. What version of HDP are you using?
>>>>>>
>>>>>> This link covers Ranger installation on HDP 2.2:
>>>>>> http://hortonworks.com/blog/apache-ranger-audit-framework/  Make
>>>>>> sure you have followed everything; below is the snippet from that
>>>>>> link which deals with placing the xml file on the correct path.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Muthupandi.K
>>>>>>
>>>>>>  Think before you print.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <
>>>>>> munna.hadoop@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Muthu,
>>>>>>>
>>>>>>> Please find the attached NN log.
>>>>>>>
>>>>>>> I have copied all the jars to the /usr/hdp/current/hadoop-hdfs-namenode/lib
>>>>>>> location.
>>>>>>>
>>>>>>> Please advise on the right solution for this issue.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Shaik
>>>>>>>
>>>>>>> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Could you post the logs of your Active NN or the NN where you
>>>>>>>> deployed Ranger?
>>>>>>>>
>>>>>>>> Also make sure you have copied your JARs to the respective folders
>>>>>>>> and restarted the cluster.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Muthupandi.K
>>>>>>>>
>>>>>>>>  Think before you print.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <
>>>>>>>> munna.hadoop@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Amithsha,
>>>>>>>>>
>>>>>>>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>>>>>>>
>>>>>>>>> But my agents are not listed under Ranger Agents. I am using HDP
>>>>>>>>> 2.2.
>>>>>>>>>
>>>>>>>>> Please advise to resolve this issue.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Shaik
>>>>>>>>>
>>>>>>>>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Shaik,
>>>>>>>>>>
>>>>>>>>>> The below steps, from the Ranger Guide, describe how to enable
>>>>>>>>>> the Ranger plugin in a Hadoop HA cluster:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must
>>>>>>>>>> be
>>>>>>>>>> set up in each NameNode, and then pointed to the same HDFS
>>>>>>>>>> repository
>>>>>>>>>> set up in the Security Manager. Any policies created within that
>>>>>>>>>> HDFS
>>>>>>>>>> repository are automatically synchronized to the primary and
>>>>>>>>>> secondary
>>>>>>>>>> NameNodes through the installed Apache Ranger plugin. That way,
>>>>>>>>>> if the
>>>>>>>>>> primary NameNode fails, the secondary namenode takes over and the
>>>>>>>>>> Ranger plugin at that NameNode begins to enforce the same
>>>>>>>>>> policies for
>>>>>>>>>> access control.
>>>>>>>>>> When creating the repository, you must include the
>>>>>>>>>> fs.default.name for
>>>>>>>>>> the primary NameNode. If the primary NameNode fails during policy
>>>>>>>>>> creation, you can then temporarily use the fs.default.name of the
>>>>>>>>>> secondary NameNode in the repository details to enable directory
>>>>>>>>>> lookup for policy creation.
>>>>>>>>>>
>>>>>>>>>> Thanks & Regards
>>>>>>>>>> Amithsha
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>>>>>>>> <mu...@gmail.com> wrote:
>>>>>>>>>> > Hi,
>>>>>>>>>> >
>>>>>>>>>> > I have installed Ranger from the Git repo and I have started
>>>>>>>>>> > the Ranger console.
>>>>>>>>>> >
>>>>>>>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN.
>>>>>>>>>> > But the plugin agent is unable to contact Ranger.
>>>>>>>>>> >
>>>>>>>>>> > Can you please let me know the right procedure for ranger-hdfs
>>>>>>>>>> > plugin deployment on an HA NN cluster.
>>>>>>>>>> >
>>>>>>>>>> >
>>>>>>>>>> > Regards,
>>>>>>>>>> > Shaik
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Hadoop Solutions <mu...@gmail.com>.
I saw the following exception related to Ranger:

2015-03-06 13:21:36,414 INFO  ipc.Server (Server.java:saslProcess(1306)) -
Auth successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG
(auth:KERBEROS)
2015-03-06 13:21:36,422 INFO  authorize.ServiceAuthorizationManager
(ServiceAuthorizationManager.java:authorize(118)) - Authorization
successful for jhs/sv2lxdpdsedi05.corp.equinix.com@LABBDP.ORG
(auth:KERBEROS) for protocol=interface
org.apache.hadoop.hdfs.protocol.ClientProtocol
2015-03-06 13:21:36,528 INFO  provider.AuditProviderFactory
(AuditProviderFactory.java:<init>(60)) - AuditProviderFactory: creating..
2015-03-06 13:21:36,529 INFO  provider.AuditProviderFactory
(AuditProviderFactory.java:init(90)) - AuditProviderFactory: initializing..
2015-03-06 13:21:36,645 INFO  provider.AuditProviderFactory
(AuditProviderFactory.java:init(107)) - AuditProviderFactory: Audit not
enabled..
2015-03-06 13:21:36,660 INFO  config.PolicyRefresher
(PolicyRefresher.java:<init>(60)) - Creating PolicyRefreshser with url:
null, refreshInterval: 60000, sslConfigFileName: null, lastStoredFileName:
null
2015-03-06 13:21:36,668 ERROR config.PolicyRefresher
(PolicyRefresher.java:checkFileWatchDogThread(138)) - Unable to start the
FileWatchDog for path [null]
java.lang.NullPointerException
        at
com.xasecure.pdp.config.ConfigWatcher.getAgentName(ConfigWatcher.java:474)
        at
com.xasecure.pdp.config.ConfigWatcher.<init>(ConfigWatcher.java:124)
        at
com.xasecure.pdp.config.PolicyRefresher$1.<init>(PolicyRefresher.java:124)
        at
com.xasecure.pdp.config.PolicyRefresher.checkFileWatchDogThread(PolicyRefresher.java:124)
        at
com.xasecure.pdp.config.PolicyRefresher.<init>(PolicyRefresher.java:69)
        at
com.xasecure.pdp.hdfs.URLBasedAuthDB.<init>(URLBasedAuthDB.java:84)
        at
com.xasecure.pdp.hdfs.URLBasedAuthDB.getInstance(URLBasedAuthDB.java:67)
        at
com.xasecure.pdp.hdfs.XASecureAuthorizer.<clinit>(XASecureAuthorizer.java:28)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:190)
        at
com.xasecure.authorization.hadoop.HDFSAccessVerifierFactory.getInstance(HDFSAccessVerifierFactory.java:43)
        at
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.AuthorizeAccessForUser(XaSecureFSPermissionChecker.java:137)
        at
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:108)
        at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
        at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
        at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
        at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
        at
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-03-06 13:21:36,670 INFO  hadoop.HDFSAccessVerifierFactory
(HDFSAccessVerifierFactory.java:getInstance(44)) - Created a new instance
of class: [com.xasecure.pdp.hdfs.XASecureAuthorizer] for HDFS Access
verification.
2015-03-06 13:21:37,212 INFO  namenode.FSNamesystem
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
blocks.
2015-03-06 13:21:37,718 INFO  namenode.FSNamesystem
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
blocks.
2015-03-06 13:21:38,974 INFO  ipc.Server (Server.java:saslProcess(1306)) -
Auth successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG
(auth:KERBEROS)
2015-03-06 13:21:38,984 INFO  authorize.ServiceAuthorizationManager
(ServiceAuthorizationManager.java:authorize(118)) - Authorization
successful for oozie/sv2lxdpdsedi07.corp.equinix.com@LABBDP.ORG
(auth:KERBEROS) for protocol=interface
org.apache.hadoop.hdfs.protocol.ClientProtocol
2015-03-06 13:21:44,515 INFO  namenode.FSNamesystem
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
blocks.
2015-03-06 13:21:45,000 INFO  namenode.FSNamesystem
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
blocks.
2015-03-06 13:21:50,709 INFO  blockmanagement.CacheReplicationMonitor
(CacheReplicationMonitor.java:run(178)) - Rescanning after 30000
milliseconds
2015-03-06 13:21:50,710 INFO  blockmanagement.CacheReplicationMonitor
(CacheReplicationMonitor.java:run(201)) - Scanned 0 directive(s) and 0
block(s) in 1 millisecond(s).
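
The lines that stand out to me are "Creating PolicyRefreshser with url: null"
and the NullPointerException that follows: the plugin seems to start without a
policy manager URL, so it has nothing to register with. For reference, a sketch
of the property that should carry that URL, assuming the xasecure-era plugin
configuration used by this build; the host and repository name are placeholders:

    <!-- xasecure-hdfs-security.xml, which must be on the NameNode classpath -->
    <property>
      <name>xasecure.hdfs.policymgr.url</name>
      <value>http://<ranger-admin-host>:6080/service/assets/policyList/<repository-name></value>
    </property>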


On 6 March 2015 at 21:38, Hadoop Solutions <mu...@gmail.com> wrote:

> After setting xasecure.add-hadoop-authorization to true, I am able to
> access the Hadoop file system.
>
> I have restarted HDFS and Ranger Admin, but I am still not able to see
> agents in the Ranger console.
>
> On 6 March 2015 at 21:07, Amith sha <am...@gmail.com> wrote:
>
>> Set xasecure.add-hadoop-authorization to true. After editing the
>> configuration files, first restart Hadoop, then restart Ranger, and then
>> try to access HDFS again.
>>
>> Thanks & Regards
>> Amithsha
>>
>> On Fri, Mar 6, 2015 at 6:29 PM, Muthu Pandi <mu...@gmail.com> wrote:
>>
>>> Did you get the plugin working? Are you able to see the agent in the
>>> Ranger console?
>>>
>>> It seems you have disabled Hadoop authorization in the audit file, so
>>> change
>>>
>>> xasecure.add-hadoop-authorization to true in the audit file.
>>>
>>>
>>>
>>>
>>>
>>> Regards
>>> Muthupandi.K
>>>
>>>  Think before you print.
>>>
>>>
>>>
>>> On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <munna.hadoop@gmail.com
>>> > wrote:
>>>
>>>> Thank you for your help, Muthu.
>>>>
>>>> I am using HDP 2.2 and I have added the audit.xml file. After that I am
>>>> seeing the following error messages.
>>>>
>>>> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem
>>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>>> blocks.
>>>> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem
>>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>>> blocks.
>>>> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) - IPC
>>>> Server handler 16 on 8020, call
>>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from
>>>> 10.193.153.220:50271 Call#5020 Retry#0
>>>> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException:
>>>> Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE,
>>>> directory="/"
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>>>>         at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>>>>         at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>>         at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>>>
>>>>
>>>> Can you please let me know what this error relates to.
>>>>
>>>> Thanks,
>>>> Shaik
>>>>
>>>>
>>>> On 6 March 2015 at 18:31, Muthu Pandi <mu...@gmail.com> wrote:
>>>>
>>>>> From your logs it looks like you are using HDP, and the audit.xml file
>>>>> is not on the CLASSPATH. What version of HDP are you using?
>>>>>
>>>>> This link covers Ranger installation on HDP 2.2:
>>>>> http://hortonworks.com/blog/apache-ranger-audit-framework/  Make sure
>>>>> you have followed everything; below is the snippet from that link
>>>>> which deals with placing the xml file on the correct path.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>>
>>>>>
>>>>> Regards
>>>>> Muthupandi.K
>>>>>
>>>>>  Think before you print.
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <
>>>>> munna.hadoop@gmail.com> wrote:
>>>>>
>>>>>> Hi Muthu,
>>>>>>
>>>>>> Please find the attached NN log.
>>>>>>
>>>>>> I have copied all the jars to the /usr/hdp/current/hadoop-hdfs-namenode/lib
>>>>>> location.
>>>>>>
>>>>>> Please advise on the right solution for this issue.
>>>>>>
>>>>>> Thanks,
>>>>>> Shaik
>>>>>>
>>>>>> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
>>>>>>
>>>>>>> Could you post the logs of your Active NN or the NN where you
>>>>>>> deployed Ranger?
>>>>>>>
>>>>>>> Also make sure you have copied your JARs to the respective folders
>>>>>>> and restarted the cluster.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> Muthupandi.K
>>>>>>>
>>>>>>>  Think before you print.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <
>>>>>>> munna.hadoop@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Amithsha,
>>>>>>>>
>>>>>>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>>>>>>
>>>>>>>> But my agents are not listed under Ranger Agents. I am using HDP
>>>>>>>> 2.2.
>>>>>>>>
>>>>>>>> Please advise to resolve this issue.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Shaik
>>>>>>>>
>>>>>>>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Shaik,
>>>>>>>>>
>>>>>>>>> The below steps, from the Ranger Guide, describe how to enable
>>>>>>>>> the Ranger plugin in a Hadoop HA cluster:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>>>>>>>> set up in each NameNode, and then pointed to the same HDFS
>>>>>>>>> repository
>>>>>>>>> set up in the Security Manager. Any policies created within that
>>>>>>>>> HDFS
>>>>>>>>> repository are automatically synchronized to the primary and
>>>>>>>>> secondary
>>>>>>>>> NameNodes through the installed Apache Ranger plugin. That way, if
>>>>>>>>> the
>>>>>>>>> primary NameNode fails, the secondary namenode takes over and the
>>>>>>>>> Ranger plugin at that NameNode begins to enforce the same policies
>>>>>>>>> for
>>>>>>>>> access control.
>>>>>>>>> When creating the repository, you must include the fs.default.name
>>>>>>>>> for
>>>>>>>>> the primary NameNode. If the primary NameNode fails during policy
>>>>>>>>> creation, you can then temporarily use the fs.default.name of the
>>>>>>>>> secondary NameNode in the repository details to enable directory
>>>>>>>>> lookup for policy creation.
>>>>>>>>>
>>>>>>>>> Thanks & Regards
>>>>>>>>> Amithsha
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>>>>>>> <mu...@gmail.com> wrote:
>>>>>>>>> > Hi,
>>>>>>>>> >
>>>>>>>>> > I have installed Ranger from the Git repo and I have started
>>>>>>>>> > the Ranger console.
>>>>>>>>> >
>>>>>>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN.
>>>>>>>>> > But the plugin agent is unable to contact Ranger.
>>>>>>>>> >
>>>>>>>>> > Can you please let me know the right procedure for ranger-hdfs
>>>>>>>>> > plugin deployment on an HA NN cluster.
>>>>>>>>> >
>>>>>>>>> >
>>>>>>>>> > Regards,
>>>>>>>>> > Shaik
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Hadoop Solutions <mu...@gmail.com>.
After setting xasecure.add-hadoop-authorization to true, I am able to
access the Hadoop file system.

I have restarted HDFS and Ranger Admin, but I am still not able to see
agents in the Ranger console.
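
As a quick check from the NameNode host (a sketch; the host and log path are
placeholders, and 6080 is the usual Ranger Admin port):

    # confirm the NameNode can reach Ranger Admin at all
    curl -v "http://<ranger-admin-host>:6080/"

    # then watch the NameNode log for PolicyRefresher lines with a non-null url
    tail -f /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | grep -i policyrefresher

As far as I understand, the agent only shows up in the console after the plugin
successfully pulls policies from Ranger.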

On 6 March 2015 at 21:07, Amith sha <am...@gmail.com> wrote:

> Set xasecure.add-hadoop-authorization to true. After editing the
> configuration files, first restart Hadoop, then restart Ranger, and then
> try to access HDFS again.
>
> Thanks & Regards
> Amithsha
>
> On Fri, Mar 6, 2015 at 6:29 PM, Muthu Pandi <mu...@gmail.com> wrote:
>
>> Did you get the plugin working? Are you able to see the agent in the
>> Ranger console?
>>
>> It seems you have disabled Hadoop authorization in the audit file, so
>> change
>>
>> xasecure.add-hadoop-authorization to true in the audit file.
>>
>>
>>
>>
>>
>> Regards
>> Muthupandi.K
>>
>>  Think before you print.
>>
>>
>>
>> On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <mu...@gmail.com>
>> wrote:
>>
>>> Thank you for your help, Muthu.
>>>
>>> I am using HDP 2.2 and I have added the audit.xml file. After that I am
>>> seeing the following error messages.
>>>
>>> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem
>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>> blocks.
>>> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem
>>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>>> blocks.
>>> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) - IPC
>>> Server handler 16 on 8020, call
>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from
>>> 10.193.153.220:50271 Call#5020 Retry#0
>>> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException:
>>> Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE,
>>> directory="/"
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>>>         at
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>>>         at
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>         at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>>         at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>>
>>>
>>> Can you please let me know what this error relates to.
>>>
>>> Thanks,
>>> Shaik
>>>
>>>
>>> On 6 March 2015 at 18:31, Muthu Pandi <mu...@gmail.com> wrote:
>>>
>>>> From your logs it looks like you are using HDP, and the audit.xml file
>>>> is not on the CLASSPATH. What version of HDP are you using?
>>>>
>>>> This link covers Ranger installation on HDP 2.2:
>>>> http://hortonworks.com/blog/apache-ranger-audit-framework/  Make sure
>>>> you have followed everything; below is the snippet from that link
>>>> which deals with placing the xml file on the correct path.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>>
>>>>
>>>> Regards
>>>> Muthupandi.K
>>>>
>>>>  Think before you print.
>>>>
>>>>
>>>>
>>>> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <
>>>> munna.hadoop@gmail.com> wrote:
>>>>
>>>>> Hi Muthu,
>>>>>
>>>>> Please find the attached NN log.
>>>>>
>>>>> I have copied all the jars to the /usr/hdp/current/hadoop-hdfs-namenode/lib
>>>>> location.
>>>>>
>>>>> Please advise on the right solution for this issue.
>>>>>
>>>>> Thanks,
>>>>> Shaik
>>>>>
>>>>> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
>>>>>
>>>>>> Could you post the logs of your Active NN or the NN where you
>>>>>> deployed Ranger?
>>>>>>
>>>>>> Also make sure you have copied your JARs to the respective folders
>>>>>> and restarted the cluster.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Muthupandi.K
>>>>>>
>>>>>>  Think before you print.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <
>>>>>> munna.hadoop@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Amithsha,
>>>>>>>
>>>>>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>>>>>
>>>>>>> But my agents are not listed under Ranger Agents. I am using HDP 2.2.
>>>>>>>
>>>>>>> Please advise to resolve this issue.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Shaik
>>>>>>>
>>>>>>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Shaik,
>>>>>>>>
>>>>>>>> The below steps, from the Ranger Guide, describe how to enable
>>>>>>>> the Ranger plugin in a Hadoop HA cluster:
>>>>>>>>
>>>>>>>>
>>>>>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>>>>>>> set up in each NameNode, and then pointed to the same HDFS
>>>>>>>> repository
>>>>>>>> set up in the Security Manager. Any policies created within that
>>>>>>>> HDFS
>>>>>>>> repository are automatically synchronized to the primary and
>>>>>>>> secondary
>>>>>>>> NameNodes through the installed Apache Ranger plugin. That way, if
>>>>>>>> the
>>>>>>>> primary NameNode fails, the secondary namenode takes over and the
>>>>>>>> Ranger plugin at that NameNode begins to enforce the same policies
>>>>>>>> for
>>>>>>>> access control.
>>>>>>>> When creating the repository, you must include the fs.default.name
>>>>>>>> for
>>>>>>>> the primary NameNode. If the primary NameNode fails during policy
>>>>>>>> creation, you can then temporarily use the fs.default.name of the
>>>>>>>> secondary NameNode in the repository details to enable directory
>>>>>>>> lookup for policy creation.
>>>>>>>>
>>>>>>>> Thanks & Regards
>>>>>>>> Amithsha
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>>>>>> <mu...@gmail.com> wrote:
>>>>>>>> > Hi,
>>>>>>>> >
>>>>>>>> > I have installed Ranger from the Git repo and I have started
>>>>>>>> > the Ranger console.
>>>>>>>> >
>>>>>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN.
>>>>>>>> > But the plugin agent is unable to contact Ranger.
>>>>>>>> >
>>>>>>>> > Can you please let me know the right procedure for ranger-hdfs
>>>>>>>> > plugin deployment on an HA NN cluster.
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > Regards,
>>>>>>>> > Shaik
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Amith sha <am...@gmail.com>.
Set xasecure.add-hadoop-authorization to true. After editing the
configuration files, first restart Hadoop, then restart Ranger, and then try
to access HDFS again.
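
A minimal sketch of that change; the thread refers to the audit file, though
some xasecure-era builds keep this property in xasecure-hdfs-security.xml
instead, so check your install:

    <property>
      <name>xasecure.add-hadoop-authorization</name>
      <value>true</value>
      <!-- when true, the plugin falls back to the normal HDFS permission
           checks for paths that no Ranger policy covers -->
    </property>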

Thanks & Regards
Amithsha

On Fri, Mar 6, 2015 at 6:29 PM, Muthu Pandi <mu...@gmail.com> wrote:

> Did you get the plugin working? Are you able to see the agent in the
> Ranger console?
>
> It seems you have disabled Hadoop authorization in the audit file, so
> change
>
> xasecure.add-hadoop-authorization to true in the audit file.
>
>
>
>
>
> Regards
> Muthupandi.K
>
>  Think before you print.
>
>
>
> On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <mu...@gmail.com>
> wrote:
>
>> Thank you for your help, Muthu.
>>
>> I am using HDP 2.2 and I have added the audit.xml file. After that I am
>> seeing the following error messages.
>>
>> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem
>> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
>> blocks.
>> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) - IPC
>> Server handler 16 on 8020, call
>> org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from
>> 10.193.153.220:50271 Call#5020 Retry#0
>> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException:
>> Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE,
>> directory="/"
>>         at
>> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>>         at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>>         at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>         at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>         at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>
>>
>> Can you please let me know what this error relates to.
>>
>> Thanks,
>> Shaik
>>
>>
>> On 6 March 2015 at 18:31, Muthu Pandi <mu...@gmail.com> wrote:
>>
>>> From your logs it looks like you are using HDP, and the audit.xml file
>>> is not in the CLASSPATH. What version of HDP are you using?
>>>
>>> This link covers Ranger installation on HDP 2.2:
>>> http://hortonworks.com/blog/apache-ranger-audit-framework/  Make sure
>>> you have followed everything; below is the snippet from that link
>>> which deals with placing the xml file on the correct path.
>>>
>>> [image: Inline image 1]
>>>
>>>
>>>
>>> *Regards*
>>> *Muthupandi.K*
>>>
>>>  Think before you print.
>>>
>>>
>>>
>>> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <munna.hadoop@gmail.com
>>> > wrote:
>>>
>>>> Hi Muthu,
>>>>
>>>> Please find the attached NN log.
>>>>
>>>> I have copied all the JARs to the /usr/hdp/current/hadoop-hdfs-namenode/lib
>>>> location.
>>>>
>>>> Please advise on the right solution for this issue.
>>>>
>>>> Thanks,
>>>> Shaik
>>>>
>>>> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
>>>>
>>>>> Could you post the logs of your Active NN, or of the NN where you
>>>>> deployed the Ranger plugin?
>>>>>
>>>>> Also make sure you have copied the JARs to the respective folders and
>>>>> restarted the cluster.
>>>>>
>>>>>
>>>>>
>>>>> *Regards*
>>>>> *Muthupandi.K*
>>>>>
>>>>>  Think before you print.
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <
>>>>> munna.hadoop@gmail.com> wrote:
>>>>>
>>>>>> Hi Amithsha,
>>>>>>
>>>>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>>>>
>>>>>> But the agents are not listed under Ranger Agents. I am using HDP 2.2.
>>>>>>
>>>>>> Please advise on how to resolve this issue.
>>>>>>
>>>>>> Thanks,
>>>>>> Shaik
>>>>>>
>>>>>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Shaik,
>>>>>>>
>>>>>>> The steps below, taken from the Ranger Guide, describe how to enable the
>>>>>>> Ranger plugin in a Hadoop HA cluster:
>>>>>>>
>>>>>>>
>>>>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>>>>>> set up in each NameNode, and then pointed to the same HDFS repository
>>>>>>> set up in the Security Manager. Any policies created within that HDFS
>>>>>>> repository are automatically synchronized to the primary and
>>>>>>> secondary
>>>>>>> NameNodes through the installed Apache Ranger plugin. That way, if
>>>>>>> the
>>>>>>> primary NameNode fails, the secondary NameNode takes over and the
>>>>>>> Ranger plugin at that NameNode begins to enforce the same policies
>>>>>>> for
>>>>>>> access control.
>>>>>>> When creating the repository, you must include the fs.default.name
>>>>>>> for
>>>>>>> the primary NameNode. If the primary NameNode fails during policy
>>>>>>> creation, you can then temporarily use the fs.default.name of the
>>>>>>> secondary NameNode in the repository details to enable directory
>>>>>>> lookup for policy creation.
>>>>>>>
>>>>>>> Thanks & Regards
>>>>>>> Amithsha
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>>>>> <mu...@gmail.com> wrote:
>>>>>>> > Hi,
>>>>>>> >
>>>>>>> > I have installed Ranger from Git repo and I have started Ranger
>>>>>>> console.
>>>>>>> >
>>>>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN. But the
>>>>>>> > plugin agent is unable to contact Ranger.
>>>>>>> >
>>>>>>> > Can you please let me know the right procedure for ranger-hdfs
>>>>>>> > plugin deployment on an HA NN cluster.
>>>>>>> >
>>>>>>> >
>>>>>>> > Regards,
>>>>>>> > Shaik
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Muthu Pandi <mu...@gmail.com>.
Did you get the plugin working? Are you able to see the agent in the Ranger
console?

It seems you have disabled Hadoop authorization in the audit file, so
change

xasecure.add-hadoop-authorization to true in the audit file.
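
For example, you can check the current value on the NameNode host with
something like this (the conf path is an assumption for a typical HDP
layout):

    grep -A 1 'xasecure.add-hadoop-authorization' /etc/hadoop/conf/xasecure-*.xml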





*Regards*
*Muthupandi.K*

 Think before you print.



On Fri, Mar 6, 2015 at 6:13 PM, Hadoop Solutions <mu...@gmail.com>
wrote:

> Thank you for your help, Muthu.
>
> I am using HDP 2.2 and I have added the audit.xml file. After that I am
> seeing the following error messages.
>
> 2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
> blocks.
> 2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem
> (FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
> blocks.
> 2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) - IPC
> Server handler 16 on 8020, call
> org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from
> 10.193.153.220:50271 Call#5020 Retry#0
> com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException:
> Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE,
> directory="/"
>         at
> org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>
> Can you please let me know what this error relates to.
>
> Thanks,
> Shaik
>
>
> On 6 March 2015 at 18:31, Muthu Pandi <mu...@gmail.com> wrote:
>
>> From your logs it looks like you are using HDP, and the audit.xml file is
>> not in the CLASSPATH. What version of HDP are you using?
>>
>> This link covers Ranger installation on HDP 2.2:
>> http://hortonworks.com/blog/apache-ranger-audit-framework/  Make sure
>> you have followed everything; below is the snippet from that link
>> which deals with placing the xml file on the correct path.
>>
>> [image: Inline image 1]
>>
>>
>>
>> *Regards*
>> *Muthupandi.K*
>>
>>  Think before you print.
>>
>>
>>
>> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <mu...@gmail.com>
>> wrote:
>>
>>> Hi Muthu,
>>>
>>> Please find the attached NN log.
>>>
>>> I have copied all the JARs to the /usr/hdp/current/hadoop-hdfs-namenode/lib
>>> location.
>>>
>>> Please advise on the right solution for this issue.
>>>
>>> Thanks,
>>> Shaik
>>>
>>> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
>>>
>>>> Could you post the logs of your Active NN, or of the NN where you
>>>> deployed the Ranger plugin?
>>>>
>>>> Also make sure you have copied the JARs to the respective folders and
>>>> restarted the cluster.
>>>>
>>>>
>>>>
>>>> *Regards*
>>>> *Muthupandi.K*
>>>>
>>>>  Think before you print.
>>>>
>>>>
>>>>
>>>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <
>>>> munna.hadoop@gmail.com> wrote:
>>>>
>>>>> Hi Amithsha,
>>>>>
>>>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>>>
>>>>> But the agents are not listed under Ranger Agents. I am using HDP 2.2.
>>>>>
>>>>> Please advise on how to resolve this issue.
>>>>>
>>>>> Thanks,
>>>>> Shaik
>>>>>
>>>>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>>>>
>>>>>> Hi Shaik,
>>>>>>
>>>>>> The steps below, taken from the Ranger Guide, describe how to enable the
>>>>>> Ranger plugin in a Hadoop HA cluster:
>>>>>>
>>>>>>
>>>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>>>>> set up in each NameNode, and then pointed to the same HDFS repository
>>>>>> set up in the Security Manager. Any policies created within that HDFS
>>>>>> repository are automatically synchronized to the primary and secondary
>>>>>> NameNodes through the installed Apache Ranger plugin. That way, if the
>>>>>> primary NameNode fails, the secondary NameNode takes over and the
>>>>>> Ranger plugin at that NameNode begins to enforce the same policies for
>>>>>> access control.
>>>>>> When creating the repository, you must include the fs.default.name
>>>>>> for
>>>>>> the primary NameNode. If the primary NameNode fails during policy
>>>>>> creation, you can then temporarily use the fs.default.name of the
>>>>>> secondary NameNode in the repository details to enable directory
>>>>>> lookup for policy creation.
>>>>>>
>>>>>> Thanks & Regards
>>>>>> Amithsha
>>>>>>
>>>>>>
>>>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>>>> <mu...@gmail.com> wrote:
>>>>>> > Hi,
>>>>>> >
>>>>>> > I have installed Ranger from Git repo and I have started Ranger
>>>>>> console.
>>>>>> >
>>>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN. But the
>>>>>> > plugin agent is unable to contact Ranger.
>>>>>> >
>>>>>> > Can you please let me know the right procedure for ranger-hdfs
>>>>>> > plugin deployment on an HA NN cluster.
>>>>>> >
>>>>>> >
>>>>>> > Regards,
>>>>>> > Shaik
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Hadoop Solutions <mu...@gmail.com>.
Thank you for your help, Muthu.

I am using HDP 2.2 and I have added the audit.xml file. After that I am
seeing the following error messages.

2015-03-06 12:40:51,119 INFO  namenode.FSNamesystem
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
blocks.
2015-03-06 12:40:51,485 INFO  namenode.FSNamesystem
(FSNamesystem.java:listCorruptFileBlocks(7220)) - there are no corrupt file
blocks.
2015-03-06 12:40:56,888 INFO  ipc.Server (Server.java:run(2060)) - IPC
Server handler 16 on 8020, call
org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from
10.193.153.220:50271 Call#5020 Retry#0
com.xasecure.authorization.hadoop.exceptions.XaSecureAccessControlException:
Permission denied: principal{user=mapred,groups: [hadoop]}, access=EXECUTE,
directory="/"
        at
org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(XaSecureFSPermissionChecker.java:112)
        at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java)
        at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
        at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6422)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4957)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4918)
        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:826)
        at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:612)
        at
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)


Can you please let me know what this error relates to.

Thanks,
Shaik


On 6 March 2015 at 18:31, Muthu Pandi <mu...@gmail.com> wrote:

> From your logs it looks like you are using HDP, and the audit.xml file is
> not in the CLASSPATH. What version of HDP are you using?
>
> This link covers Ranger installation on HDP 2.2:
> http://hortonworks.com/blog/apache-ranger-audit-framework/  Make sure you
> have followed everything; below is the snippet from that link which
> deals with placing the xml file on the correct path.
>
> [image: Inline image 1]
>
>
>
> *Regards*
> *Muthupandi.K*
>
>  Think before you print.
>
>
>
> On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <mu...@gmail.com>
> wrote:
>
>> Hi Muthu,
>>
>> Please find the attached NN log.
>>
>> I have copied all the JARs to the /usr/hdp/current/hadoop-hdfs-namenode/lib
>> location.
>>
>> Please advise on the right solution for this issue.
>>
>> Thanks,
>> Shaik
>>
>> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
>>
>>> Could you post the logs of your Active NN, or of the NN where you
>>> deployed the Ranger plugin?
>>>
>>> Also make sure you have copied the JARs to the respective folders and
>>> restarted the cluster.
>>>
>>>
>>>
>>> *Regards*
>>> *Muthupandi.K*
>>>
>>>  Think before you print.
>>>
>>>
>>>
>>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <munna.hadoop@gmail.com
>>> > wrote:
>>>
>>>> Hi Amithsha,
>>>>
>>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>>
>>>> But the agents are not listed under Ranger Agents. I am using HDP 2.2.
>>>>
>>>> Please advise on how to resolve this issue.
>>>>
>>>> Thanks,
>>>> Shaik
>>>>
>>>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>>>
>>>>> Hi Shaik,
>>>>>
>>>>> The steps below, taken from the Ranger Guide, describe how to enable the
>>>>> Ranger plugin in a Hadoop HA cluster:
>>>>>
>>>>>
>>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>>>> set up in each NameNode, and then pointed to the same HDFS repository
>>>>> set up in the Security Manager. Any policies created within that HDFS
>>>>> repository are automatically synchronized to the primary and secondary
>>>>> NameNodes through the installed Apache Ranger plugin. That way, if the
>>>>> primary NameNode fails, the secondary NameNode takes over and the
>>>>> Ranger plugin at that NameNode begins to enforce the same policies for
>>>>> access control.
>>>>> When creating the repository, you must include the fs.default.name for
>>>>> the primary NameNode. If the primary NameNode fails during policy
>>>>> creation, you can then temporarily use the fs.default.name of the
>>>>> secondary NameNode in the repository details to enable directory
>>>>> lookup for policy creation.
>>>>>
>>>>> Thanks & Regards
>>>>> Amithsha
>>>>>
>>>>>
>>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>>> <mu...@gmail.com> wrote:
>>>>> > Hi,
>>>>> >
>>>>> > I have installed Ranger from Git repo and I have started Ranger
>>>>> console.
>>>>> >
>>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN. But the
>>>>> > plugin agent is unable to contact Ranger.
>>>>> >
>>>>> > Can you please let me know the right procedure for ranger-hdfs plugin
>>>>> > deployment on an HA NN cluster.
>>>>> >
>>>>> >
>>>>> > Regards,
>>>>> > Shaik
>>>>>
>>>>
>>>>
>>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Muthu Pandi <mu...@gmail.com>.
From your logs it looks like you are using HDP, and the audit.xml file is
not in the CLASSPATH. What version of HDP are you using?

This link covers Ranger installation on HDP 2.2:
http://hortonworks.com/blog/apache-ranger-audit-framework/  Make sure you
have followed everything; below is the snippet from that link which
deals with placing the xml file on the correct path.

[image: Inline image 1]
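
(The inline image is lost in this plain-text archive; the gist, reconstructed
from context rather than from the original snippet, is to place the audit
config where the NameNode classpath picks it up, for example:

    # HADOOP_CONF_DIR is on the NameNode classpath in an HDP layout
    cp xasecure-audit.xml /etc/hadoop/conf/

The file name is an assumption based on the Ranger/XASecure naming of the
time.)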



*Regards*
*Muthupandi.K*

 Think before you print.



On Fri, Mar 6, 2015 at 2:55 PM, Hadoop Solutions <mu...@gmail.com>
wrote:

> Hi Muthu,
>
> Please find the attached NN log.
>
> I have copied all the JARs to the /usr/hdp/current/hadoop-hdfs-namenode/lib
> location.
>
> Please advise on the right solution for this issue.
>
> Thanks,
> Shaik
>
> On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:
>
>> Could you post the logs of your Active NN, or of the NN where you
>> deployed the Ranger plugin?
>>
>> Also make sure you have copied the JARs to the respective folders and
>> restarted the cluster.
>>
>>
>>
>> *Regards*
>> *Muthupandi.K*
>>
>>  Think before you print.
>>
>>
>>
>> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <mu...@gmail.com>
>> wrote:
>>
>>> Hi Amithsha,
>>>
>>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>>
>>> But the agents are not listed under Ranger Agents. I am using HDP 2.2.
>>>
>>> Please advise on how to resolve this issue.
>>>
>>> Thanks,
>>> Shaik
>>>
>>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>>
>>>> Hi Shaik,
>>>>
>>>> The steps below, taken from the Ranger Guide, describe how to enable the
>>>> Ranger plugin in a Hadoop HA cluster:
>>>>
>>>>
>>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>>> set up in each NameNode, and then pointed to the same HDFS repository
>>>> set up in the Security Manager. Any policies created within that HDFS
>>>> repository are automatically synchronized to the primary and secondary
>>>> NameNodes through the installed Apache Ranger plugin. That way, if the
>>>> primary NameNode fails, the secondary NameNode takes over and the
>>>> Ranger plugin at that NameNode begins to enforce the same policies for
>>>> access control.
>>>> When creating the repository, you must include the fs.default.name for
>>>> the primary NameNode. If the primary NameNode fails during policy
>>>> creation, you can then temporarily use the fs.default.name of the
>>>> secondary NameNode in the repository details to enable directory
>>>> lookup for policy creation.
>>>>
>>>> Thanks & Regards
>>>> Amithsha
>>>>
>>>>
>>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>>> <mu...@gmail.com> wrote:
>>>> > Hi,
>>>> >
>>>> > I have installed Ranger from Git repo and I have started Ranger
>>>> console.
>>>> >
>>>> > I am trying to deploy the ranger-hdfs plugin on the active NN. But the
>>>> > plugin agent is unable to contact Ranger.
>>>> >
>>>> > Can you please let me know the right procedure for ranger-hdfs plugin
>>>> > deployment on an HA NN cluster.
>>>> >
>>>> >
>>>> > Regards,
>>>> > Shaik
>>>>
>>>
>>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Hadoop Solutions <mu...@gmail.com>.
Hi Muthu,

Please find the attached NN log.

I have copied all the JARs to the /usr/hdp/current/hadoop-hdfs-namenode/lib location.
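
For reference, a quick way to double-check that they landed (the jar name
pattern is an assumption; it varies by Ranger build):

    ls /usr/hdp/current/hadoop-hdfs-namenode/lib | grep -i -e ranger -e xasecure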

Please advise on the right solution for this issue.

Thanks,
Shaik

On 6 March 2015 at 15:48, Muthu Pandi <mu...@gmail.com> wrote:

> Could you post the logs of your Active NN, or of the NN where you
> deployed the Ranger plugin?
>
> Also make sure you have copied the JARs to the respective folders and
> restarted the cluster.
>
>
>
> *Regards*
> *Muthupandi.K*
>
>  Think before you print.
>
>
>
> On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <mu...@gmail.com>
> wrote:
>
>> Hi Amithsha,
>>
>> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>>
>> But the agents are not listed under Ranger Agents. I am using HDP 2.2.
>>
>> Please advise on how to resolve this issue.
>>
>> Thanks,
>> Shaik
>>
>> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>>
>>> Hi Shaik,
>>>
>>> The steps below, taken from the Ranger Guide, describe how to enable the
>>> Ranger plugin in a Hadoop HA cluster:
>>>
>>>
>>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>>> set up in each NameNode, and then pointed to the same HDFS repository
>>> set up in the Security Manager. Any policies created within that HDFS
>>> repository are automatically synchronized to the primary and secondary
>>> NameNodes through the installed Apache Ranger plugin. That way, if the
>>> primary NameNode fails, the secondary NameNode takes over and the
>>> Ranger plugin at that NameNode begins to enforce the same policies for
>>> access control.
>>> When creating the repository, you must include the fs.default.name for
>>> the primary NameNode. If the primary NameNode fails during policy
>>> creation, you can then temporarily use the fs.default.name of the
>>> secondary NameNode in the repository details to enable directory
>>> lookup for policy creation.
>>>
>>> Thanks & Regards
>>> Amithsha
>>>
>>>
>>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>>> <mu...@gmail.com> wrote:
>>> > Hi,
>>> >
>>> > I have installed Ranger from Git repo and I have started Ranger
>>> console.
>>> >
>>> > I am trying to deploy the ranger-hdfs plugin on the active NN. But the
>>> > plugin agent is unable to contact Ranger.
>>> >
>>> > Can you please let me know the right procedure for ranger-hdfs plugin
>>> > deployment on an HA NN cluster.
>>> >
>>> >
>>> > Regards,
>>> > Shaik
>>>
>>
>>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Muthu Pandi <mu...@gmail.com>.
Could you post the logs of your Active NN, or of the NN where you deployed
the Ranger plugin?

Also make sure you have copied the JARs to the respective folders and
restarted the cluster.
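
For example (the source directory is illustrative and depends on where the
plugin tarball was extracted; the NameNode lib path is the one used in this
thread):

    # copy the Ranger/XASecure plugin jars next to the NameNode's own libs
    cp /usr/local/ranger-hdfs-plugin/lib/*.jar /usr/hdp/current/hadoop-hdfs-namenode/lib/
    # then restart HDFS so the NameNode picks them up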



*Regards*
*Muthupandi.K*

 Think before you print.



On Fri, Mar 6, 2015 at 1:08 PM, Hadoop Solutions <mu...@gmail.com>
wrote:

> Hi Amithsha,
>
> I have deployed the ranger-hdfs-plugin again with the HA NN URL.
>
> But the agents are not listed under Ranger Agents. I am using HDP 2.2.
>
> Please advise on how to resolve this issue.
>
> Thanks,
> Shaik
>
> On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:
>
>> Hi Shaik,
>>
>> The steps below, taken from the Ranger Guide, describe how to enable the
>> Ranger plugin in a Hadoop HA cluster:
>>
>>
>> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
>> set up in each NameNode, and then pointed to the same HDFS repository
>> set up in the Security Manager. Any policies created within that HDFS
>> repository are automatically synchronized to the primary and secondary
>> NameNodes through the installed Apache Ranger plugin. That way, if the
>> primary NameNode fails, the secondary NameNode takes over and the
>> Ranger plugin at that NameNode begins to enforce the same policies for
>> access control.
>> When creating the repository, you must include the fs.default.name for
>> the primary NameNode. If the primary NameNode fails during policy
>> creation, you can then temporarily use the fs.default.name of the
>> secondary NameNode in the repository details to enable directory
>> lookup for policy creation.
>>
>> Thanks & Regards
>> Amithsha
>>
>>
>> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
>> <mu...@gmail.com> wrote:
>> > Hi,
>> >
>> > I have installed Ranger from Git repo and I have started Ranger console.
>> >
>> > I am trying to deploy the ranger-hdfs plugin on the active NN. But the
>> > plugin agent is unable to contact Ranger.
>> >
>> > Can you please let me know the right procedure for ranger-hdfs plugin
>> > deployment on an HA NN cluster.
>> >
>> >
>> > Regards,
>> > Shaik
>>
>
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Hadoop Solutions <mu...@gmail.com>.
Hi Amithsha,

I have deployed the ranger-hdfs-plugin again with the HA NN URL.

But the agents are not listed under Ranger Agents. I am using HDP 2.2.

Please advise on how to resolve this issue.

Thanks,
Shaik

On 6 March 2015 at 14:48, Amith sha <am...@gmail.com> wrote:

> Hi Shaik,
>
> The steps below, taken from the Ranger Guide, describe how to enable the
> Ranger plugin in a Hadoop HA cluster:
>
>
> To enable Ranger in the HDFS HA environment, an HDFS plugin must be
> set up in each NameNode, and then pointed to the same HDFS repository
> set up in the Security Manager. Any policies created within that HDFS
> repository are automatically synchronized to the primary and secondary
> NameNodes through the installed Apache Ranger plugin. That way, if the
> primary NameNode fails, the secondary NameNode takes over and the
> Ranger plugin at that NameNode begins to enforce the same policies for
> access control.
> When creating the repository, you must include the fs.default.name for
> the primary NameNode. If the primary NameNode fails during policy
> creation, you can then temporarily use the fs.default.name of the
> secondary NameNode in the repository details to enable directory
> lookup for policy creation.
>
> Thanks & Regards
> Amithsha
>
>
> On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
> <mu...@gmail.com> wrote:
> > Hi,
> >
> > I have installed Ranger from Git repo and I have started Ranger console.
> >
> > I am trying to deploy the ranger-hdfs plugin on the active NN. But the
> > plugin agent is unable to contact Ranger.
> >
> > Can you please let me know the right procedure for ranger-hdfs plugin
> > deployment on an HA NN cluster.
> >
> >
> > Regards,
> > Shaik
>

Re: How to enable HDFS plugin on HA NameNode Cluster

Posted by Amith sha <am...@gmail.com>.
Hi Shaik,

The steps below, taken from the Ranger Guide, describe how to enable the
Ranger plugin in a Hadoop HA cluster:


To enable Ranger in the HDFS HA environment, an HDFS plugin must be
set up in each NameNode, and then pointed to the same HDFS repository
set up in the Security Manager. Any policies created within that HDFS
repository are automatically synchronized to the primary and secondary
NameNodes through the installed Apache Ranger plugin. That way, if the
primary NameNode fails, the secondary NameNode takes over and the
Ranger plugin at that NameNode begins to enforce the same policies for
access control.
When creating the repository, you must include the fs.default.name for
the primary NameNode. If the primary NameNode fails during policy
creation, you can then temporarily use the fs.default.name of the
secondary NameNode in the repository details to enable directory
lookup for policy creation.
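
For example, the repository config in the Ranger console would carry
something like this (hostname, port and repository name are illustrative,
not from this cluster):

    fs.default.name = hdfs://nn1.example.com:8020

and each NameNode's plugin install.properties would point at that same
repository before running the plugin install script:

    POLICY_MGR_URL=http://ranger-admin.example.com:6080
    REPOSITORY_NAME=hadoopdev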

Thanks & Regards
Amithsha


On Fri, Mar 6, 2015 at 12:00 PM, Hadoop Solutions
<mu...@gmail.com> wrote:
> Hi,
>
> I have installed Ranger from Git repo and I have started Ranger console.
>
> I am trying to deploy the ranger-hdfs plugin on the active NN. But the
> plugin agent is unable to contact Ranger.
>
> Can you please let me know the right procedure for ranger-hdfs plugin
> deployment on an HA NN cluster.
>
>
> Regards,
> Shaik