Posted to user@ranger.apache.org by Lune Silver <lu...@gmail.com> on 2015/11/22 10:00:09 UTC

Question about ranger plugin on hosts

Hello !

I am sending this mail to ask a few questions about the plugins for the namenode, hive,
etc.

I read this description on the HW website:
###
Ranger plugins
Plugins are lightweight Java programs which embed within processes of each
cluster component. For example, the Apache Ranger plugin for Apache Hive is
embedded within Hiveserver2. These plugins pull in policies from a central
server and store them locally in a file. When a user request comes through
the component, these plugins intercept the request and evaluate it against
the security policy. Plugins also collect data from the user request and
follow a separate thread to send this data back to the audit server.
###

Link :
http://hortonworks.com/hadoop/ranger/#section_2

My questions are :

Q1 - Is the path where the file is stored configurable? If yes, which
POSIX permissions should I set for the path and the file? hdfs:hdfs for
the namenode, hive:hive for hiveserver2, etc.? And is 440 sufficient for the
file?

Q2 - How are these plugin Java programs launched on the hosts? By the Ranger
server? Or do I need to start them manually?

Q3 - These are essentially Ranger agents on the different hosts, right? If
yes, which component on the server side do they contact to get the
policies? With which protocol do they contact this component?

Q4 - Same question, this time for the audit logs: which components do
the Ranger "agents" contact, and with which protocol?

Sorry for all these questions. ^_^

Hope you can help me.

Best regards.

Lune

Re: Question about ranger plugin on hosts

Posted by Madhan Neethiraj <mn...@hortonworks.com>.
Lune,

Please see the response inline below.

Thanks,
Madhan

From: Lune Silver <lu...@gmail.com>
Reply-To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Date: Sunday, November 22, 2015 at 1:00 AM
To: "user@ranger.incubator.apache.org" <us...@ranger.incubator.apache.org>
Subject: Question about ranger plugin on hosts

Hello !

I am sending this mail to ask a few questions about the plugins for the namenode, hive, etc.

I read this description on the HW website:
###
Ranger plugins
Plugins are lightweight Java programs which embed within processes of each cluster component. For example, the Apache Ranger plugin for Apache Hive is embedded within Hiveserver2. These plugins pull in policies from a central server and store them locally in a file. When a user request comes through the component, these plugins intercept the request and evaluate it against the security policy. Plugins also collect data from the user request and follow a separate thread to send this data back to the audit server.
###

Link :
http://hortonworks.com/hadoop/ranger/#section_2

My questions are :

Q1 - Is the path where the file is stored configurable? If yes, which POSIX permissions should I set for the path and the file? hdfs:hdfs for the namenode, hive:hive for hiveserver2, etc.? And is 440 sufficient for the file?
[Madhan] The file should be readable and writable by the process that runs the Ranger plugin, i.e. the NameNode for HDFS, HiveServer2 for Hive, the Master and Region servers for HBase, etc.
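As a rough sketch of the permission setup described above; the directory, file name, owner, and modes below are illustrative examples, not Ranger defaults:

```shell
# Sketch of a policy-cache setup for the Hive plugin; directory, file
# name, owner, and modes are illustrative, not Ranger defaults.
CACHE_DIR=./policycache-demo    # on a real host e.g. /etc/ranger/hive/policycache
mkdir -p "$CACHE_DIR"
# On a real host you would also: chown hive:hive "$CACHE_DIR"
chmod 750 "$CACHE_DIR"                    # plugin user needs rwx on the directory
touch "$CACHE_DIR/hive_policy.json"       # cache file name is illustrative
chmod 640 "$CACHE_DIR/hive_policy.json"   # owner read+write: 440 would block cache updates
```

Per the answer above, 440 would not be sufficient, since the plugin must also write the cache file when it refreshes policies.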
[Madhan] The location can be specified by updating the following configuration:
 ranger-0.5 and later:  specify policy cache directory name in ‘ranger.plugin.<serviceType>.policy.cache.dir’ property in file ‘ranger-<serviceType>-security.xml’ under the component’s conf directory (like /etc/hadoop/conf/, /etc/hive/conf, /etc/hbase/conf/, ..).
 ranger-0.4:  specify policy cache file name in ‘xasecure.<serviceType>.policymgr.url.laststoredfile’ property in file ‘xasecure-<serviceType>-security.xml’ under the component’s conf directory (like /etc/hadoop/conf/, /etc/hive/conf, /etc/hbase/conf/, ..).
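For illustration, a ranger-0.5 style entry for the Hive plugin might look like the following in ranger-hive-security.xml; the directory value here is only an example, not necessarily your distribution's default:

```xml
<!-- Illustrative ranger-0.5 entry for the Hive plugin, e.g. in
     /etc/hive/conf/ranger-hive-security.xml; the directory value is
     an example, not necessarily your distribution's default. -->
<property>
  <name>ranger.plugin.hive.policy.cache.dir</name>
  <value>/etc/ranger/hive/policycache</value>
</property>
```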


Q2 - How are these plugin Java programs launched on the hosts? By the Ranger server? Or do I need to start them manually?
[Madhan] The plugins run within the corresponding component (NameNode, HiveServer2, HBase servers, ..), typically loaded during component startup. The plugins don't run as a separate process.

Q3 - These are essentially Ranger agents on the different hosts, right? If yes, which component on the server side do they contact to get the policies? With which protocol do they contact this component?
[Madhan] Yes, Ranger agents can run on different hosts, depending on where the components run. Ranger agents retrieve the policies from Ranger Admin (called Policy Manager prior to ranger-0.5) using a REST API.
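A small sketch of what that REST pull might look like; the endpoint path and the lastKnownVersion parameter are assumptions based on the ranger-0.5 REST API, so verify them against your Ranger Admin version:

```python
# Sketch of how a plugin might pull policies from Ranger Admin over REST.
# The endpoint path and the lastKnownVersion parameter are assumptions
# based on ranger-0.5; verify against your Ranger Admin's REST API.
from urllib.parse import urlencode

def build_policy_download_url(admin_url, service_name, last_known_version=-1):
    """Build the (assumed) policy-download URL for one Ranger service."""
    base = admin_url.rstrip("/")
    query = urlencode({"lastKnownVersion": last_known_version})
    return f"{base}/service/plugins/policies/download/{service_name}?{query}"

url = build_policy_download_url("http://ranger-admin:6080", "cl1_hive")
print(url)
# Actually fetching would need a reachable Ranger Admin, e.g.:
#   import urllib.request; body = urllib.request.urlopen(url).read()
```

Passing the last known policy version lets the plugin poll cheaply: the Admin can answer "not modified" instead of re-sending the full policy set.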

Q4 - Same question, this time for the audit logs: which components do the Ranger "agents" contact, and with which protocol?
[Madhan] Ranger agents can write audit logs to multiple destinations, such as HDFS (since ranger-0.4), Solr (since ranger-0.5), and an RDBMS. The HDFS Java API is used to write to HDFS, the SolrJ client to write to Solr, and the JDBC interface to write to an RDBMS.
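For illustration, audit destinations are typically enabled per plugin in its audit configuration file (e.g. ranger-hive-audit.xml); the property names below follow the ranger-0.5 xasecure.audit.destination.* convention, and all host names and paths are placeholders:

```xml
<!-- Illustrative ranger-0.5 style audit settings; host names and
     paths are placeholders, not working values. -->
<property>
  <name>xasecure.audit.destination.hdfs</name>
  <value>true</value>
</property>
<property>
  <name>xasecure.audit.destination.hdfs.dir</name>
  <value>hdfs://namenode.example.com:8020/ranger/audit</value>
</property>
<property>
  <name>xasecure.audit.destination.solr</name>
  <value>true</value>
</property>
<property>
  <name>xasecure.audit.destination.solr.urls</name>
  <value>http://solr.example.com:6083/solr/ranger_audits</value>
</property>
```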

Sorry for all these questions. ^_^

Hope you can help me.

Best regards.

Lune