Posted to dev@sentry.apache.org by Zoltán Szatmári <zs...@rapidminer.com> on 2015/03/04 13:45:49 UTC

Load UDF from HDFS

Hi All,

we are developing software at our company that depends heavily on
UDFs. Without Sentry, it was possible to upload a JAR to HDFS and
then execute a "CREATE FUNCTION" statement.
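For reference, the workflow we used before enabling Sentry looked
roughly like this (paths, hostnames, and class names below are
illustrative, not our actual values):

```sql
-- Upload the JAR to HDFS first (from the shell):
--   hdfs dfs -put my-udfs.jar /user/hive/udfs/my-udfs.jar

-- Then register the function, pointing Hive at the HDFS copy:
CREATE FUNCTION my_udf AS 'com.example.MyUDF'
  USING JAR 'hdfs://namenode:8020/user/hive/udfs/my-udfs.jar';
```

Because the JAR lives in HDFS, we can replace it and re-create the
function without touching the HiveServer2 hosts.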

With Sentry enabled, according to the Cloudera documentation
(http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_mc_hive_udf.html)
and also our own experience, the JAR must be placed on the local
filesystem and added to the Hive classpath.
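As I understand the documented procedure, it amounts to something
like the following on every HiveServer2 host (paths and names are
again illustrative):

```sql
-- From the shell, on each HiveServer2 host:
--   cp my-udfs.jar /opt/local/hive/lib/
-- Add the directory to the Hive classpath, e.g. in hive-env.sh:
--   export HIVE_AUX_JARS_PATH=/opt/local/hive/lib
-- Then restart HiveServer2.

-- Only after that can the function be created, without USING JAR:
CREATE FUNCTION my_udf AS 'com.example.MyUDF';
```

Every UDF change therefore requires copying files to each host and
restarting the service.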

This is far too static: we cannot replace the UDF classes on the
fly, and it will probably break our solution.

I read in the docs that a user can be granted privileges on URIs,
e.g. "uri=hdfs://namenode:port/path/to/dir". I tried this, but it
does not work unless the JAR is also on the local filesystem (Hive
classpath).
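Concretely, I granted the URI privilege along these lines (role and
group names are placeholders for what we actually used):

```sql
-- Issued via Beeline as a Sentry admin:
CREATE ROLE udf_role;
GRANT ALL ON URI 'hdfs://namenode:8020/path/to/dir' TO ROLE udf_role;
GRANT ROLE udf_role TO GROUP developers;
```

Even with this grant in place, "CREATE FUNCTION ... USING JAR" on
that HDFS location fails unless the JAR is also on the local
classpath.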

Is there a security hole if we don't add the JAR to the local
filesystem and instead only grant rights on HDFS? Why was this
restriction implemented? Why is it not possible to execute UDFs
when the administrator grants rights on the HDFS location where our
JAR is uploaded?

What are the plans for future releases? Can we consider this
behaviour stable?

Thanks,

Zoltán