Posted to issues@hbase.apache.org by "Anoop Sam John (Jira)" <ji...@apache.org> on 2020/05/23 12:43:00 UTC

[jira] [Created] (HBASE-24421) Support loading cluster level CPs from Hadoop file system

Anoop Sam John created HBASE-24421:
--------------------------------------

             Summary: Support loading cluster level CPs from Hadoop file system
                 Key: HBASE-24421
                 URL: https://issues.apache.org/jira/browse/HBASE-24421
             Project: HBase
          Issue Type: Improvement
            Reporter: Anoop Sam John
            Assignee: Anoop Sam John
             Fix For: 3.0.0-alpha-1


Right now we allow configuring CPs that need to be loaded from a Hadoop FS at table level (via the Java API or shell):
> alter 't1', METHOD => 'table_att', 'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'
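For reference, the equivalent table-level attach through the Java client API looks roughly like the sketch below. This is only an illustration assuming the HBase 2.x CoprocessorDescriptorBuilder API; the table name, jar path, observer class, and arguments simply mirror the placeholders from the shell example.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.CoprocessorDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class AttachTableCp {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("t1");
          // Build on the current table descriptor and attach a CP whose jar lives on HDFS.
          TableDescriptor td = TableDescriptorBuilder.newBuilder(admin.getDescriptor(table))
              .setCoprocessor(CoprocessorDescriptorBuilder.newBuilder("com.foo.FooRegionObserver")
                  .setJarPath("hdfs:///foo.jar")   // jar is fetched from the Hadoop FS at load time
                  .setPriority(1001)               // same priority as in the shell example
                  .setProperty("arg1", "1")
                  .setProperty("arg2", "2")
                  .build())
              .build();
          admin.modifyTable(td);  // region servers reopen regions and load the CP from the jar
        }
      }
    }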
But for cluster level CPs at Master/RS/WAL level, the only way is to configure them in hbase-site.xml, and there we don't allow specifying any jar path. This jira suggests adding such a feature.
Note: We already support configuring the priority of a CP at the xml level (FQCN|<priority>). The same way the shell command works, we can take the jar path also: <jar path>|<class name>|<priority>
If there is no '|' separator at all, consider the value an FQCN on the classpath. If there is one '|', it is FQCN and priority (same as today). If there are 2 '|' separators, we consider the 1st part the path to the external jar. A sketch of the proposed format is shown below.
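For illustration only, a minimal hbase-site.xml entry using the proposed three-part format might look like the following; the jar path, class name, and priority mirror the shell example, and the exact syntax is what this jira would decide (the hbase.coprocessor.region.classes key already exists for cluster level region CPs):

    <property>
      <name>hbase.coprocessor.region.classes</name>
      <!-- Proposed: <jar path>|<class name>|<priority>; today only FQCN or FQCN|<priority> is accepted here. -->
      <value>hdfs:///foo.jar|com.foo.FooRegionObserver|1001</value>
    </property>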

This will help in cloud scenarios, especially with auto scaling. Otherwise the customer has to run special scripts to make the CP jar available on the HBase classpath.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)