Posted to common-user@hadoop.apache.org by "Gangavarupu, Venkata - Contingent Worker" <ve...@bcbsa.com> on 2015/06/26 19:48:18 UTC

HBASE Region server failing to start after Kerberos is enabled

Hi All,

The region servers are failing to start after Kerberos is enabled, with the error below.
Hadoop 2.6.0
HBase 0.98.4

2015-06-24 15:58:48,884 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AuthenticationService
2015-06-24 15:58:48,886 INFO  [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870911).
2015-06-24 15:58:48,894 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=SecureBulkLoadService
2015-06-24 15:58:48,907 ERROR [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance

The following properties are included in the hbase-site.xml file:

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>

<property>
  <name>hbase.bulkload.staging.dir</name>
  <value>/apps/hbase/staging</value>
</property>


I removed org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint from hbase.coprocessor.region.classes and tried starting again, and the region servers came up.
So SecureBulkLoadEndpoint appears to be the cause.

Please help me resolve this issue; I would like to keep the SecureBulkLoadEndpoint coprocessor enabled.

Thanks,
Venkat


RE: HBASE Region server failing to start after Kerberos is enabled

Posted by "Gangavarapu, Venkata" <Ve...@bcbsa.com>.
Hi,

This is solved now.
The problem was with the permissions on the hbase.bulkload.staging.dir directory: it was not owned by hbase:hdfs.
Once the ownership was changed to hbase:hdfs, the region servers came up.
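For anyone hitting the same startup failure, the ownership check and fix can be sketched with the HDFS shell. This is a minimal sketch based on this thread: the /apps/hbase/staging path and the hbase:hdfs owner/group come from the messages above; run the chown as the HDFS superuser and adjust user, group, and path for your own cluster.

```shell
# Show owner, group, and mode of the staging directory itself
# (-d lists the directory entry rather than its contents)
hdfs dfs -ls -d /apps/hbase/staging

# Give the HBase service user ownership so SecureBulkLoadEndpoint can
# initialize the staging directory when the region server starts
hdfs dfs -chown hbase:hdfs /apps/hbase/staging
```

After changing ownership, restart the region servers and check the log for the SecureBulkLoadService coprocessor registering without the IllegalStateException.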

Thanks for the hint.

-Venkat

From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Friday, June 26, 2015 1:10 PM
To: common-user@hadoop.apache.org
Subject: Re: HBASE Region server failing to start after Kerberos is enabled

Can you post the complete stack trace for 'Failed to get FileSystem instance' ?

What's the permission for /apps/hbase/staging ?

Looking at the commit log of SecureBulkLoadEndpoint.java, there have been a lot of bug fixes since 0.98.4.
Please consider upgrading HBase.

Cheers

On Fri, Jun 26, 2015 at 10:48 AM, Gangavarupu, Venkata - Contingent Worker <ve...@bcbsa.com> wrote:
> [quoted message trimmed; identical to the original post above]



RE: HBASE Region server failing to start after Kerberos is enabled

Posted by "Gangavarupu, Venkata - Contingent Worker" <ve...@bcbsa.com>.
Hi,

I have attached the region server logs for the SecureBulkLoad failure after Kerberos was enabled.

The permissions on /apps/hbase/staging are:

drwxrwxrwx   - ams   hdfs          0 2015-06-08 19:17 /apps/hbase/staging

2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:QTINC=/usr/lib64/qt-3.3/include
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:USER=hbase
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_CLASSPATH=/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/*:/usr/hdp/2.2.4.2-2/hadoop/lib/*:/usr/hdp/2.2.4.2-2/zookeeper/*:/usr/hdp/2.2.4.2-2/zookeeper/lib/*:
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HOME=/home/hbase
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HISTCONTROL=ignoredups
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:LESSOPEN=|/usr/bin/lesspipe.sh %s
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hbase-regionserver-dn.example.com
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:LANG=en_US.UTF-8
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hbase
2015-06-29 21:07:03,160 INFO  [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=24.65-b04
2015-06-29 21:07:03,161 INFO  [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx1000m, -Dhdp.version=2.2.4.2-2, -XX:+UseConcMarkSweepGC, -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, -Djava.security.auth.login.config=/etc/hbase/conf/hbase_client_jaas.conf, -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201506292107, -Xmn200m, -XX:CMSInitiatingOccupancyFraction=70, -Xms1024m, -Xmx1024m, -Djava.security.auth.login.config=/etc/hbase/conf/hbase_regionserver_jaas.conf, -Dhbase.log.dir=/var/log/hbase, -Dhbase.log.file=hbase-hbase-regionserver-dn.example.com.log, -Dhbase.home.dir=/usr/hdp/current/hbase-regionserver/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=:/usr/hdp/2.2.4.2-2/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.4.2-2/hadoop/lib/native, -Dhbase.security.logger=INFO,RFAS]
2015-06-29 21:07:03,360 DEBUG [main] regionserver.HRegionServer: regionserver/dn.example.com/172.31.3.128:60020 HConnection server-to-server retries=350
2015-06-29 21:07:03,617 INFO  [main] ipc.SimpleRpcScheduler: Using default user call queue, count=6
2015-06-29 21:07:03,652 INFO  [main] ipc.RpcServer: regionserver/dn.example.com/172.31.3.128:60020: started 10 reader(s).
2015-06-29 21:07:03,761 INFO  [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2015-06-29 21:07:03,809 INFO  [main] timeline.HadoopTimelineMetricsSink: Initializing Timeline metrics sink.
2015-06-29 21:07:03,809 INFO  [main] timeline.HadoopTimelineMetricsSink: Identified hostname = dn.example.com, serviceName = hbase
2015-06-29 21:07:03,872 INFO  [main] timeline.HadoopTimelineMetricsSink: Collector Uri: http://nn.example.com:6188/ws/v1/timeline/metrics
2015-06-29 21:07:03,883 INFO  [main] impl.MetricsSinkAdapter: Sink timeline started
2015-06-29 21:07:03,955 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-06-29 21:07:03,955 INFO  [main] impl.MetricsSystemImpl: HBase metrics system started
2015-06-29 21:07:04,470 INFO  [main] security.UserGroupInformation: Login successful for user hbase/dn.example.com@EXAMPLE.COM using keytab file /etc/security/keytabs/hbase.service.keytab
2015-06-29 21:07:04,475 INFO  [main] hfile.CacheConfig: Allocating LruBlockCache with maximum size 401.6 M
2015-06-29 21:07:04,520 INFO  [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-06-29 21:07:04,569 INFO  [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2015-06-29 21:07:04,582 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2015-06-29 21:07:04,582 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-06-29 21:07:04,593 INFO  [main] http.HttpServer: Jetty bound to port 60030
2015-06-29 21:07:04,593 INFO  [main] mortbay.log: jetty-6.1.26.hwx
2015-06-29 21:07:05,169 INFO  [main] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-2--1, built on 03/31/2015 19:31 GMT
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:host.name=dn.example.com
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.7.0_67/jre
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.class.path=/etc/hbase/conf:/usr/jdk64/jdk1.7.0_67/lib/tools.jar:/usr/hdp/current/hbase-regionserver/bin/..:/usr/hdp/current/hbase-regionserver/bin/../lib/activation-1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/aopalliance-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/api-util-1.0.0-M20.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/asm-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/avro-1.7.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/azure-storage-2.0.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-beanutils-1.7.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-cli-1.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-codec-1.7.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-collections-3.2.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-compress-1.4.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-configuration-1.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-daemon-1.0.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-digester-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-el-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-httpclient-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-io-2.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-lang-2.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-lang3-3.3.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-math-2.1.jar:/usr/hdp/current/hb
ase-regionserver/bin/../lib/commons-math3-3.1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-net-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-client-2.6.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-framework-2.6.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-recipes-2.6.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/eclipselink-2.5.2-M1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/findbugs-annotations-1.3.9-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/gson-2.2.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guava-12.0.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guice-3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guice-servlet-3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hamcrest-core-1.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-client-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-client.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common-0.98.4.2.2.4.2-2-hadoop2-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-examples-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-examples.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop2-compat-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop2-compat.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop-compat-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop-compat.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it-0.98.4.2.2.4.2-2-hadoop2-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-p
refix-tree-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-prefix-tree.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-protocol-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-protocol.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server-0.98.4.2.2.4.2-2-hadoop2-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-shell-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-shell.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-testing-util-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-testing-util.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-thrift-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-thrift.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/high-scale-lib-1.1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/htrace-core-2.04.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/htrace-core-3.0.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/httpclient-4.2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/httpcore-4.1.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-core-2.2.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-xc-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jamon-runtime-2.3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jasper-compiler-5.5.23.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jasper-runtime-5.5.23.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/javax.inject-1.jar:/usr/hdp/current/hbase-regi
onserver/bin/../lib/java-xmlbuilder-0.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jaxb-api-2.2.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-client-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-core-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-guice-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-json-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-server-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jets3t-0.9.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jettison-1.3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jruby-complete-1.6.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsch-0.1.42.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsp-2.1-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsr305-1.3.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/junit-4.11.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/leveldbjni-all-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/libthrift-0.9.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/log4j-1.2.17.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/metrics-core-2.2.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/netty-3.6.6.Final.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ojdbc6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/paranamer-2.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/phoenix-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-hbase-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/r
anger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/servlet-api-2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/slf4j-api-1.6.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/snappy-java-1.0.4.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xercesImpl-2.9.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xml-apis-1.3.04.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xmlenc-0.52.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xz-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/zookeeper.jar:/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ojdbc6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2
/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/javax.persistence-2.1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/
junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-hdfs-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/azure-storage-2.0.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-azure.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-annotations.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-common-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-nfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-auth.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-azure-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-nfs.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-common-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-common.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/./:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/
hadoop-hdfs/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-nfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/guice-3.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/li
b/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoo
p-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jline-0.9.94.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/azure-storage-2.0.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-api-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//
hadoop-yarn-server-web-proxy.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-client-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-registry-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-tests-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/guice-3.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jersey
-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//aws-java-sdk-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-datajoin-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-ant.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//common
s-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-databind-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-auth.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-sls-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-streaming-2
.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-annotations-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-extras-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-openstack-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-aws.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-distcp-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-archives-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-m
apreduce-client-jobclient-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-aws-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-gridmix-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//joda-time-2.7.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-rumen-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-ant-2.6.0.2.2.4.2-2.jar::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/current
/hadoop-mapreduce-client/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hadoop-mapreduce-client/jaxb-api-2.2.2.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-beanutils-1.7.0.jar:/usr/hdp/current/hadoop-mapreduce-client/aws-java-sdk-1.7.4.jar:/usr/hdp/current/hadoop-mapreduce-client/jettison-1.1.jar:/usr/hdp/current/hadoop-mapreduce-client/httpclient-4.2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-datajoin-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-extras.jar:/usr/hdp/current/hadoop-mapreduce-client/jetty-6.1.26.hwx.jar:/usr/hdp/current/hadoop-mapreduce-client/xz-1.0.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar:/usr/hdp/current/hadoop-mapreduce-client/jets3t-0.9.0.jar:/usr/hdp/current/hadoop-mapreduce-client/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-net-3.1.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-jaxrs-1.9.13.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-ant.jar:/usr/hdp/current/hadoop-mapreduce-client/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jsr305-1.3.9.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-io-2.4.jar:/usr/hdp/current/hadoop-mapreduce-client/guava-11.0.2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar:/usr/hdp/current/hadoop-mapreduce-client/jersey-json-1.9.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-lang-2.6.jar:/usr/hdp/current/hadoop-mapreduce-client/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-databind-2.2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/curator-client-2.6.0.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-digester-1.8.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-httpclient-3.1.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-compress-1.4.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-auth.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core.jar:/usr/hdp/current/hadoop-mapreduce-c
lient/hadoop-mapreduce-client-shuffle.jar:/usr/hdp/current/hadoop-mapreduce-client/jsch-0.1.42.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-logging-1.1.3.jar:/usr/hdp/current/hadoop-mapreduce-client/jasper-compiler-5.5.23.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/current/hadoop-mapreduce-client/paranamer-2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-rumen.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-gridmix.jar:/usr/hdp/current/hadoop-mapreduce-client/hamcrest-core-1.3.jar:/usr/hdp/current/hadoop-mapreduce-client/java-xmlbuilder-0.4.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/current/hadoop-mapreduce-client/curator-framework-2.6.0.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-xc-1.9.13.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-el-1.0.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-sls-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jersey-core-1.9.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-app.jar:/usr/hdp/current/hadoop-mapreduce-client/log4j-1.2.17.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/protobuf-java-2.5.0.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-annotations-2.2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-extras-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/mockito-all-1.8.5.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapr
educe-client-shuffle-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-app-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/gson-2.2.4.jar:/usr/hdp/current/hadoop-mapreduce-client/snappy-java-1.0.4.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-openstack-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-collections-3.2.1.jar:/usr/hdp/current/hadoop-mapreduce-client/htrace-core-3.0.4.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-openstack.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/activation-1.1.jar:/usr/hdp/current/hadoop-mapreduce-client/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hadoop-mapreduce-client/jersey-server-1.9.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-common.jar:/usr/hdp/current/hadoop-mapreduce-client/stax-api-1.0-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-aws.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-configuration-1.6.jar:/usr/hdp/current/hadoop-mapreduce-client/avro-1.7.4.jar:/usr/hdp/current/hadoop-mapreduce-client/api-asn1-api-1.0.0-M20.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-distcp-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jsp-api-2.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-archives-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jasper-runtime-5.5.23.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-datajoin.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-aws-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-gridmix-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-beanutils-core-1.8.0.jar:/
usr/hdp/current/hadoop-mapreduce-client/junit-4.11.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-core-2.2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/servlet-api-2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-codec-1.4.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-cli-1.2.jar:/usr/hdp/current/hadoop-mapreduce-client/joda-time-2.7.jar:/usr/hdp/current/hadoop-mapreduce-client/asm-3.2.jar:/usr/hdp/current/hadoop-mapreduce-client/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hadoop-mapreduce-client/httpcore-4.2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-math3-3.1.1.jar:/usr/hdp/current/hadoop-mapreduce-client/metrics-core-3.0.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/current/hadoop-mapreduce-client/netty-3.6.2.Final.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-sls.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-archives.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-lang3-3.3.2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-rumen-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-distcp.jar:/usr/hdp/current/hadoop-mapreduce-client/curator-recipes-2.6.0.jar:/usr/hdp/current/hadoop-mapreduce-client/xmlenc-0.52.jar:/usr/hdp/current/hadoop-mapreduce-client/api-util-1.0.0-M20.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-ant-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-runtime-library-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-common-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-yarn-timeline-history-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-api-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-tests-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-mapreduce-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-dag-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-mbeans-resource-calculator-0.5.2.2.2.4.2-2.jar:/u
sr/hdp/current/tez-client/tez-examples-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-runtime-internals-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/jsr305-2.0.3.jar:/usr/hdp/current/tez-client/lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/tez-client/lib/commons-io-2.4.jar:/usr/hdp/current/tez-client/lib/guava-11.0.2.jar:/usr/hdp/current/tez-client/lib/commons-collections4-4.0.jar:/usr/hdp/current/tez-client/lib/commons-lang-2.6.jar:/usr/hdp/current/tez-client/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/tez-client/lib/commons-logging-1.1.3.jar:/usr/hdp/current/tez-client/lib/log4j-1.2.17.jar:/usr/hdp/current/tez-client/lib/protobuf-java-2.5.0.jar:/usr/hdp/current/tez-client/lib/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/commons-collections-3.2.1.jar:/usr/hdp/current/tez-client/lib/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/jettison-1.3.4.jar:/usr/hdp/current/tez-client/lib/hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/servlet-api-2.5.jar:/usr/hdp/current/tez-client/lib/commons-codec-1.4.jar:/usr/hdp/current/tez-client/lib/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/commons-cli-1.2.jar:/usr/hdp/current/tez-client/lib/commons-math3-3.1.1.jar:/etc/tez/conf/:/usr/hdp/2.2.4.2-2/tez/tez-runtime-library-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-common-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-yarn-timeline-history-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-api-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-tests-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-mapreduce-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-dag-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-mbeans-resource-calculator-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-examples-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-runtime-internals-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/jsr305-2.0.3.jar:/usr/hdp/2.2.4.2-2/tez/lib/jetty-6.1.26
.hwx.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-collections4-4.0.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/tez/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/tez/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/jettison-1.3.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-math3-3.1.1.jar:/etc/tez/conf:/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/hadoop-azure.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-annotations.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-nfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-auth.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-azure-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-nfs.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ojdbc6.
jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/javax.persistence-2.1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons
-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-hdfs-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/azure-storage-2.0.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/zookeeper/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/zookeeper/zookeeper.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/httpcore-4.2.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/httpclient-4.2.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-artifact-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/log4j-1.2.16.
jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-file-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-shared4-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/classworlds-1.1-alpha-2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-utils-3.0.8.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/netty-3.7.0.Final.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-profile-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/nekohtml-1.9.6.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-provider-api-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/jsoup-1.7.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-logging-1.1.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-codec-1.6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-interpolation-1.11.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/slf4j-api-1.6.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/ant-1.8.0.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-io-2.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-project-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/backport-util-concurrent-3.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/ant-launcher-1.8.0.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-model-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/jline-0.9.94.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-settings-2.2.1.jar:
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.2.4.2-2/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.4.2-2/hadoop/lib/native
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.name=Linux
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.11.2.el6.x86_64
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.name=hbase
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2015-06-29 21:07:05,198 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=regionserver:60020, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:05,215 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=regionserver:60020 connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:05,230 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.Login: successfully logged in.
2015-06-29 21:07:05,234 INFO  [Thread-10] zookeeper.Login: TGT refresh thread started.
2015-06-29 21:07:05,236 INFO  [regionserver60020-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:05,244 INFO  [Thread-10] zookeeper.Login: TGT valid starting at:        Mon Jun 29 21:07:05 UTC 2015
2015-06-29 21:07:05,244 INFO  [Thread-10] zookeeper.Login: TGT expires:                  Tue Jun 30 21:07:05 UTC 2015
2015-06-29 21:07:05,245 INFO  [Thread-10] zookeeper.Login: TGT refresh sleeping until: Tue Jun 30 17:29:07 UTC 2015
2015-06-29 21:07:05,247 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:05,249 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:05,261 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb70009, negotiated timeout = 30000
2015-06-29 21:07:05,514 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x55af9c7d, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:05,521 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55af9c7d connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:05,522 INFO  [regionserver60020-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:05,523 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:05,523 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:05,526 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb7000a, negotiated timeout = 30000
2015-06-29 21:07:06,199 INFO  [main] regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
2015-06-29 21:07:06,206 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@443fdee7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:06,211 DEBUG [regionserver60020] hbase.HRegionInfo: 1588230740
2015-06-29 21:07:06,212 DEBUG [regionserver60020] catalog.CatalogTracker: Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@19e951c9
2015-06-29 21:07:06,215 INFO  [regionserver60020] regionserver.HRegionServer: ClusterId : 05d0370c-07a6-40ff-ab97-5be7d7ae1f36
2015-06-29 21:07:06,218 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is initializing
2015-06-29 21:07:06,230 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase-secure/online-snapshot/acquired already exists and this is not a retry
2015-06-29 21:07:06,235 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is initialized
2015-06-29 21:07:06,240 INFO  [regionserver60020] regionserver.MemStoreFlusher: globalMemStoreLimit=401.6 M, globalMemStoreLimitLowMark=381.5 M, maxHeap=1004 M
2015-06-29 21:07:06,242 INFO  [regionserver60020] regionserver.HRegionServer: CompactionChecker runs every 10sec
2015-06-29 21:07:06,244 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@175e895d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=dn.example.com/172.31.3.128:0
2015-06-29 21:07:06,250 INFO  [regionserver60020] regionserver.HRegionServer: reportForDuty to master=rm.example.com,60000,1435297869160 with port=60020, startcode=1435612024042
2015-06-29 21:07:06,359 DEBUG [regionserver60020] token.AuthenticationTokenSelector: No matching token found
2015-06-29 21:07:06,360 DEBUG [regionserver60020] ipc.RpcClient: RPC Server Kerberos principal name for service=RegionServerStatusService is hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:06,360 DEBUG [regionserver60020] ipc.RpcClient: Use KERBEROS authentication for service RegionServerStatusService, sasl=true
2015-06-29 21:07:06,372 DEBUG [regionserver60020] ipc.RpcClient: Connecting to rm.example.com/172.31.3.127:60000
2015-06-29 21:07:06,378 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Creating SASL GSSAPI client. Server's Kerberos principal name is hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:06,384 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Have sent token of size 633 from initSASLContext.
2015-06-29 21:07:06,388 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will read input token of size 108 for processing by initSASLContext
2015-06-29 21:07:06,390 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will send token of size 0 from initSASLContext.
2015-06-29 21:07:06,391 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will read input token of size 32 for processing by initSASLContext
2015-06-29 21:07:06,392 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will send token of size 32 from initSASLContext.
2015-06-29 21:07:06,392 DEBUG [regionserver60020] security.HBaseSaslRpcClient: SASL client context established. Negotiated QoP: auth
2015-06-29 21:07:06,411 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: hbase.rootdir=hdfs://nn.example.com:8020/apps/hbase/data
2015-06-29 21:07:06,412 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: fs.default.name=hdfs://nn.example.com:8020
2015-06-29 21:07:06,412 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:06,412 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: hbase.master.info.port=60010
2015-06-29 21:07:06,412 INFO  [regionserver60020] Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2015-06-29 21:07:06,430 INFO  [regionserver60020] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-06-29 21:07:06,437 DEBUG [regionserver60020] regionserver.HRegionServer: logdir=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:06,542 DEBUG [regionserver60020] regionserver.Replication: ReplicationStatisticsThread 300
2015-06-29 21:07:06,553 INFO  [regionserver60020] wal.FSHLog: WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true
2015-06-29 21:07:06,798 INFO  [regionserver60020] wal.FSHLog: New WAL /apps/hbase/data/WALs/dn.example.com,60020,1435612024042/dn.example.com%2C60020%2C1435612024042.1435612026585
2015-06-29 21:07:06,814 INFO  [regionserver60020] regionserver.MetricsRegionServerWrapperImpl: Computing regionserver metrics every 5000 milliseconds
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_OPEN_REGION-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_OPEN_META-dn:60020, corePoolSize=1, maxPoolSize=1
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_CLOSE_REGION-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_CLOSE_META-dn:60020, corePoolSize=1, maxPoolSize=1
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_LOG_REPLAY_OPS-dn:60020, corePoolSize=2, maxPoolSize=2
2015-06-29 21:07:06,823 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,826 INFO  [regionserver60020] regionserver.ReplicationSourceManager: Current list of replicators: [dn.example.com,60020,1435612024042] other RSs: [dn.example.com,60020,1435612024042]
2015-06-29 21:07:06,875 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:06,885 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x64d5c83f, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:06,890 INFO  [regionserver60020-SendThread(nn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:06,891 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server nn.example.com/172.31.3.126:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:06,891 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x64d5c83f connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:06,895 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to nn.example.com/172.31.3.126:2181, initiating session
2015-06-29 21:07:06,909 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server nn.example.com/172.31.3.126:2181, sessionid = 0x24e2be20e450014, negotiated timeout = 30000
2015-06-29 21:07:06,939 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b0a2c6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:06,954 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase-secure/tokenauth/keys already exists and this is not a retry
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 10
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 17
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 15
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 16
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 13
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 14
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 11
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 12
2015-06-29 21:07:06,969 INFO  [ZKSecretWatcher-leaderElector] zookeeper.RecoverableZooKeeper: Node /hbase-secure/tokenauth/keymaster already exists and this is not a retry
2015-06-29 21:07:06,970 INFO  [ZKSecretWatcher-leaderElector] zookeeper.ZKLeaderManager: Found existing leader with ID: dn.example.com,60020,1435612024042
2015-06-29 21:07:07,017 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2015-06-29 21:07:07,018 INFO  [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: starting
2015-06-29 21:07:07,018 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=0 queue=0
2015-06-29 21:07:07,018 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=1 queue=1
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=2 queue=2
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=3 queue=3
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=4 queue=4
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=5 queue=5
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=6 queue=0
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=7 queue=1
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=8 queue=2
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=9 queue=3
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=10 queue=4
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=11 queue=5
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=12 queue=0
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=13 queue=1
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=14 queue=2
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=15 queue=3
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=16 queue=4
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=17 queue=5
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=18 queue=0
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=19 queue=1
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=20 queue=2
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=21 queue=3
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=22 queue=4
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=23 queue=5
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=24 queue=0
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=25 queue=1
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=26 queue=2
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=27 queue=3
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=28 queue=4
2015-06-29 21:07:07,025 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=29 queue=5
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=30 queue=0
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=31 queue=1
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=32 queue=2
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=33 queue=3
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=34 queue=4
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=35 queue=5
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=36 queue=0
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=37 queue=1
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=38 queue=2
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=39 queue=3
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=40 queue=4
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=41 queue=5
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=42 queue=0
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=43 queue=1
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=44 queue=2
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=45 queue=3
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=46 queue=4
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=47 queue=5
2015-06-29 21:07:07,035 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=48 queue=0
2015-06-29 21:07:07,035 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=49 queue=1
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=50 queue=2
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=51 queue=3
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=52 queue=4
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=53 queue=5
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=54 queue=0
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=55 queue=1
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=56 queue=2
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=57 queue=3
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=58 queue=4
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=59 queue=5
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=0 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=1 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=2 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=3 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=4 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=5 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=6 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=7 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=8 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=9 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=0 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=1 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=2 queue=0
2015-06-29 21:07:07,074 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:07,083 INFO  [regionserver60020] regionserver.HRegionServer: Serving as dn.example.com,60020,1435612024042, RpcServer on dn.example.com/172.31.3.128:60020, sessionid=0x14e2be1fbb70009
2015-06-29 21:07:07,083 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is starting
2015-06-29 21:07:07,083 DEBUG [regionserver60020] snapshot.RegionServerSnapshotManager: Start Snapshot Manager dn.example.com,60020,1435612024042
2015-06-29 21:07:07,083 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Starting procedure member 'dn.example.com,60020,1435612024042'
2015-06-29 21:07:07,083 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Checking for aborted procedures on node: '/hbase-secure/online-snapshot/abort'
2015-06-29 21:07:07,083 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker dn.example.com,60020,1435612024042 starting
2015-06-29 21:07:07,084 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Looking for new procedures under znode:'/hbase-secure/online-snapshot/acquired'
2015-06-29 21:07:07,084 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is started
2015-06-29 21:07:07,111 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x7db5292f, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:07,118 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:07,119 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:07,119 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:07,122 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7db5292f connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:07,128 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb7000b, negotiated timeout = 30000
2015-06-29 21:07:07,143 DEBUG [SplitLogWorker-dn.example.com,60020,1435612024042] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7aa9a046, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:07,165 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: worker dn.example.com,60020,1435612024042 acquired task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta
2015-06-29 21:07:07,224 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:07,234 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Splitting hlog: hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta, length=91
2015-06-29 21:07:07,234 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: DistributedLogReplay = false
2015-06-29 21:07:07,240 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta
2015-06-29 21:07:07,246 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta after 6ms
2015-06-29 21:07:07,339 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-1] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-1,5,main]: starting
2015-06-29 21:07:07,339 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-2] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-2,5,main]: starting
2015-06-29 21:07:07,338 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-0] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-0,5,main]: starting
2015-06-29 21:07:07,342 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Finishing writing output logs and closing down.
2015-06-29 21:07:07,342 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Waiting for split writer threads to finish
2015-06-29 21:07:07,343 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Split writers finished
2015-06-29 21:07:07,345 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Processed 0 edits across 0 regions; log file=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta is corrupted = false progress failed = false
2015-06-29 21:07:07,351 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] handler.HLogSplitterHandler: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta to final state DONE dn.example.com,60020,1435612024042
2015-06-29 21:07:07,351 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] handler.HLogSplitterHandler: worker dn.example.com,60020,1435612024042 done with task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta in 168ms
2015-06-29 21:07:07,387 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker: tasks arrived or departed
2015-06-29 21:07:08,433 DEBUG [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: connection from 172.31.3.127:37982; # active connections: 1
2015-06-29 21:07:08,434 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Kerberos principal name is hbase/dn.example.com@EXAMPLE.COM
2015-06-29 21:07:08,436 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Created SASL server with mechanism = GSSAPI
2015-06-29 21:07:08,436 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 633 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,441 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Will send token of size 108 from saslServer.
2015-06-29 21:07:08,444 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 0 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,445 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Will send token of size 32 from saslServer.
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 32 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] security.HBaseSaslRpcServer: SASL server GSSAPI callback: setting canonicalized client ID: hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: SASL server context established. Authenticated client: hbase/rm.example.com@EXAMPLE.COM (auth:SIMPLE). Negotiated QoP is auth
2015-06-29 21:07:08,481 INFO  [PriorityRpcServer.handler=0,queue=0,port=60020] regionserver.HRegionServer: Open hbase:meta,,1.1588230740
2015-06-29 21:07:08,506 INFO  [PriorityRpcServer.handler=0,queue=0,port=60020] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-06-29 21:07:08,608 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioning 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2015-06-29 21:07:08,617 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioned node 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2015-06-29 21:07:08,618 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: logdir=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,618 INFO  [RS_OPEN_META-dn:60020-0] wal.FSHLog: WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true
2015-06-29 21:07:08,635 INFO  [RS_OPEN_META-dn:60020-0] wal.FSHLog: New WAL /apps/hbase/data/WALs/dn.example.com,60020,1435612024042/dn.example.com%2C60020%2C1435612024042.1435612028621.meta
2015-06-29 21:07:08,650 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2015-06-29 21:07:08,669 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AuthenticationService
2015-06-29 21:07:08,671 INFO  [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870911).
2015-06-29 21:07:08,676 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=SecureBulkLoadService
2015-06-29 21:07:08,686 ERROR [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:148)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:257)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:160)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:192)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:701)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:608)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5438)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5749)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2345)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1300)
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:136)
        ... 21 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at org.apache.hadoop.ipc.Client.call(Client.java:1469)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy18.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:361)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy19.setPermission(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294)
        at com.sun.proxy.$Proxy20.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2343)
        ... 26 more
2015-06-29 21:07:08,688 FATAL [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: ABORTING region server dn.example.com,60020,1435612024042: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:148)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:257)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:160)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:192)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:701)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:608)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5438)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5749)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2345)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1300)
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:136)
        ... 21 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at org.apache.hadoop.ipc.Client.call(Client.java:1469)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy18.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:361)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy19.setPermission(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294)
        at com.sun.proxy.$Proxy20.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2343)
        ... 26 more
2015-06-29 21:07:08,690 FATAL [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.security.token.TokenProvider]
2015-06-29 21:07:08,699 INFO  [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: STOPPED: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
2015-06-29 21:07:08,700 INFO  [regionserver60020] ipc.RpcServer: Stopping server on 60020
2015-06-29 21:07:08,700 INFO  [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: stopping
2015-06-29 21:07:08,701 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-06-29 21:07:08,701 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-06-29 21:07:08,701 INFO  [regionserver60020] regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2015-06-29 21:07:08,702 INFO  [regionserver60020] regionserver.HRegionServer: Stopping infoServer
2015-06-29 21:07:08,704 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException
2015-06-29 21:07:08,704 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker dn.example.com,60020,1435612024042 exiting
2015-06-29 21:07:08,716 INFO  [regionserver60020] mortbay.log: Stopped HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030
2015-06-29 21:07:08,716 WARN  [1614105668@qtp-278229142-1 - Acceptor0 HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030] http.HttpServer: HttpServer Acceptor: isRunning is false. Rechecking.
2015-06-29 21:07:08,716 WARN  [1614105668@qtp-278229142-1 - Acceptor0 HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030] http.HttpServer: HttpServer Acceptor: isRunning is false
2015-06-29 21:07:08,718 INFO  [regionserver60020] snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager abruptly.
2015-06-29 21:07:08,719 INFO  [regionserver60020.logRoller] regionserver.LogRoller: LogRoller exiting.
2015-06-29 21:07:08,719 INFO  [regionserver60020.nonceCleaner] regionserver.ServerNonceManager$1: regionserver60020.nonceCleaner exiting
2015-06-29 21:07:08,719 INFO  [regionserver60020.compactionChecker] regionserver.HRegionServer$CompactionChecker: regionserver60020.compactionChecker exiting
2015-06-29 21:07:08,720 INFO  [regionserver60020] regionserver.HRegionServer: aborting server dn.example.com,60020,1435612024042
2015-06-29 21:07:08,721 DEBUG [regionserver60020] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@19e951c9
2015-06-29 21:07:08,721 INFO  [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14e2be1fbb7000a
2015-06-29 21:07:08,718 INFO  [MemStoreFlusher.0] regionserver.MemStoreFlusher: MemStoreFlusher.0 exiting
2015-06-29 21:07:08,723 INFO  [MemStoreFlusher.1] regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
2015-06-29 21:07:08,719 INFO  [RS_OPEN_META-dn:60020-0-MetaLogRoller] regionserver.LogRoller: LogRoller exiting.
2015-06-29 21:07:08,724 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x14e2be1fbb7000a closed
2015-06-29 21:07:08,724 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:08,725 INFO  [regionserver60020] regionserver.HRegionServer: stopping server dn.example.com,60020,1435612024042; all regions closed.
2015-06-29 21:07:08,725 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier interrupted while waiting for  notification from AsyncSyncer thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0 exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1 exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncWriter] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncWriter] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncWriter exiting
2015-06-29 21:07:08,726 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier interrupted while waiting for  notification from AsyncSyncer thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,736 INFO  [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 exiting
2015-06-29 21:07:08,736 DEBUG [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2015-06-29 21:07:08,736 INFO  [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter exiting
2015-06-29 21:07:08,736 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,737 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AccessControlService
2015-06-29 21:07:08,743 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:08,743 INFO  [RS_OPEN_META-dn:60020-0] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
2015-06-29 21:07:08,743 INFO  [RS_OPEN_META-dn:60020-0] util.ChecksumType: Checksum can use org.apache.hadoop.util.PureJavaCrc32C
2015-06-29 21:07:08,743 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closing leases
2015-06-29 21:07:08,744 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closed leases
2015-06-29 21:07:08,744 INFO  [RS_OPEN_META-dn:60020-0] access.AccessController: A minimum HFile version of 3 is required to persist cell ACLs. Consider setting hfile.format.version accordingly.
2015-06-29 21:07:08,756 INFO  [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870912).
2015-06-29 21:07:08,759 INFO  [RS_OPEN_META-dn:60020-0] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:08,761 DEBUG [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2015-06-29 21:07:08,763 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2015-06-29 21:07:08,763 INFO  [RS_OPEN_META-dn:60020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2015-06-29 21:07:08,767 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.MetricsRegionSourceImpl: Creating new MetricsRegionSourceImpl for table meta 1588230740
2015-06-29 21:07:08,767 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Instantiated hbase:meta,,1.1588230740
2015-06-29 21:07:08,825 INFO  [StoreOpener-1588230740-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000
2015-06-29 21:07:08,883 DEBUG [StoreOpener-1588230740-1] regionserver.HStore: loaded hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/773daf23518042b49cded2d0f6705ad7, isReference=false, isBulkLoadResult=false, seqid=52, majorCompaction=true
2015-06-29 21:07:08,892 DEBUG [StoreOpener-1588230740-1] regionserver.HStore: loaded hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/d3e9d66e7adf462bac4b758191ad7152, isReference=false, isBulkLoadResult=false, seqid=60, majorCompaction=false
2015-06-29 21:07:08,902 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Found 0 recovered edits file(s) under hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740
2015-06-29 21:07:08,922 DEBUG [RS_OPEN_META-dn:60020-0] wal.HLogUtil: Written region seqId to file:hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/recovered.edits/65_seqid ,newSeqId=65 ,maxSeqId=64
2015-06-29 21:07:08,924 INFO  [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Onlined 1588230740; next sequenceid=65
2015-06-29 21:07:08,931 ERROR [RS_OPEN_META-dn:60020-0] handler.OpenRegionHandler: Failed open of region=hbase:meta,,1.1588230740, starting to roll back the global memstore size.
java.io.IOException: Cannot append; log is closed
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:1000)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.appendNoSync(FSHLog.java:1053)
        at org.apache.hadoop.hbase.regionserver.wal.HLogUtil.writeRegionEventMarker(HLogUtil.java:309)
        at org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:933)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5785)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5750)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2015-06-29 21:07:08,931 INFO  [RS_OPEN_META-dn:60020-0] handler.OpenRegionHandler: Opening of region {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} failed, transitioning from OPENING to FAILED_OPEN in ZK, expecting version 5
2015-06-29 21:07:08,931 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioning 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2015-06-29 21:07:08,935 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioned node 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2015-06-29 21:07:16,825 INFO  [regionserver60020.periodicFlusher] regionserver.HRegionServer$PeriodicMemstoreFlusher: regionserver60020.periodicFlusher exiting
2015-06-29 21:07:16,825 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Split Thread to finish...
2015-06-29 21:07:16,825 INFO  [regionserver60020.leaseChecker] regionserver.Leases: regionserver60020.leaseChecker closing leases
2015-06-29 21:07:16,826 INFO  [regionserver60020.leaseChecker] regionserver.Leases: regionserver60020.leaseChecker closed leases
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Merge Thread to finish...
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Large Compaction Thread to finish...
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Small Compaction Thread to finish...
2015-06-29 21:07:16,831 INFO  [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x24e2be20e450014
2015-06-29 21:07:16,833 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:16,833 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x24e2be20e450014 closed
2015-06-29 21:07:16,834 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:16,839 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x14e2be1fbb70009 closed
2015-06-29 21:07:16,839 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:16,839 INFO  [regionserver60020] regionserver.HRegionServer: stopping server dn.example.com,60020,1435612024042; zookeeper connection closed.
2015-06-29 21:07:16,839 INFO  [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
2015-06-29 21:07:16,839 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2501)
2015-06-29 21:07:16,840 INFO  [Thread-12] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@1f343622
2015-06-29 21:07:16,841 INFO  [Thread-12] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2015-06-29 21:07:16,841 INFO  [Thread-12] regionserver.ShutdownHook: Shutdown hook finished.



Please help

Thanks,
Venkat
From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Friday, June 26, 2015 1:10 PM
To: common-user@hadoop.apache.org
Subject: Re: HBASE Region server failing to start after Kerberos is enabled

Can you post the complete stack trace for 'Failed to get FileSystem instance'?

What's the permission for /apps/hbase/staging ?
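(For reference, the owner and mode of the staging directory can be checked like this; the path is taken from hbase.bulkload.staging.dir in hbase-site.xml and may differ on your cluster:)

```shell
# Inspect the staging directory itself (-d), not its contents.
hdfs dfs -ls -d /apps/hbase/staging
```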

Looking at the commit log of SecureBulkLoadEndpoint.java, there have been a lot of bug fixes since 0.98.4.
Please consider upgrading HBase.

Cheers

On Fri, Jun 26, 2015 at 10:48 AM, Gangavarupu, Venkata - Contingent Worker <ve...@bcbsa.com> wrote:
HI All,

The region servers are failing to start, after Kerberos is enabled, with the below error.
Hadoop -2.6.0
HBase-0.98.4

2015-06-24 15:58:48,884 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AuthenticationService
2015-06-24 15:58:48,886 INFO  [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870911).
2015-06-24 15:58:48,894 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=SecureBulkLoadService
2015-06-24 15:58:48,907 ERROR [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance

I see the below properties are included in the hbase-site.xml file:

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>

<property>
  <name>hbase.bulkload.staging.dir</name>
  <value>/apps/hbase/staging</value>
</property>


I deleted org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint from hbase.coprocessor.region.classes and tried to start again. It worked.
I think SecureBulkLoad is causing the problem.

Please help me get past this issue. I would like to keep the SecureBulkLoadEndpoint class enabled.

Thanks,
Venkat



RE: HBASE Region server failing to start after Kerberos is enabled

Posted by "Gangavarupu, Venkata - Contingent Worker" <ve...@bcbsa.com>.
Hi,

I have attached the logs for the HBase region server failures with SecureBulkLoadEndpoint after enabling Kerberos.

The permission on /apps/hbase/staging is:

drwxrwxrwx   - ams   hdfs          0 2015-06-08 19:17 /apps/hbase/staging
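(That listing shows the directory is owned by ams, while SecureBulkLoadEndpoint.start() runs as the hbase service user and calls FileSystem.setPermission() on the staging directory; HDFS only allows the owner or the superuser to change permissions, which matches the checkOwner failure in the stack traces below. A sketch of a possible fix, assuming the HBase service user is named hbase and you can act as the HDFS superuser -- names and paths are assumptions, adjust to your environment:)

```shell
# Run with HDFS superuser credentials (on a Kerberized cluster,
# e.g. kinit with the hdfs keytab first).
hdfs dfs -chown hbase:hdfs /apps/hbase/staging   # setPermission() requires owning the directory
hdfs dfs -chmod 711 /apps/hbase/staging          # the restrictive mode the endpoint tries to set itself at startup
hdfs dfs -ls -d /apps/hbase/staging              # verify the new owner and mode
```

(Once the ownership is corrected, the region server's own setPermission() call should succeed, so SecureBulkLoadEndpoint can stay listed in hbase.coprocessor.region.classes.)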

2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:QTINC=/usr/lib64/qt-3.3/include
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:USER=hbase
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_CLASSPATH=/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/*:/usr/hdp/2.2.4.2-2/hadoop/lib/*:/usr/hdp/2.2.4.2-2/zookeeper/*:/usr/hdp/2.2.4.2-2/zookeeper/lib/*:
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HOME=/home/hbase
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HISTCONTROL=ignoredups
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:LESSOPEN=|/usr/bin/lesspipe.sh %s
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hbase-regionserver-dn.example.com
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:LANG=en_US.UTF-8
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hbase
2015-06-29 21:07:03,160 INFO  [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=24.65-b04
2015-06-29 21:07:03,161 INFO  [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx1000m, -Dhdp.version=2.2.4.2-2, -XX:+UseConcMarkSweepGC, -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, -Djava.security.auth.login.config=/etc/hbase/conf/hbase_client_jaas.conf, -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201506292107, -Xmn200m, -XX:CMSInitiatingOccupancyFraction=70, -Xms1024m, -Xmx1024m, -Djava.security.auth.login.config=/etc/hbase/conf/hbase_regionserver_jaas.conf, -Dhbase.log.dir=/var/log/hbase, -Dhbase.log.file=hbase-hbase-regionserver-dn.example.com.log, -Dhbase.home.dir=/usr/hdp/current/hbase-regionserver/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=:/usr/hdp/2.2.4.2-2/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.4.2-2/hadoop/lib/native, -Dhbase.security.logger=INFO,RFAS]
2015-06-29 21:07:03,360 DEBUG [main] regionserver.HRegionServer: regionserver/dn.example.com/172.31.3.128:60020 HConnection server-to-server retries=350
2015-06-29 21:07:03,617 INFO  [main] ipc.SimpleRpcScheduler: Using default user call queue, count=6
2015-06-29 21:07:03,652 INFO  [main] ipc.RpcServer: regionserver/dn.example.com/172.31.3.128:60020: started 10 reader(s).
2015-06-29 21:07:03,761 INFO  [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2015-06-29 21:07:03,809 INFO  [main] timeline.HadoopTimelineMetricsSink: Initializing Timeline metrics sink.
2015-06-29 21:07:03,809 INFO  [main] timeline.HadoopTimelineMetricsSink: Identified hostname = dn.example.com, serviceName = hbase
2015-06-29 21:07:03,872 INFO  [main] timeline.HadoopTimelineMetricsSink: Collector Uri: http://nn.example.com:6188/ws/v1/timeline/metrics
2015-06-29 21:07:03,883 INFO  [main] impl.MetricsSinkAdapter: Sink timeline started
2015-06-29 21:07:03,955 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-06-29 21:07:03,955 INFO  [main] impl.MetricsSystemImpl: HBase metrics system started
2015-06-29 21:07:04,470 INFO  [main] security.UserGroupInformation: Login successful for user hbase/dn.example.com@EXAMPLE.COM using keytab file /etc/security/keytabs/hbase.service.keytab
2015-06-29 21:07:04,475 INFO  [main] hfile.CacheConfig: Allocating LruBlockCache with maximum size 401.6 M
2015-06-29 21:07:04,520 INFO  [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-06-29 21:07:04,569 INFO  [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2015-06-29 21:07:04,582 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2015-06-29 21:07:04,582 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-06-29 21:07:04,593 INFO  [main] http.HttpServer: Jetty bound to port 60030
2015-06-29 21:07:04,593 INFO  [main] mortbay.log: jetty-6.1.26.hwx
2015-06-29 21:07:05,169 INFO  [main] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-2--1, built on 03/31/2015 19:31 GMT
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:host.name=dn.example.com
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.7.0_67/jre
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.class.path=/etc/hbase/conf:/usr/jdk64/jdk1.7.0_67/lib/tools.jar:/usr/hdp/current/hbase-regionserver/bin/..:/usr/hdp/current/hbase-regionserver/bin/../lib/activation-1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/aopalliance-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/api-util-1.0.0-M20.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/asm-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/avro-1.7.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/azure-storage-2.0.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-beanutils-1.7.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-cli-1.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-codec-1.7.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-collections-3.2.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-compress-1.4.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-configuration-1.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-daemon-1.0.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-digester-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-el-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-httpclient-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-io-2.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-lang-2.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-lang3-3.3.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-math-2.1.jar:/usr/hdp/current/hb
ase-regionserver/bin/../lib/commons-math3-3.1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-net-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-client-2.6.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-framework-2.6.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-recipes-2.6.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/eclipselink-2.5.2-M1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/findbugs-annotations-1.3.9-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/gson-2.2.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guava-12.0.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guice-3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guice-servlet-3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hamcrest-core-1.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-client-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-client.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common-0.98.4.2.2.4.2-2-hadoop2-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-examples-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-examples.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop2-compat-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop2-compat.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop-compat-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop-compat.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it-0.98.4.2.2.4.2-2-hadoop2-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-p
refix-tree-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-prefix-tree.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-protocol-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-protocol.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server-0.98.4.2.2.4.2-2-hadoop2-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-shell-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-shell.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-testing-util-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-testing-util.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-thrift-0.98.4.2.2.4.2-2-hadoop2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-thrift.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/high-scale-lib-1.1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/htrace-core-2.04.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/htrace-core-3.0.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/httpclient-4.2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/httpcore-4.1.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-core-2.2.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-xc-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jamon-runtime-2.3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jasper-compiler-5.5.23.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jasper-runtime-5.5.23.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/javax.inject-1.jar:/usr/hdp/current/hbase-regi
onserver/bin/../lib/java-xmlbuilder-0.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jaxb-api-2.2.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-client-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-core-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-guice-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-json-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-server-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jets3t-0.9.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jettison-1.3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jruby-complete-1.6.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsch-0.1.42.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsp-2.1-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsr305-1.3.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/junit-4.11.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/leveldbjni-all-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/libthrift-0.9.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/log4j-1.2.17.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/metrics-core-2.2.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/netty-3.6.6.Final.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ojdbc6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/paranamer-2.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/phoenix-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-hbase-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/r
anger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/servlet-api-2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/slf4j-api-1.6.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/snappy-java-1.0.4.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xercesImpl-2.9.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xml-apis-1.3.04.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xmlenc-0.52.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xz-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/zookeeper.jar:/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ojdbc6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2
/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/javax.persistence-2.1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/
junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-hdfs-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/azure-storage-2.0.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-azure.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-annotations.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-common-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-nfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-auth.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-azure-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-nfs.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-common-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/.//hadoop-common.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/./:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/
hadoop-hdfs/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-nfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-hdfs/.//hadoop-hdfs-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/guice-3.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/li
b/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoo
p-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/jline-0.9.94.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/azure-storage-2.0.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-api-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//
hadoop-yarn-server-web-proxy.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-client-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-registry-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-tests-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/2.2.4.2-2/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/guice-3.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jersey
-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//aws-java-sdk-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-datajoin-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-ant.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//common
s-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-databind-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-auth.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-sls-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-streaming-2
.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-annotations-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-extras-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-openstack-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-aws.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-distcp-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-archives-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-m
apreduce-client-jobclient-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-aws-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-gridmix-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//joda-time-2.7.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-rumen-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop-mapreduce/.//hadoop-ant-2.6.0.2.2.4.2-2.jar::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/current
/hadoop-mapreduce-client/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hadoop-mapreduce-client/jaxb-api-2.2.2.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-beanutils-1.7.0.jar:/usr/hdp/current/hadoop-mapreduce-client/aws-java-sdk-1.7.4.jar:/usr/hdp/current/hadoop-mapreduce-client/jettison-1.1.jar:/usr/hdp/current/hadoop-mapreduce-client/httpclient-4.2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-datajoin-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-extras.jar:/usr/hdp/current/hadoop-mapreduce-client/jetty-6.1.26.hwx.jar:/usr/hdp/current/hadoop-mapreduce-client/xz-1.0.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar:/usr/hdp/current/hadoop-mapreduce-client/jets3t-0.9.0.jar:/usr/hdp/current/hadoop-mapreduce-client/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-net-3.1.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-jaxrs-1.9.13.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-ant.jar:/usr/hdp/current/hadoop-mapreduce-client/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jsr305-1.3.9.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-io-2.4.jar:/usr/hdp/current/hadoop-mapreduce-client/guava-11.0.2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar:/usr/hdp/current/hadoop-mapreduce-client/jersey-json-1.9.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-lang-2.6.jar:/usr/hdp/current/hadoop-mapreduce-client/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-databind-2.2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/curator-client-2.6.0.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-digester-1.8.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-httpclient-3.1.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-compress-1.4.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-auth.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core.jar:/usr/hdp/current/hadoop-mapreduce-c
lient/hadoop-mapreduce-client-shuffle.jar:/usr/hdp/current/hadoop-mapreduce-client/jsch-0.1.42.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-logging-1.1.3.jar:/usr/hdp/current/hadoop-mapreduce-client/jasper-compiler-5.5.23.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/current/hadoop-mapreduce-client/paranamer-2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-rumen.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-gridmix.jar:/usr/hdp/current/hadoop-mapreduce-client/hamcrest-core-1.3.jar:/usr/hdp/current/hadoop-mapreduce-client/java-xmlbuilder-0.4.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/current/hadoop-mapreduce-client/curator-framework-2.6.0.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-xc-1.9.13.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-el-1.0.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-sls-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jersey-core-1.9.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-app.jar:/usr/hdp/current/hadoop-mapreduce-client/log4j-1.2.17.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/protobuf-java-2.5.0.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-annotations-2.2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-extras-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/mockito-all-1.8.5.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapr
educe-client-shuffle-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-app-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/gson-2.2.4.jar:/usr/hdp/current/hadoop-mapreduce-client/snappy-java-1.0.4.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-openstack-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-collections-3.2.1.jar:/usr/hdp/current/hadoop-mapreduce-client/htrace-core-3.0.4.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-openstack.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/activation-1.1.jar:/usr/hdp/current/hadoop-mapreduce-client/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hadoop-mapreduce-client/jersey-server-1.9.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-common.jar:/usr/hdp/current/hadoop-mapreduce-client/stax-api-1.0-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-aws.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-configuration-1.6.jar:/usr/hdp/current/hadoop-mapreduce-client/avro-1.7.4.jar:/usr/hdp/current/hadoop-mapreduce-client/api-asn1-api-1.0.0-M20.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-distcp-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jsp-api-2.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-archives-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jasper-runtime-5.5.23.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-datajoin.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-aws-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-gridmix-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-beanutils-core-1.8.0.jar:/
usr/hdp/current/hadoop-mapreduce-client/junit-4.11.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-core-2.2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/servlet-api-2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-codec-1.4.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-cli-1.2.jar:/usr/hdp/current/hadoop-mapreduce-client/joda-time-2.7.jar:/usr/hdp/current/hadoop-mapreduce-client/asm-3.2.jar:/usr/hdp/current/hadoop-mapreduce-client/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hadoop-mapreduce-client/httpcore-4.2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-math3-3.1.1.jar:/usr/hdp/current/hadoop-mapreduce-client/metrics-core-3.0.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/current/hadoop-mapreduce-client/netty-3.6.2.Final.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-sls.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-archives.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-lang3-3.3.2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-rumen-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-distcp.jar:/usr/hdp/current/hadoop-mapreduce-client/curator-recipes-2.6.0.jar:/usr/hdp/current/hadoop-mapreduce-client/xmlenc-0.52.jar:/usr/hdp/current/hadoop-mapreduce-client/api-util-1.0.0-M20.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-ant-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-runtime-library-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-common-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-yarn-timeline-history-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-api-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-tests-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-mapreduce-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-dag-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-mbeans-resource-calculator-0.5.2.2.2.4.2-2.jar:/u
sr/hdp/current/tez-client/tez-examples-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-runtime-internals-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/jsr305-2.0.3.jar:/usr/hdp/current/tez-client/lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/tez-client/lib/commons-io-2.4.jar:/usr/hdp/current/tez-client/lib/guava-11.0.2.jar:/usr/hdp/current/tez-client/lib/commons-collections4-4.0.jar:/usr/hdp/current/tez-client/lib/commons-lang-2.6.jar:/usr/hdp/current/tez-client/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/tez-client/lib/commons-logging-1.1.3.jar:/usr/hdp/current/tez-client/lib/log4j-1.2.17.jar:/usr/hdp/current/tez-client/lib/protobuf-java-2.5.0.jar:/usr/hdp/current/tez-client/lib/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/commons-collections-3.2.1.jar:/usr/hdp/current/tez-client/lib/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/jettison-1.3.4.jar:/usr/hdp/current/tez-client/lib/hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/servlet-api-2.5.jar:/usr/hdp/current/tez-client/lib/commons-codec-1.4.jar:/usr/hdp/current/tez-client/lib/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/commons-cli-1.2.jar:/usr/hdp/current/tez-client/lib/commons-math3-3.1.1.jar:/etc/tez/conf/:/usr/hdp/2.2.4.2-2/tez/tez-runtime-library-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-common-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-yarn-timeline-history-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-api-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-tests-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-mapreduce-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-dag-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-mbeans-resource-calculator-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-examples-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-runtime-internals-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/jsr305-2.0.3.jar:/usr/hdp/2.2.4.2-2/tez/lib/jetty-6.1.26
.hwx.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-collections4-4.0.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/tez/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/tez/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/jettison-1.3.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-math3-3.1.1.jar:/etc/tez/conf:/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/hadoop-azure.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-annotations.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-nfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-auth.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-azure-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-nfs.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ojdbc6.
jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/javax.persistence-2.1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons
-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-hdfs-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/azure-storage-2.0.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/zookeeper/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/zookeeper/zookeeper.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/httpcore-4.2.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/httpclient-4.2.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-artifact-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/log4j-1.2.16.
jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-file-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-shared4-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/classworlds-1.1-alpha-2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-utils-3.0.8.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/netty-3.7.0.Final.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-profile-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/nekohtml-1.9.6.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-provider-api-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/jsoup-1.7.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-logging-1.1.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-codec-1.6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-interpolation-1.11.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/slf4j-api-1.6.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/ant-1.8.0.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-io-2.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-project-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/backport-util-concurrent-3.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/ant-launcher-1.8.0.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-model-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/jline-0.9.94.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-settings-2.2.1.jar:
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.2.4.2-2/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.4.2-2/hadoop/lib/native
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.name=Linux
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.11.2.el6.x86_64
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.name=hbase
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2015-06-29 21:07:05,198 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=regionserver:60020, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:05,215 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=regionserver:60020 connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:05,230 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.Login: successfully logged in.
2015-06-29 21:07:05,234 INFO  [Thread-10] zookeeper.Login: TGT refresh thread started.
2015-06-29 21:07:05,236 INFO  [regionserver60020-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:05,244 INFO  [Thread-10] zookeeper.Login: TGT valid starting at:        Mon Jun 29 21:07:05 UTC 2015
2015-06-29 21:07:05,244 INFO  [Thread-10] zookeeper.Login: TGT expires:                  Tue Jun 30 21:07:05 UTC 2015
2015-06-29 21:07:05,245 INFO  [Thread-10] zookeeper.Login: TGT refresh sleeping until: Tue Jun 30 17:29:07 UTC 2015
2015-06-29 21:07:05,247 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:05,249 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:05,261 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb70009, negotiated timeout = 30000
2015-06-29 21:07:05,514 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x55af9c7d, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:05,521 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55af9c7d connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:05,522 INFO  [regionserver60020-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:05,523 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:05,523 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:05,526 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb7000a, negotiated timeout = 30000
2015-06-29 21:07:06,199 INFO  [main] regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
2015-06-29 21:07:06,206 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@443fdee7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:06,211 DEBUG [regionserver60020] hbase.HRegionInfo: 1588230740
2015-06-29 21:07:06,212 DEBUG [regionserver60020] catalog.CatalogTracker: Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@19e951c9
2015-06-29 21:07:06,215 INFO  [regionserver60020] regionserver.HRegionServer: ClusterId : 05d0370c-07a6-40ff-ab97-5be7d7ae1f36
2015-06-29 21:07:06,218 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is initializing
2015-06-29 21:07:06,230 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase-secure/online-snapshot/acquired already exists and this is not a retry
2015-06-29 21:07:06,235 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is initialized
2015-06-29 21:07:06,240 INFO  [regionserver60020] regionserver.MemStoreFlusher: globalMemStoreLimit=401.6 M, globalMemStoreLimitLowMark=381.5 M, maxHeap=1004 M
2015-06-29 21:07:06,242 INFO  [regionserver60020] regionserver.HRegionServer: CompactionChecker runs every 10sec
2015-06-29 21:07:06,244 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@175e895d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=dn.example.com/172.31.3.128:0
2015-06-29 21:07:06,250 INFO  [regionserver60020] regionserver.HRegionServer: reportForDuty to master=rm.example.com,60000,1435297869160 with port=60020, startcode=1435612024042
2015-06-29 21:07:06,359 DEBUG [regionserver60020] token.AuthenticationTokenSelector: No matching token found
2015-06-29 21:07:06,360 DEBUG [regionserver60020] ipc.RpcClient: RPC Server Kerberos principal name for service=RegionServerStatusService is hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:06,360 DEBUG [regionserver60020] ipc.RpcClient: Use KERBEROS authentication for service RegionServerStatusService, sasl=true
2015-06-29 21:07:06,372 DEBUG [regionserver60020] ipc.RpcClient: Connecting to rm.example.com/172.31.3.127:60000
2015-06-29 21:07:06,378 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Creating SASL GSSAPI client. Server's Kerberos principal name is hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:06,384 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Have sent token of size 633 from initSASLContext.
2015-06-29 21:07:06,388 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will read input token of size 108 for processing by initSASLContext
2015-06-29 21:07:06,390 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will send token of size 0 from initSASLContext.
2015-06-29 21:07:06,391 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will read input token of size 32 for processing by initSASLContext
2015-06-29 21:07:06,392 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will send token of size 32 from initSASLContext.
2015-06-29 21:07:06,392 DEBUG [regionserver60020] security.HBaseSaslRpcClient: SASL client context established. Negotiated QoP: auth
2015-06-29 21:07:06,411 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: hbase.rootdir=hdfs://nn.example.com:8020/apps/hbase/data
2015-06-29 21:07:06,412 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: fs.default.name=hdfs://nn.example.com:8020
2015-06-29 21:07:06,412 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:06,412 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: hbase.master.info.port=60010
2015-06-29 21:07:06,412 INFO  [regionserver60020] Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2015-06-29 21:07:06,430 INFO  [regionserver60020] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-06-29 21:07:06,437 DEBUG [regionserver60020] regionserver.HRegionServer: logdir=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:06,542 DEBUG [regionserver60020] regionserver.Replication: ReplicationStatisticsThread 300
2015-06-29 21:07:06,553 INFO  [regionserver60020] wal.FSHLog: WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true
2015-06-29 21:07:06,798 INFO  [regionserver60020] wal.FSHLog: New WAL /apps/hbase/data/WALs/dn.example.com,60020,1435612024042/dn.example.com%2C60020%2C1435612024042.1435612026585
2015-06-29 21:07:06,814 INFO  [regionserver60020] regionserver.MetricsRegionServerWrapperImpl: Computing regionserver metrics every 5000 milliseconds
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_OPEN_REGION-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_OPEN_META-dn:60020, corePoolSize=1, maxPoolSize=1
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_CLOSE_REGION-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_CLOSE_META-dn:60020, corePoolSize=1, maxPoolSize=1
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_LOG_REPLAY_OPS-dn:60020, corePoolSize=2, maxPoolSize=2
2015-06-29 21:07:06,823 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,826 INFO  [regionserver60020] regionserver.ReplicationSourceManager: Current list of replicators: [dn.example.com,60020,1435612024042] other RSs: [dn.example.com,60020,1435612024042]
2015-06-29 21:07:06,875 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:06,885 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x64d5c83f, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:06,890 INFO  [regionserver60020-SendThread(nn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:06,891 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server nn.example.com/172.31.3.126:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:06,891 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x64d5c83f connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:06,895 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to nn.example.com/172.31.3.126:2181, initiating session
2015-06-29 21:07:06,909 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server nn.example.com/172.31.3.126:2181, sessionid = 0x24e2be20e450014, negotiated timeout = 30000
2015-06-29 21:07:06,939 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b0a2c6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:06,954 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase-secure/tokenauth/keys already exists and this is not a retry
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 10
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 17
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 15
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 16
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 13
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 14
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 11
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 12
2015-06-29 21:07:06,969 INFO  [ZKSecretWatcher-leaderElector] zookeeper.RecoverableZooKeeper: Node /hbase-secure/tokenauth/keymaster already exists and this is not a retry
2015-06-29 21:07:06,970 INFO  [ZKSecretWatcher-leaderElector] zookeeper.ZKLeaderManager: Found existing leader with ID: dn.example.com,60020,1435612024042
2015-06-29 21:07:07,017 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2015-06-29 21:07:07,018 INFO  [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: starting
2015-06-29 21:07:07,018 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=0 queue=0
2015-06-29 21:07:07,018 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=1 queue=1
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=2 queue=2
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=3 queue=3
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=4 queue=4
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=5 queue=5
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=6 queue=0
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=7 queue=1
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=8 queue=2
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=9 queue=3
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=10 queue=4
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=11 queue=5
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=12 queue=0
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=13 queue=1
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=14 queue=2
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=15 queue=3
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=16 queue=4
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=17 queue=5
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=18 queue=0
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=19 queue=1
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=20 queue=2
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=21 queue=3
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=22 queue=4
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=23 queue=5
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=24 queue=0
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=25 queue=1
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=26 queue=2
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=27 queue=3
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=28 queue=4
2015-06-29 21:07:07,025 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=29 queue=5
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=30 queue=0
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=31 queue=1
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=32 queue=2
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=33 queue=3
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=34 queue=4
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=35 queue=5
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=36 queue=0
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=37 queue=1
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=38 queue=2
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=39 queue=3
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=40 queue=4
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=41 queue=5
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=42 queue=0
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=43 queue=1
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=44 queue=2
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=45 queue=3
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=46 queue=4
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=47 queue=5
2015-06-29 21:07:07,035 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=48 queue=0
2015-06-29 21:07:07,035 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=49 queue=1
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=50 queue=2
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=51 queue=3
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=52 queue=4
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=53 queue=5
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=54 queue=0
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=55 queue=1
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=56 queue=2
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=57 queue=3
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=58 queue=4
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=59 queue=5
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=0 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=1 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=2 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=3 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=4 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=5 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=6 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=7 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=8 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=9 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=0 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=1 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=2 queue=0
2015-06-29 21:07:07,074 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:07,083 INFO  [regionserver60020] regionserver.HRegionServer: Serving as dn.example.com,60020,1435612024042, RpcServer on dn.example.com/172.31.3.128:60020, sessionid=0x14e2be1fbb70009
2015-06-29 21:07:07,083 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is starting
2015-06-29 21:07:07,083 DEBUG [regionserver60020] snapshot.RegionServerSnapshotManager: Start Snapshot Manager dn.example.com,60020,1435612024042
2015-06-29 21:07:07,083 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Starting procedure member 'dn.example.com,60020,1435612024042'
2015-06-29 21:07:07,083 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Checking for aborted procedures on node: '/hbase-secure/online-snapshot/abort'
2015-06-29 21:07:07,083 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker dn.example.com,60020,1435612024042 starting
2015-06-29 21:07:07,084 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Looking for new procedures under znode:'/hbase-secure/online-snapshot/acquired'
2015-06-29 21:07:07,084 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is started
2015-06-29 21:07:07,111 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x7db5292f, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:07,118 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:07,119 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:07,119 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:07,122 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7db5292f connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:07,128 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb7000b, negotiated timeout = 30000
2015-06-29 21:07:07,143 DEBUG [SplitLogWorker-dn.example.com,60020,1435612024042] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7aa9a046, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:07,165 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: worker dn.example.com,60020,1435612024042 acquired task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta
2015-06-29 21:07:07,224 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:07,234 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Splitting hlog: hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta, length=91
2015-06-29 21:07:07,234 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: DistributedLogReplay = false
2015-06-29 21:07:07,240 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta
2015-06-29 21:07:07,246 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta after 6ms
2015-06-29 21:07:07,339 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-1] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-1,5,main]: starting
2015-06-29 21:07:07,339 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-2] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-2,5,main]: starting
2015-06-29 21:07:07,338 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-0] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-0,5,main]: starting
2015-06-29 21:07:07,342 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Finishing writing output logs and closing down.
2015-06-29 21:07:07,342 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Waiting for split writer threads to finish
2015-06-29 21:07:07,343 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Split writers finished
2015-06-29 21:07:07,345 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Processed 0 edits across 0 regions; log file=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta is corrupted = false progress failed = false
2015-06-29 21:07:07,351 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] handler.HLogSplitterHandler: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta to final state DONE dn.example.com,60020,1435612024042
2015-06-29 21:07:07,351 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] handler.HLogSplitterHandler: worker dn.example.com,60020,1435612024042 done with task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta in 168ms
2015-06-29 21:07:07,387 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker: tasks arrived or departed
2015-06-29 21:07:08,433 DEBUG [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: connection from 172.31.3.127:37982; # active connections: 1
2015-06-29 21:07:08,434 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Kerberos principal name is hbase/dn.example.com@EXAMPLE.COM
2015-06-29 21:07:08,436 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Created SASL server with mechanism = GSSAPI
2015-06-29 21:07:08,436 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 633 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,441 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Will send token of size 108 from saslServer.
2015-06-29 21:07:08,444 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 0 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,445 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Will send token of size 32 from saslServer.
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 32 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] security.HBaseSaslRpcServer: SASL server GSSAPI callback: setting canonicalized client ID: hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: SASL server context established. Authenticated client: hbase/rm.example.com@EXAMPLE.COM (auth:SIMPLE). Negotiated QoP is auth
2015-06-29 21:07:08,481 INFO  [PriorityRpcServer.handler=0,queue=0,port=60020] regionserver.HRegionServer: Open hbase:meta,,1.1588230740
2015-06-29 21:07:08,506 INFO  [PriorityRpcServer.handler=0,queue=0,port=60020] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-06-29 21:07:08,608 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioning 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2015-06-29 21:07:08,617 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioned node 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2015-06-29 21:07:08,618 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: logdir=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,618 INFO  [RS_OPEN_META-dn:60020-0] wal.FSHLog: WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true
2015-06-29 21:07:08,635 INFO  [RS_OPEN_META-dn:60020-0] wal.FSHLog: New WAL /apps/hbase/data/WALs/dn.example.com,60020,1435612024042/dn.example.com%2C60020%2C1435612024042.1435612028621.meta
2015-06-29 21:07:08,650 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2015-06-29 21:07:08,669 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AuthenticationService
2015-06-29 21:07:08,671 INFO  [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870911).
2015-06-29 21:07:08,676 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=SecureBulkLoadService
2015-06-29 21:07:08,686 ERROR [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:148)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:257)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:160)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:192)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:701)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:608)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5438)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5749)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2345)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1300)
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:136)
        ... 21 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at org.apache.hadoop.ipc.Client.call(Client.java:1469)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy18.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:361)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy19.setPermission(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294)
        at com.sun.proxy.$Proxy20.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2343)
        ... 26 more
2015-06-29 21:07:08,688 FATAL [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: ABORTING region server dn.example.com,60020,1435612024042: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:148)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:257)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:160)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:192)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:701)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:608)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5438)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5749)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2345)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1300)
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:136)
        ... 21 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at org.apache.hadoop.ipc.Client.call(Client.java:1469)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy18.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:361)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy19.setPermission(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294)
        at com.sun.proxy.$Proxy20.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2343)
        ... 26 more
2015-06-29 21:07:08,690 FATAL [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.security.token.TokenProvider]
2015-06-29 21:07:08,699 INFO  [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: STOPPED: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
2015-06-29 21:07:08,700 INFO  [regionserver60020] ipc.RpcServer: Stopping server on 60020
2015-06-29 21:07:08,700 INFO  [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: stopping
2015-06-29 21:07:08,701 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-06-29 21:07:08,701 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-06-29 21:07:08,701 INFO  [regionserver60020] regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2015-06-29 21:07:08,702 INFO  [regionserver60020] regionserver.HRegionServer: Stopping infoServer
2015-06-29 21:07:08,704 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException
2015-06-29 21:07:08,704 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker dn.example.com,60020,1435612024042 exiting
2015-06-29 21:07:08,716 INFO  [regionserver60020] mortbay.log: Stopped HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030
2015-06-29 21:07:08,716 WARN  [1614105668@qtp-278229142-1 - Acceptor0 HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030] http.HttpServer: HttpServer Acceptor: isRunning is false. Rechecking.
2015-06-29 21:07:08,716 WARN  [1614105668@qtp-278229142-1 - Acceptor0 HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030] http.HttpServer: HttpServer Acceptor: isRunning is false
2015-06-29 21:07:08,718 INFO  [regionserver60020] snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager abruptly.
2015-06-29 21:07:08,719 INFO  [regionserver60020.logRoller] regionserver.LogRoller: LogRoller exiting.
2015-06-29 21:07:08,719 INFO  [regionserver60020.nonceCleaner] regionserver.ServerNonceManager$1: regionserver60020.nonceCleaner exiting
2015-06-29 21:07:08,719 INFO  [regionserver60020.compactionChecker] regionserver.HRegionServer$CompactionChecker: regionserver60020.compactionChecker exiting
2015-06-29 21:07:08,720 INFO  [regionserver60020] regionserver.HRegionServer: aborting server dn.example.com,60020,1435612024042
2015-06-29 21:07:08,721 DEBUG [regionserver60020] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@19e951c9
2015-06-29 21:07:08,721 INFO  [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14e2be1fbb7000a
2015-06-29 21:07:08,718 INFO  [MemStoreFlusher.0] regionserver.MemStoreFlusher: MemStoreFlusher.0 exiting
2015-06-29 21:07:08,723 INFO  [MemStoreFlusher.1] regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
2015-06-29 21:07:08,719 INFO  [RS_OPEN_META-dn:60020-0-MetaLogRoller] regionserver.LogRoller: LogRoller exiting.
2015-06-29 21:07:08,724 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x14e2be1fbb7000a closed
2015-06-29 21:07:08,724 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:08,725 INFO  [regionserver60020] regionserver.HRegionServer: stopping server dn.example.com,60020,1435612024042; all regions closed.
2015-06-29 21:07:08,725 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier interrupted while waiting for  notification from AsyncSyncer thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0 exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1 exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncWriter] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncWriter] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncWriter exiting
2015-06-29 21:07:08,726 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier interrupted while waiting for  notification from AsyncSyncer thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,736 INFO  [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 exiting
2015-06-29 21:07:08,736 DEBUG [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2015-06-29 21:07:08,736 INFO  [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter exiting
2015-06-29 21:07:08,736 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,737 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AccessControlService
2015-06-29 21:07:08,743 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:08,743 INFO  [RS_OPEN_META-dn:60020-0] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
2015-06-29 21:07:08,743 INFO  [RS_OPEN_META-dn:60020-0] util.ChecksumType: Checksum can use org.apache.hadoop.util.PureJavaCrc32C
2015-06-29 21:07:08,743 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closing leases
2015-06-29 21:07:08,744 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closed leases
2015-06-29 21:07:08,744 INFO  [RS_OPEN_META-dn:60020-0] access.AccessController: A minimum HFile version of 3 is required to persist cell ACLs. Consider setting hfile.format.version accordingly.
2015-06-29 21:07:08,756 INFO  [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870912).
2015-06-29 21:07:08,759 INFO  [RS_OPEN_META-dn:60020-0] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:08,761 DEBUG [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2015-06-29 21:07:08,763 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2015-06-29 21:07:08,763 INFO  [RS_OPEN_META-dn:60020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2015-06-29 21:07:08,767 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.MetricsRegionSourceImpl: Creating new MetricsRegionSourceImpl for table meta 1588230740
2015-06-29 21:07:08,767 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Instantiated hbase:meta,,1.1588230740
2015-06-29 21:07:08,825 INFO  [StoreOpener-1588230740-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000
2015-06-29 21:07:08,883 DEBUG [StoreOpener-1588230740-1] regionserver.HStore: loaded hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/773daf23518042b49cded2d0f6705ad7, isReference=false, isBulkLoadResult=false, seqid=52, majorCompaction=true
2015-06-29 21:07:08,892 DEBUG [StoreOpener-1588230740-1] regionserver.HStore: loaded hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/d3e9d66e7adf462bac4b758191ad7152, isReference=false, isBulkLoadResult=false, seqid=60, majorCompaction=false
2015-06-29 21:07:08,902 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Found 0 recovered edits file(s) under hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740
2015-06-29 21:07:08,922 DEBUG [RS_OPEN_META-dn:60020-0] wal.HLogUtil: Written region seqId to file:hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/recovered.edits/65_seqid ,newSeqId=65 ,maxSeqId=64
2015-06-29 21:07:08,924 INFO  [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Onlined 1588230740; next sequenceid=65
2015-06-29 21:07:08,931 ERROR [RS_OPEN_META-dn:60020-0] handler.OpenRegionHandler: Failed open of region=hbase:meta,,1.1588230740, starting to roll back the global memstore size.
java.io.IOException: Cannot append; log is closed
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:1000)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.appendNoSync(FSHLog.java:1053)
        at org.apache.hadoop.hbase.regionserver.wal.HLogUtil.writeRegionEventMarker(HLogUtil.java:309)
        at org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:933)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5785)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5750)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2015-06-29 21:07:08,931 INFO  [RS_OPEN_META-dn:60020-0] handler.OpenRegionHandler: Opening of region {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} failed, transitioning from OPENING to FAILED_OPEN in ZK, expecting version 5
2015-06-29 21:07:08,931 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioning 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2015-06-29 21:07:08,935 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioned node 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2015-06-29 21:07:16,825 INFO  [regionserver60020.periodicFlusher] regionserver.HRegionServer$PeriodicMemstoreFlusher: regionserver60020.periodicFlusher exiting
2015-06-29 21:07:16,825 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Split Thread to finish...
2015-06-29 21:07:16,825 INFO  [regionserver60020.leaseChecker] regionserver.Leases: regionserver60020.leaseChecker closing leases
2015-06-29 21:07:16,826 INFO  [regionserver60020.leaseChecker] regionserver.Leases: regionserver60020.leaseChecker closed leases
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Merge Thread to finish...
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Large Compaction Thread to finish...
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Small Compaction Thread to finish...
2015-06-29 21:07:16,831 INFO  [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x24e2be20e450014
2015-06-29 21:07:16,833 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:16,833 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x24e2be20e450014 closed
2015-06-29 21:07:16,834 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:16,839 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x14e2be1fbb70009 closed
2015-06-29 21:07:16,839 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:16,839 INFO  [regionserver60020] regionserver.HRegionServer: stopping server dn.example.com,60020,1435612024042; zookeeper connection closed.
2015-06-29 21:07:16,839 INFO  [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
2015-06-29 21:07:16,839 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2501)
2015-06-29 21:07:16,840 INFO  [Thread-12] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@1f343622
2015-06-29 21:07:16,841 INFO  [Thread-12] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2015-06-29 21:07:16,841 INFO  [Thread-12] regionserver.ShutdownHook: Shutdown hook finished.



Please help.

Thanks,
Venkat
From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Friday, June 26, 2015 1:10 PM
To: common-user@hadoop.apache.org
Subject: Re: HBASE Region server failing to start after Kerberos is enabled

Can you post the complete stack trace for 'Failed to get FileSystem instance'?

What's the permission for /apps/hbase/staging?

Looking at the commit log of SecureBulkLoadEndpoint.java, there have been a lot of bug fixes since 0.98.4.
Please consider upgrading HBase.

Cheers

On Fri, Jun 26, 2015 at 10:48 AM, Gangavarupu, Venkata - Contingent Worker <ve...@bcbsa.com> wrote:
Hi All,

The region servers are failing to start after Kerberos is enabled, with the error below.
Hadoop 2.6.0
HBase 0.98.4

2015-06-24 15:58:48,884 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AuthenticationService
2015-06-24 15:58:48,886 INFO  [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870911).
2015-06-24 15:58:48,894 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=SecureBulkLoadService
2015-06-24 15:58:48,907 ERROR [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance

I see the properties below are included in the hbase-site.xml file:

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>

<property>
  <name>hbase.bulkload.staging.dir</name>
  <value>/apps/hbase/staging</value>
</property>


I deleted org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint from hbase.coprocessor.region.classes and tried to start; it worked.
I think SecureBulkLoadEndpoint is causing the problem.

Please help me resolve this issue. I would like to keep the SecureBulkLoadEndpoint class enabled.

Thanks,
Venkat



RE: HBASE Region server failing to start after Kerberos is enabled

Posted by "Gangavarapu, Venkata" <Ve...@bcbsa.com>.
Hi,

This is solved.
The problem was with the hbase.bulkload.staging.dir directory permissions: the directory was not owned by hbase:hdfs.
Once the ownership was changed to hbase:hdfs, the region servers came up.

Thanks for giving me the hint.

-Venkat
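For reference, the fix described above can be sketched with standard HDFS shell commands. This is a sketch, not a definitive recipe: the path and the hbase:hdfs owner come from this thread, and the exact mode SecureBulkLoadEndpoint expects may vary by HBase version.

```shell
# Run as the HDFS superuser (e.g. via: sudo -u hdfs ...).
# Give the bulk-load staging directory to the hbase service user,
# matching hbase.bulkload.staging.dir in hbase-site.xml.
hdfs dfs -chown hbase:hdfs /apps/hbase/staging

# SecureBulkLoadEndpoint adjusts the mode itself at startup, but a
# restrictive mode such as 711 is a common starting point.
hdfs dfs -chmod 711 /apps/hbase/staging
```

After changing the ownership, restart the region servers so the coprocessor's start() call runs again.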




RE: HBASE Region server failing to start after Kerberos is enabled

Posted by "Gangavarupu, Venkata - Contingent Worker" <ve...@bcbsa.com>.
Hi,

I have attached the logs for the HBase region server failures with SecureBulkLoadEndpoint after Kerberos was enabled.

The permissions on /apps/hbase/staging are:

drwxrwxrwx   - ams   hdfs          0 2015-06-08 19:17 /apps/hbase/staging
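Note that the 777 mode is misleading here: the stack trace fails in FSPermissionChecker.checkOwner, because SecureBulkLoadEndpoint.start() calls setPermission() on the staging directory, and HDFS permits setPermission only for the directory's owner (or the superuser). A quick ownership check, assuming the default path from this thread:

```shell
# Show owner/group of the staging directory itself (-d, not its contents).
# The hbase service user must own it; a world-writable 777 directory under
# a different owner (here: ams) still fails the setPermission() call.
hdfs dfs -ls -d /apps/hbase/staging
```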

2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:QTINC=/usr/lib64/qt-3.3/include
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:USER=hbase
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_CLASSPATH=/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/*:/usr/hdp/2.2.4.2-2/hadoop/lib/*:/usr/hdp/2.2.4.2-2/zookeeper/*:/usr/hdp/2.2.4.2-2/zookeeper/lib/*:
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HOME=/home/hbase
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HISTCONTROL=ignoredups
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:LESSOPEN=|/usr/bin/lesspipe.sh %s
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hbase-regionserver-dn.example.com
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:LANG=en_US.UTF-8
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hbase
2015-06-29 21:07:03,160 INFO  [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=24.65-b04
2015-06-29 21:07:03,161 INFO  [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx1000m, -Dhdp.version=2.2.4.2-2, -XX:+UseConcMarkSweepGC, -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, -Djava.security.auth.login.config=/etc/hbase/conf/hbase_client_jaas.conf, -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201506292107, -Xmn200m, -XX:CMSInitiatingOccupancyFraction=70, -Xms1024m, -Xmx1024m, -Djava.security.auth.login.config=/etc/hbase/conf/hbase_regionserver_jaas.conf, -Dhbase.log.dir=/var/log/hbase, -Dhbase.log.file=hbase-hbase-regionserver-dn.example.com.log, -Dhbase.home.dir=/usr/hdp/current/hbase-regionserver/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=:/usr/hdp/2.2.4.2-2/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.4.2-2/hadoop/lib/native, -Dhbase.security.logger=INFO,RFAS]
2015-06-29 21:07:03,360 DEBUG [main] regionserver.HRegionServer: regionserver/dn.example.com/172.31.3.128:60020 HConnection server-to-server retries=350
2015-06-29 21:07:03,617 INFO  [main] ipc.SimpleRpcScheduler: Using default user call queue, count=6
2015-06-29 21:07:03,652 INFO  [main] ipc.RpcServer: regionserver/dn.example.com/172.31.3.128:60020: started 10 reader(s).
2015-06-29 21:07:03,761 INFO  [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2015-06-29 21:07:03,809 INFO  [main] timeline.HadoopTimelineMetricsSink: Initializing Timeline metrics sink.
2015-06-29 21:07:03,809 INFO  [main] timeline.HadoopTimelineMetricsSink: Identified hostname = dn.example.com, serviceName = hbase
2015-06-29 21:07:03,872 INFO  [main] timeline.HadoopTimelineMetricsSink: Collector Uri: http://nn.example.com:6188/ws/v1/timeline/metrics
2015-06-29 21:07:03,883 INFO  [main] impl.MetricsSinkAdapter: Sink timeline started
2015-06-29 21:07:03,955 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-06-29 21:07:03,955 INFO  [main] impl.MetricsSystemImpl: HBase metrics system started
2015-06-29 21:07:04,470 INFO  [main] security.UserGroupInformation: Login successful for user hbase/dn.example.com@EXAMPLE.COM using keytab file /etc/security/keytabs/hbase.service.keytab
2015-06-29 21:07:04,475 INFO  [main] hfile.CacheConfig: Allocating LruBlockCache with maximum size 401.6 M
2015-06-29 21:07:04,520 INFO  [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-06-29 21:07:04,569 INFO  [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2015-06-29 21:07:04,582 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2015-06-29 21:07:04,582 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-06-29 21:07:04,593 INFO  [main] http.HttpServer: Jetty bound to port 60030
2015-06-29 21:07:04,593 INFO  [main] mortbay.log: jetty-6.1.26.hwx
2015-06-29 21:07:05,169 INFO  [main] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-2--1, built on 03/31/2015 19:31 GMT
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:host.name=dn.example.com
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.7.0_67/jre
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.class.path=/etc/hbase/conf:/usr/jdk64/jdk1.7.0_67/lib/tools.jar:... [multi-page HDP 2.2.4.2-2 jar classpath trimmed]
educe-client-shuffle-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-app-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/gson-2.2.4.jar:/usr/hdp/current/hadoop-mapreduce-client/snappy-java-1.0.4.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-openstack-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-collections-3.2.1.jar:/usr/hdp/current/hadoop-mapreduce-client/htrace-core-3.0.4.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-openstack.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/activation-1.1.jar:/usr/hdp/current/hadoop-mapreduce-client/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hadoop-mapreduce-client/jersey-server-1.9.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-common.jar:/usr/hdp/current/hadoop-mapreduce-client/stax-api-1.0-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-aws.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-configuration-1.6.jar:/usr/hdp/current/hadoop-mapreduce-client/avro-1.7.4.jar:/usr/hdp/current/hadoop-mapreduce-client/api-asn1-api-1.0.0-M20.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-distcp-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jsp-api-2.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-archives-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jasper-runtime-5.5.23.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-datajoin.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-aws-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-gridmix-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-beanutils-core-1.8.0.jar:/
usr/hdp/current/hadoop-mapreduce-client/junit-4.11.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-core-2.2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/servlet-api-2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-codec-1.4.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-cli-1.2.jar:/usr/hdp/current/hadoop-mapreduce-client/joda-time-2.7.jar:/usr/hdp/current/hadoop-mapreduce-client/asm-3.2.jar:/usr/hdp/current/hadoop-mapreduce-client/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hadoop-mapreduce-client/httpcore-4.2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-math3-3.1.1.jar:/usr/hdp/current/hadoop-mapreduce-client/metrics-core-3.0.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/current/hadoop-mapreduce-client/netty-3.6.2.Final.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-sls.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-archives.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-lang3-3.3.2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-rumen-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-distcp.jar:/usr/hdp/current/hadoop-mapreduce-client/curator-recipes-2.6.0.jar:/usr/hdp/current/hadoop-mapreduce-client/xmlenc-0.52.jar:/usr/hdp/current/hadoop-mapreduce-client/api-util-1.0.0-M20.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-ant-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-runtime-library-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-common-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-yarn-timeline-history-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-api-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-tests-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-mapreduce-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-dag-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-mbeans-resource-calculator-0.5.2.2.2.4.2-2.jar:/u
sr/hdp/current/tez-client/tez-examples-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-runtime-internals-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/jsr305-2.0.3.jar:/usr/hdp/current/tez-client/lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/tez-client/lib/commons-io-2.4.jar:/usr/hdp/current/tez-client/lib/guava-11.0.2.jar:/usr/hdp/current/tez-client/lib/commons-collections4-4.0.jar:/usr/hdp/current/tez-client/lib/commons-lang-2.6.jar:/usr/hdp/current/tez-client/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/tez-client/lib/commons-logging-1.1.3.jar:/usr/hdp/current/tez-client/lib/log4j-1.2.17.jar:/usr/hdp/current/tez-client/lib/protobuf-java-2.5.0.jar:/usr/hdp/current/tez-client/lib/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/commons-collections-3.2.1.jar:/usr/hdp/current/tez-client/lib/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/jettison-1.3.4.jar:/usr/hdp/current/tez-client/lib/hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/servlet-api-2.5.jar:/usr/hdp/current/tez-client/lib/commons-codec-1.4.jar:/usr/hdp/current/tez-client/lib/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/commons-cli-1.2.jar:/usr/hdp/current/tez-client/lib/commons-math3-3.1.1.jar:/etc/tez/conf/:/usr/hdp/2.2.4.2-2/tez/tez-runtime-library-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-common-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-yarn-timeline-history-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-api-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-tests-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-mapreduce-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-dag-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-mbeans-resource-calculator-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-examples-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-runtime-internals-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/jsr305-2.0.3.jar:/usr/hdp/2.2.4.2-2/tez/lib/jetty-6.1.26
.hwx.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-collections4-4.0.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/tez/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/tez/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/jettison-1.3.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-math3-3.1.1.jar:/etc/tez/conf:/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/hadoop-azure.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-annotations.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-nfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-auth.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-azure-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-nfs.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ojdbc6.
jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/javax.persistence-2.1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons
-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-hdfs-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/azure-storage-2.0.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/zookeeper/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/zookeeper/zookeeper.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/httpcore-4.2.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/httpclient-4.2.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-artifact-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/log4j-1.2.16.
jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-file-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-shared4-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/classworlds-1.1-alpha-2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-utils-3.0.8.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/netty-3.7.0.Final.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-profile-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/nekohtml-1.9.6.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-provider-api-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/jsoup-1.7.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-logging-1.1.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-codec-1.6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-interpolation-1.11.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/slf4j-api-1.6.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/ant-1.8.0.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-io-2.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-project-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/backport-util-concurrent-3.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/ant-launcher-1.8.0.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-model-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/jline-0.9.94.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-settings-2.2.1.jar:
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.2.4.2-2/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.4.2-2/hadoop/lib/native
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.name=Linux
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.11.2.el6.x86_64
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.name=hbase
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2015-06-29 21:07:05,198 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=regionserver:60020, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:05,215 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=regionserver:60020 connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:05,230 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.Login: successfully logged in.
2015-06-29 21:07:05,234 INFO  [Thread-10] zookeeper.Login: TGT refresh thread started.
2015-06-29 21:07:05,236 INFO  [regionserver60020-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:05,244 INFO  [Thread-10] zookeeper.Login: TGT valid starting at:        Mon Jun 29 21:07:05 UTC 2015
2015-06-29 21:07:05,244 INFO  [Thread-10] zookeeper.Login: TGT expires:                  Tue Jun 30 21:07:05 UTC 2015
2015-06-29 21:07:05,245 INFO  [Thread-10] zookeeper.Login: TGT refresh sleeping until: Tue Jun 30 17:29:07 UTC 2015
2015-06-29 21:07:05,247 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:05,249 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:05,261 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb70009, negotiated timeout = 30000
2015-06-29 21:07:05,514 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x55af9c7d, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:05,521 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55af9c7d connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:05,522 INFO  [regionserver60020-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:05,523 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:05,523 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:05,526 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb7000a, negotiated timeout = 30000
2015-06-29 21:07:06,199 INFO  [main] regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
2015-06-29 21:07:06,206 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@443fdee7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:06,211 DEBUG [regionserver60020] hbase.HRegionInfo: 1588230740
2015-06-29 21:07:06,212 DEBUG [regionserver60020] catalog.CatalogTracker: Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@19e951c9
2015-06-29 21:07:06,215 INFO  [regionserver60020] regionserver.HRegionServer: ClusterId : 05d0370c-07a6-40ff-ab97-5be7d7ae1f36
2015-06-29 21:07:06,218 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is initializing
2015-06-29 21:07:06,230 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase-secure/online-snapshot/acquired already exists and this is not a retry
2015-06-29 21:07:06,235 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is initialized
2015-06-29 21:07:06,240 INFO  [regionserver60020] regionserver.MemStoreFlusher: globalMemStoreLimit=401.6 M, globalMemStoreLimitLowMark=381.5 M, maxHeap=1004 M
2015-06-29 21:07:06,242 INFO  [regionserver60020] regionserver.HRegionServer: CompactionChecker runs every 10sec
2015-06-29 21:07:06,244 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@175e895d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=dn.example.com/172.31.3.128:0
2015-06-29 21:07:06,250 INFO  [regionserver60020] regionserver.HRegionServer: reportForDuty to master=rm.example.com,60000,1435297869160 with port=60020, startcode=1435612024042
2015-06-29 21:07:06,359 DEBUG [regionserver60020] token.AuthenticationTokenSelector: No matching token found
2015-06-29 21:07:06,360 DEBUG [regionserver60020] ipc.RpcClient: RPC Server Kerberos principal name for service=RegionServerStatusService is hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:06,360 DEBUG [regionserver60020] ipc.RpcClient: Use KERBEROS authentication for service RegionServerStatusService, sasl=true
2015-06-29 21:07:06,372 DEBUG [regionserver60020] ipc.RpcClient: Connecting to rm.example.com/172.31.3.127:60000
2015-06-29 21:07:06,378 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Creating SASL GSSAPI client. Server's Kerberos principal name is hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:06,384 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Have sent token of size 633 from initSASLContext.
2015-06-29 21:07:06,388 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will read input token of size 108 for processing by initSASLContext
2015-06-29 21:07:06,390 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will send token of size 0 from initSASLContext.
2015-06-29 21:07:06,391 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will read input token of size 32 for processing by initSASLContext
2015-06-29 21:07:06,392 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will send token of size 32 from initSASLContext.
2015-06-29 21:07:06,392 DEBUG [regionserver60020] security.HBaseSaslRpcClient: SASL client context established. Negotiated QoP: auth
2015-06-29 21:07:06,411 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: hbase.rootdir=hdfs://nn.example.com:8020/apps/hbase/data
2015-06-29 21:07:06,412 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: fs.default.name=hdfs://nn.example.com:8020
2015-06-29 21:07:06,412 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:06,412 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: hbase.master.info.port=60010
2015-06-29 21:07:06,412 INFO  [regionserver60020] Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2015-06-29 21:07:06,430 INFO  [regionserver60020] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-06-29 21:07:06,437 DEBUG [regionserver60020] regionserver.HRegionServer: logdir=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:06,542 DEBUG [regionserver60020] regionserver.Replication: ReplicationStatisticsThread 300
2015-06-29 21:07:06,553 INFO  [regionserver60020] wal.FSHLog: WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true
2015-06-29 21:07:06,798 INFO  [regionserver60020] wal.FSHLog: New WAL /apps/hbase/data/WALs/dn.example.com,60020,1435612024042/dn.example.com%2C60020%2C1435612024042.1435612026585
2015-06-29 21:07:06,814 INFO  [regionserver60020] regionserver.MetricsRegionServerWrapperImpl: Computing regionserver metrics every 5000 milliseconds
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_OPEN_REGION-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_OPEN_META-dn:60020, corePoolSize=1, maxPoolSize=1
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_CLOSE_REGION-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_CLOSE_META-dn:60020, corePoolSize=1, maxPoolSize=1
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_LOG_REPLAY_OPS-dn:60020, corePoolSize=2, maxPoolSize=2
2015-06-29 21:07:06,823 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,826 INFO  [regionserver60020] regionserver.ReplicationSourceManager: Current list of replicators: [dn.example.com,60020,1435612024042] other RSs: [dn.example.com,60020,1435612024042]
2015-06-29 21:07:06,875 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:06,885 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x64d5c83f, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:06,890 INFO  [regionserver60020-SendThread(nn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:06,891 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server nn.example.com/172.31.3.126:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:06,891 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x64d5c83f connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:06,895 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to nn.example.com/172.31.3.126:2181, initiating session
2015-06-29 21:07:06,909 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server nn.example.com/172.31.3.126:2181, sessionid = 0x24e2be20e450014, negotiated timeout = 30000
2015-06-29 21:07:06,939 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b0a2c6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:06,954 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase-secure/tokenauth/keys already exists and this is not a retry
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 10
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 17
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 15
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 16
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 13
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 14
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 11
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 12
2015-06-29 21:07:06,969 INFO  [ZKSecretWatcher-leaderElector] zookeeper.RecoverableZooKeeper: Node /hbase-secure/tokenauth/keymaster already exists and this is not a retry
2015-06-29 21:07:06,970 INFO  [ZKSecretWatcher-leaderElector] zookeeper.ZKLeaderManager: Found existing leader with ID: dn.example.com,60020,1435612024042
2015-06-29 21:07:07,017 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2015-06-29 21:07:07,018 INFO  [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: starting
2015-06-29 21:07:07,018 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=0 queue=0
2015-06-29 21:07:07,018 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=1 queue=1
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=2 queue=2
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=3 queue=3
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=4 queue=4
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=5 queue=5
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=6 queue=0
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=7 queue=1
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=8 queue=2
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=9 queue=3
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=10 queue=4
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=11 queue=5
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=12 queue=0
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=13 queue=1
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=14 queue=2
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=15 queue=3
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=16 queue=4
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=17 queue=5
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=18 queue=0
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=19 queue=1
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=20 queue=2
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=21 queue=3
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=22 queue=4
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=23 queue=5
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=24 queue=0
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=25 queue=1
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=26 queue=2
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=27 queue=3
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=28 queue=4
2015-06-29 21:07:07,025 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=29 queue=5
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=30 queue=0
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=31 queue=1
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=32 queue=2
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=33 queue=3
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=34 queue=4
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=35 queue=5
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=36 queue=0
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=37 queue=1
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=38 queue=2
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=39 queue=3
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=40 queue=4
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=41 queue=5
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=42 queue=0
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=43 queue=1
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=44 queue=2
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=45 queue=3
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=46 queue=4
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=47 queue=5
2015-06-29 21:07:07,035 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=48 queue=0
2015-06-29 21:07:07,035 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=49 queue=1
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=50 queue=2
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=51 queue=3
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=52 queue=4
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=53 queue=5
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=54 queue=0
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=55 queue=1
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=56 queue=2
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=57 queue=3
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=58 queue=4
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=59 queue=5
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=0 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=1 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=2 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=3 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=4 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=5 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=6 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=7 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=8 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=9 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=0 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=1 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=2 queue=0
2015-06-29 21:07:07,074 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:07,083 INFO  [regionserver60020] regionserver.HRegionServer: Serving as dn.example.com,60020,1435612024042, RpcServer on dn.example.com/172.31.3.128:60020, sessionid=0x14e2be1fbb70009
2015-06-29 21:07:07,083 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is starting
2015-06-29 21:07:07,083 DEBUG [regionserver60020] snapshot.RegionServerSnapshotManager: Start Snapshot Manager dn.example.com,60020,1435612024042
2015-06-29 21:07:07,083 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Starting procedure member 'dn.example.com,60020,1435612024042'
2015-06-29 21:07:07,083 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Checking for aborted procedures on node: '/hbase-secure/online-snapshot/abort'
2015-06-29 21:07:07,083 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker dn.example.com,60020,1435612024042 starting
2015-06-29 21:07:07,084 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Looking for new procedures under znode:'/hbase-secure/online-snapshot/acquired'
2015-06-29 21:07:07,084 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is started
2015-06-29 21:07:07,111 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x7db5292f, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:07,118 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:07,119 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:07,119 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:07,122 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7db5292f connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:07,128 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb7000b, negotiated timeout = 30000
2015-06-29 21:07:07,143 DEBUG [SplitLogWorker-dn.example.com,60020,1435612024042] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7aa9a046, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:07,165 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: worker dn.example.com,60020,1435612024042 acquired task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta
2015-06-29 21:07:07,224 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:07,234 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Splitting hlog: hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta, length=91
2015-06-29 21:07:07,234 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: DistributedLogReplay = false
2015-06-29 21:07:07,240 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta
2015-06-29 21:07:07,246 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta after 6ms
2015-06-29 21:07:07,339 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-1] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-1,5,main]: starting
2015-06-29 21:07:07,339 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-2] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-2,5,main]: starting
2015-06-29 21:07:07,338 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-0] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-0,5,main]: starting
2015-06-29 21:07:07,342 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Finishing writing output logs and closing down.
2015-06-29 21:07:07,342 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Waiting for split writer threads to finish
2015-06-29 21:07:07,343 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Split writers finished
2015-06-29 21:07:07,345 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Processed 0 edits across 0 regions; log file=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta is corrupted = false progress failed = false
2015-06-29 21:07:07,351 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] handler.HLogSplitterHandler: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta to final state DONE dn.example.com,60020,1435612024042
2015-06-29 21:07:07,351 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] handler.HLogSplitterHandler: worker dn.example.com,60020,1435612024042 done with task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta in 168ms
2015-06-29 21:07:07,387 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker: tasks arrived or departed
2015-06-29 21:07:08,433 DEBUG [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: connection from 172.31.3.127:37982; # active connections: 1
2015-06-29 21:07:08,434 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Kerberos principal name is hbase/dn.example.com@EXAMPLE.COM
2015-06-29 21:07:08,436 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Created SASL server with mechanism = GSSAPI
2015-06-29 21:07:08,436 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 633 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,441 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Will send token of size 108 from saslServer.
2015-06-29 21:07:08,444 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 0 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,445 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Will send token of size 32 from saslServer.
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 32 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] security.HBaseSaslRpcServer: SASL server GSSAPI callback: setting canonicalized client ID: hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: SASL server context established. Authenticated client: hbase/rm.example.com@EXAMPLE.COM (auth:SIMPLE). Negotiated QoP is auth
2015-06-29 21:07:08,481 INFO  [PriorityRpcServer.handler=0,queue=0,port=60020] regionserver.HRegionServer: Open hbase:meta,,1.1588230740
2015-06-29 21:07:08,506 INFO  [PriorityRpcServer.handler=0,queue=0,port=60020] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-06-29 21:07:08,608 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioning 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2015-06-29 21:07:08,617 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioned node 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2015-06-29 21:07:08,618 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: logdir=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,618 INFO  [RS_OPEN_META-dn:60020-0] wal.FSHLog: WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true
2015-06-29 21:07:08,635 INFO  [RS_OPEN_META-dn:60020-0] wal.FSHLog: New WAL /apps/hbase/data/WALs/dn.example.com,60020,1435612024042/dn.example.com%2C60020%2C1435612024042.1435612028621.meta
2015-06-29 21:07:08,650 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2015-06-29 21:07:08,669 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AuthenticationService
2015-06-29 21:07:08,671 INFO  [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870911).
2015-06-29 21:07:08,676 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=SecureBulkLoadService
2015-06-29 21:07:08,686 ERROR [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:148)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:257)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:160)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:192)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:701)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:608)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5438)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5749)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2345)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1300)
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:136)
        ... 21 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at org.apache.hadoop.ipc.Client.call(Client.java:1469)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy18.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:361)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy19.setPermission(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294)
        at com.sun.proxy.$Proxy20.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2343)
        ... 26 more
2015-06-29 21:07:08,688 FATAL [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: ABORTING region server dn.example.com,60020,1435612024042: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:148)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:257)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:160)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:192)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:701)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:608)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5438)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5749)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2345)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1300)
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:136)
        ... 21 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at org.apache.hadoop.ipc.Client.call(Client.java:1469)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy18.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:361)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy19.setPermission(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294)
        at com.sun.proxy.$Proxy20.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2343)
        ... 26 more
2015-06-29 21:07:08,690 FATAL [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.security.token.TokenProvider]
2015-06-29 21:07:08,699 INFO  [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: STOPPED: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
2015-06-29 21:07:08,700 INFO  [regionserver60020] ipc.RpcServer: Stopping server on 60020
2015-06-29 21:07:08,700 INFO  [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: stopping
2015-06-29 21:07:08,701 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-06-29 21:07:08,701 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-06-29 21:07:08,701 INFO  [regionserver60020] regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2015-06-29 21:07:08,702 INFO  [regionserver60020] regionserver.HRegionServer: Stopping infoServer
2015-06-29 21:07:08,704 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException
2015-06-29 21:07:08,704 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker dn.example.com,60020,1435612024042 exiting
2015-06-29 21:07:08,716 INFO  [regionserver60020] mortbay.log: Stopped HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030
2015-06-29 21:07:08,716 WARN  [1614105668@qtp-278229142-1 - Acceptor0 HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030] http.HttpServer: HttpServer Acceptor: isRunning is false. Rechecking.
2015-06-29 21:07:08,716 WARN  [1614105668@qtp-278229142-1 - Acceptor0 HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030] http.HttpServer: HttpServer Acceptor: isRunning is false
2015-06-29 21:07:08,718 INFO  [regionserver60020] snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager abruptly.
2015-06-29 21:07:08,719 INFO  [regionserver60020.logRoller] regionserver.LogRoller: LogRoller exiting.
2015-06-29 21:07:08,719 INFO  [regionserver60020.nonceCleaner] regionserver.ServerNonceManager$1: regionserver60020.nonceCleaner exiting
2015-06-29 21:07:08,719 INFO  [regionserver60020.compactionChecker] regionserver.HRegionServer$CompactionChecker: regionserver60020.compactionChecker exiting
2015-06-29 21:07:08,720 INFO  [regionserver60020] regionserver.HRegionServer: aborting server dn.example.com,60020,1435612024042
2015-06-29 21:07:08,721 DEBUG [regionserver60020] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@19e951c9
2015-06-29 21:07:08,721 INFO  [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14e2be1fbb7000a
2015-06-29 21:07:08,718 INFO  [MemStoreFlusher.0] regionserver.MemStoreFlusher: MemStoreFlusher.0 exiting
2015-06-29 21:07:08,723 INFO  [MemStoreFlusher.1] regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
2015-06-29 21:07:08,719 INFO  [RS_OPEN_META-dn:60020-0-MetaLogRoller] regionserver.LogRoller: LogRoller exiting.
2015-06-29 21:07:08,724 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x14e2be1fbb7000a closed
2015-06-29 21:07:08,724 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:08,725 INFO  [regionserver60020] regionserver.HRegionServer: stopping server dn.example.com,60020,1435612024042; all regions closed.
2015-06-29 21:07:08,725 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier interrupted while waiting for  notification from AsyncSyncer thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0 exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1 exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncWriter] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncWriter] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncWriter exiting
2015-06-29 21:07:08,726 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier interrupted while waiting for  notification from AsyncSyncer thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,736 INFO  [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 exiting
2015-06-29 21:07:08,736 DEBUG [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2015-06-29 21:07:08,736 INFO  [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter exiting
2015-06-29 21:07:08,736 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,737 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AccessControlService
2015-06-29 21:07:08,743 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:08,743 INFO  [RS_OPEN_META-dn:60020-0] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
2015-06-29 21:07:08,743 INFO  [RS_OPEN_META-dn:60020-0] util.ChecksumType: Checksum can use org.apache.hadoop.util.PureJavaCrc32C
2015-06-29 21:07:08,743 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closing leases
2015-06-29 21:07:08,744 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closed leases
2015-06-29 21:07:08,744 INFO  [RS_OPEN_META-dn:60020-0] access.AccessController: A minimum HFile version of 3 is required to persist cell ACLs. Consider setting hfile.format.version accordingly.
2015-06-29 21:07:08,756 INFO  [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870912).
2015-06-29 21:07:08,759 INFO  [RS_OPEN_META-dn:60020-0] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:08,761 DEBUG [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2015-06-29 21:07:08,763 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2015-06-29 21:07:08,763 INFO  [RS_OPEN_META-dn:60020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2015-06-29 21:07:08,767 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.MetricsRegionSourceImpl: Creating new MetricsRegionSourceImpl for table meta 1588230740
2015-06-29 21:07:08,767 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Instantiated hbase:meta,,1.1588230740
2015-06-29 21:07:08,825 INFO  [StoreOpener-1588230740-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000
2015-06-29 21:07:08,883 DEBUG [StoreOpener-1588230740-1] regionserver.HStore: loaded hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/773daf23518042b49cded2d0f6705ad7, isReference=false, isBulkLoadResult=false, seqid=52, majorCompaction=true
2015-06-29 21:07:08,892 DEBUG [StoreOpener-1588230740-1] regionserver.HStore: loaded hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/d3e9d66e7adf462bac4b758191ad7152, isReference=false, isBulkLoadResult=false, seqid=60, majorCompaction=false
2015-06-29 21:07:08,902 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Found 0 recovered edits file(s) under hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740
2015-06-29 21:07:08,922 DEBUG [RS_OPEN_META-dn:60020-0] wal.HLogUtil: Written region seqId to file:hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/recovered.edits/65_seqid ,newSeqId=65 ,maxSeqId=64
2015-06-29 21:07:08,924 INFO  [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Onlined 1588230740; next sequenceid=65
2015-06-29 21:07:08,931 ERROR [RS_OPEN_META-dn:60020-0] handler.OpenRegionHandler: Failed open of region=hbase:meta,,1.1588230740, starting to roll back the global memstore size.
java.io.IOException: Cannot append; log is closed
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:1000)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.appendNoSync(FSHLog.java:1053)
        at org.apache.hadoop.hbase.regionserver.wal.HLogUtil.writeRegionEventMarker(HLogUtil.java:309)
        at org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:933)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5785)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5750)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2015-06-29 21:07:08,931 INFO  [RS_OPEN_META-dn:60020-0] handler.OpenRegionHandler: Opening of region {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} failed, transitioning from OPENING to FAILED_OPEN in ZK, expecting version 5
2015-06-29 21:07:08,931 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioning 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2015-06-29 21:07:08,935 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioned node 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2015-06-29 21:07:16,825 INFO  [regionserver60020.periodicFlusher] regionserver.HRegionServer$PeriodicMemstoreFlusher: regionserver60020.periodicFlusher exiting
2015-06-29 21:07:16,825 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Split Thread to finish...
2015-06-29 21:07:16,825 INFO  [regionserver60020.leaseChecker] regionserver.Leases: regionserver60020.leaseChecker closing leases
2015-06-29 21:07:16,826 INFO  [regionserver60020.leaseChecker] regionserver.Leases: regionserver60020.leaseChecker closed leases
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Merge Thread to finish...
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Large Compaction Thread to finish...
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Small Compaction Thread to finish...
2015-06-29 21:07:16,831 INFO  [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x24e2be20e450014
2015-06-29 21:07:16,833 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:16,833 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x24e2be20e450014 closed
2015-06-29 21:07:16,834 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:16,839 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x14e2be1fbb70009 closed
2015-06-29 21:07:16,839 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:16,839 INFO  [regionserver60020] regionserver.HRegionServer: stopping server dn.example.com,60020,1435612024042; zookeeper connection closed.
2015-06-29 21:07:16,839 INFO  [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
2015-06-29 21:07:16,839 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2501)
2015-06-29 21:07:16,840 INFO  [Thread-12] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@1f343622
2015-06-29 21:07:16,841 INFO  [Thread-12] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2015-06-29 21:07:16,841 INFO  [Thread-12] regionserver.ShutdownHook: Shutdown hook finished.



Please help

Thanks,
Venkat
From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Friday, June 26, 2015 1:10 PM
To: common-user@hadoop.apache.org
Subject: Re: HBASE Region server failing to start after Kerberos is enabled

Can you post the complete stack trace for 'Failed to get FileSystem instance' ?

What's the permission for /apps/hbase/staging ?

Looking at the commit log of SecureBulkLoadEndpoint.java, there have been a lot of bug fixes since 0.98.4.
Please consider upgrading HBase.

Cheers

On Fri, Jun 26, 2015 at 10:48 AM, Gangavarupu, Venkata - Contingent Worker <ve...@bcbsa.com>> wrote:
Hi All,

The region servers are failing to start after Kerberos is enabled, with the error below.
Hadoop 2.6.0
HBase 0.98.4

2015-06-24 15:58:48,884 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AuthenticationService
2015-06-24 15:58:48,886 INFO  [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870911).
2015-06-24 15:58:48,894 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=SecureBulkLoadService
2015-06-24 15:58:48,907 ERROR [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance

I see the properties below are included in the hbase-site.xml file:

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>

<property>
  <name>hbase.bulkload.staging.dir</name>
  <value>/apps/hbase/staging</value>
</property>


I removed org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint from hbase.coprocessor.region.classes and tried again, and the region servers started. So I think SecureBulkLoadEndpoint is causing the problem.

Please help me get past this issue; I would like to keep the SecureBulkLoadEndpoint coprocessor enabled.

Thanks,
Venkat



RE: HBASE Region server failing to start after Kerberos is enabled

Posted by "Gangavarapu, Venkata" <Ve...@bcbsa.com>.
Hi,

This is solved now.
The problem was the permissions on the hbase.bulkload.staging.dir directory: it was not owned by hbase:hdfs.
Once the ownership was changed to hbase:hdfs, the region servers came up.
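For anyone hitting the same error, here is a sketch of the fix. It uses a local directory as a stand-in for HDFS so the snippet is self-contained; on a real cluster the equivalent commands would be `hdfs dfs -chown hbase:hdfs /apps/hbase/staging` followed by `hdfs dfs -chmod 711 /apps/hbase/staging`. The hbase:hdfs ownership comes from this thread; mode 711 is what SecureBulkLoadEndpoint tries to enforce at startup in 0.98, so verify against your version.

```shell
# Local stand-in for hdfs:///apps/hbase/staging so this runs anywhere.
STAGING=$(mktemp -d)

# Broken state from this thread: drwxrwxrwx, owned by 'ams'. At startup
# SecureBulkLoadEndpoint calls setPermission() to tighten the staging dir,
# which the namenode rejects (checkOwner) unless the hbase user owns it.
chmod 777 "$STAGING"

# After fixing ownership (on HDFS: hdfs dfs -chown hbase:hdfs /apps/hbase/staging),
# the endpoint's own chmod to 711 succeeds:
chmod 711 "$STAGING"
stat -c '%a' "$STAGING"   # prints: 711
```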

Thanks for giving me the hint.

-Venkat

RE: HBASE Region server failing to start after Kerberos is enabled

Posted by "Gangavarupu, Venkata - Contingent Worker" <ve...@bcbsa.com>.
Hi,

I have attached the logs for the HBase region server failures with SecureBulkLoadEndpoint after enabling Kerberos.

The permissions on /apps/hbase/staging are:

drwxrwxrwx   - ams   hdfs          0 2015-06-08 19:17 /apps/hbase/staging
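That listing already hints at the root cause confirmed later in the thread: the staging directory is owned by ams, not the hbase service user, so the region server's setPermission() call fails the namenode's checkOwner() test shown in the stack trace. A minimal ownership check, hardcoding the listing from this thread so the snippet is self-contained (on a live cluster you would pipe `hdfs dfs -ls -d /apps/hbase/staging` into the same awk expression):

```shell
# Extract the owner/group fields from the ls line quoted above and flag a
# mismatch with the expected HBase service user.
LISTING='drwxrwxrwx   - ams   hdfs          0 2015-06-08 19:17 /apps/hbase/staging'
OWNER=$(echo "$LISTING" | awk '{print $3}')
GROUP=$(echo "$LISTING" | awk '{print $4}')
if [ "$OWNER" != "hbase" ]; then
  # prints: staging dir owned by ams:hdfs, expected owner hbase
  echo "staging dir owned by $OWNER:$GROUP, expected owner hbase"
fi
```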

2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:QTINC=/usr/lib64/qt-3.3/include
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:USER=hbase
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_CLASSPATH=/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/*:/usr/hdp/2.2.4.2-2/hadoop/lib/*:/usr/hdp/2.2.4.2-2/zookeeper/*:/usr/hdp/2.2.4.2-2/zookeeper/lib/*:
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HOME=/home/hbase
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HISTCONTROL=ignoredups
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:LESSOPEN=|/usr/bin/lesspipe.sh %s
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hbase-regionserver-dn.example.com
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:LANG=en_US.UTF-8
2015-06-29 21:07:03,159 INFO  [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hbase
2015-06-29 21:07:03,160 INFO  [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=24.65-b04
2015-06-29 21:07:03,161 INFO  [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx1000m, -Dhdp.version=2.2.4.2-2, -XX:+UseConcMarkSweepGC, -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, -Djava.security.auth.login.config=/etc/hbase/conf/hbase_client_jaas.conf, -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201506292107, -Xmn200m, -XX:CMSInitiatingOccupancyFraction=70, -Xms1024m, -Xmx1024m, -Djava.security.auth.login.config=/etc/hbase/conf/hbase_regionserver_jaas.conf, -Dhbase.log.dir=/var/log/hbase, -Dhbase.log.file=hbase-hbase-regionserver-dn.example.com.log, -Dhbase.home.dir=/usr/hdp/current/hbase-regionserver/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=:/usr/hdp/2.2.4.2-2/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.4.2-2/hadoop/lib/native, -Dhbase.security.logger=INFO,RFAS]
2015-06-29 21:07:03,360 DEBUG [main] regionserver.HRegionServer: regionserver/dn.example.com/172.31.3.128:60020 HConnection server-to-server retries=350
2015-06-29 21:07:03,617 INFO  [main] ipc.SimpleRpcScheduler: Using default user call queue, count=6
2015-06-29 21:07:03,652 INFO  [main] ipc.RpcServer: regionserver/dn.example.com/172.31.3.128:60020: started 10 reader(s).
2015-06-29 21:07:03,761 INFO  [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2015-06-29 21:07:03,809 INFO  [main] timeline.HadoopTimelineMetricsSink: Initializing Timeline metrics sink.
2015-06-29 21:07:03,809 INFO  [main] timeline.HadoopTimelineMetricsSink: Identified hostname = dn.example.com, serviceName = hbase
2015-06-29 21:07:03,872 INFO  [main] timeline.HadoopTimelineMetricsSink: Collector Uri: http://nn.example.com:6188/ws/v1/timeline/metrics
2015-06-29 21:07:03,883 INFO  [main] impl.MetricsSinkAdapter: Sink timeline started
2015-06-29 21:07:03,955 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-06-29 21:07:03,955 INFO  [main] impl.MetricsSystemImpl: HBase metrics system started
2015-06-29 21:07:04,470 INFO  [main] security.UserGroupInformation: Login successful for user hbase/dn.example.com@EXAMPLE.COM using keytab file /etc/security/keytabs/hbase.service.keytab
2015-06-29 21:07:04,475 INFO  [main] hfile.CacheConfig: Allocating LruBlockCache with maximum size 401.6 M
2015-06-29 21:07:04,520 INFO  [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-06-29 21:07:04,569 INFO  [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2015-06-29 21:07:04,582 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2015-06-29 21:07:04,582 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-06-29 21:07:04,593 INFO  [main] http.HttpServer: Jetty bound to port 60030
2015-06-29 21:07:04,593 INFO  [main] mortbay.log: jetty-6.1.26.hwx
2015-06-29 21:07:05,169 INFO  [main] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-2--1, built on 03/31/2015 19:31 GMT
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:host.name=dn.example.com
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.7.0_67/jre
2015-06-29 21:07:05,196 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.class.path=/etc/hbase/conf:/usr/jdk64/jdk1.7.0_67/lib/tools.jar:/usr/hdp/current/hbase-regionserver/bin/..:/usr/hdp/current/hbase-regionserver/bin/../lib/activation-1.1.jar: [... remainder of classpath snipped: several hundred jar entries, including the HBase 0.98.4.2.2.4.2-2-hadoop2 jars, the Hadoop 2.6.0.2.2.4.2-2 common/hdfs/yarn/mapreduce jars and their lib dependencies, phoenix-server.jar, and the ranger-hbase-plugin-0.4.0.2.2.4.2-2 jars ...]
usr/hdp/current/hadoop-mapreduce-client/junit-4.11.jar:/usr/hdp/current/hadoop-mapreduce-client/jackson-core-2.2.3.jar:/usr/hdp/current/hadoop-mapreduce-client/servlet-api-2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-codec-1.4.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-cli-1.2.jar:/usr/hdp/current/hadoop-mapreduce-client/joda-time-2.7.jar:/usr/hdp/current/hadoop-mapreduce-client/asm-3.2.jar:/usr/hdp/current/hadoop-mapreduce-client/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hadoop-mapreduce-client/httpcore-4.2.5.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-math3-3.1.1.jar:/usr/hdp/current/hadoop-mapreduce-client/metrics-core-3.0.1.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/current/hadoop-mapreduce-client/netty-3.6.2.Final.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-sls.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-archives.jar:/usr/hdp/current/hadoop-mapreduce-client/commons-lang3-3.3.2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-rumen-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-distcp.jar:/usr/hdp/current/hadoop-mapreduce-client/curator-recipes-2.6.0.jar:/usr/hdp/current/hadoop-mapreduce-client/xmlenc-0.52.jar:/usr/hdp/current/hadoop-mapreduce-client/api-util-1.0.0-M20.jar:/usr/hdp/current/hadoop-mapreduce-client/hadoop-ant-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-runtime-library-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-common-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-yarn-timeline-history-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-api-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-tests-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-mapreduce-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-dag-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-mbeans-resource-calculator-0.5.2.2.2.4.2-2.jar:/u
sr/hdp/current/tez-client/tez-examples-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/tez-runtime-internals-0.5.2.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/jsr305-2.0.3.jar:/usr/hdp/current/tez-client/lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/tez-client/lib/commons-io-2.4.jar:/usr/hdp/current/tez-client/lib/guava-11.0.2.jar:/usr/hdp/current/tez-client/lib/commons-collections4-4.0.jar:/usr/hdp/current/tez-client/lib/commons-lang-2.6.jar:/usr/hdp/current/tez-client/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/tez-client/lib/commons-logging-1.1.3.jar:/usr/hdp/current/tez-client/lib/log4j-1.2.17.jar:/usr/hdp/current/tez-client/lib/protobuf-java-2.5.0.jar:/usr/hdp/current/tez-client/lib/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/commons-collections-3.2.1.jar:/usr/hdp/current/tez-client/lib/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/jettison-1.3.4.jar:/usr/hdp/current/tez-client/lib/hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/servlet-api-2.5.jar:/usr/hdp/current/tez-client/lib/commons-codec-1.4.jar:/usr/hdp/current/tez-client/lib/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/current/tez-client/lib/commons-cli-1.2.jar:/usr/hdp/current/tez-client/lib/commons-math3-3.1.1.jar:/etc/tez/conf/:/usr/hdp/2.2.4.2-2/tez/tez-runtime-library-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-common-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-yarn-timeline-history-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-api-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-tests-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-mapreduce-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-dag-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-mbeans-resource-calculator-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-examples-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/tez-runtime-internals-0.5.2.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/jsr305-2.0.3.jar:/usr/hdp/2.2.4.2-2/tez/lib/jetty-6.1.26
.hwx.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-collections4-4.0.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/tez/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/tez/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-mapreduce-client-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-mapreduce-client-core-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/jettison-1.3.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-yarn-server-web-proxy-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/tez/lib/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/tez/lib/commons-math3-3.1.1.jar:/etc/tez/conf:/usr/hdp/2.2.4.2-2/hadoop/conf:/usr/hdp/2.2.4.2-2/hadoop/hadoop-azure.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-annotations.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-nfs-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-auth.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-azure-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-auth-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-nfs.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-annotations-2.6.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common-2.6.0.2.2.4.2-2-tests.jar:/usr/hdp/2.2.4.2-2/hadoop/hadoop-common.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-impl-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ojdbc6.
jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xz-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-audit-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsr305-1.3.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-common-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/javax.persistence-2.1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-el-1.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons
-collections-3.2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/slf4j-log4j12-1.7.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/activation-1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/junit-4.11.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/asm-3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-plugins-cred-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/ranger-hdfs-plugin-0.4.0.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/azure-storage-2.0.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/commons-lang3-3.3.2.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.2.4.2-2/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.2.4.2-2/zookeeper/zookeeper-3.4.6.2.2.4.2-2.jar:/usr/hdp/2.2.4.2-2/zookeeper/zookeeper.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/httpcore-4.2.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/httpclient-4.2.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-artifact-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/log4j-1.2.16.
jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-file-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-shared4-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/classworlds-1.1-alpha-2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-utils-3.0.8.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/netty-3.7.0.Final.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-profile-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/nekohtml-1.9.6.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-provider-api-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/jsoup-1.7.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-logging-1.1.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-codec-1.6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/plexus-interpolation-1.11.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/slf4j-api-1.6.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/ant-1.8.0.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/commons-io-2.2.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-project-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/backport-util-concurrent-3.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/ant-launcher-1.8.0.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-model-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/jline-0.9.94.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/wagon-http-2.4.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/2.2.4.2-2/zookeeper/lib/maven-settings-2.2.1.jar:
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.2.4.2-2/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.4.2-2/hadoop/lib/native
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.name=Linux
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.11.2.el6.x86_64
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.name=hbase
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2015-06-29 21:07:05,197 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2015-06-29 21:07:05,198 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=regionserver:60020, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:05,215 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=regionserver:60020 connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:05,230 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.Login: successfully logged in.
2015-06-29 21:07:05,234 INFO  [Thread-10] zookeeper.Login: TGT refresh thread started.
2015-06-29 21:07:05,236 INFO  [regionserver60020-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:05,244 INFO  [Thread-10] zookeeper.Login: TGT valid starting at:        Mon Jun 29 21:07:05 UTC 2015
2015-06-29 21:07:05,244 INFO  [Thread-10] zookeeper.Login: TGT expires:                  Tue Jun 30 21:07:05 UTC 2015
2015-06-29 21:07:05,245 INFO  [Thread-10] zookeeper.Login: TGT refresh sleeping until: Tue Jun 30 17:29:07 UTC 2015
2015-06-29 21:07:05,247 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:05,249 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:05,261 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb70009, negotiated timeout = 30000
2015-06-29 21:07:05,514 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x55af9c7d, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:05,521 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x55af9c7d connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:05,522 INFO  [regionserver60020-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:05,523 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:05,523 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:05,526 INFO  [regionserver60020-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb7000a, negotiated timeout = 30000
2015-06-29 21:07:06,199 INFO  [main] regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
2015-06-29 21:07:06,206 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@443fdee7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:06,211 DEBUG [regionserver60020] hbase.HRegionInfo: 1588230740
2015-06-29 21:07:06,212 DEBUG [regionserver60020] catalog.CatalogTracker: Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@19e951c9
2015-06-29 21:07:06,215 INFO  [regionserver60020] regionserver.HRegionServer: ClusterId : 05d0370c-07a6-40ff-ab97-5be7d7ae1f36
2015-06-29 21:07:06,218 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is initializing
2015-06-29 21:07:06,230 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase-secure/online-snapshot/acquired already exists and this is not a retry
2015-06-29 21:07:06,235 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is initialized
2015-06-29 21:07:06,240 INFO  [regionserver60020] regionserver.MemStoreFlusher: globalMemStoreLimit=401.6 M, globalMemStoreLimitLowMark=381.5 M, maxHeap=1004 M
2015-06-29 21:07:06,242 INFO  [regionserver60020] regionserver.HRegionServer: CompactionChecker runs every 10sec
2015-06-29 21:07:06,244 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@175e895d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=dn.example.com/172.31.3.128:0
2015-06-29 21:07:06,250 INFO  [regionserver60020] regionserver.HRegionServer: reportForDuty to master=rm.example.com,60000,1435297869160 with port=60020, startcode=1435612024042
2015-06-29 21:07:06,359 DEBUG [regionserver60020] token.AuthenticationTokenSelector: No matching token found
2015-06-29 21:07:06,360 DEBUG [regionserver60020] ipc.RpcClient: RPC Server Kerberos principal name for service=RegionServerStatusService is hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:06,360 DEBUG [regionserver60020] ipc.RpcClient: Use KERBEROS authentication for service RegionServerStatusService, sasl=true
2015-06-29 21:07:06,372 DEBUG [regionserver60020] ipc.RpcClient: Connecting to rm.example.com/172.31.3.127:60000
2015-06-29 21:07:06,378 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Creating SASL GSSAPI client. Server's Kerberos principal name is hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:06,384 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Have sent token of size 633 from initSASLContext.
2015-06-29 21:07:06,388 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will read input token of size 108 for processing by initSASLContext
2015-06-29 21:07:06,390 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will send token of size 0 from initSASLContext.
2015-06-29 21:07:06,391 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will read input token of size 32 for processing by initSASLContext
2015-06-29 21:07:06,392 DEBUG [regionserver60020] security.HBaseSaslRpcClient: Will send token of size 32 from initSASLContext.
2015-06-29 21:07:06,392 DEBUG [regionserver60020] security.HBaseSaslRpcClient: SASL client context established. Negotiated QoP: auth
2015-06-29 21:07:06,411 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: hbase.rootdir=hdfs://nn.example.com:8020/apps/hbase/data
2015-06-29 21:07:06,412 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: fs.default.name=hdfs://nn.example.com:8020
2015-06-29 21:07:06,412 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:06,412 DEBUG [regionserver60020] regionserver.HRegionServer: Config from master: hbase.master.info.port=60010
2015-06-29 21:07:06,412 INFO  [regionserver60020] Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2015-06-29 21:07:06,430 INFO  [regionserver60020] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-06-29 21:07:06,437 DEBUG [regionserver60020] regionserver.HRegionServer: logdir=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:06,542 DEBUG [regionserver60020] regionserver.Replication: ReplicationStatisticsThread 300
2015-06-29 21:07:06,553 INFO  [regionserver60020] wal.FSHLog: WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true
2015-06-29 21:07:06,798 INFO  [regionserver60020] wal.FSHLog: New WAL /apps/hbase/data/WALs/dn.example.com,60020,1435612024042/dn.example.com%2C60020%2C1435612024042.1435612026585
2015-06-29 21:07:06,814 INFO  [regionserver60020] regionserver.MetricsRegionServerWrapperImpl: Computing regionserver metrics every 5000 milliseconds
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_OPEN_REGION-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_OPEN_META-dn:60020, corePoolSize=1, maxPoolSize=1
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_CLOSE_REGION-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_CLOSE_META-dn:60020, corePoolSize=1, maxPoolSize=1
2015-06-29 21:07:06,822 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_LOG_REPLAY_OPS-dn:60020, corePoolSize=2, maxPoolSize=2
2015-06-29 21:07:06,823 DEBUG [regionserver60020] executor.ExecutorService: Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-dn:60020, corePoolSize=3, maxPoolSize=3
2015-06-29 21:07:06,826 INFO  [regionserver60020] regionserver.ReplicationSourceManager: Current list of replicators: [dn.example.com,60020,1435612024042] other RSs: [dn.example.com,60020,1435612024042]
2015-06-29 21:07:06,875 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:06,885 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x64d5c83f, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:06,890 INFO  [regionserver60020-SendThread(nn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:06,891 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server nn.example.com/172.31.3.126:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:06,891 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x64d5c83f connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:06,895 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to nn.example.com/172.31.3.126:2181, initiating session
2015-06-29 21:07:06,909 INFO  [regionserver60020-SendThread(nn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server nn.example.com/172.31.3.126:2181, sessionid = 0x24e2be20e450014, negotiated timeout = 30000
2015-06-29 21:07:06,939 DEBUG [regionserver60020] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b0a2c6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:06,954 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase-secure/tokenauth/keys already exists and this is not a retry
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 10
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 17
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 15
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 16
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 13
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 14
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 11
2015-06-29 21:07:06,961 DEBUG [regionserver60020] token.AuthenticationTokenSecretManager: Adding key 12
2015-06-29 21:07:06,969 INFO  [ZKSecretWatcher-leaderElector] zookeeper.RecoverableZooKeeper: Node /hbase-secure/tokenauth/keymaster already exists and this is not a retry
2015-06-29 21:07:06,970 INFO  [ZKSecretWatcher-leaderElector] zookeeper.ZKLeaderManager: Found existing leader with ID: dn.example.com,60020,1435612024042
2015-06-29 21:07:07,017 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2015-06-29 21:07:07,018 INFO  [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: starting
2015-06-29 21:07:07,018 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=0 queue=0
2015-06-29 21:07:07,018 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=1 queue=1
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=2 queue=2
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=3 queue=3
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=4 queue=4
2015-06-29 21:07:07,019 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=5 queue=5
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=6 queue=0
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=7 queue=1
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=8 queue=2
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=9 queue=3
2015-06-29 21:07:07,020 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=10 queue=4
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=11 queue=5
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=12 queue=0
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=13 queue=1
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=14 queue=2
2015-06-29 21:07:07,021 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=15 queue=3
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=16 queue=4
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=17 queue=5
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=18 queue=0
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=19 queue=1
2015-06-29 21:07:07,022 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=20 queue=2
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=21 queue=3
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=22 queue=4
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=23 queue=5
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=24 queue=0
2015-06-29 21:07:07,023 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=25 queue=1
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=26 queue=2
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=27 queue=3
2015-06-29 21:07:07,024 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=28 queue=4
2015-06-29 21:07:07,025 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=29 queue=5
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=30 queue=0
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=31 queue=1
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=32 queue=2
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=33 queue=3
2015-06-29 21:07:07,031 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=34 queue=4
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=35 queue=5
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=36 queue=0
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=37 queue=1
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=38 queue=2
2015-06-29 21:07:07,032 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=39 queue=3
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=40 queue=4
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=41 queue=5
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=42 queue=0
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=43 queue=1
2015-06-29 21:07:07,033 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=44 queue=2
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=45 queue=3
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=46 queue=4
2015-06-29 21:07:07,034 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=47 queue=5
2015-06-29 21:07:07,035 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=48 queue=0
2015-06-29 21:07:07,035 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=49 queue=1
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=50 queue=2
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=51 queue=3
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=52 queue=4
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=53 queue=5
2015-06-29 21:07:07,036 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=54 queue=0
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=55 queue=1
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=56 queue=2
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=57 queue=3
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=58 queue=4
2015-06-29 21:07:07,037 DEBUG [regionserver60020] ipc.RpcExecutor: B.Default Start Handler index=59 queue=5
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=0 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=1 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=2 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=3 queue=0
2015-06-29 21:07:07,038 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=4 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=5 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=6 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=7 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=8 queue=0
2015-06-29 21:07:07,039 DEBUG [regionserver60020] ipc.RpcExecutor: Priority Start Handler index=9 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=0 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=1 queue=0
2015-06-29 21:07:07,040 DEBUG [regionserver60020] ipc.RpcExecutor: Replication Start Handler index=2 queue=0
2015-06-29 21:07:07,074 INFO  [regionserver60020] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:07,083 INFO  [regionserver60020] regionserver.HRegionServer: Serving as dn.example.com,60020,1435612024042, RpcServer on dn.example.com/172.31.3.128:60020, sessionid=0x14e2be1fbb70009
2015-06-29 21:07:07,083 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is starting
2015-06-29 21:07:07,083 DEBUG [regionserver60020] snapshot.RegionServerSnapshotManager: Start Snapshot Manager dn.example.com,60020,1435612024042
2015-06-29 21:07:07,083 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Starting procedure member 'dn.example.com,60020,1435612024042'
2015-06-29 21:07:07,083 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Checking for aborted procedures on node: '/hbase-secure/online-snapshot/abort'
2015-06-29 21:07:07,083 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker dn.example.com,60020,1435612024042 starting
2015-06-29 21:07:07,084 DEBUG [regionserver60020] procedure.ZKProcedureMemberRpcs: Looking for new procedures under znode:'/hbase-secure/online-snapshot/acquired'
2015-06-29 21:07:07,084 INFO  [regionserver60020] procedure.RegionServerProcedureManagerHost: Procedure online-snapshot is started
2015-06-29 21:07:07,111 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] zookeeper.ZooKeeper: Initiating client connection, connectString=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181 sessionTimeout=30000 watcher=hconnection-0x7db5292f, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure
2015-06-29 21:07:07,118 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2015-06-29 21:07:07,119 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server dn.example.com/172.31.3.128:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2015-06-29 21:07:07,119 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to dn.example.com/172.31.3.128:2181, initiating session
2015-06-29 21:07:07,122 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7db5292f connecting to ZooKeeper ensemble=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181
2015-06-29 21:07:07,128 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042-SendThread(dn.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server dn.example.com/172.31.3.128:2181, sessionid = 0x14e2be1fbb7000b, negotiated timeout = 30000
2015-06-29 21:07:07,143 DEBUG [SplitLogWorker-dn.example.com,60020,1435612024042] ipc.RpcClient: Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7aa9a046, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2015-06-29 21:07:07,165 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: worker dn.example.com,60020,1435612024042 acquired task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta
2015-06-29 21:07:07,224 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:07,234 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Splitting hlog: hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta, length=91
2015-06-29 21:07:07,234 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: DistributedLogReplay = false
2015-06-29 21:07:07,240 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta
2015-06-29 21:07:07,246 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta after 6ms
2015-06-29 21:07:07,339 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-1] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-1,5,main]: starting
2015-06-29 21:07:07,339 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-2] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-2,5,main]: starting
2015-06-29 21:07:07,338 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0-Writer-0] wal.HLogSplitter: Writer thread Thread[RS_LOG_REPLAY_OPS-dn:60020-0-Writer-0,5,main]: starting
2015-06-29 21:07:07,342 DEBUG [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Finishing writing output logs and closing down.
2015-06-29 21:07:07,342 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Waiting for split writer threads to finish
2015-06-29 21:07:07,343 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Split writers finished
2015-06-29 21:07:07,345 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] wal.HLogSplitter: Processed 0 edits across 0 regions; log file=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435297873436-splitting/dn.example.com%2C60020%2C1435297873436.1435297879386.meta is corrupted = false progress failed = false
2015-06-29 21:07:07,351 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] handler.HLogSplitterHandler: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta to final state DONE dn.example.com,60020,1435612024042
2015-06-29 21:07:07,351 INFO  [RS_LOG_REPLAY_OPS-dn:60020-0] handler.HLogSplitterHandler: worker dn.example.com,60020,1435612024042 done with task /hbase-secure/splitWAL/WALs%2Fdn.example.com%2C60020%2C1435297873436-splitting%2Fdn.example.com%252C60020%252C1435297873436.1435297879386.meta in 168ms
2015-06-29 21:07:07,387 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker: tasks arrived or departed
2015-06-29 21:07:08,433 DEBUG [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: connection from 172.31.3.127:37982; # active connections: 1
2015-06-29 21:07:08,434 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Kerberos principal name is hbase/dn.example.com@EXAMPLE.COM
2015-06-29 21:07:08,436 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Created SASL server with mechanism = GSSAPI
2015-06-29 21:07:08,436 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 633 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,441 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Will send token of size 108 from saslServer.
2015-06-29 21:07:08,444 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 0 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,445 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Will send token of size 32 from saslServer.
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: Have read input token of size 32 for processing by saslServer.evaluateResponse()
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] security.HBaseSaslRpcServer: SASL server GSSAPI callback: setting canonicalized client ID: hbase/rm.example.com@EXAMPLE.COM
2015-06-29 21:07:08,446 DEBUG [RpcServer.reader=1,port=60020] ipc.RpcServer: SASL server context established. Authenticated client: hbase/rm.example.com@EXAMPLE.COM (auth:SIMPLE). Negotiated QoP is auth
2015-06-29 21:07:08,481 INFO  [PriorityRpcServer.handler=0,queue=0,port=60020] regionserver.HRegionServer: Open hbase:meta,,1.1588230740
2015-06-29 21:07:08,506 INFO  [PriorityRpcServer.handler=0,queue=0,port=60020] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-06-29 21:07:08,608 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioning 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2015-06-29 21:07:08,617 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioned node 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2015-06-29 21:07:08,618 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: logdir=hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,618 INFO  [RS_OPEN_META-dn:60020-0] wal.FSHLog: WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true
2015-06-29 21:07:08,635 INFO  [RS_OPEN_META-dn:60020-0] wal.FSHLog: New WAL /apps/hbase/data/WALs/dn.example.com,60020,1435612024042/dn.example.com%2C60020%2C1435612024042.1435612028621.meta
2015-06-29 21:07:08,650 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2015-06-29 21:07:08,669 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AuthenticationService
2015-06-29 21:07:08,671 INFO  [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870911).
2015-06-29 21:07:08,676 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=SecureBulkLoadService
2015-06-29 21:07:08,686 ERROR [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:148)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:257)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:160)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:192)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:701)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:608)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5438)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5749)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2345)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1300)
        at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:136)
        ... 21 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:191)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6795)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6777)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6696)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1731)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:615)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:445)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at org.apache.hadoop.ipc.Client.call(Client.java:1469)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy18.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:361)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy19.setPermission(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294)
        at com.sun.proxy.$Proxy20.setPermission(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2343)
        ... 26 more
2015-06-29 21:07:08,688 FATAL [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: ABORTING region server dn.example.com,60020,1435612024042: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance
        [... stack trace identical to the ERROR above: IllegalStateException at SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:148), caused by org.apache.hadoop.security.AccessControlException: Permission denied in FSPermissionChecker.checkOwner, surfaced via DistributedFileSystem.setPermission ...]
2015-06-29 21:07:08,690 FATAL [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.security.token.TokenProvider]
2015-06-29 21:07:08,699 INFO  [RS_OPEN_META-dn:60020-0] regionserver.HRegionServer: STOPPED: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
2015-06-29 21:07:08,700 INFO  [regionserver60020] ipc.RpcServer: Stopping server on 60020
2015-06-29 21:07:08,700 INFO  [RpcServer.listener,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: stopping
2015-06-29 21:07:08,701 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-06-29 21:07:08,701 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-06-29 21:07:08,701 INFO  [regionserver60020] regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2015-06-29 21:07:08,702 INFO  [regionserver60020] regionserver.HRegionServer: Stopping infoServer
2015-06-29 21:07:08,704 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException
2015-06-29 21:07:08,704 INFO  [SplitLogWorker-dn.example.com,60020,1435612024042] regionserver.SplitLogWorker: SplitLogWorker dn.example.com,60020,1435612024042 exiting
2015-06-29 21:07:08,716 INFO  [regionserver60020] mortbay.log: Stopped HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030
2015-06-29 21:07:08,716 WARN  [1614105668@qtp-278229142-1 - Acceptor0 HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030] http.HttpServer: HttpServer Acceptor: isRunning is false. Rechecking.
2015-06-29 21:07:08,716 WARN  [1614105668@qtp-278229142-1 - Acceptor0 HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60030] http.HttpServer: HttpServer Acceptor: isRunning is false
2015-06-29 21:07:08,718 INFO  [regionserver60020] snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager abruptly.
2015-06-29 21:07:08,719 INFO  [regionserver60020.logRoller] regionserver.LogRoller: LogRoller exiting.
2015-06-29 21:07:08,719 INFO  [regionserver60020.nonceCleaner] regionserver.ServerNonceManager$1: regionserver60020.nonceCleaner exiting
2015-06-29 21:07:08,719 INFO  [regionserver60020.compactionChecker] regionserver.HRegionServer$CompactionChecker: regionserver60020.compactionChecker exiting
2015-06-29 21:07:08,720 INFO  [regionserver60020] regionserver.HRegionServer: aborting server dn.example.com,60020,1435612024042
2015-06-29 21:07:08,721 DEBUG [regionserver60020] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@19e951c9
2015-06-29 21:07:08,721 INFO  [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14e2be1fbb7000a
2015-06-29 21:07:08,718 INFO  [MemStoreFlusher.0] regionserver.MemStoreFlusher: MemStoreFlusher.0 exiting
2015-06-29 21:07:08,723 INFO  [MemStoreFlusher.1] regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
2015-06-29 21:07:08,719 INFO  [RS_OPEN_META-dn:60020-0-MetaLogRoller] regionserver.LogRoller: LogRoller exiting.
2015-06-29 21:07:08,724 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x14e2be1fbb7000a closed
2015-06-29 21:07:08,724 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:08,725 INFO  [regionserver60020] regionserver.HRegionServer: stopping server dn.example.com,60020,1435612024042; all regions closed.
2015-06-29 21:07:08,725 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier interrupted while waiting for  notification from AsyncSyncer thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncNotifier exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer0 exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer1 exiting
2015-06-29 21:07:08,725 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,725 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer2 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer3 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncSyncer4 exiting
2015-06-29 21:07:08,726 DEBUG [RS_OPEN_META-dn:60020-0-WAL.AsyncWriter] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2015-06-29 21:07:08,726 INFO  [RS_OPEN_META-dn:60020-0-WAL.AsyncWriter] wal.FSHLog: RS_OPEN_META-dn:60020-0-WAL.AsyncWriter exiting
2015-06-29 21:07:08,726 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier interrupted while waiting for  notification from AsyncSyncer thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncNotifier] wal.FSHLog: regionserver60020-WAL.AsyncNotifier exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog: regionserver60020-WAL.AsyncSyncer0 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog: regionserver60020-WAL.AsyncSyncer1 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog: regionserver60020-WAL.AsyncSyncer2 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,735 INFO  [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog: regionserver60020-WAL.AsyncSyncer3 exiting
2015-06-29 21:07:08,735 DEBUG [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread
2015-06-29 21:07:08,736 INFO  [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog: regionserver60020-WAL.AsyncSyncer4 exiting
2015-06-29 21:07:08,736 DEBUG [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer
2015-06-29 21:07:08,736 INFO  [regionserver60020-WAL.AsyncWriter] wal.FSHLog: regionserver60020-WAL.AsyncWriter exiting
2015-06-29 21:07:08,736 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://nn.example.com:8020/apps/hbase/data/WALs/dn.example.com,60020,1435612024042
2015-06-29 21:07:08,737 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AccessControlService
2015-06-29 21:07:08,743 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:08,743 INFO  [RS_OPEN_META-dn:60020-0] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
2015-06-29 21:07:08,743 INFO  [RS_OPEN_META-dn:60020-0] util.ChecksumType: Checksum can use org.apache.hadoop.util.PureJavaCrc32C
2015-06-29 21:07:08,743 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closing leases
2015-06-29 21:07:08,744 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closed leases
2015-06-29 21:07:08,744 INFO  [RS_OPEN_META-dn:60020-0] access.AccessController: A minimum HFile version of 3 is required to persist cell ACLs. Consider setting hfile.format.version accordingly.
2015-06-29 21:07:08,756 INFO  [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870912).
2015-06-29 21:07:08,759 INFO  [RS_OPEN_META-dn:60020-0] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-29 21:07:08,761 DEBUG [RS_OPEN_META-dn:60020-0] coprocessor.CoprocessorHost: Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2015-06-29 21:07:08,763 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2015-06-29 21:07:08,763 INFO  [RS_OPEN_META-dn:60020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2015-06-29 21:07:08,767 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.MetricsRegionSourceImpl: Creating new MetricsRegionSourceImpl for table meta 1588230740
2015-06-29 21:07:08,767 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Instantiated hbase:meta,,1.1588230740
2015-06-29 21:07:08,825 INFO  [StoreOpener-1588230740-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000
2015-06-29 21:07:08,883 DEBUG [StoreOpener-1588230740-1] regionserver.HStore: loaded hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/773daf23518042b49cded2d0f6705ad7, isReference=false, isBulkLoadResult=false, seqid=52, majorCompaction=true
2015-06-29 21:07:08,892 DEBUG [StoreOpener-1588230740-1] regionserver.HStore: loaded hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/d3e9d66e7adf462bac4b758191ad7152, isReference=false, isBulkLoadResult=false, seqid=60, majorCompaction=false
2015-06-29 21:07:08,902 DEBUG [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Found 0 recovered edits file(s) under hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740
2015-06-29 21:07:08,922 DEBUG [RS_OPEN_META-dn:60020-0] wal.HLogUtil: Written region seqId to file:hdfs://nn.example.com:8020/apps/hbase/data/data/hbase/meta/1588230740/recovered.edits/65_seqid ,newSeqId=65 ,maxSeqId=64
2015-06-29 21:07:08,924 INFO  [RS_OPEN_META-dn:60020-0] regionserver.HRegion: Onlined 1588230740; next sequenceid=65
2015-06-29 21:07:08,931 ERROR [RS_OPEN_META-dn:60020-0] handler.OpenRegionHandler: Failed open of region=hbase:meta,,1.1588230740, starting to roll back the global memstore size.
java.io.IOException: Cannot append; log is closed
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:1000)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.appendNoSync(FSHLog.java:1053)
        at org.apache.hadoop.hbase.regionserver.wal.HLogUtil.writeRegionEventMarker(HLogUtil.java:309)
        at org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:933)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5785)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5750)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5722)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5678)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5629)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2015-06-29 21:07:08,931 INFO  [RS_OPEN_META-dn:60020-0] handler.OpenRegionHandler: Opening of region {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} failed, transitioning from OPENING to FAILED_OPEN in ZK, expecting version 5
2015-06-29 21:07:08,931 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioning 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2015-06-29 21:07:08,935 DEBUG [RS_OPEN_META-dn:60020-0] zookeeper.ZKAssign: regionserver:60020-0x14e2be1fbb70009, quorum=nn.example.com:2181,dn.example.com:2181,rm.example.com:2181, baseZNode=/hbase-secure Transitioned node 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2015-06-29 21:07:16,825 INFO  [regionserver60020.periodicFlusher] regionserver.HRegionServer$PeriodicMemstoreFlusher: regionserver60020.periodicFlusher exiting
2015-06-29 21:07:16,825 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Split Thread to finish...
2015-06-29 21:07:16,825 INFO  [regionserver60020.leaseChecker] regionserver.Leases: regionserver60020.leaseChecker closing leases
2015-06-29 21:07:16,826 INFO  [regionserver60020.leaseChecker] regionserver.Leases: regionserver60020.leaseChecker closed leases
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Merge Thread to finish...
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Large Compaction Thread to finish...
2015-06-29 21:07:16,826 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Small Compaction Thread to finish...
2015-06-29 21:07:16,831 INFO  [regionserver60020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x24e2be20e450014
2015-06-29 21:07:16,833 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:16,833 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x24e2be20e450014 closed
2015-06-29 21:07:16,834 DEBUG [regionserver60020] ipc.RpcClient: Stopping rpc client
2015-06-29 21:07:16,839 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x14e2be1fbb70009 closed
2015-06-29 21:07:16,839 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 21:07:16,839 INFO  [regionserver60020] regionserver.HRegionServer: stopping server dn.example.com,60020,1435612024042; zookeeper connection closed.
2015-06-29 21:07:16,839 INFO  [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
2015-06-29 21:07:16,839 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2501)
2015-06-29 21:07:16,840 INFO  [Thread-12] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@1f343622
2015-06-29 21:07:16,841 INFO  [Thread-12] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2015-06-29 21:07:16,841 INFO  [Thread-12] regionserver.ShutdownHook: Shutdown hook finished.
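
[Editor's note: the setPermission frames near the top of the stack trace suggest the RegionServer could not set permissions on the configured bulk load staging directory. A hedged sketch of how that directory is commonly checked and provisioned on a secured cluster follows; the path comes from the hbase-site.xml quoted below, while the `hbase:hdfs` owner/group and the 711 mode are assumptions based on typical secure bulk load layouts, not confirmed from this thread.]

```shell
# Run as the HDFS superuser. The path is hbase.bulkload.staging.dir
# from the quoted hbase-site.xml.
hdfs dfs -ls -d /apps/hbase/staging

# A commonly recommended layout for secure bulk load: owned by the
# HBase service user, mode 711, so per-user subdirectories can be
# created underneath without exposing other users' files.
hdfs dfs -mkdir -p /apps/hbase/staging
hdfs dfs -chown hbase:hdfs /apps/hbase/staging   # owner/group are assumptions
hdfs dfs -chmod 711 /apps/hbase/staging
```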



Please help

Thanks,
Venkat
From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Friday, June 26, 2015 1:10 PM
To: common-user@hadoop.apache.org
Subject: Re: HBASE Region server failing to start after Kerberos is enabled

Can you post the complete stack trace for 'Failed to get FileSystem instance'?

What's the permission for /apps/hbase/staging?

Looking at the commit log of SecureBulkLoadEndpoint.java, there have been a lot of bug fixes since 0.98.4.
Please consider upgrading HBase.

Cheers

On Fri, Jun 26, 2015 at 10:48 AM, Gangavarupu, Venkata - Contingent Worker <ve...@bcbsa.com>> wrote:
HI All,

The region servers are failing to start after Kerberos is enabled, with the error below.
Hadoop -2.6.0
HBase-0.98.4

2015-06-24 15:58:48,884 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=AuthenticationService
2015-06-24 15:58:48,886 INFO  [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870911).
2015-06-24 15:58:48,894 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0] regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1 service=SecureBulkLoadService
2015-06-24 15:58:48,907 ERROR [RS_OPEN_META-mdcthdpdas06lp:60020-0] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
java.lang.IllegalStateException: Failed to get FileSystem instance

I see below properties are included in hbase-site.xml file

<property>
      <name>hbase.coprocessor.region.classes</name>
      <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.AccessController</value>
    </property>

<property>
      <name>hbase.bulkload.staging.dir</name>
      <value>/apps/hbase/staging</value>
    </property>


I removed org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint from hbase.coprocessor.region.classes and tried starting again. It worked.
I think SecureBulkLoadEndpoint is causing the problem.

Please help me resolve this issue; I would like to keep the SecureBulkLoadEndpoint class enabled.

Thanks,
Venkat



Re: HBASE Region server failing to start after Kerberos is enabled

Posted by Ted Yu <yu...@gmail.com>.
Can you post the complete stack trace for 'Failed to get FileSystem instance'?

What's the permission for /apps/hbase/staging?
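
[Editor's note: a hedged one-liner for inspecting that directory's ownership and mode; the path is taken from the hbase-site.xml quoted below.]

```shell
hdfs dfs -ls -d /apps/hbase/staging
```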

Looking at the commit log of SecureBulkLoadEndpoint.java, there have been a lot of
bug fixes since 0.98.4.
Please consider upgrading HBase.

Cheers

On Fri, Jun 26, 2015 at 10:48 AM, Gangavarupu, Venkata - Contingent Worker <
venkata.gangavarupu.cs@bcbsa.com> wrote:

>  HI All,
>
>
>
> The region servers are failing to start after Kerberos is enabled, with the
> error below.
>
> Hadoop -2.6.0
>
> HBase-0.98.4
>
>
>
> 2015-06-24 15:58:48,884 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0]
> regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1
> service=AuthenticationService
>
> 2015-06-24 15:58:48,886 INFO  [RS_OPEN_META-mdcthdpdas06lp:60020-0]
> coprocessor.CoprocessorHost: System coprocessor
> org.apache.hadoop.hbase.security.token.TokenProvider was loaded
> successfully with priority (536870911).
>
> 2015-06-24 15:58:48,894 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0]
> regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1
> service=SecureBulkLoadService
>
> 2015-06-24 15:58:48,907 ERROR [RS_OPEN_META-mdcthdpdas06lp:60020-0]
> coprocessor.CoprocessorHost: The coprocessor
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an
> unexpected exception
>
> java.lang.IllegalStateException: Failed to get FileSystem instance
>
>
>
> I see below properties are included in hbase-site.xml file
>
>
>
> <property>
>
>       <name>hbase.coprocessor.region.classes</name>
>
>
> <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.AccessController</value>
>
>     </property>
>
>
>
> <property>
>
>       <name>hbase.bulkload.staging.dir</name>
>
>       <value>/apps/hbase/staging</value>
>
>     </property>
>
>
>
>
>
> I deleted org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> from the hbase.coprocessor.region.classes and tried to start. It worked.
>
> I think SecureBulkLoad is causing the problem.
>
>
>
> Please help me resolve this issue; I would like to keep the SecureBulkLoad
> class enabled.
>
>
>
> Thanks,
>
> Venkat
>
>
>
