Posted to user@hbase.apache.org by psynophile <ps...@gmail.com> on 2013/10/25 20:24:39 UTC

Install HBase on hadoop-2.2.0

I can't get test commands to work with HBase. Could someone help me figure
this out? Everything seems to be set up correctly: jps reports that HMaster is
running on the master node, hadoop1, and region servers are running on the
other nodes, but I get this:

hbase(main):001:0> create 'test', 'cf'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
/opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack
guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c
<libfile>', or link it with '-z noexecstack'.
2013-10-25 14:17:02,792 [myid:] - WARN  [main:NativeCodeLoader@62] - Unable
to load native-hadoop library for your platform... using builtin-java
classes where applicable

ERROR: Can't get master address from ZooKeeper; znode data == null


Here is my hbase-site.xml:
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop1.local,node1.local,node2.local,node3.local,node4.local</value>
    <description>Comma-separated list of servers in the ZooKeeper quorum.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/hadoop/zoo/data</value>
    <description>Property from ZooKeeper's config zoo.cfg.
    The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://10.11.1.1:9000/hbase</value>
    <description>The directory shared by RegionServers.
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see
hbase-env.sh)
    </description>
  </property>
</configuration>
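
A quick sanity check on hbase.rootdir (assuming the hdfs command is on the PATH for
the hadoop user) is to list that exact path from the master, for example:

  # the NameNode should answer on 10.11.1.1:9000 and /hbase should be listable
  /opt/hadoop/bin/hdfs dfs -ls hdfs://10.11.1.1:9000/
  /opt/hadoop/bin/hdfs dfs -ls hdfs://10.11.1.1:9000/hbase

If the NameNode isn't actually listening on that address/port, or /hbase is owned by
a different user, the master won't be able to initialize its root directory.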

hbase-env.sh:
[hadoop@hadoop1 hbase]$ cat conf/hbase-env.sh | grep -e CLASS -e ZK
# Extra Java CLASSPATH elements.  Optional.
export HBASE_CLASSPATH=$HBASE_CLASSPATH:/opt/hadoop/zookeeper/conf:/opt/hadoop/zookeeper
export HBASE_MANAGES_ZK=false
[hadoop@hadoop1 hbase]$ 

I have ZooKeeper set up and running as well.
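
To rule out the ZooKeeper side, a minimal check (assuming the zkCli.sh shipped under
/opt/hadoop/zookeeper and the default zookeeper.znode.parent of /hbase) would be:

  /opt/hadoop/zookeeper/bin/zkServer.sh status            # run on each quorum host
  /opt/hadoop/zookeeper/bin/zkCli.sh -server hadoop1.local:2181
  # then, at the zkCli prompt:
  #   ls /hbase          -> should list the znodes HBase has created (master, rs, ...)
  #   get /hbase/master  -> empty data here matches the "znode data == null" error
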
Here is the log output from master:

tail -200 /opt/hadoop/hbase/logs/hbase-hadoop-master-hadoop1.out
2013-10-25 14:12:30,247 [myid:] - WARN 
[master:hadoop1:60000:AssignmentManager@1974] - Failed assignment of
hbase:meta,,1.1588230740 to node3.local,60020,1382724162208, trying to
assign elsewhere instead; try=10 of 10
java.io.IOException: java.io.IOException
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2184)
        at
org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1851)
Caused by: java.lang.NullPointerException
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:3483)
        at
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:19795)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2146)
        ... 1 more

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
        at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:235)
        at
org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:631)
        at
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1901)
        at
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1449)
        at
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1422)
        at
org.apache.hadoop.hbase.master.AssignmentManager.assignMeta(AssignmentManager.java:2437)
        at
org.apache.hadoop.hbase.master.HMaster.assignMeta(HMaster.java:1013)
        at
org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:866)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:603)
        at java.lang.Thread.run(Thread.java:724)
Caused by:
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException):
java.io.IOException
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2184)
        at
org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1851)
Caused by: java.lang.NullPointerException
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:3483)
        at
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:19795)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2146)
        ... 1 more

        at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1446)
        at
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1650)
        at
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1708)
        at
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:20595)
        at
org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:628)
        ... 8 more
2013-10-25 14:12:30,248 [myid:] - WARN 
[master:hadoop1:60000:RegionStates@312] - Failed to transition 1588230740 on
node3.local,60020,1382724162208, set to FAILED_OPEN
2013-10-25 14:12:30,248 [myid:] - INFO 
[master:hadoop1:60000:RegionStates@321] - Transitioned {1588230740
state=PENDING_OPEN, ts=1382724750239,
server=node3.local,60020,1382724162208} to {1588230740 state=FAILED_OPEN,
ts=1382724750248, server=node3.local,60020,1382724162208}
2013-10-25 14:12:30,248 [myid:] - INFO 
[master:hadoop1:60000:ServerManager@557] - AssignmentManager hasn't finished
failover cleanup; waiting
2013-10-25 14:16:55,130 [myid:] - INFO 
[RpcServer.handler=16,port=60000:ServerManager@369] - Registering
server=node1.local,60020,1382725013788
2013-10-25 14:16:55,136 [myid:] - INFO 
[RpcServer.handler=16,port=60000:Configuration@840] - fs.default.name is
deprecated. Instead, use fs.defa


Thanks!





Re: Install HBase on hadoop-2.2.0

Posted by Ted Yu <yu...@gmail.com>.
bq. Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
/opt/hadoop/lib/native/libhadoop.so.1.0.0

Can you verify your Hadoop installation? How did libhadoop.so.1.0.0 come into
play?
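
One quick way to see what that library actually is would be:

  file /opt/hadoop/lib/native/libhadoop.so.1.0.0
  ls -l /opt/hadoop/lib/native/

If file reports a 32-bit ELF on your 64-bit JVM, that would also explain the
"Unable to load native-hadoop library" warning.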


On Fri, Oct 25, 2013 at 12:12 PM, psynophile <ps...@gmail.com> wrote:

> Sure, thanks for responding, I've been working on this and trying different
> things for a week now. Here's what I see in the out file. The log file is
> just repeating ulimit for the hadoop user over and over again:
>
> [hadoop@hadoop1 hbase]$ tail -200
> /opt/hadoop/hbase/logs/hbase-hadoop-regionserver-node3.local.out
>
> 2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:HBASE_THRIFT_OPTS=
> 2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:JRE_HOME=/opt/jdk1.7.0_40/jre
> 2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:QTINC=/usr/lib64/qt-3.3/include
> 2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:DISPLAY=localhost:10.0
> 2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:USER=hadoop
> 2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:ANT_HOME=/opt/rocks
> 2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:HBASE_CLASSPATH=:/opt/hadoop/zookeeper/conf:/opt/hadoop/zookeeper
> 2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:HOME=/export/home/hadoop
> 2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:LESSOPEN=|/usr/bin/lesspipe.sh %s
> 2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:LOADEDMODULES=
> 2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
>
> env:HADOOP_STREAMING=/opt/hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar
> 2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:HADOOP_CMD=/opt/hadoop/bin/hadoop
> 2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:HBASE_LOG_PREFIX=hbase-hadoop-regionserver-node3.local
> 2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:LANG=en_US.iso885915
> 2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
> env:HBASE_IDENT_STRING=hadoop
> 2013-10-25 14:16:53,204 [myid:] - INFO  [main:ServerCommandLine@79] -
> vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation,
> vmVersion=24.0-b56
> 2013-10-25 14:16:53,204 [myid:] - INFO  [main:ServerCommandLine@81] -
> vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p,
> -Xmx1000m, -XX:+UseConcMarkSweepGC, -Dhbase.log.dir=/opt/hadoop/hbase/logs,
> -Dhbase.log.file=hbase-hadoop-regionserver-node3.local.log,
> -Dhbase.home.dir=/opt/hadoop/hbase, -Dhbase.id.str=hadoop,
> -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/opt/hadoop/lib/native,
> -Dhbase.security.logger=INFO,RFAS]
> 2013-10-25 14:16:53,531 [myid:] - INFO  [main:RpcServer$Listener@520] -
> regionserver/node3.local/10.11.1.4:60020: started 10 reader(s).
> 2013-10-25 14:16:53,647 [myid:] - INFO  [main:MetricsConfig@111] - loaded
> properties from hadoop-metrics2-hbase.properties
> 2013-10-25 14:16:53,728 [myid:] - INFO  [main:MetricsSystemImpl@344] -
> Scheduled snapshot period at 10 second(s).
> 2013-10-25 14:16:53,728 [myid:] - INFO  [main:MetricsSystemImpl@183] -
> HBase
> metrics system started
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
>
> [jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
>
> [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
> /opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack
> guard. The VM will try to fix the stack guard now.
> It's highly recommended that you fix the library with 'execstack -c
> <libfile>', or link it with '-z noexecstack'.
> 2013-10-25 14:16:53,886 [myid:] - WARN  [main:NativeCodeLoader@62] -
> Unable
> to load native-hadoop library for your platform... using builtin-java
> classes where applicable
> 2013-10-25 14:16:54,100 [myid:] - INFO  [main:CacheConfig@407] -
> Allocating
> LruBlockCache with maximum size 393.4 M
> 2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012
> 17:52 GMT
> 2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:host.name=node3.local
> 2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:java.version=1.7.0_40
> 2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:java.vendor=Oracle Corporation
> 2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:java.home=/opt/jdk1.7.0_40/jre
> 2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client
>
> environment:java.class.path=:/opt/hadoop/zookeeper/conf:/opt/hadoop/zookeeper:/opt/hadoop/hbase/bin/../conf:/opt/jdk1.7.0_40/lib/tools.jar:/opt/hadoop/hbase:/opt/hadoop/hbase/lib/activation-1.1.jar:/opt/hadoop/hbase/lib/aopalliance-1.0.jar:/opt/hadoop/hbase/lib/asm-3.1.jar:/opt/hadoop/hbase/lib/avro-1.5.3.jar:/opt/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/hbase/lib/commons-cli-1.2.jar:/opt/hadoop/hbase/lib/commons-codec-1.7.jar:/opt/hadoop/hbase/lib/commons-collections-3.2.1.jar:/opt/hadoop/hbase/lib/commons-compress-1.4.jar:/opt/hadoop/hbase/lib/commons-configuration-1.6.jar:/opt/hadoop/hbase/lib/commons-daemon-1.0.13.jar:/opt/hadoop/hbase/lib/commons-digester-1.8.jar:/opt/hadoop/hbase/lib/commons-el-1.0.jar:/opt/hadoop/hbase/lib/commons-httpclient-3.1.jar:/opt/hadoop/hbase/lib/commons-io-2.4.jar:/opt/hadoop/hbase/lib/commons-lang-2.6.jar:/opt/hadoop/hbase/lib/commons-logging-1.1.1.jar:/opt/hadoop/hbase/lib/commons-math-2.2.jar:/opt/hadoop/hbase/lib/commons-net-3.1.jar:/opt/hadoop/hbase/lib/core-3.1.1.jar:/opt/hadoop/hbase/lib/findbugs-annotations-1.3.9-1.jar:/opt/hadoop/hbase/lib/gmbal-api-only-3.0.0-b023.jar:/opt/hadoop/hbase/lib/grizzly-framework-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-framework-2.1.1-tests.jar:/opt/hadoop/hbase/lib/grizzly-http-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-http-server-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-http-servlet-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-rcm-2.1.1.jar:/opt/hadoop/hbase/lib/guava-12.0.1.jar:/opt/hadoop/hbase/lib/guice-3.0.jar:/opt/hadoop/hbase/lib/guice-servlet-3.0.jar:/opt/hadoop/hbase/lib/hadoop-annotations-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-auth-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-client-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-common-2.2.0.jar:/opt/hadoop/hbase/lib/hadoop-hdfs-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-hdfs-2.1.0-beta-tests.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-app-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-common-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-core-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-jobclient-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-jobclient-2.1.0-beta-tests.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-shuffle-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-api-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-client-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-common-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-server-common-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-server-nodemanager-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hbase-client-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-common-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-common-0.96.0-hadoop2-tests.jar:/opt/hadoop/hbase/lib/hbase-examples-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-hadoop2-compat-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-hadoop-compat-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-it-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-it-0.96.0-hadoop2-tests.jar:/opt/hadoop/hbase/lib/hbase-prefix-tree-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-protocol-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-server-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-server-0.96.0-hadoop2-tests.jar:/opt/hadoop/hbase/lib/hbase-shell-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-testing-util-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-thrift-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/opt/hadoop/hbase/lib
/htrace-core-2.01.jar:/opt/hadoop/hbase/lib/httpclient-4.1.3.jar:/opt/hadoop/hbase/lib/httpcore-4.1.3.jar:/opt/hadoop/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/hadoop/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/hbase/lib/jackson-xc-1.8.8.jar:/opt/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/opt/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/opt/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/hbase/lib/javax.inject-1.jar:/opt/hadoop/hbase/lib/javax.servlet-3.0.jar:/opt/hadoop/hbase/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hbase/lib/jersey-client-1.8.jar:/opt/hadoop/hbase/lib/jersey-core-1.8.jar:/opt/hadoop/hbase/lib/jersey-grizzly2-1.8.jar:/opt/hadoop/hbase/lib/jersey-guice-1.8.jar:/opt/hadoop/hbase/lib/jersey-json-1.8.jar:/opt/hadoop/hbase/lib/jersey-server-1.8.jar:/opt/hadoop/hbase/lib/jersey-test-framework-core-1.8.jar:/opt/hadoop/hbase/lib/jersey-test-framework-grizzly2-1.8.jar:/opt/hadoop/hbase/lib/jets3t-0.6.1.jar:/opt/hadoop/hbase/lib/jettison-1.3.1.jar:/opt/hadoop/hbase/lib/jetty-6.1.26.jar:/opt/hadoop/hbase/lib/jetty-sslengine-6.1.26.jar:/opt/hadoop/hbase/lib/jetty-util-6.1.26.jar:/opt/hadoop/hbase/lib/jruby-complete-1.6.8.jar:/opt/hadoop/hbase/lib/jsch-0.1.42.jar:/opt/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/opt/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/hadoop/hbase/lib/jsp-api-2.1.jar:/opt/hadoop/hbase/lib/jsr305-1.3.9.jar:/opt/hadoop/hbase/lib/junit-4.11.jar:/opt/hadoop/hbase/lib/libthrift-0.9.0.jar:/opt/hadoop/hbase/lib/log4j-1.2.17.jar:/opt/hadoop/hbase/lib/management-api-3.0.0-b012.jar:/opt/hadoop/hbase/lib/metrics-core-2.1.2.jar:/opt/hadoop/hbase/lib/netty-3.6.6.Final.jar:/opt/hadoop/hbase/lib/paranamer-2.3.jar:/opt/hadoop/hbase/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/hadoop/hbase/lib/servlet-api-2.5.jar:/opt/hadoop/hbase/lib/slf4j-api-1.6.4.jar:/opt/hadoop/hbase/lib/slf4j-log4j12-1.6.4.jar:/opt/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/opt/hadoop/hbase/lib/stax-api-1.0.1.jar:/opt/hadoop/hbase/lib/xmlenc-0.52.jar:/opt/hadoop/hbase/lib/xz-1.0.jar:/opt/hadoop/hbase/lib/zookeeper-3.4.5.jar:/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/opt/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/common/lib/commons-logging-1.1.1.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop/share/hadoop/common/lib/commons-math-2.1.jar:/opt/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/opt/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/opt/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/opt/hadoop/share/hadoop/com
mon/lib/jettison-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/opt/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/common/lib/stax-api-1.0.1.jar:/opt/hadoop/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-lang-2.5.jar:/opt/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/opt/hadoop/share/hadoop/common/lib/jets3t-0.6.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/opt/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/share/hadoop/common/hadoop-nfs-2.2.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/opt/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/yarn/lib/junit-4.10.jar:/opt/hadoop/share/hadoop/yarn/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/yarn/lib
/guice-3.0.jar:/opt/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/opt/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/yarn/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/opt/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/opt/hadoop/share/hadoop/
mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/opt/hadoop/contrib/capacity-scheduler/*.jar
> 2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:java.library.path=/opt/hadoop/lib/native
> 2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:java.io.tmpdir=/tmp
> 2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:java.compiler=<NA>
> 2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:os.name=Linux
> 2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:os.arch=amd64
> 2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:os.version=2.6.32-279.14.1.el6.x86_64
> 2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:user.name=hadoop
> 2013-10-25 14:16:54,182 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:user.home=/export/home/hadoop
> 2013-10-25 14:16:54,182 [myid:] - INFO  [regionserver60020:Environment@100
> ]
> - Client environment:user.dir=/opt/hadoop/hbase
> 2013-10-25 14:16:54,183 [myid:] - INFO  [regionserver60020:ZooKeeper@438]
> -
> Initiating client connection,
>
> connectString=node3.local:2181,node2.local:2181,node1.local:2181,hadoop1.local:2181,node4.local:2181
> sessionTimeout=90000 watcher=regionserver:60020
> 2013-10-25 14:16:54,248 [myid:] - INFO
> [regionserver60020:RecoverableZooKeeper@120] - Process
> identifier=regionserver:60020 connecting to ZooKeeper
>
> ensemble=node3.local:2181,node2.local:2181,node1.local:2181,hadoop1.local:2181,node4.local:2181
> 2013-10-25 14:16:54,260 [myid:] - INFO
> [regionserver60020-SendThread(node2.local:2181):ClientCnxn$SendThread@966]
> -
> Opening socket connection to server node2.local/10.11.1.3:2181. Will not
> attempt to authenticate using SASL (unknown error)
> 2013-10-25 14:16:54,267 [myid:] - INFO
> [regionserver60020-SendThread(node2.local:2181):ClientCnxn$SendThread@849]
> -
> Socket connection established to node2.local/10.11.1.3:2181, initiating
> session
> 2013-10-25 14:16:54,295 [myid:] - INFO
> [regionserver60020-SendThread(node2.local:2181):ClientCnxn$SendThread@1207
> ]
> - Session establishment complete on server node2.local/10.11.1.3:2181,
> sessionid = 0x141f0a11cb6002a, negotiated timeout = 40000
> 2013-10-25 14:16:54,836 [myid:] - INFO  [main:ShutdownHook@87] - Installed
> shutdown hook thread: Shutdownhook:regionserver60
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/Install-HBase-on-hadoop-2-2-0-tp4052188p4052197.html
> Sent from the HBase User mailing list archive at Nabble.com.
>

Re: Install HBase on hadoop-2.2.0

Posted by psynophile <ps...@gmail.com>.
I see that message (about the stack guard) everywhere. It does bother me, but
other things have worked. I don't know why that lib is there; it must have been
bundled with the hadoop-2.2.0 package I downloaded, because I didn't copy it
there myself. The package was downloaded directly from the Apache Hadoop site,
and the JDK is the latest available from Oracle's website. Here's some more
information in case it helps (below). Thanks again:

[hadoop@hadoop1 hadoop]$ cat /etc/profile.d/java.sh 
export JAVA_HOME=/opt/jdk1.7.0_40
export JRE_HOME=/opt/jdk1.7.0_40/jre
export PATH=$PATH:/opt/jdk1.7.0_40/bin:/opt/jdk1.7.0_40/jre/bin
[hadoop@hadoop1 hadoop]$ 

[hadoop@hadoop1 hadoop]$ yarn node -list  
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
/opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack
guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c
<libfile>', or link it with '-z noexecstack'.
13/10/25 15:52:24 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
13/10/25 15:52:24 INFO client.RMProxy: Connecting to ResourceManager at
hadoop1.local/10.11.1.1:8032
Total Nodes:5
         Node-Id    Node-State   Node-Http-Address   Number-of-Running-Containers
node3.local:48015      RUNNING    node3.local:8042                              0
node1.local:41768      RUNNING    node1.local:8042                              0
hadoop1.local:39228    RUNNING    hadoop1.local:8042                            0
node4.local:58026      RUNNING    node4.local:8042                              0
node2.local:55301      RUNNING    node2.local:8042                              0
[hadoop@hadoop1 hadoop]$ 

[hadoop@hadoop1 hadoop]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
/opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack
guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c
<libfile>', or link it with '-z noexecstack'.
13/10/25 15:53:20 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
Configured Capacity: 5360721592320 (4.88 TB)
Present Capacity: 5360543813632 (4.88 TB)
DFS Remaining: 5360543383552 (4.88 TB)
DFS Used: 430080 (420 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 5 (5 total, 0 dead)

Live datanodes:
Name: 10.11.1.1:50010 (hadoop1.local)
Hostname: hadoop1.local
Decommission Status : Normal
Configured Capacity: 1072144318464 (998.51 GB)
DFS Used: 118784 (116 KB)
Non DFS Used: 41562112 (39.64 MB)
DFS Remaining: 1072102637568 (998.47 GB)
DFS Used%: 0.00%
DFS Remaining%: 100.00%
Last contact: Fri Oct 25 15:53:18 EDT 2013


Name: 10.11.1.3:50010 (node2.local)
Hostname: node2.local
Decommission Status : Normal
Configured Capacity: 1072144318464 (998.51 GB)
DFS Used: 61440 (60 KB)
Non DFS Used: 34054144 (32.48 MB)
DFS Remaining: 1072110202880 (998.48 GB)
DFS Used%: 0.00%
DFS Remaining%: 100.00%
Last contact: Fri Oct 25 15:53:18 EDT 2013


Name: 10.11.1.5:50010 (node4.local)
Hostname: node4.local
Decommission Status : Normal
Configured Capacity: 1072144318464 (998.51 GB)
DFS Used: 94208 (92 KB)
Non DFS Used: 34054144 (32.48 MB)
DFS Remaining: 1072110170112 (998.48 GB)
DFS Used%: 0.00%
DFS Remaining%: 100.00%
Last contact: Fri Oct 25 15:53:17 EDT 2013


Name: 10.11.1.4:50010 (node3.local)
Hostname: node3.local
Decommission Status : Normal
Configured Capacity: 1072144318464 (998.51 GB)
DFS Used: 86016 (84 KB)
Non DFS Used: 34054144 (32.48 MB)
DFS Remaining: 1072110178304 (998.48 GB)
DFS Used%: 0.00%
DFS Remaining%: 100.00%
Last contact: Fri Oct 25 15:53:18 EDT 2013


Name: 10.11.1.2:50010 (node1.local)
Hostname: node1.local
Decommission Status : Normal
Configured Capacity: 1072144318464 (998.51 GB)
DFS Used: 69632 (68 KB)
Non DFS Used: 34054144 (32.48 MB)
DFS Remaining: 1072110194688 (998.48 GB)
DFS Used%: 0.00%
DFS Remaining%: 100.00%
Last contact: Fri Oct 25 15:53:17 EDT 2013


[hadoop@hadoop1 hadoop]$ 

[hadoop@hadoop1 hadoop]$ java -version
java version "1.7.0_40"
Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)
[hadoop@hadoop1 hadoop]$ 

[hadoop@hadoop1 hadoop]$ hadoop version
Hadoop 2.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
This command was run using
/opt/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar
[hadoop@hadoop1 hadoop]$ 

[hadoop@hadoop1 hadoop]$ cat /etc/profile.d/hadoop.sh 
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_CMD=/opt/hadoop/bin/hadoop
export HADOOP_STREAMING=/opt/hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar
export HADOOP_PREFIX=/opt/hadoop
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export HADOOP_YARN_HOME=/opt/hadoop
[hadoop@hadoop1 hadoop]$ 

[hadoop@hadoop1 hadoop]$ cat /etc/profile.d/java.sh   
export JAVA_HOME=/opt/jdk1.7.0_40
export JRE_HOME=/opt/jdk1.7.0_40/jre
export PATH=$PATH:/opt/jdk1.7.0_40/bin:/opt/jdk1.7.0_40/jre/bin
[hadoop@hadoop1 hadoop]$ 
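
For the stack-guard warning itself, the fix the JVM suggests would look like this on
each node (assuming the execstack utility, from the prelink package on EL6, is
installed):

  # clear the executable-stack flag on the bundled native library
  execstack -c /opt/hadoop/lib/native/libhadoop.so.1.0.0

That only silences the warning; it doesn't by itself make the native library loadable
if it was built for the wrong architecture.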







Re: Install HBase on hadoop-2.2.0

Posted by psynophile <ps...@gmail.com>.
Sure, thanks for responding, I've been working on this and trying different
things for a week now. Here's what I see in the out file. The log file is
just repeating ulimit for the hadoop user over and over again:

[hadoop@hadoop1 hbase]$ tail -200
/opt/hadoop/hbase/logs/hbase-hadoop-regionserver-node3.local.out

2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
env:HBASE_THRIFT_OPTS=
2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
env:JRE_HOME=/opt/jdk1.7.0_40/jre
2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
env:QTINC=/usr/lib64/qt-3.3/include
2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
env:DISPLAY=localhost:10.0
2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
env:USER=hadoop
2013-10-25 14:16:53,201 [myid:] - INFO  [main:ServerCommandLine@113] -
env:ANT_HOME=/opt/rocks
2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
env:HBASE_CLASSPATH=:/opt/hadoop/zookeeper/conf:/opt/hadoop/zookeeper
2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
env:HOME=/export/home/hadoop
2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
env:LESSOPEN=|/usr/bin/lesspipe.sh %s
2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
env:LOADEDMODULES=
2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
env:HADOOP_STREAMING=/opt/hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar
2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
env:HADOOP_CMD=/opt/hadoop/bin/hadoop
2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
env:HBASE_LOG_PREFIX=hbase-hadoop-regionserver-node3.local
2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
env:LANG=en_US.iso885915
2013-10-25 14:16:53,202 [myid:] - INFO  [main:ServerCommandLine@113] -
env:HBASE_IDENT_STRING=hadoop
2013-10-25 14:16:53,204 [myid:] - INFO  [main:ServerCommandLine@79] -
vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation,
vmVersion=24.0-b56
2013-10-25 14:16:53,204 [myid:] - INFO  [main:ServerCommandLine@81] -
vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p,
-Xmx1000m, -XX:+UseConcMarkSweepGC, -Dhbase.log.dir=/opt/hadoop/hbase/logs,
-Dhbase.log.file=hbase-hadoop-regionserver-node3.local.log,
-Dhbase.home.dir=/opt/hadoop/hbase, -Dhbase.id.str=hadoop,
-Dhbase.root.logger=INFO,RFA, -Djava.library.path=/opt/hadoop/lib/native,
-Dhbase.security.logger=INFO,RFAS]
2013-10-25 14:16:53,531 [myid:] - INFO  [main:RpcServer$Listener@520] -
regionserver/node3.local/10.11.1.4:60020: started 10 reader(s).
2013-10-25 14:16:53,647 [myid:] - INFO  [main:MetricsConfig@111] - loaded
properties from hadoop-metrics2-hbase.properties
2013-10-25 14:16:53,728 [myid:] - INFO  [main:MetricsSystemImpl@344] -
Scheduled snapshot period at 10 second(s).
2013-10-25 14:16:53,728 [myid:] - INFO  [main:MetricsSystemImpl@183] - HBase
metrics system started
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
/opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack
guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c
<libfile>', or link it with '-z noexecstack'.
2013-10-25 14:16:53,886 [myid:] - WARN  [main:NativeCodeLoader@62] - Unable
to load native-hadoop library for your platform... using builtin-java
classes where applicable
2013-10-25 14:16:54,100 [myid:] - INFO  [main:CacheConfig@407] - Allocating
LruBlockCache with maximum size 393.4 M
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012
17:52 GMT
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:host.name=node3.local
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.version=1.7.0_40
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.vendor=Oracle Corporation
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.home=/opt/jdk1.7.0_40/jre
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client
environment:java.class.path=:/opt/hadoop/zookeeper/conf:/opt/hadoop/zookeeper:/opt/hadoop/hbase/bin/../conf:/opt/jdk1.7.0_40/lib/tools.jar:/opt/hadoop/hbase:/opt/hadoop/hbase/lib/activation-1.1.jar:/opt/hadoop/hbase/lib/aopalliance-1.0.jar:/opt/hadoop/hbase/lib/asm-3.1.jar:/opt/hadoop/hbase/lib/avro-1.5.3.jar:/opt/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/hbase/lib/commons-cli-1.2.jar:/opt/hadoop/hbase/lib/commons-codec-1.7.jar:/opt/hadoop/hbase/lib/commons-collections-3.2.1.jar:/opt/hadoop/hbase/lib/commons-compress-1.4.jar:/opt/hadoop/hbase/lib/commons-configuration-1.6.jar:/opt/hadoop/hbase/lib/commons-daemon-1.0.13.jar:/opt/hadoop/hbase/lib/commons-digester-1.8.jar:/opt/hadoop/hbase/lib/commons-el-1.0.jar:/opt/hadoop/hbase/lib/commons-httpclient-3.1.jar:/opt/hadoop/hbase/lib/commons-io-2.4.jar:/opt/hadoop/hbase/lib/commons-lang-2.6.jar:/opt/hadoop/hbase/lib/commons-logging-1.1.1.jar:/opt/hadoop/hbase/lib/commons-math-2.2.jar:/opt/hadoop/hbase/lib/commons-net-3.1.jar:/opt/hadoop/hbase/lib/core-3.1.1.jar:/opt/hadoop/hbase/lib/findbugs-annotations-1.3.9-1.jar:/opt/hadoop/hbase/lib/gmbal-api-only-3.0.0-b023.jar:/opt/hadoop/hbase/lib/grizzly-framework-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-framework-2.1.1-tests.jar:/opt/hadoop/hbase/lib/grizzly-http-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-http-server-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-http-servlet-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-rcm-2.1.1.jar:/opt/hadoop/hbase/lib/guava-12.0.1.jar:/opt/hadoop/hbase/lib/guice-3.0.jar:/opt/hadoop/hbase/lib/guice-servlet-3.0.jar:/opt/hadoop/hbase/lib/hadoop-annotations-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-auth-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-client-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-common-2.2.0.jar:/opt/hadoop/hbase/lib/hadoop-hdfs-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-hdfs-2.1.0-beta-tests.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-app-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-common-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-core-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-jobclient-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-jobclient-2.1.0-beta-tests.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-shuffle-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-api-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-client-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-common-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-server-common-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-server-nodemanager-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hbase-client-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-common-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-common-0.96.0-hadoop2-tests.jar:/opt/hadoop/hbase/lib/hbase-examples-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-hadoop2-compat-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-hadoop-compat-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-it-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-it-0.96.0-hadoop2-tests.jar:/opt/hadoop/hbase/lib/hbase-prefix-tree-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-protocol-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-server-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-server-0.96.0-hadoop2-tests.jar:/opt/hadoop/hbase/lib/hbase-shell-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-testing-util-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-thrift-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/opt/hadoop/hbase/lib/h
trace-core-2.01.jar:/opt/hadoop/hbase/lib/httpclient-4.1.3.jar:/opt/hadoop/hbase/lib/httpcore-4.1.3.jar:/opt/hadoop/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/hadoop/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/hbase/lib/jackson-xc-1.8.8.jar:/opt/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/opt/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/opt/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/hbase/lib/javax.inject-1.jar:/opt/hadoop/hbase/lib/javax.servlet-3.0.jar:/opt/hadoop/hbase/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hbase/lib/jersey-client-1.8.jar:/opt/hadoop/hbase/lib/jersey-core-1.8.jar:/opt/hadoop/hbase/lib/jersey-grizzly2-1.8.jar:/opt/hadoop/hbase/lib/jersey-guice-1.8.jar:/opt/hadoop/hbase/lib/jersey-json-1.8.jar:/opt/hadoop/hbase/lib/jersey-server-1.8.jar:/opt/hadoop/hbase/lib/jersey-test-framework-core-1.8.jar:/opt/hadoop/hbase/lib/jersey-test-framework-grizzly2-1.8.jar:/opt/hadoop/hbase/lib/jets3t-0.6.1.jar:/opt/hadoop/hbase/lib/jettison-1.3.1.jar:/opt/hadoop/hbase/lib/jetty-6.1.26.jar:/opt/hadoop/hbase/lib/jetty-sslengine-6.1.26.jar:/opt/hadoop/hbase/lib/jetty-util-6.1.26.jar:/opt/hadoop/hbase/lib/jruby-complete-1.6.8.jar:/opt/hadoop/hbase/lib/jsch-0.1.42.jar:/opt/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/opt/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/hadoop/hbase/lib/jsp-api-2.1.jar:/opt/hadoop/hbase/lib/jsr305-1.3.9.jar:/opt/hadoop/hbase/lib/junit-4.11.jar:/opt/hadoop/hbase/lib/libthrift-0.9.0.jar:/opt/hadoop/hbase/lib/log4j-1.2.17.jar:/opt/hadoop/hbase/lib/management-api-3.0.0-b012.jar:/opt/hadoop/hbase/lib/metrics-core-2.1.2.jar:/opt/hadoop/hbase/lib/netty-3.6.6.Final.jar:/opt/hadoop/hbase/lib/paranamer-2.3.jar:/opt/hadoop/hbase/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/hadoop/hbase/lib/servlet-api-2.5.jar:/opt/hadoop/hbase/lib/slf4j-api-1.6.4.jar:/opt/hadoop/hbase/lib/slf4j-log4j12-1.6.4.jar:/opt/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/opt/hadoop/hbase/lib/stax-api-1.0.1.jar:/opt/hadoop/hbase/lib/xmlenc-0.52.jar:/opt/hadoop/hbase/lib/xz-1.0.jar:/opt/hadoop/hbase/lib/zookeeper-3.4.5.jar:/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/opt/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/common/lib/commons-logging-1.1.1.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop/share/hadoop/common/lib/commons-math-2.1.jar:/opt/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/opt/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/opt/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/opt/hadoop/share/hadoop/commo
n/lib/jettison-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/opt/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/common/lib/stax-api-1.0.1.jar:/opt/hadoop/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-lang-2.5.jar:/opt/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/opt/hadoop/share/hadoop/common/lib/jets3t-0.6.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/opt/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/share/hadoop/common/hadoop-nfs-2.2.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/opt/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/yarn/lib/junit-4.10.jar:/opt/hadoop/share/hadoop/yarn/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/g
uice-3.0.jar:/opt/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/opt/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/yarn/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/opt/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/opt/hadoop/share/hadoop/ma
preduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/opt/hadoop/contrib/capacity-scheduler/*.jar
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.library.path=/opt/hadoop/lib/native
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.io.tmpdir=/tmp
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.compiler=<NA>
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:os.name=Linux
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:os.arch=amd64
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:os.version=2.6.32-279.14.1.el6.x86_64
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:user.name=hadoop
2013-10-25 14:16:54,182 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:user.home=/export/home/hadoop
2013-10-25 14:16:54,182 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:user.dir=/opt/hadoop/hbase
2013-10-25 14:16:54,183 [myid:] - INFO  [regionserver60020:ZooKeeper@438] -
Initiating client connection,
connectString=node3.local:2181,node2.local:2181,node1.local:2181,hadoop1.local:2181,node4.local:2181
sessionTimeout=90000 watcher=regionserver:60020
2013-10-25 14:16:54,248 [myid:] - INFO 
[regionserver60020:RecoverableZooKeeper@120] - Process
identifier=regionserver:60020 connecting to ZooKeeper
ensemble=node3.local:2181,node2.local:2181,node1.local:2181,hadoop1.local:2181,node4.local:2181
2013-10-25 14:16:54,260 [myid:] - INFO 
[regionserver60020-SendThread(node2.local:2181):ClientCnxn$SendThread@966] -
Opening socket connection to server node2.local/10.11.1.3:2181. Will not
attempt to authenticate using SASL (unknown error)
2013-10-25 14:16:54,267 [myid:] - INFO 
[regionserver60020-SendThread(node2.local:2181):ClientCnxn$SendThread@849] -
Socket connection established to node2.local/10.11.1.3:2181, initiating
session
2013-10-25 14:16:54,295 [myid:] - INFO 
[regionserver60020-SendThread(node2.local:2181):ClientCnxn$SendThread@1207]
- Session establishment complete on server node2.local/10.11.1.3:2181,
sessionid = 0x141f0a11cb6002a, negotiated timeout = 40000
2013-10-25 14:16:54,836 [myid:] - INFO  [main:ShutdownHook@87] - Installed
shutdown hook thread: Shutdownhook:regionserver60




Re: Install HBase on hadoop-2.2.0

Posted by Ted Yu <yu...@gmail.com>.
Seems like a deployment issue.

[master:hadoop1:60000:AssignmentManager@1974] - Failed assignment of
hbase:meta,,1.1588230740 to node3.local,60020,1382724162208, trying to
assign elsewhere instead; try=10 of 10

Can you take a look at the region server log on node3.local?
It should tell us why the region couldn't be opened.




Re: Install HBase on hadoop-2.2.0

Posted by psynophile <ps...@gmail.com>.
It's the last thing that I see in the file...

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
/opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack
guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c
<libfile>', or link it with '-z noexecstack'.
2013-10-25 14:16:53,886 [myid:] - WARN  [main:NativeCodeLoader@62] - Unable
to load native-hadoop library for your platform... using builtin-java
classes where applicable
2013-10-25 14:16:54,100 [myid:] - INFO  [main:CacheConfig@407] - Allocating
LruBlockCache with maximum size 393.4 M
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012
17:52 GMT
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:host.name=node3.local
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.version=1.7.0_40
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.vendor=Oracle Corporation
2013-10-25 14:16:54,180 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.home=/opt/jdk1.7.0_40/jre
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client
environment:java.class.path=:/opt/hadoop/zookeeper/conf:/opt/hadoop/zookeeper:/opt/hadoop/hbase/bin/../conf:/opt/jdk1.7.0_40/lib/tools.jar:/opt/hadoop/hbase:/opt/hadoop/hbase/lib/activation-1.1.jar:/opt/hadoop/hbase/lib/aopalliance-1.0.jar:/opt/hadoop/hbase/lib/asm-3.1.jar:/opt/hadoop/hbase/lib/avro-1.5.3.jar:/opt/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/hbase/lib/commons-cli-1.2.jar:/opt/hadoop/hbase/lib/commons-codec-1.7.jar:/opt/hadoop/hbase/lib/commons-collections-3.2.1.jar:/opt/hadoop/hbase/lib/commons-compress-1.4.jar:/opt/hadoop/hbase/lib/commons-configuration-1.6.jar:/opt/hadoop/hbase/lib/commons-daemon-1.0.13.jar:/opt/hadoop/hbase/lib/commons-digester-1.8.jar:/opt/hadoop/hbase/lib/commons-el-1.0.jar:/opt/hadoop/hbase/lib/commons-httpclient-3.1.jar:/opt/hadoop/hbase/lib/commons-io-2.4.jar:/opt/hadoop/hbase/lib/commons-lang-2.6.jar:/opt/hadoop/hbase/lib/commons-logging-1.1.1.jar:/opt/hadoop/hbase/lib/commons-math-2.2.jar:/opt/hadoop/hbase/lib/commons-net-3.1.jar:/opt/hadoop/hbase/lib/core-3.1.1.jar:/opt/hadoop/hbase/lib/findbugs-annotations-1.3.9-1.jar:/opt/hadoop/hbase/lib/gmbal-api-only-3.0.0-b023.jar:/opt/hadoop/hbase/lib/grizzly-framework-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-framework-2.1.1-tests.jar:/opt/hadoop/hbase/lib/grizzly-http-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-http-server-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-http-servlet-2.1.1.jar:/opt/hadoop/hbase/lib/grizzly-rcm-2.1.1.jar:/opt/hadoop/hbase/lib/guava-12.0.1.jar:/opt/hadoop/hbase/lib/guice-3.0.jar:/opt/hadoop/hbase/lib/guice-servlet-3.0.jar:/opt/hadoop/hbase/lib/hadoop-annotations-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-auth-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-client-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-common-2.2.0.jar:/opt/hadoop/hbase/lib/hadoop-hdfs-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-hdfs-2.1.0-beta-tests.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-app-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-common-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-core-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-jobclient-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-jobclient-2.1.0-beta-tests.jar:/opt/hadoop/hbase/lib/hadoop-mapreduce-client-shuffle-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-api-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-client-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-common-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-server-common-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hadoop-yarn-server-nodemanager-2.1.0-beta.jar:/opt/hadoop/hbase/lib/hbase-client-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-common-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-common-0.96.0-hadoop2-tests.jar:/opt/hadoop/hbase/lib/hbase-examples-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-hadoop2-compat-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-hadoop-compat-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-it-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-it-0.96.0-hadoop2-tests.jar:/opt/hadoop/hbase/lib/hbase-prefix-tree-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-protocol-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-server-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-server-0.96.0-hadoop2-tests.jar:/opt/hadoop/hbase/lib/hbase-shell-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-testing-util-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/hbase-thrift-0.96.0-hadoop2.jar:/opt/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/opt/hadoop/hbase/lib/h
trace-core-2.01.jar:/opt/hadoop/hbase/lib/httpclient-4.1.3.jar:/opt/hadoop/hbase/lib/httpcore-4.1.3.jar:/opt/hadoop/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/hadoop/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/hbase/lib/jackson-xc-1.8.8.jar:/opt/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/opt/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/opt/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/hbase/lib/javax.inject-1.jar:/opt/hadoop/hbase/lib/javax.servlet-3.0.jar:/opt/hadoop/hbase/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hbase/lib/jersey-client-1.8.jar:/opt/hadoop/hbase/lib/jersey-core-1.8.jar:/opt/hadoop/hbase/lib/jersey-grizzly2-1.8.jar:/opt/hadoop/hbase/lib/jersey-guice-1.8.jar:/opt/hadoop/hbase/lib/jersey-json-1.8.jar:/opt/hadoop/hbase/lib/jersey-server-1.8.jar:/opt/hadoop/hbase/lib/jersey-test-framework-core-1.8.jar:/opt/hadoop/hbase/lib/jersey-test-framework-grizzly2-1.8.jar:/opt/hadoop/hbase/lib/jets3t-0.6.1.jar:/opt/hadoop/hbase/lib/jettison-1.3.1.jar:/opt/hadoop/hbase/lib/jetty-6.1.26.jar:/opt/hadoop/hbase/lib/jetty-sslengine-6.1.26.jar:/opt/hadoop/hbase/lib/jetty-util-6.1.26.jar:/opt/hadoop/hbase/lib/jruby-complete-1.6.8.jar:/opt/hadoop/hbase/lib/jsch-0.1.42.jar:/opt/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/opt/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/hadoop/hbase/lib/jsp-api-2.1.jar:/opt/hadoop/hbase/lib/jsr305-1.3.9.jar:/opt/hadoop/hbase/lib/junit-4.11.jar:/opt/hadoop/hbase/lib/libthrift-0.9.0.jar:/opt/hadoop/hbase/lib/log4j-1.2.17.jar:/opt/hadoop/hbase/lib/management-api-3.0.0-b012.jar:/opt/hadoop/hbase/lib/metrics-core-2.1.2.jar:/opt/hadoop/hbase/lib/netty-3.6.6.Final.jar:/opt/hadoop/hbase/lib/paranamer-2.3.jar:/opt/hadoop/hbase/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/hadoop/hbase/lib/servlet-api-2.5.jar:/opt/hadoop/hbase/lib/slf4j-api-1.6.4.jar:/opt/hadoop/hbase/lib/slf4j-log4j12-1.6.4.jar:/opt/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/opt/hadoop/hbase/lib/stax-api-1.0.1.jar:/opt/hadoop/hbase/lib/xmlenc-0.52.jar:/opt/hadoop/hbase/lib/xz-1.0.jar:/opt/hadoop/hbase/lib/zookeeper-3.4.5.jar:/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/opt/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/common/lib/commons-logging-1.1.1.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop/share/hadoop/common/lib/commons-math-2.1.jar:/opt/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/opt/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/opt/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/opt/hadoop/share/hadoop/commo
n/lib/jettison-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/opt/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/common/lib/stax-api-1.0.1.jar:/opt/hadoop/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-lang-2.5.jar:/opt/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/opt/hadoop/share/hadoop/common/lib/jets3t-0.6.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/opt/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/share/hadoop/common/hadoop-nfs-2.2.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/opt/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/yarn/lib/junit-4.10.jar:/opt/hadoop/share/hadoop/yarn/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/yarn/lib/g
uice-3.0.jar:/opt/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/opt/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/yarn/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/opt/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/opt/hadoop/share/hadoop/ma
preduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/opt/hadoop/contrib/capacity-scheduler/*.jar
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.library.path=/opt/hadoop/lib/native
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.io.tmpdir=/tmp
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:java.compiler=<NA>
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:os.name=Linux
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:os.arch=amd64
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:os.version=2.6.32-279.14.1.el6.x86_64
2013-10-25 14:16:54,181 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:user.name=hadoop
2013-10-25 14:16:54,182 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:user.home=/export/home/hadoop
2013-10-25 14:16:54,182 [myid:] - INFO  [regionserver60020:Environment@100]
- Client environment:user.dir=/opt/hadoop/hbase
2013-10-25 14:16:54,183 [myid:] - INFO  [regionserver60020:ZooKeeper@438] -
Initiating client connection,
connectString=node3.local:2181,node2.local:2181,node1.local:2181,hadoop1.local:2181,node4.local:2181
sessionTimeout=90000 watcher=regionserver:60020
2013-10-25 14:16:54,248 [myid:] - INFO 
[regionserver60020:RecoverableZooKeeper@120] - Process
identifier=regionserver:60020 connecting to ZooKeeper
ensemble=node3.local:2181,node2.local:2181,node1.local:2181,hadoop1.local:2181,node4.local:2181
2013-10-25 14:16:54,260 [myid:] - INFO 
[regionserver60020-SendThread(node2.local:2181):ClientCnxn$SendThread@966] -
Opening socket connection to server node2.local/10.11.1.3:2181. Will not
attempt to authenticate using SASL (unknown error)
2013-10-25 14:16:54,267 [myid:] - INFO 
[regionserver60020-SendThread(node2.local:2181):ClientCnxn$SendThread@849] -
Socket connection established to node2.local/10.11.1.3:2181, initiating
session
2013-10-25 14:16:54,295 [myid:] - INFO 
[regionserver60020-SendThread(node2.local:2181):ClientCnxn$SendThread@1207]
- Session establishment complete on server node2.local/10.11.1.3:2181,
sessionid = 0x141f0a11cb6002a, negotiated timeout = 40000
2013-10-25 14:16:54,836 [myid:] - INFO  [main:ShutdownHook@87] - Installed
shutdown hook thread: Shutdownhook:regionserver60020
[hadoop@hadoop1 ~]$






Re: Install HBase on hadoop-2.2.0

Posted by Ted Yu <yu...@gmail.com>.
The NPE was reported on the third line below (looking at HRegionServer.java
in the 0.96 branch):

          Pair<HRegionInfo, ServerName> p = MetaReader.getRegion(
              this.catalogTracker, region.getRegionName());
          if (this.getServerName().equals(p.getSecond())) {

Looks like p might be null.
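
For illustration only, here is a hedged sketch of what a null guard around
that call could look like (this is not the actual fix in any HBase release;
the surrounding method context is assumed from the excerpt above):

          // Sketch only: if MetaReader.getRegion() finds no row for this
          // region in hbase:meta, it apparently returns null, so guard the
          // Pair before dereferencing it instead of calling p.getSecond()
          // unconditionally.
          Pair<HRegionInfo, ServerName> p = MetaReader.getRegion(
              this.catalogTracker, region.getRegionName());
          if (p != null && this.getServerName().equals(p.getSecond())) {
            // original branch logic continues here
          }

An unguarded p.getSecond() on a null Pair would throw the kind of
NullPointerException reported from HRegionServer.openRegion in the master
log.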

bq. 2013-10-25 14:16:54,836 [myid:] - INFO  [main:ShutdownHook@87] -
Installed shutdown hook thread: Shutdownhook:regionserver60

Was there more in the region server log after the above line?

Cheers
